RE: [U2] Dynamic files, big transactions
[LIST.READU output trimmed: several dozen RU record locks held by user ameij (pid 21074) on device 22282244, inode 12580680, record ids in the 11515014-11521190 range, all landing in the same lock group.]

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Andre Meij
Sent: Saturday, March 10, 2007 10:51 AM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Dynamic files, big transactions

Rick, Charles, and others,

Thank you for the quick answers. I have quickly tried your solutions, and both the initial create with a bigger hash and the change-plus-resize on an existing file seem to work; I can now easily lock 20k records in one transaction. I have yet to find out why we hit this issue on the live servers in our application (that is, why the hash on the affected files is apparently so small). I will take a look at that on Monday. For now I am quite happy, because I have a solution for the immediate problem.

I have also tried SELECTing 40k records after the write of 20k new ones, and that worked as well. I am very confused now, because I was brought up with the idea that there is nothing more disastrous than a SELECT inside a UniVerse transaction (locking-table problems all around, I am told). Is that something you have heard of, or is it just another fable?

Thanks again for the help; it is very much appreciated.

Regards,
Andre Meij
Innovate-IT

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Stevenson, Charles
Sent: Saturday, March 10, 2007 7:04 AM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Dynamic files, big transactions

Andre,

I'm with Rick. He suggested a new part file. But might this be some kind of queue or work file that routinely gets flushed, merging back to modulo 1? And perhaps with zero-length or very small records, so that 250 ids all land in the same group? Is the group size 4KB?

What does that have to do with the lock table in memory, you (or some lurker) may ask? When a record is locked, UV uses the inode + group# to determine where to plant the lock in the lock table. So all of these records will be assigned to the same lock group, since the inode + group# (i.e., 1) will be the same for every one of them.

If you gave it a larger MINIMUM.MODULUS, or converted that queue/work file to static, then locking many or all records at once would spread the load across several lock groups, since the inode + group# combo would vary from record to record.

cds

P.S. I *think* splits and merges are suspended on groups that have records currently in the lock table. (Since the group# determines where something sits in the lock table, you couldn't have it changing out from under you.) So as long as a record remains locked, your dynamic file will be not quite so dynamic. You might be hitting that, too.

---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
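Charles's description of how locks pile up can be sketched with a toy model (plain Python; the row-selection hash, row count, and data structures are invented for illustration and are not UniVerse's actual internals -- only the RLTABSZ-style cap of 250 entries per row comes from this thread):

```python
# Toy model of a group-based record lock table: each lock lands in a
# row chosen from (inode, group#), and each row holds at most
# RLTABSZ entries.  With a modulo-1 dynamic file every record is in
# group 1, so every lock competes for the same row.

RLTABSZ = 250      # max record locks per lock-table row (from uvconfig)
LOCK_ROWS = 75     # number of rows -- arbitrary for this sketch

def lock_row(inode, group):
    # Stand-in for UV's real mapping from (inode, group#) to a row.
    return (inode + group) % LOCK_ROWS

def try_lock(table, inode, group, record_id):
    row = table.setdefault(lock_row(inode, group), [])
    if len(row) >= RLTABSZ:
        return False          # row full: the lock attempt fails
    row.append(record_id)
    return True

def run(n_records, modulus):
    # Lock n_records sequential ids in a file of the given modulus;
    # return how many locks succeed.
    table = {}
    ok = 0
    for i in range(1000, 1000 + n_records):
        group = 1 if modulus == 1 else (i % modulus) + 1
        if try_lock(table, inode=12580680, group=group, record_id=i):
            ok += 1
    return ok

print(run(300, 1))    # 250 -- one row fills at RLTABSZ, the rest abort
print(run(300, 23))   # 300 -- spread over 23 groups, all succeed
```

The same cap that stops a 20k-record transaction dead at modulus 1 never comes close to being hit once the records hash into different groups.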
RE: [U2] Dynamic files, big transactions
Hi Andre,

It is hard to suggest a best approach without understanding your application. Usually, if you are going to lock 250 records at the same time in the same file, you should consider escalating to a file lock. This is a bigger problem for an RDBMS, which is why they prefer optimistic locking to pessimistic locking. If file locking is not acceptable, you should consider moving to an optimistic model. With a large number of records being locked, you also increase your chances of deadlock situations.

Regards,
David Jordan

-----Original Message-----
With our current settings we can have a maximum of 250 locks in one group; for auto numbers this means we can only lock records 1000 to 1249 without getting an abort and a rollback because a new lock cannot be acquired.
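David's suggestion to move to an optimistic model can be sketched as follows (a generic illustration, not a U2 API: the store, record ids, and `VersionConflict` are made up). Each record carries a version; a writer re-checks the version at commit time instead of holding record locks for the whole batch:

```python
# Minimal optimistic-concurrency sketch: reads are lock-free, and a
# write succeeds only if no one else changed the record since it was
# read.  A conflicting write raises instead of blocking.

class VersionConflict(Exception):
    pass

store = {"ORDER*1": {"version": 1, "qty": 5}}

def read(record_id):
    rec = store[record_id]
    return rec["version"], dict(rec)       # version + snapshot copy

def write(record_id, expected_version, new_fields):
    rec = store[record_id]
    if rec["version"] != expected_version: # someone got there first
        raise VersionConflict(record_id)
    rec.update(new_fields)
    rec["version"] += 1                    # bump on every successful write

ver, rec = read("ORDER*1")
write("ORDER*1", ver, {"qty": rec["qty"] + 1})   # succeeds: version matched

try:
    write("ORDER*1", ver, {"qty": 99})           # stale version now
except VersionConflict:
    print("conflict: re-read and retry the transaction")
```

The trade-off is exactly the one David names: no lock-table pressure and no deadlocks, but the application must be prepared to retry when a conflict is detected.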
Re: [U2] Dynamic files, big transactions
Andre,

This seemed very strange, since normally 250 record keys would never hash into the same group of a dynamic file. The exception might be if you created a new part file and then sought to lock and add a large group of records at one time. The new dynamic part file would have a modulus of 1, and all of the records that hashed into that file would be locked in the same group until some of them were written and the file split to a larger modulus.

If your algorithm for the distributed file could cause this situation, then the solution may be to create the new parts with a MINIMUM.MODULUS value large enough to split the record keys out into separate groups (~23? Bigger is better). CONFIGURE.FILE MINIMUM.MODULUS ... (in the Prime/PI Open version) would accept a keyword of IMMEDIATE to force the splitting of groups. UniVerse lacks this option, so you should specify MINIMUM.MODULUS at the time you create the part file.

-Rick Nuckolls
Lynden Inc.

On Mar 9, 2007, at 10:39 AM, Andre Meij wrote:

Hi,

We have a highly technical problem with UniVerse related to the locking tables and their configuration. We have a big application running on UniVerse 10.1 (Solaris). This application is built on distributed dynamic files. Some of the keys are auto numbers; others are supplied by external systems.

With our current settings we can have a maximum of 250 locks in one group; for sequential auto numbers this means we can only lock, say, records 1000 to 1249 without getting an abort and a rollback because a new lock cannot be acquired. The next lock cannot be acquired because all of these locks fall within the same lock group, which is limited to 250 locks. I know of two uvconfig parameters that define this locking behavior; however, their maximum values are limited by the maximum size of the shared memory segment:

    # GLTABSZ - sets the number of group lock entries
    GLTABSZ 250
    # RLTABSZ - sets the number of read lock entries
    RLTABSZ 250

Our testing indicated that these numbers cannot be raised any higher (due to the shared memory limit). This all means that I cannot lock more than 250 records in one transaction. Unfortunately that is not always enough, and we occasionally have to implement some extensive tricks to circumvent it. I would very much like to see this resolved at the UniVerse level, so that the programmers can stop worrying about it.

Anyone who has experience, or knows of someone with experience, please help :-). Maybe you have knowledge of this problem yourself, or know of somebody within IBM who could help us resolve it.

Regards,
Andre Meij
Innovate-IT
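Rick's point -- that a MINIMUM.MODULUS around 23 (or bigger) would split the batch of keys into separate groups -- can be illustrated numerically. This uses `key % modulus` as a crude stand-in for UniVerse's real hashing, which is a reasonable approximation only because sequential numeric keys distribute evenly under almost any hash:

```python
# For a run of 250 sequential auto-number keys (the batch from the
# thread), count the largest number of keys landing in any one group
# at several moduli.  key % modulus stands in for the real UV hash.

from collections import Counter

keys = range(1000, 1250)          # the 250-record batch

for modulus in (1, 23, 101):
    groups = Counter(k % modulus for k in keys)
    print(modulus, max(groups.values()))
# prints:
#   1 250   -- every key in the one group: the whole batch hits one lock row
#   23 11   -- worst group holds 11 keys: far below the 250-lock cap
#   101 3   -- bigger modulus, thinner groups still
```

This is why the thread's two fixes are equivalent in effect: creating the file with a larger MINIMUM.MODULUS up front, or resizing an existing modulo-1 file, both push the per-group key count (and hence the per-lock-row count) well under the RLTABSZ limit.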
Re: [U2] Dynamic files, big transactions
UniVerse's CONFIGURE.FILE does nothing apart from updating the file header. You need

    RESIZE filename * * * USING directorypath

to actually effect an immediate change.