Back in December '06 there was a thread about "How to do locking in
RDBO".  It started here:

  http://osdir.com/ml/lang.perl.modules.dbi.rose-db-object/2006-12/msg00028.html

I'm trying to solve a very similar problem and while I have several
versions of things that limp along, I'm sure that I could be doing it
better.

In my case, I need to generate a series of identifiers that are a
concatenation of a date and a two-digit counter, e.g. 2007101001,
2007101002, 2007101003, 2007101101, ...
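The concatenation itself is the easy part; here's a minimal Perl
sketch (the helper name and the YYYYMMDD format are my assumptions,
inferred from the examples above):

```perl
# Build an identifier like 2007101001 from a date string and a
# two-digit counter.  sprintf's %02d zero-pads the counter.
sub make_sheet_id {
    my ($yyyymmdd, $counter) = @_;
    return sprintf('%s%02d', $yyyymmdd, $counter);
}

# make_sheet_id('20071010', 1) gives '2007101001'
```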

I'm working with both SQLite and MySQL, but have only been hacking on
ideas in my SQLite test world.  I have the following table defined
(sample sheets can have many sample sheet numbers):

  create table sample_sheet_numbers (
        id integer primary key autoincrement,
        sample_sheet_id integer references sample_sheets(id),
        the_date datetime not null default current_timestamp,
        counter integer not null default 0
  );

And the following fragment in my SampleSheetNumber.pm:

    columns => [
        id              => { type => 'integer' },
        sample_sheet_id => { type => 'integer' },
        the_date        => { type => 'datetime', default => 'now', not_null => 1 },
        counter         => { type => 'integer', default => 0, not_null => 1 },
    ],

I initially did something along the lines of John's suggestion in:

  http://osdir.com/ml/lang.perl.modules.dbi.rose-db-object/2006-12/msg00032.html

but used SQLite's 'BEGIN EXCLUSIVE TRANSACTION' and 'COMMIT' to do the
lock/unlock.  It almost works, but craps out for some reason if I try
to create a bunch of objects (it dies after 248, odd...) in a loop.  I
haven't tracked down what's going on there yet.

However, at a couple of points in the thread, Perrin made the point
that one can avoid the locking mess by doing the desired operation in
a single SQL statement.  That led me to cook up the following method
(in RoR::DB::SampleSheetNumber::Manager):

  sub get_next_sheet_number {
    my $class = shift;

    my $new_num = RoR::DB::SampleSheetNumber->new();
    $new_num->save();

    my $dbh   = $new_num->db->dbh;
    my $table = RoR::DB::SampleSheetNumber->meta->table();

    # ->today() is just ->now() truncated to the start of the day.
    my $today     = $new_num->the_date();
    my $yesterday = $today->clone()->subtract(days => 1);
    my $tomorrow  = $today->clone()->add(days => 1);

    # Use placeholders rather than interpolating the dates directly,
    # so the DBI driver handles the quoting.
    my $sql = "update $table set counter = " .
              "(select max(counter) + 1 from $table " .
              " where the_date < ? and the_date > ?) " .
              "where id = ?";

    $dbh->do($sql, undef, "$tomorrow", "$yesterday", $new_num->id)
      or die "Unable to update counter: " . $dbh->errstr;

    return $new_num;
  }

It just creates and saves a new object, then uses a single SQL update
statement to set its counter value to one more than the max of the
counter values of all the other numbers for that day.
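The per-day counter behavior that update statement implements can be
sketched in plain Perl (an in-memory illustration of the scheme only,
not the SQL itself; names are mine):

```perl
# In-memory illustration of the counter scheme: each day gets its own
# counter that starts at 1 and increments with every new number.
my %max_for_day;    # day (YYYYMMDD) => highest counter handed out so far

sub next_counter {
    my ($day) = @_;
    # Equivalent in spirit to "max(counter) + 1" over that day's rows:
    return ++$max_for_day{$day};
}
```

In the SQL version the freshly saved row's own counter of 0 is part of
the max(), so the first number on a new day likewise comes out as 1.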

My questions are:

  - As written, it almost works.  I'm clearly running around behind
    the back of RDBO's copy of that row's values: $new_num->counter is
    still zero even though the row has been updated.  What's the
    cleanest way to fix this?

  - Is there a better way for me to use the single sql statement to
    update the counter value?

  - Can anyone shed any light on the Scalar::Defer trick that John
    mentions?  In particular, when would that code run?

    I'm concerned because I'm depending on the default value of 'now'
    for the the_date column to be set when the object is created; I
    then use it in get_next_sheet_number so that I know what day
    it is.

Thanks!

g.

_______________________________________________
Rose-db-object mailing list
Rose-db-object@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rose-db-object