Re: [SQL] Assigning data-entry tasks to multiple concurrent clients

2009-06-01 Thread Jasen Betts
On 2009-06-01, Jamie Tufnell wrote:
>
> Hi,
>
> I am trying to provide a simple data entry interface to allow multiple
> people to efficiently work through every record in a table and fill in the
> missing values.
>
> The interface is a web application that simply loads up record after record
> until they're all complete.
>
> I want to minimize the possibility of assigning the same record to two
> users.

update records
   set locked_since = now()
 where id = (select id
               from records
              order by locked_since is not null, locked_since asc
              limit 1)
returning *;

Your operators may end up fighting over the last two records;
would that be a bad thing?
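For completeness, here's a sketch of the matching "save" step under the
same one-table layout (the data-entry column and the "done" flag are
assumptions, not part of the schema above; the checkout query would then
also need a "where not done" filter):

-- mark the record finished and release the claim in one statement;
-- entered_value and done are illustrative column names
update records
   set entered_value = $1,
       locked_since  = null,
       done          = true
 where id = $2;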



Re: [SQL] Assigning data-entry tasks to multiple concurrent clients

2009-06-01 Thread Scott Marlowe
On Sun, May 31, 2009 at 9:54 PM, Jamie Tufnell wrote:
> BEGIN;
> SELECT * FROM records
> WHERE in_edit_queue AND id NOT IN (
>   SELECT record_id FROM locked_records
>   WHERE locked_since > now() - interval '5 minutes')
> LIMIT 1;
>
> INSERT INTO locked_records (record_id, locked_since) VALUES (?, now());
> COMMIT;

There's a race condition here but a unique constraint on record_id
will take care of that, as long as you catch the error and retry.
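For example (a sketch; the constraint name is arbitrary):

-- one lock per record: a second INSERT for the same record_id fails
alter table locked_records
  add constraint locked_records_record_id_key unique (record_id);

On a unique_violation (SQLSTATE 23505) the client rolls back and simply
selects the next candidate record.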

> Then to save (first-in wins is acceptable for this environment):
>
> BEGIN;
> UPDATE records SET in_edit_queue = false
>   WHERE id = ? AND in_edit_queue = true;
> DELETE FROM locked_records WHERE record_id = ?;
> COMMIT;
>
> Is this a sane approach?  Is there a better way to do this with PostgreSQL?

It'll work.  The key to any system like this is monitoring progress,
watching for things that get stuck or fail to be processed, and running
them a second time if need be.  I had a system that processed 1M rows
at a time from an 880M-row db, and I used a secondary sequence and
recid/1M to partition the work.  The next job up grabs an id from the
secondary sequence, which identifies the record (or set) to be
processed.  With that method no locking is needed, and no one needs to
"check out" the records, because incrementing the secondary sequence is
in fact checking them out.  Just check the finished table for holes,
and if there are any, put those jobs back in the queue by updating
their id to the next value from the primary id sequence.

Sequences can be an elegant way of assigning jobs to multiple threads
without locking issues.
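For example, a minimal sketch of the idea (all names here are
illustrative, not from the original system):

-- nextval() is atomic across sessions, so claiming a slice
-- needs no table locks at all
create sequence work_slice_seq;

-- a worker claims the next 1M-row slice ...
select nextval('work_slice_seq');   -- say it returns 42

-- ... and processes exactly the rows in that slice
select * from big_table where recid / 1000000 = 42;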



[SQL] Avoiding "will create implicit index" NOTICE

2009-06-01 Thread Bryce Nesbitt
I'm looking for a good way to avoid triggering the "will create implicit 
index" NOTICE that Postgres (all versions) puts out.  This ends up 
spamming cron scripts for no good reason:


=> create table junk_six (foo int, primary key (foo));
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "junk_six_pkey" for table "junk_six"
CREATE TABLE

I've got a hacky solution (in Perl), pulled from
http://www.perlmonks.org/index.pl/jacques?node_id=540511
which suppresses the warning:

my $tmpwarn = $SIG{__WARN__};
$SIG{__WARN__} = sub {
    print STDERR @_ if $_[0] !~ m/NOTICE:  CREATE TABLE/;
};
$sqldb->sql_execute("create table junk_six (foo int, primary key (foo));");
$SIG{__WARN__} = $tmpwarn;

I know I can change the message level in postgresql.conf, but that has
other side effects.  The best solution would be to avoid the notice in
the first place.  Is that possible?


   -Bryce

Keywords: postgres, warning, error, suppress, disable, avoid, hide, stderr, stdout




Re: [SQL] Avoiding "will create implicit index" NOTICE

2009-06-01 Thread Scott Marlowe
On Mon, Jun 1, 2009 at 1:32 PM, Bryce Nesbitt wrote:
> I'm looking for a good way to avoid triggering the "will create implicit
> index" NOTICE that Postgres (all versions) puts out.
> [...]
> But the best solution would be to avoid the notice in the first place.
> Is that possible?

You can also set client_min_messages (the level of messages sent to the
client, which is what ends up in your script's output) by the
connection, by the user, and by the database.

alter user bubba set client_min_messages=warning;

etc.
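For a one-off cron script the same thing works per session or per
database (a sketch):

-- raise the threshold for just this session ...
set client_min_messages = warning;

-- ... or for every future connection to one database
alter database mydb set client_min_messages = warning;

NOTICE-level messages are then no longer sent to the client at all, so
nothing reaches the script's stderr.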
