Hi,
I am faced with a modeling problem and thought I'd see if anyone has run
into something similar and can offer some advice.
Basically my problem domain is cataloguing "snippets of information" about
"entities" which are loosely identified.
Entities can be identified by up to 3 different methods
OT: Sorry for top-posting; I've been using a ticket interface all morning.
--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Hi,
I am trying to provide a simple data entry interface to allow multiple
people to efficiently work through every record in a table and fill in the
missing values.
The interface is a web application that simply loads up record after record
until they're all complete.
I want to minimize the pos
Just want to correct a small but significant error in the first query:
WHERE locked_since < now() + interval '5 minutes'
Should read:
WHERE locked_since BETWEEN now() - interval '5 minutes' AND now();
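The corrected predicate above identifies rows locked within the last 5 minutes (i.e. still held). The complementary claim step would look something like this sketch; the table and column names (records, locked_since, missing_value) are assumptions, not from the thread:

```sql
-- Claim one unfinished record whose lock is absent or stale (> 5 min old),
-- stamping it so other workers skip it. Names are hypothetical.
UPDATE records
SET locked_since = now()
WHERE id = (
    SELECT id FROM records
    WHERE missing_value IS NULL
      AND (locked_since IS NULL
           OR locked_since < now() - interval '5 minutes')  -- stale lock
    LIMIT 1
)
RETURNING id;
```

Doing the select and update in one statement narrows (though does not fully close, without row locking) the window in which two workers can claim the same record.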
On 6/1/09, Jamie Tufnell wrote:
> Hi,
>
> I am tryin
On 2/6/09, Tom Lane wrote:
> As the view definition printout suggests, tomorrow this view will produce
>
> 2009-02-06 | 2009-02-05
>
> because the constant isn't going to change.
Thanks for that explanation Tom. Very clear and helpful.
Jamie
Hi,
I was doing EXPLAIN ANALYZE on a query where I compare against
current_date and noticed the following:
Filter: (date <= ('now'::text)::date)
I knew about now() but did not know about 'now' and have since learnt
of 'today', 'tomorrow', etc. Great!
So, I changed my condition to <= 'now' t
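The pitfall Tom Lane describes above is that the string literal 'now' is coerced to a constant when a view (or column default) is defined, so the date stops advancing, whereas current_date is re-evaluated on every execution. A minimal sketch, with an assumed table name:

```sql
-- 'stuff' is a hypothetical table with a date column 'created'.
CREATE VIEW recent_frozen AS
SELECT * FROM stuff WHERE created <= 'now'::date;   -- constant-folded at view creation

CREATE VIEW recent_ok AS
SELECT * FROM stuff WHERE created <= current_date;  -- evaluated per query
```

Run tomorrow, recent_frozen still filters against today's date at the time the view was created; recent_ok moves forward.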
On 1/17/09, Erik Jones wrote:
> On Jan 15, 2009, at 8:06 PM, Tom Lane wrote:
>> "Jamie Tufnell" writes:
>>> item_count int -- this is derived from (select count(*) from items
>>> where group_id = id)
>>> ...
>>
>>> item_count would
Hi,
I (think I) want to denormalize an aggregate for performance reasons.
I think that is the right terminology; please correct me if not.
Consider these two tables:
items:
id serial (pk),
group_id int (fk),
...
groups:
id serial (pk),
item_count int -- this is derived from (select count(*)
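One common way to maintain such a denormalized aggregate is an AFTER trigger on the child table. This is a sketch under the schema above; the function and trigger names are my own, and it deliberately ignores UPDATEs that move an item between groups:

```sql
-- Keep groups.item_count in step with inserts/deletes on items.
CREATE FUNCTION sync_item_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE groups SET item_count = item_count + 1 WHERE id = NEW.group_id;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE groups SET item_count = item_count - 1 WHERE id = OLD.group_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_item_count
AFTER INSERT OR DELETE ON items
FOR EACH ROW EXECUTE PROCEDURE sync_item_count();
```

A production version would also handle UPDATE OF group_id (decrement the old group, increment the new one).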
On 10/16/08, Frank Bax <[EMAIL PROTECTED]> wrote:
> Jamie Tufnell wrote:
>> If someone uses a search query on my site like this:
>>
>> "abc def"
>>
>> I would like to return all results for 'abc & def' first, followed by
>>
Hi
If someone uses a search query on my site like this:
"abc def"
I would like to return all results for 'abc & def' first, followed by
all results for tsquery 'abc | def' is there some way to express this
in one tsquery? What's the most efficient way to go about this? The
search is on one col
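One way to get this ordering in a single query (my suggestion, not from the thread) is to search with the looser OR tsquery and sort by whether the stricter AND tsquery also matches; column names here are assumptions:

```sql
-- body_tsv is a hypothetical tsvector column. AND matches sort first
-- because boolean true > false.
SELECT *
FROM docs
WHERE body_tsv @@ to_tsquery('abc | def')
ORDER BY (body_tsv @@ to_tsquery('abc & def')) DESC;
```

A ts_rank() call can be added as a secondary sort key to order results within each of the two bands.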
Hi Dirk,
On 3/11/08, Dirk Jagdmann <[EMAIL PROTECTED]> wrote:
> I vote against duplicating site_group_id in the users table and the
> proposed unique constraint with a function. Because all those might
> fail, if you ever want to change the relationship between a site and a
> site group.
Good poi
On 3/9/08, A. R. Van Hook <[EMAIL PROTECTED]> wrote:
> The script worked fine untill I tried in on the following entries:
>
> 1841 | 2 | 9228 : Caty Case : SO:Bra:.
> 1841 | 3 | 9621 : Kelsie Greenlee : PROM.
>
> 2072 | 3 | null : Cookie Jared Cook :..
> 2072 | 4 | null : Cookie Jar
On 3/7/08, Jamie Tufnell <[EMAIL PROTECTED]> wrote:
> On 3/7/08, Bart Degryse <[EMAIL PROTECTED]> wrote:
> > I haven't tested these two statements, but I'm using exactly this
> > concept on some tables myself.
> > My equivalent of your users table contain
Hi Jorge,
On 3/7/08, Jorge Godoy <[EMAIL PROTECTED]> wrote:
> On Thursday 06 March 2008 22:10:14 Jamie Tufnell wrote:
> > I'm not sure how I would enforce that the site_group_id
> > added to the users table would correspond correctly with
> > the site_id (as
Hi Bart,
On 3/7/08, Bart Degryse <[EMAIL PROTECTED]> wrote:
> I haven't tested these two statements, but I'm using exactly this
> concept on some tables myself.
> My equivalent of your users table contains some 3,000,000 records.
> My equivalent of your sites table contains some 150,000 records.
>
Hi Ray,
On 3/7/08, Ray Madigan <[EMAIL PROTECTED]> wrote:
> How I think about it.
>
> A user has access to a site and all of the sites within the site group that
> the site is in.
>
> if you reword your condition
>
> A user has access to all of the sites in a site group with a default defined
> by
Hi Ray,
Thanks for your reply!
On 3/6/08, Ray Madigan <[EMAIL PROTECTED]> wrote:
> I don't think I understand. You have a constraint that a user has implied
> access to any site in the group, explain why you think it would be wrong to
> have the group_id as an instance variable for the user. Othe
Hi,
I'm remodeling our authentication tables at the moment to accommodate
future changes. I've hit a wall and thought I'd ask here for some
help. Hopefully I'm just missing something simple.
I'll do my best to explain the domain..
We have users, sites, and groups of sites.
Usernames should be
On 1/9/08, Erik Jones <[EMAIL PROTECTED]> wrote:
> On Jan 9, 2008, at 1:09 PM, Scott Marlowe wrote:
> > On Jan 9, 2008 12:20 PM, Steve Midgley <[EMAIL PROTECTED]> wrote:
> >> This is kludgy but you would have some kind of random number test at
> >> the start of the trigger - if it evals true once p
On 1/8/08, Erik Jones <[EMAIL PROTECTED]> wrote:
> > Hmm so rather than NOT IN ( .. LIMIT 50) would you suggest IN ( ...
> > OFFSET 50) like in Erik's example? Or something else entirely?
>
> Well, that would give you some gain. Think about it like this: once
> a given user's history records ar
On 1/8/08, codeWarrior <[EMAIL PROTECTED]> wrote:
> Jamie:
>
> I think you are probably having slowdown issues in your "DELETE FROM WHERE
> NOT IN SELECT ORDER BY DESCENDING" construct -- that seems a bit convoluted
> to me
Hmm so rather than NOT IN ( .. LIMIT 50) would you suggest IN ( ...
O
Hi codeWarrior,
codeWarrior wrote:
> > For user_ids that have more than 50 rows, I want to keep the most
> > recent 50 and delete the rest.
> How about using a trigger to call a stored procedure ? [ON INSERT to
> user_item_history DO ...]
[snip]
Thanks for your input! I've implemented this but
Hi,
I have a table that stores per-user histories of recently viewed items
and I'd like to limit the number of history items to <= 50 per user.
I'm considering doing this with a query run from cron every so often
but I'm not happy with what I've come up with so far, and since it's a
quite active t
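A cron-driven cleanup like the one described can be written with a window function (PostgreSQL 8.4+). The table name user_item_history appears later in the thread; the id and viewed_at columns are assumptions:

```sql
-- Delete everything past each user's 50 most recent history rows.
DELETE FROM user_item_history
WHERE id IN (
    SELECT id FROM (
        SELECT id,
               row_number() OVER (PARTITION BY user_id
                                  ORDER BY viewed_at DESC) AS rn
        FROM user_item_history
    ) ranked
    WHERE rn > 50
);
```

Compared with a per-user NOT IN (... LIMIT 50), this handles all users in one statement.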
Hi,
I am storing a log of HTTP requests in a database table (including IP
address):
http_log: id(PK), path, time, ip
I have another table that contains CIDR ranges and names for them:
network_names: id(PK), cidr, name
Some example data for both tables:
network_names:
1, 192.168.0.0/24, 'Engin
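Given those two tables, PostgreSQL's network types make the lookup direct: the << operator tests whether an inet address is contained within a cidr range. A sketch assuming http_log.ip is inet and network_names.cidr is cidr:

```sql
-- Label each request with its network name, if any range contains the IP.
SELECT h.path, h.time, h.ip, n.name
FROM http_log h
LEFT JOIN network_names n ON h.ip << n.cidr;
```

If the ip column is stored as text instead, a cast (h.ip::inet) is needed before the containment test.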
Hi list,
I have a many-to-many relationship between movies and genres and, in the
link table I have a third field called relevance, which orders the "appropriateness" of
the relations within each movie.
For example:
movie_id, genre_id, relevance (i've used movie/genre titles for clarity
here, but in realit
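A minimal sketch of the link table being described; the constraint choices (in particular whether relevance must be unique per movie) are assumptions:

```sql
-- Each (movie, genre) pair appears once; relevance ranks genres
-- within a movie (1 = most appropriate).
CREATE TABLE movie_genres (
    movie_id  int NOT NULL REFERENCES movies(id),
    genre_id  int NOT NULL REFERENCES genres(id),
    relevance int NOT NULL,
    PRIMARY KEY (movie_id, genre_id),
    UNIQUE (movie_id, relevance)  -- one genre per rank within a movie
);
```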