Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Hey guys,

I'm trying to understand the performance impact of "Index Recheck".  I
googled for it but didn't find many details; where can I learn more
about it?

And how did you know the performance is being significantly hurt by
inadequate work_mem?

I'm running PG 9.6.1, built from source.
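For reference, a recheck becomes expensive when the bitmap outgrows work_mem and degrades to page granularity: EXPLAIN then shows `Heap Blocks: ... lossy=N` plus a `Rows Removed by Index Recheck` line, because every row on a lossy page must be re-tested against the condition. A rough way to see the effect (values illustrative, not a recommendation):

```sql
SET work_mem = '4MB';   -- small: expect lossy heap blocks and many rechecks
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable
WHERE to_tsvector('english', title) @@ 'x264'::tsquery;

SET work_mem = '256MB'; -- large: the bitmap stays exact, rechecks drop
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable
WHERE to_tsvector('english', title) @@ 'x264'::tsquery;
```

Comparing the two plans' `lossy=` counts and `Rows Removed by Index Recheck` lines shows how much of the runtime is recheck work rather than index-scan work.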


On Mon, Nov 14, 2016 at 2:51 AM, Tom Lane  wrote:
> Oleg Bartunov  writes:
>> On Sun, Nov 13, 2016 at 6:05 PM, Aaron Lewis 
>>> It takes 500ms with 10m rows, could it be faster?
>
>> Sure.  A recheck with a function call is pretty expensive, so I'd not
>> recommend a functional index here; just create a separate column of type
>> tsvector (materialize the to_tsvector) and create a gin index on it.
>> You'll be surprised.
>
> I doubt it'll help that much --- more than half the time is going into the
> bitmap indexscan, and with over 1m candidate matches, there's no way
> that's going to be super cheap.
>
> I wonder whether a gist index would be better here, since it would support
> a plain indexscan which should require scanning much less of the index
> given the small LIMIT.
>
> (Materializing the tsvector would probably help for gist, too, by reducing
> the cost of lossy-index rechecks.)
>
> BTW, it still looks like the performance is being significantly hurt by
> inadequate work_mem.
>
> regards, tom lane
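Oleg's materialize-the-tsvector suggestion, combined with Tom's GiST alternative, might look like the following sketch. The column name `title_tsv` and the one-off backfill are assumptions; keeping the column current on future inserts/updates (e.g. with a trigger) is left out:

```sql
-- Materialize the tsvector so the index (and any recheck) avoids
-- recomputing to_tsvector() per row:
ALTER TABLE mytable ADD COLUMN title_tsv tsvector;
UPDATE mytable SET title_tsv = to_tsvector('english', title);

-- GIN, as originally intended:
CREATE INDEX name_fts_gin ON mytable USING gin (title_tsv);
-- ...or GiST, which supports a plain index scan and so can stop early
-- for a small LIMIT:
CREATE INDEX name_fts_gist ON mytable USING gist (title_tsv);

SELECT * FROM mytable WHERE title_tsv @@ 'x264'::tsquery LIMIT 10;
</imports>
```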



-- 
Best Regards,
Aaron Lewis - PGP: 0x13714D33 - http://pgp.mit.edu/
Finger Print:   9F67 391B B770 8FF6 99DC  D92D 87F6 2602 1371 4D33


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Sigh, didn't notice that. Thanks for the heads up.

It takes 500ms with 10m rows; could it be faster?
I've increased work_mem to 256MB.

test=# explain analyze select * from mytable where
to_tsvector('english', title) @@ 'x264'::tsquery limit 1000 offset 10;
  QUERY PLAN
---
 Limit  (cost=684.06..2949.42 rows=1000 width=83) (actual
time=348.506..536.483 rows=1000 loops=1)
   ->  Bitmap Heap Scan on mytable  (cost=661.41..158917.22 rows=69859
width=83) (actual time=345.354..536.199 rows=1010 loops=1)
 Recheck Cond: (to_tsvector('english'::regconfig, title) @@
'''x264'''::tsquery)
 Rows Removed by Index Recheck: 12242
 Heap Blocks: exact=20 lossy=186
 ->  Bitmap Index Scan on name_fts  (cost=0.00..643.95
rows=69859 width=0) (actual time=333.703..333.703 rows=1044673
loops=1)
   Index Cond: (to_tsvector('english'::regconfig, title)
@@ '''x264'''::tsquery)
 Planning time: 0.144 ms
 Execution time: 537.212 ms
(9 rows)

On Sun, Nov 13, 2016 at 10:33 PM, Julien Rouhaud
 wrote:
> On 13/11/2016 15:26, Aaron Lewis wrote:
>> Hi Oleg,
>>
>> Can you elaborate on the title column? I don't get it.
>>
>
>>>> create table mytable(hash char(40), title varchar(500));
>>>> create index name_fts on mytable using gin(to_tsvector('english',
>>>> 'title'));
>
> You created an index on the text 'title', not on the title column, so
> the index is useless.
>
> Drop the existing index and create this one instead:
>
> create index name_fts on mytable using gin(to_tsvector('english', title));
>
>> On Sun, Nov 13, 2016 at 10:10 PM, Oleg Bartunov  wrote:
>>>
>>>
>>> On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis 
>>> wrote:
>>>>
>>>> I have a simple table, and a gin index,
>>>>
>>>> create table mytable(hash char(40), title varchar(500));
>>>> create index name_fts on mytable using gin(to_tsvector('english',
>>>> 'title'));
>>>
>>>
>>>
>>> ^
>>>
>>>>
>>>> create unique index md5_uniq_idx on mytable(hash);
>>>>
>>>> When I execute a query with tsquery, the GIN index was not in use:
>>>>
>>>> test=# explain analyze select * from mytable where
>>>> to_tsvector('english', title) @@ 'abc | def'::tsquery limit 10;
>>>>  QUERY PLAN
>>>>
>>>> 
>>>>  Limit  (cost=0.00..277.35 rows=10 width=83) (actual
>>>> time=0.111..75.549 rows=10 loops=1)
>>>>->  Seq Scan on mytable  (cost=0.00..381187.45 rows=13744 width=83)
>>>> (actual time=0.110..75.546 rows=10 loops=1)
>>>>  Filter: (to_tsvector('english'::regconfig, (title)::text) @@
>>>> '''abc'' | ''def'''::tsquery)
>>>>  Rows Removed by Filter: 10221
>>>>  Planning time: 0.176 ms
>>>>  Execution time: 75.564 ms
>>>> (6 rows)
>>>>
>>>> Any ideas?
>>>>
>
> --
> Julien Rouhaud
> http://dalibo.com - http://dalibo.org





Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Hi Oleg,

Can you elaborate on the title column? I don't get it.

On Sun, Nov 13, 2016 at 10:10 PM, Oleg Bartunov  wrote:
>
>
> On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis 
> wrote:
>>
>> I have a simple table, and a gin index,
>>
>> create table mytable(hash char(40), title varchar(500));
>> create index name_fts on mytable using gin(to_tsvector('english',
>> 'title'));
>
>
>
> ^
>
>>
>> create unique index md5_uniq_idx on mytable(hash);
>>
>> When I execute a query with tsquery, the GIN index was not in use:
>>
>> test=# explain analyze select * from mytable where
>> to_tsvector('english', title) @@ 'abc | def'::tsquery limit 10;
>>  QUERY PLAN
>>
>> 
>>  Limit  (cost=0.00..277.35 rows=10 width=83) (actual
>> time=0.111..75.549 rows=10 loops=1)
>>->  Seq Scan on mytable  (cost=0.00..381187.45 rows=13744 width=83)
>> (actual time=0.110..75.546 rows=10 loops=1)
>>  Filter: (to_tsvector('english'::regconfig, (title)::text) @@
>> '''abc'' | ''def'''::tsquery)
>>  Rows Removed by Filter: 10221
>>  Planning time: 0.176 ms
>>  Execution time: 75.564 ms
>> (6 rows)
>>
>> Any ideas?
>>
>>
>
>





Re: [GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Aaron Lewis
Thanks Oleg.

I've increased work_mem to 128MB; the query now drops to 1.7s,
which is faster but still not good enough.

Is there any other thing I can do about it?

test=# explain analyze select * from mytable where title ilike 'x264';
  QUERY PLAN
--
 Bitmap Heap Scan on mytable  (cost=462.69..5639.67 rows=1380
width=83) (actual time=1754.656..1754.656 rows=0 loops=1)
   Recheck Cond: (title ~~* 'x264'::text)
   Rows Removed by Index Recheck: 1220793
   Heap Blocks: exact=197567
   ->  Bitmap Index Scan on title_trgm_idx  (cost=0.00..462.35
rows=1380 width=0) (actual time=346.663..346.663 rows=1220793 loops=1)
 Index Cond: (title ~~* 'x264'::text)
 Planning time: 1.168 ms
 Execution time: 1755.944 ms
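Note that in this plan the lossy blocks are gone (`Heap Blocks: exact=197567`), yet the recheck still discards all 1.2M index candidates: the trigram index is inherently lossy, and `ilike 'x264'` contains no wildcards, so it is really a case-insensitive equality test. If equality is indeed what's wanted, a plain btree expression index should serve it far more cheaply (a sketch; the index name is an assumption, and if the real workload is substring search like `'%x264%'` the trigram index remains the right tool):

```sql
CREATE INDEX title_lower_idx ON mytable (lower(title));

-- Equivalent, index-friendly form of: title ILIKE 'x264'
SELECT * FROM mytable WHERE lower(title) = lower('x264');
```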


On Sun, Nov 13, 2016 at 10:04 PM, Oleg Bartunov  wrote:
>
>
> On Sun, Nov 13, 2016 at 2:54 PM, Aaron Lewis 
> wrote:
>>
>> I have a simple table with Trigram index,
>>
>> create table mytable(hash char(40), title text);
>> create index title_trgm_idx on mytable using gin(title gin_trgm_ops);
>>
>> When I run a query with 10m rows, it uses the Trigram index, but takes
>> 3s to execute, very slow.
>> (I have 80m rows, but only inserted 10m for testing purpose)
>>
>> test=# select count(*) from mytable;
>>   count
>> --
>>  13971887
>> (1 row)
>>
>> test=# explain analyze select * from mytable where title ilike 'x264';
>>   QUERY PLAN
>>
>> --
>>  Bitmap Heap Scan on mytable  (cost=462.69..5639.67 rows=1380
>> width=83) (actual time=2937.308..2937.308 rows=0 loops=1)
>>Recheck Cond: (title ~~* 'x264'::text)
>>Rows Removed by Index Recheck: 11402855
>>Heap Blocks: exact=39557 lossy=158010
>>->  Bitmap Index Scan on title_trgm_idx  (cost=0.00..462.35
>> rows=1380 width=0) (actual time=342.440..342.440 rows=1220793 loops=1)
>>  Index Cond: (title ~~* 'x264'::text)
>>  Planning time: 0.611 ms
>>  Execution time: 2937.729 ms
>> (8 rows)
>>
>> Any ideas to speed things up?
>
>
>Rows Removed by Index Recheck: 11402855
>Heap Blocks: exact=39557 lossy=158010
>
> You need to increase work_mem
>>
>>
>
>





[GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Aaron Lewis
I have a simple table with Trigram index,

create table mytable(hash char(40), title text);
create index title_trgm_idx on mytable using gin(title gin_trgm_ops);

When I run a query with 10m rows, it uses the Trigram index, but takes
3s to execute, very slow.
(I have 80m rows, but only inserted 10m for testing purpose)

test=# select count(*) from mytable;
  count
--
 13971887
(1 row)

test=# explain analyze select * from mytable where title ilike 'x264';
  QUERY PLAN
--
 Bitmap Heap Scan on mytable  (cost=462.69..5639.67 rows=1380
width=83) (actual time=2937.308..2937.308 rows=0 loops=1)
   Recheck Cond: (title ~~* 'x264'::text)
   Rows Removed by Index Recheck: 11402855
   Heap Blocks: exact=39557 lossy=158010
   ->  Bitmap Index Scan on title_trgm_idx  (cost=0.00..462.35
rows=1380 width=0) (actual time=342.440..342.440 rows=1220793 loops=1)
 Index Cond: (title ~~* 'x264'::text)
 Planning time: 0.611 ms
 Execution time: 2937.729 ms
(8 rows)

Any ideas to speed things up?



[GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
I have a simple table, and a gin index,

create table mytable(hash char(40), title varchar(500));
create index name_fts on mytable using gin(to_tsvector('english', 'title'));
create unique index md5_uniq_idx on mytable(hash);

When I execute a query with tsquery, the GIN index was not in use:

test=# explain analyze select * from mytable where
to_tsvector('english', title) @@ 'abc | def'::tsquery limit 10;
 QUERY PLAN

 Limit  (cost=0.00..277.35 rows=10 width=83) (actual
time=0.111..75.549 rows=10 loops=1)
   ->  Seq Scan on mytable  (cost=0.00..381187.45 rows=13744 width=83)
(actual time=0.110..75.546 rows=10 loops=1)
 Filter: (to_tsvector('english'::regconfig, (title)::text) @@
'''abc'' | ''def'''::tsquery)
 Rows Removed by Filter: 10221
 Planning time: 0.176 ms
 Execution time: 75.564 ms
(6 rows)

Any ideas?
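As the replies downthread point out, the quotes around 'title' make this an index on the constant string 'title' rather than on the column, so the planner can never use it. The corrected expression index, whose expression must match the query's expression exactly, would be:

```sql
DROP INDEX name_fts;
CREATE INDEX name_fts ON mytable USING gin (to_tsvector('english', title));

-- The WHERE clause now matches the index expression:
EXPLAIN ANALYZE
SELECT * FROM mytable
WHERE to_tsvector('english', title) @@ 'abc | def'::tsquery
LIMIT 10;
```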




Re: [GENERAL] Designing tables based on user input and defined values

2016-02-28 Thread Aaron Christensen
On Sun, Feb 28, 2016 at 1:15 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sat, Feb 27, 2016 at 10:36 PM, Adrian Klaver  > wrote:
>
>> On 02/27/2016 09:19 PM, Aaron Christensen wrote:
>>
>>> There is somewhat a method to this madness :).  There isn't a formula
>>> that determines outcome.  They will just be arbitrary values that I
>>> assign.
>>>
>>> Obviously, I'm new to SQL but I'm trying to understand your suggestion.
>>> It appears that table Final has the composite/primary keys of goal and
>>> size which will be foreign keyed to table User.   How exactly does the
>>> user submit/store his goal/size and be assigned an outcome if the User
>>> table is using FKs  for goal/size?  It seems backwards to me.
>>>
>>
>> Well there are some unanswered questions, answers to which will shape the
>> ultimate design:
>>
>> Who actually creates the relationship between goal/size and outcome, the
>> user or you?
>>
>> Can a user have more than one combination of goal/size?
>>
>> As to how the user picks their goal/size, that is more an application
>> question. What the relationship between user and final does is ensure that
>> a user can only select a goal/size combination that exists, which I assumed
>> is what you were looking for when you mentioned a lookup table. If I
>> misunderstood then maybe the answers to the above questions will clarify.
>>
>
> ​To be a bit more blunt - we are only dealing with 4 fields here so if it
> is unclear how to proceed its not because the model is complex: its because
> it is unknown what is supposed to be happening in the first place.​
>


Hi David,

You are correct.  I don't think it's supposed to be very
complex/complicated, and my question probably would have been answered
within the first one or two emails if I had described it better.  (I
responded with more
information to Adrian: I will be creating the relationship between
goal/size and outcome.  The user can only provide their goal/size on any
particular date.  Based on their goal size, I will assign to them the
corresponding outcome which will be used as an input to some formula.  Any
user can have one or multiple goal/size combinations.  The user will only
be able to select preexisting goals and sizes.)


>
>

> ​Another question not yet put forth is how do you want to deal with
> change?  It would be wise to assume that the chosen outcome value could
> change in the future in which case do you need to record the value
> permanently as of the time the record was created or is changing
> pre-existing data correct?
>

This is an interesting point.  At the time the user inputs his one or many
goal/size combinations, he will be assigned a particular outcome.  Once the
user is assigned the outcome, it will never change for that particular
entry.  In the future, however, the outcome in the lookup table can change
and any newly inputted goal/size combinations will be assigned those new
outcomes.  So, the record will be permanently stored with whatever the
outcome is at that point in time.

Unfortunately, I am not well versed with SQL so it will take me a bit to
digest the code you provided.  Thank you for chiming in!


> ​By reading the only process description provided:
> """
> The user inputs his name, goal, and size.  Based on his goal and size
> combination, he is assigned a particular "outcome".
> """
> I would consider writing something like the following pseudo-code:
>
> CREATE TABLE user_outcome (username text, goal text, size text, outcome
> integer)
> PRIMARY KEY username
>
> CREATE FUNCTION compute_outcome(username, goal, size) RETURNS integer
> AS $$
> IF EXISTS(SELECT username FROM user_outcome WHERE username = username) THEN
> RAISE EXCEPTION 'User % Already Submitted An Outcome', username
> END IF
>
> INSERT INTO user_outcome(username, goal, size, outcome)
> WITH goal_size_lookup (goal, size, outcome) AS (
> VALUES ('short','small',20), (etc)
> )
> SELECT username, goal, size, outcome
> FROM goal_size_lookup
> WHERE goal = goal AND size = size
> RETURNING outcome;
> $$
>
> Thus the user, at the time of submission, is assigned an outcome.  That
> outcome never changes even if the computation of the outcomes changes.
>
> You can choose to store the goal_size_lookup data is a persistent table if
> desired or needed but even should you do so you'd need to consider whether
> you really want there to be a PK/FK relationship.  The function approach
> hides all this detail nicely and lets you 
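David's pseudo-code can be turned into a runnable sketch along the following lines. The parameter names, the plpgsql error-handling, and the inline VALUES lookup (which would be extended with the real goal/size rows, or replaced by a persistent table) are all assumptions layered on his outline:

```sql
CREATE TABLE user_outcome (
    username text PRIMARY KEY,
    goal     text,
    size     text,
    outcome  integer
);

CREATE FUNCTION compute_outcome(p_username text, p_goal text, p_size text)
RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
    v_outcome integer;
BEGIN
    -- One submission per user: refuse a second one.
    IF EXISTS (SELECT 1 FROM user_outcome WHERE username = p_username) THEN
        RAISE EXCEPTION 'User % already submitted an outcome', p_username;
    END IF;

    -- Record the outcome as of submission time; later changes to the
    -- lookup values do not touch rows already inserted.
    INSERT INTO user_outcome (username, goal, size, outcome)
    SELECT p_username, g.goal, g.size, g.outcome
    FROM (VALUES ('short', 'small', 20)) AS g(goal, size, outcome)
    WHERE g.goal = p_goal AND g.size = p_size
    RETURNING outcome INTO v_outcome;

    RETURN v_outcome;
END;
$$;
```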

Re: [GENERAL] Designing tables based on user input and defined values

2016-02-28 Thread Aaron Christensen
On Sun, Feb 28, 2016 at 12:36 AM, Adrian Klaver 
wrote:

> On 02/27/2016 09:19 PM, Aaron Christensen wrote:
>
>> There is somewhat a method to this madness :).  There isn't a formula
>> that determines outcome.  They will just be arbitrary values that I
>> assign.
>>
>> Obviously, I'm new to SQL but I'm trying to understand your suggestion.
>> It appears that table Final has the composite/primary keys of goal and
>> size which will be foreign keyed to table User.   How exactly does the
>> user submit/store his goal/size and be assigned an outcome if the User
>> table is using FKs  for goal/size?  It seems backwards to me.
>>
>
> Well there are some unanswered questions, answers to which will shape the
> ultimate design:
>
> Who actually creates the relationship between goal/size and outcome, the
> user or you?
>
> Can a user have more than one combination of goal/size?
>
> As to how the user picks their goal/size, that is more an application
> question. What the relationship between user and final does is ensure that
> a user can only select a goal/size combination that exists, which I assumed
> is what you were looking for when you mentioned a lookup table. If I
> misunderstood then maybe the answers to the above questions will clarify.
>


It's not that you're misunderstanding, it's that I'm doing a horrible job
describing my question.

For answers to your questions:
I will be creating the relationship between goal/size and outcome.  The
user can only provide their goal/size on any particular date.  Based on
their goal size, I will assign to them the corresponding outcome which will
be used as an input to some formula.  Any user can have one or multiple
goal/size combinations.  That is correct, the user will only be able to
select preexisting goals and sizes.



>
>
>> On Feb 27, 2016 7:04 PM, "Adrian Klaver" > <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/27/2016 03:12 PM, Aaron Christensen wrote:
>>
>> Hi Adrian,
>>
>> Thank you for responding with the SQL code.  However, outcome
>> cannot be
>> a primary key because outcome values will be duplicates in some
>> instances.  I am not sure how else to have a lookup table that
>> stores
>> static values.
>>
>>
>> Well first is there a method to the madness:)?
>>
>> In other words is the choice of an outcome arbitrary or is there
>> some calculation that goes into it?
>>
>> Otherwise, something like?:
>>
>> test=> create table final(goal varchar, size varchar, outcome int,
>> PRIMARY KEY(goal, size));
>>
>> test=> create table user_tbl(user_id int PRIMARY KEY, user_name
>> varchar,  goal varchar, size varchar, CONSTRAINT g_s_fk  FOREIGN KEY
>> (goal, size)  REFERENCES final(goal, size));
>>
>>
>> test=> \d final
>>Table "public.final"
>>   Column  |   Type| Modifiers
>> -+---+---
>>   goal| character varying | not null
>>   size| character varying | not null
>>   outcome | integer   |
>> Indexes:
>>  "final_pkey" PRIMARY KEY, btree (goal, size)
>> Referenced by:
>>  TABLE "user_tbl" CONSTRAINT "g_s_fk" FOREIGN KEY (goal, size)
>> REFERENCES final(goal, size)
>>
>> test=> \d user_tbl
>>Table "public.user_tbl"
>>Column   |   Type| Modifiers
>> ---+---+---
>>   user_id   | integer   | not null
>>   user_name | character varying |
>>   goal  | character varying |
>>   size  | character varying |
>> Indexes:
>>  "user_tbl_pkey" PRIMARY KEY, btree (user_id)
>> Foreign-key constraints:
>>  "g_s_fk" FOREIGN KEY (goal, size) REFERENCES final(goal, size)
>>
>>
>>
>>
>>
>> Thanks!
>> Aaron
>>
>> On Sat, Feb 27, 2016 at 5:15 PM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>>
>> <mailto:adrian.kla...@aklaver.com>>> wrote:
>>
>>  On 02/27/2016 01:15 PM, Aaron Christensen wrote:
>>
>>  Hello,
>>
>>  I am trying to figure out the correct way to design the
>

Re: [GENERAL] Designing tables based on user input and defined values

2016-02-27 Thread Aaron Christensen
There is somewhat a method to this madness :).  There isn't a formula that
determines outcome.  They will just be arbitrary values that I assign.

Obviously, I'm new to SQL but I'm trying to understand your suggestion. It
appears that table Final has the composite/primary keys of goal and size
which will be foreign keyed to table User.   How exactly does the user
submit/store his goal/size and be assigned an outcome if the User table is
using FKs  for goal/size?  It seems backwards to me.
On Feb 27, 2016 7:04 PM, "Adrian Klaver"  wrote:

> On 02/27/2016 03:12 PM, Aaron Christensen wrote:
>
>> Hi Adrian,
>>
>> Thank you for responding with the SQL code.  However, outcome cannot be
>> a primary key because outcome values will be duplicates in some
>> instances.  I am not sure how else to have a lookup table that stores
>> static values.
>>
>
> Well first is there a method to the madness:)?
>
> In other words is the choice of an outcome arbitrary or is there some
> calculation that goes into it?
>
> Otherwise, something like?:
>
> test=> create table final(goal varchar, size varchar, outcome int, PRIMARY
> KEY(goal, size));
>
> test=> create table user_tbl(user_id int PRIMARY KEY, user_name varchar,
> goal varchar, size varchar, CONSTRAINT g_s_fk  FOREIGN KEY (goal, size)
> REFERENCES final(goal, size));
>
>
> test=> \d final
>   Table "public.final"
>  Column  |   Type| Modifiers
> -+---+---
>  goal| character varying | not null
>  size| character varying | not null
>  outcome | integer   |
> Indexes:
> "final_pkey" PRIMARY KEY, btree (goal, size)
> Referenced by:
> TABLE "user_tbl" CONSTRAINT "g_s_fk" FOREIGN KEY (goal, size)
> REFERENCES final(goal, size)
>
> test=> \d user_tbl
>   Table "public.user_tbl"
>   Column   |   Type| Modifiers
> ---+---+---
>  user_id   | integer   | not null
>  user_name | character varying |
>  goal  | character varying |
>  size  | character varying |
> Indexes:
> "user_tbl_pkey" PRIMARY KEY, btree (user_id)
> Foreign-key constraints:
> "g_s_fk" FOREIGN KEY (goal, size) REFERENCES final(goal, size)
>
>
>
>
>
>> Thanks!
>> Aaron
>>
>> On Sat, Feb 27, 2016 at 5:15 PM, Adrian Klaver
>> mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 02/27/2016 01:15 PM, Aaron Christensen wrote:
>>
>> Hello,
>>
>> I am trying to figure out the correct way to design the database
>> table
>> to support the following situation.
>>
>> To start, I have an Excel spreadsheet that maps particular
>> combinations
>> of Goal and Size to an Outcome.  Goal choices are "Long",
>> "Average", and
>> "Short".  Size choices are "Big", "Medium", and "Small".  The
>> designated
>> Outcome for each goal/size combination are number values between
>> 12 and
>> 20.  Please refer to attachment "goalSizeExcel.pdf" for the Excel
>> spreadsheet version.
>>
>> In order to use this data in the database, I converted it to an
>> SQL
>> table with attributes "Goal", "Size", and "OUTCOME".  "Goal" and
>> "Size"
>> serve as composite primary keys.  Please refer to attachment
>> "TableFinal.pdf" for the illustration.
>>
>> Please refer to "UserOutcome.jpg" for the ER diagram.  The user
>> inputs
>> his name, goal, and size.  Based on his goal and size
>> combination, he is
>> assigned a particular "outcome".
>>
>> I am not exactly sure if my attached ER diagram is the correct
>> way to
>> model this.  I don't want to add a UserId [FK] to table Final
>> because
>> table Final is supposed to serve as a lookup or reference table
>> (I am
>> not sure of the correct terminology).
>>
>> Please advise if I am on the right track or if I should follow a
>> different design.  I intend to have a few other lookup/reference
>> tables
>> that will serve a similar purpose.
>>
>>
>>  >From a quick look it seems to me that outcome is the primary key
>> to goal and size, so
>>
>> CREATE TABLE final (
>>outcome int PRIMARY KEY,
>>goal varchar,
>>size varchar
>> )
>>
>> CREATE TABLE user (
>> name varchar,
>> outcome_fk int REFERENCES final(outcome) ON ...
>> )
>>
>>
>>
>> Thank you!
>> Aaron
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Designing tables based on user input and defined values

2016-02-27 Thread Aaron Christensen
Hi Adrian,

Thank you for responding with the SQL code.  However, outcome cannot be a
primary key because outcome values will be duplicates in some instances.  I
am not sure how else to have a lookup table that stores static values.

Thanks!
Aaron

On Sat, Feb 27, 2016 at 5:15 PM, Adrian Klaver 
wrote:

> On 02/27/2016 01:15 PM, Aaron Christensen wrote:
>
>> Hello,
>>
>> I am trying to figure out the correct way to design the database table
>> to support the following situation.
>>
>> To start, I have an Excel spreadsheet that maps particular combinations
>> of Goal and Size to an Outcome.  Goal choices are "Long", "Average", and
>> "Short".  Size choices are "Big", "Medium", and "Small".  The designated
>> Outcome for each goal/size combination are number values between 12 and
>> 20.  Please refer to attachment "goalSizeExcel.pdf" for the Excel
>> spreadsheet version.
>>
>> In order to use this data in the database, I converted it to an SQL
>> table with attributes "Goal", "Size", and "OUTCOME".  "Goal" and "Size"
>> serve as composite primary keys.  Please refer to attachment
>> "TableFinal.pdf" for the illustration.
>>
>> Please refer to "UserOutcome.jpg" for the ER diagram.  The user inputs
>> his name, goal, and size.  Based on his goal and size combination, he is
>> assigned a particular "outcome".
>>
>> I am not exactly sure if my attached ER diagram is the correct way to
>> model this.  I don't want to add a UserId [FK] to table Final because
>> table Final is supposed to serve as a lookup or reference table (I am
>> not sure of the correct terminology).
>>
>> Please advise if I am on the right track or if I should follow a
>> different design.  I intend to have a few other lookup/reference tables
>> that will serve a similar purpose.
>>
>
> From a quick look it seems to me that outcome is the primary key to goal
> and size, so
>
> CREATE TABLE final (
>   outcome int PRIMARY KEY,
>   goal varchar,
>   size varchar
> )
>
> CREATE TABLE user (
> name varchar,
> outcome_fk int REFERENCES final(outcome) ON ...
> )
>
>
>
>> Thank you!
>> Aaron
>>
>>
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] Designing tables based on user input and defined values

2016-02-27 Thread Aaron Christensen
Hello,

I am trying to figure out the correct way to design the database table to
support the following situation.

To start, I have an Excel spreadsheet that maps particular combinations of
Goal and Size to an Outcome.  Goal choices are "Long", "Average", and
"Short".  Size choices are "Big", "Medium", and "Small".  The designated
Outcome for each goal/size combination is a number between 12 and
20.  Please refer to attachment "goalSizeExcel.pdf" for the Excel
spreadsheet version.

In order to use this data in the database, I converted it to an SQL table
with attributes "Goal", "Size", and "OUTCOME".  "Goal" and "Size" form a
composite primary key.  Please refer to attachment "TableFinal.pdf" for
the illustration.

Please refer to "UserOutcome.jpg" for the ER diagram.  The user inputs his
name, goal, and size.  Based on his goal and size combination, he is
assigned a particular "outcome".

I am not exactly sure if my attached ER diagram is the correct way to model
this.  I don't want to add a UserId [FK] to table Final because table Final
is supposed to serve as a lookup or reference table (I am not sure of the
correct terminology).

Please advise if I am on the right track or if I should follow a different
design.  I intend to have a few other lookup/reference tables that will
serve a similar purpose.

Thank you!
Aaron


goalSizeExcel.pdf
Description: Adobe PDF document


TableFinal.pdf
Description: Adobe PDF document



[GENERAL] Upgrading hot standbys

2015-04-28 Thread Aaron Burnett

Greetings,


I'm in the process of upgrading PG 9.1.3 to 9.4.1 in the near future.


I have several machines which each house unique databases. Each of those are 
replicated to a standby server with matching configurations. A total of 10 
servers, 5 masters, 5 slaves. Everything runs on Ubuntu.


My question, as I can't seem to find any documentation on this part: once I
successfully upgrade a master I will also need to upgrade its standby.
Will I have to rebuild the standby from scratch, or will the standby pick up
where it was before the upgrade if I do things correctly?


Thanking you in advance,



Aaron
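For what it's worth, later versions of the pg_upgrade documentation describe an rsync shortcut for exactly this case, so a full rebuild can often be avoided. Very roughly (the binary and data-directory paths below are Ubuntu-style placeholders, not the poster's actual layout):

```shell
# On the master, with both old and new clusters stopped:
# run pg_upgrade in link mode, but do NOT start the new cluster yet.
pg_upgrade -b /usr/lib/postgresql/9.1/bin -B /usr/lib/postgresql/9.4/bin \
           -d /var/lib/postgresql/9.1/main -D /var/lib/postgresql/9.4/main \
           --link

# Then rsync both data directories to the standby; --size-only plus the
# hard links created by --link make this transfer small and fast.
rsync --archive --delete --hard-links --size-only --no-inc-recursive \
      /var/lib/postgresql/9.1 /var/lib/postgresql/9.4 \
      standby:/var/lib/postgresql
```

Treat this as a pointer to the documented procedure rather than a recipe; the real steps (verifying the standby is caught up first, handling configuration files, tablespaces) matter and are spelled out in the pg_upgrade reference page.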

dunnhumby limited is a limited company registered in England and Wales with 
registered number 02388853 and VAT registered number 927 5871 83. Our 
registered office is at Aurora House, 71-75 Uxbridge Road, London W5 5SL. The 
contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender. dunnhumby may 
monitor and record all emails. The views expressed in this email are those of 
the sender and not those of dunnhumby.


[GENERAL] question on writing a function

2014-01-22 Thread Aaron Burnett

Greetings all,

I *think* there's a way to accomplish what I want, but it is eluding me at
this time. Any help or pointers will be greatly appreciated.

What I am trying to accomplish is that when a member deactivates their
account through the UI, my trigger will call a function that deletes or
archives their data.  The issue is that we don't want the UI to have to
wait for this process, given the huge amount of data and the number of
tables the queries will have to go through, so rather than doing it
programmatically through the UI I want it to happen in the background.

The problem is this (and I may just not be thinking clearly): Everything
is tied back to the member's ID, which is the constant throughout the
tables that I will need to manipulate. My trigger will call the (to be
written) function that will do all the heavy lifting when the
active_status changes from 'a' to 'x', but I somehow need to get the ID of
that member and pass it on to the function so it can then do all that it
needs to.

For simplicity, the member table is just a few columns (id, active_status,
name). How would I grab the ID from that table the moment the trigger
fires to be able to pass it to my function?

I hope that's clear.

Thanking you in advance for any help.

Aaron
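In a row-level trigger the row being changed is available directly as OLD/NEW, so the member's ID never needs to be looked up: it is `NEW.id`. A sketch using LISTEN/NOTIFY to hand the ID to a background worker (the channel name `member_deactivated` and the table name `member` are assumptions; the worker that LISTENs and does the heavy deletes is not shown):

```sql
CREATE OR REPLACE FUNCTION on_member_deactivate() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF OLD.active_status = 'a' AND NEW.active_status = 'x' THEN
        -- NEW.id is the deactivating member's ID; notify a background
        -- worker instead of running the big deletes inside the UI's
        -- transaction.
        PERFORM pg_notify('member_deactivated', NEW.id::text);
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER member_deactivate_trg
    AFTER UPDATE OF active_status ON member
    FOR EACH ROW EXECUTE PROCEDURE on_member_deactivate();
```

A separate process then runs `LISTEN member_deactivated;` and performs the archive/delete work for each ID it receives, outside the user-facing request.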






Re: [GENERAL] Question About Serializable

2013-09-18 Thread Aaron Carlisle
No particular effort; I saw a talk on the topic. I said it's academic
because I can't think of any real world example where this would matter
(the other definition of the word "academic").

On Thu, Sep 19, 2013 at 12:44 AM, Kevin Grittner  wrote:

> Aaron Carlisle  wrote:
>
> > The documentation states that "concurrent execution of a set of
> > Serializable transactions is guaranteed to produce the same
> > effect as running them one at a time in some order."
> >
> > I'm not sure how the following behavior fits that definition.
> > (Note that this is just an experiment, not a use case. Purely
> > academic.) I run these transactions sequentially, and I get a
> > different result than if I run them concurrently.
>
> > It seems like one of these should error out and not commit, so I
> > must be missing some stipulation.
>
> > [ write skew in pg_class with table creation and check for table
> > existence in two concurrent queries ]
>
> You are correct, and not missing anything -- when serializable
> behavior was added (and through 9.3) the system catalogs were not
> fully transactional using the MVCC semantics.  I'm afraid there
> were many strange things that could happen with the system
> catalogs, including having an object which was being updated be
> missed or seen twice by a concurrent transaction, none of which was
> documented.  Since the serializable techniques in PostgreSQL are
> based on the MVCC snapshot isolation techniques, it was not
> possible to cover DDL in the serializable implementation using
> Serializable Snapshot Isolation.
>
> I assume you can't create such problems in a stable schema; but
> only when serializable transactions include DDL?
>
> On the 9.4 development branch access to the system catalogs has
> recently been made to use MVCC techniques, so it may be possible to
> extend serializable behavior to the catalogs in 9.4 or later.
>
> I don't suppose you want to help develop a patch for that?  :-)
>
> Out of curiosity, is there a particular academic effort this
> question came out of that you can talk about?
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


[GENERAL] Question About Serializable

2013-09-18 Thread Aaron Carlisle
The documentation states that "concurrent execution of a set of
Serializable transactions is guaranteed to produce the same effect as
running them one at a time in some order."

I'm not sure how the following behavior fits that definition. (Note that
this is just an experiment, not a use case. Purely academic.) I run these
transactions sequentially, and I get a different result than if I run them
concurrently.

This is in 9.3.0

First, I set up a table.

create table x (value int);

Then I run the following transactions. If I run them sequentially, in
either order, I get one row in table x. If I run them concurrently, I get
no rows in x.

It seems like one of these should error out and not commit, so I must be
missing some stipulation.

Feel free to repeat this result.

=
begin;

set transaction isolation level serializable;

create table z ();

select pg_sleep(5);

insert into x (value)
  select 0
  where exists (select relname from pg_class
where relname = 'y')
and exists (select relname from pg_class
where relname = 'z');
commit;

=
begin;

set transaction isolation level serializable;

create table y ();

select pg_sleep(5);

insert into x (value)
  select 0
  where exists (select relname from pg_class
where relname = 'y')
and exists (select relname from pg_class
where relname = 'z');
commit;


Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-24 Thread Aaron Abreu
a NON-technical version...

st.procedures and automation are great...

but...
sounds like everybody is dancing around the main theme..
so let's say it:
that dreaded word that developers and DBAs cringe to hear...
the one part of our job that we all hate...

DOCUMENTATION !

My worst fear is simply this...
having to fix something somebody else wrote.. and they
are not there anymore. and the only documentation is the code itself..
been there... on a few occasions just had to write something new...






On Tue, Jul 23, 2013 at 7:29 PM, Some Developer
wrote:

> I've done quite a bit of reading on stored procedures recently and the
> consensus seems to be that you shouldn't use them unless you really must.
>
> I don't understand this argument. If you implement all of your logic in
> the application then you need to make a network request to the database
> server, return the required data from the database to the app server, do
> the processing and then return the results. A stored procedure is going to
> be a lot faster than that even if you just take away network latency /
> transfer time.
>
> I'm in the middle of building a database and was going to make extensive
> use of stored procedures and trigger functions because it makes more sense
> for the actions to happen at the database layer rather than in the app
> layer.
>
> Should I use them or not?
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general<http://www.postgresql.org/mailpref/pgsql-general>
>



-- 
Aaron Abreu, Systems Consultant
Bay District Schools, Panama City, FL
Office: (850) 767-4288
>>FOCUS Student system support
>>IRIS phone alert system support
abre...@bay.k12.fl.us



Re: [GENERAL] odd intermittent query hanging issue

2012-05-18 Thread Aaron Burnett
Thanks Steve,

Answers are inserted below:


On 5/18/12 11:09 AM, "Steve Crawford" 
wrote:

>On 05/18/2012 09:17 AM, Aaron Burnett wrote:
>> Greetings,
>>
>> I run a handful of queries overnight when traffic is at its lowest on
>>our
>> system. One particular query will run perfectly fine (around 5 seconds)
>> for several weeks, then suddenly decide to hang indefinitely and never
>> finish. It needs to be killed manually after several hours (I've
>> intentionally let it run to see if it would ever finish) in order for it
>> to die.
>>
>> The fix _seems to be_ to drop and rebuild the index on
>> xrefchannelmember.member_id. The query then goes back to running in the
>>5
>> seconds and has no problem again for weeks until it happens again.
>>
>> Has anyone heard of such a thing? And if anyone can maybe point me in
>> directions to investigate, it would be much appreciated.
>
>Few answers but several questions...
>
>Is the machine busy processing the query or is it idle?


It is processing and in fact drives the load up a bit.

>
>Does it start happening consistently or is it an individual query. I.e.
>if you run the query again will it hang or complete? Can you get a query
>plan when the query is failing?

The query will run fine many times a night for many nights, then it will
hang. Killing that query and running it again results in the same hang
indefinitely. The only fix so far to get the query to run again is to drop
and rebuild the xrefcampaignmember.memberid index (I misquoted which index
earlier) and then it will once again run fine for many, many days.


>
>Are there any other queries that may be locking your tables (check
>pg_locks)?

I did check, and no, no locks.


>
>Anything of interest in postgresql or system logs?

Nope.

>
>Autovacuum running properly? Are you running any large
>updates/deletes/etc just prior to the query such that the  statistics
>the planner is using do not reflect reality?

No large updates or deletes to any of the tables involved in the query.
Statistics seems just fine.


>
>Does the network connection to the server still exist?

Yes.


>
>Any other weirdness happening on the machine?

Nothing at all. 
>
>
>> select distinct(id) from member left join xrefchannelmember xrcm on
>> id=xrcm.member_id where id not in (Select memberid from
>> xrefcampaignmember) and xrcm.channel_id in (1)  order by id asc;
>BTW, I have an un-substantiated gut feeling that this query has room for
>improvement. I would experiment with substituting "...where not exists
>(select..." for the "...not in (select...". You could also try using
>"select. except select memberid from xrefcampaignmember..."
>
>I'm also not sure I understand the left join since you have the
>"xcrm.channel_id in (1)" condition so you won't return records in
>"member" without a corresponding record in xrefchannelmember anyway.
>
>I also don't understand the "xcrm.channel_id in (1)" instead of
>"xcrm.channel_id = 1" unless this is a generated query and there could
>be multiple ids in that condition.
>
>Will one of the following (untested) queries return the results you want
>and have a better query plan?:
>
>select distinct(id) from member
>join xrefchannelmember xrcm on id=xrcm.member_id
>where xrcm.channel_id=1
>except
>select memberid from xrefcampaignmember
>order by id asc;
>
>(actually with except the distinct is probably superfluous as well)
>
>or:
>
>select distinct(id) from member
>join xrefchannelmember xrcm on id=xrcm.member_id
>where xrcm.channel_id=1
>and not exists (select 1 from xrefcampaignmember x where x.memberid =
>member.id)
>order by id asc;
>
>or:
>
>select distinct(id) from member
>where
>exists (select 1 from xrefchannelmember where member_id = member.id and
>channel_id=1)
>and not exists (select 1 from xrefcampaignmember where memberid =
>member.id)
>order by id asc;

Yeah, the query is poo... autogenerated... the LEFT JOIN is not needed as
I have pointed out to the person responsible for the code many times, and
the 'in(1)' may indeed have many categories in there. But the ONLY one that
hangs is the 'in(1)'.


>
>Cheers,
>Steve
>




[GENERAL] odd intermittent query hanging issue

2012-05-18 Thread Aaron Burnett

Greetings,

I run a handful of queries overnight when traffic is at its lowest on our
system. One particular query will run perfectly fine (around 5 seconds)
for several weeks, then suddenly decide to hang indefinitely and never
finish. It needs to be killed manually after several hours (I've
intentionally let it run to see if it would ever finish) in order for it
to die.

The fix _seems to be_ to drop and rebuild the index on
xrefchannelmember.member_id. The query then goes back to running in the 5
seconds and has no problem again for weeks until it happens again.
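Incidentally, the manual drop-and-rebuild can be done in one step; REINDEX locks the parent table against writes while it runs, so it is best kept to the same off-hours window:

```sql
REINDEX INDEX xcm_memberid_idx;
```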

Has anyone heard of such a thing? And if anyone can maybe point me in
directions to investigate, it would be much appreciated.

Postgresl v 9.1.3
Ubuntu v 11.10 
Linux db 3.0.0-16-generic #29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012
x86_64 x86_64 x86_64 GNU/Linux

Query, explain plan (when it works) and table structure below:

(Thanking you in advance for any help)

select distinct(id) from member left join xrefchannelmember xrcm on
id=xrcm.member_id where id not in (Select memberid from
xrefcampaignmember) and xrcm.channel_id in (1)  order by id asc;

explain plan:
--
 Unique  (cost=237234.66..239267.33 rows=406533 width=4) (actual
time=4790.823..4922.621 rows=418843 loops=1)
   ->  Sort  (cost=237234.66..238251.00 rows=406533 width=4) (actual
time=4790.818..4826.201 rows=418879 loops=1)
 Sort Key: member.id
 Sort Method: quicksort  Memory: 31923kB
 ->  Hash Join  (cost=167890.31..199359.99 rows=406533 width=4)
(actual time=3357.406..4532.952 rows=418879 loops=1)
   Hash Cond: (xrcm.member_id = member.id)
   ->  Seq Scan on xrefchannelmember xrcm  (cost=0.00..19273.69 rows=813066 width=4) (actual time=0.015..219.259 rows=814421 loops=1)
 Filter: (channel_id = 1)
   ->  Hash  (cost=162586.21..162586.21 rows=424328 width=4)
(actual time=3357.001..3357.001 rows=444626 loops=1)
 Buckets: 65536  Batches: 1  Memory Usage: 15632kB
 ->  Seq Scan on member  (cost=66114.02..162586.21 rows=424328 width=4) (actual time=2357.280..3216.076 rows=444626 loops=1)
   Filter: (NOT (hashed SubPlan 1))
   SubPlan 1
 ->  Seq Scan on xrefcampaignmember  (cost=0.00..57931.82 rows=3272882 width=4) (actual time=0.021..670.086 rows=3272870 loops=1)
 Total runtime: 4963.134 ms

Table structure:

            Table "public.xrefcampaignmember"
    Column    |           Type           |                     Modifiers
--------------+--------------------------+----------------------------------------------------
 campaignid   | integer                  | not null
 memberid     | integer                  | not null
 joined       | timestamp with time zone | default ('now'::text)::timestamp without time zone
 reservedslot | integer                  | default 0
Indexes:
"XrefCampaignMember_pkey" PRIMARY KEY, btree (campaignid, memberid)
"xcm_campaignid_idx" btree (campaignid)
"xcm_joined_idx" btree (joined)
"xcm_memberid_idx" btree (memberid)
"xcm_reservedslot_idx" btree (reservedslot)









Re: [GENERAL] Pg 9.1.3 pg_crypto question

2012-04-10 Thread Aaron Burnett

Thanks, Tom. That was exactly it.

Best Regards
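For anyone hitting the same thing later: the value decrypted fine, it is just displayed as hex bytea. A sketch of two session-level ways to get readable text back (assuming the data is UTF-8):

```sql
-- option 1: show bytea in the pre-9.0 escaped style for this session
SET bytea_output = 'escape';

-- option 2: convert the decrypted bytea back to text explicitly
SELECT convert_from(
           decrypt_iv(decode('rkMRWpnnbjaFoHyLmCD/bg==', 'base64'),
                      decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'),
                      '', 'aes-cbc'),
           'utf8');
```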


On 4/10/12 2:50 PM, "Tom Lane"  wrote:

>Aaron Burnett  writes:
>> 9.1.3 is just not decrypting nor throwing errors.
>
>> 9.1.3# select decrypt_iv(decode('rkMRWpnnbjaFoHyLmCD/bg==', 'base64'),
>> decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'), '',
>> 'aes-cbc');
>> decrypt_iv
>> --
>> \x48656c6c6f205468657265
>> (1 row)
>
>I think this is the same result, it's just being shown in hex.
>See the bytea_output configuration parameter.
>
>   regards, tom lane




[GENERAL] Pg 9.1.3 pg_crypto question

2012-04-10 Thread Aaron Burnett

Hi,

Finally upgrading from 8.2.5 to 9.1.3 (got the latest release as of
3/12/2012)

OS: Ubuntu 11.10

The only issue I am encountering is with pgcrypto's decrypt_iv/decode.
No errors in the log, but here's what I am seeing on both 8.2.5 and 9.1.3;
I am hoping someone can help me out here:

8.2.5#
8.2.5=# select encode(encrypt_iv(text2bytea('Hello There'),
decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'), '',
'aes-cbc'), 'base64');
  encode  
--
rkMRWpnnbjaFoHyLmCD/bg==
(1 row)

8.2.5=# select decrypt_iv(decode('rkMRWpnnbjaFoHyLmCD/bg==', 'base64'),
decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'), '',
'aes-cbc');
decrypt_iv  
-
Hello There
(1 row)

So, the 8.2.5 is working as it always has.

9.1.3 is just not decrypting nor throwing errors.


9.1.3#
9.1.3# select encode(encrypt_iv(text2bytea('Hello There'),
decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'), '',
'aes-cbc'), 'base64');
  encode  
--
rkMRWpnnbjaFoHyLmCD/bg==
(1 row)

9.1.3# select decrypt_iv(decode('rkMRWpnnbjaFoHyLmCD/bg==', 'base64'),
decode('bcRJvbqeWMPDXMtIP8pPOQ==', 'base64'), '',
'aes-cbc');
decrypt_iv
--
\x48656c6c6f205468657265
(1 row)




Thanking you in advance,

Aaron





Re: [GENERAL] Questions of the privileges to use the pg_cancel_backend and pg_terminate_backend function. Thanks.

2012-04-04 Thread Aaron
When I need to give other users access to a function that someone must
be superuser to execute I write a security definer function.
See: http://www.postgresql.org/docs/9.1/static/sql-createfunction.html
Also: 
http://www.ibm.com/developerworks/opensource/library/os-postgresecurity/index.html
  Using the security definer

Think of it like sudo for a db.
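A minimal sketch of such a wrapper — the role name app_dba is an assumption, and the function must be created by a superuser for SECURITY DEFINER to confer that privilege:

```sql
CREATE OR REPLACE FUNCTION cancel_backend(target_pid integer)
RETURNS boolean AS $$
    SELECT pg_cancel_backend($1);  -- runs with the owner's (superuser's) rights
$$ LANGUAGE sql SECURITY DEFINER;

-- lock it down: only the intended role may call it
REVOKE ALL ON FUNCTION cancel_backend(integer) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION cancel_backend(integer) TO app_dba;
```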


Aaron Thul
http://www.chasingnuts.com



On Wed, Apr 4, 2012 at 8:39 AM, leaf_yxj  wrote:
> Hi Guys. I got one problem. I need to give some non-superusers (kind
> of DBAs) the privilege
> to cancel other users' queries and DML.  After I granted execute on
> the pg_cancel_backend and pg_terminate_backend functions to them, they still get
> the following error message when they call these two functions:
>
> ERROR : must be superuser to signal other server processes.
>
> QUestion : is it possible to make the non superuser to have these two
> privileges??
>
> Thanks.
>
> Regards.
>
> Grace
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/Questions-of-the-privileges-to-use-the-pg-cancel-backend-and-pg-terminate-backend-function-Thanks-tp5618129p5618129.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] What filesystem to use for postgres?

2010-09-14 Thread Aaron
Some more current numbers can be found here:
http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide


Aaron Thul
http://www.chasingnuts.com



On Tue, Sep 14, 2010 at 4:00 AM, A B  wrote:
> Hello.
>
> Is there any doc or wiki page that describes what filesystems that are
> recomended to  use (OS is Linux) for PostgreSQL?
> Information about  filesystems options/mount options   and how well
> they work with different RAID setups is also of interest.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



[GENERAL] Some insight on the proper SQL would be appreciated

2010-06-08 Thread Aaron Burnett

Greetings,

I hope this is the proper list for this, but I am a loss on how to achieve
one particular set of results.

I have a table which is a list of users who entered a contest. They can
enter as many times as they want, but only 5 will count. So some users have
one entry, some have as many as 15.

How could I distill this down further to give me a list that shows each
entry per user up to five entries per user? In other words, I need a
separate line item for each entry from each user up to the maximum of 5 rows
per user.

Table looks like this:
  username   | firstname |  lastname   |  signedup
--------------------------------------+-----------+-------------+------------
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-03-13
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-05-07
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-06-06
 ff8720d4-f808-4ee6-90e6-649872fdfa55 | Janis | Bunch   | 2010-03-12
 ff48d89b-6afe-4ba0-8539-9683bec62c89 | Sandy | Yashnyk | 2010-04-25
 ff48d2a4-0637-4e46-9461-3dd0dc0d4d83 | Kendra| Elliott | 2010-05-09
 ff3a2a14-2575-44d5-b40b-3780f4d2506a | Kay   | Maher   | 2010-04-20
 fef7625b-d1e1-4c1a-bc82-d35a4a6db16d | Carolyn   | Woodul  | 2010-04-05
 fef21bbb-07a0-4c84-8708-4dc41b8b770b | Laurie| Montijo | 2010-04-03
 feee6473-af4d-4e70-b20c-a74ba08c000f | Geneva| Anderson| 2010-04-03
 feb690fc-0afb-4e87-b0d1-bdb2c4603fd1 | Judith| Astroff | 2010-06-05
 fea0f9a6-e89f-4dbd-b3fd-83efed27221f | Jennifer  | Lavigne | 2010-02-09
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-03-20
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-03-27
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-03
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-10
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-17
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-25
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-05-01
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-05-08
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-05-16
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-05-22
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-05-30
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-06-06
 fe825a6d-6820-4282-b8e9-2e3cb9f660e8 | Susan | Stowe   | 2010-03-12
 fe825a6d-6820-4282-b8e9-2e3cb9f660e8 | Susan | Stowe   | 2010-03-15

But in John Smith's case where he has more than 5 entries, I would like
query results to limit him to just 5 entries to look like this:

  username   | firstname |  lastname   |  signedup
--------------------------------------+-----------+-------------+------------
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-03-13
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-05-07
 ffc4bd0a-ccff-4203-a03c-4bd6b8c23a03 | Jean  | Mathews | 2010-06-06
 ff8720d4-f808-4ee6-90e6-649872fdfa55 | Janis | Bunch   | 2010-03-12
 ff48d89b-6afe-4ba0-8539-9683bec62c89 | Sandy | Yashnyk | 2010-04-25
 ff48d2a4-0637-4e46-9461-3dd0dc0d4d83 | Kendra| Elliott | 2010-05-09
 ff3a2a14-2575-44d5-b40b-3780f4d2506a | Kay   | Maher   | 2010-04-20
 fef7625b-d1e1-4c1a-bc82-d35a4a6db16d | Carolyn   | Woodul  | 2010-04-05
 fef21bbb-07a0-4c84-8708-4dc41b8b770b | Laurie| Montijo | 2010-04-03
 feee6473-af4d-4e70-b20c-a74ba08c000f | Geneva| Anderson| 2010-04-03
 feb690fc-0afb-4e87-b0d1-bdb2c4603fd1 | Judith| Astroff | 2010-06-05
 fea0f9a6-e89f-4dbd-b3fd-83efed27221f | Jennifer  | Lavigne | 2010-02-09
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-03-20
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-03-27
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-03
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-10
 fea0334e-4ae1-4599-b24d-9a5d8ce4fd37 | John  | Smith   | 2010-04-17
 fe825a6d-6820-4282-b8e9-2e3cb9f660e8 | Susan | Stowe   | 2010-03-12
 fe825a6d-6820-4282-b8e9-2e3cb9f660e8 | Susan | Stowe   | 2010-03-15

The username is unique for each user.

pg version 8.2.5 on RHEL

Any help in this would be greatly appreciated.

Thank you.
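A common shape for this "top N per group" problem — hedged sketches, with the table name assumed to be entries. The window-function form needs PostgreSQL 8.4+, so on 8.2.5 the correlated-count fallback applies (it assumes signedup is distinct per user; ties would need a tiebreaker):

```sql
-- 8.4+: number each user's entries and keep the first five
SELECT username, firstname, lastname, signedup
FROM (
    SELECT *, row_number() OVER (PARTITION BY username
                                 ORDER BY signedup) AS rn
    FROM entries
) t
WHERE rn <= 5;

-- 8.2-compatible fallback: keep rows with fewer than five earlier entries
SELECT username, firstname, lastname, signedup
FROM entries e
WHERE (SELECT count(*) FROM entries e2
       WHERE e2.username = e.username
         AND e2.signedup < e.signedup) < 5;
```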




[GENERAL] Connect RDF to PostgreSQL?

2010-02-02 Thread Aaron
I am presenting on PostgreSQL tonight and someone e-mailed me with a
question before my talk so I might be able to find him an answer:

"We are starting a new, large project that uses an ontology based (
RDF - Resource Description Framework
http://en.wikipedia.org/wiki/Resource_Description_Framework ) approach
to organize data, which provides a lot of flexibility. The only viable
database we have found that supports this is Oracle, who have tightly
integrated their database interface to efficiently support sparql (
http://en.wikipedia.org/wiki/SPARQL ) queries. Other databases seem to
only interface to the open source jena technology, which fails to
scale to larger data sets as it tries to read huge chunks of data into
memory.

We would prefer to go with an open source option, but can't. Do you
know of a software package that can efficiently connect RDF to
PostgreSQL?"

Aaron Thul
http://www.chasingnuts.com



Re: [GENERAL] Creation of tablespaces

2010-01-15 Thread Aaron
Ubuntu never shipped with selinux, it is available by installing the
"selinux" meta-package.  Ubuntu does ship with AppArmor and loaded by
default in Hardy 8.04 and beyond but I don't believe there are any
PotgreSQL profiles.


Aaron Thul
http://www.chasingnuts.com



On Fri, Jan 15, 2010 at 10:51 AM, Tom Lane  wrote:
> =?iso-8859-1?Q?DURAND_Beno=EEt?=  writes:
>> I work with Ubuntu 9.10 (upgraded from 8.04 LTS yesterday) and PostgreSQL 
>> 8.3.
>> I can't create tablespaces. Pgsql seems to try changing access rights of the 
>> directory and fails to do it (permission denied), despite the directory is 
>> owned by the postgres user
>
> Are you sure that postgres has r+x rights on all the directories above
> that one?
>
> If this were a Red Hat distro I would also wonder about selinux
> permissions, but I don't know whether Ubuntu has selinux or enables
> it by default.
>
>                        regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



Re: [GENERAL] Updating column on row update

2009-11-22 Thread Aaron Burnett

this is how I do it if this helps:

column_name timestamp without time zone NOT NULL DEFAULT 
('now'::text)::timestamp(6) without time zone



-Original Message-
From: pgsql-general-ow...@postgresql.org on behalf of Thom Brown
Sent: Sun 11/22/2009 2:50 PM
To: PGSQL Mailing List
Subject: [GENERAL] Updating column on row update
 
Hi,

This should be simple, but for some reason I'm not quite sure what the
solution is.  I want to be able to update the value of a column for rows
that have been updated.  More specifically, if a row is updated, I want its
modified_date column to be populated with the current time stamp.  I've
looked at triggers and rules, and it looks like I'd need to create a
function just to achieve this which seems incredibly clumsy and unnecessary.
 Could someone enlighten me?

Thanks

Thom



Re: [GENERAL] [lapug] LAPUG Social - Tuesday Evening July 14

2009-07-11 Thread Aaron Glenn
Depending on the details, I'd love to attend. Any more specifics, by chance?

On 7/11/09, Richard Broersma  wrote:
> Josh Berkus and David Fetter will be in West Los Angeles area the
> beginning of next week.   This would be a good opportunity to get
> together to enjoy dinner and drinks with PostgreSQL's community
> leaders.
>
> Please reply to the LAPUG mailing list if you would like to attend.
>
> --
> Regards,
> Richard Broersma Jr.
>
> Visit the Los Angeles PostgreSQL Users Group (LAPUG)
> http://pugs.postgresql.org/lapug
>
> --
> Sent via lapug mailing list (la...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/lapug
>



Re: [GENERAL] GiST or GIN, I feel like I am doing something wrong

2009-06-17 Thread Aaron
Tom,

Our maintenance_work_mem is 1024MB so there should have been plenty of
memory for INDEX creation.  I happened to be watching top when we
created the GIN INDEX and the process used about 500MB of non-shared
memory.
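For completeness, the build workspace can also be raised for just the session doing the build — a sketch:

```sql
-- raise the index-build workspace only for this session
SET maintenance_work_mem = '1GB';
CREATE INDEX profile_images_fulltext_gin
    ON profile_images_fulltext USING gin (content);
RESET maintenance_work_mem;
```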

Aaron Thul
http://www.chasingnuts.com



On Wed, Jun 17, 2009 at 11:30 AM, Tom Lane wrote:
> Aaron  writes:
>> CREATE INDEX profile_images_fulltext_gin ON profile_images_fulltext
>> USING gin(content);
>> CREATE INDEX profile_images_fulltext_gist ON profile_images_fulltext
>> USING gist(content);
>
> What did you have maintenance_work_mem set to while you did this?
> GIST doesn't care, but GIN likes to have lots of workspace while
> building an index.  I'm not entirely sure if small workspace only
> affects build time, or if it could result in a bloated/inefficient
> index ... but if the latter is true it might explain your results.
>
>                        regards, tom lane
>



[GENERAL] GiST or GIN, I feel like I am doing something wrong

2009-06-17 Thread Aaron
We are testing full text searching on a small chunk of our data. We
have created an INDEX to make searching faster.  From the PostgreSQL
8.3 docs (we are running 8.3.7), it seems we should be using GIN
indexes.  The reasons GIN seems like the right INDEX on paper:
* we have static data
* we have over 241071 unique words (lexemes)
* GIN index lookups are about three times faster and we are 99.9% searching

The problem is that we have been testing with both INDEX types and
GiST is killing GIN.  I believe it has to do with the size of our GIN
index.
SELECT * from relation_size where relation like '%full%';
  relation  |  size
+
 public.profile_images_fulltext_gin | 437 MB
 public.profile_images_fulltext | 161 MB
 public.profile_images_fulltext_gist| 66 MB
 public.profile_images_fulltext_pif_key_key | 18 MB
(4 rows)

So my questions...
Why is the GIN index so large?
Would the large size likely affect performance?
Am I doing something fundamentally wrong?
Yes, I was sure to ANALYZE public.profile_images_fulltext between all
my INDEX DROPs and CREATEs

More details:
owl=# \d profile_images_fulltext
 Table "public.profile_images_fulltext"
  Column  |Type |   Modifiers
--+-+---
 pif_key  | bigint  | not null
 content  | tsvector|
 datetime_created | timestamp without time zone | default now()
 raw  | text|

owl=# SELECT count (pif_key) from public.profile_images_fulltext;
 count

 630699
(1 row)

owl=# SELECT count(word) FROM ts_stat('SELECT content FROM
profile_images_fulltext');
 count

 241071
(1 row)

CREATE INDEX profile_images_fulltext_gin ON profile_images_fulltext
USING gin(content);
CREATE INDEX profile_images_fulltext_gist ON profile_images_fulltext
USING gist(content);



Any and all thoughts would be greatly appreciated,
Aaron Thul
http://www.chasingnuts.com
Life is complex: it has real and imaginary components.



[GENERAL] custom stemming in full text search

2009-04-14 Thread Aaron Patterson
Hello,

I'm trying to use full text search against text that contains lots of
technical terms, and I'd like to add some custom stemming.  I have many
occurrences of "libblah" or "blahtool", and I'd like to remove the "lib"
and "tool" terms since they are meaningless search terms.

Is there an easy way to accomplish this?  I read through the text search
sections of the documentation and this website:

  http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/custom-dict.html

Is there an easier way?

Thanks in advance.
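One low-tech alternative to a custom dictionary is to normalize the text before it reaches to_tsvector — a sketch that strips a leading "lib" and a trailing "tool" (\m and \M are PostgreSQL's start/end-of-word regex constraints; with standard_conforming_strings off you would write E'\\mlib'). The table docs and its text column raw are assumed names:

```sql
SELECT to_tsvector('english',
           regexp_replace(
               regexp_replace(raw, '\mlib', '', 'g'),
               'tool\M', '', 'g'))
FROM docs;
```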

-- 
Aaron Patterson
http://tenderlovemaking.com/



Re: [GENERAL] postmaster never finishes starting up, silent to boot

2009-03-18 Thread Aaron Glenn
On Tue, Mar 17, 2009 at 11:27 PM, Scott Marlowe  wrote:
> Hard to say with what you've told us so far.

what more should I post/need? I was suspecting that as well as I've
never had postgres be silent and not work -- I've also never let a db
fill its disk and get f'ed like this. should I just let the pg_ctl
start run it's course? for a 35GB+ database how long should I wait? is
there no way to log the status of what the postgres daemon is actually
doing while I wait? what's the standard course of action for a
postgres instance that filled its disk and shut itself down -- if
there is one?

apologies for the admittedly amateur questions but I haven't been able
to find much for the situation I'm in.


thanks,
aaron



[GENERAL] postmaster never finishes starting up, silent to boot

2009-03-17 Thread Aaron Glenn
Greetings,

I've gotten myself in a pickle and had a postgresql (8.2) instance
fill its disk completely and shut itself down. I've moved the
entire data directory to a new, larger slice however postmaster never
finishes "starting". Despite configuring postgresql.conf for excessive
'verboseness', nothing gets written to syslog or the --log specified
file.  I have a feeling I'm just not hitting the right search terms,
but shouldn't I be able to simply start a fully copied data directory
without issue? at the very least I'd expect some kind of output to
some kind of log. I have to kill it with a 'pg_ctl stop -D /mnt/data
-m i' -- immediate is the only one that actually kills it; and I get
this in syslog:

Mar 17 22:36:49 rtg postgres[1879]: [8-1] WARNING:  terminating
connection because of crash of another server process
Mar 17 22:36:49 rtg postgres[1879]: [8-2] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server
Mar 17 22:36:49 rtg postgres[1879]: [8-3]  process exited abnormally
and possibly corrupted shared memory.
Mar 17 22:36:49 rtg postgres[1879]: [8-4] HINT:  In a moment you
should be able to reconnect to the database and repeat your command.
Mar 17 22:36:49 rtg postgres[1879]: [8-5] CONTEXT:  xlog redo zeropage: 16645

there are no other postgres instances running on this machine;
actually there is nothing else but the OS running on this machine.

Appreciate a cluebat hit.
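Not an answer, but a rough way to tell whether the postmaster is hung or just grinding through WAL redo after the crash (the pid below is an example; pick the recovery/startup child from the ps output):

```shell
# Find the startup process performing recovery:
ps auxww | grep postgres

# Is it actually doing I/O rather than hung?  On Linux:
strace -p 12345 -e trace=read,write,lseek
# (on FreeBSD/Solaris, truss -p 12345 serves the same purpose)
```

Steady reads against the WAL and data files mean redo is progressing and the safest course is usually to let it finish.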

thanks,
aaron.glenn



[GENERAL] iSCSI SAN Suggestions

2009-03-06 Thread Aaron
We are facing an immediate need to refresh and expand some of our
iSCSI storage we are running our PostgreSQL instance on.  I am open to
just about anything but I wondered if this group would have any
suggestions.  One item that is a bit unique to our situation, I am not
in the need of staggering performance so much as I am in the need for
very cost-effective large amounts of storage.  For more details on what
I am doing: http://www.slideshare.net/AaronThul/this-is-your-postgresql-on-drugs

At the moment I am looking at Sun Storage 7000 Unified Storage Systems
or a Dell Equallogic.  Would anyone like to suggest other options?
Has anyone had any problems with those vendors? Anyone like to share a
glowing review or success story?




Aaron Thul
http://www.chasingnuts.com



Re: [GENERAL] question on viewing dependencies

2009-02-22 Thread Aaron Burnett

Thanks Tom,

It was not suppressed for notice, so I changed it to 'debug1' and it gave me
the answers I was looking for.


On 2/22/09 6:07 PM, "Tom Lane"  wrote:

> Aaron Burnett  writes:
>> Hopefully a quick answer. Went to drop a table:
> 
>> drop table table_foo;
>> ERROR:  cannot drop table table_foo because other objects depend on it
>> HINT:  Use DROP ... CASCADE to drop the dependent objects too.
> 
>> Wanted to see what the dependencies were:
> 
>> BEGIN;
>> drop table table_foo CASCADE;
>> DROP TABLE
>> ROLLBACK;
> 
>> Am I overlooking a step to actually see the dependent objects?
> 
> Maybe you have client_min_messages set to suppress NOTICEs?
> 
> regards, tom lane
> 
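For the archives, the combination that makes the trial drop list its victims (assuming the default message level was overridden somewhere, as suggested above):

```sql
-- Make NOTICE-level messages visible again, then do the trial drop:
SET client_min_messages = notice;
BEGIN;
DROP TABLE table_foo CASCADE;   -- now emits "NOTICE: drop cascades to ..." lines
ROLLBACK;
```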




[GENERAL] question on viewing dependencies

2009-02-22 Thread Aaron Burnett

Hi,

postgresql version 8.2.5 running on RHEL4

Hopefully a quick answer. Went to drop a table:

drop table table_foo;
ERROR:  cannot drop table table_foo because other objects depend on it
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

Wanted to see what the dependencies were:

BEGIN;
drop table table_foo CASCADE;
DROP TABLE
ROLLBACK;

Am I overlooking a step to actually see the dependent objects?

Thanking you in advance,

Aaron







[GENERAL] getting elapsed query times

2009-01-03 Thread Aaron Burnett

Hi,

I think I am experiencing the "forest for the trees" type of scenario
here.

In a nightly cron I have a shell script that executes a couple of things for
our data warehouse.

I call it like this from the cron:

/home/postgres/DB1/sys/createDB1.sh >> /home/postgres/DB1/logs/createDB1.log
2>&1

within the script:

The first calls a function which essentially calls a handful of views.

psql -d DB1 -c 'select execute_function_foo();'

The second just calls a simple sql script.

psql -d DB1 -f /path/to/sql/script.sql

What I am trying to get is the elapsed time logged for each individual query
or view that the function calls, as though I entered "\timing" from within
psql.

like:

db1=# \timing
Timing is on.
db1=# select count(*) from table_foo;
  count  
-
 5232924
(1 row)

Time: 3248.064 ms 

except I need that to show up in the nightly log for each sql statement.

running postgresql 8.2.5 with RHEL 4


Thanking you in advance for any assistance.
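One sketch of getting per-statement timings into the nightly log: put psql's \timing meta-command at the top of the script, and use server-side duration logging for the statements hidden inside the function (settings shown are assumptions about what is acceptable in your environment):

```sql
-- At the top of /path/to/sql/script.sql: psql prints "Time: ... ms"
-- after each statement, and that output lands in createDB1.log.
\timing

-- \timing only sees statements psql itself sends, so the call to
-- execute_function_foo() is timed as a single statement.  To capture
-- durations in the server log as well, in postgresql.conf:
--   log_min_duration_statement = 0    -- 0 logs every statement's duration
-- (queries run *inside* a function are still reported as one function call)
```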





Re: [GENERAL] Storage location of temporary files

2008-10-31 Thread Aaron
I too have used a symlink for some time (years) to put temp onto
dedicated disks without any problems.  I am not sure if 8.3 is
different but I symlink the directory: base/pgsql_tmp


Aaron Thul
http://www.chasingnuts.com



On Fri, Oct 31, 2008 at 8:11 AM, Sam Mason <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 31, 2008 at 09:01:29AM +0100, Christian Schröder wrote:
>> So I would like
>> to use a faster disk for these temporary files, too, but I could not
>> find where the temporary files are located. Is there a separate
>> directory? I have found a "pgsql_tmp" directory inside of the database
>> directories ("base//pgsql_tmp"). Is this what I'm looking for?
>
> Yes, I believe it's accepted practice to replace this directory with a
> symlink.  I've done this before and it's behaved as I'd expect.
>
>
>  Sam
>
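The symlink approach described above, as a sketch (paths are examples; stop the server before moving anything):

```shell
pg_ctl -D "$PGDATA" stop

# Move the per-database temp area onto the faster disk and link it back:
mv "$PGDATA"/base/pgsql_tmp /fastdisk/pgsql_tmp
ln -s /fastdisk/pgsql_tmp "$PGDATA"/base/pgsql_tmp

pg_ctl -D "$PGDATA" start
```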



Re: [GENERAL] Storing questionnaire data

2008-10-25 Thread Aaron
You may want to look at this article for some ideas:
http://www.varlena.com/GeneralBits/110.php
The article talks about doing dynamic schema design for online
surveys... it's pretty interesting.

Aaron Thul
http://www.chasingnuts.com



On Wed, Oct 22, 2008 at 10:59 AM, Thom Brown <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Is there any optimal and generally agreed way to store questionnaire
> data in a database?
>
> The questionnaire would have to support both of the following:
>
> - different question types (e.g. What is your name? (free form text)
> Are you a smoker? (yes/no checkbox)  Are you male or female? (radio
> buttons)  Select which country you are from (drop-down box).)
>
> - multiple paths (e.g. if a user were asked what their primary mode of
> transport is and they answered "a motorbike" they would be asked if
> they carry pillion passengers and how many ccs the engine is, whereas
> if they said something like walking they would be asked how far they
> walk to work and how long it takes)
>
> I have previously had a questionnaire which had 5 tables, questions
> and answers and question types, questionnaire and results.
>
> questions
> ===
> id (serial) [PK]
> question (text)
> question_type (int)
>
> question_types
> ===
> id (serial) [PK]
> description (text)
>
> answers
> ==
> id (serial) [PK]
> answer (text)
> next_question_id (int) [FK to questions.id]
>
> questionnaire
> ==
> id (serial) [PK]
> questionnaire_date (timestamp)
>
> results
> =
> id (serial) [PK]
> questionnaire_id [FK to questionnaire.id]
> question_id (int) [FK to questions.id]
> answer_id (int)
> answer_text (text)
>
> If the question was for free form text, the answer_id would be 0,
> which seems a bit kludgey to me.  Plus because an answer ID can't be
> required due to free form text answers, I can't enforce a foreign key.
>
> Is there a nice elegant solution anyone knows of?
>
> Thanks
>
> Thom
>
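One way to avoid the answer_id = 0 kludge while keeping the foreign key enforceable is to use NULL for the absent side and a CHECK constraint; the exactly-one-of-two rule below is an assumption about the intended semantics:

```sql
CREATE TABLE results (
    id               serial PRIMARY KEY,
    questionnaire_id int  NOT NULL REFERENCES questionnaire(id),
    question_id      int  NOT NULL REFERENCES questions(id),
    answer_id        int  REFERENCES answers(id),  -- NULL for free-form answers
    answer_text      text,                         -- NULL for predefined answers
    -- exactly one of the two must be present:
    CHECK ((answer_id IS NULL) <> (answer_text IS NULL))
);
```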

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] large inserts and fsync

2008-09-06 Thread Aaron Burnett

Yeah, the backup is standard operating procedure before the start of the 
release process.

I'm going to try a few of the suggestions offered here first, then fall back on 
the fsync option if I can't get an appreciable increase in speed.


thanks all for the help and thoughts.


-Original Message-
From: Greg Smith [mailto:[EMAIL PROTECTED]
Sent: Sat 9/6/2008 4:45 AM
To: Tom Lane
Cc: Aaron Burnett; Sam Mason; pgsql-general@postgresql.org
Subject: Re: [GENERAL] large inserts and fsync 
 
On Fri, 5 Sep 2008, Tom Lane wrote:

> The trouble with turning fsync off is that a system crash midway through
> the import might leave you with a corrupt database.  If you're willing
> to start over from initdb then okay, but if you are importing into a
> database that already contains valuable data, I wouldn't recommend it.

If you have enough disk space, realistically if you're running with fsync 
off you should setup enough PITR features to get a base backup first, or 
just copy the database directory if you can take the server down a bit. 
Then your worst case becomes just starting over from that backup rather 
than initdb.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Aaron Burnett

Yes, the developer already made sure of that and I verified.


On 9/5/08 11:10 AM, "Sam Mason" <[EMAIL PROTECTED]> wrote:

> On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
>> For an upcoming release there is a 16 million row insert that on our test
>> cluster takes about 2.5 hours to complete with all indices dropped
>> beforehand.
>> 
>> If I turn off fsync, it completes in under 10 minutes.
> 
> Have you tried bundling all the INSERT statements into a single
> transaction?  If you haven't then PG will run each statement in its own
> transaction and then commit each INSERT statement to disk separately,
> incurring large overheads.
> 
> 
>   Sam
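For completeness, the single-transaction and COPY variants the thread keeps circling around, as a sketch (table and file names are examples):

```sql
-- One transaction means one flush at COMMIT instead of one per INSERT:
BEGIN;
INSERT INTO mytable VALUES (...);  -- all 16M inserts inside the transaction
COMMIT;

-- Usually faster still: a single COPY from a prepared data file
COPY mytable FROM '/path/to/data.csv' WITH CSV;
```

Either way the indexes should stay dropped until the load finishes, as already planned.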




[GENERAL] large inserts and fsync

2008-09-05 Thread Aaron Burnett

Forgive me if this is a many-times rehashed topic. I'm very new to
postgresql, most of my background is in Oracle.

Running postgres 8.2.5 with one master and three slaves (using slony)

For an upcoming release there is a 16 million row insert that on our test
cluster takes about 2.5 hours to complete with all indices dropped
beforehand.

If I turn off fsync, it completes in under 10 minutes.

Other than the protection that fsync will allow me should there be a crash
in the middle of such a process, my thinking was to turn off fsync for this
part of the release, get the insert done, then restart fsync once the insert
is complete.

Am I opening myself up to any dangers that aren't obvious by doing this? Any
advice to the contrary? And of course, if I am out of my mind for doing
this, please let me know.

Thanks in advance

Aaron


Re: [GENERAL] Assistance with SQL

2008-09-01 Thread Aaron Burnett

Thank you.

I was making it way too over-complicated.

Works perfectly


On 9/1/08 3:39 AM, "hubert depesz lubaczewski" <[EMAIL PROTECTED]> wrote:

> On Sun, Aug 31, 2008 at 11:31:32PM -0400, Aaron Burnett wrote:
>> table1 has 25 columns
>> table2 is a subset of table1 (create table2 as select
>> id,field1,field2,field3,field4,field5,field6 from table1) with just 7
>> columns
>> There is a primary key on ID
>> table2 was exported to a CSV, truncated, then the "cleaned" CSV was
>> re-imported to table2
>> In a nutshell I need to find the difference between the 6 columns in table2
>> vs table1 and update table1, again, with the ID column being the pk.
> 
> update table1 as t1
> set
> field1 = t2.field1,
> field2 = t2.field2,
> field3 = t2.field3,
> field4 = t2.field4,
> field5 = t2.field5,
> field6 = t2.field6
> from
> table2 t2
> where
> t1.id = t2.id
> and (
> ( t1.field1 is distinct from t2.field1 ) OR
> ( t1.field2 is distinct from t2.field2 ) OR
> ( t1.field3 is distinct from t2.field3 ) OR
> ( t1.field4 is distinct from t2.field4 ) OR
> ( t1.field5 is distinct from t2.field5 ) OR
> ( t1.field6 is distinct from t2.field6 )
> );
> 
> should work.
> 
> Best regards,
> 
> depesz




[GENERAL] Assistance with SQL

2008-08-31 Thread Aaron Burnett

Hi,

I'm hoping someone can help me out on this one.

Two tables

table1 has 25 columns
table2 is a subset of table1 (create table2 as select
id,field1,field2,field3,field4,field5,field6 from table1) with just 7
columns
There is a primary key on ID

table2 was exported to a CSV, truncated, then the "cleaned" CSV was
re-imported to table2

In a nutshell I need to find the difference between the 6 columns in table2
vs table1 and update table1, again, with the ID column being the pk.


Thanks in advance for any help here.


Re: [GENERAL] In the belly of the beast (MySQLCon)

2008-04-17 Thread Aaron Glenn
On Thu, Apr 17, 2008 at 11:29 AM, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Hello,
>
>  I am currently chilling at MySQLCon. If any other Elephant riders who
>  are doing a little Dolphin hunting are about... I am in Ballroom E
>  about to give a talk on what Mysql can learn from PostgreSQL.

is there a live video feed those of us not attending can watch?
I've got my popcorn in the microwave already...



Re: [GENERAL] How can I avoid PGPool as a single point of failure?

2008-01-31 Thread Aaron Glenn
On Jan 29, 2008 6:30 PM, Chander Ganesan <[EMAIL PROTECTED]> wrote:
>
> > Is it possible to point two servers running PGPool at the same two
> > database servers? If so, I seem to recall reading about being able to
> > use some kind of virtual IP address to split the traffic to both.
> CARP...I've never tried to use it for such a thing (I've used it for
> firewall redundancy in the past..which is what I think it was designed
> for), but I'm fairly certain you could do so without too much trouble.

CARP *and* pfsync.
this late at night off the top of my head I can't see any blatantly
obvious reason this wouldn't work (with at least pgpool that is, dunno
about your data)

aaron.glenn



[GENERAL] Need software infrastructure advice

2007-05-23 Thread Aaron Zeitler

Note: The meat and potatoes of my request is in the "What I would like"
section.  Everything else is just explanation.

Currently my company is using software that really isn't meeting our needs.
We have tried to get the 3rd party to allow us to help with their source,
but they won't allow us to help with it (even after we offered to sign
non-disclosure and non-competition contracts).  Because of a number of
issues we are looking at developing our own solution.  I have been thinking
for some time about PostgreSQL to help with the solution.  I'm just looking
for some advice on how I might pull off the features we would like.


What I would like:
-Data sharing between locations.
-Offline availability (in case Internet connection goes down).
-The ability to make a change at the home office, and have that change made
at the applicable (not necessarily all) remote locations.  Currently we have
to log in (VNC) to each location to make a change, I would like to remove
this requirement.
-Company-wide querying built in.


What I have now:
-60 remote locations (in different states) with ADSL or Cable Internet
connection (connection reliability is just like it is for a home user,
sometimes great, sometimes "flakey").
-Each remote location has at least 2 computers, most have 3, a few have 4,
and one has 5.  The naming structure is station-##-1, station-##-2, etc.
where the ## is the store number.  Computers have Windows 2000 or Windows XP
Pro.
-The current application we have is a MS Access front-end, with a MS Access
back-end shared peer-to-peer.  The back end is located on the station 1 of
each location.  The other stations at the location link to this back-end.
Each store's data is separate, they do not share data between locations
(though they may share actual clients).
-Backups consist of a batch file that zips the back-end and transfers the
file to our home office via sftp.
-We run MS SQL Server 2000 at the home office.  I have connected the MS
Access back-ends as linked servers and run Stored Procedures to combine the
data from the various back-ends into single tables in MS SQL Server.  I then
can run company-wide reports against this combined data (something not
normally possible or provided by the software as they are essentially
separate databases).


What I have considered:
-At the most basic I could use the same setup we have now.  MS Access
front-end and back-end, but it just be OUR application running it.  But this
doesn't meet other "wants" I would like to fulfill.
-Central database at the home office.  This would allow us to share data
between stores (which is desirable).  The front-end could simply be a web
application.  Or it could be a Java app that connects over the Internet.  We
used software like this before our current solution years ago (but it wasn't
OUR software).  The problem with this was that if the store lost their
Internet connection, they could not help customers.  This was not
acceptable.
-Same setup as now (using the Access databases), but providing a web page
linked to the MS SQL Server at the home office.  This would allow separate
stores to look up (and thus share) information between each other.  The
problems with this is that the data would only be updated daily (I would
like it to be hourly at least) and the data is not integrated into the
software.  Integration into the software would be best in my opinion (no
need to log onto a site for each check, no way to forget checking against
other store's data).
-PostgreSQL as a back-end.  Each store would have its own PostgreSQL
back-end that would then be replicated to the home office into its own
database or schema or table.  At the home office I would combine the data
into "unioned" versions of what exists locally at each location.  I could
then replicate these "unioned" tables back to the remote locations.  So a
store would be the master of its own data, and a slave to the "unioned"
tables, while the home office was a slave to store tables, but a master for
the "unioned" tables.  The software would store its customers' data in its
regular tables, but would perform checks against the "master" tables to see
whether the customer currently has data somewhere else.  I feel that this would
allow data to be available, and customers could still be helped, even if the
Internet went down for a few minutes (or days).  The main problem with this
solution is that Slony-1 says that you really shouldn't have more than a
dozen places that you replicate to, and I have 60.  I would also like to be
able to go well beyond 60 (so I am looking for a scalable solution).  Also,
sometimes the Internet connection to a location goes down.  Some are more
"flakey" than others.  Slony-1 says it is not for this type of situation.

What I need:
-Suggestions.  I am open to just about anything.  This is a long-term
project.  We certainly don't have unlimited resources, but we've paid around
$300k (most of the cost being custom programming) over the last 4 years for
a so

Re: [GENERAL] [ADMIN] Kill a Long Running Query

2007-04-25 Thread Aaron Bono

On 4/25/07, Mageshwaran <[EMAIL PROTECTED]> wrote:


Hi ,
Any body tell me how to kill a long running query in postgresql, is
there any statement to kill a query, and also tell me how to log slow
queries to a log file.

Regards
J Mageshwaran




See if this helps:
http://archives.postgresql.org/pgsql-hackers-win32/2004-12/msg00039.php
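For the archives, a sketch of the usual approach on the 8.x releases (the pid shown is an example; the column names below are the pre-9.2 ones):

```sql
-- Find the long-running backend:
SELECT procpid, current_query FROM pg_stat_activity;

-- Ask it to cancel its current query (12345 is an example pid):
SELECT pg_cancel_backend(12345);

-- Slow-query logging: in postgresql.conf, e.g.
--   log_min_duration_statement = 1000   -- log statements taking over 1s
```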




--
======
  Aaron Bono
  Aranya Software Technologies, Inc.
  http://www.aranya.com
  http://codeelixir.com
==


Re: [GENERAL] sql formatter/beautifier

2007-03-08 Thread Aaron Bingham

Joris Dobbelsteen wrote:

PostGreSQL (7.4 and onward) has such a thing built-in, but it's not
particularly good (simple cases work, but once it gets complex it makes a
mess out of it).

Hi,

Cleaning out my pgsql-general mail, I ran across your post.  How do I 
invoke PostgreSQL's built-in SQL beautifier?  I wasn't able to find a 
reference to this feature in the documentation.
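(For what it's worth, my assumption is that Joris meant the pretty-printing flag on the rule/view deparsing functions, e.g.:

```sql
-- Second argument true = pretty-print the stored view definition
-- ('someview' is an example name):
SELECT pg_get_viewdef('someview'::regclass, true);
```

That only reformats definitions the server has already parsed, not arbitrary SQL text.)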


Thanks,

--
----
Aaron Bingham
Senior Software Engineer
Cenix BioScience GmbH





[GENERAL] Experiences with 3PAR

2006-10-02 Thread Aaron Glenn

I'm curious if anyone on the list has any hands on performance
experience with running PostgreSQL on 3PAR appliances (big and small).
If you do, please contact me offlist - I'll be happy to summarize for
the list archives.

Thanks,
Aaron



Re: [GENERAL] Database corruption with Postgre 7.4.2 on FreeBSD 6.1?

2006-07-26 Thread Aaron Glenn

On 7/26/06, aurora <[EMAIL PROTECTED]> wrote:

 From your experience, do you expect the database to run into this from
time to time in a way that requires a DBA's intervention? If so, it would become a
problem for our customers because our product is a standalone system. We
don't intend to expose the Postgre database underneath.

wy


Is there a particular reason you're using 7.4.2? At the very least, I
would be using 7.4.13.

regards,
aaron.glenn



Re: [GENERAL] join on next row

2006-06-21 Thread Aaron Evans


sorry to nitpick, but I think that to get this query to do exactly
what you want you'll need to add ordering over EventTime on your
sub-selects to ensure that you get the next event and not just some
later event on the given day.


-ae

On Jun 20, 2006, at 11:12 AM, Gurjeet Singh wrote:


Gurjeet Singh wrote:
> It would have been quite easy if done in Oracle's 'lateral view'
> feature. But I think it is achievable in standard SQL too; using
> subqueries in the select-clause.
>
> Try something like this:
>
> select
>     Employee, EventDate,
>     EventTime as e1_time,
>     EventType as e1_type,
>     (select EventTime
>        from Events
>       where Employee  = O.Employee
>         and EventDate = O.EventDate
>         and EventTime > O.EventTime
>       limit 1
>     ) as e_time_1,
>     (select EventType
>        from Events
>       where Employee  = O.Employee
>         and EventDate = O.EventDate
>         and EventTime > O.EventTime
>       limit 1
>     ) as e_type_1
> from
>     Events O
>
> Hope it helps...
>
> Regards,
> Gurjeet.
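Putting that into SQL: each sub-select needs an ORDER BY so that LIMIT 1 returns the *next* event (sketch of one of the two sub-selects):

```sql
(select EventTime
   from Events
  where Employee  = O.Employee
    and EventDate = O.EventDate
    and EventTime > O.EventTime
  order by EventTime   -- without this, LIMIT 1 returns an arbitrary later event
  limit 1) as e_time_1
```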






Re: [GENERAL] Exporting data from view

2006-06-20 Thread Aaron Koning
copyable, importable... into Excel or another postgres db?

On 6/20/06, Martijn van Oosterhout <kleptog@svana.org> wrote:
> On Tue, Jun 20, 2006 at 10:29:21AM -0700, Aaron Koning wrote:
>> CREATE TABLE sometable AS SELECT * FROM someview;
>> pg_dump -t sometable dbname
>> DROP TABLE sometable
>>
>> Que? Si!
>
> Eh? If you're going to create the table anyway, I'd use psql:
>
> psql -c "COPY table TO STDOUT"
>
> If you put "CSV" there you can get the output in CSV. Far cleaner than
> pg_dump.
>
> --
> Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> > From each according to his ability. To each according to his ability to litigate.

--
Aaron Koning
Information Technologist
Prince George, BC, Canada.
http://datashare.gis.unbc.ca/fist/
http://datashare.gis.unbc.ca/gctp-js/


Re: [GENERAL] Exporting data from view

2006-06-20 Thread Aaron Koning
CREATE TABLE sometable AS SELECT * FROM someview;

pg_dump -t sometable dbname

DROP TABLE sometable



Que? Si!

On 6/20/06, Worky Workerson <[EMAIL PROTECTED]> wrote:
> On 6/20/06, Aaron Koning <[EMAIL PROTECTED]> wrote:
>> google pg_dump
>
> Really?  What command do you use?  I've tried the following:
>
> pg_dump -t viewname dbname
>
> and I get the view definition, whereas I would like the data.  Is
> there an option to pg_dump that I'm missing?

--
Aaron Koning
Information Technologist
Prince George, BC, Canada.
http://datashare.gis.unbc.ca/fist/
http://datashare.gis.unbc.ca/gctp-js/


Re: [GENERAL] Exporting data from view

2006-06-20 Thread Aaron Koning
google pg_dump

On 6/20/06, Worky Workerson <[EMAIL PROTECTED]> wrote:
> I read recently about the efforts underway to COPY from a view,
> however I was wondering what the current best-practices are for being
> able to copy out of a view and import that data into an actual table
> elsewhere.  I am currently doing psql -c "SELECT ..." and then using a
> bit of perl to transform that into something copyable (i.e. CSV), but
> is there a way to directly export the data in an easily importable
> form?  Thanks!

--
Aaron Koning
Information Technologist
Prince George, BC, Canada.
http://datashare.gis.unbc.ca/fist/
http://datashare.gis.unbc.ca/gctp-js/
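For the archives: the capability this thread anticipated arrived in PostgreSQL 8.2 as COPY over an arbitrary SELECT, which removes both the temp-table workaround and the perl step (the view name is an example):

```sql
-- 8.2 and later, server-side:
COPY (SELECT * FROM someview) TO STDOUT WITH CSV;

-- Client-side equivalent in psql (writes where psql runs, no superuser needed):
-- \copy (SELECT * FROM someview) TO 'someview.csv' WITH CSV
```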


Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing

2006-06-12 Thread Aaron Bingham

Aaron Bingham wrote:


David Fetter wrote:


In SQL, you can do this (this example condensed from Libkin's
"Expressive Power of SQL" on the page above):

SELECT
   (SELECT count(*) FROM table_1) <
   (SELECT count(*) FROM table_2)
   AS "Can't compare cardinalities in first order logic";

Note the name of the output column.  It's important and true, as you
can verify if you care to do your homework on foundations of
mathematics.  Relational algebra is a subset of first-order logic
<http://en.wikipedia.org/wiki/Relational_algebra>, and as a direct
consequence, you can't do this simple but interesting thing with it.
 

I must be missing something important.  What aspect of the above query 
is supposedly impossible in relational algebra and/or relational 
calculus?


Having looked at this again, I now see that your statement above is 
strictly correct, but misleading.  Relational algebra consists of a 
limited number of operators on relations.  As such, relational algebra 
says nothing about aggregate functions such as COUNT, or how to build a 
relation from scaler values.  Relational algebra is, however, only part 
of the relational model as defined by Date, and Tutorial D includes all 
the provisions we need to re-write the above query.  The above query
could be expressed in Tutorial D more-or-less as follows (I'm not sure 
if arbitrary strings are allowed as column names in Tutorial D, but 
that's beside the point):


  RELATION { TUPLE { "Can't compare cardinalities in first order logic" 
(COUNT(table_1) < COUNT(table_2)) } }


Placing the result of the comparison in a relation seems unnecessary, 
but I have done so for equivalence to your example.  Or did I miss the 
point?


Regards,

--

Aaron Bingham
Senior Software Engineer
Cenix BioScience GmbH





Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing

2006-06-12 Thread Aaron Bingham

David Fetter wrote:


On Fri, Jun 09, 2006 at 03:55:04PM +0200, Aaron Bingham wrote:
 


[EMAIL PROTECTED] wrote:
   


I'm reading, and enjoying immensely, Fabial Pascal's book "Practical Issues in 
Database Management."
 



If you're interested in the theory of RDBMSs, you can start with the papers on 
Leonid Libkin's page and the books and papers they reference. :)

http://www.cs.toronto.edu/~libkin/publ.html
 


Thanks for the pointer, I will look into it later.


I also found this book very useful when I first started doing serious database 
work.  For a more thorough treatment of many of these issues, see An 
Introduction to Database Systems by Chris Date. The latter book is so full of 
detail that it is sometimes hard to follow,
   


It certainly has an elaborate and well-thought-out system of ideas. As an 
empiricist, I find it crucially important that despite decades of whining about 
it, no one has come up with an actual *computer* system which implements this 
/gedankenexperiment/.
 

Whether or not there will ever be an implementation of their definition
of the RM, their ideas provide useful guidance in designing real-world 
databases today.  The lack of an implementation of this RM is 
unfortunate, but there are many possible explanations for this lack 
other than faults in the theory itself.  The theory _could_ be flawed, 
of course, but I have not seen sufficient evidence to support that 
conclusion.



but it's worth the effort.
   


Why?  There are much more entertaining cranks out there if crank study is your 
thing.
 

I'm not at all into crank study.  I'm interested in basing design 
decisions on a solid foundation, grounded in logic.


"An Introduction to Database Systems" is not a thorough exposition of 
relational theory, as you seem to imply, but an _introduction_ to 
database fundamentals and the application thereof to SQL-DBMSs.  Were 
you thinking of "The Third Manifesto" by Date and Darwen?



Though I've just gotten started with the book, he seems to be saying that 
modern RDBMSs aren't as faithful to relational theory as they ought to be, and 
that this has many *practical* consequences, e.g. lack of functionality.

Given that PostgreSQL is open source, it seems a more likely candidate for 
addressing Pascal's concerns. At least the potential is there.
 


Although some DBMSs have invented new ways to break the relational model, the 
fundamental problems are in SQL.
   



Um, no.  As I'll demonstrate below, it's the model that's broken and SQL that 
got it right.
 


I'm unconvinced.  See below.


No DBMS based on SQL is going to be able to support RM correctly.
   



Aha!  I spy, with my little eye, a fanboy.  

I may have come across as over-enthusiastic late on Friday afternoon; I 
was attempting to counter over-enthusiastic claims in the other 
direction.  I find your choice of words insulting, but I won't hold it 
against you.


I appreciate Date and Pascal's work because it is well reasoned and 
grounded in mathematics and logic.  I do not consider their work final 
in any way.  They themselves are quick to acknowledge gaps in their 
understanding.  I have not seen a better conceptual framework for 
thinking about databases.  If I were presented with one, I would not 
cling to Date's or Pascal's views.



You have to be a bit of a theory wonk to call Date's stuff "RM."  You seem to 
be implying here
that Date's "RM" is somehow more desirable than what SQL actually provides.  To 
be more desirable, I don't think it's unreasonable to say that it should be more powerful 
in some essential way.
 

We were discussing Fabian Pascal's book.  His book is based on his 
definition of "RM", which is largely similar to (though not identical 
with) Date's.  My above claim applies to "RM" as defined by Date or 
Pascal, not some other "RM".  If you could point me to alternate 
definitions of "RM" which are not in conflict with SQL, I would be 
curious to see them; a better framework for thinking about SQL 
databases would be invaluable.



In SQL, you can do this (this example condensed from Libkin's
"Expressive Power of SQL" on the page above):

SELECT
   (SELECT count(*) FROM table_1) <
   (SELECT count(*) FROM table_2)
   AS "Can't compare cardinalities in first order logic";

Note the name of the output column.  It's important and true, as you
can verify if you care to do your homework on foundations of
mathematics.  Relational algebra is a subset of first-order logic
<http://en.wikipedia.org/wiki/Relational_algebra>, and as a direct
consequence, you can't do this simple but interesting thing with it.
 

I must be missing something important.  What aspect of the above query 
is supposedly

Re: [GENERAL] Fabian Pascal and RDBMS deficiencies in fully implementing

2006-06-09 Thread Aaron Bingham

[EMAIL PROTECTED] wrote:


I'm reading, and enjoying immensely, Fabian Pascal's book "Practical
Issues in Database Management."
 

I also found this book very useful when I first started doing serious 
database work.  For a more thorough treatment of many of these issues, 
see An Introduction to Database Systems by Chris Date.  The latter book 
is so full of detail that it is sometimes hard to follow, but it's worth 
the effort.



Though I've just gotten started with the book, he seems to be saying
that modern RDBMSs aren't as faithful to relational theory as they
ought to be, and that this has many *practical* consequences, e.g. lack
of functionality.

Given that PostgreSQL is open source, it seems a more likely candidate
for addressing Pascal's concerns. At least the potential is there.
 

Although some DBMSs have invented new ways to break the relational 
model, the fundamental problems are in SQL.  No DBMS based on SQL is 
going to be able to support RM correctly.



Some questions:

1) Is PostgreSQL more faithful to relational theory? If so, do you find
yourself using the additional functionality afforded by this? e.g. does
it really matter to what you do in your daily work.

Within the limitations imposed by the SQL standard, PostgreSQL seems to 
do about as well as could be expected, but falls short as all SQL DBMSs 
must.  For example, PostgreSQL allows NULLs and duplicate rows (there 
are preventive measures against both of these but you have to be careful 
to avoid them, and sometimes you can't).  One feature of RM that 
PostgreSQL lacks is nested relations (a bad idea for base table design 
but useful in query results).


Regards,

--
----
Aaron Bingham
Senior Software Engineer
Cenix BioScience GmbH



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Having problems with a 25 million row table on 8.1.3

2006-04-27 Thread Aaron Evans

try:

select tax_code from warehouse.sec_trans group by tax_code

there was a discussion about this on the pgsql-performance a while back:

http://archives.postgresql.org/pgsql-performance/2004-10/msg00053.php

-ae

On Apr 25, 2006, at 4:10 PM, Tony Caduto wrote:

select DISTINCT tax_code from warehouse.sec_trans We let this run  
for 1/2 hour or so and canceled it.


Then I tried select DISTINCT ON (tax_code) tax_code from  
warehouse.sec_trans
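On the planners of that era (7.4/8.0, per the linked thread), SELECT DISTINCT always sorts the full input, while GROUP BY can be planned as a HashAggregate; a sketch of the contrast, reusing the table name from the thread:

```sql
-- DISTINCT forces Sort + Unique over all rows on old planners:
SELECT DISTINCT tax_code FROM warehouse.sec_trans;

-- GROUP BY can use HashAggregate: one pass over the table, with
-- memory proportional to the number of distinct tax_code values
-- rather than to the row count.
SELECT tax_code FROM warehouse.sec_trans GROUP BY tax_code;
```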


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] plpgsql replication stored procedure

2006-04-17 Thread aaron . clauson
Hi,

I'm trying to write a stored procedure that can capture all the changes
to a table and record the changes based on the table's primary key.

I can almost get there but the sticking point is being able to access
the primary key field of the NEW/OLD record in the trigger stored
procedure without knowing it's name.

The stored procedure is below and what I'm trying to do is find a way
to get the column from the NEW record that has the name
constraintColName. If I could do that I would replace NEW.oid with the
equivalent of NEW[constraintColName] and remove the need for oid's on
the table.

create or replace function replicate() returns trigger as
$$

 declare
  constraintName varchar;
  constraintColName varchar;
  keyId varchar;
  slaves record;

 begin
select into constraintName constraint_name from
information_schema.table_constraints where table_name = TG_RELNAME and
constraint_type = 'PRIMARY KEY';
select into constraintColName column_name from
information_schema.key_column_usage where constraint_name =
constraintName;
-- execute 'select ' || constraintColName || ' from ' || NEW into
keyId;

for slaves in
 select slaveid from replicationslaves
loop
 insert into replicationentries values(default, slaves.slaveid,
TG_OP, TG_RELNAME , NEW.oid, default);
end loop;
return NULL;
end;$$
language 'plpgsql';
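The commented-out EXECUTE cannot work because NEW is a record variable, not a relation. On much later releases (9.3+, long after this post) one workaround is to serialize NEW and index it by column name; a hedged sketch of the idea, with the per-slave loop elided:

```sql
create or replace function replicate() returns trigger as
$$
 declare
  constraintColName varchar;
  keyId varchar;
 begin
  select kcu.column_name into constraintColName
    from information_schema.table_constraints tc
    join information_schema.key_column_usage kcu
      on kcu.constraint_name = tc.constraint_name
   where tc.table_name = TG_RELNAME
     and tc.constraint_type = 'PRIMARY KEY';

  -- The step the original could not express: read the primary-key
  -- field of NEW by name, via JSON serialization (PostgreSQL 9.3+),
  -- replacing the dependence on NEW.oid.
  keyId := row_to_json(NEW) ->> constraintColName;

  insert into replicationentries
    values (default, NULL, TG_OP, TG_RELNAME, keyId, default);
  return NULL;
 end;
$$ language plpgsql;
```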

Aaron


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Meaning of "loops" in EXPLAIN ANALYSE output

2006-04-10 Thread Aaron Bingham

Hi,

I have a query optimization problem and I have failed to find the part 
of the Postgres docs that explains the meaning of the "loops" value in 
the EXPLAIN ANALYSE output.


For example, my EXPLAIN ANALYSE output contains the following line:

Unique  (cost=9775.21..10015.32 rows=1 width=8) (actual 
time=264.889..264.889 rows=1 loops=791)


Does that mean that the entire operation took 264.889 ms, or that a 
single iteration took that long?  The time for the entire query would 
suggest the latter interpretation but I'd like to rule out the 
possibility that something else is causing the execution time to balloon.
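For reference when reading such output: the actual startup and total times that EXPLAIN ANALYZE prints are averages per loop, so a node's overall contribution is roughly time multiplied by loops. Applied to the line quoted above:

```sql
-- Unique  (actual time=264.889..264.889 rows=1 loops=791)
-- per-loop total time:     264.889 ms
-- overall contribution:    264.889 ms * 791 loops ~= 209,527 ms
```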


Thanks,

--
----
Aaron Bingham
Senior Software Engineer
Cenix BioScience GmbH



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] how to document database

2006-04-10 Thread Aaron Bingham

Ottavio Campana wrote:


I need to document the  database I develop so that other people can
easily understand how it works.

I particularly want to document the stored procedures. By now I've used
a javadoc style to document them. I can't use tools like doxygen on them
but it is always better than nothing.

I'd like to know if you're using some particular tool.


Hi Ottavio,

Have you tried PostgreSQL Autodoc (http://www.rbt.ca/autodoc/)?  It
generates HTML documentation (or other formats) directly from your
database.  It will use the comments added to the DB with the 'comment
on' statement.  It can also generate graphical representations of your
schema using various tools, including Graphviz.
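The 'comment on' statements that Autodoc reads look like the following; the object names here are invented for illustration:

```sql
COMMENT ON TABLE accounts IS 'One row per customer account.';
COMMENT ON COLUMN accounts.created_at IS 'Timestamp of signup, UTC.';
COMMENT ON FUNCTION close_account(integer) IS 'Marks an account closed.';
```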

Regards,

--
----
Aaron Bingham
Senior Software Engineer
Cenix BioScience GmbH
Tatzberg 47   phone: +49 (351) 4173-146
D-01307 Dresden, Germany  fax:   +49 (351) 4173-198 




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[GENERAL] [Slightly OT] data model books/resources?

2006-03-30 Thread Aaron Glenn
Anyone care to share the great books, articles, manifestos, notes,
leaflets, etc on data modelling they've come across? Ideally I'd like
to find a great college level book on data models, but I haven't come
across one that even slightly holds "definitive resource"-type status.

Feel free to reply off list to keep the clutter down - I'd be happy to
summarize responses for the list.

Thanks,
aaron.glenn

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] address matching and verification

2006-02-16 Thread Aaron Griffin



I have two large databases of addresses (street name and number, postal 
code and municipality).  I need to match the addresses to verify that 
they actually do exist.  Has anyone done this before with pgAdmin?
 
Sincerely,

Aaron Griffin
GIS Data Acquisition Technician

DMTI Spatial Inc.
www.dmtispatial.com

Phone: 905-948-2000 ext. 2026
Toll Free: 1-877-477-3684 ext. 2026
Fax: 905-948-9404

625 Cochrane Drive, 3rd Floor
Markham, ON
L3R 9R9
Canada
 


Re: [GENERAL] Allowing Custom Fields

2006-01-27 Thread Aaron Colflesh




Bruno Wolff III wrote:

  On Fri, Jan 27, 2006 at 10:25:00 -0600,
  Aaron Colflesh <[EMAIL PROTECTED]> wrote:
  
  
#2 would seem to be the simplest except I'm really not too keen on the 
idea of manipulating a table like that on the fly (even though I did 
proof of concept it and it seems to be simple enough to be fairly safe 
if adequate checks for entries on table B are put into the system). Does 
anyone know of a 3rd way of doing it? It seems like this shouldn't be an 
all that uncommon task, so I'm hoping there is some slick way of maybe 
putting together a function or view to return data rows with a flexible 
field layout. So far all the in-db tricks I've come up with have 
required me to know what the field names were to generate the final 
query anyway, so they don't really gain me anything.

  
  
Couldn't you let the user create a view joining A and B?
  

I have yet to find a way to make a query that will take the individual
row values of one table and make them appear to be columns (either by
themselves or as part of a join to another table). If someone can tell
me how to do that, then yes a view would be ideal.
Thanks,
AaronC




[GENERAL] Allowing Custom Fields

2006-01-27 Thread Aaron Colflesh

Hello folks,
I've run into a challenge that doesn't appear to have been discussed in 
the archives anywhere.


I'm designing a database that users need to have the ability to 
customize some. They just need the ability to add extra fields to an 
existing table (oh and they can't touch the predefined fields). So the 
database schema so far is table A (existing table), table B (contains a 
list of custom field names and other meta data) and table C 
(intersection table between A & B containing the values for the custom 
fields for each row). That works really well and all but we have 
problems with retrieving the data. Due to other requirements related to 
reporting we need to be able to present the data in table A along with 
any custom fields in a table as if the custom fields were actually 
fields on A. I only know of two ways of doing this, and I'm hoping one 
of you knows of a third way (I've tried to use a function to do it but 
it just doesn't seem to work).


1. Build the virtual table outside the database in application code
2. Use triggers on table B to actually create and remove custom fields 
on A as they are inserted/removed from B.


#2 would seem to be the simplest except I'm really not too keen on the 
idea of manipulating a table like that on the fly (even though I did 
proof of concept it and it seems to be simple enough to be fairly safe 
if adequate checks for entries on table B are put into the system). Does 
anyone know of a 3rd way of doing it? It seems like this shouldn't be an 
all that uncommon task, so I'm hoping there is some slick way of maybe 
putting together a function or view to return data rows with a flexible 
field layout. So far all the in-db tricks I've come up with have 
required me to know what the field names were to generate the final 
query anyway, so they don't really gain me anything.
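For what it's worth, a common third approach is a two-step pivot: read the custom-field names from table B first, then generate one conditional aggregate per field. A hedged sketch with hypothetical column names for the A/B/C layout described above; the column list still has to be generated by the application (or by EXECUTE in a function), since SQL column lists cannot be dynamic:

```sql
-- Assumes: a(id, ...), b(field_id, field_name, ...),
--          c(a_id, field_id, value).
-- One MAX(CASE ...) per custom field turns c's rows into columns.
SELECT a.id,
       MAX(CASE WHEN b.field_name = 'color' THEN c.value END) AS color,
       MAX(CASE WHEN b.field_name = 'size'  THEN c.value END) AS size
  FROM a
  LEFT JOIN c ON c.a_id = a.id
  LEFT JOIN b ON b.field_id = c.field_id
 GROUP BY a.id;
```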


Thanks,
Aaron C.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Adding another primary key to a populated table

2006-01-05 Thread Aaron Koning
Are you trying to create a primary key composed of 6 fields? What is
the result you want to achieve with the constraint? If you just want
UNIQUE, NOT NULL values in a field, you can achieve that without
creating a primary key.
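Assuming a six-column key really is wanted, the drop-and-recreate the original poster attempts would look roughly like this (column names are invented; foreign keys that reference the old key must be dropped and re-pointed first, which is why a plain DROP CONSTRAINT can fail or require CASCADE):

```sql
BEGIN;
ALTER TABLE product_price DROP CONSTRAINT product_price_pkey;
ALTER TABLE product_price
    ADD CONSTRAINT product_price_pkey
    PRIMARY KEY (product_id, currency, region, tier, valid_from, valid_to);
COMMIT;
```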

Aaron

On 1/5/06, Daniel Kunkel <[EMAIL PROTECTED]> wrote:

Hi

It makes sense that I can't have more than 1 primary key.  Postgres was
trying to create another primary key instead of modify the existing
primary key.

So...

As I understand it, a table does not always have to have a primary key
defined.

Would it work to first delete/drop the primary key, then recreate the
primary key on all 6 columns.

ALTER TABLE product_price DROP CONSTRAINT product_price_pkey;

I tried this, but it doesn't seem to work...  If I look at the table
from pgAdmin, it is still there, reindexable, I can't add a new primary
key, etc.  But if I try to run the above command twice, it says it's
already been removed.

--

Just for the record...  the error message I got was:

ERROR:  ALTER TABLE / PRIMARY KEY multiple primary keys for table
'product_price' are not allowed

On Fri, 2006-01-06 at 05:19 +, Andrew - Supernews wrote:
> On 2006-01-06, Daniel Kunkel <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> > I'm trying to add another primary key to a table populated with data and
> > a number of foreign key constraints.
>
> You can only have one primary key on a table.
>
> You can add additional unique constraints to get the same effect. (A
> primary key constraint is just a unique constraint that is also not null,
> and is the default target for REFERENCES constraints referring to the table -
> this last factor is why there can be only one...)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Best Data type for Binary Data?

2006-01-05 Thread Aaron Koning
BYTEA is the easiest from my point of view. I like being able to treat
my binary data more like any other field (e.g. date, text, etc). Heres
some light reading on the BLOB/BYTEA debate:
  http://search.postgresql.org/www.search?cs=utf-8&fm=on&st=20&dt=back&q=blob+bytea

Aaron

On 1/5/06, Ketema Harris <[EMAIL PROTECTED]> wrote:
> Hi, I would like to store binary data from a tcpdump (libpcap) file in
> a table. What is the best type to use? I have read posts saying lo,
> oid, and bytea. Which one would be best for this scenario?
>
> Thanks, ketema
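A minimal bytea sketch (table and column names invented); in practice the packet bytes would be passed as a bound parameter from the client library rather than escaped by hand:

```sql
CREATE TABLE capture (
    id     serial PRIMARY KEY,
    taken  timestamptz DEFAULT now(),
    packet bytea                -- raw libpcap record
);

-- Parameterized insert; $1 carries the binary data unescaped.
-- PREPARE is used here only to make the parameter explicit.
PREPARE add_packet (bytea) AS
    INSERT INTO capture (packet) VALUES ($1);
```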



Re: [GENERAL] Multi-row update w. plpgsql function

2005-12-13 Thread Aaron Koning
It might be easier to use this SQL:
 UPDATE message_table SET status = 'A' WHERE mid IN (1,2,3);
The following might work for Cocoon (never used it):
 UPDATE message_table SET status = 'A' WHERE
mid IN ();

Aaron
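The IN-list suggestion above can also be wrapped in a function that takes all approved ids at once, sketched against the same message_table (this uses an integer array instead of Cocoon's placeholder mechanism; `= ANY (array)` works on 7.4+):

```sql
CREATE OR REPLACE FUNCTION update_messages(approved integer[])
RETURNS integer AS
$body$
BEGIN
  -- One statement updates every checked message; no loop needed.
  UPDATE message_table
     SET status = 'A'
   WHERE status = 'H'
     AND mid = ANY (approved);
  RETURN 1;
END;
$body$ LANGUAGE plpgsql;

-- e.g. SELECT update_messages(ARRAY[1,3,4]);
```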


On 12/13/05, Daniel Hertz <[EMAIL PROTECTED]> wrote:
Given a set of checkbox values that are submitted through an html form,
how do you loop through the submitted values to update more than one row
in a table?

Imagine a table called 'message_table':

mid | message | status
-----+---------+--------
  1  |  Text1  |  H
  2  |  Text2  |  H
  3  |  Text3  |  H
  4  |  Text4  |  H

A web page presents the user with all messages flagged with 'H'. User
checks messages 1, 3 and 4 and submits form.
(i.e. approved=1&approved=3&approved=4)

After performing postgreSQL update, rows 1, 3 and 4 would be updated to:

mid | message | status
-----+---------+--------
  1  |  Text1  |  A
  2  |  Text2  |  H
  3  |  Text3  |  A
  4  |  Text4  |  A

I have never written a plpgsql function, but tried:

CREATE OR REPLACE FUNCTION update_messages(approved integer) RETURNS
integer AS
$body$
DECLARE
 new_status varchar;
 new_sample record;
BEGIN
 new_status := 'A';
 FOR new_sample IN SELECT * FROM message_table WHERE status='H' ORDER BY
mid LOOP
  UPDATE message_table SET status = new_status
  WHERE mid = approved;
 END LOOP;
 RETURN 1;
END;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

I call the function with:

SELECT update_messages();

I'm using apache cocoon, which is why you see the variable placeholder: );

Unfortunately, the function only updates the first value submitted (mid
1), and doesn't loop through the other two values submitted.

Can someone help this novice from getting ulcers?

Thanks for your help!

Daniel

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] is there any way of specifying "i want x GB of space to be available for my database"

2005-12-05 Thread Aaron Koning
If you are using a Linux system then you can make a partition of xGB and place PGDATA there. This may work for other OS as well.
 
Aaron 
On 12/5/05, surabhi.ahuja <[EMAIL PROTECTED]> wrote:


 here is a question
 
say i have a database and all the files(data files) indexes etc must be going to the PGDATA directory
 
The question is this:
is there any way by which i can specify : to reserve x GB amount of space to this database (this x includes all the space which the database will require space for data files, indexes or any other resources).

 
what i want to achieve by doing this is to limit the amount of rows i have inserted into the table and indexes etc.
 
Thanks,
regards
surabhi
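Since 8.0 this can also be done per-database with a tablespace on the dedicated partition, rather than relocating all of PGDATA; a hedged sketch with an invented mount point (this caps total disk usage, but it is a space limit, not a row quota):

```sql
-- /mnt/pgspace is the mount point of the x-GB partition,
-- owned by the postgres OS user.
CREATE TABLESPACE limited_space LOCATION '/mnt/pgspace';
CREATE DATABASE quota_db TABLESPACE limited_space;
```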


Re: [GENERAL] [Fwd: Sun backs open-source database PostgreSQL | Topic: "everything" | ZDNet News Alerts]

2005-11-17 Thread Aaron Glenn
>
> *Sun backs open-source database PostgreSQL*
>

This is going to make PostgreSQL a much easier sell to PHB's (at
least, in my experience)


aaron.glenn

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] lo_import()

2005-11-15 Thread Aaron Steele
hi,



i would like to use lo_import() on the client side to insert images
into postgresql version 8.0.1 on a remote server using psql as follows:



psql -h my.db.host.com -U user -c "insert into binblob (binary_blob)
values(lo_import('/path/on/client/machine/ks.jpg'))" -d mydb



i'm getting the following error:



ERROR:  must be superuser to use server-side lo_import()

HINT:  Anyone can use the client-side lo_import() provided by libpq.



isn't the above command the client-side lo_import()? thoughts? 



thanks,

cb
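The HINT refers to psql's backslash meta-command, which reads the file on the client machine and then exposes the new large object's OID in the :LASTOID variable; a hedged sketch of the invocation, reusing the host, table, and path from the message above (single-`-c` psql of that era can't chain commands, hence the here-document):

```shell
psql -h my.db.host.com -U user -d mydb <<'EOF'
\lo_import '/path/on/client/machine/ks.jpg'
INSERT INTO binblob (binary_blob) VALUES (:LASTOID);
EOF
```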


Re: [GENERAL] pg_dump with low priority?

2005-10-24 Thread Aaron Glenn
On 10/24/05, CSN <[EMAIL PROTECTED]> wrote:
>
> nice comes to mind:
>
> nice pg_dump ...
>

as mentioned earlier...nice isn't going to do anything for I/O. PITR
(point in time recovery) would be, in my opinion, the best solution to
this problem.

aaron.glenn

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL on Slash Dot

2005-10-05 Thread Aaron Glenn
On 10/5/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Check it out.
>

http://developers.slashdot.org/developers/05/10/05/1344254.shtml?tid=221&tid=218

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] License question[VASCL:A1077160A86]

2005-10-04 Thread Aaron Glenn
On 10/4/05, Welty, Richard <[EMAIL PROTECTED]> wrote:
> Aaron Glenn wrote:
> >Completely incorrect. You can do whatever you like with PostgreSQL;
> >you just can't sue anyone when things go south.
>
> _and_ you need to preserve the copyright notices.

excellent point.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] License question

2005-10-04 Thread Aaron Smith
I never imagined that I would get so many responses. Thanks for all the 
great information! This really cleared it up for us. It looks like I 
will take a good long look at postgresql and see how well I can make it 
interface into our product. This was extremely good news for me... I had 
thought we were completely sunk until we moved everything over to .Net.


Thanks again!
Aaron



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] License question[VASCL:A1077160A86]

2005-10-04 Thread Aaron Glenn
On 10/4/05, Richmond Dyes <[EMAIL PROTECTED]> wrote:
>  From my understanding of the license for Postgresql, there is no
> licensing fees as long as you are not selling it yourself for a profit.

Completely incorrect. You can do whatever you like with PostgreSQL;
you just can't sue anyone when things go south.

aaron.glenn

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] License question

2005-10-04 Thread Aaron Smith
I did a quick search on the mailing list and didn’t really find my 
answer, so I am posting it to this list…


I’d like to apologize for my lack in understanding all this license 
stuff. I am not an open source developer, I am a commercial developer, 
so this is the first time I have even looked to use open source software 
at all.


We are a small company that develops specialized applications. We use a 
development language that has its own shared file database (like 
access). This is less than ideal in a lot of cases. We don’t really 
charge for our software, we charge for the time we take to make the 
software. Essentially, we have no product, we have a service. However, 
we are a commercial for-profit entity. We have found our built in 
database to be problematic and prone to index corruption. Plus the fact 
when people ask us what we use, we get funny looks, and sometimes told 
we can’t install that on their server because they have never heard of 
it before.


We started to look at alternatives. One of those was the MSDE from 
Microsoft. I started a conversion of a large customer only to find out 
that we hit the 2 gig limit before it even got installed (converted 
their current data). We started to look at prices of the full version of 
SQL Server and the pricing is going to put it out of reach for some of 
our customers. The larger ones will have no problem paying for it, but 
most of our customers are small businesses that just can’t afford it. I 
realize it’s a small price to pay, but you also have to realize that 
with our development language, we have to charge our end users a per 
user runtime fee (that gets paid to the company that makes our 
development language) and a per user fee to connect to any database 
other than the default (again, to the company that made our development 
language). By the time we are done, a 25 user system has a per user cost 
of over $5,000 for the database and the runtime fees. Then when you tack 
on our fees for software modifications, this just put it out of reach 
for the smaller customers. Also keep in mind; we are in process of 
dumping our current language for VB.Net, simply because of these stupid 
fees that our customers have to pay to the company that wrote our 
development environment.


We are looking for alternatives that are reliable, fairly fast, and easy 
to maintain. We immediately thought of MySQL, but the commercial 
licenses have now gone to an annual subscription structure. The basic 
version is fairly cheap, and very reasonable. However, the data 
connector that we have to pay for to get MySQL access will cost our end 
users $3500 for a 25 user system. On the other hand, we can use an ODBC 
connector for $1,000, which puts it back into the somewhat affordable 
range. Once it’s moved to VB.Net, all of these fees will be eliminated, 
but that is pretty far off.


This brings me here. I have heard of PostGreSQL, so it’s not new to me. 
But all this licensing is. And maybe you can help get some clarification 
for the MySQL licensing too.


We will not be selling the database software. We may install it for 
them, though. In fact, most of the time, we will be the ones to install 
it, and we charge for that time. Our customers are very aware that we 
did not write it, nor are we selling it. Even if they are to purchase 
SQL Server, they are the ones to purchase it directly, not us. We 
purchase the connectivity kits that we use to connect to the database, 
and then pass that cost on to the customer. It is purchased in their 
name with their information. We do not charge extra or tack on any fees. 
We do it this way so that no screw ups are made… The connectivity kit is 
the one that connects to the database; we connect to the connectivity 
kit through our software.


Knowing all this, what do we need to purchase, what can we do and what 
can’t we do? It’s hard getting a straight answer from anyone that is why 
I am here. If we can’t do it, we won’t. If we can save our customers 
some money while getting them really good options and software, we would 
like to do that. But on the same token, we don’t to do anything unfair 
or illegal.



Thank you for your time.

Aaron



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] oids and pg_class_oid_index constraint

2005-08-03 Thread Aaron Harsh
> Tom Lane <[EMAIL PROTECTED]> 08/03/05 1:33 PM >>>
> "Aaron Harsh" <[EMAIL PROTECTED]> writes:
> > We've just recently started seeing sporadic constraint violations on system 
> > tables.  For example:
> >   duplicate key violates unique constraint "pg_class_oid_index" [for 
> > Statement "CREATE TEMPORARY TABLE...
> 
> OID wraparound would explain that ...

Fantastic.  Will our plan ('set without oids', pg_dump, pg_restore) take care 
of the problem?

> > and the occasional
> >   unexpected chunk number 0 (expected 1) for toast value
> 
> ... but not that.  The latter might possibly be due to a corrupt index
> on a toast table.  If you have a reproducible way of causing it, I'd
> definitely love to see it.

We've seen the error show up in twice in our serverlog, but I'm not sure what 
caused the toast tables to get in that state.  Is there anything helpful we 
could do with the table next time (save the relevant data/base files; run a 
query against the toast tables)?

Thanks for the advice

-- 
Aaron Harsh
[EMAIL PROTECTED]
503-284-7581 x347


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] oids and pg_class_oid_index constraint violations

2005-08-03 Thread Aaron Harsh
We've just recently started seeing sporadic constraint violations on system 
tables.  For example:

  duplicate key violates unique constraint "pg_class_oid_index" [for Statement 
"CREATE TEMPORARY TABLE...

and

  duplicate key violates unique constraint "pg_toast_4292803267_index" [for 
Statement "INSERT INTO...

and the occasional

  unexpected chunk number 0 (expected 1) for toast value

I suspect that the problem is due to oid reuse, but I'd like to get a second 
opinion.  We've just recently grown to a billion tuples in our database (with 
all our tables built with oids), and the volume of activity makes me think it's 
likely that the majority of records have been updated three or four times.

Our current plan for dealing with the problem is to 'SET WITHOUT OIDS' on our 
tables, then dump & restore the database.

Does it seem reasonable that oid reuse could cause the constraint violations?  
Does it seem likely that this would fix the problem?  Is there an easier way to 
solve the problem?
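For reference, the per-table statement in that plan is the following (table name is a placeholder):

```sql
-- Drops the hidden oid column so the table stops consuming the
-- shared 32-bit OID counter; any application use of the oid column
-- must first be migrated to an ordinary sequence-backed key.
ALTER TABLE some_table SET WITHOUT OIDS;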

Thanks in advance

-- 
Aaron Harsh
[EMAIL PROTECTED]
503-284-7581 x347


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] TSearch2 & Phonemes

2005-05-19 Thread Stephen Aaron Knott
Hi,
I hope someone can help me out...  I read some research a little while 
ago about someone using TSearch2 but searching using phonemes; but I 
cannot find that article anymore.

Does anyone remember it & can point me to a link of the article?

Thanks in advance.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] inherit with foreign key reference

2005-05-06 Thread Aaron Steele
dear readers,
i've created a simple Fooey table that inherits from Foo:
!--!
CREATE TABLE Foo(
fooid   serial UNIQUE,
footype text);
CREATE TABLE Fooey(
data    text)
INHERITS(Foo);
!--!
next i try to create a Bar table that references Fooey's fooid (inherited from 
Foo) as a foreign key:
!--!
CREATE TABLE Bar(
fooeyid int REFERENCES Fooey(fooid));
!--!
unfortunately i get the following error:
!--!
ERROR:  there is no unique constraint matching given keys for referenced table 
"pagesrc"
!--!
thoughts?
thanks,
aaron
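For context on the error: UNIQUE constraints are not inherited in PostgreSQL, so Fooey has no unique index on fooid for the foreign key to target. Re-declaring the constraint on the child makes the REFERENCES resolve, with the usual caveat that neither the constraint nor the foreign key spans the whole inheritance tree. A hedged sketch:

```sql
CREATE TABLE Foo (
    fooid   serial UNIQUE,
    footype text
);

-- Child tables inherit the columns but not the parent's
-- indexes/constraints, so declare UNIQUE again here.
CREATE TABLE Fooey (
    data text,
    UNIQUE (fooid)
) INHERITS (Foo);

CREATE TABLE Bar (
    fooeyid int REFERENCES Fooey (fooid)
);
```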
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] remote tcp connection problem PG 8.0.1

2005-05-04 Thread Aaron Steele
i'm having the same problem running pg8.0.1 on redhat enterprise linux 
AS 3 (2.4.21-27.0.4.ELsmp).

shutting off my firewall via 'service iptables stop' solves the 
problem... what needs to be modified in the iptables to allow remote pg 
connections?
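A common minimal rule for this is sketched below; the source subnet is an assumption, and the exact persistence step varies by distro (e.g. `service iptables save` on Red Hat systems):

```shell
# Allow inbound PostgreSQL connections from one trusted subnet only.
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 5432 -j ACCEPT
```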

Richard Huxton wrote:
Michael Korotun wrote:
Hi Guys,
I can't get working remote tcp connections on default port 5432. 
Environment is as follows

OS: Fedora Core 3
DB: Postgresql 8.0.1
1) The listen_addresses is set to '*' in postgresql.conf
2) pg_hba.conf is edited with client host ip, (the one which tries to
establish a connection) 

1. Turn connection logging on in your postgresql.conf and restart 
postgresql.
2. From the database server, try "telnet localhost 5432" - it should 
connect (and display nothing). Type "A" and hit return twice - you 
should be disconnected. Check your logs and there should be a message 
about an invalid startup packet.
3. Do the same from your client, "telnet DB-SERVER-IP-ADDRESS 5432" 
and repeat the test.

If step 3 works, then there's no problem with the server. If you can't 
connect at step 3 and PG doesn't log anything then you probably have a 
firewall in the way.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Strange interaction of union and expressions

2005-04-20 Thread Aaron Bingham
Kevin Murphy <[EMAIL PROTECTED]> writes:

> On Apr 20, 2005, at 1:24 PM, Aaron Bingham wrote:
>> create table a (foo varchar);
>> insert into a (foo) values ('baz');
>> create table b (foo varchar);
>> insert into b (foo) values ('woof');
>> select '"' || foo || '"' as foo
>> from (select foo from a) as bar
>> union select foo from b;
>>
>
> No, it's doing what you asked.
>
> You mean:
>
> select '"' || foo || '"' from (select foo from a union select foo from 
> b) as subq;
>
> Right?

Ah, I get it now.  Thanks!

-- 

Aaron Bingham
Software Engineer
Cenix BioScience GmbH



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Strange interaction of union and expressions

2005-04-20 Thread Aaron Bingham
Hello all,

We noticed some very odd behavior today with Postgres 7.4.5.

Running the following SQL:

create table a (foo varchar);
insert into a (foo) values ('baz');
create table b (foo varchar);
insert into b (foo) values ('woof');
select '"' || foo || '"' as foo 
from (select foo from a) as bar 
union select foo from b;

Produces this output:

  foo  
---
 "baz"
 woof
(2 rows)
   
I would expect the following instead:

  foo  
---
 "baz"
 "woof"
(2 rows)

Is this a known issue?  Has it been fixed?

Thanks,

-- 

Aaron Bingham
Software Engineer
Cenix BioScience GmbH



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] PostgreSQL as a filesystem

2005-04-18 Thread Aaron Glenn
On 4/18/05, Christopher Nelson <[EMAIL PROTECTED]> wrote:
>  
> This isn't a high-priority question. 
> 

and if I can latch on to this non-priority question with another in a
similar vein: what sort of RDBMS do huge transactional systems like
Tandy's use? I've read that everything is a database, similar to the
unix paradigm "everything is a file".

Just curious,

aaron.glenn

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] What talk would get you to go to OSCon?

2005-02-08 Thread Aaron Glenn
On Tue, 8 Feb 2005 14:29:08 -0500, Robert Treat
<[EMAIL PROTECTED]> wrote:
> Slony

Yes.

> High Availability

Yes.

> If you have other ideas please feel free to chime in, we'd really like to see
> an uptick in postgresql attendees.

Will Bruce and Tom be attending this year like they did in 2002?

Regards,
aaron.glenn

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Duplicate counting

2005-01-20 Thread Aaron Bingham
Jiří Němec wrote:
Hello all,

I wrote a function which computes the price of a product from the retail
price and a discount. It works. But I need to compute the price with tax
for the same product. The best way is to reuse the computed price and add
only the tax. I would like to do it this way:

SELECT count_price(retail, discount) AS price, count_price_tax(price,
tax) FROM foo.

But PostgreSQL reports that the "price" column doesn't exist. It doesn't
exist as a column, but it is computed by the first call to the
"count_price()" function. Is there some way to avoid computing these
prices twice and just use the already-computed price?

It's not quite clear to me what count_price and count_price_tax are 
supposed to do.  Does count_price_tax return the equivalent of 
price*(1.0+tax) (or maybe price*tax)?  If so, one way to do it is to use 
a sub-query like this:

SELECT price, count_price_tax(price, tax) FROM (SELECT 
count_price(retail, discount) AS price, tax FROM foo) AS bar;
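For illustration, the subquery trick can be sketched end-to-end with Python's
built-in sqlite3 module; the count_price/count_price_tax bodies below are
made-up stand-ins (the real function definitions were never shown in this
thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical stand-ins for the user's count_price / count_price_tax functions.
conn.create_function("count_price", 2, lambda retail, discount: retail * (1.0 - discount))
conn.create_function("count_price_tax", 2, lambda price, tax: price * (1.0 + tax))

conn.execute("CREATE TABLE foo (retail REAL, discount REAL, tax REAL)")
conn.execute("INSERT INTO foo VALUES (100.0, 0.5, 0.25)")

# A select-list alias ("price") is not visible to other expressions in the
# same SELECT, so the subquery computes it once and the outer query reuses it.
row = conn.execute("""
    SELECT price, count_price_tax(price, tax)
    FROM (SELECT count_price(retail, discount) AS price, tax FROM foo) AS bar
""").fetchone()
print(row)  # (50.0, 62.5)
```

The key point: the inner SELECT materializes the computed price under the
name "price", so count_price is evaluated only once per row.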

--
----
Aaron Bingham
Application Developer
Cenix BioScience GmbH

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Database "template1" does not exist in the system catalog.

2005-01-10 Thread Aaron Mark
Unfortunately, I can't seem to connect to any other database.  I only had 
one non-system database so I tried that, and that I was thinking there was a 
template0, so I tried that as well.

Thanks in advance for any help you can provide!
Aaron
From: Tom Lane <[EMAIL PROTECTED]>
To: "Aaron Mark" <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database "template1" does not exist in the system catalog.
Date: Tue, 11 Jan 2005 00:33:09 -0500

"Aaron Mark" <[EMAIL PROTECTED]> writes:
> When I try to connect to template1 via psql, I get the following error
> message:
> psql: FATAL:  Database "template1" does not exist in the system catalog.
Hm, can you connect to any other databases?  If so, what does "select *
from pg_database" show?
regards, tom lane
---(end of broadcast)---
TIP 8: explain analyze is your friend



[GENERAL] Database "template1" does not exist in the system catalog

2005-01-10 Thread Aaron Mark
I apologize if this is a duplicate of a message I just sent.  The message 
didn't appear to go through.  I seem to have encountered a major corruption 
error.  I am currently running PostgreSQL 7.3.6.  Here is an example of an 
error I received:

# psql -h 127.0.0.1 -p 5432 -U postgres template1
psql: FATAL:  Database "template1" does not exist in the system catalog.
In digging around the list, I have tried running pgfsck:
# pgfsck -s 8192 -D /Applications/altona/data-corruption template1
-- Detected database format 7.3
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 47 A7 CC 00 00 00 09 00 E8 01 5C 20 00 20 01
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 47 FE CC 00 00 00 09 00 E8 01 58 20 00 20 01
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 48 35 88 00 00 00 09 00 E8 0E E8 20 00 20 01
Couldn't find class 'pg_attribute'

I'm 99% sure the block size is 8192, but I could be wrong.  I proceeded to 
try out pg_filedump on data-corrupt/base/1/1259, but I didn't see anything 
out of the ordinary.  Can't say that I know what I was looking for, though.

Any ideas on what I can try to potentially fix this problem and/or recover 
any of this data?

Thanks,
Aaron

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] Database "template1" does not exist in the system catalog.

2005-01-10 Thread Aaron Mark
When I try to connect to template1 via psql, I get the following error 
message:

psql: FATAL:  Database "template1" does not exist in the system catalog.
I get a similar error when trying to do a pg_dumpall.
In searching the list and trying to find a good place to start, I downloaded 
pgfsck and I tried the following:

# ./pgfsck -s 8192 -D data-corrupt template1
-- Detected database format 7.3
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 47 A7 CC 00 00 00 09 00 E8 01 5C 20 00 20 01
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 47 FE CC 00 00 00 09 00 E8 01 58 20 00 20 01
-- Table pg_class(1259):Page 0: Incorrect value in header (incorrect 
blocksize?)
-- 00 00 00 00 00 48 35 88 00 00 00 09 00 E8 0E E8 20 00 20 01
Couldn't find class 'pg_attribute'

I am 99% sure the block size truly is 8192, but I could be wrong; I tried 
other combinations and didn't get any better results.
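For what it's worth, the 20-byte header dumps that pgfsck printed above decode
cleanly if you assume the 7.3 page-header layout (LSN, startup id, lower,
upper, special, pagesize/version) and big-endian byte order, consistent with
Mac OS X on PowerPC. A quick sketch with Python's struct module, which also
confirms the 8192 block size:

```python
import struct

# First header dump printed by pgfsck for pg_class (1259), page 0.
raw = bytes.fromhex("00 00 00 00 00 47 A7 CC 00 00 00 09 "
                    "00 E8 01 5C 20 00 20 01".replace(" ", ""))

# Assumed 7.3 PageHeaderData, big-endian: pd_lsn (xlogid, xrecoff),
# pd_sui, pd_lower, pd_upper, pd_special, pd_pagesize_version.
xlogid, xrecoff, sui, lower, upper, special, psv = struct.unpack(">IIIHHHH", raw)

print(hex(xrecoff))   # 0x47a7cc -- the LSN record offset
print(lower, upper)   # 232 348  -- free-space boundaries within the page
print(psv & 0xFF00)   # 8192     -- the block size really is 8192
print(psv & 0x00FF)   # 1        -- page layout version
```

Since the decoded lower/upper/special values are self-consistent for an
8192-byte page, the "incorrect blocksize?" warning is unlikely to mean the
block size guess was wrong.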

I wasn't really sure what to make of this error, but noticed the 1259 and 
figured that was a file name.  I then tried downloading pg_filedump and 
tried the following:

# ./pg_filedump data-corrupt/base/1/1259
(See results below)
Am I headed in the right direction?  Any chance of recovering some/all of 
the data?

Thanks,
Aaron
*
* PostgreSQL File/Block Formatted Dump Utility - Version 1.1
*
* File: /Applications/altona/data-corrupt/base/1/1259
* Options used: None
*
* Dump created on: Tue Jan 11 04:11:26 2005
*
Block0 **
 -
Block Offset: 0x Offsets: Lower 232 (0x00e8)
Block: Size 8192  Version 1  Upper 348 (0x015c)
LSN:  logid  0 recoff 0x0047a7cc  Special  8192 (0x2000)
Items:   53   Free Space:  116
Length (including item array): 236
 --
Item   1 -- Length:  180  Offset: 4308 (0x10d4)  Flags: USED
Item   2 -- Length:  180  Offset: 4128 (0x1020)  Flags: USED
Item   3 -- Length:  180  Offset: 3948 (0x0f6c)  Flags: USED
Item   4 -- Length:  180  Offset: 3768 (0x0eb8)  Flags: USED
Item   5 -- Length:  180  Offset: 3588 (0x0e04)  Flags: USED
Item   6 -- Length:  180  Offset: 3408 (0x0d50)  Flags: USED
Item   7 -- Length:  180  Offset: 3228 (0x0c9c)  Flags: USED
Item   8 -- Length:  152  Offset: 8040 (0x1f68)  Flags: USED
Item   9 -- Length:  180  Offset: 3048 (0x0be8)  Flags: USED
Item  10 -- Length:  180  Offset: 2868 (0x0b34)  Flags: USED
Item  11 -- Length:  180  Offset: 2688 (0x0a80)  Flags: USED
Item  12 -- Length:  180  Offset: 2508 (0x09cc)  Flags: USED
Item  13 -- Length:  180  Offset: 2328 (0x0918)  Flags: USED
Item  14 -- Length:  180  Offset: 2148 (0x0864)  Flags: USED
Item  15 -- Length:  180  Offset: 1968 (0x07b0)  Flags: USED
Item  16 -- Length:  180  Offset: 1788 (0x06fc)  Flags: USED
Item  17 -- Length:  180  Offset: 1608 (0x0648)  Flags: USED
Item  18 -- Length:  180  Offset: 1428 (0x0594)  Flags: USED
Item  19 -- Length:  180  Offset: 1248 (0x04e0)  Flags: USED
Item  20 -- Length:  180  Offset: 1068 (0x042c)  Flags: USED
Item  21 -- Length:  180  Offset:  888 (0x0378)  Flags: USED
Item  22 -- Length:  180  Offset:  708 (0x02c4)  Flags: USED
Item  23 -- Length:  180  Offset:  528 (0x0210)  Flags: USED
Item  24 -- Length:  180  Offset:  348 (0x015c)  Flags: USED
Item  25 -- Length:0  Offset: 4460 (0x116c)  Flags: 0x00
Item  26 -- Length:0  Offset: 4312 (0x10d8)  Flags: 0x00
Item  27 -- Length:0  Offset: 4164 (0x1044)  Flags: 0x00
Item  28 -- Length:0  Offset: 4016 (0x0fb0)  Flags: 0x00
Item  29 -- Length:0  Offset: 3868 (0x0f1c)  Flags: 0x00
Item  30 -- Length:  148  Offset: 7892 (0x1ed4)  Flags: USED
Item  31 -- Length:  148  Offset: 7744 (0x1e40)  Flags: USED
Item  32 -- Length:  148  Offset: 7596 (0x1dac)  Flags: USED
Item  33 -- Length:  148  Offset: 7448 (0x1d18)  Flags: USED
Item  34 -- Length:  148  Offset: 7300 (0x1c84)  Flags: USED
Item  35 -- Length:  148  Offset: 7152 (0x1bf0)  Flags: USED
Item  36 -- Length:  148  Offset: 7004 (0x1b5c)  Flags: USED
Item  37 -- Length:  148  Offset: 6856 (0x1ac8)  Flags: USED
Item  38 -- Length:  148  Offset: 6708 (0x1a34)  Flags: USED
Item  39 -- Length:  148  Offset: 6560 (0x19a0)  Flags: USED
Item  40 -- Length:  148  Offset: 6412 (0x190c)  Flags: USED
Item  41 -- Length:  148  Offset: 6264 (0x1878)  Flags: USED
Item  42 -- Length:  148  Offset: 6116 (0x17e4)  Flags: USED
Item  43 -- Length:  148  Offset: 5968 (0x1750)  Flags: USED
Item  44 -- Length:  148  Offset: 5820 (0x16bc)  Flags: USED
Item  45 -- Length:  148  Offset: 5672 (0x1628)  Flags: USED
Item  46 -- Length:  148  Offset: 5524 (0x1594)  Flags: USED
Item  47 -- Length:  148  Offset: 5376 (0x1500)  Flags: USED
Item  48 -- Length:  148  Offset: 5228 (0x146c)  Flags: USED
Item  49 -- Length:  148  Offs

[GENERAL] warning: pg_query(): Query failed

2005-01-10 Thread Aaron Steele
hi bruno,
yeah, the postgresql problem was fixed by changing IF statements to 
CASE statements in forum.module code:

// remove:
-- $topic = db_fetch_object(db_query_range('SELECT DISTINCT(n.nid), 
l.last_comment_timestamp, IF(l.last_comment_uid, cu.name, 
l.last_comment_name) as last_comment_name, l.last_comment_uid FROM 
{node} n ' . node_access_join_sql() . ", {node_comment_statistics} l 
/*! USE INDEX (node_comment_timestamp) */, {users} cu, {term_node} r 
WHERE n.nid = r.nid AND r.tid = %d AND n.status = 1 AND n.type = 
'forum' AND l.last_comment_uid = cu.uid AND n.nid = l.nid AND " . 
node_access_where_sql() . ' ORDER BY l.last_comment_timestamp DESC', 
$forum->tid, 0, 1));

// add:
++ $topic = db_fetch_object(db_query_range('SELECT DISTINCT(n.nid), 
l.last_comment_timestamp, CASE WHEN l.last_comment_uid = 1 THEN cu.name 
ELSE l.last_comment_name END as last_comment_name, l.last_comment_uid 
FROM {node} n ' . node_access_join_sql() . ", {node_comment_statistics} 
l, {users} cu, {term_node} r WHERE n.nid = r.nid AND r.tid = %d AND 
n.status = 1 AND n.type = 'forum' AND l.last_comment_uid = cu.uid AND 
n.nid = l.nid AND " . node_access_where_sql() . ' ORDER BY 
l.last_comment_timestamp DESC', $forum->tid, 0, 1));

thanks for all the support!
aaron
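A side note on the translation: MySQL's IF(x, a, b) treats any nonzero x as
true, so the fully general standard-SQL equivalent is CASE WHEN x <> 0 THEN a
ELSE b END (the "= 1" comparison in the patch above matches only uid 1). A
minimal sketch with Python's sqlite3, using toy tables loosely modeled on
Drupal's schema (names and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE comments
                (last_comment_uid INTEGER, last_comment_name TEXT)""")
conn.executemany("INSERT INTO comments VALUES (?, ?)",
                 [(0, 'Anonymous'), (7, 'ignored')])
conn.execute("CREATE TABLE users (uid INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (7, 'aaron')")

# Standard-SQL equivalent of MySQL's IF(uid, name, fallback): the uid is
# "true" when nonzero, so the general translation tests <> 0, not = 1.
rows = conn.execute("""
    SELECT CASE WHEN c.last_comment_uid <> 0 THEN u.name
                ELSE c.last_comment_name END
    FROM comments c LEFT JOIN users u ON u.uid = c.last_comment_uid
    ORDER BY c.last_comment_uid
""").fetchall()
print([r[0] for r in rows])  # ['Anonymous', 'aaron']
```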
On Jan 6, 2005, at 10:25 PM, Bruno Wolff III wrote:
On Thu, Jan 06, 2005 at 17:32:30 -0800,
  Aaron Steele <[EMAIL PROTECTED]> wrote:
hi bruno,
turns out that l.last_comment_uid and l.last_comment_name are integer
and varchar respectively. since i'm using 7.4.1 with strict boolean
casting, is there a better alternative to instantiating a different
version of pgsql on my server?
A better solution is fixing your code. What do you expect it to do
anyhow? Once you figure out what you want it to do, you should be able
to write a boolean expression that is true, false or null when you
want it to be. In fact, from what I saw it seems that you could just use
a CASE expression and skip the function call altogether.
---(end of 
broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html



Re: [GENERAL] warning: pg_query(): Query failed

2005-01-06 Thread Aaron Steele
hi bruno,
turns out that l.last_comment_uid and l.last_comment_name are integer 
and varchar respectively. since i'm using 7.4.1 with strict boolean 
casting, is there a better alternative to instantiating a different 
version of pgsql on my server?

In the php code you showed the following fragment:
IF(l.last_comment_uid,  cu.name,  l.last_comment_name)
I doubt that l.last_comment_uid is a boolean based on its name.
If it isn't this is probably the source of your problem.
---(end of 
broadcast)---
TIP 1: subscribe and unsubscribe commands go to 
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] warning: pg_query(): Query failed

2005-01-05 Thread Aaron Steele
hi steven,
here's another file where the actual SQL query (the one in the error 
message) is defined. it's in the forum_get_forums() function. bruno 
noticed that the IF (via pl/pgsql) might not like the boolean casting 
to "2" since 7.4 is more strict about casting? does this help?


/**
 * @file
 * Enable threaded discussions about general topics.
 */
/**
 * Implementation of hook_help().
 */
function forum_help($section) {
  switch ($section) {
case 'admin/help#forum':
  return t("
  Creating a forum
  The forum module uses taxonomy to organize itself. To create a 
forum you first have to create a taxonomy 
vocabulary. When doing this, choose a sensible name for it (such as 
\"fora\") and make sure under \"Types\" that \"forum\" is selected. 
Once you have done this, add some terms to 
it. Each term will become a forum. If you fill in the description 
field, users will be given additional information about the forum on 
the main forum page. For example: \"troubleshooting\" - \"Please ask 
your questions here.\"
  When you are happy with your vocabulary, go to administer » settings » forum and set 
Forum vocabulary to the one you have just created. 
There will now be fora active on the site. For users to access them 
they must have the \"access content\" permission and to create a topic they must 
have the \"create forum topics\" permission. These permissions can be set in 
the permission pages.
  Icons
  To disable icons, set the icon path as blank in administer » settings » forum.
  All files in the icon directory are assumed to be images. You 
may use images of whatever size you wish, but it is recommended to use 
15x15 or 16x16.", array("%taxonomy" => 
url('admin/taxonomy/add/vocabulary'), '%taxo-terms' => 
url('admin/taxonomy'), '%forums' => url('admin/settings/forum'), 
'%permission' => url('admin/user/configure/permission')));
case 'admin/modules#description':
  return t('Enable threaded discussions about general topics.');
case 'admin/settings/forum':
  return t("Forums are threaded discussions based on the taxonomy 
system.  For the forums to work, the taxonomy module has to be 
installed and enabled.  When activated, a taxonomy vocabulary (eg. 
\"forums\") needs to be created and bound to 
the node type \"forum topic\".", array('%created' => 
url('admin/taxonomy/add/vocabulary')));
case 'node/add#forum':
  return t('A forum is a threaded discussion, enabling users to 
communicate about a particular topic.');
  }

  // The 'add forum topic' form takes a variable argument:
  if (substr($section, 0, 14) == 'node/add/forum'){
return variable_get('forum_help', '');
  }
}
/**
 * Implementation of hook_node_name().
 */
function forum_node_name($node) {
  return t('forum topic');
}
/**
 * Implementation of hook_access().
 */
function forum_access($op, $node) {
  if ($op == 'create') {
return user_access('create forum topics');
  }
}
/**
 * Implementation of hook_perm().
 */
function forum_perm() {
  return array('create forum topics');
}
/**
 * Implementation of hook_settings().
 */
function forum_settings() {
  if (module_exist('taxonomy')) {
$vocs[0] = '<'. t('none') .'>';
foreach (taxonomy_get_vocabularies('forum') as $vid => $voc) {
  $vocs[$vid] = $voc->name;
}
if ($voc) {
  $group  = form_select(t('Forum vocabulary'), 
'forum_nav_vocabulary', variable_get('forum_nav_vocabulary', ''), 
$vocs, t("The taxonomy vocabulary that will be used as the navigation 
tree.  The vocabulary's terms define the forums."));
  $group .= _taxonomy_term_select(t('Containers'), 
'forum_containers', variable_get('forum_containers', array()), 
variable_get('forum_nav_vocabulary', ''), t('You can choose forums 
which will not have topics, but will be just containers for other 
forums.  This lets you both group and nest forums.'), 1, '<'. t('none') 
.'>');
  $output = form_group(t('Forum structure settings'), $group);

  $group  = form_textarea(t('Explanation or submission 
guidelines'), 'forum_help', variable_get('forum_help', ''), 70, 5, 
t('This text will be displayed at the top of the forum submission form. 
 It is useful for helping or instructing your users.'));
  $group .= form_textfield(t('Forum icon path'), 'forum_icon_path', 
variable_get('forum_icon_path', ''), 30, 255, t('The path to the forum 
icons.  Leave blank to disable icons.  Don\'t add a trailing slash.  
Default icons are available in the "misc" directory.'));
  $group .= form_select(t('Hot topic threshold'), 
'forum_hot_topic', variable_get('forum_hot_topic', 15), 
drupal_map_assoc(array(5, 10, 15, 20, 25, 30, 35, 40, 50, 60, 80, 100, 
1)), t('The number of posts a topic must have to be considered 
hot.'));
  $group .= form_select(t('Topics per page'), 'forum_per_page', 
variable_get('forum_per_page', 25), drupal_map_assoc(array(10, 25, 50, 
75, 100)), t('The default number of topics displayed per page; links to 
browse older messages a

Re: [GENERAL] warning: pg_query(): Query failed

2005-01-05 Thread Aaron Steele
hi bruno,
would it be useful to see the IF PLSQL function, or would you recommend 
a modification to the database.pgsql.inc file?

On Jan 5, 2005, at 10:52 AM, Bruno Wolff III wrote:
On Wed, Jan 05, 2005 at 10:17:39 -0800,
  Aaron Steele <[EMAIL PROTECTED]> wrote:
hi,
warning: pg_query(): Query failed: ERROR: invalid input syntax for 
type
boolean: "2"
CONTEXT: PL/pgSQL function "if" line 2 at if in
/Library/WebServer/Documents/dmap/includes/database.pgsql.inc on line
104.

any ideas?
The actual problem is in the PLSQL function named IF that you didn't 
show
us.

Recent versions of postgres have tightened up casting to boolean.
Perhaps in the past '2'::boolean might have worked, but in 7.4 this
won't work. '1' or '0' are valid boolean representations.
---(end of 
broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if 
your
  joining column's datatypes do not match


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] warning: pg_query(): Query failed

2005-01-05 Thread Aaron Steele
hi,
i'm setting up a forum that's connected to postgresql 7.4.2 (via 
drupal.org framework) and i'm getting some errors. here's the link, 
followed by the errors verbatim:
http://128.32.146.140/dmap/?q=forum

warning: pg_query(): Query failed: ERROR: invalid input syntax for type 
boolean: "2"
CONTEXT: PL/pgSQL function "if" line 2 at if in 
/Library/WebServer/Documents/dmap/includes/database.pgsql.inc on line 
104.

user error:
query: SELECT DISTINCT(n.nid), l.last_comment_timestamp, 
IF(l.last_comment_uid, cu.name, l.last_comment_name) as 
last_comment_name, l.last_comment_uid FROM node n , 
node_comment_statistics l /*! USE INDEX (node_comment_timestamp) */, 
users cu, term_node r WHERE n.nid = r.nid AND r.tid = 1 AND n.status = 
1 AND n.type = 'forum' AND l.last_comment_uid = cu.uid AND n.nid = 
l.nid AND '1' ORDER BY l.last_comment_timestamp DESC LIMIT 1 OFFSET 0 
in /Library/WebServer/Documents/dmap/includes/database.pgsql.inc on 
line 121.

any ideas?
here's my system:
- drupal v4.5.1
- mac os x server v10.3.5
- postgresql v7.4.2
- php v4.3.6
thanks,
aaron
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[GENERAL] Unable to read data from the transport connection.

2004-12-14 Thread Aaron
I am attempting to connect to a PostgreSQL database via C# or VB.net 
using the Npgsql .Net data provider, but I get the following error when 
I attempt to open the connection:

***
An unhandled exception of type 'System.IO.IOException' occurred in 
npgsql.dll

Additional information: Unable to read data from the transport connection.
***
Any ideas?
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


[GENERAL] Free PostgreSQL Training, Philadelphia, Oct 30

2004-10-20 Thread Aaron Mulder
All,
My company (Chariot Solutions) is sponsoring a day of free
PostgreSQL training by Bruce Momjian (one of the core PostgreSQL
developers).  The day is split into 2 sessions (plus a Q&A session):

 * Mastering PostgreSQL Administration
 * PostgreSQL Performance Tuning

Registration is required, and space is limited.  The location is
Malvern, PA (suburb of Philadelphia) and it's on Saturday Oct 30.  For
more information or to register, see

http://chariotsolutions.com/postgresql.jsp

Thanks,
Aaron

P.S. If you're planning to take the train (from Philly, NYC, etc.) please 
send me a note off-list so we can arrange to get you from the station to 
the event -- it's close but not really walking distance.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] download postgreql problem

2004-10-19 Thread Aaron Glenn
On Tue, 19 Oct 2004 16:24:06 -0400, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
>  
> Internet explorer. 
>  

That would be your problem right there. IE's handling of FTP sessions
is notoriously quirky. Do yourself a favor and download a shareware
FTP program, or open up a DOS prompt.

Regards,
aaron.glenn

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] clustering

2004-10-18 Thread Aaron Glenn
Google "slony"

Regards,
aaron.glenn


On Thu, 14 Oct 2004 10:29:11 +0530, Nageshwar Rao
<[EMAIL PROTECTED]> wrote:
>  
>  
> 
> Hi, 
> 
> We would like use Postgresql as our database. For high availability is it
> possible to cluster DB in Postgresql. Appreciate if you can let me know how
> this can be achieved. 
> 
>  Else is there any other way to achieve High Availability in POstgresql as
> this is mission critical system. 
> 
>   
> 
> Thanks 
> 
> Nageshwar Rao 
> 
> 91-80-25522059

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL CE started

2004-10-13 Thread Aaron Glenn
What is the PostgreSQL project's official stance/view/comment on this?


aaron.glenn


On Thu, 14 Oct 2004 10:48:26 +0900 (JST), Tatsuo Ishii
<[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> We, Software Research Associates, Inc., have started "PostgreSQL CE"
> (PostgreSQL Certificated Engineer), on Oct 1st. PostgreSQL CE is an
> entry level PostgreSQL engineer certification program. We expect at
> least several hundreds of people will take the examin by April
> 2005. We also hope that PostgreSQL CE makes PostgreSQL more and more
> popular.
> 
> For the present, exams are held in about 100 test centers in Japan
> and the exam itself is written in Japanese. However, since we have a
> partnership with Pearson VUE (http://www.pearsonvue.com/) to operate
> the exam, it is possible that PostgreSQL CE could take effect in the
> US and/or the rest of the world.
> 
> Software Research Associates, Inc.(http://www.sra.co.jp/index-en.html)
> is headquartered in Tokyo and is doing lots of PostgreSQL businesses.
> --
> Tatsuo Ishii
> 
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
>


