[GENERAL] ER tool that supports domains and custom types?

2011-03-09 Thread Arturo Perez
Hi all,

Anyone have a suggestion for a graphical ER tool that can work with 
Postgresql's domains and custom types?

I was using Mogwai designer but it can't reverse engineer a DB with the 
above.

tia, 
arturo

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ER tool that supports domains and custom types?

2011-03-09 Thread Arturo Perez
In article
<f260108ab230d44cbaa660b47cc4110a06eb9...@blrisbe01.mlcorp.net>,
 Jaiswal Dhaval Sudhirkumar <jaiswa...@microland.com> wrote:

> E-R data modeling

Couldn't find the "E-R data modeling" tool, and Open System Architect
doesn't support Mac OS.  I forgot to mention the latter in my original
post.

dbwrench fails to connect to my database even though psql connects just
fine.  Go figure.

-arturo




[GENERAL] FTS and words near one another

2011-02-28 Thread Arturo Perez
Hi all,

Is there currently any way in 9.0 to use FTS to search for words that
are next to each other?

For example, I want to search for "Abraham" next to "Lincoln" and ignore
things like 'Abraham Johnson who lives in Lincoln Nebraska'.
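A common workaround on 9.0, before the `<->` phrase operator arrived in
9.6, is to let FTS narrow the candidates cheaply and then recheck
adjacency with a regular expression (the table and column names here are
illustrative, not from the post):

```sql
-- FTS finds rows containing both words via the index; the regex
-- recheck keeps only rows where they are actually adjacent.
SELECT *
FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'abraham & lincoln')
  AND body ~* 'abraham\s+lincoln';
```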

tia,
arturo



Re: [GENERAL] temporal period type and select distinct gives equality error

2011-01-27 Thread Arturo Perez
In article <1296066333.11513.364.camel@jdavis>,
 Jeff Davis <pg...@j-davis.com> wrote:

> On Wed, 2011-01-26 at 09:27 -0500, Arturo Perez wrote:
>> and yet when I do a select distinct with a join I get
>>
>> QueryLogger - *** error.
>> org.postgresql.util.PSQLException: ERROR: could not identify an equality
>> operator for type period
>>
>> If I leave the period column out of the select list then it works fine.
>>
>> Did I install it wrong?  Or is something else going on?
>>
>> tia
>> arturo
>>
>> No suggestions from anyone?  Anyone out there? :-)
>
> Sorry, I missed this the first time.
>
> I have added support for this in the CVS repository, but have not
> included it in the latest release. The thing that's missing from the
> released files is a btree opclass, which is what postgresql uses to
> find the right equality operator for DISTINCT.


I thought I saw that in CVS, but when I checked it out and installed it
the error did not go away.

Let me try that again.  Do you think I'd need to reinstall the server
itself to ensure the proper behavior?
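As a sketch of the missing piece: DISTINCT needs a *default btree
operator class* for the type, not merely an "=" operator.  For a type
like period the declaration looks roughly like this (the support
function name period_cmp is an assumption; the real one ships with the
temporal module):

```sql
-- Hedged sketch: a default btree opclass is what lets DISTINCT,
-- GROUP BY and ORDER BY find the type's comparison operators.
CREATE OPERATOR CLASS period_ops
    DEFAULT FOR TYPE period USING btree AS
        OPERATOR 1 < ,
        OPERATOR 2 <= ,
        OPERATOR 3 = ,
        OPERATOR 4 >= ,
        OPERATOR 5 > ,
        FUNCTION 1 period_cmp(period, period);
```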

tia,
arturo




Re: [GENERAL] temporal period type and select distinct gives equality error

2011-01-26 Thread Arturo Perez
In article <arturo-26a95e.00305123012...@news.gmane.org>,
 art...@pleeque.com (Arturo Perez) wrote:

> Hi all,
>
> I'm using the temporal contrib package that creates the period type and
> a bunch of operators in postgresql 8.4.5.  I plan to use the same code
> against 9.0 so if anything differs between the two regarding this issue
> please let me know.
>
> In psql the \do command shows this
>  public | = | period | period | boolean |
>
> and yet when I do a select distinct with a join I get
>
> QueryLogger - *** error.
> org.postgresql.util.PSQLException: ERROR: could not identify an equality
> operator for type period
>
> If I leave the period column out of the select list then it works fine.
>
> Did I install it wrong?  Or is something else going on?
>
> tia
> arturo

No suggestions from anyone?  Anyone out there? :-)

-arturo



[GENERAL] temporal period type and select distinct gives equality error

2011-01-23 Thread Arturo Perez
Hi all,

I'm using the temporal contrib package that creates the period type and 
a bunch of operators in postgresql 8.4.5.  I plan to use the same code 
against 9.0 so if anything differs between the two regarding this issue 
please let me know.

In psql the \do command shows this
 public | = | period | period | boolean |

and yet when I do a select distinct with a join I get

QueryLogger - *** error.
org.postgresql.util.PSQLException: ERROR: could not identify an equality 
operator for type period

If I leave the period column out of the select list then it works fine.

Did I install it wrong?  Or is something else going on?

tia
arturo




Re: [GENERAL] iPad and Postgresql...

2011-01-17 Thread Arturo Perez
Didn't someone make an iPod touch app version of pgadmin?

There's this 
http://itunes.apple.com/us/app/dataglass-postgresql/id390298877?mt=8




Re: [GENERAL] multi terabyte fulltext searching

2007-03-22 Thread Arturo Perez
On Wed, 21 Mar 2007 08:57:39 -0700, Benjamin Arai wrote:

> Hi Oleg,
>
> I am currently using GIST indexes because I receive about 10GB of new data
> a week (then again I am not deleting any information).  We do not expect
> to be able to stop receiving text for about 5 years, so the data is not
> going to become static any time soon.  The reason I am concerned with
> performance is that I am providing a search system for several newspapers
> since essentially the beginning of time.  Many bibliographers etc. would
> like to use this utility but if each search takes too long I am not going
> to be able to support many concurrent users.
>
> Benjamin



At a previous job, I built a system to do this.  We had 3,000 publications
and approx 70M newspaper articles.  Total content size (postprocessed) was
on the order of 100GB, IIRC.  We used a proprietary (closed-source,
not ours) search engine.

In order to reach subsecond response time we needed to horizontally scale
to about 50-70 machines, each a low-end Dell 1650.  This was after about 5
years of trying to vertically scale.

-arturo


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] xpath_list() function

2007-03-22 Thread Arturo Perez
On Thu, 22 Mar 2007 14:36:32 +0100, Andy Dale wrote:

> testdb=# SELECT film_name, xpath_list(description, 'year') FROM filminfo;
>   film_name    | xpath_list
> ---------------+------------
>  Casablanca    | 1942
>  Rear Window   | 1954
>  The Godfather | 1972
>  Test film     | 1973,1972
>
> It would seem reasonable in this example that the 1973,1972 gained from
> xpath_list could be reused in a WHERE clause like so
>
> SELECT film_name WHERE '1973' IN (xpath_list(description, 'year'));
>
> But the xpath_list function returns a single string so the above query
> returns no rows.  I can not find a way of splitting the values returned
> from xpath_list so that they can be used in an IN ()
> clause, can anybody point me in the right direction for 


For that you're supposed to use xpath_table.  I forget the exact syntax 
but you end up doing something like

select film_name from 
(select xpath_table(...) as FILMS)
where film_year = 1973.
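For the record, the contrib/xml2 call has roughly the shape below.  The
key column film_id and the xpath are assumptions about the poster's
schema; xpath_table takes key, document and relation names plus a
|-separated xpath list and a criteria string, and the output column list
goes in the alias:

```sql
-- Hedged sketch: xpath_table splits multi-valued xpath results into
-- separate rows, which can then be filtered like any other column.
SELECT film_name
FROM xpath_table('film_id', 'description', 'filminfo',
                 '/filminfo/year', 'true')
         AS t(film_id integer, year integer)
JOIN filminfo USING (film_id)
WHERE year = 1973;
```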

-arturo



Re: [GENERAL] Adding TEXT columns tanks performance?

2007-02-11 Thread Arturo Perez


On Feb 10, 2007, at 12:34 PM, Tom Lane wrote:

> Arturo Perez <[EMAIL PROTECTED]> writes:
>> Saturday I changed a table to add a varchar(24) and a TEXT column.
>
> You didn't actually say which of these tables you changed?

Sorry, I changed extended_user.

>> I'm not very good at reading these but it looks like sort memory might
>> be too low?
>
> The runtime seems to be entirely in the index scan on user_tracking.
> I'm surprised it doesn't do something to avoid a full-table indexscan
> --- in this case, hashing with extended_user as the inner relation would
> seem like the obvious thing.  Is user_id a hashable datatype?

user_id is an integer.  Here are the table definitions, since this
seems like a problem that won't go away anytime soon.

            Table "public.extended_user"
      Column       |            Type             | Modifiers
-------------------+-----------------------------+-----------
 create_date       | timestamp without time zone | not null
 email             | character varying(99)       |
 first_name        | character varying(99)       | not null
 last_name         | character varying(99)       | not null
 license_agreement | boolean                     | not null
 license_date      | timestamp without time zone |
 password          | character varying(32)       | not null
 subscription_id   | integer                     | not null
 user_id           | integer                     | not null
 user_name         | character varying(99)       | not null
 active            | boolean                     | not null
 phone             | character varying(24)       |
 title             | text                        |
Indexes:
    "extended_user_pkey" PRIMARY KEY, btree (user_id) CLUSTER
    "user_name_uq" UNIQUE, btree (user_name)
    "extended_user_subscription_id_idx" btree (subscription_id)
Foreign-key constraints:
    "extended_user_subscription_id_fkey" FOREIGN KEY (subscription_id)
        REFERENCES subscription(subscription_id) DEFERRABLE INITIALLY DEFERRED

\d user_tracking
                       Table "public.user_tracking"
      Column      |            Type             |         Modifiers
------------------+-----------------------------+----------------------------
 action           | character varying(255)      | not null
 entry_date       | timestamp without time zone | not null
 note             | text                        |
 report_id        | integer                     |
 session_id       | character varying(255)      | not null
 user_id          | integer                     |
 user_tracking_id | integer                     | not null default nextval('user_tracking_user_tracking_id_seq'::regclass)
Indexes:
    "user_tracking_pkey" PRIMARY KEY, btree (user_tracking_id)
    "user_tracking_monthly_idx" btree (date_part('year'::text, entry_date), date_part('month'::text, entry_date))
    "user_tracking_quarterly_idx" btree (date_part('year'::text, entry_date), date_part('quarter'::text, entry_date))
    "user_tracking_report_id_idx" btree (report_id)
    "user_tracking_user_id_idx" btree (user_id)
Foreign-key constraints:
    "user_tracking_report_id_fkey" FOREIGN KEY (report_id)
        REFERENCES article(article_id) DEFERRABLE INITIALLY DEFERRED
    "user_tracking_user_id_fkey" FOREIGN KEY (user_id)
        REFERENCES extended_user(user_id) DEFERRABLE INITIALLY DEFERRED





> It's possible that adding the columns would have affected the plan by
> making it look like a sort or hash would take too much memory, but if
> that were it then your hand increase in work_mem should have fixed it.
> Tis odd.  I don't suppose you know what plan was used before?
>
> 			regards, tom lane

No, sorry.  Further information: on disk the user_tracking table is
over 500MB.  I can't increase shared_buffers (currently 20000) because
of SHMMAX limits (for now; scheduled outage and all that).

Any suggestions on how to improve the situation?

tias,
-arturo




Re: [GENERAL] Adding TEXT columns tanks performance?

2007-02-11 Thread Arturo Perez


On Feb 10, 2007, at 12:34 PM, Tom Lane wrote:

> Arturo Perez <[EMAIL PROTECTED]> writes:
>> Saturday I changed a table to add a varchar(24) and a TEXT column.
>
> You didn't actually say which of these tables you changed?
>
>> I'm not very good at reading these but it looks like sort memory might
>> be too low?
>
> The runtime seems to be entirely in the index scan on user_tracking.
> I'm surprised it doesn't do something to avoid a full-table indexscan
> --- in this case, hashing with extended_user as the inner relation would
> seem like the obvious thing.  Is user_id a hashable datatype?
>
> It's possible that adding the columns would have affected the plan by
> making it look like a sort or hash would take too much memory, but if
> that were it then your hand increase in work_mem should have fixed it.
> Tis odd.  I don't suppose you know what plan was used before?
>
> 			regards, tom lane



I did this and now the thing is nicely faster:

iht=> alter table user_tracking alter column user_id set statistics 500;
ALTER TABLE
iht=> analyze user_tracking;
ANALYZE
iht=> explain analyze SELECT session_id, action, count(ACTION) as hits
iht-> FROM extended_user LEFT JOIN user_tracking USING (user_id)
iht-> WHERE subscription_id = 1147
iht-> GROUP BY session_id, action
iht-> HAVING count(ACTION) > 0;

                                    QUERY PLAN
----------------------------------------------------------------------------------
 GroupAggregate  (cost=125961.69..127082.82 rows=37371 width=60) (actual time=679.115..725.317 rows=7312 loops=1)
   Filter: (count(action) > 0)
   ->  Sort  (cost=125961.69..126055.12 rows=37371 width=60) (actual time=679.067..697.588 rows=16017 loops=1)
         Sort Key: user_tracking.session_id, user_tracking.action
         ->  Nested Loop Left Join  (cost=5.64..122319.43 rows=37371 width=60) (actual time=0.160..118.177 rows=16017 loops=1)
               ->  Index Scan using extended_user_subscription_id_idx on extended_user  (cost=0.00..161.08 rows=134 width=4) (actual time=0.066..1.289 rows=119 loops=1)
                     Index Cond: (subscription_id = 1147)
               ->  Bitmap Heap Scan on user_tracking  (cost=5.64..905.77 rows=469 width=64) (actual time=0.162..0.730 rows=135 loops=119)
                     Recheck Cond: ("outer".user_id = user_tracking.user_id)
                     ->  Bitmap Index Scan on user_tracking_user_id_idx  (cost=0.00..5.64 rows=469 width=0) (actual time=0.139..0.139 rows=135 loops=119)
                           Index Cond: ("outer".user_id = user_tracking.user_id)
 Total runtime: 732.520 ms
(12 rows)


thanks all,
arturo



Re: [GENERAL] Adding TEXT columns tanks performance?

2007-02-10 Thread Arturo Perez


On Feb 9, 2007, at 11:43 AM, Merlin Moncure wrote:

> On 2/8/07, Arturo Perez <[EMAIL PROTECTED]> wrote:
>> Hi all,
>>
>> Saturday I changed a table to add a varchar(24) and a TEXT column.
>> It's used for some reporting purposes (small potato stuff really)
>> and the TEXT column remains mostly empty.  However, this week
>> performance has gotten terrible.  Queries joining against the
>> aforementioned table have gone from 40s to 1500s.  The schema change
>> is the only explanation I have for the 30x slower queries.  The
>> queries have definitely gotten disk-bound (I can see the connection
>> process sitting in the D state for several minutes).
>>
>> This is pg8.1.4 on Linux RedHat. Would adding a TEXT column impact
>> things that much?  The change to the schema was
>>
>> alter table foo add column title text;
>
> explain analyze please.
>
> no reason for adding text column to do that.  especially since you
> didn't default the column which would effectively update the entire
> table.
>
> merlin



Here's the explain analyze.  Note that this query was never very fast,
but it has literally gotten two orders of magnitude slower.  This is
with 8.1.4 on linux with nothing special in terms of disks (I think
it's a mirrored system drive).  It's a hosted environment kind of thing.

Shared buffers is 160MB (20000) and effective cache is 1GB.  The
user_tracking table has about 2M rows and the extended_user table has
about 6K.  I'm not very good at reading these but it looks like sort
memory might be too low?  work_mem is 1024, the default.  I did
"set session work_mem to 10;" in psql but it still takes quite a while.

iht=> explain analyze SELECT session_id, action, count(ACTION) as hits
iht-> FROM extended_user LEFT JOIN user_tracking USING (user_id)
iht-> WHERE subscription_id = 1147
iht-> GROUP BY session_id, action
iht-> HAVING count(ACTION) > 0;

                                    QUERY PLAN
----------------------------------------------------------------------------------
 GroupAggregate  (cost=172717.49..173695.46 rows=32599 width=60) (actual time=411713.041..411761.857 rows=7309 loops=1)
   Filter: (count(action) > 0)
   ->  Sort  (cost=172717.49..172798.99 rows=32599 width=60) (actual time=411712.907..411732.032 rows=16012 loops=1)
         Sort Key: user_tracking.session_id, user_tracking.action
         ->  Merge Left Join  (cost=0.00..169571.78 rows=32599 width=60) (actual time=147593.828..411070.706 rows=16012 loops=1)
               Merge Cond: ("outer".user_id = "inner".user_id)
               ->  Index Scan using extended_user_pkey on extended_user  (cost=0.00..236.92 rows=117 width=4) (actual time=1.627..154.499 rows=119 loops=1)
                     Filter: (subscription_id = 1147)
               ->  Index Scan using user_tracking_user_id_idx on user_tracking  (cost=0.00..164008.04 rows=2000218 width=64) (actual time=0.010..408731.064 rows=2000620 loops=1)
 Total runtime: 411781.174 ms
(10 rows)





[GENERAL] Adding TEXT columns tanks performance?

2007-02-08 Thread Arturo Perez

Hi all,

Saturday I changed a table to add a varchar(24) and a TEXT column.
It's used for some reporting purposes (small potato stuff really)
and the TEXT column remains mostly empty.  However, this week
performance has gotten terrible.  Queries joining against the
aforementioned table have gone from 40s to 1500s.  The schema change
is the only explanation I have for the 30x slower queries.  The
queries have definitely gotten disk-bound (I can see the connection
process sitting in the D state for several minutes).

This is pg8.1.4 on Linux RedHat. Would adding a TEXT column impact
things that much?  The change to the schema was

alter table foo add column title text;

tia,
arturo




Re: [GENERAL] [pgsql-www] Subscribing to this list, what's the

2006-12-16 Thread Arturo Perez
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Joshua D. Drake) wrote:

> I also have not been able to duplicate the problem from multiple yahoo
> and gmail tests. I did however note that postgresql.org will not let you
> subscribe as [EMAIL PROTECTED] . Is .not even a valid suffix?
>
> Joshua D. Drake

I had a similar problem as the OP when I tried to subscribe to the docs
mailing list, and the email I used was the same as the one I used to
subscribe to this list.

I put in the email, checked not to receive email nor digest (as I use
NNTP), and it wouldn't take the email.  Again, it was the same email I
use for the other pgSQL lists.

-arturo



Re: [GENERAL] Expected accuracy of planner statistics

2006-09-29 Thread Arturo Perez
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Jim C. Nasby) wrote:

> The problem is that you can't actually get
> a good n_distinct estimate if you're sampling less than a very large
> chunk of the table. Since our sampling maxes out at something like 30k
> pages, at some point the n_distinct estimates just degrade. :(

Can the DBA just set n_distinct?  Sometimes s/he just knows what the
value should be.

Then, of course, the question becomes how to keep vacuum et al. from
messing it up.
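As it turns out, later releases grew exactly this knob: from 9.0 onward
the estimate can be pinned per column, and ANALYZE will respect it
rather than overwrite it (table and column names here are illustrative):

```sql
-- Positive values pin the estimate directly; negative values
-- (down to -1) pin it as a fraction of the row count, which is
-- handy for tables that keep growing.
ALTER TABLE orders ALTER COLUMN customer_id SET (n_distinct = -0.05);
ANALYZE orders;
```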

-arturo



[GENERAL] PostgreSQL slammed by PHP creator

2006-09-14 Thread Arturo Perez
Hi all,


Any response to this:

http://www.internetnews.com/dev-news/article.php/3631831


From the FA:


One performance enhancement that Lerdorf suggested based on code analysis was to use MySQL instead of PostgreSQL for the database.

"If you can fit your problem into what MySQL can handle it's very fast," Lerdorf said. "You can gain quite a bit of performance."

For the items that MySQL doesn't handle as well as PostgreSQL, Lerdorf noted that some features can be emulated in PHP itself, and you still end up with a net performance boost.

-arturo





Re: [GENERAL] Problems with sequences

2006-09-07 Thread Arturo Perez


On Sep 7, 2006, at 5:35 AM, Alban Hertroys wrote:

> Arturo Perez wrote:
>> What happens is that if I do a select nextval('seq') I get a number
>> that's lower than the max primary key id.  This is in spite of my doing
>>    SELECT setval('seq', ((SELECT MAX(seq_ID) FROM table)+1))
>>    ALTER SEQUENCE seq RESTART WITH max + 1;
>>    select pg_catalog.setval('seq', max+1, true);
>
> This seems to be a bit over the top;
>   SELECT setval('seq', (SELECT MAX(seq_ID) FROM table))
> should be enough. Even the +1 isn't necessary, as the first value
> the sequence will return is already 1 higher than the value
> retrieved from MAX.


Note that all of the above was in an attempt to reset the sequence to
the proper value.  I'm beginning to think that it's a library problem,
as this morning I get:

iht=> select max(article_id) from article;
 max
------
 4992
(1 row)

iht=> select nextval('pk_article');
 nextval
---------
    4986
(1 row)

Assuming the sequence number is being used correctly, why would they
be 6 apart?




> Are you sure you're using the correct sequence(s) to retrieve your
> column values for the problematic table(s)? How do you set the
> values for seqID?

I tried statement logging but I am not sure it reported anything
useful.  When I get into work I'll send in those logs.


-arturo



Re: [GENERAL] Problems with sequences

2006-09-07 Thread Arturo Perez

From: Alban Hertroys [mailto:[EMAIL PROTECTED]]
> Martijn van Oosterhout wrote:
>> On Thu, Sep 07, 2006 at 07:15:43AM -0400, Arturo Perez wrote:
>>> iht=> select max(article_id) from article;
>>>  max
>>> ------
>>>  4992
>>> (1 row)
>>>
>>> iht=> select nextval('pk_article');
>>>  nextval
>>> ---------
>>>     4986
>>> (1 row)
>>>
>>> Assuming the sequence number is being used correctly why would they
>>> be 6 apart?
>>
>> The last four transactions could be rolled back, or not committed yet.
>
> b) the OP inserted numbers not coming from the sequence; he shot his
> own feet.


I prefer to think that my feet were shot off by a library I'm using :-)
So many layers, so little time (to debug).

In any case, at this point in time it's looking like Cayenne doesn't honor
the rules of the sequence.  It appears to (and is documented as) internally
incrementing rather than fetching the sequence for each insert.

I would still like more debugging tips for this sort of thing.  As I
mentioned, statement logging did not show the relevant details.  What
other things could I have done?

-arturo



Re: [GENERAL] Problems with sequences

2006-09-07 Thread Arturo Perez
From: Alban Hertroys [mailto:[EMAIL PROTECTED]]
> Arturo Perez wrote:
>> In any case, at this point in time it's looking like Cayenne doesn't honor
>> the rules of the sequence.  It appears to (and is documented as) internally
>> incrementing rather than fetching the sequence for each insert.
>
> I have no experience with Cayenne, but reading
> http://cwiki.apache.org/CAYDOC/primary-key-generation.html it seems
> possible to use database sequences instead of Cayenne-generated ones:
>
> "... Generation mechanism depends on the DbAdapter used and can be
> customized by users by subclassing one of the included adapters."


Yes.  I think I am being bitten by a desire to minimize changes
required when migrating from MySQL to PostgreSQL.  Contrary to my
belief, it appears that the pgSQL schema creation script was not
created by Cayenne configured to work with pgSQL, but rather was based
on the script Cayenne used to create the MySQL database.  Looks like I
will be uncovering nits as we go for a bit yet.

I did modify the sequences to increment by 20 as required by Cayenne.
Hopefully, this particular issue will be laid to rest by that.
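Concretely, that change is a one-liner per sequence (the sequence name
is taken from the earlier post; the increment value is what Cayenne's
PK caching expects):

```sql
-- Hand out blocks of 20 ids so Cayenne's internally incremented
-- keys never collide with the sequence.
ALTER SEQUENCE pk_article INCREMENT BY 20;
```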

thanks all,

-arturo



[GENERAL] Problems with sequences

2006-09-06 Thread Arturo Perez
Hi all,

My environment is Tomcat 5.0, Cayenne 1.2 and PostgreSQL 8.1.4 on a
RedHat ES3 machine.

My web application is reusing sequence numbers and getting duplicate
primary key failures because of it (the error is "duplicate key
violates unique constraint").  The columns are not defined as SERIAL
for historical reasons, so it fetches nextval and uses that.

The webapp stays connected for days at a time.  It's only using a
handful (usually 2) of connections.

What happens is that if I do a select nextval('seq') I get a number
that's lower than the max primary key id.  This is in spite of my doing
   SELECT setval('seq', ((SELECT MAX(seq_ID) FROM table)+1))
   ALTER SEQUENCE seq RESTART WITH max + 1;
   select pg_catalog.setval('seq', max+1, true);

I've learned that the first thing is only good for the current session
and I've no idea why the second and third aren't working.
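For reference, a single setval call is normally sufficient; it takes
effect immediately and is visible to all sessions, not just the current
one ('seq', table and seq_ID are the post's own placeholders):

```sql
-- With is_called true (the default for the two-argument form),
-- the next nextval('seq') returns max(seq_ID) + 1.
SELECT setval('seq', (SELECT max(seq_ID) FROM table));
```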

Mostly what I'm hoping for is some debugging tips.  I tried setting
log_statement = 'all' but that doesn't show the parameters to prepared
statements nor any access to the sequence.

Does anyone have any experience helping me to pinpoint the cause of
this?  Tomcat JDBC pooling?  Cayenne caching?

tia
arturo



Re: [GENERAL] Problems with sequences

2006-09-06 Thread Arturo Perez
In article [EMAIL PROTECTED],
 [EMAIL PROTECTED] (Scott Marlowe) wrote:

> On Wed, 2006-09-06 at 16:56, Arturo Perez wrote:
>> Hi all,
>>
>> My environment is Tomcat5.0, Cayenne 1.2 and PostgreSQL 8.1.4 on a
>> RedHat ES3 machine.
>>
>> My webapplication is reusing sequence numbers and getting duplicate
>> primary key failures because of it (error is "duplicate key violates
>> unique constraint").  The columns are not defined as SERIAL for
>> historical reasons so it fetches nextval and uses that.
>>
>> The webapp stays connected for days at a time.  It's only using a
>> handful (usually 2) connections.
>>
>> What happens is that if I do a select nextval('seq') I get a number
>> that's lower than the max primary key id.  This is in spite of my doing
>>    SELECT setval('seq', ((SELECT MAX(seq_ID) FROM table)+1))
>>    ALTER SEQUENCE seq RESTART WITH max + 1;
>>    select pg_catalog.setval('seq', max+1, true);
>
> When are you doing these statements?  You shouldn't really need to set a
> sequence to a new number except right after a data load or something
> like that.  definitely not when anyone else is using the db.

We (me!) just converted our app from MySQL to PostgreSQL.  We wrote a
perl script to copy the data from the MySQL instance to the new
PostgreSQL instance.  As part of that data copy we did the first thing,
as that was recommended by a comment in the online manual for PostgreSQL.

Ever since then the problem described has been happening.  The other two
statements were done in an attempt to correct the problem without
restarting the whole application (i.e. without bouncing tomcat).

I just had to bounce the app anyway (mgmt :-) so I'm hoping the problem
won't reoccur, but I need steps to take if it does.

-arturo



Re: [GENERAL] Problems with sequences

2006-09-06 Thread Arturo Perez


On Sep 6, 2006, at 8:48 PM, Merlin Moncure wrote:

> On 9/6/06, Arturo Perez <[EMAIL PROTECTED]> wrote:
>> What happens is that if I do a select nextval('seq') I get a number
>> that's lower than the max primary key id.  This is in spite of my doing
>>    SELECT setval('seq', ((SELECT MAX(seq_ID) FROM table)+1))
>>    ALTER SEQUENCE seq RESTART WITH max + 1;
>>    select pg_catalog.setval('seq', max+1, true);
>
> are you running those statements to fetch the next key in the table?
> you might have a race condition there.  try wrapping in a userlock.
>
> merlin

No, not running them to get the next key.  Just trying to reset the
sequence so that I stop getting duplicates.

A race condition is unlikely as only one person can actually add
these things to the system.


-arturo




Re: [GENERAL] xpath_string and group by

2006-08-29 Thread Arturo Perez
In article [EMAIL PROTECTED],
 kleptog@svana.org (Martijn van Oosterhout) wrote:

> On Sat, Aug 26, 2006 at 03:51:06PM -0400, Perez wrote:
>> Hi all,
>>
>> Using 8.1.4 and contrib/xml2.  When I do a
>>
>>   select xpath_string(note, '//Thing') as note, count(aDate)
>>   from theTable
>>   group by lower(xpath_string(note, '//Thing'))
>>   order by 2 desc;
>>
>> I get an error:
>>   GROUP BY must contain "note".
>> But I can do that for a plain text/varchar field.  Adding the non-xpath
>> note field messes up the grouping.
>
> I wonder if it's getting confused about which "note" you're referring
> to in the GROUP BY clause.
>
>>   select note, count(aDate) from
>>     (select lower(xpath_string(note, '//Thing')) as note, aDate from
>>      theTable) as foo
>>   group by note
>
> This is about the same thing, so why not use that?
>
> Have a nice day,

You're right, it is almost the same thing.  But the second form loses
the case of the original - everything is returned lower case.  If I
must I must, but I'd like to preserve the case if possible.
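One way to have both, sketched with the post's own table and column
names: carry the raw and lowered values side by side, group on the
lowered one, and pick a representative original spelling with min()
(or max()):

```sql
-- Grouping is case-insensitive via note_lc; min(note_raw) keeps one
-- original-case spelling per group for display.
SELECT min(note_raw) AS note, count(aDate)
FROM (SELECT xpath_string(note, '//Thing') AS note_raw,
             lower(xpath_string(note, '//Thing')) AS note_lc,
             aDate
      FROM theTable) AS foo
GROUP BY note_lc
ORDER BY 2 DESC;
```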

-arturo



Re: [GENERAL] [8.1.4] Create index on timestamp fails

2006-08-23 Thread Arturo Perez


On Aug 22, 2006, at 5:35 PM, Tom Lane wrote:

> Arturo Perez <[EMAIL PROTECTED]> writes:
>> I have a table with a column:
>>  entry_date | timestamp with time zone | not null
>>
>> And when I try to create an index on it like so:
>>  create index entry_date_idx on
>>    user_tracking(date_part('year', entry_date));
>>
>> I get a
>>  ERROR: functions in index expression must be marked IMMUTABLE
>>
>> According to the mailing lists, this has been working since 7.4.
>
> I seriously doubt that.  date_part on a timestamptz is stable, not
> immutable, and AFAICT has been marked that way since 7.3.  The problem
> is that the results depend on your current TimeZone setting --- for
> instance, 2AM 2006-01-01 in London is 9PM 2005-12-31 where I live.
>
> If you only need day precision, try storing entry_date as a date instead
> of a timestamptz.  Or perhaps consider timestamp without tz.  But you
> need something that's not timezone-dependent to make this work.
>
> 			regards, tom lane

Ah, I knew it was something I was overlooking.  Thanks a ton.  We need
sub-day granularity (it's for a sort of weblog).  Without a TZ sounds
like a winner.
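If the column has to stay timestamptz, another option (a sketch, not
from the thread) is to index an expression pinned to an explicit zone:
converting with AT TIME ZONE yields a plain timestamp, and date_part
over that is immutable, so the index is accepted:

```sql
-- Queries must spell the expression exactly the same way for the
-- planner to consider this index.
CREATE INDEX entry_date_idx
    ON user_tracking (date_part('year', entry_date AT TIME ZONE 'UTC'));
```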

Thanks again,
arturo




Re: [GENERAL] [8.1.4] Create index on timestamp fails

2006-08-23 Thread Arturo Perez



Hi Chris,

user_tracking is not a function, it's the name of the table containing
the column entry_date.  Is my syntax that far off?!

-arturo

> -----Original Message-----
> From: Chris Hoover [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 22, 2006 3:02 PM
> To: Arturo Perez
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] [8.1.4] Create index on timestamp fails
>
> It appears that 8.1 is stricter on checking the type of function.
> Look at your user_tracking function.  It is probably set as volatile.
> You need to change it to be immutable.
>
> This should fix the issue.
>
> Chris
>
> On 8/21/06, Arturo Perez <[EMAIL PROTECTED]> wrote:
>> Hi all,
>>
>> Using postgresql 8.1.4
>>
>> I have a table with a column:
>>  entry_date | timestamp with time zone | not null
>>
>> And when I try to create an index on it like so:
>>  create index entry_date_idx on
>>    user_tracking(date_part('year', entry_date));
>>
>> I get a
>>  ERROR: functions in index expression must be marked IMMUTABLE
>>
>> According to the mailing lists, this has been working since 7.4.
>> What am I doing wrong?
>>
>> tia, arturo


[GENERAL] [8.1.4] Create index on timestamp fails

2006-08-22 Thread Arturo Perez
Hi all,

Using postgresql 8.1.4

I have a table with a column:

 entry_date | timestamp with time zone | not null

And when I try to create an index on it like so:

 create index entry_date_idx on user_tracking(date_part('year', entry_date));

I get a

 ERROR: functions in index expression must be marked IMMUTABLE

According to the mailing lists, this has been working since 7.4.  What
am I doing wrong?

tia,
arturo