[GENERAL] Unique constraint based on contents of a field?

2010-04-07 Thread Warren Bell
Is there a way to create a unique constraint based on the content of a 
field? For instance, say you have an integer field where you only want 
one record with the number 1 in that field, but there can be many records 
with, say, the number 2 or any other single number in that field.
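
A partial unique index can enforce exactly this kind of rule; a minimal
sketch, using a hypothetical table and column name:

-- At most one row may carry flag_col = 1; rows holding 2, 3, or any
-- other value fall outside the index's WHERE clause and stay unconstrained.
CREATE TABLE example_table (flag_col integer);

CREATE UNIQUE INDEX one_row_with_flag_1
    ON example_table (flag_col)
    WHERE flag_col = 1;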


--
Thanks,

Warren Bell




[GENERAL] Bad Timestamp Format at 23 in ...

2008-09-17 Thread Warren Bell
I have PostgreSQL 8.3 (PostgresPlus) running on an Apple with Tomcat 6. 
I am using the postgresql-8.3-603.jdbc3.jar driver. My app runs fine 
when on the Apple, but when I move it over to a Windows machine running 
Tomcat 6 that accesses the same exact database on the Apple, I get a "Bad 
Timestamp Format at 23 in 2008-09-16 18:41:00.479" error. I am guessing 
that it has something to do with how Java creates Dates on Unix versus 
Windows. Is this a JDBC issue, or is there something I need to do in 
Postgres to make this work?


--
Thanks,

Warren Bell
909-645-8864
[EMAIL PROTECTED]




Re: [GENERAL] Bad Timestamp Format at 23 in ...

2008-09-17 Thread Warren Bell
When I insert a record from the Apple app, the Java date inserted looks 
like "2008-09-17 19:52:41.584" and ends up in the db the same way. When I 
insert a record from the Windows app, the Java date inserted looks like 
"2008-09-17 19:55:44.774" and ends up in the db as 
"2008-09-17 19:55:44.77", losing the last decimal place of precision. My 
app gets the "Bad Timestamp" error when it is on Windows and tries to 
read the timestamp that is in milliseconds vs. the timestamp that is in 
hundredths of seconds.
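
That loss pattern matches a column declared with limited fractional-seconds
precision; a psql sketch to reproduce it (the table name below is
hypothetical):

-- timestamp(2) keeps hundredths and rounds away the milliseconds digit;
-- timestamp(3) preserves all three fractional digits:
SELECT '2008-09-17 19:55:44.774'::timestamp(2);  -- 2008-09-17 19:55:44.77
SELECT '2008-09-17 19:55:44.774'::timestamp(3);  -- 2008-09-17 19:55:44.774

-- To check the declared precision of an existing column:
SELECT column_name, data_type, datetime_precision
FROM information_schema.columns
WHERE table_name = 'my_table';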


Martin Gainty wrote:
Please display the SQL or Java statement that constructs the 
incorrect date.


thanks,
Martin







--
Thanks,

Warren Bell
909-645-8864
[EMAIL PROTECTED]



[GENERAL] Problem escaping, nonstandard use of \\ in a string literal

2008-08-13 Thread Warren Bell
I am having a problem with escaping characters after upgrading to 8.3. I 
have changed the postgresql.conf file to contain:


standard_conforming_strings = on

I no longer get the error:

nonstandard use of \\ in a string literal

but now I am having a problem escaping certain characters, like the 
apostrophe ('). I am now getting the following error:


syntax error at or near "S" at character 282

which refers to an apostrophe in an INSERT statement.

I am not escaping this character in my code. I am assuming that the 
driver is escaping it. I am using the postgresql-8.3-603.jdbc3.jar as 
the driver.


I see that you can escape by writing E'Joe\'s', but how do I get this to 
work without going through my code, and shouldn't this be done by the 
driver?


How do I get this all to work?
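
For reference, with standard_conforming_strings = on the two string
syntaxes behave like this:

SELECT 'Joe''s';   -- standard SQL: double the quote inside the literal
SELECT E'Joe\'s';  -- backslash escapes now require the explicit E prefix

-- 'Joe\'s' with no E prefix produces exactly the reported error: the
-- backslash is an ordinary character, the literal ends at the second
-- quote, and the stray s trips the parser.

As a hedged aside: a JDBC PreparedStatement with ? placeholders sidesteps
literal quoting entirely, so errors like this usually point at SQL
assembled by string concatenation rather than at the driver itself.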

--
Thanks,

Warren Bell
909-645-8864
[EMAIL PROTECTED]





[GENERAL] CAST(integer_field AS character) truncates trailing zeros

2008-07-31 Thread Warren Bell
I am trying to cast an int to a character. The int is the number 1000, but 
it gets cast down to '1' and not '1000'. How do I cast from int to 
character without losing the trailing zeros?


--
Thanks,

Warren Bell




Re: [GENERAL] CAST(integer_field AS character) truncates trailing zeros

2008-07-31 Thread Warren Bell
I guess it would help if I cast to the correct type. I was doing cast( 
cast( 1000 as integer ) as char ) instead of character varying, char(n), 
or text.
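
For the record, the truncation comes from char with no length defaulting
to char(1); a quick sketch:

SELECT CAST(1000 AS char);     -- '1'    (char means char(1): one character)
SELECT CAST(1000 AS char(4));  -- '1000'
SELECT CAST(1000 AS text);     -- '1000'
SELECT 1000::varchar;          -- '1000'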


Thanks,

Warren



Warren Bell wrote:
I am trying to cast an int to a character. The int is the number 1000, 
but it gets cast down to '1' and not '1000'. How do I cast from int to 
character without losing the trailing zeros?





--
Thanks,

Warren Bell
909-645-8864
[EMAIL PROTECTED]




[GENERAL] PDF Documentation?

2005-09-17 Thread Warren Bell
Is there a PDF version of the documentation? I find the HTML version much
harder to read than a printed copy.

Thanks,

Warren Bell





Re: [GENERAL] Speed problems

2005-09-14 Thread Warren Bell


 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
 Sent: Wednesday, September 14, 2005 8:24 AM
 To: Warren Bell
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Speed problems


 On Tue, 2005-09-13 at 21:01, Warren Bell wrote:
  On Tue, 2005-09-13 at 13:20, Warren Bell wrote:
   I am having problems with performance. I think this is a simple
   question and I am in the right place, if not, please redirect me.

   I have a table with 36 fields that slows down quite a bit after some
   light use. There are only 5 clients connected to this DB and they are
   doing mostly inserts and updates. There is no load on this server or
   db at all. This table has had no more than 10,000 records and is
   being accessed at the rate of once per 5 seconds. It will slow down
   quite a bit. It will take 10 seconds to do a `SELECT * FROM` query. I
   delete all records except one, perform a VACUUM, and this will not
   speed it up. I drop the table and recreate it and insert one record
   and it speeds right back up, taking only 100 ms to do the query.

  This sounds like classic table / index bloat.

  Are you updating all 10,000 rows every 5 seconds?  Good lord, that's a
  lot of updates.  If so, then do a vacuum immediately after the update
  (or a delete), or change the system so it doesn't update every row
  every time.

  Next time, try a vacuum full instead of a drop and recreate and see if
  that helps.
  
  
   I am fairly new to Postgres. What do I need to do to keep this table
   from slowing down?

  Vacuum this table more often.  You might want to look at using the
  autovacuum daemon to do this for you.

  You might want to post a little more info on what, exactly, you're
  doing to see if we can spot any obvious problems.
  
 
  I have three indexes on this table. One is a single-column index, one
  is a 5-column composite, and one is a 2-column composite. I have run
  EXPLAIN ANALYZE on all of my queries and they seem to be taking
  advantage of these indexes.

  Would three indexes of this sort be considered index bloat?

 No, index bloat is a different problem.  In the days of yore, postgresql
 had a tendency to grow its indexes over time without reclaiming lost
 space in them, which lead to bloated indexes (back in the day, I once
 had a 100k table with an 80 meg index after a while...  Now that is
 bloat)

 Today, index bloat is generally not a problem, as vacuum can reclaim
 much more space in an index than it once could.  I'm guessing you're
 suffering from a bloating of tables and indexes caused by not vacuuming
 enough.  Use a vacuum full once to clear up the bloated tables and
 indexes, and then regularly scheduled plain vacuums to keep them at a
 reasonable size.

  I am updating no more than 200 records at a time. Here are some
  examples of my queries:

  UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false

  UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah'
  AND int4_col_1 = 11

  UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
  boolean_col_3 = false AND boolean_col_4 = false AND boolean_col_5 = false
  AND boolean_col_6 = false

 OK.  But how many are you updating between regular vacuums?  That's the
 real issue.  If your regular vacuums aren't often enough, postgresql
 starts lengthening the tables instead of reusing the space in them that
 was freed by the last updates / deletes.

 Keep in mind, that in postgresql, all updates are really insert / delete
 pairs, as far as storage is concerned.  So, updates create dead tuples
 just like deletes would.
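
A small sketch of how that shows up in the statistics views (n_live_tup
and n_dead_tup are the column names on modern releases; the 2005-era view
was leaner, so treat this as illustrative):

-- Each UPDATE leaves a dead version of the old row behind:
UPDATE my_table SET boolean_col_1 = true WHERE boolean_col_2 = false;

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'my_table';  -- hypothetical table name

-- A plain VACUUM marks the dead space reusable; skip it long enough and
-- the table file simply grows instead.
VACUUM my_table;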

  Is my use of indexes correct?

 Seems good to me.




I have installed pg_autovacuum and also did a VACUUM FULL on the tables.
Speed has improved quite a bit.

Are there any set rules for what the pg_autovacuum -v and -V arguments
should be set to? I went with the defaults.
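
For reference, the contrib-era pg_autovacuum computed a per-table
threshold from those switches, roughly as follows (hedged from memory; the
README shipped with the contrib module is authoritative):

-- A table is vacuumed once its dead tuples exceed about:
--   vacuum_base_threshold (-v) + vacuum_scale_factor (-V) * reltuples
-- so a small, heavily-updated table wants a lower -v, while -V dominates
-- for large tables. The row estimate can be inspected with:
SELECT relname, reltuples FROM pg_class WHERE relname = 'my_table';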

Thanks for your help,





[GENERAL] Speed problems

2005-09-13 Thread Warren Bell
I am having problems with performance. I think this is a simple question and
I am in the right place, if not, please redirect me.

I have a table with 36 fields that slows down quite a bit after some light
use. There are only 5 clients connected to this DB and they are doing mostly
inserts and updates. There is no load on this server or db at all. This
table has had no more than 10,000 records and is being accessed at the rate
of once per 5 seconds. It will slow down quite a bit. It will take 10
seconds to do a `SELECT * FROM` query. I delete all records except one,
perform a VACUUM, and this will not speed it up. I drop the table and
recreate it and insert one record and it speeds right back up, taking only
100 ms to do the query.

I am fairly new to Postgres. What do I need to do to keep this table from
slowing down?

Thanks,

Warren Bell





Re: [GENERAL] Speed problems

2005-09-13 Thread Warren Bell
On Tue, 2005-09-13 at 13:20, Warren Bell wrote:
 I am having problems with performance. I think this is a simple question
 and I am in the right place, if not, please redirect me.

 I have a table with 36 fields that slows down quite a bit after some
 light use. There are only 5 clients connected to this DB and they are
 doing mostly inserts and updates. There is no load on this server or db
 at all. This table has had no more than 10,000 records and is being
 accessed at the rate of once per 5 seconds. It will slow down quite a
 bit. It will take 10 seconds to do a `SELECT * FROM` query. I delete all
 records except one, perform a VACUUM, and this will not speed it up. I
 drop the table and recreate it and insert one record and it speeds right
 back up, taking only 100 ms to do the query.

This sounds like classic table / index bloat.

Are you updating all 10,000 rows every 5 seconds?  Good lord, that's a
lot of updates.  If so, then do a vacuum immediately after the update
(or a delete), or change the system so it doesn't update every row every
time.

Next time, try a vacuum full instead of a drop and recreate and see if
that helps.
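
A concrete sequence for that advice (my_table is a hypothetical stand-in
for the slow table):

-- One-time cleanup of the accumulated bloat; note that VACUUM FULL takes
-- an exclusive lock on the table while it runs:
VACUUM FULL my_table;

-- Refresh planner statistics after the rewrite:
ANALYZE my_table;

-- From then on, frequent plain vacuums let new updates reuse freed space
-- instead of growing the table:
VACUUM my_table;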


 I am fairly new to Postgres. What do I need to do to keep this table from
 slowing down?

Vacuum this table more often.  You might want to look at using the
autovacuum daemon to do this for you.

You might want to post a little more info on what, exactly, you're doing
to see if we can spot any obvious problems.


I have three indexes on this table. One is a single-column index, one is a
5-column composite, and one is a 2-column composite. I have run EXPLAIN
ANALYZE on all of my queries and they seem to be taking advantage of these
indexes.

Would three indexes of this sort be considered index bloat?

I am updating no more than 200 records at a time. Here are some examples of
my queries:

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false

UPDATE table SET (several columns = something) WHERE char_col_1 = 'blah' AND
int4_col_1 = 11

UPDATE table SET boolean_col_1 = true WHERE boolean_col_2 = false AND
boolean_col_3 = false AND  boolean_col_4 = false AND  boolean_col_5 = false
AND  boolean_col_6 = false
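
As an aside, for updates that filter on a stack of mostly-false boolean
columns like the last one, a partial index is one option worth testing (a
hypothetical sketch; my_table stands in for the real table name):

-- Indexes only the rows the predicate can match, so it stays small even
-- as the table grows:
CREATE INDEX pending_rows_idx ON my_table (boolean_col_2)
WHERE boolean_col_2 = false AND boolean_col_3 = false
  AND boolean_col_4 = false AND boolean_col_5 = false
  AND boolean_col_6 = false;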

Is my use of indexes correct?

In the meantime, I will start using autovacuum or VACUUM.

Thanks for your help,


