Re: [SQL] strange corruption?

2012-12-27 Thread Scott Marlowe
On Thu, Dec 27, 2012 at 7:27 AM, John Fabiani jo...@jfcomputer.com wrote:
 Hi,
 I have the following statement in a function.

 UPDATE orderseq
 SET orderseq_number = (orderseq_number + 1)
 WHERE (orderseq_name='InvcNumber');

 All it does is update a single record by incrementing a value (int).

 But it never completes.  This has to be some sort of bug.  Anyone have a
 thought on what would cause this to occur?  To my knowledge it was working and
 does work in other databases.

There are many reasons this might not be working.  Can you create
a self-contained test case that reproduces the issue?
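
For instance, a minimal script along these lines (the create/insert here are
hypothetical, just enough to exercise your statement) that someone else can
run as-is and watch hang or fail:

create table orderseq (orderseq_name text primary key, orderseq_number integer);
insert into orderseq values ('InvcNumber', 1);

UPDATE orderseq
SET orderseq_number = (orderseq_number + 1)
WHERE (orderseq_name='InvcNumber');

If it only misbehaves from inside your function, include the function
definition as well.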




Re: [SQL] deciding on one of multiple results returned

2012-12-21 Thread Scott Marlowe
On Fri, Dec 21, 2012 at 10:28 AM, Wes James compte...@gmail.com wrote:
 David and Seth Thanks.  That helped.


 When I have

 select distinct on (revf3)  f1, f2, f3, revers(f3) as revf3 from table order
 by revf3

 Is there a way to return just f1, f2, f3 in my results and forget revf3 (so
 it doesn't show in results)?

Sure, just wrap it in a subselect:

select a.f1, a.f2, a.f3 from (select distinct on (revf3)  f1, f2, f3,
revers(f3) as revf3 from table order by revf3) as a;




Re: [SQL] complex query

2012-10-27 Thread Scott Marlowe
On Sat, Oct 27, 2012 at 6:01 PM, Mark Fenbers mark.fenb...@noaa.gov wrote:
 I have a query:
 SELECT id, SUM(col1), SUM(col2) FROM mytable WHERE condition1 = true GROUP
 BY id;

 This gives me 3 columns, but what I want is 5 columns where the next two
 columns -- SUM(col3), SUM(col4) -- have a slightly different WHERE clause,
 i.e., WHERE condition2 = true.

 I know that I can do this in the following way:
 SELECT id, SUM(col1), SUM(col2), (SELECT SUM(col3) FROM mytable WHERE
 condition2 = true), (SELECT SUM(col4) FROM mytable WHERE condition2 = true)
 FROM mytable WHERE condition1 = true GROUP BY id;

 Now this doesn't seem too bad, but the truth is that condition1 and
 condition2 are both rather lengthy and complicated and my table is rather
 large, and since embedded SELECTs can only return 1 column, I have to repeat
 the exact query in the next SELECT (except for using col4 instead of
 col3).  I could use UNION to simplify, except that UNION will return 2
 rows, and the code that receives my resultset is only expecting 1 row.

 Is there a better way to go about this?

I'd do something like:

select * from (
select id, sum(col1), sum(col2) from tablename group by yada
   ) as a [full, left, right, outer] join (
select id, sum(col3), sum(col4) from tablename group by bada
) as b
on (a.id=b.id);

and choose the join type as appropriate.
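
Fleshed out against your example it would be something like this (untested,
and the column aliases are mine):

select coalesce(a.id, b.id) as id, a.sum1, a.sum2, b.sum3, b.sum4
from (select id, sum(col1) as sum1, sum(col2) as sum2
      from mytable where condition1 = true group by id) as a
full outer join
     (select id, sum(col3) as sum3, sum(col4) as sum4
      from mytable where condition2 = true group by id) as b
on (a.id = b.id);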




Re: [SQL] complex query

2012-10-27 Thread Scott Marlowe
On Sat, Oct 27, 2012 at 7:56 PM, Mark Fenbers mark.fenb...@noaa.gov wrote:
 I'd do something like:

 select * from (
 select id, sum(col1), sum(col2) from tablename group by yada
) as a [full, left, right, outer] join (
 select id, sum(col3), sum(col4) from tablename group by bada
 ) as b
 on (a.id=b.id);

 and choose the join type as appropriate.

 Thanks!  Your idea worked like a champ!
 Mark

The basic rule for mushing together data sets is to join them to put
the pieces of data into the same row (horizontally extending the set)
and to use unions to pile the rows one on top of the other.

One of the best things about PostgreSQL is that it makes these kinds
of queries efficient and fast.  I've written 5 or 6 page multi-join,
multi-union queries that still ran in hundreds of milliseconds,
returning thousands of rows.




Re: [SQL] Insane behaviour in 8.3.3

2012-06-15 Thread Scott Marlowe
Not talking about going to something after 8.3.19, just updating to
the latest 8.3 version.  On most systems it's as simple as:

sudo apt-get upgrade

or similar and sit back and watch.

On Fri, Jun 15, 2012 at 2:24 AM, Achilleas Mantzios
ach...@matrix.gatewaynet.com wrote:
 On Fri 15 Jun 2012 10:28:20 Scott Marlowe wrote:
 You do realize you're missing four years of bug fixes right?

 On Thu 14 Jun 2012 11:39:35 Achilleas Mantzios wrote:
 Unfortunately the remote installations are neither physically accessible
 nor accessible by TCP/IP (comms are done via UUCP and administration via
 minicom, and the costs are just huge: 5 USD/min for 33 Kbits/sec). So, I
 would exhaust all possibilities before deciding to ship a new postgresql
 version there and remotely upgrade, physically travel to the ship, or even
 try to do a backup/initdb/restore in the existing version. Any help
 would be really, really appreciated.

 Also, as you might have understood, upgrading, although generally a good
 idea, does not apply so easily in our case.

 And I forgot to mention, minicom term emulation quality sucks; even giving
 simple shell commands is a PITA. Upgrading the whole fleet would mean, best
 case scenario, a minimum of 21K USD for the whole fleet plus suspension of
 all other activities for two months. If physical travel were involved, the
 cost would be even higher.


 -
 Achilleas Mantzios
 IT DEPT



-- 
To understand recursion, one must first understand recursion.



Re: [SQL] Insane behaviour in 8.3.3

2012-06-15 Thread Scott Marlowe
Well, I'd see about finding a way to upgrade to 8.3.19.  8.3.3 has
known data-eating bugs.

On Fri, Jun 15, 2012 at 9:32 AM, Achilleas Mantzios
ach...@matrix.gatewaynet.com wrote:
 On Fri 15 Jun 2012 18:03:26 Scott Marlowe wrote:
 Not talking about going to something after 8.3.19, just updating to
 the latest 8.3 version.  On most systems it's as simple as:

 sudo apt-get upgrade

 or similar and sit back and watch.

 Thanks, unfortunately we don't have TCP/IP connectivity to (most of) the ships,
 and AFAIK apt-get does not yet work over advanced UUCP/minicom/kermit or other
 equivalent high-tech dial-up connections.
 Just joking :)


 On Fri, Jun 15, 2012 at 2:24 AM, Achilleas Mantzios

 ach...@matrix.gatewaynet.com wrote:
  On Fri 15 Jun 2012 10:28:20 Scott Marlowe wrote:
  You do realize you're missing four years of bug fixes right?
 
  On Thu 14 Jun 2012 11:39:35 Achilleas Mantzios wrote:
  Unfortunately the remote installations are neither physically accessible
  nor accessible by TCP/IP (comms are done via UUCP and administration via
  minicom, and the costs are just huge: 5 USD/min for 33 Kbits/sec). So, I
  would exhaust all possibilities before deciding to ship a new postgresql
  version there and remotely upgrade, physically travel to the ship, or
  even try to do a backup/initdb/restore in the existing version. Any
  help would be really, really appreciated.
 
  Also, as you might have understood, upgrading, although generally a good
  idea, does not apply so easily in our case.
 
  And I forgot to mention, minicom term emulation quality sucks; even
  giving simple shell commands is a PITA. Upgrading the whole fleet would
  mean, best case scenario, a minimum of 21K USD for the whole fleet plus
  suspension of all other activities for two months. If physical travel
  was involved, the cost would be even higher.
 
 
  -
  Achilleas Mantzios
  IT DEPT

 -
 Achilleas Mantzios
 IT DEPT




-- 
To understand recursion, one must first understand recursion.



Re: [SQL] order by different on mac vs linux

2012-05-16 Thread Scott Marlowe
On Wed, May 16, 2012 at 7:58 PM, Wes James compte...@gmail.com wrote:


 On Wed, May 16, 2012 at 5:00 PM, Samuel Gendler sgend...@ideasculptor.com
 wrote:



 On Wed, May 16, 2012 at 3:46 PM, Wes James compte...@gmail.com wrote:



 On Mon, May 14, 2012 at 5:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Wes James compte...@gmail.com writes:
  Why is there a different order on the different platforms.

 This is not exactly unusual.  You should first check to see if
 lc_collate is set differently in the two installations --- but even if
 it's the same, there are often platform-specific interpretations of
 the sorting rules.  (Not to mention that OS X is flat out broken when
 it comes to sorting UTF8 data ...)


 I just ran these:

 linux:

 on linux

 # SELECT CASE WHEN 'apache' > '!yada' THEN 'TRUE' ELSE 'FALSE' END FROM pg_user;
  case
 -------
  FALSE
 (1 row)

 # show lc_collate;
  lc_collate
 -------------
  en_US.UTF-8
 (1 row)

 

 on mac os x:

 # SELECT CASE WHEN 'apache' > '!yada' THEN 'TRUE' ELSE 'FALSE' END FROM pg_user;
  case
 ------
  TRUE
 (1 row)

 # show lc_collate;
  lc_collate
 -------------
  en_US.UTF-8
 (1 row)


 ---

 Why is the linux postgres saying FALSE with lc_collate set the way it
 is?


 That's the point - UTF-8 collation is just completely broken under OS X.
  There's much previous discussion of the topic on this list and elsewhere.
  If you're developing on OS X but running linux and you are mostly using an
 ascii character set in your test dataset, set your development OS X boxes to
 use C collation, which will basically do what you expect it to do until you
 start throwing multibyte characters at it.  If you can't constrain your
 testing/development dataset in such a manner and collation order really
 matters during development, then you probably shouldn't develop on OS X.  I
 spent a fair amount of time investigating how to define a new charset in
 what proved to ultimately be a futile attempt to redefine UTF-8 on OSX to
 behave just like it does on Linux.  I just gave it up after wasting a few
 too many hours on it. It may be possible to do it, but the return on
 invested time was non-existent for me so I abandoned my effort.


 Why are people saying os x is broken in my case?  Looking
 at  http://www.utf8-chartable.de/  and  http://www.asciitable.com/  for sort
 order, ! should come before apache.  On os x it is correct, on ubuntu linux
 it is not.  In the order by output per my previous emails, it is correct on
 os x, but not on linux.  Why do people keep saying os x is broken, when it is
 doing the correct thing from what I've seen?

You're confusing encoding with locale.  UTF-8 is an encoding.  If
there were no locale (i.e. it was set to C, simple byte ordering) then
you'd be correct.  However, a locale is a different animal altogether.
For instance, most locales ignore many characters when it comes to
sort ordering, such as spaces:

smarlowe=# create table test (i text);
CREATE TABLE
smarlowe=# insert into test values ('abc'),(' abc'),('def'),(' def');
INSERT 0 4
smarlowe=# select * from test order by i;
  i
--
 abc
  abc
 def
  def
(4 rows)

Note the spaces are ignored for sorting purposes.
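
If you want to see what plain byte ordering would give you without
re-initdb'ing, and you're on 9.1 or newer, you can override the collation for
a single query (just a sanity check, not a fix):

select * from test order by i collate "C";  -- the two space-prefixed rows now sort first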



Re: [SQL] order by different on mac vs linux

2012-05-15 Thread Scott Marlowe
On Tue, May 15, 2012 at 10:06 AM, Wes James compte...@gmail.com wrote:
 On Mon, May 14, 2012 at 5:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Wes James compte...@gmail.com writes:
 Why is there a different order on the different platforms.

 This is not exactly unusual.  You should first check to see if
 lc_collate is set differently in the two installations --- but even if
 it's the same, there are often platform-specific interpretations of
 the sorting rules.  (Not to mention that OS X is flat out broken when
 it comes to sorting UTF8 data ...)

 If you want consistent cross-platform results, C locale will get
 that for you, but it's pretty stupid about non-ASCII characters.

 For more info read
 http://www.postgresql.org/docs/9.1/static/charset.html

                        regards, tom lane

 I tried using the postgres that comes with ubuntu (sudo apt-get
 install postgresql).

 With my app I kept getting invalid password.  I went in to the
 database sudo -u postgres database and did \password and set a
 password, but I still got invalid password error from the app api
 trying to make a connection.

 I then went back to the source installed version and now the output is
 correct.  I'm not sure what changed. hmmm.

It's most likely an issue with the settings in your pg_hba.conf file.
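
Compare the pg_hba.conf each install is actually reading (show hba_file; will
tell you where it lives).  The packaged Ubuntu install usually defaults to
ident/peer auth for local connections, which makes password logins fail.  As a
purely illustrative example, a line like this allows md5 password auth over
TCP from localhost:

# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5

followed by a reload (pg_ctl reload, or select pg_reload_conf(); as a
superuser).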



Re: [SQL] Current transaction is aborted, commands ignored until end of transaction block

2011-12-30 Thread Scott Marlowe
On Fri, Dec 30, 2011 at 6:19 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:


 On Fri, Dec 30, 2011 at 4:03 PM, Scott Marlowe scott.marl...@gmail.com
 wrote:

 On Fri, Dec 30, 2011 at 4:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  Jan Bakuwel jan.baku...@greenpeace.org writes:
  Why-o-why have the PostgreSQL developers decided to do it this way...?
 
  Because starting and cleaning up a subtransaction is an expensive thing.
  If we had auto-rollback at the statement level, you would be paying that
  overhead for every statement in every transaction, whether you need it
  or not (since obviously there's no way to forecast in advance whether a
  statement will fail).  Making it depend on explicit savepoints allows
  the user/application to control whether that overhead is expended or
  not.
 
  If you want to pay that price all the time, there are client-side
  frameworks that will do it for you, or you can roll your own easily
  enough.  So we do not see it as a big deal that the database server
  itself doesn't act that way.

 Having used PostgreSQL a LOT, I find that being able to throw an
 entire update at the db and having it fail / be rolled back / CTRL-C
 out of and fix the problem is actually much less work than the
 frameworks for other databases.  Once you've chased down bad data in a
 load file a few times, it's really pretty easy to spot and fix these
 issues and just run the whole transaction again.  Since PostgreSQL
 doesn't have a very big penalty for rolling back a whole transaction
 it's not that bad.  Some dbs, like MySQL with innodb table handler
 have a 10:1 or greater penalty for rollbacks.  Insert a million rows
 in innodb then issue a rollback and go get a sandwich.  In PostgreSQL
 a rollback is generally instantaneous, with the only real cost being
 bloat in the tables or indexes.


 More to the point - if a statement is truly independent of all the other
 statements in a transaction, it would seem that the transaction itself is
 poorly defined.  The whole point of a transaction is to define an atomic
 unit of work. If you don't care about atomicity, enable auto commit and just
 catch the constraint violation exception and continue on your merry way.

But the performance penalty for autocommit is huge.  It's still almost
always faster to run a single big transaction and fix errors than to
do single commits when you're doing a large import.
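
For the cases where you really do want per-statement recovery inside one big
transaction, the explicit savepoints Tom mentions look roughly like this
(sketch only, table name made up):

begin;
insert into load_target values (1, 'good row');
savepoint s1;
insert into load_target values (1, 'this one violates a constraint');
rollback to savepoint s1;  -- only the statement since the savepoint is undone
insert into load_target values (2, 'carry on loading');
commit;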



Re: [SQL] Correlating Asterisk CDRs

2011-12-07 Thread Scott Marlowe
2011/12/7 Raj Mathur (राज माथुर) r...@linux-delhi.org:
                                                              QUERY PLAN
 -----------------------------------------------------------------------------------------------------------
   Limit  (cost=46782.15..46782.40 rows=100 width=109) (actual time=4077.866..4078.054 rows=100 loops=1)
     ->  Sort  (cost=46782.15..46785.33 rows=1272 width=109) (actual time=4077.863..4077.926 rows=100 loops=1)
           Sort Key: cdr.calldate, cdr2.calldate, cdr.clid
           Sort Method:  top-N heapsort  Memory: 42kB
           ->  Merge Join  (cost=2.95..46733.54 rows=1272 width=109) (actual time=0.070..3799.546 rows=168307 loops=1)

Two things to look at here.  First, the estimated and actual row counts
differ by a factor of over 100, which means the query planner may be
making suboptimal choices in terms of the plan it is running.  If
increasing the stats target on the target columns in the query helps,
then that's worth trying: raise it, re-analyze, and see if you get a
closer estimate.  To test whether the merge join is the best choice or
not, you can use the enable_xxx settings (in this case set
enable_mergejoin=off) and then run the query again through explain
analyze and see if the performance gets any better.
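
Concretely, something along these lines (the 500 is just a starting point,
and I'm guessing at the interesting columns from the sort keys):

alter table cdr alter column calldate set statistics 500;
analyze cdr;

set enable_mergejoin = off;
explain analyze ... your query ... ;
set enable_mergejoin = on;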



Re: [SQL] updating a sequence

2011-11-15 Thread Scott Marlowe
On Tue, Nov 15, 2011 at 5:33 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
 On Tue, Nov 15, 2011 at 4:28 PM, John Fabiani jo...@jfcomputer.com wrote:

 On Tuesday, November 15, 2011 08:33:54 am Richard Broersma wrote:
  On Tue, Nov 15, 2011 at 8:06 AM, John Fabiani jo...@jfcomputer.com
  wrote:
   alter sequence somename restart with (select max(pk) from sometable).
  
   I need this for automating an ETL (using pentaho).
 
 
  http://www.postgresql.org/docs/9.1/interactive/functions-sequence.html#FUNC
  TIONS-SEQUENCE-TABLE


 I don't see how that helps answer my problem.  I know how to update a
 sequence.  I want to pass a value for the restart value that depends on a
 query
 - all in one statement.  I would think it is a common problem i.e.
 migrating
 data.


 use a subquery to set the value -
 select setval('foo', select max(some_id) from some_table)
 It's all right there in the docs that you were pointed to. We try to
 encourage people to be somewhat self sufficient around here.

You need to wrap a subselect in ():

select setval('foo', (select max(some_id) from some_table));

That works in 9.1.1.  No clue about previous versions off the top of
my head, but I seem to recall it doesn't work in 8.3 and prior
versions.



Re: [SQL] updating a sequence

2011-11-15 Thread Scott Marlowe
On Tue, Nov 15, 2011 at 8:46 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Tue, Nov 15, 2011 at 5:33 PM, Samuel Gendler
 sgend...@ideasculptor.com wrote:
 On Tue, Nov 15, 2011 at 4:28 PM, John Fabiani jo...@jfcomputer.com wrote:

 On Tuesday, November 15, 2011 08:33:54 am Richard Broersma wrote:
  On Tue, Nov 15, 2011 at 8:06 AM, John Fabiani jo...@jfcomputer.com
  wrote:
   alter sequence somename restart with (select max(pk) from sometable).
  
   I need this for automating an ETL (using pentaho).
 
 
  http://www.postgresql.org/docs/9.1/interactive/functions-sequence.html#FUNC
  TIONS-SEQUENCE-TABLE


 I don't see how that helps answer my problem.  I know how to update a
 sequence.  I want to pass a value for the restart vaue that depends on a
 query
 - all in one statement.  I would think it is a common problem i.e.
 migrating
 data.


 use a subquery to set the value -
 select setval('foo', select max(some_id) from some_table)
  It's all right there in the docs that you were pointed to. We try to
 encourage people to be somewhat self sufficient around here.

 You need to wrap a subselect in ():

 select setval('foo', (select max(some_id) from some_table));

 That works in 9.1.1.  No clue about previous versions off the top of
 my head, but I seem to recall it doesn't work in 8.3 and prior
 versions.

I was wrong, it definitely works in 8.3.  But only with the parens.



Re: [SQL] How to write query to access another odbc source

2011-10-19 Thread Scott Marlowe
On Wed, Oct 19, 2011 at 9:19 AM, Rich rhd...@gmail.com wrote:
 I have a mumps database with an ODBC connection so I can write queries
 from this database.  How can I write a sql in Postgresql to access this
 database to use in my Postgresql reports?

dblink lets one pg server access another via SQL; dbi-link lets one pg
server access any other db server via SQL.  Not sure if dbi-link is
still maintained or not.
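
For the postgres-to-postgres case, dblink usage looks roughly like this (it's
a contrib module you have to install first; the connection string and remote
table name here are made up):

select *
from dblink('host=otherhost dbname=otherdb user=report password=secret',
            'select id, name from remote_table')
     as t(id integer, name text);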



Re: [SQL] Want some basic compare of data type on PostgreSQL and MySQL

2011-09-01 Thread Scott Marlowe
On Thu, Sep 1, 2011 at 7:32 AM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 On Thursday 1. September 2011 11.16.23 Scott Marlowe wrote:
 The postgresql type text is a varchar with
 no precision that can hold up to about a gig or so of text.  Not that
 I recommend putting a gig of text into a single field in a database.

 Printed out as plain text on paper with 4,000 characters per page, it would
 produce about 500 volumes of 500 pages each. That would take up 20 running
 metres of shelf space.

 In order to avoid global deforestation, please restrain your urge to print out
 those big text fields.

I'm sure with an appropriately small font we could cut that right
down, pardon the pun.



Re: [SQL] Add one column to another

2011-08-25 Thread Scott Marlowe
On Thu, Aug 25, 2011 at 8:52 AM, Oliveiros d'Azevedo Cristina
oliveiros.crist...@marktest.pt wrote:
 Something like this...?

 SELECT first_name,surname, email1 || ';' || email2
 FROM t_your_table;

If there are any nulls in email1 or email2 they'll need special
handling with coalesce.
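
Something like this, for instance (untested, and assuming you'd rather see an
empty string than lose the whole concatenation when one address is null):

SELECT first_name, surname,
       coalesce(email1, '') || ';' || coalesce(email2, '') AS emails
FROM t_your_table;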



Re: [SQL] (pgsql8.4) DATA Corruption

2011-08-19 Thread Scott Marlowe
On Fri, Aug 19, 2011 at 1:08 PM, Mikola Rose mr...@power-soft.com wrote:
 Happy Friday people!



 I was wondering if anyone had any suggestions on how to resolve this
 issue...  I am moving otrs to another server and during the backup process I
 am running into this error

 pg_dump: dumping contents of table article_attachment
 pg_dump: SQL command failed
 pg_dump: Error message from server: ERROR:  unexpected chunk number 7
 (expected 6) for toast value 77281 in pg_toast_57366
 pg_dump: The command was: COPY public.article_attachment (id, article_id,
 filename, content_size, content_type, content, create_time, create_by,
 change_time, change_by, content_id, content_alternative) TO stdout;
 pg_dump: *** aborted because of error

 I have tried a vacuum and reindex with no success.

You'll likely have to figure out which blocks are corrupted, copy out
the good data using a where clause that excludes them, then get what
you can out of it, truncate the table, and reinsert the data.

Then figure out what part of your hardware is / might be dodgy: memtest,
disk check, etc.
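
The copy-out step usually ends up looking something like this (the id range
is purely illustrative; you have to narrow the bad rows down by trial and
error):

create table article_attachment_good as
select *
from article_attachment
where id not between 12340 and 12350;  -- hypothetical range covering the corrupt rows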



Re: [SQL] Max column number.

2011-07-13 Thread Scott Marlowe
On Wed, Jul 13, 2011 at 9:45 AM, Miguel Angel Conte diaf...@gmail.com wrote:
 Hi,
 Thanks for your interest. This app loads csv files which change every day
 (sometimes the columns too). The sizes of these files are on average 15MB, so
 we load something like 100MB each day. We tried to find a better solution
 but we couldn't, because one of our requirements is not to use a lot of
 space. Also, the app is used to query this information, and for our
 particular type of select queries, we get the best performance if the
 information is all in the same row.

This definitely sounds like a job for something like hstore.

Also, can you drop and recreate the table every so often?  That would
certainly help.
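
A rough idea of what that could look like with hstore (column and key names
are invented, and hstore is a contrib module you have to install first):

create table daily_load (
    id        serial primary key,
    loaded_on date not null,
    attrs     hstore
);

insert into daily_load (loaded_on, attrs)
values (current_date, 'col_a=>1, col_b=>"some text"');

select attrs -> 'col_a' from daily_load where loaded_on = current_date;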



Re: [SQL] Max column number.

2011-07-12 Thread Scott Marlowe
On Tue, Jul 12, 2011 at 12:08 PM, Miguel Angel Conte diaf...@gmail.com wrote:
 Unfortunately It's an inherited data model and I can't make any change for
 now...
 Thanks for your answer!

When you can change it, look at hstore.



Re: [SQL] what is similar to like operator in mysql for postgresql

2011-07-10 Thread Scott Marlowe
On Sun, Jul 10, 2011 at 1:35 PM, hatem gamal elzanaty ha...@softpro.bz wrote:
 hi all,
 can you tell me what is similar to like operator in mysql for postgresql
 hatem gamal

In postgresql ILIKE is like mysql's case-insensitive LIKE.  LIKE in
postgres is case sensitive.
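
Quick illustration:

select 'Foo' like 'foo';   -- false
select 'Foo' ilike 'foo';  -- true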



Re: [SQL] To find process that lock a table

2011-06-06 Thread Scott Marlowe
On Mon, Jun 6, 2011 at 5:16 PM, manuel antonio ochoa
manuel8aalf...@gmail.com wrote:
 Hello ,

 do you know how to find a process that is locking a table?  I tried to run a
 vacuum analyze and it went into a waiting state; I canceled it after 20
 minutes, then I tried to run an analyze and the same thing happened, it
 went into a waiting state.

Try the queries here: http://wiki.postgresql.org/wiki/Lock_Monitoring
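
If you just want a quick look, something like this works on 8.4/9.0-era
versions (column names vary between versions):

select * from pg_locks where not granted;   -- who's waiting on a lock
select procpid, waiting, current_query
from pg_stat_activity;                      -- what each backend is doing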



Re: [SQL] Function to total reset a schema

2011-05-29 Thread Scott Marlowe
On Sun, May 29, 2011 at 1:38 AM, Surfing onlinesurf...@gmail.com wrote:
 Hi all,
 I need to write a function that totally empties a schema.

 So I have written a TRUNCATE statement for each table and set to 0 each
 sequence.
 Btw, it could be good to execute a vacuum statement on each table, but from
 within the function this is not allowed.

You shouldn't need to vacuum truncated tables; at a lower level they're
basically just wiped out and replaced by a new empty file / table.
Same goes for the indexes, I believe.



Re: [SQL] Sorting Issue

2011-05-10 Thread Scott Marlowe
On Tue, May 10, 2011 at 11:45 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:


 On Tue, May 10, 2011 at 9:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ozer, Pam po...@automotive.com writes:
  Isn't this the English standard for collation?  Or is this a non-c
  locale as mentioned below?  Is there anyway around this?

         LC_COLLATE = 'en_US.utf8'

 en_US is probably using somebody's idea of dictionary order, which
 I believe includes ignoring spaces in the first pass.  You might be
 happier using C collation.  Unfortunately that requires re-initdb'ing
 your database (as of existing PG releases).


 ugh. So what's the initdb incantation necessary to sort the way I'd expect
 an alphabetic sort to happen?  I'm literally just in the process of bringing
 up a new project, so it's a perfect opportunity for me to get this set up
 correctly to begin with.  The default on my system was definitely
 en_US.utf8.

initdb --locale=C



Re: [SQL] Automating PostgreSql table partition using triggers

2011-01-27 Thread Scott Marlowe
On Thu, Jan 27, 2011 at 10:50 AM, Amitabh Kant amitabhk...@gmail.com wrote:
 Hi

 I am trying to write a function which is being called from a trigger used
 for partitioning a large table. The partitioning is to happen based on an
 integer field (testing_id). A simplified structure of what I am trying to do
 is written below.

My advice is to move the table creation from a trigger to a nightly
cron job.  I have one that runs each night that checks for the
existence of the needed tables and creates new ones for up to a month
in advance and emails me the results.  That way if it starts failing
I've got 30 days to get things fixed.  Then another script runs that
creates my new triggers to work with the new partitions.

Too many race conditions and performance issues with using dynamic DDL
to create partitions.
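
The nightly job itself only has to run pre-generated DDL; for an integer-keyed
scheme like yours, the statement it issues looks something like this (parent
and child names and the range are made up):

create table mytable_2000_3000 (
    check ( testing_id >= 2000 and testing_id < 3000 )
) inherits (mytable);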



Re: [SQL] Compare the resulta of a count sql into bash

2011-01-26 Thread Scott Marlowe
On Wed, Jan 26, 2011 at 10:07 AM, manuel antonio ochoa
manuel8aalf...@gmail.com wrote:

 Hello,
 I have the next :

 COUNTONE=`/var/lib/pgsql/bin/psql -U 'Thor' -d princlocal -p 5432 -h
 192.170.1.82 -c "select count(*) from monterrey.${NOMBRETB}"`
 COUNTTWO=`/var/lib/pgsql/bin/psql -U 'Thor' -d princlocal -p 5432 -h
 192.170.1.82 -c "select count(*) from monterrey.$nombre where recibo
 between '$FI' and '$FF'"`

 I want to compare the result COUNTONE with COUNTTWO; how does that work?

echo $((COUNTONE-COUNTTWO));



Re: [SQL] Benchmarking

2011-01-25 Thread Scott Marlowe
On Tue, Jan 25, 2011 at 10:39 AM, manuel antonio ochoa
manuel8aalf...@gmail.com wrote:
 hello

 do you know a tool to benchmark my database, and the basic tests that I need
 to do?
 I found pgbench!

pgbench can be used as a generic testing tool by giving it a new set
of sql statements to run with the -f switch.
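
An invocation would look something like this (script name, client count, run
time and db name are all just placeholders):

pgbench -f my_test.sql -c 8 -T 60 mydb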



Re: [SQL] check files .backup

2011-01-24 Thread Scott Marlowe
On Mon, Jan 24, 2011 at 5:18 PM, manuel antonio ochoa
manuel8aalf...@gmail.com wrote:
 Do you know if there exists a function to check my file.backup created by pg_dump?

I run backups like this in bash:

if (pg_dump yadayada); then
echo backup succeeded.
else
echo backup failed.
fi;

in a cronjob set to email me.  Then when it fails or not, I see it
right there in my nightly email.



Re: [SQL] Database consistency after a power shortage

2010-12-15 Thread Scott Marlowe
On Wed, Dec 15, 2010 at 8:12 AM, Alberto blob2...@gmail.com wrote:
 My question is regarding a potential situation:

 I have a program that inserts values on 3 tables linked to each other. My
 program is used in a POS. In this specific case, the program has to update
 the tables header_invoice, detail_invoice and
 payments_x_header_invoice.

 In a normal operation, the program should insert first a registry on
 header_invoice, then insert N registries on detail_invoice referencing
 the header_invoice number. After that it should insert N registries
 regarding the payments related to the header_invoice, referencing again the
 invoice.

 So the order goes like this:
 1) Insert 1 new registry on header_invoice
 2) Insert N registries on detail_invoice referencing header_invoice
 3) Insert N registries on payments_x_header_invoice referencing the
 header_invoice

 If, let's say, the header_invoice registry was inserted, the operation was
 committed and then a power shortage occurs and the system shuts down. In
 that case the database will never know that more registries had to be
 inserted, because that happened on the application level.

 Is there any way to make the 3 operations be one transaction for the
 database, so that it keeps them all consistent in case a power shortage
 occurs in the middle?

Yes, put them in a transaction.

begin;
insert into header_invoice ...
insert into detail_invoice ...
insert into payments_x_header_invoice ...
commit;

Then they either all go or none go.



Re: [SQL] Need magic for inserting in 2 tables

2010-10-03 Thread Scott Marlowe
On Sun, Oct 3, 2010 at 4:14 PM, Andreas maps...@gmx.net wrote:
 insert into staff ( company_fk, ..., department_fk )
 select  company_fk, ..., department_fk
 from     departments,   companies,   company_2_project  AS c2p
 where  company_id      =   c2p.company_fk
    and c2p.project_fk    =   42
    and department_id  in  ( 40, 50 );

 step 2 would be to link those new blank staff records to project 42 by
 inserting a record into staff_2_project for every new staff_id.

 How can I find the new staff_ids while making sure I don't insert ids from
 other sessions?
 Is there an elegant way in SQL ?

Use returning?

insert into .
yada
returning field1, field2, field3
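
With your tables that would be roughly (untested):

insert into staff ( company_fk, ..., department_fk )
select  company_fk, ..., department_fk
from    departments, companies, company_2_project AS c2p
where   company_id     = c2p.company_fk
   and  c2p.project_fk = 42
   and  department_id in ( 40, 50 )
returning staff_id;

and your script reads the staff_id values back just like the rows of a select.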

--  To understand recursion, one must first understand recursion.



Re: [SQL] Need magic for inserting in 2 tables

2010-10-03 Thread Scott Marlowe
On Sun, Oct 3, 2010 at 6:47 PM, Andreas maps...@gmx.net wrote:

 How can a script use what RETURNING dumps out?
 I tried a bit but got nowhere.

The same way it would use the output of a select, it's a record set.
So it's x rows by y columns.

-- 
To understand recursion, one must first understand recursion.



Re: [SQL] Need magic for inserting in 2 tables

2010-10-03 Thread Scott Marlowe
On Sun, Oct 3, 2010 at 8:31 PM, Andreas maps...@gmx.net wrote:

 insert into t1_t2 ( fk_1, fk_2 )
    insert into table_1 ( txt )
    values ( 'A' ), ( 'B' ), ( 'C' )
    returning id_1, 42;

 The inner insert works and dumps the inserted ids along with the constant
 which is needed in the outer insert as reference to the project.

 Both inserts run together give an error.

Yeah, it's not capable of directly feeding the next insert like that.
You run the first insert, get the results back, then cycle through
them in your code to make the new inserts based on that.


-- 
To understand recursion, one must first understand recursion.



Re: [SQL] PostGres Tables in ArcSDE and ArcCatalog.

2010-08-31 Thread Scott Marlowe
On Mon, Aug 30, 2010 at 1:31 PM, Michael Andrew Babb ba...@uw.edu wrote:
 Hi All,
 If I execute a make table query along the lines of “select * into
 SF30001_test from SF30001” I can interact with the table in ArcCatalog just
 fine.

what do
\z SF30001
\z SF30001_test
say about the permissions on the two tables?
-- 
To understand recursion, one must first understand recursion.



Re: [SQL] Question about POSIX Regular Expressions performance on large dataset.

2010-08-17 Thread Scott Marlowe
On Tue, Aug 17, 2010 at 8:21 PM, Jose Ildefonso Camargo Tolosa
ildefonso.cama...@gmail.com wrote:
 Hi!

 I'm analyzing the possibility of using PostgreSQL to store a huge
 amount of data (around 1000M records, or so), and even though these records
 are short (each record just has a timestamp and a string that
 is less than 128 characters in length), the strings will be matched
 against POSIX Regular Expressions (different regexps, and maybe
 complex).

 Because I don't have a system large enough to test this here, I have
 to ask you (I may borrow a medium-size server, but it would take a
 week or more, so I decided to ask here first).  How is the performance
 of Regexp matching in PostgreSQL?  Can it use indexes? My guess is:
 no, because I don't see a way of generally indexing to match regexp :(
 , so, tablescans for this huge dataset.

 What do you think of this?

Yes, it can index such things, but it has to index them in a fixed way,
i.e. you can create functional indexes with pre-built regexes.  But
for ones where the values change each time, you're correct: no indexes
will be used.

Could full text searching be used instead?
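
If the searches can be expressed as word or prefix matches rather than
arbitrary regexes, the built-in full text search can use a GIN index, along
these lines (table and column names are made up):

create index log_msgs_fts_idx on log_msgs
    using gin (to_tsvector('english', msg));

select * from log_msgs
where to_tsvector('english', msg) @@ to_tsquery('english', 'timeout');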



Re: [SQL] Question about POSIX Regular Expressions performance on large dataset.

2010-08-17 Thread Scott Marlowe
You can do something similar on the same machine if you can come up
with a common way to partition your data.  Then you split your 1B rows
up into chunks of 10M or so and put each on a table and hit the right
table.  You can use partitioning / table inheritance if you want to,
or just know the table name ahead of time.

We did something similar with mnogo search.  We break it up into a few
hundred different schemas and hit the one for a particular site to
keep the individual mnogo search tables small and fast.

On Tue, Aug 17, 2010 at 8:30 PM, Jose Ildefonso Camargo Tolosa
ildefonso.cama...@gmail.com wrote:
 Hi, again,

 I just had this wacky idea, and wanted to share it:

 what do you think of having the dataset divided among several servers,
 and sending the query to all of them, and then just have the
 application unify the results from all the servers?

 Would that work for this kind of *one table* search? (there are no
 joins, and will never be).  I think it should, but: what do you think?

 Ildefonso.

 On Tue, Aug 17, 2010 at 9:51 PM, Jose Ildefonso Camargo Tolosa
 ildefonso.cama...@gmail.com wrote:
 Hi!

 I'm analyzing the possibility of using PostgreSQL to store a huge
 amount of data (around 1000M records, or so), and these, even
 though are short (each record just have a timestamp, and a string that
 is less than 128 characters in length), the strings will be matched
 against POSIX Regular Expressions (different regexps, and maybe
 complex).

 Because I don't have a system large enough to test this here, I have
 to ask you (I may borrow a medium-size server, but it would take a
 week or more, so I decided to ask here first).  How is the performance
 of Regexp matching in PostgreSQL?  Can it use indexes? My guess is:
 no, because I don't see a way of generally indexing to match regexp :(
 , so, tablescans for this huge dataset.

 What do you think of this?

 Sincerely,

 Ildefonso Camargo






-- 
To understand recursion, one must first understand recursion.



Re: [SQL] Round integer division

2010-06-25 Thread Scott Marlowe
On Fri, Jun 25, 2010 at 6:53 PM, Lee Hachadoorian
lee.hachadoor...@gmail.com wrote:
 Is it documented anywhere that floating-point numbers round
 scientifically, that is 0.5 rounds to the nearest even number? Compare:

 SELECT round(2.5::real), round(2.5::numeric), round(3.5::real),
 round(3.5::numeric);

 generates

 2 | 3 | 4 | 4

 I stumbled across this when I was trying to use round(a::real/b::real)
 to generate a rounded result when dividing integers, and noticed sometimes
 0.5 was truncated and sometimes it was rounded up. Couldn't find
 anything about this in the archives or the data type documentation. Is
 there something obvious that I'm missing?

That all floating point representations are approximate?
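
If you want predictable half-up rounding, cast to numeric before rounding;
numeric rounds halves away from zero, while float rounding follows the
platform's round-half-to-even behaviour.  For example:

select round(5::numeric / 2);  -- 3
select round(5::real / 2);     -- typically 2, since 2.5 rounds to the nearest even number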



Re: [SQL] oracle to postgres migration question

2010-06-16 Thread Scott Marlowe
On Wed, Jun 16, 2010 at 1:42 PM, Chris Browne cbbro...@acm.org wrote:
 sfr...@snowman.net (Stephen Frost) writes:
 People who are trying to parse psql's output directly should realize
 they probably are going about it the wrong way. :)

 There's a set of people I need to tell that to...

If you're at least making the output something like tab-, space-, or
pipe-delimited, you can parse it.  On systems with only bash to play with,
I've done that before because it and psql were the only tools I had to
work with.  I wouldn't try to write some masterpiece artwork of code
around psql output, but for some short scripts it's usable, and way
easier than dealing with Oracle.
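
psql already has switches for exactly that; something like this gives you one
record per line, no headers, with an explicit separator (query and db name are
placeholders):

psql -At -F '|' -c "select id, name from mytable" mydb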



Re: [SQL] oracle to postgres migration question

2010-06-15 Thread Scott Marlowe
On Tue, Jun 15, 2010 at 1:09 PM, Joshua Gooding jgood...@ttitech.net wrote:
 Hello,

 I'm looking for the postgres equivalent of oracle's set numwidth command.
  Is there an equivalent?

Psql uses dynamic formatting for such things.  Not sure there's really
a big need for it.  Can you give an example of what you're trying to
accomplish?  (and as Bruce mentioned, you'd get more bites if you gave
a reference like
http://www.oracle.com/technology/support/tech/sql_plus/htdocs/sub_var4.html#4_1_6
so people would know what you're asking for.)



Re: [SQL] oracle to postgres migration question

2010-06-15 Thread Scott Marlowe
On Tue, Jun 15, 2010 at 2:19 PM, Stephen Frost sfr...@snowman.net wrote:
 * Bruce Momjian (br...@momjian.us) wrote:
 Joshua Gooding wrote:
  Hello,
 
  I'm looking for the postgres equivalent of oracles: set numwidth
  command.  Is there an equivalent?

 If we knew what it did, we might be able to help you.

 Changes the display-width for numeric values.  SQL*Plus will then
 right-justify the number based on the numwidth value.

Note that psql automagically right justifies numerics and dynamically
sizes all columns so you don't have to do as much of this stuff.
Oracle always made me feel like I was operating the machine behind the
curtain in the Wizard of Oz, lots of handles and switches and knobs I
had to mess with to get useful output.



Re: [SQL] import ignoring duplicates

2010-05-17 Thread Scott Marlowe
On Sun, May 16, 2010 at 12:38 PM, Mark Fenbers mark.fenb...@noaa.gov wrote:
 I am using psql's \copy command to add records to a database from a file.
  The file has over 100,000 lines.  Occasionally, there is a duplicate, and
 the import ceases and an internal rollback is performed.  In other words, no
 data is imported even if the first error occurs near the end of the file.

 I am looking for an option/switch to tell psql (or the \copy command) to
 skip over any duplicate key constraint violations and continue to load any
 data that doesn't violate a duplicate key constraint.  Is there such an
 option?

Sounds like you want this:

http://pgfoundry.org/projects/pgloader/

Note that copy is optimized to work in a single transaction.  Breaking
those semantics WILL result in a slow load time, and there's not much
you can do about that.



Re: [SQL] advice on query joining 10 tables

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 12:40 PM, Gerardo Herzig gher...@fmed.uba.ar wrote:
 Hi all. I'm trying to see if I can improve the performance of a query
 (mainly a 10 table join).

 1) Besides trying to use indexes, are there some rules of thumb to follow?

log long running queries for later analysis?

 2) Should i try to join the bigger tables last in the query?

The query planner can do that for you automatically.

 3) There is some place for understanding EXPLAIN better?

http://explain.depesz.com/

Here and the docs?  I've found it pretty easy to post one here and ask
for help on why a query isn't running well.  Note that explain analyze
is much preferred to plain explain.



Re: [SQL] advice on query joining 10 tables

2010-05-10 Thread Scott Marlowe
On Mon, May 10, 2010 at 2:38 PM, Gerardo Herzig gher...@fmed.uba.ar wrote:
 Scott Marlowe wrote:
 On Mon, May 10, 2010 at 12:40 PM, Gerardo Herzig gher...@fmed.uba.ar wrote:
 Hi all. Im triyng to see if i can improve the performance of a query
 (mainly a 10 table join)

 1) Besides of triyng to use indexes, there is some rules of thumb to follow?

 log long running queries for later analysis?

 2) Should i try to join the bigger tables last in the query?

 The query planner can do that for you automatically.

 3) There is some place for understanding EXPLAIN better?

 http://explain.depesz.com/

 Here and the docs?  I've found it pretty easy to post one here and ask
 for help and get it on why a query isn't running well.  Note that
 explain analyze is much preferred to plain explain.

 Well, thanks Tom and Scott for the answers. I will take some more time
 reading the docs. I'm looking to understand it better, rather than just
 waiting for someone to point out my errors.

Also, search the pgsql archives for explain analyze for lots of
posts.  You can learn quite a bit trawling the archives like that.



Re: [SQL] Problem with insert related to different schemas

2010-04-21 Thread Scott Marlowe
On Wed, Apr 21, 2010 at 7:25 AM, Gonzalo Aguilar Delgado
gagui...@aguilardelgado.com wrote:
 Hi Tom,



  This is a select query.  I don't think that's the right error message.

   Yes, but it IS the correct error message.

 The query being complained of appears to be a generated foreign key
 checking query.  It's not surprising it would appear in the context
 of an insert.

 Yes, that's what I think too...


  It appeared just after upgrading
  to 8.4 as 8.3 had no problem processing this query.

 You've either changed the permissions on schema public from what they
 were in the old installation, or linked an FK constraint to the wrong
 table.  I see no reason to think there is either a bug or a version
 difference here.

 Maybe, but I even ran grant all on schema public for this user, to check
 if this was the problem.

That doesn't do what you think it does.  You need to grant on the actual object.
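
i.e. something along these lines, with whatever the role and the referenced
table actually are (names here are invented):

grant usage on schema public to app_user;
grant select, references on public.referenced_table to app_user;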



Re: [SQL] Problem with insert related to different schemas

2010-04-20 Thread Scott Marlowe
On Tue, Apr 20, 2010 at 10:32 AM, Gonzalo Aguilar Delgado
gagui...@aguilardelgado.com wrote:

 Hi Everyone,

 I've come along with a problem that appeared with latest version of
 Postgresql 8.4.2.

 I'm trying to insert a row in the analysis schema:


 This is an insert query:

 Yes it is...


 INSERT INTO
 analisys.response_quality
 (uuid,id_survey_question,id_survey,id_survey_status,id_shop,survey_question_response,id_survey_answer,date_survey_answer)
 VALUES ('83968de5-eac7-4f52-bbec-7392a198f556',7,1,1,16,3,9987,'2009-12-01
 00:00:00.00 +01:00:00')


 But it fails because:

 This is a select query.  I don't think that's the right error message.

 Yes, but it IS the correct error message. It appeared just after upgrading to
 8.4 as 8.3 had no problem processing this query.

A self-contained example would be helpful.



Re: [SQL] Table Design for Hierarchical Data

2010-04-07 Thread Scott Marlowe
On Tue, Apr 6, 2010 at 11:43 PM, silly sad s...@bankir.ru wrote:
 P.S.
 almost foget, do not try any oracle-like tree-jouns or special types or
 such a crap.

 your problem as plain as to store a pair of integers
 (or numerics (i prefer))

Since it's an identifier and not really a numeric per se, I'd store it
as text.  I mean it could as easily be a 5 character alpha code as 5
character number code.

With text you can create indexes on substring(idfield,1,1),
substring(idfield,1,2), substring(idfield,1,3),
substring(idfield,1,4), and substring(idfield,1,5) for fast lookups
and matching.
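
i.e. something like (table name made up):

create index parts_l1_idx on parts (substring(idfield,1,1));
create index parts_l2_idx on parts (substring(idfield,1,2));

select * from parts where substring(idfield,1,2) = 'A3';  -- can use the second index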



Re: [SQL] Rename Index - Deadlock

2010-03-16 Thread Scott Marlowe
On Tue, Mar 16, 2010 at 3:45 AM, Thomas Kenner thomas.ken...@gmail.com wrote:
 Hi,

 Each day I'm recreating the index my_index of the table my_table. Therefore I
 create a new index my_index_new, drop the old index my_index, and rename the
 new index:
 ALTER INDEX my_index_new RENAME TO my_index;

 If an insert or select statement is run at the same time as the ALTER INDEX
 ... RENAME TO statement, it will end up in a deadlock.

A deadlock or just a lock?  What version of pg?  What exact error
messages do you get?



Re: [SQL] Issue with insert

2010-02-27 Thread Scott Marlowe
On Sat, Feb 27, 2010 at 10:14 AM, Michael Gould
mgo...@intermodalsoftwaresolutions.net wrote:
 I'm having a issue with a insert

 INSERT INTO
  iss.citystateinfo
 (
  citystateinfoid,
  citystate,
  zipcode,
  cityname,
  statecode
 )
 VALUES (
  '31344342-3439-4135-2d32-3044462d3433',
  'Orange Park,FL',
  '32065',
  'Orange Park',
  'FL'
 );

 This inserts correctly, however when I view the data the citystate is always
 displayed as

 Orange Park (FL).  Every row is displayed the same.  I've used PGAdmin to
 look at the data, EMS for PostGres and have exported the data to Excel and
 they all display the data incorrectly.  This is a issue for us because we
 want the user to type in either the zipcode if known to bring back the city
 and state, or the city,state to bring back the zipcode.

 Any ideas on why this could be happening?  I'm running on Windows 7 x64
 using 8.4.2

When you display it, do you use pgadmin or psql?  Does psql do the same thing?

Can we see the ddl that created this table?  Can you create a
self-contained test-case that others can run and see the same results?



Re: [SQL] Fwd: Help required on query performance

2010-01-31 Thread Scott Marlowe
On Sun, Jan 31, 2010 at 5:50 PM, Dave Clements dclement...@gmail.com wrote:
 Hello, I have this query in my system which takes around 2.5 seconds
 to run. I have diagnosed that the problem is actually a hashjoin on
 perm and s_ast_role tables. Is there a way I can avoid that join? I
 just want to change the query, not the environment.

What does

explain analyze select ... (rest of your query)

say?



Re: [SQL] Fwd: Help required on query performance

2010-01-31 Thread Scott Marlowe
On Sun, Jan 31, 2010 at 6:02 PM, Dave Clements dclement...@gmail.com wrote:
 Hi, following the output from explain analyze.

Without doing any heavy analysis, it looks like your row estimates are
way off.  Have you cranked up stats target and re-analyzed yet?



Re: [SQL] Fwd: Help required on query performance

2010-01-31 Thread Scott Marlowe
On Sun, Jan 31, 2010 at 6:09 PM, Dave Clements dclement...@gmail.com wrote:
 I did the re-analyze serveral times, using the command:

 ANALYZE tablename;

 Is there any other command as well or another way to do that?

It's important that the stats target gets increased as well; it looks
like you're not getting enough buckets to get a good estimate of rows
to be returned for various conditions.

# show default_statistics_target ;
 default_statistics_target
 ---------------------------
 10

# alter database smarlowe set default_statistics_target=200;
ALTER DATABASE
# analyze;

Then run the explain analyze again and see if your row estimates are
closer and if the plan changes.



Re: [SQL] Fwd: Help required on query performance

2010-01-31 Thread Scott Marlowe
On Sun, Jan 31, 2010 at 9:25 PM, Dave Clements dclement...@gmail.com wrote:
 After the analyze I am getting a time of 3.20 ms, but there is no
 HashJoin there. Still, all of them are NestLoops. But that is fine.

 Now the only problem is the sequence scan on sq_sch_idx table.
 I have a query like this:

 explain analyze select count(*) from sq_sch_idx where value = '%download%';

 This query does a sequence scan on the table. Is there a way I can
 create an index for this?

If it's not left anchored ( value like 'download%') then not with
regular old methods.  If you need to do text searching you might need
to look into the full text search indexing in pgsql.



Re: [SQL] Partitioning by letter question

2010-01-30 Thread Scott Marlowe
On Sat, Jan 30, 2010 at 7:11 AM, John Lister
john.lister...@kickstone.co.uk wrote:
 john.lister...@kickstone.com wrote:

 Hi, I was wondering if this was possible. I'm trying to partition a table,
 which is straightforward enough thanks to the great documentation, but I
 have a question:

 If I partition using something like a product_id for example and have check
 constraints such as (id >= 1000 and id < 2000) then everything is fine and
 the planner correctly uses the right subset of the tables. However I would
 like to partition by the first letter, and something like
 substr(word,1,1)='a' is ignored by the planner. From reading the docs I
 understand that complicated check constraints are ignored, but this doesn't
 seem overly complicated.

 Am I doing something wrong, or is there another, better way to do this?

 Have you tried:

 (word >= 'a' and word < 'b')

 Cheers, had my programming head on. One question:

 any ideas about what to put for the last one in the list?

 I thought something like (word >= 'z' and word < '{'), which is based on the
 ascii ordering - my db is using utf8.

 I tried to check this by doing

 select * from words where word > '' order by word limit 10;

 which returns '.' as the first result (ok not a word, but that is a
 different issue) but if i do

 select * from words where word < '.' order by word desc limit 10

 I get '/...' as the first result, I would expect '', this doesn't seem
 consistent.

Yeah, in non-C locales, things like '.' and '/' don't count for ordering.

As for the constraints, why not something like:

where word < 'a' or word > 'z'

Or something like that.  Note that I'm not taking upper and lower case
into consideration here; you might need something like lower(word) >=
'a' etc.
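
For reference, the child-table version of those constraints would look
something like this (child names invented):

create table words_a (
    check ( word >= 'a' and word < 'b' )
) inherits (words);

create table words_other (
    check ( word < 'a' or word > 'z' )
) inherits (words);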



Re: [SQL] Partitioning by letter question

2010-01-29 Thread Scott Marlowe
On Fri, Jan 29, 2010 at 3:24 PM, John Lister
john.lister...@kickstone.com wrote:
 Hi, I was wondering if this was possible. I'm trying to partition a table,
 which is straightforward enough thanks to the great documentation, but i
 have a question:

 If I partition using something like a product_id for example and have check
 constraints such as (id=1000 and id2000) then everything is fine and the
 planner correctly uses the right subset of the tables. However I would like
 to partition by the first letter and using something like this
 substr(word,1,1)='a' is ignored by the planner. From reading the docs I
 understand that complicated check constraints are ignored, but this doesn't
 seem overly complicated.

 Am i doing  something wrong or is there another better way to do this

Have you tried:

(word >= 'a' and word < 'b')

?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] CHECK constraint removing brackets

2010-01-11 Thread Scott Marlowe
On Mon, Jan 11, 2010 at 12:49 PM, Andy Shellam
andy-li...@networkmail.eu wrote:
 With the above in mind, I decided on the following check to enforce this:

 (state = 'Unconfirmed'::client.order_state AND invoice_id = NULL) OR (state 
 != 'Unconfirmed'::client.order_state AND invoice_id != NULL)

Nothing can = null, and invoice_id IS NULL is the proper
nomenclature.  Also, something != NULL makes no sense, because we
don't know what NULL is, so that becomes something IS NOT NULL.

Also != is not proper SQL, although many dbs understand it; <> is the
proper way to write NOT EQUAL TO.

 However PostgreSQL (8.4.2) converts this to the following:

 state = 'Unconfirmed'::client.order_state AND invoice_id = NULL::integer OR 
 state <> 'Unconfirmed'::client.order_state AND invoice_id <> NULL::integer

ANDs have priority over ORs so the removal of the parentheses makes no
great change here.  Also, the SQL standard is <>, not !=.

I'm guessing the real problems here are your NULL handling.  See if
changing it to IS NULL / IS NOT NULL gets you what you want.
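
In other words, something along these lines (types and column names
taken from your post) should behave the way you intended:

CHECK (
    (state =  'Unconfirmed'::client.order_state AND invoice_id IS NULL)
 OR (state <> 'Unconfirmed'::client.order_state AND invoice_id IS NOT NULL)
)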

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] 8.4.1 distinct query WITHOUT order by

2009-12-22 Thread Scott Marlowe
On Tue, Dec 22, 2009 at 12:11 AM, msi77 ms...@yandex.ru wrote:
 What are the ramifications of renaming the table (containing 8000
 rows) and creating a view of the same name?

 A view does not admit an ORDER BY clause; at least, the Standard does not.

Postgres certainly allows it, but I don't think it will help in this case.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] 8.4.1 distinct query WITHOUT order by

2009-12-21 Thread Scott Marlowe
On Mon, Dec 21, 2009 at 3:38 PM, Thomas Kellerer spam_ea...@gmx.net wrote:
 Gary Chambers wrote on 21.12.2009 23:15:

 The current maintainer is unsure about being able to do the right
 thing and recompile the code after fixing the query.

 Why not simply add the necessary GROUP BY?

Yeah, if your code base is that fragile, bandaging it up by jumping
through hoops in pgsql is just putting off the inevitable when it (the
code base) has to get recompiled someday anyway.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] 8.4.1 distinct query WITHOUT order by

2009-12-21 Thread Scott Marlowe
On Mon, Dec 21, 2009 at 8:45 PM, Gary Chambers gwch...@gmail.com wrote:
 Yeah, if your code base is that fragile, bandaging it up by jumping
 through hoops in pgsql is just putting off the inevitable when it (the
 code base) has to get recompiled someday anyway.

 I appreciate (and agree with) the concern about the fragility of the
 codebase.  The maintainer knows that anything except adding ORDER BY
 is a kludge.

 Now, the aforementioned notwithstanding...

Replacing the table with a view should work.  Just be sure to make
insert rules if you need it to be insertable by other parts of the app
I guess.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] 8.4.1 distinct query WITHOUT order by

2009-12-21 Thread Scott Marlowe
On Mon, Dec 21, 2009 at 10:18 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Mon, Dec 21, 2009 at 8:45 PM, Gary Chambers gwch...@gmail.com wrote:
 Yeah, if your code base is that fragile, bandaging it up by jumping
 through hoops in pgsql is just putting off the inevitable when it (the
 code base) has to get recompiled someday anyway.

 I appreciate (and agree with) the concern about the fragility of the
 codebase.  The maintainer knows that anything except adding ORDER BY
 is a kludge.

 Now, the aforementioned notwithstanding...

 Replacing the table with a view should work.  Just be sure to make
 insert rules if you need it to be insertable by other parts of the app
 I guess.

Actually, since the order by in the view will be applied before the
hash_agg and stuff, I don't think it will work.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] need nelp with aggregate functions

2009-11-18 Thread Scott Marlowe
On Wed, Nov 18, 2009 at 9:55 AM, Another Trad anothert...@gmail.com wrote:
 The DB structure is in attachment.
 I with the number of clients and the number of computers that have
 processors with manufacturer = INTEL and speed = 2GB
 I am trying:

 select count(c) as qtd_client, count(cm) as qtd_computers

   from cliente c
   inner JOIN computer cm on (c.cliente_id = cm.cliente_id)
   inner join processor p on (cm.processor_id = p.processor_id)
 inner join speed s on (s.speed_id = p.speed_id)
 INNER JOIN manufacturer m on (p.manufacturer_id=m.manufacturer_id)

 where m.manufacturer = 'INTEL'
 and s.speed = '2GB'

 but is not working
 anyone can help me?

If you are getting an error message it's a good idea to post it as well.

However, first problem is that c and cm are tables, not fields.  You
need to count some field from those tables.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Foreign key columns

2009-11-05 Thread Scott Marlowe
On Thu, Nov 5, 2009 at 2:08 PM, Svenne Krap svenne.li...@krap.dk wrote:
 Hi.

 Is there a simple way to get foreign key data... for example I found a
 view, that does what I want ...

 It delivers

   fk_table   |     fk_column      |     pk_table      | pk_column |           constraint_name
 --------------+--------------------+-------------------+-----------+--------------------------------------
  organisation | customer_rep       | person            | id        | organisation_customer_rep_fkey
  organisation | ekstra_skema       | ekstra_skema      | id        | org_schema_fkey
  organisation | in_group           | organisation      | id        | organisation_in_group_fkey
  organisation | org_paying_company | organisation      | id        | organisation_org_paying_company_fkey
  organisation | primary_contact    | person            | id        | primary_contact_fkey
  organisation | type               | organisation_type | id        | organisation_type_fkey



 The query in question is

 SELECT FK.TABLE_NAME as FK_Table, CU.COLUMN_NAME as FK_Column,
 PK.TABLE_NAME as PK_Table, PT.COLUMN_NAME as PK_Column,
 C.CONSTRAINT_NAME as Constraint_Name
 FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
 INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK ON C.CONSTRAINT_NAME
 = FK.CONSTRAINT_NAME
 INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK ON
 C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
 INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU ON C.CONSTRAINT_NAME =
 CU.CONSTRAINT_NAME
 INNER JOIN (
     SELECT i1.TABLE_NAME, i2.COLUMN_NAME FROM
 INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2 ON
 i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
        WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY') PT ON PT.TABLE_NAME =
 PK.TABLE_NAME WHERE FK.TABLE_NAME='organisation' ORDER BY 1,2,3,4;


 The only problem is that this query is sloow, runs in tens of seconds...

It runs in 112 milliseconds on my machine.  Maybe your catalogs are
extremely bloated?

 Is there a good native (i.e. fast) pgsql-query to find that type of
 information?

This one seems to work pretty well.  If you want to see a query to
find such things, the easy way is to start psql with the -E switch,
and issue a \d command on the organisation table and steal the SQL
from there.  That query will be pgsql specific, and possibly / likely
pgsql VERSION dependent, so know that going into it.
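
If you do want to skip the information_schema views, a minimal sketch
of the catalog route looks something like this (add joins on conkey /
confkey if you also need the column names):

select conname,
       conrelid::regclass  as fk_table,
       confrelid::regclass as pk_table
  from pg_constraint
 where contype = 'f'
   and conrelid = 'organisation'::regclass;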

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] DDL problems: Referential issue?

2009-11-04 Thread Scott Marlowe
On Wed, Nov 4, 2009 at 11:03 AM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 PostgreSQL 8.3.8 on Gentoo Linux.

 I've got a junction table:

 CREATE TABLE participants (
    person_fk INTEGER REFERENCES persons (person_id),
    event_fk INTEGER REFERENCES events (event_id) ON DELETE CASCADE,
    sort_order INTEGER NOT NULL DEFAULT 1,
    is_principal BOOLEAN NOT NULL DEFAULT TRUE,
    PRIMARY KEY (person_fk, event_fk)
 );
 CREATE INDEX event_key ON participants (event_fk);
 CREATE INDEX person_key ON participants (person_fk);

 Now I want to add some text to a few participants, but as this will probably
 only be for a few per cent, I try to create an extra table like this:

 pgslekt= CREATE TABLE participant_notes (
 pgslekt(     person_fk   INTEGER NOT NULL REFERENCES participants
 (person_fk),
 pgslekt(     event_fk    INTEGER NOT NULL REFERENCES participants (event_fk)
 ON DELETE CASCADE,
 pgslekt(     part_note   TEXT,
 pgslekt(     PRIMARY KEY (person_fk, event_fk)
 pgslekt( );
 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
 participant_notes_pkey for table participant_notes
 ERROR:  there is no unique constraint matching given keys for referenced table
 participants

 I fail to see what is the problem. I even tried to add a unique constraint to
 participants:

You're referencing a single column, which does not have a unique key
on it.  Being part of a two column unique PK index doesn't count, as
you could have an entry where one column or the other repeats on its
own while the other column changes.  You might want the syntax:

FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn
[, ... ] ) ]

where you FK a pair of columns to a pair of other columns.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] DDL problems: Referential issue?

2009-11-04 Thread Scott Marlowe
On Wed, Nov 4, 2009 at 11:36 AM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 On Wednesday 4. November 2009 19.24.29 Scott Marlowe wrote:
 You're referencing a single column, which does not have a unique key
 on it.  Being part of a two column unique PK index doesn't count, as
 you could have an entry where one column or the other repeats on its
 own while the other column changes.  You might want the syntax:

 FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn
 [, ... ] ) ]

 where you FK a pair of columns to a pair of other columns.

 Thank you very much!

 pgslekt= CREATE TABLE participant_notes (
 pgslekt(     person_fk   INTEGER,
 pgslekt(     event_fk    INTEGER,
 pgslekt(     part_note   TEXT,
 pgslekt(     FOREIGN KEY (person_fk, event_fk) REFERENCES participants
 (person_fk, event_fk)
 pgslekt( );
 CREATE TABLE

 I'd missed that particular syntax.

 This table is now without a primary key, but is that a problem? I don't expect
 it to grow beyond maybe a few thousand rows.

Hard to say, but if you really need a PK, you can always create one later.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] DDL problems: Referential issue?

2009-11-04 Thread Scott Marlowe
On Wed, Nov 4, 2009 at 11:53 AM, Leif Biberg Kristensen
l...@solumslekt.org wrote:
 On Wednesday 4. November 2009 19.37.41 Scott Marlowe wrote:
 On Wed, Nov 4, 2009 at 11:36 AM, Leif Biberg Kristensen
 l...@solumslekt.org wrote:
 I'd missed that particular syntax.

 This table is now without a primary key, but is that a problem? I don't
 expect it to grow beyond maybe a few thousand rows.

 Hard to say, but if you really need a PK, you can always create one later.

 This looks strange to me, but it works:

 pgslekt= CREATE TABLE participant_notes (
 pgslekt(     person_fk   INTEGER NOT NULL,
 pgslekt(     event_fk    INTEGER NOT NULL,
 pgslekt(     part_note   TEXT,
 pgslekt(     PRIMARY KEY (person_fk, event_fk),
 pgslekt(     FOREIGN KEY (person_fk, event_fk) REFERENCES participants
 (person_fk, event_fk)
 pgslekt( );
 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
 participant_notes_pkey for table participant_notes
 CREATE TABLE

Note that this will limit you to one record in your participant notes
for each record in the participants table.  If you need > 1 of those,
then you could either create a serial and use that for a PK, or PK on
person_fk, event_fk and part_note, assuming part_note doesn't get real
big.  If it does you can PK on something like event, person, and
md5(part_note) or something along those lines.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Speed up UPDATE query?

2009-10-31 Thread Scott Marlowe
On Thu, Oct 29, 2009 at 11:50 AM, Lee Hachadoorian
lee.hachadoor...@gmail.com wrote:
 I'm trying to update several tables (all child tables of the same
 parent), and as the number of records increases, the length of time it
 takes to run the update is shooting up exponentially. I have imported
 the new data to an import table, and then join the import table to the
 update table. The update statement looks like:

 UPDATE
        household_2000 h
 SET
        hhincome = new_hhincome
 FROM (
        SELECT
                serial, new_hhincome
        FROM
                import
        WHERE
                year = 2000
        ) r
 WHERE
        h.serial = r.serial

 household_2000 is a child table of a household table that, as you
 might guess, only contains records from the year 2000. I am putting a
 year = 2000 restriction on the import table and then linking on the
 unique identifier.

 For different child tables, this is how long the update takes to run
 (numbers are approximate):

 Records         Cost (via EXPLAIN)            Actual time
 460,000         300,000                            23 seconds
 510,000         320,000                            26 seconds
 1.2 million      670,000                            3:16
 1.3 million      820,000                            3:25
 6.2 million      2.7 million                         ~2.5 hours

 So, the cost estimate given by EXPLAIN seems to be roughly
 proportional to the number of records in the dataset, but the actual
 time it takes to run seems to increase faster than the cost, even for
 the small and medium tables, and shoots through the roof for the large
 tables. Since I need to run this on additional child tables that are
 larger (the largest is 14 million records), I want to know what I can
 do to speed up the query.

 Here's the EXPLAIN for the query. Note that the query plan is the same
 for the small, medium, and large tables.

  Hash Join  (cost=1268532.36..2379787.06 rows=5465837 width=1128)
    Hash Cond: (import_6_17_rev_hh.serial = h.serial)
    ->  Bitmap Heap Scan on import_6_17_rev_hh  (cost=126551.72..308495.69 rows=5465837 width=8)
          Recheck Cond: (year = 1990)
          ->  Bitmap Index Scan on import_6_17_rev_hh_pkey  (cost=0.00..125185.26 rows=5465837 width=0)
                Index Cond: (year = 1990)
    ->  Hash  (cost=295596.06..295596.06 rows=5527406 width=1124)
          ->  Seq Scan on household_1990 h  (cost=0.00..295596.06 rows=5527406 width=1124)

Any chance of getting the output of explain analyze for a fast and a
slow run of this query?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] question about timestamp with tz

2009-10-22 Thread Scott Marlowe
On Thu, Oct 22, 2009 at 2:41 PM, the6campbells the6campbe...@gmail.com wrote:
 Question.. is there a way that I can get Postgres to return the tz as
 supplied on the insert statement

PostgreSQL converts the timezone to GMT and stores it with no offset,
then adds an offset based on the TZ of the client requesting it back
later.

If you want to store the offset you'll have to do it yourself.  Note
that offset does not necessarily = timezone...
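
A minimal sketch of doing it yourself (table and column names are made
up; the client has to supply the offset, since the server no longer has
it after parsing):

create table events (
    ts        timestamptz,
    ts_offset interval   -- offset the client used when it wrote ts
);

insert into events values ('2009-10-22 14:41:00-04', interval '-4 hours');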

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] reading last inserted record withoud any autoincrement field

2009-10-05 Thread Scott Marlowe
On Sun, Oct 4, 2009 at 1:34 PM, Rob Sargent robjsarg...@gmail.com wrote:
 Osvaldo Kussama wrote:

 2009/10/4 mohammad qoreishy m_qorei...@yahoo.com


 How can get last inserted record in a table without any autoincrement
 filed?
 I need to  frequently fetch the last inserted record.
 If I must use the Cursor please explain your solution.




 RETURNING clause?
 http://www.postgresql.org/docs/current/interactive/sql-insert.html

 Osvaldo



 It took the OP to mean last insert as in randomly in the past, not as part
 of current transaction.  My fear is OP's schema has no way of identifying
 time-of-insert, nor a monotonically increasing record id and is hoping
 postgres has a some internal value that will return the most recently
 inserted record. Without a table definition it's hard to say.

Given that he's mentioning cursors, I'm guessing he's talking about
during this session / transaction.  So returning would be best.  Note
that returning returns a SET of results, so that if your insert
inserts 10 rows, you'll get back 10 rows from returning.
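
A minimal example (table and column names are made up):

insert into mytable (email) values ('a@example.com') returning id;

-- multi-row inserts hand back one row per inserted row:
insert into mytable (email)
values ('b@example.com'), ('c@example.com')
returning id;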

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Can i customize null-padding for outer joins?

2009-10-01 Thread Scott Marlowe
On Thu, Oct 1, 2009 at 12:19 AM, Shruthi A shruthi.i...@gmail.com wrote:
 Hello,

 I have a query where I full-outer-join 2 tables, and all the columns other
 than the join column are numerical columns. For my further calculations i
 need to pad the unmatched tuples with 0 (zero) instead of NULL so that I can
 perform meaningful mathematical calculations on them. Is this currently
 possible?

something like

select a.x, coalesce(b.y,0) from a left join b yada...

work?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Working slow

2009-09-21 Thread Scott Marlowe
On Mon, Sep 21, 2009 at 7:58 AM, Judith Altamirano
jaltamir...@lux.com.mx wrote:
 hello every body, I'm having a data base in a point of sale that is getting
 frozen, I already have run a vacuum -z -d to reindex the data base and
 nothing happens.. Some suggestions to speed the process, Do you guys
 think that the data base is nearly to broke?

vacuumdb does not reindex, reindexdb does that.

Hard to say with so little evidence.

What does top, or vmstat 1, or iostat 10 or

select datname, current_query, waiting, (now()-query_start)::time(0) as runtime
from pg_stat_Activity
where current_query not ilike '%idle%'
order by query_start limit 10

have to say?

Is the database already bloated?  If so, how big is /data/base
compared to usual?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to simulate (run) the function

2009-09-04 Thread Scott Marlowe
On Fri, Sep 4, 2009 at 8:43 PM, bilal ghayyadbilmar...@yahoo.com wrote:
 Hello;

 I have an SQL function and I need to know how to simulate it (calling it and 
 pass for it the argument and see what the value it returns), HOW? Can I do 
 this from the pgAdminIII GUI or from the CLI?

 This method help to check that the function is working fine specially if it 
 will be called using the a specific username which has access to that 
 function.

You could call it in a transaction and roll back at the end of it.
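
Something like this works from psql or the pgAdmin query window (the
function name and argument are placeholders):

begin;
select my_function('some argument');  -- look at the result here
rollback;                             -- nothing the function wrote is kept

If you need to test it as the restricted user, SET ROLE to that user
first in the same session.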

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Odd sort behaviour

2009-09-01 Thread Scott Marlowe
On Tue, Sep 1, 2009 at 3:01 PM, Rob Sargentrobjsarg...@gmail.com wrote:
 Since when does . sort as nothing at all

Since you set your locale equal to something like en_US instead of C

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Call Procedure From Trigger Function

2009-08-19 Thread Scott Marlowe
On Wed, Aug 19, 2009 at 3:53 PM, Doug Pisarekd...@strata-group.com wrote:
 I am in the process of coverting an Oracle 10.2.0.3 database to Postgresql
 8.3. I have a number of triggers in Oracle that make a call to packages.
 I know I will need to re-write the Oracle packages to postgres functions.
 The issue which I have can I make a procedure call from inside a postgres
 trigger function?

Yep
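
A minimal sketch (the table and the converted package function are
placeholders):

create or replace function my_trigger_fn() returns trigger as $$
begin
    -- call whatever function replaced the Oracle package here
    perform converted_package_proc(new.id);
    return new;
end;
$$ language plpgsql;

create trigger my_trigger
    before insert or update on mytable
    for each row execute procedure my_trigger_fn();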

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] De-duplicating rows

2009-07-17 Thread Scott Marlowe
On Thu, Jul 16, 2009 at 9:07 PM, Christophex...@thebuild.com wrote:
 The Subject: is somewhat imprecise, but here's what I'm trying to do.  For
 some reason, my brain is locking up over it.

 I'm moving a 7.2 (yes) database to 8.4.  In the table in question, the
 structure is along the lines of:

        serial_number   SERIAL, PRIMARY KEY
        email           TEXT
        create_date     TIMESTAMP
        attr1           type
        attr2           type
        attr3           type
        ...

 (The point of the attr fields is that there are many more columns for each
 row.)

 The new structure removes the serial_number field, and uses email as the
 primary key, but is otherwise unchanged:

        email           TEXT, PRIMARY KEY
        create_date     TIMESTAMP
        attr1           type
        attr2           type
        attr3           type
        ...

 Now, since this database has been in production since 7.2 days, cruft has crept
 in: in particular, there are duplicate email addresses, some with mismatched
 attributes.  The policy decision by the client is that the correct row is
 the one with the earliest timestamp.  (The timestamps are widely
 distributed; it's not the case that there is a single timestamp above which
 all the duplicates live.)  Thus, ideally, I want to select exactly one row
 per email, picking the row with the earliest timestamp in the case that
 there is more than one row with that email.

 Any suggestions on how to write such a SELECT?  Of course, I could do this
 with an application against the db, but a single SELECT would be great if
 possible.

OK, assuming we can keep the serial number during the conversion, we
could use something like this:

select distinct a.serial_number from table a join table b on
(a.email=b.email and a.serial_number > b.serial_number)

Now assuming that the serial numbers and the timestamps are in order
together, that'll give us all the serial numbers for all the matching
email addresses EXCEPT the first one.  If the serial numbers are not
in order with the timestamps, then create a sequence, and update them
in order, then the query will work.  Once you've confirmed by hand
that the first hundred or so serial_numbers you're getting back ARE in
fact all n+1 for the same email address, use the select in a subselect
to delete:

delete from table x where serial_number in (select distinct)

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Request new version to support on commit drop for create temp table ... as select ?

2009-07-14 Thread Scott Marlowe
On Tue, Jul 14, 2009 at 10:47 AM, Tom Lanet...@sss.pgh.pa.us wrote:
 Emi Lu em...@encs.concordia.ca writes:
 I googled to find that on commit drop does not support:

 (a) create temp table as select * from table1 where 1<2;
 http://archives.postgresql.org/pgsql-sql/2005-09/msg00153.php

 Ah, the pitfalls of believing that the first google hit you get
 is authoritative.  Didn't you notice that message was from 2005?

That's what I thought, but

create temp table xyz as select * from abc on commit drop;

still fails on 8.3.  Was this fixed in 8.4 or is my syntax wonky?
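
In CREATE TABLE ... AS the ON COMMIT clause goes before AS, so
something along these lines should be accepted (same table names as
above):

create temp table xyz on commit drop as select * from abc;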

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Best way to simulate Booleans

2009-07-07 Thread Scott Marlowe
On Mon, Jul 6, 2009 at 7:22 PM, Peter Headlandpheadl...@actuate.com wrote:
 I know, I know, PostgreSQL has Booleans that work very nicely.
 Unfortunately, I have to create a schema that will work on Oracle as well as
 PostgreSQL, by which I mean that a single set of Java/JDBC code has to work
 with both databases. I have an XML meta-schema that enables me to generate
 appropriate DDL; that handles all the INTEGER vs. NUMBER(m,n) stuff. But
 Oracle simply has no Booleans, so I will have to resort to some more or less
 ugly alternative. I am hoping that others here have had to deal with this
 and can suggest an approach that will be minimally loathsome.

The most transportable method would be to use either a char(1) or an
int with a check constraint.

mybool char(1) check (mybool in ('t','f'))
mybool int check (mybool >= 0 and mybool <= 1)

Or something like that.
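
Spelled out as a full column definition, something like this (table and
column names invented) should create cleanly on both databases:

create table widget (
    id     integer primary key,
    active char(1) not null check (active in ('t','f'))
);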

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Sequences

2009-07-04 Thread Scott Marlowe
Easiest way is with pg_dump -s -t tablename dbname

On Sat, Jul 4, 2009 at 6:35 AM, Jasmin
Dizdarevicjasmin.dizdare...@gmail.com wrote:
 Nice Information. Does somebody know how to get the complete
 create-statement of an existing table/view?

 2009/7/3 Chris Browne cbbro...@acm.org

 Andre Rothe aro...@phosco.info writes:
  Where are stored the sequence information? How I can query the
  properties of a sequence like increment, max/min value, cache?
  I'm looking for a table like user_sequences in Oracle, where I
  can query all of my sequences.

 cbbrowne=# create sequence foo;
 CREATE SEQUENCE
 cbbrowne=# select * from foo;
  sequence_name | last_value | increment_by |      max_value      | min_value | cache_value | log_cnt | is_cycled | is_called
 ----------------+------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
  foo            |          1 |            1 | 9223372036854775807 |         1 |           1 |       1 | f         | f
 (1 row)

 Each sequence is effectively a relation.
 --
 output = (cbbrowne @ cbbrowne.com)
 http://www3.sympatico.ca/cbbrowne/wp.html
 Where do you want to Tell Microsoft To Go Today?

 --
 Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-sql



 --
 Mit freundlichen Grüßen

 Dizdarevic Jasmin
 Sonnenbergstr. 3
 6714 Nüziders, AUT

 jasmin.dizdare...@gmail.com
 +43 664 411 79 29





-- 
When fascism comes to America, it will be intolerance sold as diversity.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Assigning data-entry tasks to multiple concurrent clients

2009-06-01 Thread Scott Marlowe
On Sun, May 31, 2009 at 9:54 PM, Jamie Tufnell die...@googlemail.com wrote:
 BEGIN;
 SELECT * FROM records
 WHERE in_edit_queue AND id NOT IN (
   SELECT record_id FROM locked_records
   WHERE locked_since  now() + interval '5 minutes')
 LIMIT 1;

 INSERT INTO locked_records (record_id, locked_since) VALUES (?, now());
 COMMIT;

There's a race condition here but a unique constraint on record_id
will take care of that, as long as you catch the error and retry.

 Then to save (first-in wins is acceptable for this environment):

 BEGIN;
 UPDATE records SET in_edit_queue = false WHERE id = ? AND in_edit_queue =
 true;
 DELETE FROM locked_records WHERE record_id = ?;
 COMMIT;

 Is this a sane approach?  Is there a better way to do this with PostgreSQL?

It'll work.  The key to any kind of system like this is monitoring the
progress for things that get stuck / fail to be processed and running
them a second time if need be.  I had a system to process 1M rows at a
time from an 880M row db, and I used a secondary sequence and recid/1M
to partition it out.  So, the next job up grabs a sequence id from t
secondary sequence, which matches the record(or set) to be processed.
With that method there's no locking or anything needed, and no one
needs to check out the records, because incrementing the secondary
sequence is in fact checking them out.  Just check the finished table
to see if there's any holes and if there are put those jobs back in
the queue by simply updating their id to the next value for the
primary id sequence.

Sequences can be an elegant way of assigning jobs to multiple threads
without locking issues.
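
A rough sketch of the slice-claiming idea (sequence and table names are
made up, 1M-row slices assumed):

create sequence work_claim_seq;

-- each worker claims a slice just by taking the next number:
select nextval('work_claim_seq');   -- suppose it returns 42

-- that worker then owns rows 41,000,001 .. 42,000,000:
select * from big_table
 where id >  (42 - 1) * 1000000
   and id <= 42 * 1000000;

Two workers can never get the same nextval(), so no explicit locking is
needed.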

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Avoiding will create implicit index NOTICE

2009-06-01 Thread Scott Marlowe
On Mon, Jun 1, 2009 at 1:32 PM, Bryce Nesbitt bry...@obviously.com wrote:
 I'm looking for a good way to avoid triggering the will create implicit
 index NOTICE that Postgres (all versions) puts out.  This ends up spamming
 cron scripts for no good reason:

 = create table junk_six (foo int, primary key (foo));
 NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
 junk_six_pkey for table junk_six
 CREATE TABLE

 I've got a hacky solution (in perl), pulled from
  http://www.perlmonks.org/index.pl/jacques?node_id=540511
 which suppresses the warning:

 my $tmpwarn = $SIG{__WARN__};
 $SIG{__WARN__} = sub { print STDERR @_ if $_[0] !~ m/NOTICE:  CREATE TABLE/;
 };
$sqldb->sql_execute("create table junk_six (foo int, primary key (foo));");
 $SIG{__WARN__} = $tmpwarn;

 And I know that I can edit the warning level in postgresql.conf with some
 other side effects.
 But the best solution would be to avoid the notice in the first place.  Is
 this possible?

You can also set log_min_messages by the connection, by the user, and
by the database.

alter user bubba set log_min_messages=error;

etc.
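
A related knob: what actually lands in the cron mail is what the client
session prints, which is governed by client_min_messages, and it can be
set the same way, e.g.:

alter database mydb set client_min_messages = warning;
-- or just for the one script:
set client_min_messages = warning;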

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Distinct oddity

2009-05-08 Thread Scott Marlowe
On Fri, May 8, 2009 at 3:28 AM, Maximilian Tyrtania
maximilian.tyrta...@onlinehome.de wrote:
 am 07.05.2009 20:54 Uhr schrieb Scott Marlowe unter scott.marl...@gmail.com:

 On Thu, May 7, 2009 at 2:21 AM, Maximilian Tyrtania
 maximilian.tyrta...@onlinehome.de wrote:
 Hi there,

 does this look right?

 FAKDB=# select count(distinct(f.land)) from firmen f where
 f.typlist='Redaktion';
  count
 ---
  1975
 (1 row)

 FAKDB=# select count(distinct(f.land||'1')) from firmen f where
 f.typlist='Redaktion';
  count
 ---
  4944
 (1 row)

 Yeah, that does seem odd.  Could it be something like nulls in your
 data set?  just guessing really.  If you could make a small test case
 that shows it happening and allows others to reproduce it you're
 likely to get more bites.

 It doesn't seem to be related to null values (which wouldn't explain it
 anyway) nor to this particular field...

 FAKDB=# select count(*) from firmen where bezeichnung is null;
  count
 ---
     0
 (1 row)

That's not the same field as in the original query.


 My attempts at reproducing this with a freshly created table failed, of
 course.

Instead of trying to create a test case from scratch, isolate some
rows that cause this, put them in another table, and then pg_dump that
one table, cleaned as needed for privacy, here.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Distinct oddity

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 2:21 AM, Maximilian Tyrtania
maximilian.tyrta...@onlinehome.de wrote:
 Hi there,

 does this look right?

 FAKDB=# select count(distinct(f.land)) from firmen f where
 f.typlist='Redaktion';
  count
 ---
  1975
 (1 row)

 FAKDB=# select count(distinct(f.land||'1')) from firmen f where
 f.typlist='Redaktion';
  count
 ---
  4944
 (1 row)

Yeah, that does seem odd.  Could it be something like nulls in your
data set?  just guessing really.  If you could make a small test case
that shows it happening and allows others to reproduce it you're
likely to get more bites.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Query with Parameters and Wildcards

2009-04-26 Thread Scott Marlowe
On Sun, Apr 26, 2009 at 6:21 PM, landsharkdaddy
ld...@landsharksoftware.com wrote:

 I have a query that works on SQL Server to return customers that contain the
 string entered by the user by accepting parameters and using the LIKE
 keyword. I would like to move this to postgreSQL but I'm just not sure how
 to get it done. This is the query

 SELECT * FROM Customers WHERE FirstName LIKE @custfirst + '%';

 This works great on SQL Server but not on postgreSQL. Any help would be
 appreciated.

Have you tried:

SELECT * FROM Customers WHERE FirstName LIKE 'custfirst%';

What does the @ do in sql server?
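
If the goal is a parameterized version on the PostgreSQL side, one way
(a prepared statement stands in here for whatever your driver does) is
to concatenate the wildcard onto the parameter:

prepare custsearch(text) as
    select * from customers where firstname like $1 || '%';

execute custsearch('Jo');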

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] Re: [GENERAL] Frequently unable connecting to db server doesn't listen

2009-04-19 Thread Scott Marlowe
On Sun, Apr 19, 2009 at 8:10 PM, Net Tree Inc. nettree...@gmail.com wrote:

 If this e-mail address is not intend use for asking questions using e-mail
 suscription, please ignore it.  I could not find any official PostgreSQL
 support forum for me to post ask questions. Appreciated if you could you
 refer me a few good PostgreSQL support forum for me to ask questions.

This is a mailing list, so you're here!  No need to look for web
forums or anything.

 The problem I am having has happened a few times within in a week. I am
 repeatly not able to connect to the db and having server doesn't listen
 message without touch anything, it just happen. First time it happen after I
 restarting my computer that has not been restart for almost a month. After
 restarting and trying to connect to DB and it failed, also I notice my
 computer can not be restart nor shutdown under the normal way by click on
 start menu and shutdown or restart. I can only shut it down use the hard
 way by pressing the power button, I don't know why, but its another story.
 Therefore for DB, I only can fix it by uninstall and reinstall PostgreSQL.
 But it happening repeatly.

 Any one experience it? or to guide me to where I can find help?

Several possibilities.  What's your max_connections set to?  What does
the postgresql log have to say (if anything) about these failed
connections?  How many postgres processes are running when this
happens?  What OS is this on? Are you using some kind of persistent
php client?  Or some other kind of pooling client that might be using
up all the connections?

Any other info you can give us will help.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Postgres entering zombie state once a week in production evnvironment

2009-04-14 Thread Scott Marlowe
On Tue, Apr 14, 2009 at 12:25 AM, Bryce Nesbitt bry...@obviously.com wrote:
 We have a medium scale installation of Postgres 8.3 that is freezing about
 once a week.  I'm looking for any hints on how to diagnose the situation, as
 nothing is logged.

 The system is matched pair of Sunfire servers, running Debian Etch with a
 2.6.18-6-amd64 kernel, PostgreSQL 8.3.4, and DRBD 8.0.13.

 During a failed state, pg_stat_activity will show hundreds of statements
 pending.  query_start will show the statements arriving at a normal rate (a
 few per second), but clearly they never complete.  The bulk of these
 statement are a simple select that starts each web session, a statement that
 generally completes in tenths of milliseconds.  Restarting postgres restores
 normal operation, at the loss of all chance of figuring out what was wrong.

What does pg_locks say during this time?  Specifically about locks
that aren't granted?
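
For example, something like this run while it's hung shows the lock
requests that are stuck waiting:

select locktype, relation::regclass, pid, mode, granted
  from pg_locks
 where not granted;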

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Postgres entering zombie state once a week in production evnvironment

2009-04-14 Thread Scott Marlowe
On Tue, Apr 14, 2009 at 2:59 PM, Bryce Nesbitt bry...@obviously.com wrote:
 Scott Marlowe wrote:

 What does pg_locks say during this time?  Specifically about locks
 that aren't granted?

 I don't know, yet.  Though these events go for 15-30 minutes before postgres
 restart, and no deadlocks are detected, so I don't think it is locks.

Ummm, deadlocks != locks blocking other queries.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] changing multiple pk's in one update

2009-04-10 Thread Scott Marlowe
2009/4/7 Stuart McGraw smcg2...@frii.com:
 Hello all,

 I have a table with a primary key column
 that contains sequential numbers.

 Sometimes I need to shift them all up or down
 by a fixed amount.  For example, if I have
 four rows with primary keys, 2, 3, 4, 5, I
 might want to shift them down by 1 by doing:


Generally speaking, when you need to do this more than once or twice
in the lifetime of your data, there's something wrong with your data
model.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How would I get rid of trailing blank line?

2009-04-02 Thread Scott Marlowe
On Thu, Apr 2, 2009 at 3:33 PM, Tena Sakai tsa...@gallo.ucsf.edu wrote:
 Hi Everybody,

 I am using postgres 8.3.4 on linux.
 I often use a line like:
   psql -tf query.sql mydatabase > query.out

 -t option gets rid of the heading and count
 report at the bottom.  There is a blank line
 at the bottom, however.  Is there any way to
 have psql not give me that blank line?

Tired of those blank lines in your text files?  Grep them away:

psql -tf query.sql mydatabase | grep -v ^$ > query.out



 Thank you for your help.

 Regards,

 Tena Sakai
 tsa...@gallo.ucsf.edu




-- 
When fascism comes to America, it will be the intolerant selling it as
diversity.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Add: Special sort querstion

2009-04-01 Thread Scott Marlowe
On Fri, Mar 27, 2009 at 6:10 AM, Dominik Piekarski
d.piekar...@vivawasser.de wrote:
 Oh, actually every row of the same id-range has the same start_lat/start_lng
 coordinates as the predecessors end_lat/end_lng coordinates. But the
 question remains the same. Is there a way to do something like ORDER BY
 (start_lat = end_lat AND start_lng = end_lng) ? Or maybe another way to
 achieve the same result?

Would something like

order by start_lat-end_lat, start_lng-end_lng

OR

case when start_lat=end_lat AND start_lng=end_lng then 0 else 1 end

???

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] alter table on a large db

2009-03-19 Thread Scott Marlowe
On Thu, Mar 19, 2009 at 1:57 AM, Zdravko Balorda
zdravko.balo...@siix.com wrote:

 Hi,

 I need to make some ALTER TABLEs. It takes about 30min to copy
 this quite large database, but several hours to run a bunch of ALTER
 TABLE statements.
 Is there any way to make it faster? I wonder what could possibly alter table
 be doing all this time.

Which alter table statements are you running in particular?  Most
alter table stuff runs pretty quickly, like adding a column and such.
Just wondering.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] counts of groupings by date year-month

2009-02-27 Thread Scott Marlowe
On Fri, Feb 27, 2009 at 2:02 PM, Carol Cheung
cache...@consumercontact.com wrote:
 Hi,
 I have a table called temp

  access_date | active | status
 -------------+--------+--------
  2009-02-01  | t      |     15
  2009-02-01  | f      |     16
  2009-02-02  | f      |     17
  2009-02-01  | t      |     17
  2009-02-02  | f      |     21
  2009-01-01  | t      |     20
  2009-01-01  | t      |     21
  2009-01-01  | f      |     21


 What I want is to be able to get counts of active by year-month. So the
 output would be like:

  year_month | count
 ------------+-------
 200901      | 3
 200902      | 5

 I tried something like
 SELECT to_char(access_date, 'YYYYMM') as year_month, count(year_month) FROM
 temp GROUP BY year_month ORDER BY year_month;

 but I'm unable to execute this query because the column year_month doesn't
 exist in temp table.

Try date_trunc:

select date_trunc('day',timestamp), count(*) from table where active
is true group by date_trunc('day',timestamp) order by
date_trunc('day',timestamp);
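
Adapted to the year-month output you described (column names from your
table; drop the where clause if you want every row counted, as in your
sample output):

select to_char(date_trunc('month', access_date), 'YYYYMM') as year_month,
       count(*)
  from temp
 where active
 group by 1
 order by 1;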

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Is this possible?

2009-02-17 Thread Scott Marlowe
On Mon, Feb 16, 2009 at 7:36 PM, johnf jfabi...@yolo.com wrote:
 Hi,
 I'm not to sure this is possible.

 I need to replace a primary key (pkid) with the value of a different field.
 I have
 pkid = 200
 attendid = 301

 I need the pkid = 301

 But there may or may not be a pkid that already exist that has the value of
 301.  The attendid is unique and the pkid data type is serial (has a
 sequence).

If the FK is on update cascade just update it.  and setval() the
sequence to be max(pkid)+1.
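
A rough sketch, with the table and sequence names guessed from your
column names:

begin;
update mytable set pkid = attendid where pkid = 200;
select setval('mytable_pkid_seq', (select max(pkid) from mytable));
commit;

If a row with pkid = 301 already exists the update will fail on the
primary key, so that row has to be dealt with first.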

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Grass Root Protectionism

2009-02-07 Thread Scott Marlowe
Oh, and might I ask what you've done for pgsql, Mr. hiding behind an
anonymous email address at yahoo?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] some howto/theory book/tutorial on practical problem solving in SQL

2009-01-11 Thread Scott Marlowe
On Sun, Jan 11, 2009 at 9:32 AM, Ivan Sergio Borgonovo
m...@webthatworks.it wrote:

 My current problem is how to manage discounts in SQL, inside
 transactions. Specifically how to delete promotions if they are
 overlapping, considering I have to display discounted prices on a 1M
 article DB and I may have hundreds of promotions running and they
 may involve even 10% of the catalogue. But this is just the
 beginning.

 I bet I'll have different set of problems later.

Well, if you can't find a book that deals with it, I'm betting someone
on the list will understand the issue and help out. :)

 I skimmed through Celko books and at the moment they seems the
 nearest thing to what I'd like to learn even if too much
 theoretical at the moment. O'Reilly SQL cookbook is another
 example of the kind of stuff I'm looking for... but the examples are
 more like ingredients then recipes.

I highly recommend Celko's SQL books.  They may seem too theoretical,
but they have a lot of good information I found myself reusing all the
time when I first started out.  I was a little disconcerted by his
resemblance to Anton Lavie (sp) at first though.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How much the max image size can be inserted into Postgresql

2008-12-24 Thread Scott Marlowe
On Wed, Dec 24, 2008 at 7:00 AM, venkat ven.tammin...@gmail.com wrote:
 Dear All,

  I want to insert image which is more than 1 GB.is it possible to store that
 same size or we can store more than that.Please let me know .

Wow.  The need for transactional semantics and storage of > 1Gig file
size is an odd set of requirements.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] index compatible date_trunc in postgres?

2008-12-18 Thread Scott Marlowe
On Thu, Dec 18, 2008 at 10:46 PM, Bryce Nesbitt bry...@obviously.com wrote:
 I've got a legacy app that does 8.3 incompatible date searches like so:
 explain select count(*) from contexts where publication_date like '2006%';
 explain select count(*) from contexts where publication_date like
 '2006-09%';

 I've got my choice of refactoring, but all these share the same
 sequential scan limitation:
 explain select count(*) from contexts where publication_date::text LIKE
 '2006%';
 explain select count(*) from contexts where
 date_trunc('year',publication_date) = '2006-01-01';
 explain select count(*) from contexts where extract('year' from
 publication_date) = '2006';

 Are there any other index compatible methods, other than turning it into
 a range search?
 explain select count(*) from contexts where publication_date >=
 '2006-01-01' and publication_date < '2007-01-01';
 explain select count(*) from contexts where publication_date >=
 '2006-09-01' and publication_date < '2006-09-31 24:00:00';

You can create an index on date_trunc (on timestamp without timezone,
but not on timestamp with timezone since it's not immutable)

create index mytable_datetrunc_month on mytable (date_trunc('month',
timestampfield));

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Best way to restrict detail rows?

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 1:56 PM, Christopher Maier [EMAIL PROTECTED] wrote:
 I have a master-detail kind of situation, as illustrated here:

 CREATE TABLE master(
id SERIAL PRIMARY KEY,
foo TEXT
 );

 CREATE TABLE detail(
id SERIAL PRIMARY KEY
master BIGINT NOT NULL REFERENCES master(id),
bar TEXT
 );

 (this is a simplification, of course)

 I would like a way to restrict the addition of new detail records, but only
 after the initial detail records have been inserted into the system.

After you create the table do something like this:

create rule detail_no_insert as on insert to detail do nothing;
create rule detail_no_update as on update to detail do nothing;

poof.  no more updates or inserts work.  Note that copy will still
work, as it doesn't fire rules.  So, you can update the data with
copy, and otherwise not touch it.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Best way to restrict detail rows?

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 2:28 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
 On Mon, Dec 8, 2008 at 1:56 PM, Christopher Maier [EMAIL PROTECTED] wrote:
 I have a master-detail kind of situation, as illustrated here:

 CREATE TABLE master(
id SERIAL PRIMARY KEY,
foo TEXT
 );

 CREATE TABLE detail(
id SERIAL PRIMARY KEY
master BIGINT NOT NULL REFERENCES master(id),
bar TEXT
 );

 (this is a simplification, of course)

 I would like a way to restrict the addition of new detail records, but only
 after the initial detail records have been inserted into the system.

 After you create the table do something like this:

 create rule detail_no_insert as on insert to detail do nothing;
 create rule detail_no_update as on update to detail do nothing;

 poof.  no more updates or inserts work.  Note that copy will still
 work, as it doesn't fire rules.  So, you can update the data with
 copy, and otherwise not touch it.


whoops!  do INSTEAD nothing.

-- 
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Best way to restrict detail rows?

2008-12-08 Thread Scott Marlowe
On Mon, Dec 8, 2008 at 2:31 PM, Richard Broersma
[EMAIL PROTECTED] wrote:
 One Idea that popped into my head that may-or-may-not work would be to
 add a constraint trigger that checks if all of the detail records have
 the same xmin as the order table record.

Yes, it's not as simple as I first thought when I read it.

I'd look at using a udf that used a sec definer that only it had to do
the row adds and do everything at once, inserting to both tables at
the time of the creation of the order.
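
A minimal sketch using the master/detail tables from the original post
(the function name is invented):

create or replace function create_order(_foo text, _bar text)
returns void as $$
declare
    _id bigint;
begin
    insert into master (foo) values (_foo) returning id into _id;
    insert into detail (master, bar) values (_id, _bar);
end;
$$ language plpgsql security definer;

Then revoke direct INSERT on detail from ordinary roles so the function
is the only way in.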

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] adding order by to a group by query

2008-12-06 Thread Scott Marlowe
On Sat, Dec 6, 2008 at 10:31 AM, Louis-David Mitterrand
[EMAIL PROTECTED] wrote:
 On Sat, Dec 06, 2008 at 06:26:06PM +0100, Andreas Kretschmer wrote:
 Louis-David Mitterrand [EMAIL PROTECTED] schrieb:
 
  But if I append this
 
   order by pt.type_fr = 'comédien';
 
  I get this error:
 
  ERROR:  column pt.type_fr must appear in the GROUP BY clause or be 
  used in an aggregate function
 
  It seems I am using pt.type_fr in an aggregate function (array_accum()),
  yet I get the error.
 
  Is there a way to to have a certain pt.type_fr bubble up (or down) in my
  search?

 You can use a subquery like my example:

 test=*# select i, comma(t) from (select distinct i,t from foo) bar group by 
 i;
  i |  comma
 ---+-
  1 | a, b, c
 (1 row)

 Time: 0.554 ms
 test=*# select i, comma(t) from (select distinct i,t from foo order by t 
 desc) bar group by i;

 Thanks Andreas, that would be good solution.

 (still curious about the must be used in an aggregate function error
 though... because I do use it in an aggregate)

You can order by the same thing you selected:

select sum(i) from ... group by j order by sum(i)
OR
select sum(i) from ... group by j order by 1  --  1 stands for the
first select list item...

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Sequence and nextval problem

2008-11-24 Thread Scott Marlowe
On Mon, Nov 24, 2008 at 12:12 PM, Tk421 [EMAIL PROTECTED] wrote:
   Hello everybody.

   I've got an vb aplication that uses an Access database. I'm trying to
 convert the database to postgres. The conversion was done ok, but i've got a
 little problem that i don't know how to solve. Let's see if anyone can help
 me.

   The conversion from access database to postgres worked fine. Everything
 it's ok. But now, when i use my database i've found a problem with
 sequences. In the conversion, the autonumeric fields from access have been
 converted to sequences, everything ok in a first view. The problem comes
 because the autonumeric fields in access always return the last value of the
 table +1, but postgres no. Postgres returns lost (i don't know how to call
 them) values. An example.

   This is an example of a table:

  code | description
  - | 
  1  | desc 1
  2  | desc 2
  6  | desc 6
  7  | desc 7


   In access if i execute INSERT INTO table (description) VALUES ('desc 8'),
 the result row is  8 |  desc 8
   But in postgres the same query te result row is 3 | desc 8

   My question is, can i do something to make ANY sequence to take the last
 value from his associated table, and not a lost value?

The sequence should be set to the next value available after loading
data and then left alone.  You can set the value with
setval('seqname', value);  It looks to me like if you did a few more inserts,
you'd hit the value of 6 for your id field and your insert would fail
until the sequence got past 7 then it would start working.

Note that in postgresql, the value given by nextval is the next value
of the sequence, not max(val)+1 as max(val)+1 doesn't scale / isn't
really transaction safe.
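
For example, a sketch using the column from your table (the sequence
name is assumed to be the default serial name):

select setval('mytable_code_seq', (select max(code) from mytable));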

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] BULK COLLECT

2008-11-21 Thread Scott Marlowe
2008/11/21 Paul Dam [EMAIL PROTECTED]:
 Hoi,



 Is there an equivalent in PL/pgSQL for BULK COLLECT in PL/SQL of Oracle?

I'm not that familiar with BULK COLLECT in oracle.  What does it do?

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Measuring degredation of CLUSTER INDEX operation

2008-11-10 Thread Scott Marlowe
On Mon, Nov 10, 2008 at 12:54 PM, Bryce Nesbitt [EMAIL PROTECTED] wrote:
 I've got a table for which CLUSTER tablename USING index makes an order of
 magnitude difference.

 Are there ways to determine how unclustered this table becomes over time,
 so I can schedule downtime to recluster?  I could use the pg_stat tables,
 but this seems awkward.

You should be able to run analyze then select correlation from
pg_stats where schemaname='yourschename' and
tablename='yourtablename';

the closer you are to 1.0 the better the clustering.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL]

2008-11-05 Thread Scott Marlowe
On Wed, Nov 5, 2008 at 9:21 AM, Hemant Patel [EMAIL PROTECTED] wrote:
 Let Say I have the array of primary key of some table say XYZ.

 For e.g. (555,222,333,111)



 When I query for these results I will get the result like  in the order of
 (111,222,333,555) .

 So now I need to process in the business logic to maintain the search
 criteria.

The fact that you get results in a certain order without using an
order by clause is a happy coincidence, and not to be relied upon.
Should PostgreSQL choose a different query plan they may come back in
some other order.

I.e. you HAVE to use an order by clause if you want a certain order.  period.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Date Index

2008-10-30 Thread Scott Marlowe
On Thu, Oct 30, 2008 at 2:49 PM, Ryan Hansen
[EMAIL PROTECTED] wrote:
 Hey all,



 I'm apparently too lazy to figure this out on my own so maybe one of you can
 just make it easy on me. J



 I want to index a timestamp field but I only want the index to include the
 yyyy-mm-dd portion of the date, not the time.  I figure this would be where
 the expression portion of the CREATE INDEX syntax would come in, but I'm
 not sure I understand what the syntax would be for this.

Really depends on what you want to do with it.  Easiest way is to cast it:

smarlowe=# create table dtest (id int, ts timestamp);
CREATE TABLE
smarlowe=# insert into dtest values (1,'2008-09-01 12:30:00');
INSERT 0 1
smarlowe=# insert into dtest values (1,'2008-09-02 10:30:00');
INSERT 0 1
 create index dtest_tstodate on dtest ((ts::date));
CREATE INDEX
set enable_seqscan=off;
SET
explain select * from dtest where ts::date='2009-09-02';
 QUERY PLAN
-
 Index Scan using dtest_tstodate on dtest  (cost=0.00..8.27 rows=1 width=12)
   Index Cond: ((ts)::date = '2009-09-02'::date)
(2 rows)

Note that since the table is so small the db would have seq scanned it
if I hadn't turned off seqscans to test.  But since it used the index,
that proves it's there and working.

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql

