Re: [GENERAL] Windows Library for libpq

2004-02-27 Thread Shachar Shemesh
Shachar Shemesh wrote:

Karam Chand wrote:

Hello

I have read the docs. You need to install PostgreSQL
using cygwin etc. I am not in a position right now to
do that :)
Is there no standard libpq.lib available for Windows?

In MySQL you get a precompiled library for the C API and
that is very helpful.
Can somebody mail me a compiled library of the C API for
Windows? That would be very helpful...
Karam

A binary for libpq.dll is now available for download from 
http://gborg.postgresql.org/project/oledb/download/download.php. Enjoy.

--
Shachar Shemesh
Lingnu Open Systems Consulting
http://www.lingnu.com/


Re: [GENERAL] Windows Library for libpq

2004-02-27 Thread Karam Chand
Thanks.

--- Shachar Shemesh <[EMAIL PROTECTED]> wrote:
> A binary for libpq.dll is now available for download
> from
> http://gborg.postgresql.org/project/oledb/download/download.php.
> Enjoy.




Re: [GENERAL] Help with a query

2004-02-27 Thread Bas Scheffers
If the user/groups you are talking about are postgres users and groups,
this is it:

select * from pg_catalog.pg_group
where (select usesysid from pg_catalog.pg_user
       where usename = 'user') = any(grolist);

The place to find this kind of thing is the PostgreSQL internals section
of the documentation (system catalogs), which describes the system tables.
"any" is an array comparison construct, needed because the users that
belong to a group are stored as an array of INT user ids.

Hope that helps,
Bas.



[GENERAL] Case of strings

2004-02-27 Thread Simon Windsor
Hi

I have moved an application from MySQL to PgSQL, and after making the
changes I thought all was OK.

However, I have just realised that

Where A = 'STRING'

is not the same as

Where A = 'String'

Is there any way I can force PgSQL to accept case equivalence, or must I
add upper()/lower() to force the case and then make string tests?

i.e. Where upper(A) = upper('String')

Many thanx

Simon

Simon Windsor

Eml: [EMAIL PROTECTED]

Tel: 01454 617689

Mob: 07960 321599



Re: [GENERAL] Help with a query

2004-02-27 Thread Bas Scheffers
Klint,

> select groname from pg_group
> where (select usesysid from pg_shadow where usename = 'postgres') =
> any(grolist);
Unless you are logged in as a superuser (and applications other than
pgAdmin et al shouldn't be) you will get access denied on pg_shadow,
because it contains passwords. Selecting from pg_user gives you the same
result and can be done by any user.
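For instance, the same lookup built on pg_user, runnable by any user:

    select groname
    from pg_group
    where (select usesysid from pg_user
           where usename = 'postgres') = any(grolist);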

Bas.



Re: [GENERAL] Case of strings

2004-02-27 Thread cnliou
>Is there any way I can force PgSQL to accept case equivalence, or must I
>add upper()/lower() to force the case and then make string tests?
>
>Ie Where upper(A)=upper('String')

I think you already answered your own question, just as the pgsql
documentation does in section

"9.4. String Functions and Operators"

Regards,
CN



Re: [GENERAL] PostgreSQL insert speed tests

2004-02-27 Thread Shridhar Daithankar
Sezai YILMAZ wrote:
Test Hardware:
IBM Thinkpad R40
CPU: Pentium 4 Mobile 1993 Mhz (full powered)
RAM: 512 MB
OS: GNU/Linux, Fedora Core 1, kernel 2.4.24
A test program developed with libpq inserts 200.000 rows into table 
logs. Insertions are made with 100 row per transaction (total 2.000 
transactions).

Some parameter changes from postgresql.conf file follows:

shared_buffers = 2048   # min max_connections*2 or 16, 8KB each
I suggest you up that to say 10000 buffers..

max_fsm_relations = 2   # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 20  # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 256 # min 10
wal_buffers = 64# min 4, typically 8KB each
sort_mem = 32768# min 64, size in KB
You need to pull it down a little, I guess. How about 8/16MB?

vacuum_mem = 16384  # min 1024, size in KB
Not required. 1024 could be done since you are testing inserts anyway. Of
course, it matters only when you run vacuum..

effective_cache_size = 2000 # typically 8KB each
Is that intended? It tells postgresql that it has around 16MB of cache
memory. Set it to around 15000 so that 100MB+ is assumed. That might
change the results of index scans.. I always prefer to set it to whatever
is available.
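For instance, in postgresql.conf (a sketch, not measured values; both
settings count 8KB pages):

    effective_cache_size = 15000    # 15000 * 8KB, roughly 117MB
    shared_buffers = 10000          # 10000 * 8KB, roughly 80MB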

The test was made with both of PostgreSQL 7.3.4 and PostgreSQL 7.4.1 (the
test program was recompiled during version changes).
The results are below (average inserted rows per second).

                           speed for          speed for
# of EXISTING RECORDS      PostgreSQL 7.3.4   PostgreSQL 7.4.1
==============================================================
      0 initial records    1086 rows/s        1324 rows/s
200.000 initial records     781 rows/s         893 rows/s
400.000 initial records     576 rows/s         213 rows/s
600.000 initial records     419 rows/s         200 rows/s
800.000 initial records     408 rows/s         not tested because of bad results
Do you mean 800.000? I believe the '.' is a thousands separator here but
not too sure..:-)

When the logs table is reconstructed with only one index (the primary key),
a speed of 2941 rows/s is reached. But I need all seven indexes.

The question is why PostgreSQL 7.4.1 is so slow under heavy work?
Can you run vmstat and see where things get stalled? Probably you can up
the number of WAL segments and try again.
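Something along these lines in postgresql.conf; the numbers are only a
starting guess:

    checkpoint_segments = 16   # more 16MB WAL segments between checkpoints
    wal_buffers = 64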

Is there a way to speed up inserts without eliminating indexes?

What about concurrent inserts (concurrent spare test program executions)
into the same table? It did not work.
What do you mean, it didn't work? Any errors?

HTH

 Shridhar





Re: [GENERAL] Simple, but VERY useful enhancement for psql command - or am I

2004-02-27 Thread Nick Barr
Ben wrote:
I'm designing a fairly involved database system. As part of the process, I
use the \i [FILE] command a great deal. I set up fairly involved queries,
sometimes simply for the purpose of shortening column names so the output
is reasonable. For example:
SELECT longname AS abbr,othername as "V" FROM table WHERE how;

...a bunch of these can result in a single-line output on the console,
which is a lot easier to deal with than a dump of the actual field names
which wraps around and makes you scroll back and forth trying to line up
the names with the values.
Now, in my case, I'm dealing with specific orders. So the WHERE clause
might be:
...WHERE zorder=104788;

Which works fine. But I have to edit the file every time I'm working with
a different order, which is repetitive and annoying, something computers
are supposed to save us from. :)
However, you can't leave it out; \i [FILE] expects the query to be
complete, ready to go to the server. As far as I can tell.
So - how about a command to read a file into the input lines withOUT
sending it yet, so that it's ready to type the last part, such as:
   104788;

In other words, the file would end here:

...WHERE zorder=104788;
   ^
   |
   |
...then I could just type the number, hit enter, and off it would go.
Or even if it has to be complete, right now, you can use \i [FILE] and it
runs, but you can't edit the thing with the line review editing tools...
it shows the \i [FILE] command, not what the command read. That would work
too, even if it caused a dummy read the first time you used it.
Input, anyone?

--Ben

I am not sure about this exactly, but a workaround could be using 
temporary sequences. I use these a lot in some of my more involved DB 
setup scripts.

So for instance in the top level file you have:

---
CREATE SEQUENCE temp_zorder_num_seq;
SELECT setval('temp_zorder_num_seq', 104788);
\i Somefile.sql

DROP SEQUENCE temp_zorder_num_seq;
---
Then in any \i file you can just use:

---
INSERT INTO some_table (zorder_num, ...) VALUES 
(currval('temp_zorder_num_seq'), ...);
---

All you have to change is the setval at the top of the script. Make sure 
you drop the sequences though ;-).

HTH

Nick



Re: [GENERAL] PostgreSQL insert speed tests

2004-02-27 Thread Bill Moran
I don't know the answer to the question of why 7.4 is slower, but I have
some suggestions on additional things to test, and how to make it faster.
First off, try 200 transactions of 1000 records each; you might even want
to try 20 transactions of 10,000 records each.  Postgres seems to run much
faster the fewer commits you have, but different configs may change the
sweet spot.
Secondly, one possible solution to your problem is to drop the indexes,
insert the new rows and recreate the indexes.  Of course, for testing,
you'll want to time the entire process of drop/insert/create and compare
it to the raw insert time with indexes intact.  I use a stored procedure
on my databases, i.e.:
select drop_foo_indexes();
...

...
select create_foo_indexes();
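A rough sketch of what such a pair of functions can look like, using
index definitions from the test schema quoted below (plpgsql needs
EXECUTE to run DDL, and the language must be installed with createlang):

    CREATE OR REPLACE FUNCTION drop_foo_indexes() RETURNS integer AS '
    BEGIN
        EXECUTE ''DROP INDEX ctime_ndx'';
        EXECUTE ''DROP INDEX stime_ndx'';
        EXECUTE ''DROP INDEX itime_ndx'';
        RETURN 1;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE OR REPLACE FUNCTION create_foo_indexes() RETURNS integer AS '
    BEGIN
        EXECUTE ''CREATE INDEX ctime_ndx ON logs USING btree (ctime)'';
        EXECUTE ''CREATE INDEX stime_ndx ON logs USING btree (stime)'';
        EXECUTE ''CREATE INDEX itime_ndx ON logs USING btree (itime)'';
        RETURN 1;
    END;
    ' LANGUAGE 'plpgsql';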
Another thing to consider is vacuums.  You don't mention how often you
vacuumed the database during testing, I would recommend a "vacuum full"
between each test (unless, of course, you're testing how much a lack
of vacuum hurts performance ;)
Hope this helps.

Sezai YILMAZ wrote:
Hello

I need high throughput while inserting into PostgreSQL. Because of that I
did some PostgreSQL insert performance tests.

-- Test schema
create table logs (
  logid serial primary key,
  ctime integer not null,
  stime integer not null,
  itime integer not null,
  agentid integer not null,
  subagentid integer not null,
  ownerid integer not null,
  hostid integer not null,
  appname varchar(64) default null,
  logbody varchar(1024) not null
);
create index ctime_ndx on logs using btree (ctime);
create index stime_ndx on logs using btree (stime);
create index itime_ndx on logs using btree (itime);
create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);

Test Hardware:
IBM Thinkpad R40
CPU: Pentium 4 Mobile 1993 Mhz (full powered)
RAM: 512 MB
OS: GNU/Linux, Fedora Core 1, kernel 2.4.24
A test program developed with libpq inserts 200.000 rows into table 
logs. Insertions are made with 100 row per transaction (total 2.000 
transactions).

Some parameter changes from postgresql.conf file follows:

shared_buffers = 2048   # min max_connections*2 or 16, 8KB each
max_fsm_relations = 2   # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 20  # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 256 # min 10
wal_buffers = 64# min 4, typically 8KB each
sort_mem = 32768# min 64, size in KB
vacuum_mem = 16384  # min 1024, size in KB
checkpoint_segments = 6 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 900# range 30-3600, in seconds
fsync = true
wal_sync_method = fsync # the default varies across platforms:
enable_seqscan = true
enable_indexscan = true
enable_tidscan = true
enable_sort = true
enable_nestloop = true
enable_mergejoin = true
enable_hashjoin = true
effective_cache_size = 2000 # typically 8KB each
geqo = true
geqo_selection_bias = 2.0   # range 1.5-2.0
geqo_threshold = 11
geqo_pool_size = 0  # default based on tables in statement,
   # range 128-1024
geqo_effort = 1
geqo_generations = 0
geqo_random_seed = -1   # auto-compute seed

The test was made with both of PostgreSQL 7.3.4 and PostgreSQL 7.4.1 (the
test program was recompiled during version changes).
The results are below (average inserted rows per second).

                           speed for          speed for
# of EXISTING RECORDS      PostgreSQL 7.3.4   PostgreSQL 7.4.1
==============================================================
      0 initial records    1086 rows/s        1324 rows/s
200.000 initial records     781 rows/s         893 rows/s
400.000 initial records     576 rows/s         213 rows/s
600.000 initial records     419 rows/s         200 rows/s
800.000 initial records     408 rows/s         not tested because of bad results

When the logs table is reconstructed with only one index (the primary key),
a speed of 2941 rows/s is reached. But I need all seven indexes.

The question is why PostgreSQL 7.4.1 is so slow under heavy work?

Is there a way to speed up inserts without eliminating indexes?

What about concurrent inserts (concurrent spare test program executions)
into the same table? It did not work.
--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [GENERAL] Simple,

2004-02-27 Thread John Sidney-Woollett
Better would be to match Oracle's SQL*Plus feature, DEFINE.

The gist of which is that you can create a SQL statement with an "&" (or
other 'defined' character) in it. If DEFINE is ON, then the interpreter
prompts you for the value when it encounters the "&". After getting the
value it then processes the SQL statement.

Here is an example using SQL*Plus:

SQL*Plus: Release 9.2.0.1.0 - Production on Fri Feb 27 14:11:18 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Release 9.2.0.1.0 - Production
JServer Release 9.2.0.1.0 - Production

SQL> select wdresourceid from wdresource where wdresourceid = &my_res_id;
Enter value for my_res_id: 615
old   1: select wdresourceid from wdresource where wdresourceid = &my_res_id
new   1: select wdresourceid from wdresource where wdresourceid = 615

WDRESOURCEID

 615

SQL> select wdresourceid from wdresource where wdresourceid = &my_res_id;
Enter value for my_res_id: 1
old   1: select wdresourceid from wdresource where wdresourceid = &my_res_id
new   1: select wdresourceid from wdresource where wdresourceid = 1

no rows selected

You also need the ability to switch off the DEFINE operation in case you
are using a SQL script which contains "&" characters which you don't want
the interpreter to treat as a define.

This would be a cool and useful feature, if it could be implemented in
psql...
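For what it's worth, psql's variables already get part of the way there;
something like this works inside a file read with \i (the variable name
is made up):

    \set my_res_id 615
    SELECT wdresourceid FROM wdresource WHERE wdresourceid = :my_res_id;

What DEFINE adds on top is the interactive prompt for the value.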

John Sidney-Woollett

Nick Barr said:
> Ben wrote:
>> I'm designing a fairly involved database system. As part fo the process,
>> I
>> use the \i [FILE] command a great deal. I set up fairly involved
>> queries,
>> sometimes simply for the purpose of shortening column names so the
>> output
>> is reasonable. For example:
>>
>> SELECT longname AS abbr,othername as "V" FROM table WHERE how;
>>
>> ...a bunch of these can result in a single-line output on the console,
>> which is a lot easier to deal with than a dump of the actual field names
>> which wraps around and makes you scroll back and forth trying to line up
>> the names with the values.
>>
>> Now, in my case, I'm dealing with specific orders. So the WHERE clause
>> might be:
>>
>> ...WHERE zorder=104788;
>>
>> Which works fine. But, I have to edit the file every time I'm working
>> with
>> a different order, which is repetative and annoying, something computers
>> are supposed to save us from. :)
>>
>> However, you can't leave it out; \i [FILE] expects the query to be
>> complete, ready to go to the server. As far as I can tell.
>>
>> So - how about a command to read a file into the input lines withOUT
>> sending it yet, so that its ready to type the last part, such as:




Re: [GENERAL] correlated delete with "in" and "left outer join"

2004-02-27 Thread Michael Chaney
On Thu, Feb 26, 2004 at 06:26:19PM -0800, [EMAIL PROTECTED] wrote:
> I'm using postgresl 7.3.2 and have a query that executes very slowly.
>
> There are 2 tables: Item and LogEvent.  ItemID (an int4) is the
> primary key
> of Item, and is also a field in LogEvent.  Some ItemIDs in LogEvent do
> not
> correspond to ItemIDs in Item, and periodically we need to purge the
> non-matching ItemIDs from LogEvent.

delete from LogEvent where EventType!='i' and
ItemID not in (select ItemID from Item);

or, equivalently:

delete from LogEvent where EventType!='i' and
not exists (select * from Item where Item.ItemID=LogEvent.ItemID);

You might also use a foreign key, cascading delete, etc.  As for the
query style, I've had cases with the latest 7.4 where the "in" style
wasn't optimized but the "exists" style was.  It's the exact same query,
and technically the optimizer should figure that out.  Use "explain" to
see if it's being optimized to use indexes or if it's just doing table
scans.
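For example, just prepend EXPLAIN to the statement you intend to run:

    explain delete from LogEvent where EventType!='i' and
    ItemID not in (select ItemID from Item);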

Michael
-- 
Michael Darrin Chaney
[EMAIL PROTECTED]
http://www.michaelchaney.com/



Re: [GENERAL] PostgreSQL insert speed tests

2004-02-27 Thread Sezai YILMAZ
Sezai YILMAZ wrote:

create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);

                           speed for          speed for
# of EXISTING RECORDS      PostgreSQL 7.3.4   PostgreSQL 7.4.1
==============================================================
      0 initial records    1086 rows/s        1324 rows/s
200.000 initial records     781 rows/s         893 rows/s
400.000 initial records     576 rows/s         213 rows/s
600.000 initial records     419 rows/s         200 rows/s
800.000 initial records     408 rows/s         not tested because of bad results
I changed the three hash indexes to btree.

The performance increased about 2 times (in PostgreSQL 7.3.4, 1905
rows/s).

Concurrent inserts now work.

The changed columns seemed more suitable for hash indexes, because there
is no ordering on them; they are matched by exact value, which is the
natural fit for hash indexes. But hash indexes have possible deadlock
problems on multiple concurrent inserts. I think I can live with btree
indexes. They work better. :-)

-sezai



Re: [GENERAL] check for user validity

2004-02-27 Thread Bruno Wolff III
On Thu, Feb 26, 2004 at 22:17:14 -0500,
  Alexander Cohen <[EMAIL PROTECTED]> wrote:
> I know in advance all the information a user needs to start up a
> connection to postgres and do queries. I would like to be able to check
> and see if that user will be able to connect with his current
> credentials. Is this at all possible? If so, how?

Try to connect to the database as them and see whether or not the
connection is successful.

To do this in general without trying to connect, you will need to have
knowledge of the contents of pg_hba.conf and pg_ident.conf. If you know
that they will have access to the database and the application is running
as a postgres superuser, you can check a username and password by looking
at pg_shadow.
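A quick way to try the connection from a shell (host, user, and database
names are placeholders):

    psql -h somehost -U theuser -c 'select 1' thedb

A zero exit status means the connection, and therefore the credentials,
worked.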



Re: [GENERAL] Case of strings

2004-02-27 Thread Michael Chaney
On Fri, Feb 27, 2004 at 12:06:58PM -, Simon Windsor wrote:
> Hi
> 
> I have moved an application from MySQL to PgSQL, and after the making
> changes I thought all was ok. 
> 
> However, I have just realised that
> 
> Where A = 'STRING'
> 
> Is not the same as
> 
> Where A ='String'

This is standard SQL.  It's possible to use functions inside index
definitions, so you can force case and search on the same.

Michael
-- 
Michael Darrin Chaney
[EMAIL PROTECTED]
http://www.michaelchaney.com/



Re: [GENERAL] Postgres clustering?

2004-02-27 Thread Keith Bottner
Thanks Andrew, I will do some digging on the Postgres-R front to see what
their focus is.

Keith

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andrew Sullivan
Sent: Thursday, February 26, 2004 4:36 PM
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Postgres clustering?


On Thu, Feb 26, 2004 at 03:22:02PM -0600, Keith Bottner wrote:
> I am very interested in a similar solution and believe that I could 
> get some resources from my company for implementing such a system. Are 
> you interested in helping to develop the functionality for Postgres?
> 
> Is anybody else in the Postgres world even interested in pursuing this 
> functionality set in more than lip service? Are there any currently 
> active Postgres projects out there trying to achieve this?

If you want this, then go and support the folks working on the Postgres-R
project.  They have a design for this sort of thing there, although it does
some remarkably tricky things that not everyone thinks will work.  (I happen
to be among the somewhat optimistic on this front, but I haven't been able
to bring any money to the
project.)

A

-- 
Andrew Sullivan  



Re: [GENERAL] PostgreSQL insert speed tests

2004-02-27 Thread Greg Stark


> > create index agentid_ndx on logs using hash (agentid);
> > create index ownerid_ndx on logs using hash (ownerid);
> > create index hostid_ndx on logs using hash (hostid);

> > What about concurrent inserts (cocurrent spare test program execution) into
> > the same table? It did not work.

Hash indexes have relatively poor concurrency, though I think it should still
work. You probably want to be using btree indexes for everything though,
unless you can actually profile the two and show hash indexes being a big win.

Note that there were bugs in the hash index code at least through most 7.3
versions.
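For instance, converting one of them (btree is also the default index
type, so USING btree can be omitted):

    drop index agentid_ndx;
    create index agentid_ndx on logs using btree (agentid);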

-- 
greg




Re: [GENERAL] correlated delete with 'in' and 'left outer join'

2004-02-27 Thread Stephan Szabo

On Fri, 27 Feb 2004, Mike Mascari wrote:

> To do what I think you believe to be happening w.r.t. outer joins,
> you'd have to have a subquery like:
>
> [EMAIL PROTECTED] select a.fookey
> test-# FROM
> test-# (SELECT foo.key AS fookey, bar.key as barkey FROM foo LEFT
> OUTER JOIN bar ON foo.key = bar.key) AS a
> test-# WHERE a.barkey IS NULL;

This AFAICS is pretty much what he did, except that he didn't alias the
join, which is okay I believe.  He had one condition in the ON clause and
two conditions in the WHERE clause.

The original subquery looked like:
select distinct e.ItemID from LogEvent e left outer join Item i
on e.ItemID = i.ItemID where e.EventType != 'i' and i.ItemID is null




Re: [GENERAL] Case of strings

2004-02-27 Thread Paul Thomas
On 27/02/2004 12:06 Simon Windsor wrote:
Is there any way I can force PgSQL to accept case equivalence, or must I
add upper()/lower() to force the case and then make string tests?

Ie Where upper(A)=upper('String')
You could use ilike

where a ilike 'string'

You can also use POSIX regular expressions.
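For instance, a case-insensitive regular expression match uses the ~*
operator:

    where a ~* '^string$'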

--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for the Smaller Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk



[GENERAL] field must appear in the GROUP BY clause or be used in an aggregate function?

2004-02-27 Thread Bill Moran
Hey all.

I've hit an SQL problem that I'm a bit mystified by.  I have two different
questions regarding this problem: why?  and how do I work around it?
The following query:

SELECT  GCP.id,
        GCP.Name
FROM    Gov_Capital_Project GCP,
WHERE   TLM.TLI_ID = $2
GROUP BY GCP.id
ORDER BY gcp.name;
Produces the following error:

ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used in an aggregate function

That field is a CHAR, so I'm not sure what kind of aggregate to use,
or (more important to my understanding) why one is necessary.
As I said, I'm not sure I understand why this occurs.  I'm assuming that I
don't understand "group by" as well as I thought I did ;)
This isn't my query; I'm translating a system prototyped in MSSQL to
Postgres.  This query _does_ work in MSSQL.  Does that constitute a
bug in MSSQL, a shortcoming of Postgres, or just a difference of
interpretation?
--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [GENERAL] correlated delete with 'in' and 'left outer join'

2004-02-27 Thread Mike Mascari
Stephan Szabo wrote:
On Fri, 27 Feb 2004, Mike Mascari wrote:

To do what I think you believe to be happening w.r.t. outer joins,
you'd have to have a subquery like:
[EMAIL PROTECTED] select a.fookey
test-# FROM
test-# (SELECT foo.key AS fookey, bar.key as barkey FROM foo LEFT
OUTER JOIN bar ON foo.key = bar.key) AS a
test-# WHERE a.barkey IS NULL;
This AFAICS is pretty much what he did, except that he didn't alias the
join which is okay I believe.  He had one condition in on and two
conditions in where.
The original subquery looked like:
select distinct e.ItemID from LogEvent e left outer join Item i
on e.ItemID = i.ItemID where e.EventType != 'i' and i.ItemID is null
That is indeed the original subquery. But the 'i.ItemID is null' 
condition doesn't change the IN list one iota. He was somehow 
expecting the subquery to yield records internally like:

1   NULL
2   NULL
3   3
and simultaneously have the condition 'i.ItemID is null' eliminate 
the third tuple. But that is not how the left outer join executes. 
The 'i.ItemID is null' condition is evaluated, probably always to 
false, which ensures that the left outer join will never find a 
matching row from the 'Item' relation and, if queried not as a 
subquery but stand-alone as:

select distinct e.ItemID, i.ItemID
from LogEvent e left outer join Item i on e.ItemID = i.ItemID
where e.EventType != 'i' and i.ItemID is null
would always yield a relation of the form:

e.ItemID	NULL

for every e.ItemID whose e.EventType != 'i'. That ain't right.

Another example:

[EMAIL PROTECTED] select * from foo;
 key
-----
   1
   3
(2 rows)
[EMAIL PROTECTED] select * from bar;
 key | value
-----+-------
   1 | Mike
   2 | Joe
(2 rows)
[EMAIL PROTECTED] select foo.key, bar.key from foo left outer join bar on
foo.key = bar.key and bar.key is null;
 key | key
-----+-----
   1 |
   3 |
(2 rows)

[EMAIL PROTECTED] select foo.key, bar.key from foo left outer join bar on
foo.key = bar.key;
 key | key
-----+-----
   1 |   1
   3 |
(2 rows)

[EMAIL PROTECTED] select a.fookey, a.barkey from (select foo.key as
fookey, bar.key as barkey from foo left outer join bar on foo.key =
bar.key) as a where a.barkey is null;
 fookey | barkey
--------+--------
      3 |
(1 row)

Mike Mascari



Re: [GENERAL] change db encoding

2004-02-27 Thread Ron St-Pierre
Alexander Cohen wrote:

How would i go about changing a databases encoding? Is this at all 
possible?

There does not seem to be much I can do with ALTER DATABASE except change
its name!

You could try to:
-pg_dump the database to file,
-drop the database,
-recreate the database with unicode encoding (createdb -E unicode mynewdb),
-check the dumped file for any 'strange' characters and change as needed,
-restore to the new database
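A rough shell sketch of those steps (the database name is a placeholder):

    pg_dump mydb > mydb_dump.sql
    dropdb mydb
    createdb -E unicode mydb
    psql mydb < mydb_dump.sql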
PS I will have to do this soon to several databases. If you do convert 
yours, please post a follow-up message on this thread outlining your 
experience.

Here are the steps from a previous thread for a conversion to a 
different language (I couldn't find the link so I'm pasting it into this 
message):

Re: [GENERAL] Converting SQL-ASCII encoding database to UNICODE
Jean-Michel POURE wrote:

On Sunday 9 November 2003 19:39, Rajesh Kumar Mallah wrote:

> If so what is the process

The advantage of using a Unicode database is that UTF-8 supports/includes all 
known encodings at once. Therefore, in the process of development, it can 
help you save time.

When using a Unicode database, if the client does not support Unicode, it is 
always possible to recode data streams on the fly with "set client_encoding = 
foo_encoding". Therefore, there is no real drawback in using Unicode. It may 
only be a little slower, but there is no real proof.

The process of conversion is as follows:

- dump the whole database using pg_dump:
pg_dump database_name > pg_data_dump.sql
- Do no drop the old database. Make a backup of the dump:
cp pg_data_dump.sql pg_data_dump.sql.bak
- recode the dump using the GNU recode package:
recode ..u8 pg_data_dump.sql
recode will try to guess the encoding of the original ASCII database.

- Open the file in a UTF-8 editor like Kate and verify that all characters
are preserved and display well (in Hindi?). If it does not work, it may be
a problem with the original encoding. Try to guess it and retry using:

cp -f pg_data_dump.sql.bak pg_data_dump.sql
recode source_encoding..u8 pg_data_dump.sql
- create an empty Unicode database:
psql template1
create database new_database with encoding='UNICODE';
- reload the dump: psql new_database < pg_data_dump.sql
GNU/recode is available in most GNU/Linux distributions.

By the way, as you live in India, do not hesitate to visit
http://pgadmin.postgresql.org/pgadmin3/translation.php if you can help us.
Most translations in languages for India are "stalled", do not hesitate to 
take over the translation in Hindi for example.

Cheers, 
Jean-Michel


hth
Ron


Re: [GENERAL] field must appear in the GROUP BY clause or be used

2004-02-27 Thread John Sidney-Woollett
Bill Moran said:
> I've hit an SQL problem that I'm a bit mystified by.  I have two different
> questions regarding this problem: why?  and how do I work around it?
>
> The following query:
>
> SELECT  GCP.id,
>         GCP.Name
> FROM    Gov_Capital_Project GCP,
> WHERE   TLM.TLI_ID = $2
> GROUP BY GCP.id
> ORDER BY gcp.name;
>
> Produces the following error:
>
> ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used in
> an aggregate function

Since you're not aggregating data, can't you use a select distinct instead?

SELECT  distinct GCP.id, GCP.Name
FROM Gov_Capital_Project GCP, {?something missing here?}
WHERE TLM.TLI_ID = $2
ORDER BY gcp.name;

(BTW, I wasn't clear: is the where clause trying to join to another table?)

Doesn't answer your original question, but hope it helps anyway.

John Sidney-Woollett




Re: [GENERAL] compartmentalizing users

2004-02-27 Thread Bill Moran
[EMAIL PROTECTED] wrote:
Is there a way to segregate users by database or schema so that in a hosting
situation you could keep different customers apart? Otherwise, the
unpleasant alternative is to run separate instances of Postgres. 
You can add users and assign rights at the database level.

--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [GENERAL] field must appear in the GROUP BY clause or be used

2004-02-27 Thread Bill Moran
John Sidney-Woollett wrote:
Bill Moran said:

I've hit an SQL problem that I'm a bit mystified by.  I have two different
questions regarding this problem: why?  and how do I work around it?
The following query:

SELECT  GCP.id,
        GCP.Name
FROM    Gov_Capital_Project GCP,
WHERE   TLM.TLI_ID = $2
GROUP BY GCP.id
ORDER BY gcp.name;
Produces the following error:

ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used in
an aggregate function
Since you're not agregating data, can't you use a select distinct instead?
Not sure.  I'll have to get back to the programmer who wrote the original
SELECT and find out what kind of data he is actually trying to acquire.
SELECT  distinct GCP.id, GCP.Name
FROM Gov_Capital_Project GCP, {?something missing here?}
WHERE TLM.TLI_ID = $2
ORDER BY gcp.name;
(BTW, I wasn't clear if the where clause trying to join to another table?)
Yes, my bad.  The actual query causing the problem is a bit longer, with
about 6 joins in it.  I did test:
select id, name from gov_capital_project group by id order by name;

and it causes the same error, so I thought I'd make the question simpler by
removing the parts that obviously weren't contributing to the problem.
Doesn't answer your original question, but hope it helps anyway.
It may, thanks for the input!

Like I said, the most important part (to me) is to understand why
Postgres refuses to run this.  The fact that I don't know why points
to an obvious lack of understanding on my account, and I'd like to
remedy that :D
To that effect, if anyone can point me to a doc that will help me
gain a better understanding of why this error occurs, I'd be happy
to read it!
--
Bill Moran
Potential Technologies
http://www.potentialtech.com


[GENERAL] On Update (trigger hint)

2004-02-27 Thread MaRCeLO PeReiRA
Hi guys,

Please, give me some advices on how to do the
following:

I have the following table:

CREATE TABLE products (
   id            SERIAL,
   description   TEXT,
   lastupdate    date
);

Well, I would like to update the column "lastupdate"
with the value "now()" on every UPDATE executed on a
row of this table.

Do I have to create a function to do it? Can you help
me?

Regards,

Marcelo
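For reference, a minimal sketch of the usual trigger-based approach (this
assumes lastupdate can hold now(); the function and trigger names are
illustrative, and plpgsql must be installed):

    CREATE FUNCTION touch_lastupdate() RETURNS trigger AS '
    BEGIN
        -- overwrite the column on every row update
        NEW.lastupdate := now();
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER products_touch
        BEFORE UPDATE ON products
        FOR EACH ROW EXECUTE PROCEDURE touch_lastupdate();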




Re: [GENERAL] field must appear in the GROUP BY clause or be used

2004-02-27 Thread Mike Mascari
Bill Moran wrote:

Hey all.

I've hit an SQL problem that I'm a bit mystified by.  I have two different
questions regarding this problem: why?  and how do I work around it?
The following query:

SELECT  GCP.id,
        GCP.Name
FROM    Gov_Capital_Project GCP,
WHERE   TLM.TLI_ID = $2
GROUP BY GCP.id
ORDER BY gcp.name;
Produces the following error:

ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used 
in an aggregate function
The reason the grouping requires an attribute either to be aggregated or
to be part of the group by list is that, if it were not, an arbitrary
value would have to be selected:

[EMAIL PROTECTED] select * from projects;
   dept    |   project
-----------+--------------
 Finance   | Y2K
 Corporate | Y2K
 Corporate | Annual Audit
(3 rows)
[EMAIL PROTECTED] select dept, project from projects group by dept;
ERROR:  column "projects.project" must appear in the GROUP BY clause 
or be used in an aggregate function

If this were to be permitted, which project should be selected, 
'Y2K' or 'Annual Audit'?

[EMAIL PROTECTED] select dept, project from projects group by dept, project;
   dept    |   project
-----------+--------------
 Corporate | Y2K
 Corporate | Annual Audit
 Finance   | Y2K
(3 rows)
Of course, this has little meaning without an aggregate. All you're 
doing is leveraging GROUP BY's sort. You might as well use DISTINCT. 
More useful would be:

[EMAIL PROTECTED] select dept, count(project) from projects group by dept;
   dept    | count
-----------+-------
 Finance   |     1
 Corporate |     2
(2 rows)
or perhaps:

[EMAIL PROTECTED] select count(dept), project from projects group by project;
 count |   project
-------+--------------
     2 | Y2K
     1 | Annual Audit
(2 rows)
This isn't my query, I'm translating a system prototyped in MSSQL to
Postgres.  This query _does_ work in MSSQL.  Does that constitute a
bug in MSSQL, or a shortcomming of Postgres, or just a difference of
interpretation?
If MSSQL picks an arbitrary value for the non-group by attribute, it 
is violating spec.

Mike Mascari





Re: [GENERAL] field must appear in the GROUP BY clause or be used

2004-02-27 Thread scott.marlowe
On Fri, 27 Feb 2004, Bill Moran wrote:

> Hey all.
> 
> I've hit an SQL problem that I'm a bit mystified by.  I have two different
> questions regarding this problem: why?  and how do I work around it?
> 
> The following query:
> 
> SELECT  GCP.id,
>         GCP.Name
> FROM    Gov_Capital_Project GCP,
> WHERE   TLM.TLI_ID = $2
> GROUP BY GCP.id
> ORDER BY gcp.name;
> 
> Produces the following error:
> 
> ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used in an 
> aggregate function

OK, let's look at a test table:

 id | data
----+-------
  0 | 'abc'
  0 | 'def'
  1 | 'ghi'

Now, let's use this query:

select id, data from test_table group by id;

what results should I get back?

I have two possible results for the data column, abc and def.  But I only 
get one row with a 0 in it, so which one of those do I pick?

If I use an aggregate I can be sure to get the first or last one:

select id, max(data) from test_table group by id;

Also, you may want to look at postgresql's extension, "distinct on":

http://www.postgresql.org/docs/7.4/static/queries-select-lists.html#QUERIES-DISTINCT

It can give you the kind of results you want.

select distinct on (id) id, data from test_table;

But it is known to be indeterminate, so you may get different results each
time.




Re: [GENERAL] correlated delete with 'in' and 'left outer join'

2004-02-27 Thread Michael Chaney
> >The original subquery looked like:
> >select distinct e.ItemID from LogEvent e left outer join Item i
> >on e.ItemID = i.ItemID where e.EventType != 'i' and i.ItemID is null

Please, before continuing this thread, read my post below.  What you're
all getting around to, albeit painfully, is that this subquery is
worthless as-is.  This is the mysql way of finding rows in one table
with no match in another without the convenience of the "in" or "exists"
constructs.

Because we're using Postgres and have those constructs, the original
query can be rewritten simply with either:

delete from LogEvent where EventType != 'i' and ItemID not in
(select ItemID from Item)

That's it.  That's the whole query.  It does what he wants.

Michael
-- 
Michael Darrin Chaney
[EMAIL PROTECTED]
http://www.michaelchaney.com/



Re: [GENERAL] field must appear in the GROUP BY clause or be used

2004-02-27 Thread Richard Huxton
On Friday 27 February 2004 16:39, Bill Moran wrote:
> John Sidney-Woollett wrote:
> > Bill Moran said:
> >>
> >>SELECT  GCP.id,
> >>GCP.Name
> >> FROMGov_Capital_Project GCP,
> >> WHERE TLM.TLI_ID = $2
> >> group by GCP.id
> >> ORDER BY gcp.name;

> >>ERROR:  column "gcp.name" must appear in the GROUP BY clause or be used
> >> in an aggregate function

> Like I said, the most important part (to me) is to understand why
> Postgres refuses to run this.  The fact that I don't know why points
> to an obvious lack of understanding on my account, and I'd like to
> remedy that :D

Like the error message says, if you're using GROUP BY, everything in the
SELECT list must be an aggregate, e.g. SUM(...), or used in the GROUP BY.

So, this is OK:
  SELECT dept, week, SUM(amt_sold)
  FROM weekly_sales
  GROUP BY dept,week;
This isn't:
  SELECT dept, week, SUM(amt_sold)
  FROM weekly_sales
  GROUP BY dept;

Ask yourself which "week" should be returned in the second case.

-- 
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] correlated delete with 'in' and 'left outer join'

2004-02-27 Thread Mike Mascari
Michael Chaney wrote:

Please, before continuing this thread, read my post below.  What you're
all getting around to, albeit painfully, is that this subquery is
worthless as-is.  This is the mysql way of finding rows in one table
with no match in another without the convenience of the "in" or "exists"
constructs.
Because we're using Postgres and have those constructs, the original
query can be rewritten simply with either:
delete from LogEvent where EventType != 'i' and ItemID not in
(select ItemID from Item)
That's it.  That's the whole query.  It does what he wants.
One more minor point. :-)

If you are using 7.3 or earlier, PostgreSQL will sequentially scan
the IN subquery result, which executes quite slowly, so the
EXISTS method Stephan stated should be used:

DELETE FROM LogEvent
WHERE EventType != 'i' AND NOT EXISTS (
 SELECT 1
 FROM Item
 WHERE Item.ItemID = LogEvent.ItemID
);
If you are using >= 7.4, then your query above is optimal:

http://www.postgresql.org/docs/7.4/static/release.html#RELEASE-7-4

Just something to consider,

Mike Mascari

