Re: [GENERAL] More efficient INs when comparing two columns

2005-09-21 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2005-09-20 20:45:21 +0200:
> I was thinking if this was possible in some way..
> I have this table where we have X and Y coordinates, and i need to
> select several in one go.
> 
> # select * from xy where (x = 1 and y = 2) or (x = 2 and y = 2);
> 
> This works, but it's not so nice looking.
> It would be nice to be able to do it like this:
> 
> # select * from xy where (x, y) in ((1, 2), (2, 2));
> 
> But that doesn't work.
> A funny thing is that this works:
> 
> # select * from xy where (x, y) = (1, 2);
> 
> What's the most efficient way of doing these kind of selects?

You'd need to write that as

(x, y) IN (VALUES (1, 2), (2, 2))

Unfortunately, PostgreSQL's support for table value constructors
is very weak.
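
For reference, the expanded boolean form remains the portable spelling, and the row-wise IN does work on later PostgreSQL releases; a sketch against the poster's xy table (check your server version):

```sql
-- Portable today:
SELECT * FROM xy WHERE (x = 1 AND y = 2) OR (x = 2 AND y = 2);

-- Row-wise IN, once table value constructor support improved:
SELECT * FROM xy WHERE (x, y) IN ((1, 2), (2, 2));
```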

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] pg_index_indrelid_index error, any ideas ??

2005-09-21 Thread aw_contact
I've got an error from the PostgreSQL server:
"ERROR:  root page 1 of "pg_index_indrelid_index" has level 512, expected 0"
This error appears on every command and I can't do anything with my
database. Is it possible to solve this problem?





Re: [GENERAL] deactivating/activating constraint

2005-09-21 Thread Oleg
Thank you very much. With DEFERRABLE INITIALLY DEFERRED (at the end of
the constraint) it works fine now.


Sebastian Böck wrote:


Oleg wrote:


Dear All,
is it possible to temporarily deactivate a constraint in PostgreSQL?
There is a constraint that does not allow me to write some data (see
e-mail below). But after all datasets are written, the constraint is
satisfied. So I was wondering whether it is possible to deactivate a
constraint, write all records in all tables, and then activate the
constraint again. Somebody told me that this is possible in Oracle.

Thanks a lot in advance
Oleg



Have you tried to make the Foreign Key deferrable and initially deferred?

See:
http://www.postgresql.org/docs/8.0/interactive/sql-createtable.html
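
A minimal sketch of that suggestion, with hypothetical table and column names:

```sql
-- 'parent'/'child' are placeholder names; the FK is only checked at COMMIT
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id)
              DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO child VALUES (1, 10);   -- parent row 10 doesn't exist yet: OK
INSERT INTO parent VALUES (10);     -- satisfied before the check runs
COMMIT;                             -- constraint is verified here
```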

HTH

Sebastian






[GENERAL] Win32 Backup and Restore of large databases.

2005-09-21 Thread Howard Cole
On a Win32 machine, can I back up a database if the backup file exceeds
2 GB? On Linux, I can split the backup file into multiple files. Can this
be done on Win32?


Regards

Howard Cole
www.selestial.com



Re: [GENERAL] Win32 Backup and Restore of large databases.

2005-09-21 Thread Tony Caduto

Howard Cole wrote:

On a Win32 machine, can I back up a database if the backup file exceeds
2 GB? On Linux, I can split the backup file into multiple files. Can
this be done on Win32?


Regards

Howard Cole
www.selestial.com




I am not sure, but I thought the max file size on NT and later systems was
4 GB, not 2 GB.  Also, newer versions of Linux should not have the 2 GB
limitation (that was a limit imposed by the 2.x kernel).
This of course does not mean that the PostgreSQL code was ever updated
to allow larger files.

--

Tony Caduto
http://www.amsoftwaredesign.com
Home of PG Lightning Admin for Postgresql 8.x 





[GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread Alex R. Mosteo
Hello,

I have a client program which worked fine, using libpq3 against a 7.4
database. Both machines run Linux. The client and server are separate
machines connecting via TCP/IP.

I've just updated the database to 8.0, and the client program now can't
connect. There's no error message; it simply gets stuck in the
connection call and remains so indefinitely.

Is this a known incompatibility? If not, any suggestions on what to try?

Thanks in advance,

Alex.




Re: [GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread Devrim GUNDUZ


Hi,

On Wed, 21 Sep 2005, Alex R. Mosteo wrote:


I have a client program which worked fine, using libpq3 against a 7.4
database. Both machines run Linux. The client and server are separate
machines connecting via TCP/IP.

I've just updated the database to 8.0, and the client program now can't
connect. There's no error message; it simply gets stuck in the
connection call and remains so indefinitely.

Is this a known incompatibility? If not, any suggestions on what to try?


libpq version was changed in 8.0.2.

If you are using Red Hat / Fedora Core, install this package to solve the 
problem:


http://developer.postgresql.org/~devrim/rpms/compat/compat-postgresql-libs-3-3PGDG.i686.rpm

This RPM installs the following files:

# rpm -qlp compat-postgresql-libs-3-3PGDG.i686.rpm
/usr/lib/libecpg.so.4
/usr/lib/libecpg.so.4.1
/usr/lib/libecpg_compat.so.1
/usr/lib/libecpg_compat.so.1.2
/usr/lib/libpgtypes.so.1
/usr/lib/libpgtypes.so.1.2
/usr/lib/libpq.so.3
/usr/lib/libpq.so.3.1

Regards,
--
Devrim GUNDUZ
Kivi Bilişim Teknolojileri - http://www.kivi.com.tr
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
  http://www.gunduz.org


[GENERAL] pgclient hostbased authentication

2005-09-21 Thread Bohdan Linda
Hello,

may I ask how (or rather, which) IP is checked against the pg_hba.conf IP
entry in a NAT environment?

Could it be that the psql client packs the client's IP address into the
authentication data?

Regards,
Bohdan 



Re: [GENERAL] pg_autovacuum not sleeping

2005-09-21 Thread Brandon Metcalf
p == pgman@candle.pha.pa.us writes:

 p> Brandon Metcalf wrote:

 ...

 p> > So, pg_autovacuum says it's going to sleep for 4886 seconds, but fires
 p> > up again after just under 600 seconds.
 p> >
 p> > Can anyone explain what I'm seeing?

 p> Yep, this was fixed in 8.0.X CVS in May, two days after the release of
 p> 8.0.3:

 p> revision 1.27.4.4
 p> date: 2005/05/11 17:58:32;  author: momjian;  state: Exp;  lines: +11 -2
 p> Fix pg_autovacuum -s flag to handle values > 2000 by using sleep()
 p> instead of pg_usleep.

 p> Backpatch to 8.0.X.

 p> We are thinking of putting out an 8.0.4 in a few days, but until then
 p> you can pull from CVS branch REL8_0_STABLE and use that fix in
 p> pg_autovacuum.c.


Thanks.  That did the trick.


-- 
Brandon



Re: [GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread Michael Fuhr
On Wed, Sep 21, 2005 at 04:18:33PM +0300, Devrim GUNDUZ wrote:
> On Wed, 21 Sep 2005, Alex R. Mosteo wrote:
> >I have a client program which worked fine, using libpq3 against a 7.4
> >database. Both machines run Linux. The client and server are separate
> >machines connecting via TCP/IP.
> >
> >I've just updated the database to 8.0, and the client program now can't
> >connect. There's no error message; it simply gets stuck in the
> >connection call and remains so indefinitely.
> >
> >Is this a known incompatibility? If not, any suggestions on what to try?
> 
> libpq version was changed in 8.0.2.
> 
> If you are using Red Hat / Fedora Core, install this package to solve the 
> problem:

While it's a good idea to upgrade the libraries to match the server,
a 7.4 libpq should nevertheless be able to connect to an 8.0 server
(unless the server demands a client certificate, in which case the
connection should fail instead of hanging).  What's your theory
about why the 7.4 library is hanging on the connection attempt?

Alex, are you sure it's the connection call that's stuck?  Have you
used a debugger or added print statements immediately before and
after the connection attempt?  Have you done a process trace or
network sniff to see what's happening when the program hangs?

-- 
Michael Fuhr



[GENERAL] Problems with leftover types in pg_temp schemas

2005-09-21 Thread Janning Vygen
Hi,

last week I asked a question about how to remove a leftover pg_type entry for a
temp table.

http://archives.postgresql.org/pgsql-general/2005-09/msg00409.php

Tom Lane helped me manage it by reindexing pg_depend and DROPping the
pg_temp_X.temp_gc table.

Now I have the same problem again, but with another function which creates and
drops a temporary table (not temp_gc this time, but "spiele").

If the server is overloaded and lots of requests are made to this function, it
seems that cleaning up pg_type isn't working.

It happened on two different database servers today. On one of them it happened
yesterday too, so it can't be a hardware failure. If I do
DROP pg_temp_N.spiele several times (I found the type about 10 times),
everything works fine.

What happens in my function is the following: I have a table "spiele" (games)
where I save scores. Users can use an HTML form to input scores and
temporarily override the table "spiele" to do some calculation like "what
would be the result if the scores were like my input".

For this reason I take the input and build a temporary table "spiele" which
overrides the main table, because both schemas public and pg_temp are in the
search_path.

The last two days, when many people used this function, I suddenly got lots of
pg_type entries for "spiele" which were not deleted after dropping the
temporary table.

Is the number of temporary schemas bounded, or can I create as many temporary
schemas as there are sessions? Why can some types be left behind after
dropping a temporary table? To me it seems like an odd bug which occurs only
when using a lot of temporary tables in parallel sessions.

The problem is quite urgent because I can't afford the time to rewrite the
app to not use temp tables, so I need to fix it. Any help is much
appreciated.

kind regards,
janning



Re: [GENERAL] pgclient hostbased authentication

2005-09-21 Thread A. Kretschmer
On 2005-09-21 at 15:53:45 +0200, Bohdan Linda wrote:
> Hello,
> 
> may I ask how (or rather, which) IP is checked against the pg_hba.conf IP
> entry in a NAT environment?
> 
> Could it be that the psql client packs the client's IP address into
> the authentication data?

No. Why? Describe your problem.

pg_hba.conf can only see the packet's source IP.



Regards, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47212,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



Re: [GENERAL] pgclient hostbased authentication

2005-09-21 Thread Magnus Hagander
> Hello,
> 
> may I ask how (or rather, which) IP is checked against the pg_hba.conf IP
> entry in a NAT environment?
> 
> Could it be that the psql client packs the IP address of the client
> into the authentication data?

No, it validates the source address on the TCP connection. In the event
of NAT, it will check the NATted address.

//Magnus



Re: [GENERAL] running vacuum in scripts

2005-09-21 Thread Belinda M. Giardine


On Tue, 20 Sep 2005, Jim C. Nasby wrote:

> On Tue, Sep 20, 2005 at 11:04:44AM -0400, Belinda M. Giardine wrote:
> > > For vacuuming, you could try to use the autovacuum daemon, it is
> > > included in the contrib part of postgres. If you installed from source,
> > > you will likely need to separately install autovacuum, if you installed
> > > a prepackaged postgres, chances are that you already have the
> > > pg_autovacuum executable installed. It is fairly easy to set up.
> >
> > Thanks.  I didn't find autovacuum anywhere in our install.  It was done
> > from source so I sent a request to the sysadmin.  It does sound like what
> > I was looking for.
>
> http://cvs.distributed.net/viewcvs.cgi/stats-sql/tools/ has some scripts
> you might find useful for running autovacuum.
> --
> Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
> Pervasive Software  http://pervasive.comwork: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
>

Thanks, yes these scripts do look useful.

Belinda




Re: [GENERAL] pg_index_indrelid_index error, any ideas ??

2005-09-21 Thread Tom Lane
[EMAIL PROTECTED] writes:
> I've got an error from the PostgreSQL server:
> "ERROR:  root page 1 of "pg_index_indrelid_index" has level 512, expected 0"
> This error appears on every command and I can't do anything with my
> database. Is it possible to solve this problem?

See REINDEX.  Also you ought to run some memory and disk diagnostics
--- it looks like some part of your hardware dropped a bit.
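
For the archives, the repair path Tom points at looks roughly like this; with a corrupt system index the server may refuse to use it, so starting a standalone backend (postgres -P, which ignores system indexes) may be needed first. Treat this as an outline, not an exact recipe:

```sql
-- As superuser, once you have a working session:
REINDEX INDEX pg_index_indrelid_index;
-- or rebuild everything in one go ('mydb' is a placeholder name):
REINDEX DATABASE mydb;
```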

regards, tom lane



Re: [GENERAL] Problems with leftover types in pg_temp schemas

2005-09-21 Thread Tom Lane
Janning Vygen <[EMAIL PROTECTED]> writes:
> The problem is quite urgent because I can't afford the time to rewrite the 
> app to not use temp tables, so I need to fix it. Any help is much 
> appreciated.

Show us a self-contained test case.
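
For anyone else following along, a skeleton of what such a test case might look like, reconstructed from the description in the thread (the column list is invented):

```sql
-- Hypothetical reduction: a temp table "spiele" shadows public.spiele
-- because pg_temp precedes public in the search_path.
CREATE OR REPLACE FUNCTION what_if() RETURNS void AS $$
BEGIN
    CREATE TEMPORARY TABLE spiele (spiel_id integer, score integer);
    -- ... run the "what if" calculations against pg_temp.spiele ...
    DROP TABLE spiele;
END;
$$ LANGUAGE plpgsql;

-- Reproducing the report would mean calling what_if() from many parallel
-- sessions and then checking pg_type for leftover "spiele" entries.
```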

regards, tom lane



[GENERAL] Network Flow Schema + Bulk Import/Updates

2005-09-21 Thread Michael L. Artz
I'm fairly new at Postgres and had some basic design questions.  My
problem is basically that I want to do large bulk imports (millions of
rows) into a large DB (billions of rows) where there might already be
data that needs to be updated instead of inserting a new row.  I read a
similar post a few days ago, but I just had a couple more questions.

My DB decomposes into storing something similar to Cisco Netflow
records, i.e. source IP, source Port, dest IP, dest Port, and a bit more
information about network flow data, as well as 2 timestamps, a count
field, and a foreign key into a small table (<100 rows) (schema below). 
I will be nightly importing the data (which could number in the millions
of rows) and, since the DB is not quite production, I can do almost
whatever I want, as long as the DB is in a query-able state by morning. 
The nightly imports are the only modifications to the table; otherwise
the table is read-only and accessed via a web application.  Needless to
say, the table will get huge (I'm estimating billions of rows within a
month), although it should level off as I get more and more duplicates.

CREATE TABLE flow (
    source_ip   INET,
    -- SMALLINT is not normally big enough (signed vs. unsigned),
    -- but we convert in application space
    source_port SMALLINT,
    dest_ip     INET,
    dest_port   SMALLINT,

    comment     VARCHAR(128),
    count       INTEGER,
    first_seen  DATE,
    last_seen   DATE,
    fk          INTEGER NOT NULL REFERENCES small_table(small_id)
);
CREATE INDEX flow_source_ip_idx ON flow (source_ip);
CREATE INDEX flow_dest_ip_idx ON flow (dest_ip);

When I import the data (thousands to millions of rows), I want to check
whether a row already exists with the same source_ip,
dest_ip, and comment and, if so, update that row to increment the count
and adjust the first_seen and last_seen dates if need be.  Otherwise,
just insert a new row with a count of 1.

Basic question:  What's the best way to go about this?

From what I have read, it seemed like the consensus was to import (COPY)
the new data into a temporary table and then work on the inserts and
updates from there.  I also read some suggestions involving triggers ...
which way would be best given my dataset?  I've thought about doing it
in application space (Perl) by querying out all the rows that need to be
updated, deleting said rows, dropping the indexes, and then doing a bulk
COPY of any new rows plus the modified old rows ... does this sound like
a good/bad idea?
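
The staging-table approach mentioned above might look roughly like this, reusing the flow schema from the post (the COPY path is a placeholder, and on 8.0 the LEAST/GREATEST calls would need to be spelled as CASE expressions):

```sql
BEGIN;
CREATE TEMP TABLE flow_stage (LIKE flow);
COPY flow_stage FROM '/tmp/new_flows.copy';   -- placeholder path

-- Fold duplicates into existing rows
UPDATE flow f
SET count      = f.count + s.count,
    first_seen = LEAST(f.first_seen, s.first_seen),
    last_seen  = GREATEST(f.last_seen, s.last_seen)
FROM flow_stage s
WHERE f.source_ip = s.source_ip
  AND f.dest_ip   = s.dest_ip
  AND f.comment   = s.comment;

-- Insert whatever is genuinely new
INSERT INTO flow
SELECT s.*
FROM flow_stage s
WHERE NOT EXISTS (
    SELECT 1 FROM flow f
    WHERE f.source_ip = s.source_ip
      AND f.dest_ip   = s.dest_ip
      AND f.comment   = s.comment);
COMMIT;
```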

Some other basic questions:

-Should I drop and recreate the indexes at any point, given that the updates
and selects will use them extensively to find matching rows in the
existing flow table?  Perhaps create a new index on (source_ip, dest_ip)?

-What do you think of the schema for storing network flow data
considering that I'm only storing single IPs and I do need the comment
field?  Is the INET data type the best thing to use here (12 bytes, I
think), or should I use my application to convert my IPs to "INTEGER"
equivalents (4 bytes)?  Perhaps create a new Postgres data type? 
Although I have another table which actually uses INET networks, and I'd
like to be able to join the two, so a new data type might be more work
than I initially thought (although I'm up for it) :)

-How many rows at a time can I practically insert with the COPY
command?  I've read about people doing millions, but is that realistic,
or should I break my load into X-record chunks/transactions?

Any suggestions/comments/flames would be appreciated.

Thanks
-Mike



Re: [GENERAL] array_dims array_lower/upper distance

2005-09-21 Thread Tony Wasson
On 9/20/05, Matthew Peter <[EMAIL PROTECTED]> wrote:
> Wondering if there's a way for postgres to return how
> many elements are in an array as a single integer? For
> instance, returning 10 (items in the array) instead of
> [-5:4]
>
> Also, is there a way to return the position of an item
> in an array?
>

Try using array_upper and specify which array dimension.

from http://www.postgresql.org/docs/current/static/arrays.html:

"array_dims produces a text result, which is convenient for people to
read but perhaps not so convenient for programs. Dimensions can also
be retrieved with array_upper and array_lower, which return the upper
and lower bound of a specified array dimension, respectively."


pg8=# SELECT array_dims('{4,2,3,5,6}'::INT[]);
 array_dims
------------
 [1:5]
(1 row)

pg8=# SELECT array_upper('{4,2,3,5,6}'::INT[],1);
 array_upper
-------------
           5
(1 row)
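
To get the element count as a single integer, which was the original question, the two bounds can be combined; a sketch:

```sql
-- Number of elements in a one-dimensional array:
SELECT array_upper(a, 1) - array_lower(a, 1) + 1 AS n_elems
FROM (SELECT '{4,2,3,5,6}'::INT[] AS a) s;
-- this also handles arrays with non-default lower bounds like [-5:4]
```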



Re: [GENERAL] Slow search.. quite clueless

2005-09-21 Thread Olly Betts
Yonatan Ben-Nes wrote:
> Actually I even started to look on other solutions and maybe you can say
> something about them also.. maybe they can help me:
> 1. Omega (From the Xapian project) - http://www.xapian.org/

You could certainly do this with Xapian and Omega.  With only 5
million records it should be very quick.

The easiest approach would be to periodically dump the SQL tables
and build a new Xapian index which reflects the SQL database - you'd
probably want to customise the "dbi2omega" script in the Omega
distribution.  This approach works particularly well if the tables
are updated in a batch fashion (one big weekly update, say).

Alternatively you could hook into whatever updates the SQL database
and get it to make corresponding updates to the Xapian index.  That
has the advantage that they'll always be in step, but is probably
more work to set up.

The main drawback compared to doing everything in SQL is that you'd
have two systems to deal with rather than just one...

Cheers,
Olly




Re: [GENERAL] Errror in beltane

2005-09-21 Thread Hartel, Dirk


>  -Original Message-
> From: Hartel, Dirk  
> Sent: 20 September 2005 15:28
> To:   '[EMAIL PROTECTED]'
> Subject:  Errror in beltane
> 
> Hi all, I use yule as an intrusion detection system and beltane as the
> frontend. All data are in a PostgreSQL DB. Yule writes all data without
> error to the database, but whenever I want to acknowledge a message I
> receive the following:
> 
> Warning: pg_exec(): Query failed: ERROR: permission denied for relation
> log . in /var/www/data/beltane/sql.php on line 187
> ERROR: permission denied for relation log Entry 252904: error while trying
> to set status to ACK
> Warning: pg_exec(): Query failed: ERROR: permission denied for relation
> log . in /var/www/data/beltane/sql.php on line 217
> 
> I contacted the samhain & beltane support, but they told me that it looks
> like a problem with the postgres DB.
> 
> Any idea?
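
No reply appears in this digest, but the error text itself points at table privileges: the role beltane connects as apparently lacks UPDATE rights on the log table. A hedged sketch of the usual fix (the role name is a placeholder):

```sql
-- Run as the owner of the log table or as a superuser;
-- 'beltane_web' stands in for whatever role beltane connects as.
GRANT SELECT, UPDATE ON log TO beltane_web;
```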
> 
> 
> 
> mit freundlichen Grüssen / with kind regards
> 
> Dirk Hartel
> 
> System Analyst
> Dresdner Kleinwort Wasserstein  
> Dresdner Bank AG
> GBS IT Infrastructure  
> DrKW OI FFT Linux  
> Theodor-Heuss-Allee 50, 3rd floor
> 60486 Frankfurt/Main, Germany
> Voice:  +49-69-713-66903
> Mobil:  +49-171-3014505
> Fax:+49-69-713-26684
> 








Re: [GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread Alex R. Mosteo
Michael Fuhr wrote:
> On Wed, Sep 21, 2005 at 04:18:33PM +0300, Devrim GUNDUZ wrote:
> 
>>On Wed, 21 Sep 2005, Alex R. Mosteo wrote:
>>
>>>I have a client program which worked fine, using libpq3 against a 7.4
>>>database. Both machines run Linux. The client and server are separate
>>>machines connecting via TCP/IP.
>>>
>>>I've just updated the database to 8.0, and the client program now can't
>>>connect. There's no error message; it simply gets stuck in the
>>>connection call and remains so indefinitely.
>>>
>>>Is this a known incompatibility? If not, any suggestions on what to try?
>>
>>libpq version was changed in 8.0.2.
>>
>>If you are using Red Hat / Fedora Core, install this package to solve the 
>>problem:
> 
> 
> While it's a good idea to upgrade the libraries to match the server,
> a 7.4 libpq should nevertheless be able to connect to an 8.0 server
> (unless the server demands a client certificate, in which case the
> connection should fail instead of hanging).  What's your theory
> about why the 7.4 library is hanging on the connection attempt?
> 
> Alex, are you sure it's the connection call that's stuck?  Have you
> used a debugger or added print statements immediately before and
> after the connection attempt?  Have you done a process trace or
> network sniff to see what's happening when the program hangs?

The most I've checked is adding print statements before and after the
call. There's an extra possible error source: the call is
made from Ada through a binding to the C library. So it could be that
the Ada layer is misbehaving on an unexpected error from the C
layer. But then I would expect an exception, not a hang. I plan on
conducting some more detailed tests if upgrading libpq fails.

That said, the only thing that I've changed is the database, which is
now the one that ships with kubuntu 5.10 experimental, 8.0.3 according
to psql --version.

About the certificate thing, I'm not sure what you mean, but I've
configured the pg_hba.conf file using hostnossl. I see this line in the
server log:

LOG:  no se pudo aceptar una conexión SSL: se detectó EOF

which translates to: "Could not accept an SSL connection: EOF detected"

but this message only appears after I kill the client process, which
could mean that the connection is indeed alive but hung.

I don't understand where SSL comes into play. Is there any parameter
that I should be adding to the client connection string to disable it?
My connection string is like:

"host=x.homeip.net dbname=xxx user=x password=xx"
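
If the hang really is in SSL negotiation, one cheap experiment is to disable SSL from the client side; the sslmode parameter should be understood by a 7.4-era libpq, though that is worth verifying:

```
host=x.homeip.net dbname=xxx user=x password=xx sslmode=disable
```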

Thanks for your help,

Alex.




Re: [GENERAL] Win32 Backup and Restore of large databases.

2005-09-21 Thread Lincoln Yeoh

At 12:24 PM 9/21/2005 +0100, Howard Cole wrote:

On a Win32 machine, can I back up a database if the backup file exceeds
2 GB? On Linux, I can split the backup file into multiple files. Can this
be done on Win32?


Max file size depends on file system used.

http://www.microsoft.com/resources/documentation/Windows/XP/all/reskit/en-us/Default.asp?url=/resources/documentation/Windows/XP/all/reskit/en-us/prkc_fil_tdrn.asp

Careful if you are using a network file system (e.g. cifs/smbfs or nfs). 
The max would then depend on the protocol, the relevant file systems 
involved and other limitations.


Of course you can split the backup file into multiple files on win32.

In fact, some Win32 systems made it a bit too easy to do. Typically your 
backup file will be split into multiple files with names like FILEnnnn.CHK 
(nnnn being zero-padded numbers). You may find the original files split too.


;)

Link.
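
For completeness, the usual Unix recipe for splitting a dump (this mirrors the approach in the PostgreSQL FAQ; whether it works on Win32 depends on having split/cat available, e.g. via Cygwin or MSYS, which is an assumption):

```shell
# dump into 1 GB chunks ('mydb' is a placeholder name)
pg_dump mydb | split -b 1024m - mydb.dump.
# restore by concatenating the chunks
cat mydb.dump.* | psql mydb
```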




Re: [GENERAL] pgclient hostbased authentication

2005-09-21 Thread Bohdan Linda
> No. Why? Describe your problem.
> 

I got a response like the one below when connecting to a server in a
completely different network than 172.x.x.x:
--
org.postgresql.util.PSQLException: Connection rejected: FATAL: no
pg_hba.conf entry for host "172.x.x.x", user "XxXxXx", database
"yYyYyY", SSL off
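
Given that error, the fix would be a pg_hba.conf entry matching the post-NAT address the server actually sees (the CIDR range below is a placeholder private network; the database and user names mirror the error message):

```
# TYPE  DATABASE  USER    CIDR-ADDRESS    METHOD
host    yYyYyY    XxXxXx  172.16.0.0/12   md5
```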

Regards,
Bohdan 



[GENERAL] connecting windows xp to remote server

2005-09-21 Thread Jeanne Thibeault
Hi- I need to be able to access a database on a remote Unix server (PostgreSQL 
7.4) from my Windows computer running XP Home. I installed PostgreSQL 
8.0.3 and have the PostgreSQL driver available in my ODBC Administrator. 
Apparently I need to be able to send my password md5-encrypted, but I 
can't figure out how to make that work. Thanks for the help. jmt
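
The usual setup is to store the password encrypted on the server and use the md5 method in pg_hba.conf; a hedged sketch with placeholder names:

```sql
-- On the 7.4 server, as a superuser:
ALTER USER jeanne WITH ENCRYPTED PASSWORD 'secret';
-- pg_hba.conf then needs a matching line such as:
--   host  mydb  jeanne  192.168.0.0/24  md5
-- (reload the server after editing pg_hba.conf)
```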






[GENERAL] How to create case insensitive unique constraint

2005-09-21 Thread Andrus
I want to disallow duplicate customer names in a database regardless of 
case.

I tried

CREATE TABLE customer ( id SERIAL, name CHARACTER(70));

ALTER TABLE customer
   ADD constraint customer_name_unique UNIQUE (UPPER(name));

but this is not allowed in Postgres.

Any ideas?

Andrus.





Re: [GENERAL] How to create case insensitive unique constraint

2005-09-21 Thread Csaba Nagy
I guess you can create a unique functional index with upper as the
function. Try to look up the docs for CREATE INDEX.
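
Concretely, against the table from the question, that suggestion might be (a sketch):

```sql
CREATE UNIQUE INDEX customer_name_upper_idx ON customer (upper(name));
-- inserting 'John Smith' and later 'JOHN SMITH' now raises a
-- duplicate-key error on the second insert
```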

HTH,
Csaba.


On Wed, 2005-09-21 at 18:15, Andrus wrote:
> I want to disallow duplicate customer names in a database regardless of 
> case.
> 
> I tried
> 
> CREATE TABLE customer ( id SERIAL, name CHARACTER(70));
> 
> ALTER TABLE customer
>    ADD constraint customer_name_unique UNIQUE (UPPER(name));
> 
> but this is not allowed in Postgres.
> 
> Any ideas?
> 
> Andrus.
> 
> 
> 




[GENERAL] Postgres locks table schema?

2005-09-21 Thread Stas Oskin
Hi,

When running SQL scripts which change table schemas,
we sometimes see a lock on those tables' schemas. The lock occurs on the
first command of the script, on the first modified table. When this happens,
no further schema modification of that table is possible until the server is
restarted, which is very problematic for a production environment.

Is anyone familiar with this behavior, and with how it
can be resolved without restarting the server?

The server is PostgreSQL 8.0.3, and the OS is Fedora
Core 3.

Thanks.

Stas Oskin.
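
No reply appears in this digest, but rather than restarting, it is usually possible to identify the session holding the conflicting lock; a sketch against the 8.0 catalogs:

```sql
-- Lock requests that are waiting (not granted):
SELECT l.pid, l.relation::regclass AS rel, l.mode, l.granted
FROM pg_locks l
WHERE NOT l.granted;
-- A session idle in an open transaction that touched the table is the
-- usual culprit; ending that backend releases the DDL lock.
```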


Re: [GENERAL] How to create case insensitive unique constraint

2005-09-21 Thread John D. Burger

I want to disallow duplicate customer names in a database regardless of
case.

I tried

CREATE TABLE customer ( id SERIAL, name CHARACTER(70));

ALTER TABLE customer
   ADD constraint customer_name_unique UNIQUE (UPPER(name));

but this is not allowed in Postgres.


As Csaba suggested, a unique functional index does the trick - here's 
how I do it in something I'm working on right now:


CREATE UNIQUE INDEX gazPlaceNames_lower_PlaceName2_Index on 
gazPlaceNames (lower(placeName));


You could use upper() similarly - lower() is better for Unicode data, 
like mine.  Now, if I try to add an alternate casing for an existing
name, I get slapped:


> select * from gazPlaceNames where lower(placeName) like lower('New York');

 placenameid | placename | lang | script
-------------+-----------+------+--------
      291642 | New York  |      |
(1 row)

> insert into gazPlaceNames (placename) values ('NeW yOrK');
ERROR:  duplicate key violates unique constraint
"gazplacenames_lower_placename2_index"


As a bonus, Postgres will use the index for selects involving 
lower(placename), like the one above.


- John Burger
  MITRE






Re: [GENERAL] How to create case insensitive unique constraint

2005-09-21 Thread Andrus
> You could use upper() similarly - lower() is better for Unicode data, like 
> mine.

 John,

thank you. Excellent.
I have database encoding UNICODE.  Why is lower() better than upper()?

Andrus.





[GENERAL] Fetching column names for a table

2005-09-21 Thread Steve Manes
I need to extract a SETOF column names for a table in plpgsql.  How is 
this done?
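
One way is to read the system catalogs (a sketch; information_schema.columns would work too):

```sql
-- Column names for a given table, in column order; usable as the
-- query of a plpgsql FOR-IN-SELECT loop or a SETOF function.
SELECT a.attname
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
WHERE c.relname = 'customer'        -- placeholder table name
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```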


-=o&o>-
Steve Manes  http://www.magpie.com
Brooklyn, NY



Re: [GENERAL] Network Flow Schema + Bulk Import/Updates

2005-09-21 Thread Tony Wasson
On 9/20/05, Michael L. Artz <[EMAIL PROTECTED]> wrote:
> I'm fairly new at Postgres and had some basic design questions.  My
> problem is basically that I want to do large bulk imports (millions of
> rows) into a large DB (billions of rows) where there might already be
> data that needs to be updated instead of inserting a new row.  I read a
> similar post a few days ago, but I just had a couple more questions.

You can use the merge trigger below to do this. You'll need to add
some code to update the count. You may also benefit from using the new
constraint exclusion (table partitioning) in PostgreSQL 8.1. I am not
sure if CE works against the inet datatype -- if not, try converting
the IP to an integer.

This merge function is from an earlier version by Mike Rylander I got
from here:
http://archives.postgresql.org/pgsql-sql/2004-05/msg00135.php

His version worked fine, all mistakes are my own. I wanted to allow
multiple key columns.

--
-- Merge on INSERT functionallity for Postgres 8.0+
--
-- Original Author: miker ( at ) purplefrog ( dot ) com / 5-14-04
-- ajwasson (at) gmail (dot) cot -- Added support for multiple key
columns 8-20-05
--
-- CAVEAT EMPTOR: Uses table locks to avoid concurrency issues,
--so it WILL slow down heavily loaded tables.
--This effectively puts the table into
--TRANSACTION ISOLATION LEVEL SERIALIZABLE mode.
--
-- NOTE: You don't want to use key columns that are NULLable.

CREATE OR REPLACE FUNCTION add_merge_on_insert2(
  tablename TEXT,  -- table name
  keycols TEXT[],  -- key columns
  updatecols TEXT[] -- columns to update if key columns match
) RETURNS TEXT
AS $BODY$

DECLARE
  trig  TEXT;
  upclause  TEXT := '';
  keyclause TEXT := '';
  out_msg TEXT;
BEGIN
  -- setup the where clause with all key columns (probably primary keys)
  FOR a IN 1 .. array_upper(keycols,1) LOOP
keyclause := keyclause || quote_ident(keycols[a]) || ' = NEW.'
  || quote_ident(keycols[a]) || ' AND ';
  END LOOP;
  --trim the last AND
  keyclause := trim(trailing ' AND ' FROM keyclause);

  -- setup the columns to UPDATE
  FOR i IN 1 .. array_upper(updatecols,1) LOOP
upclause := upclause || quote_ident(updatecols[i])
  || ' = COALESCE(NEW.' || quote_ident(updatecols[i])
  || ', orig.' || quote_ident(updatecols[i]) || '), ';
  END LOOP;
  --trim the last comma and space
  upclause := trim(trailing ', ' FROM upclause);

  -- put together the function now
EXECUTE 'CREATE FUNCTION "' || tablename || '_merge_on_insert_f" ()
RETURNS TRIGGER AS $$
  DECLARE
orig ' || quote_ident(tablename) || '%ROWTYPE;
  BEGIN
-- NOTE: This function was dynamically built by add_merge_on_insert2
LOCK TABLE ' || quote_ident(tablename) || ' IN ROW EXCLUSIVE MODE;

SELECT INTO orig * FROM  ' || quote_ident(tablename) || ' WHERE '
|| keyclause || ';

IF NOT FOUND THEN
  RETURN NEW;
END IF;

UPDATE ' || quote_ident(tablename) || ' SET ' || upclause || '
WHERE ' || keyclause || ';

RETURN NULL;
  END;
  $$ LANGUAGE plpgsql
 '; -- end of execute

EXECUTE 'CREATE TRIGGER "' || tablename || '_merge_on_insert_t" BEFORE INSERT
ON ' || quote_ident(tablename) || ' FOR EACH ROW
EXECUTE PROCEDURE "' || tablename || '_merge_on_insert_f" ();
  '; -- end of execute

  out_msg := 'FUNCTION ' || tablename || '_merge_on_insert_f ();
TRIGGER ' || tablename || '_merge_on_insert_t';
RETURN out_msg;
END;
$BODY$ LANGUAGE 'plpgsql';
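The semantics the generated trigger implements — insert when no row matches the key columns, otherwise update with COALESCE(NEW.col, orig.col) so NULLs never clobber existing data — can be sketched outside the database. A Python illustration of the logic only (not the list's code; a dict stands in for the table):

```python
def merge_insert(table, keycols, new):
    """table: dict mapping key tuples to row dicts."""
    key = tuple(new[c] for c in keycols)
    orig = table.get(key)
    if orig is None:
        # no matching row: behave like a plain INSERT (RETURN NEW)
        table[key] = dict(new)
        return "inserted"
    # matching row: UPDATE, coalescing each new value with the original
    for col, val in new.items():
        orig[col] = val if val is not None else orig.get(col)
    return "merged"

hits = {}
merge_insert(hits, ["ip"], {"ip": "10.0.0.1", "count": 5})
# a NULL (None) count must not overwrite the stored 5
merge_insert(hits, ["ip"], {"ip": "10.0.0.1", "count": None})
print(hits[("10.0.0.1",)]["count"])   # -> 5
```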

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] returning the primary key value

2005-09-21 Thread Jason Tesser
I have a stored proc in which I want to return the primary key of an insert 
statement that the stored proc just ran.  How can I do that?  

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Slow search.. quite clueless

2005-09-21 Thread Yonatan Ben-Nes

Oleg Bartunov wrote:

On Tue, 20 Sep 2005, Philip Hallstrom wrote:

contrib/tsearch2 ( 
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ )
might works for you. It might because performance depends on 
cardinality of your keywords.



Seconded.  We use tsearch2 to search about 40,000 rows containing 
manufacturer, brand, and product name and it returns a result almost 
instantly.  Before when we did normal SQL "manufacture LIKE ..., etc." 
it would take 20-30 seconds.


One thing to check is the english.stop file which contains words to 
skip (i, a, the, etc.).  In our case we removed almost all of them 
since one of our products is "7 up" (the drink) and it would remove 
"up".  Made it really hard to pull up 7 up in the results :)



we have "rewriting query support ( thesauri search)" in our todo
(http://www.sai.msu.su/~megera/oddmuse/index.cgi/todo).




-philip



Oleg
On Tue, 20 Sep 2005, Yonatan Ben-Nes wrote:


Hi all,

I'm building a site where users can search for products with up 
to 4 different keywords, all of which MUST match each product 
returned by the search.


I got 2 tables (which are relevant to the issue :)), one is the 
product table (5 million rows) and the other is the keyword table 
which hold the keywords of each product (60 million rows).


The scheme of the tables is as follows:

          Table "public.product"
    Column    |     Type      | Modifiers
--------------+---------------+-----------
 product_id   | text          | not null
 product_name | text          | not null
 retail_price | numeric(10,2) | not null
 etc...
Indexes:
    "product_product_id_key" UNIQUE, btree (product_id)

         Table "public.keyword"
   Column   | Type | Modifiers
------------+------+-----------
 product_id | text | not null
 keyword    | text | not null
Indexes:
    "keyword_keyword" btree (keyword)

The best query which I have succeeded with till now is adding the keyword 
table for each keyword searched for example if someone search for 
"belt" & "black" & "pants" it will create the following query:


poweraise.com=# EXPLAIN ANALYZE SELECT 
product_id,product_name,product_image_url,short_product_description,long_product_description,discount,discount_type,sale_price,retail_price 
FROM product INNER JOIN keyword t1 USING(product_id) INNER JOIN 
keyword t2 USING(product_id) INNER JOIN keyword t3 USING(product_id) 
WHERE t1.keyword='belt' AND t2.keyword='black' AND 
t3.keyword='pants' LIMIT 13;


                                  QUERY PLAN
--------------------------------------------------------------------------------

Limit  (cost=37734.15..39957.20 rows=13 width=578) (actual 
time=969.798..1520.354 rows=6 loops=1)
  ->  Hash Join  (cost=37734.15..3754162.82 rows=21733 width=578) 
(actual time=969.794..1520.337 rows=6 loops=1)

Hash Cond: ("outer".product_id = "inner".product_id)
->  Nested Loop  (cost=18867.07..2858707.34 rows=55309 
width=612) (actual time=82.266..1474.018 rows=156 loops=1)
  ->  Hash Join  (cost=18867.07..2581181.09 rows=55309 
width=34) (actual time=82.170..1462.104 rows=156 loops=1)
Hash Cond: ("outer".product_id = 
"inner".product_id)
->  Index Scan using keyword_keyword on keyword 
t2 (cost=0.00..331244.43 rows=140771 width=17) (actual 
time=0.033..1307.167 rows=109007 loops=1)

  Index Cond: (keyword = 'black'::text)
->  Hash  (cost=18851.23..18851.23 rows=6337 
width=17) (actual time=16.145..16.145 rows=0 loops=1)
  ->  Index Scan using keyword_keyword on 
keyword t1 (cost=0.00..18851.23 rows=6337 width=17) (actual 
time=0.067..11.050 rows=3294 loops=1)

Index Cond: (keyword = 'belt'::text)
  ->  Index Scan using product_product_id_key on product 
(cost=0.00..5.01 rows=1 width=578) (actual time=0.058..0.060 rows=1 
loops=156)
Index Cond: (product.product_id = 
"outer".product_id)
->  Hash  (cost=18851.23..18851.23 rows=6337 width=17) 
(actual time=42.863..42.863 rows=0 loops=1)
  ->  Index Scan using keyword_keyword on keyword t3 
(cost=0.00..18851.23 rows=6337 width=17) (actual time=0.073..36.120 
rows=3932 loops=1)

Index Cond: (keyword = 'pants'::text)
Total runtime: 1521.441 ms
(17 rows)

Sometimes the query works fast even for 3 keywords, but that doesn't 
help me if at other times it takes ages


Now, finding a result for 1 keyword is really fast, so I also tried 
making 3 queries and doing an INTERSECT between them, but that turned 
out to be extremely slow...


What makes this query slow, as far as I understand, is all the merging 
between the results of each table... I tried to divide the keyword 
table into lots of keyword

Re: [GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread Tom Lane
"Alex R. Mosteo" <[EMAIL PROTECTED]> writes:
> About the certificate thing, I'm not sure what you mean, but I've
> configured the pg_hba.conf file useing hostnossl. I see this line in the
> server log:
> LOG:  no se pudo aceptar una conexión SSL: se detectó EOF
> Which translates to: "Couldn't accept a SSL connection: EOF detected"
> but this message only appears after I kill the client process, which
> could mean that indeed the connection is alive and hung.

Hm.  Does it work if you add "sslmode = disable" to your connection
string?

What's the platform exactly, and exactly which PG 8.0.* release are you
using?  For that matter, are you sure you are linking your program to
the 8.0 libpq.so, and not still the 7.4 one?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Fetching column names for a table

2005-09-21 Thread Philip Hallstrom
I need to extract a SETOF column names for a table in plpgsql.  How is this 
done?


Start up psql with the -E option.  Then type "\d tablename".  This will 
print out the SQL that psql runs to give you the column names.  Maybe that 
will do what you want?


-philip

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Postgres locks table schema?

2005-09-21 Thread Tom Lane
"Stas Oskin" <[EMAIL PROTECTED]> writes:
> When running SQL scripts, which change the tables' schema, we sometimes
> experience a lock of these tables' schema. The lock occurs on the first
> command of the script, on the first modified table. When this occurs, no
> further schema modification for this table is possible, until the server is
> restarted, which is very problematic for a production environment.

Try committing the transaction that performed the schema modification
;-)

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Fetching column names for a table

2005-09-21 Thread Tony Wasson
On 9/21/05, Steve Manes <[EMAIL PROTECTED]> wrote:
> I need to extract a SETOF column names for a table in plpgsql.  How is
> this done?

I got the queries for this by running psql with -E and then using \d
on a table. Use this function like so: SELECT * FROM
column_names('your_table');

CREATE OR REPLACE FUNCTION column_names(in_tablename TEXT) RETURNS
SETOF TEXT AS $BODY$
DECLARE
  rec        RECORD;
  table_oid  INTEGER;
  i INTEGER := 0;
BEGIN
  FOR rec IN SELECT attname
FROM pg_catalog.pg_attribute
WHERE attnum > 0 AND NOT attisdropped
  AND attrelid = (
SELECT c.oid
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE pg_catalog.pg_table_is_visible(c.oid)
  AND c.relname = in_tablename
  )
ORDER BY attname ASC
  LOOP
RETURN NEXT rec.attname;
i := i+1;
  END LOOP;

  IF i < 1 THEN
    RAISE NOTICE 'no table called % found. Verify table exists and try
prepending the schema.', in_tablename;
  END IF;

 RETURN;
END;
$BODY$ LANGUAGE 'plpgsql';

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Fetching column names for a table

2005-09-21 Thread Joshua D. Drake

Steve Manes wrote:

I need to extract a SETOF column names for a table in plpgsql.  How is 
this done?


A query such as this:

select * from information_schema.columns where table_name = 'table_name';

Will give you a bunch of information. For SETOF functions in general 
take a look at:


http://techdocs.postgresql.org/guides/SetReturningFunctions

You can also use something like this:

CREATE TYPE column_type_set AS (column_name text, column_type text);
CREATE OR REPLACE FUNCTION describe_table (text, text) RETURNS SETOF 
column_type_set AS '

 SELECT attname::text, typname::text
   FROM pg_namespace, pg_attribute, pg_type, pg_class
   WHERE pg_type.oid = atttypid
 AND pg_class.oid = attrelid
 AND relname = $2 AND attnum >= 1
 AND relnamespace = pg_namespace.oid
 AND pg_namespace.nspname = $1;
' LANGUAGE 'SQL';



Sincerely,

Joshua D. Drake




-=o&o>-
Steve Manes  http://www.magpie.com
Brooklyn, NY

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly




--
Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240
PostgreSQL Replication, Consulting, Custom Programming, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgres locks table schema?

2005-09-21 Thread Scott Marlowe
On Wed, 2005-09-21 at 12:51, Stas Oskin wrote:
> Hi.
> 
>  
> 
> When running SQL scripts, which change the tables' schema, we
> sometimes experience a lock of these tables' schema. The lock occurs
> on the first command of the script, on the first modified table. When
> this occurs, no further schema modification for this table is
> possible, until the server is restarted, which is very problematic for
> a production environment.
> 
>  
> 
> Perhaps someone is familiar with this behavior, and how it can be
> resolved, without the restart of the server?
> 
>  
> 
> The server used is PostgreSQL 8.0.3, and the OS is Fedora Core 3.


Have you got a test case that reproduces this in some way?  I'm guessing
Tom's got it right on committing the transaction, but without knowing
EXACTLY what you're doing, we're just guessing here.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Fetching column names for a table

2005-09-21 Thread David Fetter
On Wed, Sep 21, 2005 at 02:31:23PM -0400, Steve Manes wrote:
> I need to extract a SETOF column names for a table in plpgsql.  How
> is this done?

You can do it in SQL.

CREATE OR REPLACE FUNCTION get_columns_for (
in_schema TEXT,
in_table TEXT
) RETURNS SETOF TEXT
LANGUAGE SQL
STRICT
AS $$
SELECT c.column_name
FROM information_schema.columns c
WHERE c.table_schema = $1
AND c.table_name = $2
ORDER BY ordinal_position;
$$;

CREATE OR REPLACE FUNCTION get_columns_for (
in_table TEXT
) RETURNS SETOF TEXT
LANGUAGE SQL
STRICT
AS $$
SELECT * FROM get_columns_for('public', $1);
$$;

HTH :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] COPY and Unicode...

2005-09-21 Thread Cristian Prieto
Hello, I need to import some amount of data using the COPY command, the main
trouble I found is that the Database is in UNICODE format and the data in
ASCII Latin-1 codepage, when I try to import it, COPY respond with:

ERROR:  invalid byte sequence for encoding "UNICODE": 0xe9
CONTEXT:  COPY geoip_location, line 307, column city: "Yaound"

I've been looking through the documentation and could not find any reference
to COPYing from a codepage different from the one used by the database; any help?

Thanks a lot!


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] index row size exceeds btree maximum

2005-09-21 Thread CSN

This appears related to my previous post:
http://archives.postgresql.org/pgsql-general/2005-09/msg00809.php

I setup a unique index using the title, yield, and
directions fields. Some inserts are causing this
error:

DBD::Pg::st execute failed: ERROR:  index row size
2832 exceeds btree maximum, 2713
CONTEXT:  SQL statement "insert into stuff (title,
yield, directions) values ( $1 ,  $2 ,  $3 )"

What do I do?

Thanks,
CSN



__ 
Yahoo! Mail - PC Magazine Editors' Choice 2005 
http://mail.yahoo.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] YAPC::Israel looking for Perl/Database speakers

2005-09-21 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
NotDashEscaped: You need GnuPG to verify this message


The call for papers for YAPC::Israel is out. Note that they specifically
mention databases. It would be nice if someone could present something
on Pl/perl there. I'll even volunteer to help write it. :)

--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200509211603
https://www.biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8


YAPC::Israel::2006 - Call for Papers and Participation


After three Perl conferences in Israel ( http://www.perl.org.il/YAPC/ )
it seems it is time to enlarge the conference and include other, related
Open Source technologies as well.

In February 2006, we are going to have a 3 day long conference in 3 parallel
tracks covering various Open Source fields:
- Programming languages: Perl, Python, PHP, Tcl, Ruby, Haskell, ...
- Databases: MySQL, PostgreSQL, SQLite, ...
- Version Control: Subversion, SVK, Arch, Darcs, ...

Other suggestions are welcome!


Keynote Speaker
-
Larry Wall, inventor of Perl



Special Guest
--
Autrijus Tang, Chief Pugs Hacker



Date and Location
---
The conference will be held in February 2006 in Israel.
Exact dates and location will be announced soon.


Announcements
-
The best way to be up to date on the conference is to join our low volume
news mailing list at   http://www.perl.org.il/mailman/listinfo/news


Participation fee
--
  Company  Individual
Full fee  200 USD+VAT  450 NIS
Early bird discounted fee:100 USD+VAT  290 NIS

- VAT is only relevant to Israeli companies
- Early bird dead-line will be announced along with the dates
  of the conference and will be approximately 30 days before
  the conference.
- Speakers of talks 20 minutes or longer are invited to help cover
  the conference expenses by paying the participation fee but are
  not required to do so.


Tracks or Workshops
---
This year we are planning to have several well defined tracks or workshops.
Each track will contain tutorial-like presentations on both introductory and
advanced level. In addition we are planning to have "use cases", that is
people showing us case studies on how they use the given technology in
that field.

The major tracks we are interested are the following:

- Using P for web application development
- GUI development with P and Tk (or other Toolkit)
- Using P for testing
- How to write and maintain large applications in P?
- Embedding P (Featuring sample application that uses P in itself)
- Windows System Administration using P
- mod_perl
- P and databases
- OOP

- In addition we would like to see talks about new and interesting
  technologies such as Ruby on Rails or PyPy and fun with P

Replace P in the above sentences with your favorite programming language.

Other suggestions for tracks are welcome!

Beginners track
---
Last year the beginners track was a surprisingly successful one.
This year we would like to repeat it. We would welcome the same
speakers with the same or improved version of their talks. In
addition we would like to see new faces and new talks in the
beginners track.



Length
---
20, 40, 60 min presentations
90, 180 min tutorials
5 min lightning talks

Language
--
Hebrew or English



Hackaton
-
As Autrijus is going to stay in Israel a few weeks we are planning to
provide him with a platform to hack on. For this after the conference
we are planning to go out to the wilderness for a few days with
notebooks and an uplink to the world and do some Pugs hacking.

That is, the more adventurous types can join this additional event.
We are going to rent rooms in a "Zimmer" (B&B for the British) and move
there for a long weekend.

This is a very good opportunity to get started with Perl6 or Haskell
or both.

Exact details will be announced later.
Expected cost for the hackathon is ~ 40 USD/night/person + food.


So what TODO now?
--
Start thinking now about your presentation, ask us questions
if something is not clear so we can have a full size conference
by February.



Contact

You can reach the organizers at: [EMAIL PROTECTED]

Gabor Szabo
Perl Monger

-BEGIN PGP SIGNATURE-

iEYEARECAAYFAkMxvRAACgkQvJuQZxSWSshVtACeJ7vWjhy0Y+sDVH4uxnbmKPu/
jQEAn2zBuC8NbYm07CN1zUJ1eYetR6wU
=fone
-END PGP SIGNATURE-



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] index row size exceeds btree maximum

2005-09-21 Thread Scott Marlowe
On Wed, 2005-09-21 at 15:02, CSN wrote:
> This appears related to my previous post:
> http://archives.postgresql.org/pgsql-general/2005-09/msg00809.php
> 
> I setup a unique index using the title, yield, and
> directions fields. Some inserts are causing this
> error:
> 
> DBD::Pg::st execute failed: ERROR:  index row size
> 2832 exceeds btree maximum, 2713
> CONTEXT:  SQL statement "insert into stuff (title,
> yield, directions) values ( $1 ,  $2 ,  $3 )"
> 
> What do I do?

Don't insert such big values?  :)

Actually, the standard solution is to use an md5 of the three fields:

create unique index threefieldindex on table1
(md5(field1||field2||field3));
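The digest is fixed-size, which is what sidesteps the btree limit. One hedged caveat about the expression above: bare `field1||field2||field3` concatenation lets distinct rows collide ('ab','c' vs 'a','bc'), so inserting a separator is safer. A Python sketch of the idea (illustration only, not the list's code):

```python
import hashlib

def row_digest(*fields):
    """md5 of the fields, joined with a separator so that
    ('ab', 'c') and ('a', 'bc') hash differently."""
    joined = "\x00".join(fields)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

# Always 32 hex chars, no matter how long the directions field is --
# comfortably below the ~2713-byte btree index-row limit.
digest = row_digest("title", "yield", "directions " * 300)
print(len(digest))   # -> 32
```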

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] index row size exceeds btree maximum

2005-09-21 Thread CSN


--- Scott Marlowe <[EMAIL PROTECTED]> wrote:

> On Wed, 2005-09-21 at 15:02, CSN wrote:
> > This appears related to my previous post:
> > http://archives.postgresql.org/pgsql-general/2005-09/msg00809.php
> > 
> > I setup a unique index using the title, yield, and
> > directions fields. Some inserts are causing this
> > error:
> > 
> > DBD::Pg::st execute failed: ERROR:  index row size
> > 2832 exceeds btree maximum, 2713
> > CONTEXT:  SQL statement "insert into stuff (title,
> > yield, directions) values ( $1 ,  $2 ,  $3 )"
> > 
> > What do I do?
> 
> Don't insert such big values?  :)
> 
> Actually, the standard solution is to use an md5 of
> the three fields:
> 
> create unique index threefieldindex on table1
> (md5(field1||field2||field3));
> 

Ah, cool! Looks like using tsearch2 would be another
option, but I don't plan on searching through the
yield or directions fields (except at insert time).

http://joseph.randomnetworks.com/archives/2005/08/05/postgresql-index-limitation-index-row-size-x-exceeds-btree-maximum-2713/

CSN


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] COPY and Unicode...

2005-09-21 Thread Douglas McNaught
"Cristian Prieto" <[EMAIL PROTECTED]> writes:

> Hello, I need to import some amount of data using the COPY command, the main
> trouble I found is that the Database is in UNICODE format and the data in
> ASCII Latin-1 codepage, when I try to import it, COPY respond with:
>
> ERROR:  invalid byte sequence for encoding "UNICODE": 0xe9
> CONTEXT:  COPY geoip_location, line 307, column city: "Yaound"
>
> I've been looking through the documentation and could not find any reference
> to COPYing from a codepage different from the one used by the database; any help?

Are you setting the client_encoding variable before doing the COPY?

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] COPY and Unicode...

2005-09-21 Thread Tom Lane
"Cristian Prieto" <[EMAIL PROTECTED]> writes:
> Hello, I need to import some amount of data using the COPY command, the main
> trouble I found is that the Database is in UNICODE format and the data in
> ASCII Latin-1 codepage, when I try to import it, COPY respond with:

> ERROR:  invalid byte sequence for encoding "UNICODE": 0xe9
> CONTEXT:  COPY geoip_location, line 307, column city: "Yaound"

> I've been looking through the documentation and could not find any reference
> to COPYing from a codepage different from the one used by the database; any help?

Try doing "SET client_encoding = latin1" before doing the COPY.

regards, tom lane
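Why the error occurs, and why setting client_encoding fixes it, can be seen outside the database: in Latin-1 the byte 0xe9 is 'é', while in UTF-8 the same lone byte is an invalid start of a multibyte sequence — exactly the condition the UNICODE database rejects until the server is told to convert from the client's encoding. A Python illustration:

```python
# "Yaoundé" as the Latin-1 bytes the COPY input file contains
raw = b"Yaound\xe9"

# Decoded as Latin-1, 0xe9 is a perfectly good 'é'...
assert raw.decode("latin-1") == "Yaound\u00e9"

# ...but as UTF-8 the lone 0xe9 is an invalid byte sequence,
# matching the ERROR the server reported for line 307.
try:
    raw.decode("utf-8")
    raise AssertionError("should not decode")
except UnicodeDecodeError:
    print("invalid byte sequence for encoding UTF-8: 0xe9")
```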

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [pgsql-advocacy] YAPC::Israel looking for Perl/Database speakers

2005-09-21 Thread David Fetter
On Wed, Sep 21, 2005 at 08:07:48PM -, Greg Sabino Mullane wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> NotDashEscaped: You need GnuPG to verify this message
> 
> 
> The call for papers for YAPC::Israel is out. Note that they
> specifically mention databases. It would be nice if someone could
> present something on Pl/perl there. I'll even volunteer to help
> write it. :)

I'd be delighted to give a talk on DBI-Link, of which PL/PerlU is the
essential component :)

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] COPY and Unicode...

2005-09-21 Thread Cristian Prieto
Thanks a lot! Your help was very handy!

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Douglas McNaught
Sent: Wednesday, September 21, 2005, 02:39 p.m.
To: Cristian Prieto
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] COPY and Unicode...

"Cristian Prieto" <[EMAIL PROTECTED]> writes:

> Hello, I need to import some amount of data using the COPY command, the main
> trouble I found is that the Database is in UNICODE format and the data in
> ASCII Latin-1 codepage, when I try to import it, COPY respond with:
>
> ERROR:  invalid byte sequence for encoding "UNICODE": 0xe9
> CONTEXT:  COPY geoip_location, line 307, column city: "Yaound"
>
> I've been looking through the documentation and could not find any reference
> to COPYing from a codepage different from the one used by the database; any help?

Are you setting the client_encoding variable before doing the COPY?

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Slow search.. quite clueless

2005-09-21 Thread Yonatan Ben-Nes

Olly Betts wrote:

Yonatan Ben-Nes wrote:


Actually I even started to look on other solutions and maybe you can say
something about them also.. maybe they can help me:
1. Omega (From the Xapian project) - http://www.xapian.org/



You could certainly do this with Xapian and Omega.  With only 5
million records it should be very quick.

The easiest approach would be to periodically dump the SQL tables
and build a new Xapian index which reflects the SQL database - you'd
probably want to customise the "dbi2omega" script in the Omega
distribution.  This approach works particularly well if the tables
are updated in a batch fashion (one big weekly update, say).

Alternatively you could hook into whatever updates the SQL database
and get it to make corresponding updates to the Xapian index.  That
has the advantage that they'll always be in step, but is probably
more work to set up.

The main drawback compared to doing everything in SQL is that you'd
have two systems to deal with rather than just one...

Cheers,
Olly


OK, then I'll try this option if tsearch2 won't work fast enough for me 
(hopefully it will :)).


Thanks a lot,
Ben-Nes Yonatan

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Slow search.. quite clueless

2005-09-21 Thread Dawid Kuroczko
On 9/20/05, Yonatan Ben-Nes <[EMAIL PROTECTED]> wrote:
> [original message and EXPLAIN ANALYZE output snipped]

Hmm, a JOIN on a huge table with LIMIT.  You may be suffering from
the same problem I had:

http://archives.postgresql.org/pgsql-performance/2005-07/msg00345.php

Tom came up with a patch which worked marvellous in my case:

http://archives.postgresql.org/pgsql-performance/2005-07/msg00352.php

Try applying this patch, it may solve your problem!

   Regards,
    Dawid
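For what it's worth, the multi-keyword AND search under discussion is, logically, a set intersection: each keyword selects a set of product_ids, and the result is the products present in every set. A Python sketch of those semantics (an illustration of the logic only, not a performance fix; the data is made up):

```python
# hypothetical keyword -> product_id index
keyword_index = {
    "belt":  {"p1", "p2", "p7"},
    "black": {"p2", "p7", "p9"},
    "pants": {"p2", "p5", "p7"},
}

def search(*keywords):
    """Products matching ALL keywords: intersect the per-keyword sets,
    which is what the chain of INNER JOINs on product_id computes."""
    sets = [keyword_index.get(k, set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

print(sorted(search("belt", "black", "pants")))   # -> ['p2', 'p7']
```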



Re: [GENERAL] Problem with libpq3 & postgresql8

2005-09-21 Thread vishal saberwal
Hi,
Tom and many others helped me with something similar to this.

Check for the following:
(a) In /usr/lib check which libpq you have installed. It should be
libpq 3.2 or higher. If it is not, you can copy it from your compiled
source and then create the symbolic links there.
(b) Try using Ethereal to see what is actually going over the wire. It really helps.
(c) You need keys on each side (client and server). What certificates
are you using (self-signed or commercial)? I did mine with self-signed
and it worked. My certificates go in /var/lib/pgsql.
(d) Have your pg_hba.conf set up for OpenSSL.
(e) Have the connection string specify sslmode=allow.
(f) When you use psql, use /usr/local/pgsql/bin/psql instead of just
psql (that searches /usr/bin on my machine and isn't the right
version) ...
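For points (d) and (e), a minimal sketch of what that setup might look like -- the database name, user, and network below are placeholders, not anything from the original post:

```
# pg_hba.conf -- accept only SSL connections from this subnet (example values)
# TYPE    DATABASE   USER     CIDR-ADDRESS      METHOD
hostssl   mydb       myuser   192.168.0.0/24    md5
```

```
# libpq connection string; sslmode is recognized by the 8.0 libpq
host=dbserver dbname=mydb user=myuser sslmode=allow
```

Remember that pg_hba.conf rules are matched top to bottom, so the hostssl line must come before any broader host line that would match the same clients.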

[EMAIL PROTECTED] DBApi]# ls -l /usr/lib/libpq*

-rw-r--r--  1 postgres root 1480452 Mar 10  2004 /usr/lib/libpq.a

lrwxrwxrwx  1 postgres root  21 Aug 29 15:00 /usr/lib/libpq.so -> /usr/lib/libpq.so.3.2

lrwxrwxrwx  1 postgres root  21 Aug 29 14:59 /usr/lib/libpq.so.3 -> /usr/lib/libpq.so.3.2

-rwxr-xr-x  1 postgres root  113988 Mar 10  2004 /usr/lib/libpq.so.3.1

-rwxr-xr-x  1 postgres root  122177 Aug 26 12:55 /usr/lib/libpq.so.3.2

 "allow" attempts a non-SSL connection first, whereas
"prefer" attempts an SSL connection first.  If the server permits
non-SSL connections then "allow" will get you connected as non-SSL
without ever attempting SSL.

thanks,
vish

On 9/21/05, Tom Lane <[EMAIL PROTECTED]> wrote:
"Alex R. Mosteo" <[EMAIL PROTECTED]> writes:
> About the certificate thing, I'm not sure what you mean, but I've
> configured the pg_hba.conf file using hostnossl. I see this line in the
> server log:
> LOG:  no se pudo aceptar una conexión SSL: se detectó EOF
> Which translates to: "Couldn't accept an SSL connection: EOF detected",
> but this message only appears after I kill the client process, which
> could mean that indeed the connection is alive and hung.

Hm.  Does it work if you add "sslmode = disable" to your connection
string?  What's the platform exactly, and exactly which PG 8.0.*
release are you using?  For that matter, are you sure you are linking
your program to the 8.0 libpq.so, and not still the 7.4 one?

regards, tom lane


Re: [GENERAL] Network Flow Schema + Bulk Import/Updates

2005-09-21 Thread Michael L. Artz
Tony Wasson wrote:

>You can use the merge trigger below to do this. You'll need to add
>some code to update the count. You may also benefit from using the new
>constraint exclusion (table partitioning) in PostgreSQL 8.1. I am not
>sure if CE works against the inet datatype -- if not, try converting
>the IP to an integer.
>  
>

CE looked like it was just for parent/child relationships ... did I read
that right?  I'm not sure how it applies.  And the table partitioning
looks like it's still on the todo list ... is that really the case?

And as for getting data into the DB ... from earlier posts it sounded
like standard practice was to bulk load the new data into a temporary
table and then do an INSERT ... SELECT to load the data into the new
table.  Is this still the case with the trigger, or can/should I just
COPY the data straight into the final database?  And I assume that I
should *not* delete my indexes while I'm loading the table, since the
queries in the trigger can take advantage of them ... right?
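The staging-table pattern described above might look something like this -- a sketch only, with table and column names invented for illustration (the original post does not give a schema):

```sql
-- Stage the raw flow records; COPY is the fastest way to get them in,
-- and a TEMP table avoids WAL and vacuum overhead for the scratch data.
CREATE TEMP TABLE flows_stage (ip inet, bytes bigint);
COPY flows_stage FROM '/tmp/flows.dat';

-- Merge into the real table: bump counters for IPs we already track,
-- then insert the IPs we have never seen.  A per-row trigger on the
-- target table is the alternative to doing this pair by hand.
UPDATE flows
   SET bytes = flows.bytes + s.bytes
  FROM flows_stage s
 WHERE flows.ip = s.ip;

INSERT INTO flows (ip, bytes)
SELECT s.ip, s.bytes
  FROM flows_stage s
 WHERE NOT EXISTS (SELECT 1 FROM flows WHERE flows.ip = s.ip);
```

Note that without locking, two sessions running this merge concurrently could still race on the INSERT step.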

Also, as a slight aside, has anyone created a data type for single IPs
that is essentially an integer (i.e. 4 bytes) that works with the
standard functions for INET?

-Mike



Re: [GENERAL] JDBC error w/ WebObjects

2005-09-21 Thread Ismael Perdido
Hi there,

I am getting an error when I try to update or delete
rows in my PostgreSQL table.  I am using:

Eclipse
WO5.2
Postgres8.0 JDBC3
Hexdreams 1.2 plugin

The error message is horribly vague:

com.webobjects.eoaccess.EOGeneralAdaptorException:
updateValuesInRowDescribedByQualifier --
com.webobjects.jdbcadaptor.JDBCChannel method failed
to update row in database

And I am able to both select and add rows without
difficulty.  Any direction here would be greatly
appreciated.

Thanks and regards,

Ismael Jones
[EMAIL PROTECTED]





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] returning the primary key value

2005-09-21 Thread Michael Fuhr
On Wed, Sep 21, 2005 at 02:22:22PM -0500, Jason Tesser wrote:
> I have a stored proc in which I want to return the primary key of
> an insert statement that the stored proc just ran.  How can I do
> that?

If the primary key is a SERIAL column then see "How do I get the
value of a SERIAL insert?" in the FAQ:

http://www.postgresql.org/docs/faqs.FAQ.html#4.11.2
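Inside a stored procedure, that FAQ answer boils down to calling currval() on the column's sequence right after the INSERT. A sketch, with table, column, and sequence names invented for illustration:

```sql
CREATE FUNCTION add_person(p_name text) RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    INSERT INTO person (name) VALUES (p_name);
    -- currval() reports the value most recently generated by nextval()
    -- in *this* session, so it is safe under concurrent inserts.
    new_id := currval('person_id_seq');
    RETURN new_id;
END;
$$ LANGUAGE plpgsql;
```

Here person_id_seq is the sequence behind the SERIAL primary key (the default name for a column "id" on table "person").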

-- 
Michael Fuhr
