Re: [GENERAL] Decimal vs. Bigint memory usage

2011-09-30 Thread Craig Ringer

On 29/09/2011 9:13 PM, Marti Raudsepp wrote:



How does decimal store the number internally,


Numeric is stored as binary-coded decimal. This isn't fast.

I'd say using a bigint is a no-brainer unless you think there's a chance 
you might exceed the maximum size for a 64-bit integer.
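
A quick illustration of the size difference (a sketch using pg_column_size();
the numeric size grows with the number of digits stored, while bigint is
always 8 bytes):

select pg_column_size(123456789012345678::bigint)  as bigint_bytes,
       pg_column_size(123456789012345678::numeric) as numeric_bytes;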


--
Craig Ringer



Re: [GENERAL] could not access file $libdir/pg_buffercache: No such file or directory

2011-09-30 Thread Venkat Balaji
One way could be:

If the dump you are restoring is not very big, pg_dump can be taken
with --inserts, and the CREATE FUNCTION lines for pg_buffercache can be
removed from the output file before restoring.

Thanks
VB

On Fri, Sep 30, 2011 at 10:59 AM, Venkat Balaji venkat.bal...@verse.inwrote:

 I faced the same problem 2 days ago, in that case with the
 pg_freespacemap contrib module.

 I did not know how to ignore these functions, so I installed the contrib
 modules and restored. It worked!

 I am also looking for a way to ignore these functions.

 Thanks
 VB


 On Fri, Sep 30, 2011 at 8:15 AM, Royce Ausburn royce...@inomial.comwrote:

 I'm in the process of testing out Postgres 9.0 for production use.  I've
 been using it for development on my mac, a build from EnterpriseDB.  We've
 just installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a
 backport from lucid.  There's an existing 8.4.8 postgres install also on
 that machine, but as far as I'm aware the debs are set up so you can happily
 have two installs side by side.

 I've dumped a test DB from my laptop and attempted to restore it on to the
 ubuntu machine, but I see errors:

 royce@fishy:~$ createdb test
 royce@fishy:~$ pg_restore -d test --no-owner test_RTR.pgc
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 65; 1255 2194102 FUNCTION
 pg_buffercache_pages() persona
 pg_restore: [archiver (db)] could not execute query: ERROR:  could not
 access file $libdir/pg_buffercache: No such file or directory
Command was: CREATE FUNCTION pg_buffercache_pages() RETURNS SETOF
 record
LANGUAGE c
AS '$libdir/pg_buffercache', 'pg_buffercache_...


 After looking around a bit, my theory on these errors is that my mac has
 some contrib module installed that adds views to my test database, and those
 views have been included in the dump.  The contrib module is apparently not
 installed on the ubuntu machine and hence the restore can't create the
 views.  Is this theory correct?  If so:

 - Can these errors be safely ignored? (not ideal, because I'll miss other,
 real errors)
 - Is there some simple way I can prevent these views from being emitted as
 part of the dump?
 - Is there some simple way I can prevent these views from being restored
 from a dump that contains them?
 - Otherwise, how can I purge these contrib modules from my laptop?

 Otherwise, if my theory is incorrect, any hints as to what it might be?

 Cheers!

 --Royce

 Mac:

                                    version
  ---------------------------------------------------------------------------
   PostgreSQL 9.0.4 on x86_64-apple-darwin, compiled by GCC
   i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit
  (1 row)


 Ubuntu box:

                                    version
  ---------------------------------------------------------------------------
   PostgreSQL 9.0.5 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
   (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit
  (1 row)







Re: [GENERAL] could not access file $libdir/pg_buffercache: No such file or directory

2011-09-30 Thread Magnus Hagander
On Fri, Sep 30, 2011 at 04:45, Royce Ausburn royce...@inomial.com wrote:
 I'm in the process of testing out Postgres 9.0 for production use.  I've been 
 using it for development on my mac, a build from EnterpriseDB.  We've just 
 installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a backport 
 from lucid.  There's an existing 8.4.8 postgres install also on that machine, 
 but as far as I'm aware the debs are set up so you can happily have two 
 installs side by side.

 I've dumped a test DB from my laptop and attempted to restore it on to the 
 ubuntu machine, but I see errors:

 royce@fishy:~$ createdb test
 royce@fishy:~$ pg_restore -d test --no-owner test_RTR.pgc
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 65; 1255 2194102 FUNCTION 
 pg_buffercache_pages() persona
 pg_restore: [archiver (db)] could not execute query: ERROR:  could not access 
 file $libdir/pg_buffercache: No such file or directory
    Command was: CREATE FUNCTION pg_buffercache_pages() RETURNS SETOF record
    LANGUAGE c
    AS '$libdir/pg_buffercache', 'pg_buffercache_...


 After looking around a bit, my theory on these errors is that my mac has some 
 contrib module installed that adds views to my test database, and those views 
 have been included in the dump.  The contrib module is apparently not 
 installed on the ubuntu machine and hence the restore can't create the views. 
  Is this theory correct?  If so:

 - Can these errors be safely ignored? (not ideal, because I'll miss other, 
 real errors)

Yes.

 - Is there some simple way I can prevent these views from being emitted as 
 part of the dump?

Not easily, not in 9.0. Once you upgrade to 9.1 and upgrade them to
extensions, the problem should be much easier.

 - Is there some simple way I can prevent these views from being restored from 
 a dump that contains them?

Not really.

 - Otherwise, how can I purge these contrib modules from my laptop?

There is an uninstall script for them. It should be next to where your
install script is (usually in share/contrib, but I'm not sure where
the edb installer puts it on the mac). It's called
uninstall_pg_buffercache.sql. Run that in the old database and it
should drop all the relevant objects.

 Otherwise, if my theory is incorrect, any hints as to what it might be?

You can also install the -contrib DEB package on your Ubuntu box, and
then you will have the required libraries, and it will recreate the
functions and views correctly.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] stored procs

2011-09-30 Thread J.V.

Yes, I know that one.

What I want to do however is within a stored procedure make a call and 
store all tables in a given schema in a list.


Is this something you can answer?

thanks

J.V.

On 9/29/2011 3:25 AM, Richard Huxton wrote:

On 29/09/11 02:33, J.V. wrote:

Is it possible within a stored procedure to read all the tables in a
schema into a list?

[snip]

I need to extract this meta-data for a project.


Apart from information_schema mentioned elsewhere, start psql with -E 
and then try \dt and similar - it will show you the queries it uses.






Re: [GENERAL] could not access file $libdir/pg_buffercache: No such file or directory

2011-09-30 Thread Albe Laurenz
Royce Ausburn wrote:
 I'm in the process of testing out Postgres 9.0 for production use.
I've been using it for development
 on my mac, a build from EnterpriseDB.  We've just installed a 9.0.5 on
an Ubuntu (Ubuntu 10.04.3 LTS)
 machine from a backport from lucid.  There's an existing 8.4.8
postgres install also on that machine,
 but as far as I'm aware the debs are set up so you can happily have
two installs side by side.
 
 I've dumped a test DB from my laptop and attempted to restore it on to
the ubuntu machine, but I see
 errors:
 
 royce@fishy:~$ createdb test
 royce@fishy:~$ pg_restore -d test --no-owner test_RTR.pgc
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 65; 1255 2194102
FUNCTION pg_buffercache_pages()
 persona
 pg_restore: [archiver (db)] could not execute query: ERROR:  could not
access file
 $libdir/pg_buffercache: No such file or directory
 Command was: CREATE FUNCTION pg_buffercache_pages() RETURNS SETOF
record
 LANGUAGE c
 AS '$libdir/pg_buffercache', 'pg_buffercache_...
 
 
 After looking around a bit, my theory on these errors is that my mac
has some contrib module installed
 that adds views to my test database, and those views have been
included in the dump.  The contrib
 module is apparently not installed on the ubuntu machine and hence the
restore can't create the views.
 Is this theory correct?  If so:
 
 - Can these errors be safely ignored? (not ideal, because I'll miss
other, real errors)
 - Is there some simple way I can prevent these views from being
emitted as part of the dump?
 - Is there some simple way I can prevent these views from being
restored from a dump that contains
 them?
 - Otherwise, how can I purge these contrib modules from my laptop?
 
 Otherwise, if my theory is incorrect, any hints as to what it might
be?

Your theory sounds correct, since there is a contrib pg_buffercache and the
error message indicates that the shared object cannot be found.

The best solution would be to drop
  FUNCTION pg_buffercache_pages()
and
  VIEW pg_buffercache
in the original database before dumping.

That way you can be sure that nothing else in the database depends on
these objects.

You can always reinstall the contrib if you need it.
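
For example, a minimal sketch of that cleanup, using the object names from the
error message (run it in the source database before taking the dump):

-- drop the contrib view first, then the function it depends on
DROP VIEW IF EXISTS pg_buffercache;
DROP FUNCTION IF EXISTS pg_buffercache_pages();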

Yours,
Laurenz Albe



Re: [GENERAL] stored procs

2011-09-30 Thread John R Pierce

On 09/30/11 12:24 AM, J.V. wrote:
What I want to do however is within a stored procedure make a call and 
store all tables in a given schema in a list.


so procedure1 calls procedure2 (not sure why this extra level of 
procedures, but I'm playing along)


and procedure2 does SELECT table_name from information_schema.tables 
where table_schema='someschemaname';  and returns this data as a 'list' 
(whatever you mean by that) ?
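
For what it's worth, a minimal PL/pgSQL sketch of that idea, collecting the
table names of a schema into an array (the function name and the 'public'
schema below are only illustrative):

CREATE OR REPLACE FUNCTION tables_in_schema(p_schema text)
RETURNS text[] AS $$
DECLARE
    tables text[];
BEGIN
    -- gather every table name in the given schema into one array
    SELECT array_agg(table_name::text)
      INTO tables
      FROM information_schema.tables
     WHERE table_schema = p_schema;
    RETURN tables;
END;
$$ LANGUAGE plpgsql;

-- usage:  SELECT tables_in_schema('public');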







--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




[GENERAL] stored procedures (packages)

2011-09-30 Thread J.V.
I need to run a series of stored procedures; what is the best way to
organize and run them?


Ideally I would like something like Oracle PL/SQL, where I can put all
methods in one file and create a main() method,


then just:  select main()  to have them all run.

If there is a way to do this, please let me know.

Also is it possible to call a stored function or procedure from another 
stored proc/function?





[GENERAL] stored procs / data types

2011-09-30 Thread J.V.

What data types do I have access to in a stored proc?

I cannot seem to find the stored procedure manual.

I am not speaking of database field/column data types, but rather
stored proc data types.

Regards,

J.V.



Re: [GENERAL] stored procedures (packages)

2011-09-30 Thread John R Pierce

On 09/30/11 12:57 AM, J.V. wrote:
I need to run a series of stored procedures, what is the best way to 
organize and run.


Ideally would like something like Oracle PL/SQL where I can put all 
methods in one file and create a main() method


then just :  select main() to have them all run.


in function main, have

select function1();
select function2();
...
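
A runnable sketch of that pattern in PL/pgSQL (function1/function2 stand for
your own functions; note that inside a PL/pgSQL body you discard a function's
result with PERFORM rather than SELECT):

CREATE OR REPLACE FUNCTION main() RETURNS void AS $$
BEGIN
    PERFORM function1();   -- ordinary function calls, run in order
    PERFORM function2();
END;
$$ LANGUAGE plpgsql;

-- then:  SELECT main();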


Also is it possible to call a stored function or procedure from 
another stored proc/function?



see above.

note that postgres doesn't actually have stored procedures; it only has
functions.


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] stored procs / data types

2011-09-30 Thread John R Pierce

On 09/30/11 12:59 AM, J.V. wrote:

What data types do I have access to in a stored proc?

I cannot seem to find the stored procedure manual.


http://www.postgresql.org/docs/current/static/plpgsql.html



I am not speaking of database field/column data types, but rather
stored proc data types.



in pl/pgsql they are the same: variables and parameters can use any SQL data type.


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] stored procs

2011-09-30 Thread John R Pierce

On 09/30/11 1:13 AM, J.V. wrote:
thanks for the help, we have a production schema with 80 tables and a
few of the tables have consumed the max limit for the id field, so I
have to write a program (stored functions would be the fastest way to
do this) that will go and drop the sequence, create and start it at 1
again, re-sequence every table, and then make sure all the foreign
keys in every table point to where they are supposed to.


you shouldn't have to drop the sequence; just
setval('sequencename', max(idfield)+1) after resequencing the IDs with
a generate_series(...).
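
A hedged sketch of that setval() call, assuming a table tbl whose serial
column id is backed by the sequence tbl_id_seq:

-- point the sequence at the value the next nextval() should return
SELECT setval('tbl_id_seq', (SELECT coalesce(max(id), 0) + 1 FROM tbl), false);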


or, convert the fields to bigint, which is 64-bit and not likely to wrap
around any time this century (sequences already use bigint)




--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] stored procs

2011-09-30 Thread John R Pierce

On 09/30/11 1:23 AM, John R Pierce wrote:
convert the fields to bigint which are 64 bit and not likely to wrap 
around any time in this century


indeed, this is as simple as ...

alter table tblname alter column id type bigint;

of course, you'll need to alter all the FK fields that refer to it too, 
and I don't know if that entangles this.



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] stored procs

2011-09-30 Thread John R Pierce

On 09/30/11 1:28 AM, John R Pierce wrote:

On 09/30/11 1:23 AM, John R Pierce wrote:
convert the fields to bigint which are 64 bit and not likely to wrap 
around any time in this century


indeed, this is as simple as ...

alter table tblname alter column id type bigint;

of course, you'll need to alter all the FK fields that refer to it 
too, and I don't know if that entangles this.





yes, quite easy.

test=# create table xxx (id serial primary key, val text);
NOTICE:  CREATE TABLE will create implicit sequence xxx_id_seq for 
serial column xxx.id
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
xxx_pkey for table xxx

CREATE TABLE
test=# create table yyy (id serial primary key, xxx_id int references 
xxx(id));
NOTICE:  CREATE TABLE will create implicit sequence yyy_id_seq for 
serial column yyy.id
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
yyy_pkey for table yyy

CREATE TABLE

test=# alter table xxx alter column id type bigint;
ALTER TABLE
test=# alter table yyy alter column xxx_id type bigint;
ALTER TABLE

voila!


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] Rules going away

2011-09-30 Thread Gregg Jaskiewicz
speaking of DO INSTEAD, for insert/update case. Try using RETURNING
with that and rules ;) Good luck



Re: [GENERAL] stored procs

2011-09-30 Thread J.V.
For tables that already exist and have a foreign key relationship, is 
there an equivalent alter statement for the statement below?


Does this mean that if table xxx.id primary key value changes, the 
foreign key value will change as well?


If this is the case, then I would not have to match up all the foreign
keys with the new re-sequenced ids.


Another question: if the id is a serial primary key, do I need to
create a sequence at all, or is that managed by the database?


Typically you have to link the id to the database sequence object, but
it looks as if in this case no CREATE SEQUENCE would be needed.


thanks


J.V.

On 9/30/2011 2:45 AM, John R Pierce wrote:
create table yyy (id serial primary key, xxx_id int references xxx(id)); 




Re: [GENERAL] stored procs

2011-09-30 Thread Alban Hertroys
On 30 September 2011 11:20, J.V. jvsr...@gmail.com wrote:
 For tables that already exist and have a foreign key relationship, is there
 an equivalent alter statement for the statement below?

 Does this mean that if table xxx.id primary key value changes, the foreign
 key value will change as well?

They do if you tell them to CASCADE on UPDATE.
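
A sketch of what that looks like, reusing the xxx/yyy example from earlier in
the thread (the constraint name yyy_xxx_id_fkey is the default name PostgreSQL
would have generated, so treat it as an assumption):

-- recreate the foreign key so updates to xxx.id propagate to yyy.xxx_id
ALTER TABLE yyy DROP CONSTRAINT yyy_xxx_id_fkey;
ALTER TABLE yyy ADD CONSTRAINT yyy_xxx_id_fkey
    FOREIGN KEY (xxx_id) REFERENCES xxx (id) ON UPDATE CASCADE;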

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



Re: [GENERAL] stored procedures (packages)

2011-09-30 Thread Albe Laurenz
J.V. wrote:
 I need to run a series of stored procedures, what is the best way to
 organize and run.
 
 Ideally would like something like Oracle PL/SQL where I can put all
 methods in one file and create a main() method
 
 then just :  select main() to have them all run.
 
 If there is a way to do this, please let me know.

Don't try to do PostgreSQL things the Oracle way.

To group functions that logically belong together, you can put them
into a separate schema.

You can grant users execute rights on only one of them if you like
(or rather, because by default PUBLIC has execute rights, revoke
that from all others).

 Also is it possible to call a stored function or procedure from
another
 stored proc/function?

Sure, use them in an SQL statement inside a function.

Yours,
Laurenz Albe



[GENERAL] Connection refused (0x0000274D/10061)

2011-09-30 Thread Pietro Laranci
 Good morning to everybody

I have a PostgreSQL 9 database to which I can connect locally but not remotely.
In the properties of the postgres db, the port is set to 5300 (not the default
5432). I tried to connect remotely with pgAdmin3 and Quantum GIS; both give the
error Connection refused (0x274D/10061).
This is the complete message in Quantum GIS (it is similar in pgAdmin3):

could not connect to server: Connection refused (0x274D/10061)

Is the server running on host 95.110.201.74 and accepting

TCP/IP connections on port 5300?

With both programs installed on the same PC as the database, I can connect to
the db without problems.

I have already made these settings:

- in postgresql.conf: listen_addresses = '*' and port = 5300 (in place of the
default 5432)

- in pg_hba.conf:

host   [dbname]   all 0.0.0.0/0  md5

host   [dbname]  all  127.0.0.1/32 md5

- restarted postgres

I also tried to substitute host=localhost with host=95.110.201.74 in the db
properties.

There are no active firewalls on the machines.

Have you any clue of what's wrong?

Grazie

Thanks



Re: [GENERAL] Problem dbi_link with postgresql 9.04

2011-09-30 Thread Emanuel Araújo
hi, news!

I found that the problem occurs when dbi_link parses a float/double field
into a text field, because it creates the tables with this data type
already in place.

Another thing we see is that the problem is not with NULL fields or zero
bytes but with a value of 1.5; there is no problem with NULLs or zero bytes.

Based on that, is there any solution?


Re: [GENERAL] Problem dbi_link with postgresql 9.04

2011-09-30 Thread Albe Laurenz
Emanuel Araújo wrote:
 found that the problem occurs when the dbi_link makes parsing of a field 
 float / double to a text
 field, because when it creates the tables already created with this type of 
 data.

I do not understand that.

 Another thing we see is that the problem is not with null fields or zero 
 bytes but with a value of
 1.5, no problem in NULLs or Zero Bytes.
 
 Based on that there is any solution?

What I would do is check how the data looks in Perl.
Build a simple Perl script that selects the problem data and displays it byte
by byte.

That should help you understand the problem.

Yours,
Laurenz Albe



[GENERAL] how to improve this similarity query?

2011-09-30 Thread hamann . w
Hi,

I have a table of names, and searches are usually performed on prefix match.
This could nicely translate into an index search
Suppose first name is stored as either 'Jim' or 'Jimmy', searching 
... where firstname ~* '^jim';
gets proper result. I had hoped that creating a functional
index on lower(firstname) and using a query like
 where lower(firstname) ~ '^jim'
would improve the search, but it does not.
I ended up with adding a lowercased column for matching
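
A likely reason the functional index is not used is the left-anchored pattern
under a non-C locale; an index built with the text_pattern_ops operator class
can serve such prefix searches (a sketch, assuming a hypothetical persons
table):

CREATE INDEX persons_lower_firstname_idx
    ON persons (lower(firstname) text_pattern_ops);

-- a left-anchored LIKE can use this index
SELECT * FROM persons WHERE lower(firstname) LIKE 'jim%';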

Now a few names (a few percent of the entire lot) have alternates, like 'James'.
These could be nicknames, informal variants, language variants, alternate 
spellings

I have already split off these few percent into a separate table and can query 
that like
... where 'jim' ~* firstname_pattern;

There are two problems with this approach: when I use 'Jimbo' for the plain 
query,
I do not get any 'Jim' or 'Jimmy', as expected. To achieve the same effect on 
the pattern
query, I need to decorate the pattern somewhat. Actually, when I specify 
'J(im|ames)' for
the pattern, it gets preprocessed - and is stored in the database as - 
'J(im|am($|e($|s)))$'
Unfortunately there are regex patterns which the preprocessing script cannot 
handle, so
I might have to try a different regex.
The other, bigger, problem: the search cannot make use of  an index, and it has 
to compile
a regex for every entry in the table. I am considering a change to that part: 
in the Jim/James
case it is obvious that I could speed up the query with
 where firstname_pattern ~* '^j' and 'jim' ~* firstname_pattern;
If the pattern was 'Bob|Robert' instead, I would have to change the 
preprocessing so the
'B' and 'R' parts would be separate.

So, I wonder whether there is any better way of doing these. I have looked into 
tsquery
resp. fulltext, but they do not seem to support prefix matches, only exact ones.

Regards
Wolfgang Hamann



Re: [GENERAL] Rules going away

2011-09-30 Thread Igor Neyman
 -Original Message-
 From: Gregg Jaskiewicz [mailto:gryz...@gmail.com]
 Sent: Friday, September 30, 2011 5:18 AM
 To: Igor Neyman
 Cc: Ondrej Ivanič; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Rules going away
 
 speaking of DO INSTEAD, for insert/update case. Try using RETURNING
 with that and rules ;) Good luck

Well, I designed it for a specific case,
and in my application I had no need for the RETURNING clause.
That's why I'm saying it works perfectly for me, but I'm not trying to
generalize, like those who say Rules are evil and don't ever use them.

Regards,
Igor Neyman



Re: [GENERAL] Rules going away

2011-09-30 Thread Gregg Jaskiewicz
You're right, rules are perfect for very limited and narrow cases, and
they make it very hard to write complicated queries (i.e., updates
that only touch a few columns, likewise with inserts).
I'm guessing the upside is that rules are faster than triggers.



Re: [GENERAL] Deadlock of REINDEX and SELECT queries in PostgresSQL 7.4

2011-09-30 Thread Adrian Klaver
On Thursday, September 29, 2011 8:49:07 am Tendulker, Shivanand G Prabhu (SSTL) 
wrote:
 Hello
 
 We are facing a deadlock kind of issue in PostgreSQL 7.4.
 
 We have 2 databases with 3 tables each. The DB contains about 250 records. We
 observed a deadlock when 2 different clients performing REINDEX and
 SELECT start their operations nearly simultaneously.
 
 Client 1 performs the following operations in a tight loop:
 VACUUM, ANALYZE of each table, REINDEX of each table.
 
 Client 2 performs SELECT on one of the tables in a tight loop.
 
 Upon looking at the postgres locks, it seems like all the locks are granted
 to the REINDEX operation and SELECT is waiting. REINDEX never returns in
 this scenario. This problem occurs when Client 1 and Client 2 are running
 simultaneously, both in a tight loop. Once this deadlock is reached, all
 the subsequent SELECT and REINDEX operations go into 'waiting' mode.
 
 Is this a known issue? Are REINDEX and SELECT transactions directed at
 postgres at the same time not a valid combination?

Yes, see here:
http://www.postgresql.org/docs/7.4/static/explicit-locking.html
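
As a side note, a quick way to see which session holds the blocking lock and
which ones are queued behind it is to query pg_locks directly (a minimal
sketch, limited to columns available even on old releases):

SELECT relation::regclass, pid, mode, granted
FROM pg_locks
ORDER BY granted, pid;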


 
 Please provide help in resolving this issue.
 
 Thanks and Regards
 Shiv

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Connection refused (0x0000274D/10061)

2011-09-30 Thread Adrian Klaver
On Friday, September 30, 2011 3:17:51 am Pietro Laranci wrote:
  Good morning to everybody
 
 I have a pg9 db to wich i can connect to by local but not from remote. In
 the properties of the postgres db , the port is set to 5300 (not the
 default 5432). I tried to connect in remote with pgadmin3 and quantum gis,
 both give the error Connection refused (0x274D/10061). This is the
 complet message in quantum gis (but is silimar in pgadmin3)
 
 could not connect to server: Connection refused (0x274D/10061)
 
 Is the server running on host 95.110.201.74 and accepting
 
 TCP/IP connections on port 5300?
 
 
 
 Whit both the programs installed on the same pc of the db, i can connect
 without problem to the db.
 
 
 
 I already did these settings:
 
 
 
 - in postgresql.conf : listen_addresses = '*' and port = 5300 (in the place
 of the default 5432)
 
 
 
 - in pg_hba.conf:
 
 host   [dbname]   all 0.0.0.0/0  md5
 
 host   [dbname]  all  127.0.0.1/32 md5
 
 
 
 - restarted postgres
 
 
 
 
 
 I also tried to substitute on the db properties host=localhost with
 host=95.110.201.74
 
 
 
 On the machines are not active firewalls.
 
 
 
 Have you any clue of what's wrong?

You did not say what OS you are running Postgres on; at a guess I am going to
go with Windows. Are you sure the Windows firewall is not blocking port 5300?

 
 
 
 Grazie
 
 
 
 Thanks

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-30 Thread Vincent Veyron
Le mercredi 28 septembre 2011 à 06:07 -0700, Rich Shepard a écrit :
 On Wed, 28 Sep 2011, Vincent Veyron wrote:
 
  I'm not sure what 'INSTALL INTO ... statements' are, but are you aware
  of the very convenient 'M-x sql-postgres' in emacs?
 
 Vincent,
 
   I have a SQL major mode for emacs. Don't know that it's specific to
postgres, but it is automatically invoked when I open a buffer with a .sql
file name. I get syntax-related colors; perhaps it does more that I am
unaware of.


You get a lot more: this gives you an interface to PostgreSQL inside an
Emacs buffer.


-- 
Vincent Veyron
http://marica.fr/
Logiciel de gestion des sinistres et des contentieux pour le service juridique




[GENERAL] postgres for OLAP data mining

2011-09-30 Thread Dario Beraldi
Hello,

I'm looking for some information (guidelines, docs, tutorials, etc.) about
using postgres for OLAP, data mining, and data warehousing. The sort of
questions I'm trying to answer are along the lines of:
- What tools/SQL commands are available?
- How should postgres be configured?
- How suitable is postgres for these tasks (compared to other databases)?

I have done some Google search but I would appreciate any advice/hint from
more experienced users.

Many thanks in advance!

Dario


Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-30 Thread Rich Shepard

On Fri, 30 Sep 2011, Vincent Veyron wrote:


You get a lot more : this gives you an interface to Postgresql inside an
Emacs buffer.


  Thank you.

Rich



Re: [GENERAL] postgres for OLAP data mining

2011-09-30 Thread Ovnicraft
On Fri, Sep 30, 2011 at 9:12 AM, Dario Beraldi dario.bera...@gmail.comwrote:

 Hello,

 I'm looking for some information (guidelines, docs, tutorials, etc...)
 about using postgres for OLAP, data mining, data warehousing. The sort of
 questions I'm trying to answer are on the lines of
 - What tools/SQL commands are available?
 - How should postgres be configured?
 - How suitable is postgres for these tasks (compared to other databases)?

 I have done some Google search but I would appreciate any advice/hint from
 more experienced users.


IMHO, what you need belongs in the logic-layer software, not in the data layer.

Regards,


 Many thanks in advance!

 Dario




-- 
Cristian Salamea
@ovnicraft


[GENERAL] One-click installer, Windows 7 32-bit, and icacls.exe

2011-09-30 Thread Karl Wright
I saw a thread where somebody saw icacls.exe being called by the
one-click installer.  I'm having the same thing - the installer has
been running for 45 minutes now and is basically going to have to be
stopped because I'm out of time waiting for it.  Looking at process
monitor, it is clear that icacls.exe is going through every file on
the entire system and changing its permissions.  The process tree
indicates that it is a child of the installer, and that it is running
the command:

icacls C:\ /grant kawright:RX

Clearly this won't do at all and should be considered a severe installer bug.

Thanks,
Karl



Re: [GENERAL] Any was to prevent a join if no columns are selected from a view?

2011-09-30 Thread Jason Long
On Thu, 2011-09-29 at 22:54 -0600, Ben Chobot wrote:
 On Sep 29, 2011, at 4:57 PM, Jason Long wrote:
 
 
 
   I thought I had read somewhere that Postgres could ignore a join if it
   was not necessary because there were no columns from the table or view
   selected in the query.  Is this possible?
 
 
 
 This sounds like incorrect logic to me, so I would be surprised if it
 was possible.

That is the way it is looking.  I just modified my application to
generate the join manually.


Re: [GENERAL] stored procs

2011-09-30 Thread John R Pierce

On 09/30/11 2:09 AM, J.V. wrote:
Some tables have millions of rows, 


well, something like UPDATE tablename SET 
id=generate_series(1,numberofrows);  will update every row to a 
sequential value. However, I have no idea how you would match the 
foreign key references in other tables to these new sequence values.


anything that updates a field on a million rows will cause every
row to be rewritten... postgres never updates anything in place; it will
copy the whole row to a new one (this is how it implements MVCC,
and it's fundamental to the architecture; there's nothing you can do to
override this behavior).




--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] how to improve this similarity query?

2011-09-30 Thread pasman pasmański
Look at doc chapter II.12

2011/9/30, haman...@t-online.de haman...@t-online.de:
 Hi,

 I have a table of names, and searches are usually performed on prefix match.
 This could nicely translate into an index search
 Suppose first name is stored as either 'Jim' or 'Jimmy', searching
 ... where firstname ~* '^jim';
 gets proper result. I had hoped that creating a functional
 index on lower(firstname) and using a query like
  where lower(firstname) ~ '^jim'
 would improve the search, but it does not.
 I ended up with adding a lowercased column for matching

 Now a few names (a few percent of the entire lot) have alternates, like
 'James'.
 These could be nicknames, informal variants, language variants, alternate
 spellings

 I have already split off these few percent into a separate table and can
 query that like
 ... where 'jim' ~* firstname_pattern;

 There are two problems with this approach: when I use 'Jimbo' for the plain
 query,
 I do not get any 'Jim' or 'Jimmy', as expected. To achieve the same effect
 on the pattern
 query, I need to decorate the pattern somewhat. Actually, when I specify
 'J(im|ames)' for
 the pattern, it gets preprocessed - and is stored in the database as -
 'J(im|am($|e($|s)))$'
 Unfortunately there are regex patterns which the preprocessing script cannot
 handle, so
 I might have to try a different regex.
 The other, bigger, problem: the search cannot make use of  an index, and it
 has to compile
 a regex for every entry in the table. I am considering a change to that
 part: in the Jim/James
 case it is obvious that I could speed up the query with
  where firstname_pattern ~* '^j' and 'jim' ~* firstname_pattern;
 If the pattern was 'Bob|Robert' instead, I would have to change the
 preprocessing so the
 'B' and 'R' parts would be separate.

 So, I wonder whether there is any better way of doing these. I have looked
 into tsquery
 resp. fulltext, but they do not seem to support prefix matches, only exact
 ones.

 Regards
 Wolfgang Hamann




-- 

pasman



[GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
I would like to change my server_encoding which is currently SQL_ASCII to UTF8.

I have existing data that I would like to keep.

From my understanding, these are the steps I need:

1) alter the template1 database encoding via

UPDATE pg_database SET encoding = 6 where datname IN ('template0', 'template1');

2) Dump my current database

pg_dump -Fc foo > foo.db

3) Drop my current database

drop database foo;

4) recreate it with the proper encoding

create database foo with template = template1 encoding = 'UTF-8';

5) restore from backup

pg_restore -d foo foo.db



Are these the correct steps to perform or is there an easier / in-place way? 

Also, when I dump my old DB and restore it, will it be converted appropriately 
(e.g. it came from an SQL_ASCII encoding and it's going into a UTF-8 database)?

Thank you

/Cody


Re: [GENERAL] postgres for OLAP data mining

2011-09-30 Thread Henry Drexler
Are you looking for stuff like this?

http://www.postgresql.org/docs/9.0/static/functions-window.html
http://www.postgresql.org/docs/9.0/static/functions-string.html
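
For instance, the window functions documented at the first link cover the
typical OLAP-style running aggregates; a small sketch against a hypothetical
sales(region, month, amount) table:

SELECT region,
       month,
       sum(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
FROM sales;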

On Fri, Sep 30, 2011 at 10:12 AM, Dario Beraldi dario.bera...@gmail.comwrote:

 Hello,

 I'm looking for some information (guidelines, docs, tutorials, etc...)
 about using postgres for OLAP, data mining, data warehousing. The sort of
 questions I'm trying to answer are on the lines of
 - What tools/SQL commands are available?
 - How should postgres be configured?
 - How suitable is postgres for these tasks (compared to other databases)?

 I have done some Google search but I would appreciate any advice/hint from
 more experienced users.

 Many thanks in advance!

 Dario


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com wrote:
 I would like to change my server_encoding which is currently SQL_ASCII to 
 UTF8.

 I have existing data that I would like to keep.

 From my understanding of the steps I need to:

 1) alter the template1 database encoding via

 UPDATE pg_database SET encoding = 6 where datname IN ('template0', 
 'template1');

Just create the database using template0 as the template and you can skip this step ^^
 Are these the correct steps to perform or is there an easier / in-place way?

 Also, when I dump my old DB and restore it, will it be converted 
 appropriately (e.g. it came from am SQL_ASCII encoding and its going into a 
 UTF-8 database)?

You might need to set client encoding when restoring.  Or use iconv to
convert from one encoding to another, which is what I usually do.
Note that it's VERY likely you'll have data in a SQL_ASCII db that
won't go into a UTF8 database without some lossiness.



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
Thanks Scott. See below:

On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe scott.marl...@gmail.comwrote:

 On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com wrote:
  I would like to change my server_encoding which is currently SQL_ASCII to
 UTF8.
 
  I have existing data that I would like to keep.
 
  From my understanding of the steps I need to:
 
  1) alter the template1 database encoding via
 
  UPDATE pg_database SET encoding = 6 where datname IN ('template0',
 'template1');

 Just create database using template0 as template and you can skip this step
 ^^



Wouldn't this only work if my template0 was UTF8 itself?

= select datname, pg_encoding_to_char(encoding) from pg_database;
  datname  | pg_encoding_to_char
-----------+---------------------
 template1 | SQL_ASCII
 template0 | SQL_ASCII
 postgres  | SQL_ASCII


So it appears both template0 and template1 are SQL_ASCII, so how would
creating a new DB from template0 be any different than from template1?



  Are these the correct steps to perform or is there an easier / in-place
 way?

  Also, when I dump my old DB and restore it, will it be converted
 appropriately (e.g. it came from am SQL_ASCII encoding and its going into a
 UTF-8 database)?

 You might need to set client encoding when restoring.  Or use iconv to
 convert from one encoding to another, which is what I usually do.
 Note that it's VERY likely you'll have data in a SQL_ASCII db that
 won't go into a UTF8 database without some lossiness.



Yes, I see this might be the case. From my playing around with iconv I
cannot even properly do the conversion:

$ pg_dump -Fp foo > foo.sql
$ file -i foo.sql
foo.sql: text/plain; charset=us-ascii
$ iconv -f utf-8 foo.sql > utf8.sql
iconv: illegal input sequence at position 2512661

Uh oh... I cannot even convert it?

What's my next step at this point if I cannot even convert my data? I'd be OK
with some lossiness.

Thanks again
/Cody


Re: [GENERAL] postgres for OLAP data mining

2011-09-30 Thread Eduardo Morras

At 16:12 30/09/2011, Dario Beraldi wrote:

Hello,

I'm looking for some information (guidelines, docs, tutorials, 
etc...) about using postgres for OLAP, data mining, data 
warehousing. The sort of questions I'm trying to answer are on the lines of

- What tools/SQL commands are available?
- How should postgres be configured?
- How suitable is postgres for these tasks (compared to other databases)?

I have done some Google search but I would appreciate any 
advice/hint from more experienced users.


Check these tutorials:

http://pgexperts.com/document.html?id=49

http://www.pgexperts.com/document.html?id=40

There are more at http://www.pgexperts.com/presentations.html

YOu can check the presentations made on several pgcons and similar events

http://www.pgcon.org/2011/schedule/index.en.html

http://www.postgresopen.org/2011/home/

HTH 






Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com wrote:
 Thanks Scott. See below:

 On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe scott.marl...@gmail.com
 wrote:

 On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com wrote:
  I would like to change my server_encoding which is currently SQL_ASCII
  to UTF8.
 
  I have existing data that I would like to keep.
 
  From my understanding of the steps I need to:
 
  1) alter the template1 database encoding via
 
  UPDATE pg_database SET encoding = 6 where datname IN ('template0',
  'template1');

 Just create database using template0 as template and you can skip this
 step ^^


 Wouldn't this only work if my template0 was UTF8 itself?
 = select datname, pg_encoding_to_char(encoding) from pg_database;
        datname        | pg_encoding_to_char
 --+-
  template1            | SQL_ASCII
  template0            | SQL_ASCII
  postgres             | SQL_ASCII

 So it appears both template0  template1 are SQL_ASCII, so how would
 creating from a new DB from template0 be any different than template1?

Well, let's try, shall we?  From a freshly created cluster on my
laptop, running 8.4:

smarlowe=# select datname, pg_encoding_to_char(encoding) from pg_database;
  datname  | pg_encoding_to_char
-----------+---------------------
 template1 | SQL_ASCII
 template0 | SQL_ASCII
 postgres  | SQL_ASCII
 smarlowe  | SQL_ASCII
(4 rows)

smarlowe=# create database j template template0 encoding 'UTF8';
CREATE DATABASE

Seems to work.

P.s. I'm not sure why it works, I just know that it does. :)

  Are these the correct steps to perform or is there an easier / in-place
  way?

  Also, when I dump my old DB and restore it, will it be converted
  appropriately (e.g. it came from am SQL_ASCII encoding and its going into a
  UTF-8 database)?

 You might need to set client encoding when restoring.  Or use iconv to
 convert from one encoding to another, which is what I usually do.
 Note that it's VERY likely you'll have data in a SQL_ASCII db that
 won't go into a UTF8 database without some lossiness.


 Yes, I see this might be the case. From my playing around with iconv I
 cannot even properly do the conversion:
 $ pg_dump -Fp foo  foo.sql
 $ file -i foo.sql
 foo.sql: text/plain; charset=us-ascii
 $ iconv -f utf-8 foo.sql  utf8.sql
 iconv: illegal input sequence at position 2512661

I think you got it backwards, the -f should be something other than
utf-8, right?  That's what the -t should be, right?  Try iconv without a
-f switch and a -t of utf-8 and see what happens...



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Tom Lane
Scott Marlowe scott.marl...@gmail.com writes:
 On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com wrote:
 So it appears both template0  template1 are SQL_ASCII, so how would
 creating from a new DB from template0 be any different than template1?

 P.s. I'm not sure why it works, I just know that it does. :)

CREATE DATABASE assumes that template0 cannot contain any non-ASCII
data, so it's okay to clone it and then pretend that the result has some
other encoding.  The same assumption cannot be made for template1, since
that's user-modifiable.

regards, tom lane



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
Please see below.

On Fri, Sep 30, 2011 at 11:38 AM, Scott Marlowe scott.marl...@gmail.comwrote:

 On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com wrote:
  Thanks Scott. See below:
 
  On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe scott.marl...@gmail.com
 
  wrote:
 
  On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com
 wrote:
   I would like to change my server_encoding which is currently SQL_ASCII
   to UTF8.
  
   I have existing data that I would like to keep.
  
   From my understanding of the steps I need to:
  
   1) alter the template1 database encoding via
  
   UPDATE pg_database SET encoding = 6 where datname IN ('template0',
   'template1');
 
  Just create database using template0 as template and you can skip this
  step ^^
 
 
  Wouldn't this only work if my template0 was UTF8 itself?
  = select datname, pg_encoding_to_char(encoding) from pg_database;
 datname| pg_encoding_to_char
  --+-
   template1| SQL_ASCII
   template0| SQL_ASCII
   postgres | SQL_ASCII
 
  So it appears both template0  template1 are SQL_ASCII, so how would
  creating from a new DB from template0 be any different than template1?

 Well, let's try, shall we?  From a freshly created cluster on my
 laptop, running 8.4:

 smarlowe=# select datname, pg_encoding_to_char(encoding) from pg_database;
   datname  | pg_encoding_to_char
 ---+-
  template1 | SQL_ASCII
  template0 | SQL_ASCII
  postgres  | SQL_ASCII
  smarlowe  | SQL_ASCII
 (4 rows)

 smarlowe=# create database j template template0 encoding 'UTF8';
 CREATE DATABASE

 Seems to work.

 P.s. I'm not sure why it works, I just know that it does. :)


Ok, I see what you mean. This would create a new DB with the proper
encoding. Which is fine, and probably what I will do. I guess I see an
ideal scenario being one where we permanently convert the template encoding
to UTF8, so going forward I don't have to worry about forgetting to add
encoding = 'UTF8' for every new DB I create.


   Are these the correct steps to perform or is there an easier /
 in-place
   way?
 
   Also, when I dump my old DB and restore it, will it be converted
   appropriately (e.g. it came from am SQL_ASCII encoding and its going
 into a
   UTF-8 database)?
 
  You might need to set client encoding when restoring.  Or use iconv to
  convert from one encoding to another, which is what I usually do.
  Note that it's VERY likely you'll have data in a SQL_ASCII db that
  won't go into a UTF8 database without some lossiness.
 
 
  Yes, I see this might be the case. From my playing around with iconv I
  cannot even properly do the conversion:
  $ pg_dump -Fp foo  foo.sql
  $ file -i foo.sql
  foo.sql: text/plain; charset=us-ascii
  $ iconv -f utf-8 foo.sql  utf8.sql
  iconv: illegal input sequence at position 2512661

 I think you got it backwards, the -f should be somthing other than
 utf-8 right?  That's what the -t should be right?  Try iconv without a
 -f switch and a -t of utf-8 and see what happens...


You're right, I had -f when I needed -t. I tried it again with the same
error:

$ iconv -t utf-8 foo.sql > utf.sql
iconv: illegal input sequence at position 2512661


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 12:46 PM, Cody Caughlan tool...@gmail.com wrote:
 Please see below.

 On Fri, Sep 30, 2011 at 11:38 AM, Scott Marlowe scott.marl...@gmail.com
 wrote:

 On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com wrote:
  Thanks Scott. See below:
 
  On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe
  scott.marl...@gmail.com
  wrote:
 
  On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com
  wrote:
   I would like to change my server_encoding which is currently
   SQL_ASCII
   to UTF8.
  
   I have existing data that I would like to keep.
  
   From my understanding of the steps I need to:
  
   1) alter the template1 database encoding via
  
   UPDATE pg_database SET encoding = 6 where datname IN ('template0',
   'template1');
 
  Just create database using template0 as template and you can skip this
  step ^^
 
 
  Wouldn't this only work if my template0 was UTF8 itself?
  = select datname, pg_encoding_to_char(encoding) from pg_database;
         datname        | pg_encoding_to_char
  --+-
   template1            | SQL_ASCII
   template0            | SQL_ASCII
   postgres             | SQL_ASCII
 
  So it appears both template0  template1 are SQL_ASCII, so how would
  creating from a new DB from template0 be any different than template1?

 Well, let's try, shall we?  From a freshly created cluster on my
 laptop, running 8.4:

 smarlowe=# select datname, pg_encoding_to_char(encoding) from pg_database;
  datname  | pg_encoding_to_char
 ---+-
  template1 | SQL_ASCII
  template0 | SQL_ASCII
  postgres  | SQL_ASCII
  smarlowe  | SQL_ASCII
 (4 rows)

 smarlowe=# create database j template template0 encoding 'UTF8';
 CREATE DATABASE

 Seems to work.

 P.s. I'm not sure why it works, I just know that it does. :)


 Ok, I see what you mean. This would create a new DB with the proper
 encoding. Which is fine, and probably what I will do. I guess I see an
 ideal scenario being one where we permanently convert the template encoding
 to UTF8 so going forward I dont have to worry about forgetting to adding the
 encoding= 'UTF8' for every new DB I create.

Ah ok.  The way I fix that is this:

update pg_database set datistemplate = false where datname='template1';
drop database template1;
create database template1 template template0 encoding 'UTF8';

But your way would likely work too.

 I think you got it backwards, the -f should be somthing other than
 utf-8 right?  That's what the -t should be right?  Try iconv without a
 -f switch and a -t of utf-8 and see what happens...

 You're right, I had -f when I needed -t. I tried it again with the same
 error:
 $ iconv -t utf-8 foo.sql  utf.sql
 iconv: illegal input sequence at position 2512661

Any idea what the actual encoding of your source database is?
SQL_ASCII is basically not really ASCII, more like anything goes.



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Tom Lane
Cody Caughlan tool...@gmail.com writes:
 Ok, I see what you mean. This would create a new DB with the proper
 encoding. Which is fine, and probably what I will do. I guess I see an
 ideal scenario being one where we permanently convert the template encoding
 to UTF8 so going forward I dont have to worry about forgetting to adding the
 encoding= 'UTF8' for every new DB I create.

Well, if you're feeling brave you can mark template0 as having utf8
encoding via a manual update to pg_database.  In theory that should
be safe enough.  If you know template1 doesn't, and never has, contained
any non-ASCII data, you could do the same to it ... but it would be a
lot safer to drop it and recreate from template0.  See
http://wiki.postgresql.org/wiki/Adventures_in_PostgreSQL,_Episode_1
for some context.

regards, tom lane



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
On Fri, Sep 30, 2011 at 12:31 PM, Scott Marlowe scott.marl...@gmail.comwrote:

 On Fri, Sep 30, 2011 at 12:46 PM, Cody Caughlan tool...@gmail.com wrote:
  Please see below.
 
  On Fri, Sep 30, 2011 at 11:38 AM, Scott Marlowe scott.marl...@gmail.com
 
  wrote:
 
  On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com
 wrote:
   Thanks Scott. See below:
  
   On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe
   scott.marl...@gmail.com
   wrote:
  
   On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com
   wrote:
I would like to change my server_encoding which is currently
SQL_ASCII
to UTF8.
   
I have existing data that I would like to keep.
   
From my understanding of the steps I need to:
   
1) alter the template1 database encoding via
   
UPDATE pg_database SET encoding = 6 where datname IN ('template0',
'template1');
  
   Just create database using template0 as template and you can skip
 this
   step ^^
  
  
   Wouldn't this only work if my template0 was UTF8 itself?
   = select datname, pg_encoding_to_char(encoding) from pg_database;
  datname| pg_encoding_to_char
   --+-
template1| SQL_ASCII
template0| SQL_ASCII
postgres | SQL_ASCII
  
   So it appears both template0  template1 are SQL_ASCII, so how would
   creating from a new DB from template0 be any different than template1?
 
  Well, let's try, shall we?  From a freshly created cluster on my
  laptop, running 8.4:
 
  smarlowe=# select datname, pg_encoding_to_char(encoding) from
 pg_database;
   datname  | pg_encoding_to_char
  ---+-
   template1 | SQL_ASCII
   template0 | SQL_ASCII
   postgres  | SQL_ASCII
   smarlowe  | SQL_ASCII
  (4 rows)
 
  smarlowe=# create database j template template0 encoding 'UTF8';
  CREATE DATABASE
 
  Seems to work.
 
  P.s. I'm not sure why it works, I just know that it does. :)
 
 
  Ok, I see what you mean. This would create a new DB with the proper
  encoding. Which is fine, and probably what I will do. I guess I see an
  ideal scenario being one where we permanently convert the template
 encoding
  to UTF8 so going forward I dont have to worry about forgetting to adding
 the
  encoding= 'UTF8' for every new DB I create.

 Ah ok.  The way I fix that is this:

 update pg_database set datistemplate = false where datname='template1';
 drop database template1;
 create database template1 template template0 encoding 'UTF8';

 But your way would likely work too.

  I think you got it backwards, the -f should be somthing other than
  utf-8 right?  That's what the -t should be right?  Try iconv without a
  -f switch and a -t of utf-8 and see what happens...
 
  You're right, I had -f when I needed -t. I tried it again with the same
  error:
  $ iconv -t utf-8 foo.sql  utf.sql
  iconv: illegal input sequence at position 2512661

 Any idea waht the actual encoding of your source database is?
 SQL_ASCII is basically not really ascii, more like anything goes.



How would I find this? pg_database says my DB is SQL_ASCII.

show all says

client_encoding = SQL_ASCII
server_encoding = SQL_ASCII


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 1:35 PM, Cody Caughlan tool...@gmail.com wrote:


 On Fri, Sep 30, 2011 at 12:31 PM, Scott Marlowe scott.marl...@gmail.com
 wrote:

 On Fri, Sep 30, 2011 at 12:46 PM, Cody Caughlan tool...@gmail.com wrote:
  Please see below.
 
  On Fri, Sep 30, 2011 at 11:38 AM, Scott Marlowe
  scott.marl...@gmail.com
  wrote:
 
  On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com
  wrote:
   Thanks Scott. See below:
  
   On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe
   scott.marl...@gmail.com
   wrote:
  
   On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan tool...@gmail.com
   wrote:
I would like to change my server_encoding which is currently
SQL_ASCII
to UTF8.
   
I have existing data that I would like to keep.
   
From my understanding of the steps I need to:
   
1) alter the template1 database encoding via
   
UPDATE pg_database SET encoding = 6 where datname IN ('template0',
'template1');
  
   Just create database using template0 as template and you can skip
   this
   step ^^
  
  
   Wouldn't this only work if my template0 was UTF8 itself?
   = select datname, pg_encoding_to_char(encoding) from pg_database;
          datname        | pg_encoding_to_char
    ----------------------+---------------------
    template1            | SQL_ASCII
    template0            | SQL_ASCII
    postgres             | SQL_ASCII
  
    So it appears both template0 & template1 are SQL_ASCII, so how would
    creating a new DB from template0 be any different than from
    template1?
 
  Well, let's try, shall we?  From a freshly created cluster on my
  laptop, running 8.4:
 
  smarlowe=# select datname, pg_encoding_to_char(encoding) from
  pg_database;
   datname  | pg_encoding_to_char
   -----------+---------------------
   template1 | SQL_ASCII
   template0 | SQL_ASCII
   postgres  | SQL_ASCII
   smarlowe  | SQL_ASCII
  (4 rows)
 
  smarlowe=# create database j template template0 encoding 'UTF8';
  CREATE DATABASE
 
  Seems to work.
 
  P.s. I'm not sure why it works, I just know that it does. :)
 
 
   Ok, I see what you mean. This would create a new DB with the proper
   encoding, which is fine, and probably what I will do. I guess I see an
   ideal scenario being one where we permanently convert the template
   encoding to UTF8, so going forward I don't have to worry about forgetting
   to add the encoding = 'UTF8' for every new DB I create.

 Ah ok.  The way I fix that is this:

 update pg_database set datistemplate = false where datname='template1';
 drop database template1;
 create database template1 template template0 encoding 'UTF8';

 But your way would likely work too.

  I think you got it backwards, the -f should be something other than
  utf-8 right?  That's what the -t should be right?  Try iconv without a
  -f switch and a -t of utf-8 and see what happens...
 
  You're right, I had -f when I needed -t. I tried it again with the same
  error:
  $ iconv -t utf-8 foo.sql > utf.sql
  iconv: illegal input sequence at position 2512661

 Any idea what the actual encoding of your source database is?
 SQL_ASCII is basically not really ascii, more like anything goes.


 How would I find this? pg_database says my DB is SQL_ASCII.
 show all says
 client_encoding = SQL_ASCII
 server_encoding = SQL_ASCII

It would have been set by the application accessing PostgreSQL and
inserting the data, i.e. was it a Windows app using a typical Windows
encoding?  etc.
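
A rough way to check (this is only a sketch, not from the thread itself): list
the lines of the plain-text dump that contain non-ASCII bytes, then try a
likely single-byte encoding such as WINDOWS-1252 or LATIN1 and see whether the
converted text looks sensible.  The -P flag assumes GNU grep, and foo.sql is
the dump file mentioned above:

  LC_ALL=C grep -nP '[\x80-\xff]' foo.sql | head
  iconv -f WINDOWS-1252 -t UTF-8 foo.sql > utf.sql

If the text around those lines reads correctly after conversion, that was
probably the real encoding of the inserted data.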

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
It's a Rails app and I do have:

  encoding: utf8

Set in my DB configuration.

On Fri, Sep 30, 2011 at 12:38 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Fri, Sep 30, 2011 at 1:35 PM, Cody Caughlan tool...@gmail.com wrote:
 
 
  On Fri, Sep 30, 2011 at 12:31 PM, Scott Marlowe scott.marl...@gmail.com
 
  wrote:
 
  On Fri, Sep 30, 2011 at 12:46 PM, Cody Caughlan tool...@gmail.com
 wrote:
   Please see below.
  
   On Fri, Sep 30, 2011 at 11:38 AM, Scott Marlowe
   scott.marl...@gmail.com
   wrote:
  
   On Fri, Sep 30, 2011 at 12:26 PM, Cody Caughlan tool...@gmail.com
   wrote:
Thanks Scott. See below:
   
On Fri, Sep 30, 2011 at 11:05 AM, Scott Marlowe
scott.marl...@gmail.com
wrote:
   
On Fri, Sep 30, 2011 at 11:25 AM, Cody Caughlan 
 tool...@gmail.com
wrote:
 I would like to change my server_encoding which is currently
 SQL_ASCII
 to UTF8.

 I have existing data that I would like to keep.

 From my understanding of the steps I need to:

 1) alter the template1 database encoding via

 UPDATE pg_database SET encoding = 6 where datname IN
 ('template0',
 'template1');
   
Just create database using template0 as template and you can skip
this
step ^^
   
   
Wouldn't this only work if my template0 was UTF8 itself?
 = select datname, pg_encoding_to_char(encoding) from pg_database;
    datname   | pg_encoding_to_char
 -------------+---------------------
  template1   | SQL_ASCII
  template0   | SQL_ASCII
  postgres    | SQL_ASCII
   
 So it appears both template0 & template1 are SQL_ASCII, so how
 would creating a new DB from template0 be any different than
 from template1?
  
   Well, let's try, shall we?  From a freshly created cluster on my
   laptop, running 8.4:
  
   smarlowe=# select datname, pg_encoding_to_char(encoding) from
   pg_database;
datname  | pg_encoding_to_char
    -----------+---------------------
template1 | SQL_ASCII
template0 | SQL_ASCII
postgres  | SQL_ASCII
smarlowe  | SQL_ASCII
   (4 rows)
  
   smarlowe=# create database j template template0 encoding 'UTF8';
   CREATE DATABASE
  
   Seems to work.
  
   P.s. I'm not sure why it works, I just know that it does. :)
  
  
    Ok, I see what you mean. This would create a new DB with the proper
    encoding, which is fine, and probably what I will do. I guess I see an
    ideal scenario being one where we permanently convert the template
    encoding to UTF8, so going forward I don't have to worry about
    forgetting to add the encoding = 'UTF8' for every new DB I create.
 
  Ah ok.  The way I fix that is this:
 
  update pg_database set datistemplate = false where datname='template1';
  drop database template1;
  create database template1 template template0 encoding 'UTF8';
 
  But your way would likely work too.
 
   I think you got it backwards, the -f should be something other than
   utf-8 right?  That's what the -t should be right?  Try iconv without
 a
   -f switch and a -t of utf-8 and see what happens...
  
   You're right, I had -f when I needed -t. I tried it again with the
 same
   error:
   $ iconv -t utf-8 foo.sql > utf.sql
   iconv: illegal input sequence at position 2512661
 
  Any idea what the actual encoding of your source database is?
  SQL_ASCII is basically not really ascii, more like anything goes.
 
 
  How would I find this? pg_database says my DB is SQL_ASCII.
  show all says
  client_encoding = SQL_ASCII
  server_encoding = SQL_ASCII

 It would have been set by the application accessing PostgreSQL and
 inserting the data, i.e. was it a Windows app using a typical Windows
 encoding?  etc.



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 1:39 PM, Cody Caughlan tool...@gmail.com wrote:
 It's a Rails app and I do have:
   encoding: utf8


Hmmm, if you try this does it work (mostly)?

iconv -f utf-8 -t utf-8 -c < infile > outfile
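
(A quick, rough way to see whether -c actually dropped anything -- just a
sketch using the file names from earlier in the thread:

  iconv -f utf-8 -t utf-8 -c < foo.sql > utf.sql
  cmp -s foo.sql utf.sql && echo "files identical" || echo "some bytes differ"

If cmp reports a difference, iconv removed or changed at least one byte even
though file may still report us-ascii.)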

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
That worked, but file shows no difference:

$ iconv -f utf-8 -t utf-8 -c foo.sql > utf.sql
$ file -i foo.sql
foo.sql: text/plain; charset=us-ascii

$ file -i utf.sql
utf.sql: text/plain; charset=us-ascii

So iconv didn't actually convert the file, OR is the file command just
ignorant?

On Fri, Sep 30, 2011 at 12:41 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Fri, Sep 30, 2011 at 1:39 PM, Cody Caughlan tool...@gmail.com wrote:
  It's a Rails app and I do have:
encoding: utf8


 Hmmm, if you try this does it work (mostly)?

  iconv -f utf-8 -t utf-8 -c < infile > outfile



Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 1:41 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Fri, Sep 30, 2011 at 1:39 PM, Cody Caughlan tool...@gmail.com wrote:
 It's a Rails app and I do have:
   encoding: utf8


 Hmmm, if you try this does it work (mostly)?

  iconv -f utf-8 -t utf-8 -c < infile > outfile

If that doesn't work try:

iconv -f utf-8 -t utf-8//IGNORE -c < infile > outfile

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres for OLAP data mining

2011-09-30 Thread Dario Beraldi
On 30 September 2011 18:41, Henry Drexler alonup...@gmail.com wrote:

 Are you looking for stuff like this?

 http://www.postgresql.org/docs/9.0/static/functions-window.html
 http://www.postgresql.org/docs/9.0/static/functions-string.html

 Thanks for the reply. Yes, I knew about these functions. In fact, I was rather
wondering whether there are dedicated packages for OLAP and some guidelines
for configuring postgres (and whether postgres is a good option for OLAP).
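
(For the record, the window functions linked above already cover a lot of the
OLAP-style ground; a toy sketch against a hypothetical
sales(region, sale_date, amount) table, not a dedicated package:

  SELECT region,
         sale_date,
         amount,
         sum(amount) OVER (PARTITION BY region ORDER BY sale_date) AS running_total,
         rank()      OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
  FROM sales;

)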

Best
Dario



 On Fri, Sep 30, 2011 at 10:12 AM, Dario Beraldi 
  dario.bera...@gmail.com wrote:

 Hello,

 I'm looking for some information (guidelines, docs, tutorials, etc...)
 about using postgres for OLAP, data mining, data warehousing. The sort of
 questions I'm trying to answer are on the lines of
 - What tools/SQL commands are available?
 - How should postgres be configured?
 - How suitable is postgres for these tasks (compared to other databases)?

 I have done some Google search but I would appreciate any advice/hint from
 more experienced users.

 Many thanks in advance!

 Dario





Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 1:45 PM, Cody Caughlan tool...@gmail.com wrote:
 That worked, but file shows no difference:
 $ iconv -f utf-8 -t utf-8 -c foo.sql > utf.sql
 $ file -i foo.sql
 foo.sql: text/plain; charset=us-ascii
 $ file -i utf.sql
 utf.sql: text/plain; charset=us-ascii
 So iconv didn't actually convert the file, OR is the file command just
 ignorant?

Not sure.  Try loading the dump into the UTF-8 DB in postgres and see
what happens, I guess?
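
A rough sketch of that test (the database name is just an example; utf.sql is
the iconv-converted plain-text dump from earlier in the thread):

  createdb -E UTF8 -T template0 test_utf8
  psql -d test_utf8 -v ON_ERROR_STOP=1 -f utf.sql

ON_ERROR_STOP makes psql stop at the first bad byte sequence instead of
ploughing on, so the offending line shows up straight away.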

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Cody Caughlan
Please see below.

On Fri, Sep 30, 2011 at 1:12 PM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Fri, Sep 30, 2011 at 1:45 PM, Cody Caughlan tool...@gmail.com wrote:
  That worked, but file shows no difference:
  $ iconv -f utf-8 -t utf-8 -c foo.sql > utf.sql
  $ file -i foo.sql
  foo.sql: text/plain; charset=us-ascii
  $ file -i utf.sql
  utf.sql: text/plain; charset=us-ascii
  So iconv didn't actually convert the file, OR is the file command just
  ignorant?

 Not sure.  try loading the dump into the UTF-8 DB in postgres and see
 what happens I guess?



Uh oh.

On the remote machine:

$ pg_dump -Fc -E UTF8 foo > foo.sql

Then I've created a new local DB with UTF8 encoding and I try to restore
this dump into it:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2342; 0 17086 TABLE DATA
wine_books vinosmith
pg_restore: [archiver (db)] COPY failed for table wine_books: ERROR:
 invalid byte sequence for encoding UTF8: 0xc309
CONTEXT:  COPY wine_books, line 1147
WARNING: errors ignored on restore: 1

And sure enough the table wine_books is empty. Not good.


Re: [GENERAL] Change server encoding after the fact

2011-09-30 Thread Scott Marlowe
On Fri, Sep 30, 2011 at 2:57 PM, Cody Caughlan tool...@gmail.com wrote:
 Please see below.

 On Fri, Sep 30, 2011 at 1:12 PM, Scott Marlowe scott.marl...@gmail.com
 wrote:

 On Fri, Sep 30, 2011 at 1:45 PM, Cody Caughlan tool...@gmail.com wrote:
  That worked, but file shows no difference:
  $ iconv -f utf-8 -t utf-8 -c foo.sql > utf.sql
  $ file -i foo.sql
  foo.sql: text/plain; charset=us-ascii
  $ file -i utf.sql
  utf.sql: text/plain; charset=us-ascii
  So iconv didn't actually convert the file, OR is the file command just
  ignorant?

 Not sure.  Try loading the dump into the UTF-8 DB in postgres and see
 what happens I guess?


 Uh oh.
 On the remote machine:
 $ pg_dump -Fc -E UTF8 foo > foo.sql
 Then I've created a new local DB with UTF8 encoding and I try to restore
 this dump into it:
 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 2342; 0 17086 TABLE DATA
 wine_books vinosmith
 pg_restore: [archiver (db)] COPY failed for table wine_books: ERROR:
  invalid byte sequence for encoding UTF8: 0xc309
 CONTEXT:  COPY wine_books, line 1147
 WARNING: errors ignored on restore: 1
 And sure enough the table wine_books is empty. Not good.

You may have to hunt down that one bad line (1147) and chop it out /
edit it so it works.
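
A rough sketch of one way to do that, using the table and file names from the
error above (since the dump is custom format, you first extract that table's
data as plain text; the COPY line number is relative to the table data, so
allow for a few header lines at the top of the extracted file):

  # data-only extract of just the failing table from the custom-format dump
  pg_restore -a -t wine_books foo.sql > wine_books_data.sql
  # look at the bytes around the reported line to find the 0xc3 0x09 sequence
  sed -n '1140,1160p' wine_books_data.sql | hexdump -C | less

Once the bad bytes are visible you can decide whether to fix that row in place
or delete it, then load the edited file with psql.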

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connection refused (0x0000274D/10061)

2011-09-30 Thread Craig Ringer

On 09/30/2011 06:17 PM, Pietro Laranci wrote:


There are no active firewalls on the machines.


Most likely there _is_ a firewall involved, maybe just not one you're 
aware of. As Adrian said, check the settings in the standard Windows 
Firewall if you're on Windows.


For more troubleshooting: Check netstat to see if PostgreSQL is actually 
listening on the port you've configured. Use Wireshark to capture 
incoming tcp/ip traffic and see if (a) SYN packets to open the 
connection actually reach the server, and (b) whether the server 
actually replies with an RST to reject the connection or if something 
else happens.
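
As a quick illustration of the netstat check (assuming the default port 5432;
adjust if you have changed it):

  netstat -ano | findstr 5432      (on Windows)
  netstat -lnt | grep 5432         (on Linux/Unix)

If nothing shows up in a LISTEN state on that port, the server isn't listening
where you think it is and the firewall isn't the first problem to chase.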


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stored procs

2011-09-30 Thread Craig Ringer

On 09/30/2011 11:41 PM, John R Pierce wrote:

On 09/30/11 2:09 AM, J.V. wrote:

Some tables have millions of rows,


well, something like UPDATE tablename SET
id=generate_series(1,numberofrows); will update every row to a
sequential value. However, I have no idea how you would match the
foreign key references in other tables to these new sequence values.


There are two ways to do that.

You can add a *new* column for the new keys and generate them. Then you 
add a matching empty column to each referencing table, fill it using a 
JOIN against the old key, and ALTER each referencing table to add the 
FOREIGN KEY before dropping its old key column. Finally, you drop the 
old key column in the main table.


Alternately, you can ALTER all the foreign key references to be ON UPDATE 
CASCADE, then UPDATE the main table to set new keys. PostgreSQL will cascade 
the changes to the referencing tables.


The second method is simpler and you might think it'd be faster, but it 
probably won't be. The first method requires one sequential re-write of 
each table when the UPDATE to fill the new key columns runs, but is 
otherwise just a series of JOINs on key columns. On the other hand, the 
second method requires *lots* of *random* writes all over the place on 
the referencing tables, and is likely to be a lot slower even if you 
have indexes on your foreign key columns. If you *don't* have indexes on 
your foreign key columns the second method is going to be spectacularly, 
amazingly, stunningly slow.
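
A minimal sketch of the first method, using hypothetical tables "parent" and
"child" where child.parent_id references parent.id (table and constraint names
here are assumptions, not the original poster's schema):

  -- hypothetical schema: child.parent_id references parent.id
  ALTER TABLE parent ADD COLUMN new_id bigint;
  UPDATE parent SET new_id = ns.rn
    FROM (SELECT id, row_number() OVER (ORDER BY id) AS rn FROM parent) ns
    WHERE parent.id = ns.id;
  ALTER TABLE parent ADD CONSTRAINT parent_new_id_key UNIQUE (new_id);

  ALTER TABLE child ADD COLUMN new_parent_id bigint;
  UPDATE child SET new_parent_id = p.new_id
    FROM parent p WHERE child.parent_id = p.id;
  ALTER TABLE child ADD CONSTRAINT child_new_parent_id_fkey
    FOREIGN KEY (new_parent_id) REFERENCES parent (new_id);

  ALTER TABLE child DROP COLUMN parent_id;   -- drops the old FK with the column
  ALTER TABLE parent DROP COLUMN id;         -- drops the old PK with the column

Renaming new_id back to id (and new_parent_id back to parent_id) afterwards is
optional tidying.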


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general