[GENERAL] Full Text Partial Match at beginning

2010-11-30 Thread AI Rumman
Is it possible to match '%text' (a suffix match) in PostgreSQL 9 full text search?
select to_tsvector('english','this is advantage') @@ to_tsquery('tage');
f
I need this kind of match to return true.
Any ideas, please?
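For what it's worth, tsquery supports prefix matching directly via the :* syntax (8.4 and later), but there is no tsquery syntax for suffix matching; a sketch of both (the LIKE workaround is an assumption, not something full text search provides):

```sql
-- prefix matching is supported by tsquery via :*
SELECT to_tsvector('english', 'this is advantage') @@ to_tsquery('advan:*');

-- suffix matching has no tsquery equivalent; a plain LIKE expresses it,
-- though it cannot use an ordinary btree index:
SELECT 'advantage' LIKE '%tage';
```

Both queries should return true on a stock 'english' configuration.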


Re: [GENERAL] PostgreSQL hanging on new connections?

2010-11-30 Thread Craig Ringer

On 11/30/2010 03:28 PM, Dusan Misic wrote:

We're having similar issues on 8.4.[245]... occasionally psql takes
anywhere from a few to several dozen seconds to connect. I've been
unsuccessfully trying to blame spikes in the OS run queue (we
desperately need some connection pooling) but if it's something to
do with locks I can't see in pg_locks, that would explain why I
haven't been able to figure out what's going on yet
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org
mailto:pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


This is normal. PostgreSQL needs to create a new server process to handle
your requested connection.

Then it needs to allocate resources to that new connection. It
initializes shared memory for that connection. That is the stall you are
mentioning.


Eh, what?

Forking a backend and attaching to shared memory should *not* take a 
few seconds. On my test machine it takes 100ms to fork psql, connect to 
the postmaster, fork a backend, init the backend, authenticate, run a 
dummy query and exit psql.


If you're seeing delays like that, your machine is horrifyingly 
overloaded or there's something else wrong.


--
Craig Ringer



[GENERAL] select max()

2010-11-30 Thread Alexander Farber
Hello,

I have this table where I store player results for each week:

# select * from pref_money limit 5;
       id       | money |   yw
----------------+-------+---------
 OK32378280203  |   -27 | 2010-44
 OK274037315447 |   -56 | 2010-44
 OK19644992852  | 8 | 2010-44
 OK21807961329  |   114 | 2010-44
 FB1845091917   |   774 | 2010-44
(5 rows)

I'm trying to find the winners for each week and
also how many times each player has won,
so that I can display a number of medals
on his or her profile page.

I'm trying:

# select id, money from pref_money where money in
   (select max(money) from pref_money group by yw);
       id       | money
----------------+-------
 DE8048 |  3927
 VK91770810 |  6133
 DE7115 |  6655
 OK252342810632 |  8053
 OK22853997 |  1013
(5 rows)

But I'm worried that WHERE ... IN will return false positives
(for cases with the same money values),
and I also don't know how to count the repeat winners.

Thank you
Alex
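One way to sidestep the IN false-positive worry is to pick the top row per week directly with DISTINCT ON; a sketch against the table above (untested against the real data):

```sql
-- one winner row per yw: the row with the highest money that week
SELECT DISTINCT ON (yw) yw, id, money
FROM pref_money
ORDER BY yw, money DESC;

-- win counts per player, reusing the per-week winners as a subquery
SELECT id, count(*) AS medals
FROM (SELECT DISTINCT ON (yw) yw, id
      FROM pref_money
      ORDER BY yw, money DESC) AS winners
GROUP BY id;
```

Note that DISTINCT ON picks an arbitrary one of several rows tied for the weekly maximum; if ties must all count, a window function is needed instead.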



Re: [GENERAL] PostgreSQL hanging on new connections?

2010-11-30 Thread Dusan Misic
On Tue, Nov 30, 2010 at 10:20 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:

 On 11/30/2010 03:28 PM, Dusan Misic wrote:

We're having similar issues on 8.4.[245]... occasionally psql takes
anywhere from a few to several dozen seconds to connect. I've been
unsuccessfully trying to blame spikes in the OS run queue (we
desperately need some connection pooling) but if it's something to
do with locks I can't see in pg_locks, that would explain why I
haven't been able to figure out what's going on yet


 This is normal. PostgreSQL needs to create a new server process to handle
 your requested connection.

 Then it needs to allocate resources to that new connection. It
 initializes shared memory for that connection. That is the stall you are
 mentioning.


 Eh, what?

 Forking a backend and attaching to shared memory should *not* take a few
 seconds. On my test machine it takes 100ms to fork psql, connect to the
 postmaster, fork a backend, init the backend, authenticate, run a dummy
 query and exit psql.

 If you're seeing delays like that, your machine is horrifyingly overloaded
 or there's something else wrong.

 --
 Craig Ringer


This is not happening to me. Even on my home computer (which is a terribly
slow single-core Sempron 2600+ with 128 kB of L2 cache, 1.5 GB of DDR RAM and
slow ATA hard disks) it is fast to connect. The maximum (slowest)
connection time is about 500 ms. I did some tuning in postgresql.conf.

I'm very happy with PostgreSQL speed on my 'snail' computer. :)


Re: [GENERAL] PostgreSQL hanging on new connections?

2010-11-30 Thread Florian Weimer
* hubert depesz lubaczewski:

 Now, the question is: why did it hang? Is there anything we can do to
 make it *not* hang?

It might be some general system overload issue.  Try running
echo w > /proc/sysrq-trigger as root the next time it happens.  This will dump
kernel backtraces to dmesg, which might hint at what's going on with
the system.

-- 
Florian Weimer            fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [GENERAL] alter table add column - specify where the column will go?

2010-11-30 Thread Jasen Betts
On 2010-11-24, Daniel Verite dan...@manitou-mail.org wrote:
   Fredric Fredricson wrote:

 But if you change the column names in the second SELECT in the UNION this is
 ignored:
 # SELECT c1,c2 FROM (SELECT 1 AS c1, 2 AS c2 UNION SELECT 2 AS c3, 1 AS c4)
 AS x;
 c1 | c2
----+----
   1 |  2
   2 |  1
 Apparently, in a UNION the column names are derived from the first statement
 only.

 The example upthread demonstrates that in certain contexts, column positions
 are relevant whereas column names are not. The modified query you show here
 doesn't lead to any different conclusion.

 The allegation that row.* doesn't come with a deterministic column order
 remains pretty much unsubstantiated at this point.

It's deterministic, just subject to change (eg if the table is
re-created with a different order, or if a column is dropped and
re-added)

I try to always use column and table names when dealing with tables
other people could mess with.

If dealing with a temp table I sometimes take shortcuts as the source
is all in one place, so anyone messing with it will hopefully be aware
of the consequences of their actions.

-- 
⚂⚃ 100% natural



[GENERAL] advise on performance issues please

2010-11-30 Thread Gregory Machin
Hi
this is the first time I'm working with PostgreSQL other than a default install.

I have a CentOS 5.5 virtual machine with -
4 virtual cpus , 8 Gig RAM , resource pool set to High (8000)

running on a vmware ESXi 4.1 host -
4 x 2.4 GHz cpus AMD 6 cores each, 96 Gig ram, storage is provided by
HP Left hand SAN iSCSI.

I'm running the PostgreSQL 8.4.4 rpm from the postgresql repo.

What I need to clarify is whether the performance issue the website
that uses the database is experiencing is related to PostgreSQL
misconfiguration or bad code in the site.

Customisations in the postgresql.conf

max_connections = 1000
shared_buffers = 4096MB
temp_buffers = 512MB
work_mem = 10MB                         # min 64kB
maintenance_work_mem = 160MB            # min 1MB
fsync = off
synchronous_commit = on
wal_sync_method = open_sync
full_page_writes = off
effective_cache_size = 32MB
join_collapse_limit = 1
autovacuum_vacuum_cost_delay = 100ms
checkpoint_segments = 60

this configuration gives the following pgbench results

[mac...@topnz15209-linux ~]$ pgbench -h drvppgs01 -U postgres -c 40 -T 120 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 40
duration: 120 s
number of transactions actually processed: 47070
tps = 391.660580 (including connections establishing)
tps = 392.523468 (excluding connections establishing)

pgbench was setup with  pgbench -h drvppgs01 -U postgres -c 40 -T 120
-i test -F 100 -S 15

mpstat every 5 seconds gives the following:

11:17:15 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
11:17:15 PM  all    9.04    0.00    1.77    0.22    0.06    0.24    0.00   88.67   1154.93
11:17:15 PM    0    8.10    0.00    1.80    0.44    0.02    0.13    0.00   89.52    989.97
11:17:15 PM    1   11.77    0.00    2.09    0.07    0.09    0.33    0.00   85.65     75.42
11:17:15 PM    2    7.30    0.00    1.50    0.25    0.00    0.09    0.00   90.86      2.00
11:17:15 PM    3    8.98    0.00    1.70    0.13    0.12    0.41    0.00   88.66     87.54
            total       used       free     shared    buffers     cached
Mem:          7983       7748        235          0        157       7226
-/+ buffers/cache:        363       7619
Swap:         1725          0       1725
Linux 2.6.18-194.26.1.el5 (drvppgs01)   11/30/2010

11:17:20 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
11:17:20 PM  all    9.04    0.00    1.77    0.22    0.06    0.24    0.00   88.67   1155.01
11:17:20 PM    0    8.10    0.00    1.80    0.44    0.02    0.13    0.00   89.51    989.97
11:17:20 PM    1   11.77    0.00    2.09    0.07    0.09    0.33    0.00   85.65     75.42
11:17:20 PM    2    7.30    0.00    1.50    0.25    0.00    0.09    0.00   90.86      2.00
11:17:20 PM    3    8.99    0.00    1.70    0.13    0.12    0.42    0.00   88.66     87.63
            total       used       free     shared    buffers     cached
Mem:          7983       7748        235          0        157       7226
-/+ buffers/cache:        364       7619
Swap:         1725          0       1725
Linux 2.6.18-194.26.1.el5 (drvppgs01)   11/30/2010

11:17:25 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
11:17:25 PM  all    9.04    0.00    1.77    0.22    0.06    0.24    0.00   88.67   1155.09
11:17:25 PM    0    8.10    0.00    1.80    0.44    0.02    0.13    0.00   89.51    989.97
11:17:25 PM    1   11.77    0.00    2.09    0.07    0.09    0.33    0.00   85.65     75.42
11:17:25 PM    2    7.31    0.00    1.50    0.25    0.00    0.09    0.00   90.86      2.00
11:17:25 PM    3    8.99    0.00    1.70    0.13    0.12    0.42    0.00   88.66     87.71
            total       used       free     shared    buffers     cached
Mem:          7983       7748        235          0        157       7226
-/+ buffers/cache:        363       7619
Swap:         1725          0       1725
Linux 2.6.18-194.26.1.el5 (drvppgs01)   11/30/2010

11:17:30 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
11:17:30 PM  all    9.04    0.00    1.77    0.22    0.06    0.24    0.00   88.67   1155.18
11:17:30 PM    0    8.10    0.00    1.80    0.44    0.02    0.13    0.00   89.51    989.97
11:17:30 PM    1   11.77    0.00    2.09    0.07    0.09    0.33    0.00   85.65     75.41
11:17:30 PM    2    7.31    0.00    1.50    0.25    0.00    0.09    0.00   90.86      2.00
11:17:30 PM    3    8.99    0.00    1.70    0.13    0.12    0.42    0.00   88.65     87.80
            total       used       free     shared    buffers     cached
Mem:          7983       7748        235          0        157       7226
-/+ buffers/cache:        363       7619
Swap:         1725          0       1725
Linux 2.6.18-194.26.1.el5 (drvppgs01)   11/30/2010

11:17:35 PM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
11:17:35 PM  all    9.04    0.00    1.77    0.22    0.06    0.24    0.00   88.67   1155.23
11:17:35 PM    0    8.10    0.00    1.80    

[GENERAL] how to increase upsize speed

2010-11-30 Thread Andrus
I installed PostgreSQL 9 on a Windows Server 2008 R2 x64 dedicated server and
used Stack Builder to optimize the postgresql.conf file as a dedicated server.

The upsize program runs on the same server.
The upsize starts a transaction.
Then it creates a new table and sends insert statements to the server through
the ODBC driver for every table.


After all data is sent, the transaction is committed, the analyze command is
executed, and primary and foreign keys and other constraints are added.


Some tables are large. This process takes a number of hours to complete.
The server hardware is modern; it is expected that this process should not
take so much time.


How can I speed it up? Should I turn sync off for the upsize, or is there another idea?
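For a one-off load like this, a few session-level settings are commonly relaxed for the duration of the upsize; a sketch, not a recommendation for normal operation (note fsync itself can only be changed in postgresql.conf, not per session):

```sql
-- more memory for the CREATE INDEX / ADD CONSTRAINT phase at the end
SET maintenance_work_mem = '512MB';

-- don't wait for WAL flush on each commit; a crash can lose the most
-- recent transactions but does not corrupt the database
SET synchronous_commit = off;
```

Batching many rows per transaction (which the poster already does) and adding indexes and constraints only after the data is in are usually the biggest wins.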

Andrus Moor
OÜ Eetasoft
Akadeemia 21-G302
Tallinn 12618
http://www.eetasoft.ee
http://eeva.eetasoft.ee
tel. 6654214,  6654215 





Re: [GENERAL] select max()

2010-11-30 Thread Alexander Farber
Ok, it is

# select id from pref_money where money in (select max(money) from
pref_money group by yw);
       id
----------------
 DE8048
 VK91770810
 DE7115
 OK252342810632
 OK22853997
(5 rows)

And to see how many times a player has won:

# select count(id) from pref_money where id='DE7115' and money in
(select max(money) from pref_money group by yw);
 count
---
 1
(1 row)

My only problem is how to prevent false positives,
when the money value in 2 months is the same and
in 1 of the months it is the max value, but in the
other month it is not the max value and belongs to another id.
That id will then be falsely indicated as a winner...

Regards
Alex
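The tie problem described above goes away if the maximum is computed per week rather than matched globally; a window-function sketch (requires 8.4 or later, untested against the real table):

```sql
-- rank rows within each week; rank 1 is that week's winner, even if
-- the same money value also happens to appear in some other week
SELECT id, yw, money
FROM (SELECT id, yw, money,
             rank() OVER (PARTITION BY yw ORDER BY money DESC) AS r
      FROM pref_money) AS ranked
WHERE r = 1;
```

Counting wins per player is then just a GROUP BY id over this result.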



Re: [GENERAL] PostgreSQL hanging on new connections?

2010-11-30 Thread hubert depesz lubaczewski
On Mon, Nov 29, 2010 at 03:57:29PM -0500, Vick Khera wrote:
 On Mon, Nov 29, 2010 at 1:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  hubert depesz lubaczewski dep...@depesz.com writes:
  straced postmaster when the problem was happening, and I was opening new
  connections. strace looks like this:
  [ backend hangs on semop immediately after reading global/pg_database ]
 
  It looks like something had exclusive lock on the database that new
  connections wanted to connect to.  AFAICS the only action in 8.3 that
  would do that would be a DROP DATABASE or RENAME DATABASE.  What was
  that other session doing?
 
 Every once in a while when I connect to my big DB it feels like it
 stalls the connection for a few seconds.  I have no idea what causes
 it, but it does feel like it has to do with the load.  The load is not
 all that high relative to what my box can handle.  It doesn't happen
 often enough for me to track it down, though.  My instinct is that it
 is waiting on a lock, but clearly there is no rename/drop happening on
 my main db else i'd be out of business :-)  I too run 8.3 as primary
 right now... testing 9.0 for deployment soon-ish.
 
 just so it will be clear - on some other occasion when we had the
 problem, i left it hang for a while. 5 minutes later i still didn't get
 the connection.

 Best regards,

 depesz


-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] PostgreSQL hanging on new connections?

2010-11-30 Thread hubert depesz lubaczewski
On Tue, Nov 30, 2010 at 10:12:47AM +, Florian Weimer wrote:
 * hubert depesz lubaczewski:
 
  Now, the question is: why did it hang? Is there anything we can do to
  make it *not* hang?
 
 It might be some general system overload issue.  Try running
 echo w > /proc/sysrq-trigger as root the next time it happens.  This will dump
 kernel backtraces to dmesg, which might hint at what's going on with
 the system.

will check the idea, but the thing is that aside from postgresql -
everything else on the server is working fine - ssh/top/*stat.
and the other postgres instances seem to be locked too.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] advise on performance issues please

2010-11-30 Thread Mark Felder
On Tue, 30 Nov 2010 04:34:32 -0600, Gregory Machin g...@linuxpro.co.za  
wrote:



running on a vmware ESXi 4.1 host -
4 x 2.4 GHz cpus AMD 6 cores each, 96 Gig ram, storage is provided by
HP Left hand SAN iSCSI.


Does the VM do iSCSI itself to get access to the filesystem on the SAN, or  
is this just a generic setup where ESX's datastores are on the iSCSI SAN?




Mark



[GENERAL] techniques for bulk load of spatial data

2010-11-30 Thread Mario Corchero

Hi, I'm a Computer Science student.
I know different bulk-load techniques, but I need to know how
specifically PostgreSQL does a bulk load of spatial data. Could anyone
help me, please?

Thank you.



Re: [GENERAL] techniques for bulk load of spatial data

2010-11-30 Thread Andy Colson

On 11/30/2010 7:29 AM, Mario Corchero wrote:

Hi, I'm a Computer Science student.
I know different bulk-load techniques, but I need to know how
specifically PostgreSQL does a bulk load of spatial data. Could anyone
help me, please?
Thank you.



That is a pretty generic question.  Have you run into problems?  What
have you tried?


In general, use COPY.  If it's a one-time load, temporarily disable fsync.
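A minimal COPY sketch (the table, columns, and file path here are invented for illustration; with PostGIS the geometry column would typically arrive as EWKB hex from a tool like shp2pgsql):

```sql
-- load tab-separated rows in bulk; far faster than per-row INSERTs
COPY roads (id, name, geom) FROM '/tmp/roads.tsv';
```

COPY runs the whole load as a single command, avoiding per-statement parse and commit overhead.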

I use shp2pgsql all the time, and it loads tens of thousands of records 
a second.  (I've never timed it; it was never something slow that I 
needed to fix.  I just ran it and moved on.)


Do you have shape files you need to load?  Have you tuned your 
postgresql.conf?  Do you want a util to import data for you, or are you 
writing your own? Are you using PostGIS?


No one can give you specifics without a bunch more detail about what you 
want.


-Andy



Re: [GENERAL] advise on performance issues please

2010-11-30 Thread Andy Colson

On 11/30/2010 4:34 AM, Gregory Machin wrote:

Hi
this is the first time I'm working with PostgreSQL other than a default install.

I have a CentOS 5.5 virtual machine with -
4 virtual cpus , 8 Gig RAM , resource pool set to High (8000)

running on a vmware ESXi 4.1 host -
4 x 2.4 GHz cpus AMD 6 cores each, 96 Gig ram, storage is provided by
HP Left hand SAN iSCSI.

I'm running the PostgreSQL 8.4.4 rpm from the postgresql repo.

What I need to clarify is whether the performance issue the website
that uses the database is experiencing is related to PostgreSQL
misconfiguration or bad code in the site.

Customisations in the postgresql.conf

max_connections = 1000
shared_buffers = 4096MB
temp_buffers = 512MB
work_mem = 10MB # min 64kB
maintenance_work_mem = 160MB# min 1MB
fsync = off
synchronous_commit = on
wal_sync_method = open_sync
full_page_writes = off
effective_cache_size = 32MB
join_collapse_limit = 1
autovacuum_vacuum_cost_delay = 100ms
checkpoint_segments = 60

this configuration gives the following pgbench results

[mac...@topnz15209-linux ~]$ pgbench -h drvppgs01 -U postgres -c 40 -T 120 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 40
duration: 120 s
number of transactions actually processed: 47070
tps = 391.660580 (including connections establishing)
tps = 392.523468 (excluding connections establishing)

pgbench was setup with  pgbench -h drvppgs01 -U postgres -c 40 -T 120
-i test -F 100 -S 15

mpstat every 5 seconds gives the following:




pgbench needs its scale (-s) and #connections (-c) to be close.  You 
have a scale of 1 and 40 connections, so this is a bad test.  Init with 
-s 40, then run with -c at 40 or less.


Your mpstat output is hard to read because of the word wrapping.  Not 
sure if iostat would be more readable (it's usually the one I look at; 
I'd never even heard of mpstat before now).


 fsync = off

That's a bad idea if you care about your data.

 effective_cache_size = 32MB

That looks wrong; you'd better read up on that one in the docs.

-Andy



[GENERAL] how can i bugfix idle in transaction lockups ?

2010-11-30 Thread Jonathan Vanasco
on a project, i find myself continually finding the database locked up with 
"idle in transaction" connections

are there any commands that will allow me to check exactly what was going on in 
that transaction?

i couldn't find anything in the docs, and the project has decent traffic, so 
it's going to be an issue to log all statements so I can sift through the data 
by PID
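As a starting point, the stock statistics views will at least show which sessions are idle inside a transaction and for how long (8.3/8.4-era column names; procpid and current_query were renamed in 9.2):

```sql
-- list sessions stuck idle inside a transaction, oldest first
SELECT procpid, usename, xact_start, backend_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```

This doesn't show what the transaction already executed, but the PID and transaction start time narrow down which application connection to investigate.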




Re: [GENERAL] how can i bugfix idle in transaction lockups ?

2010-11-30 Thread Ben Chobot
On Nov 30, 2010, at 7:21 AM, Jonathan Vanasco wrote:

 on a project, i find myself continually finding the database locked up with 
 idle in transaction connections
 
 are there any commands that will allow me to check exactly what was going on 
 in that transaction ?
 
 i couldn't find anything in the docs, and the project has decent traffic, so 
 its going to be an issue to log all statements so I can sift through the data 
 by PID

Are those transactions actually holding locks? (You can verify by checking 
pg_locks.) If so, that might give you a clue. If not, then they shouldn't be 
causing any lockups.


[GENERAL] How to indicate failure of a shell script to pgagent

2010-11-30 Thread Wappler, Robert
Hi,
I'm using PostgreSQL 8.4.5 and pgagent 1.8.4 on Debian Lenny. I set up a 
pgagent job which executes every 15 minutes. In one step, a shell script is 
executed. For testing purposes, the script is simply a one-liner containing 
"exit 1". Nevertheless, pgAdmin tells me the job executed successfully.

So my question is: How can a shell script indicate a failure to pgagent, so 
that the job execution is considered to be failed?

The step's attribute is set to fail-on-error.

Thanks in advance





 




[GENERAL] Extension for file management under postgres

2010-11-30 Thread A J
Does anyone know of a module for postgres that can do the metadata management 
in postgres but store the actual binary objects in the filesystem?
Not really using Postgres large objects; I want to do just the 
metadata management with postgres. For example, I need to store 
file name/size/directory/drive/node in postgres but the actual file in a 
filesystem.
Also utilities that will allow a file rename (and metadata rename) within 
a single transaction. Similarly file delete or file copy as well.
Also maybe webdav or ftp extensions to do get/put from a web front end. (the 
metadata would come from postgres and the actual object from the filesystem)

This sounds like a typical exercise and I don't want to reinvent it if the code 
is already out there.

many thanks ! -AJ


  

[GENERAL] How to find correct locale name for CREATE DATABASE

2010-11-30 Thread Andrus

I tried in  Debian (PostgreSQL 9.0.1 on x86_64-pc-linux-gnu, compiled by
GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit) :

1. CREATE DATABASE mydb WITH TEMPLATE=template0
LC_COLLATE='Estonian_Estonia.1257' LC_CTYPE='Estonian_Estonia.1257'
OWNER=mydb_owner ENCODING='UNICODE'

2. CREATE DATABASE mydb WITH TEMPLATE=template0  LC_COLLATE='et_EE.UTF-8'
LC_CTYPE='et_EE.UTF-8' OWNER=mydb_owner ENCODING='UNICODE'

3. CREATE DATABASE mydb WITH TEMPLATE=template0 
LC_COLLATE='et_EE.UTF8'

LC_CTYPE='et_EE.UTF8' OWNER=mydb_owner ENCODING='UNICODE'

in all cases the same error

invalid locale name

occurs.

Questions:

1. How do I find the correct locale name for Estonian in Debian Linux?
On Windows, case (1) works.

2. How do I create a portable CREATE DATABASE command for the Estonian locale 
which works on all operating systems?


3. Or how do I detect the OS from the PostgreSQL server and select the correct 
Estonian locale?


Andrus. 





Re: [GENERAL] How to find correct locale name for CREATE DATABASE

2010-11-30 Thread Thom Brown
2010/11/30 Andrus kobrule...@hot.ee:
 I tried in  Debian (PostgreSQL 9.0.1 on x86_64-pc-linux-gnu, compiled by
 GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit) :

 1. CREATE DATABASE mydb WITH TEMPLATE=template0
 LC_COLLATE='Estonian_Estonia.1257' LC_CTYPE='Estonian_Estonia.1257'
 OWNER=mydb_owner ENCODING='UNICODE'

 2. CREATE DATABASE mydb WITH TEMPLATE=template0  LC_COLLATE='et_EE.UTF-8'
 LC_CTYPE='et_EE.UTF-8' OWNER=mydb_owner ENCODING='UNICODE'

 3. CREATE DATABASE mydb WITH TEMPLATE=template0 LC_COLLATE='et_EE.UTF8'
 LC_CTYPE='et_EE.UTF8' OWNER=mydb_owner ENCODING='UNICODE'

 in all cases same error

 invalid locale name

 occurs.

 Questions:

 1. How to find correct locale name for Estonian in Debian  Linux ?
 In Windows case (1) works.

You can list available locales on your system with: locale -a
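Debian typically spells these in lowercase, so assuming et_EE.utf8 appears in the locale -a output (the exact spelling must be taken from that list), the statement would look like:

```sql
CREATE DATABASE mydb WITH TEMPLATE = template0
  LC_COLLATE = 'et_EE.utf8' LC_CTYPE = 'et_EE.utf8'
  OWNER = mydb_owner ENCODING = 'UTF8';
```

Since the accepted locale names differ between Windows and Linux, a fully portable CREATE DATABASE string is not really achievable; the locale name has to be chosen per host.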

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935



Re: [GENERAL] Comparing first 3 numbers of a IPv4 address?

2010-11-30 Thread Jasen Betts
On 2010-11-20, Alexander Farber alexander.far...@gmail.com wrote:


 1) if I'd like to compare just the first 3 numbers of
 the IP address instead of the 4, how can I do it?
 (yes, I know about the A,B,C type of IPv4 networks...)

 have you heard of CIDR (what about IPV6, which I'm going to ignore,
but you should consider unless this code is throw-away)
  
 just use set_masklen and the >> operator.
 
 select set_masklen('1.2.3.4'::inet,24) >> '1.2.3.244'::inet;
   
-- 
⚂⚃ 100% natural



Re: [GENERAL] finding rows with invalid characters

2010-11-30 Thread Jasen Betts
On 2010-11-21, Sim Zacks s...@compulab.co.il wrote:
 I am using PG 8.2.17 with UTF8 encoding.
 PostgreSQL 8.2.17 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.1 
 (Gentoo 4.1.1)

 One of my tables somehow has invalid characters in it:
 ERROR:  invalid byte sequence for encoding UTF8: 0xa9
 HINT:  This error can also happen if the byte sequence does not match 
 the encoding expected by the server, which is controlled by 
 client_encoding.
 I have already manually found a number of the bad rows by running 
 queries with text functions (upper) between groups of IDs until I found 
 the specific bad row.


 1) Is there a quicker way to get a list of all rows with invalid characters

dump the table, run it through iconv, and diff against the original.
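The manual bisection with text functions that the original poster describes can also be automated server-side; a sketch (table and column names here are assumptions) that applies the same upper() trick row by row and traps the encoding error:

```sql
-- hypothetical helper: return the ids of rows whose text column
-- raises an error when passed through a text function
CREATE OR REPLACE FUNCTION find_bad_rows() RETURNS SETOF integer AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id, txt FROM mytable LOOP
        BEGIN
            PERFORM upper(r.txt);   -- forces the byte sequence to be processed
        EXCEPTION WHEN OTHERS THEN
            RETURN NEXT r.id;       -- remember rows that blow up
        END;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

SELECT * FROM find_bad_rows();
```

Each trapped exception uses a subtransaction, so this is slow on big tables, but it avoids the manual binary search over id ranges.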

 2) Shouldn't the database prevent these rows from being entered in the 
 first place?

it should have, but that bug has now been fixed.

 3) I have backups of this database (using -Fc) and I noticed that on 
 restore, this table is not restored because of this error. Is there a 
 way to fix the existing backups, or tell the restore to ignore bad rows 
 instead of erroring out the whole table?

translate them to SQL (use pg_restore with no database name);
then you can again use iconv to clean them.

Use iconv something like this:

  iconv --from-code UTF8 --to-code UTF8 -c input_file > output_file

This will translate surrogates and drop other invalid characters.

if you have any constraints that place lower bounds on string-length
this has the potential to break them.

-- 
⚂⚃ 100% natural



Re: [GENERAL] how can i bugfix idle in transaction lockups ?

2010-11-30 Thread Merlin Moncure
On Tue, Nov 30, 2010 at 10:21 AM, Jonathan Vanasco postg...@2xlp.com wrote:
 on a project, i find myself continually finding the database locked up with 
 idle in transaction connections

 are there any commands that will allow me to check exactly what was going on 
 in that transaction ?

 i couldn't find anything in the docs, and the project has decent traffic, so 
 its going to be an issue to log all statements so I can sift through the data 
 by PID

You can match the procpid on pg_stat_activity vs pid on pg_locks.
This will give you the relation, which you can cross-reference against
the pg_database and pg_class system tables -- that should give a few
clues.
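The cross-reference described above looks roughly like this (8.x column names: procpid and current_query):

```sql
-- which relations the idle-in-transaction backends hold locks on
SELECT a.procpid, a.usename, c.relname, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE a.current_query = '<IDLE> in transaction';
```

The LEFT JOIN keeps lock rows that have no relation (e.g. transaction-id locks), which would otherwise disappear from the output.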

You can also break down various things interacting with the database
by role.  For example, have the website auth into the database with a
'website' role, backend reporting scripts with 'reports', etc.  That
way pg_stat_activity might tell you the specific trouble maker that is
doing this.

After that, it's about finding the bug -- are you using connection
pooling?  BEGIN without COMMIT is a grave application error and you should
consider reworking your code base so that it doesn't happen (ever).

merlin



Re: [GENERAL] lock file permisson

2010-11-30 Thread Peter Eisentraut
On mån, 2010-11-29 at 21:57 +0530, Mohammed Rashad wrote:
 when i start postgresql using
 /etc/init.d/postgresql-8.4 start
 I am getting this error
 IST FATAL:  could not create lock file
 /var/run/postgresql/.s.PGSQL.5432.lock: Permission denied

You probably need to run this command as root.




[GENERAL] Pg_upgrade question

2010-11-30 Thread DM
We are planning to upgrade to 9.0.1 from 8.4/8.3. On my production
server, there isn't much space left to do a pg_upgrade (copy mode).

Can I execute pg_upgrade from the new box by pointing to the data directory and
binaries on the old box?


Thanks
Deepak


Re: [GENERAL] Pg_upgrade question

2010-11-30 Thread Devrim GÜNDÜZ
On Tue, 2010-11-30 at 14:00 -0800, DM wrote:
 We are planning to upgrade to 9.01 from 8.4/8.3 version. On my
 production Server, there isn't much space left to do a pg_upgrade
 (Copy),

You can use pg_upgrade in link mode -- and AFAIK it uses less extra
space as compared to copy mode:

http://www.postgresql.org/docs/9.0/static/pgupgrade.html

 Can I execute pg_upgrade from new box by pointing to the data
 directory and binaries on old box. 

Alternatively, you can use pg_dump of 9.0 to dump the data, and restore
it using psql, since you don't really need pg_upgrade here -- you will
be moving to new hardware anyway.
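A hedged sketch of that approach, run from the new box with the 9.0 client tools ('oldhost' and 'mydb' are placeholders):

```shell
# Roles and other globals first:
pg_dumpall -h oldhost --globals-only | psql -d postgres

# Then each database, piped straight into the new 9.0 server:
createdb mydb
pg_dump -h oldhost mydb | psql -d mydb
```

This avoids needing scratch space on the old server, since the dump streams over the network directly into the new cluster.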

Regards, 
-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] Pg_upgrade question

2010-11-30 Thread Vick Khera
On Tue, Nov 30, 2010 at 5:00 PM, DM dm.a...@gmail.com wrote:
 Can I execute pg_upgrade from new box by pointing to the data directory and
 binaries on old box.

If you're moving to a new box anyway, you might as well just do the
traditional pg_dump and restore.



[GENERAL] Warm Standby log filling up with FATAL: the database system is starting up entries

2010-11-30 Thread Greg Swisher
Hi Everybody,

I am not a Linux expert, but I can follow documentation well enough that I
have been able to get PostgreSQL 9.0.1 up and running on Debian Lenny in a
primary + warm standby configuration, for the most part.

I am successfully sending WAL files to a standby server temp location, and
the standby is able to read the log files after they arrive and process
them.  All seems to be working, however I am seeing 2 remaining issues for
which I cannot find an answer.  I turn to you!

On the standby server  the postgresql.conf file is at pure default.  The
recovery.conf file has only the restore_command configured as:

restore_command = '/usr/local/pgsql/bin/pg_standby -d -s 2 -t
/home/postgres/pgsql/pgsql.trigger /home/postgres/pgsql/backupdata %f %p %r
2>/usr/local/pgsql/data/standby.log'

The logfile gets these "FATAL: the database system is starting up" entries
on a regular basis.  They are not logged at nearly the frequency that
pg_standby is checking for the WAL files, though, according to a comparison
of the log files.  I have not found any info in searches about this
phenomenon.  (probably just don't know where to look!)

LOG:  database system was shut down in recovery at 2010-11-30 20:53:00 PST
LOG:  starting archive recovery
LOG:  restored log file 0001001A from archive
LOG:  redo starts at 0/1AC0
LOG:  consistent recovery state reached at 0/1B00
FATAL:  the database system is starting up
FATAL:  the database system is starting up
FATAL:  the database system is starting up
FATAL:  the database system is starting up
FATAL:  the database system is starting up
FATAL:  the database system is starting up
And on...
And on...
And on...
Forever...

Any suggestions?  I don't want to just hide them unless they are harmless.
Once I get things stable for a while I will muck about with the log rotation
and keep them weeded down to size.

The second issue has to do with pg_standby logging.

If I use the -d switch, I get lots of good info in the standby.log file, but
I also get a "WAL file not present yet. Checking for trigger file..." every
time pg_standby checks.  I tried removing the -d but then I get nothing at
all.  The documentation doesn't indicate what is logged without the -d
switch.  I like getting the detail when a WAL file is found and acted upon,
but I really don't want to see pages and pages of status check messages.

Any suggestions here?

Thanks so much

Greg 


[GENERAL] Does Postgres support bookmarks (tuples ids)?

2010-11-30 Thread Konstantin Izmailov
Dear experts,
I've noticed that commercial databases (e.g. SQLServer) and some open source
databases (e.g. Cubrid) support so-called bookmarks.
As far as I understand, a bookmark allows quickly jumping to a row for
retrieval or modification.

Here is the scenario I'm trying to deal with:
A BI/ETL application is querying the Postgres database. The queries return
lots of rows (36 mil), each about 1KB or larger.
So I'm using DECLARE CURSOR/FETCH to read the rows into a buffer (size is
1 rows, and I'm freeing memory for oldest rows).
The application may alter or re-read some previously read rows by row
index.
Problem is: if a row is not in the buffer (already freed), the application
cannot resolve a row index into the row itself.
I considered using a unique key to locate the row, but unfortunately some
queries do not allow determining a suitable unique key.

I'm thinking: is it possible to retrieve/alter a row by its index after a
Postgres cursor has read the row?

The application allows a customer to define the DB schema as well as the
queries, so my code has no prior knowledge of the DB or the queries.
It is supposed to provide a certain API with functions based on row indexes.
The API was initially designed for SQLServer, so the goal is to migrate the
application from SQLServer to Postgres.

Would you recommend a solution?

Thank you
Konstantin


Re: [GENERAL] Does Postgres support bookmarks (tuples ids)?

2010-11-30 Thread Dann Corbit
Perhaps you want to use the ctid.  You can query it like any other column:
http://www.postgresql.org/docs/9.0/static/ddl-system-columns.html

The ctid is not permanent.   An alternative is to create tables with OID values.
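A hedged sketch of what querying the ctid looks like ('items' is a hypothetical table):

```sql
-- Read back each row's physical location along with its data:
SELECT ctid, * FROM items LIMIT 5;

-- Re-fetch one specific row by the ctid seen above:
SELECT * FROM items WHERE ctid = '(0,3)';
```

Because any UPDATE of the row, VACUUM FULL, or CLUSTER can move the tuple and change its ctid, it is only safe as a short-lived handle, e.g. within a single transaction.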



Re: [GENERAL] Does Postgres support bookmarks (tuples ids)?

2010-11-30 Thread Dann Corbit
Is your application by chance using OLEDB?
If that is the case, then just get a PostgreSQL OLEDB provider that supports 
bookmarks.
