Re: [GENERAL] How to get normalized data from text column

2011-11-24 Thread Andrus Moor

Harald,

Thank you.


The query



SELECT id, a[1] AS name, a[2] AS percent
FROM ( SELECT id,
              regexp_split_to_array(regexp_split_to_table(comment, ', *'), '\W+') AS a
       FROM project ) AS dummy



should work on every halfway recent PostgreSQL version - dunno about 8.1.



I tried it, but got this error in 8.1:

ERROR:  function regexp_split_to_table(text, unknown) does not exist
HINT:  No function matches the given name and argument types. You may need 
to add explicit type casts.


How can I get this data in 8.1?
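One workaround that stays within 8.1 is a sketch like the following; it assumes
each comma-separated element of comment looks like 'name 50' and that no
comment has more than 20 elements, and uses string_to_array() plus
generate_series() instead of the missing regexp_split_to_table():

SELECT id,
       split_part(a[i], ' ', 1) AS name,
       split_part(a[i], ' ', 2) AS percent
FROM ( SELECT id, string_to_array(comment, ', ') AS a FROM project ) AS dummy,
     generate_series(1, 20) AS i          -- upper bound on elements per comment
WHERE i <= array_upper(a, 1);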

Andrus.



[GENERAL] How to add xml data to table

2011-10-07 Thread Andrus Moor
The SOAP response below contains a table of products, approx. 5000 rows.
The products table (below) has nearly the same structure as the XML data.

The products table needs to be updated from the XML data every hour.

How can I add this XML data to the products table?

Should I use the xpath() function, or is there some other idea?
Using npgsql and C# in ASP.NET / Mono.

Andrus.


CREATE TABLE products (
SupplierCode char(20) primary key,
SegmentId char(8),
GroupId char(8),
ClassId char(8),
SeriesId char(8),
VendorId char(2),
PartNumbrt char(27),
Name Text,
Warranty Numeric(6,2),
Price Numeric(10,4),
Quantity Numeric(8,2)
)

The data that needs to be added looks like:

<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soap12="http://www.w3.org/2003/05/soapenvelope">
  <soap12:Body>
    <GetProductListResponse xmlns="http://xxx.yy.zz/">
      <GetProductListResult>
        <ProductList>
          <Product>
            <SupplierCode>001982</SupplierCode>
            <SegmentId>6500</SegmentId>
            <GroupId>6501</GroupId>
            <ClassId>65010200</ClassId>
            <SeriesId>10001125</SeriesId>
            <VendorId>AM</VendorId>
            <PartNumber>ADA3000BIBOX</PartNumber>
            <Name>AMD Athlon64 3000+ (1800MHz/L2 Cache 512KB) Socket 939, BOX</Name>
            <Warranty>36</Warranty>
            <Price>196.0</Price>
            <Quantity>0</Quantity>
            <DateExpected>1999-01-01T00:00:00</DateExpected>
            <IsNewProduct>true</IsNewProduct>
          </Product>
          <Product>
            <SupplierCode>001512</SupplierCode>
            <SegmentId>6500</SegmentId>
            <GroupId>6501</GroupId>
            <ClassId>65010200</ClassId>
            <SeriesId>10001125</SeriesId>
            <VendorId>AM</VendorId>
            <PartNumber>ADA3000AXBOX</PartNumber>
            <Name>AMD Athlon64 3000+ (2000MHz/1600MHz/L2 Cache 512KB) Socket 754, BOX</Name>
            <Warranty>36</Warranty>
            <Price>296.0</Price>
            <Quantity>0</Quantity>
            <GrossWeight>3.6000</GrossWeight>
            <DateExpected>1999-01-01T00:00:00</DateExpected>
            <IsNewProduct>false</IsNewProduct>
          </Product>
        </ProductList>
      </GetProductListResult>
    </GetProductListResponse>
  </soap12:Body>
</soap12:Envelope>
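One way to load the Product elements above into the table is server-side with
xpath(). This is only a sketch: it assumes PostgreSQL 8.4 or later (xml type,
xpath(), unnest()), that $1 is the SOAP document passed from C#/npgsql as an
xml-typed parameter, and that the prefix ns is bound to the feed's default
namespace. The remaining columns follow the same pattern.

INSERT INTO products (SupplierCode, SegmentId, GroupId, ClassId, SeriesId,
                      VendorId, Name, Warranty, Price, Quantity)
SELECT (xpath('//ns:SupplierCode/text()', p, ns))[1]::text,
       (xpath('//ns:SegmentId/text()',    p, ns))[1]::text,
       (xpath('//ns:GroupId/text()',      p, ns))[1]::text,
       (xpath('//ns:ClassId/text()',      p, ns))[1]::text,
       (xpath('//ns:SeriesId/text()',     p, ns))[1]::text,
       (xpath('//ns:VendorId/text()',     p, ns))[1]::text,
       (xpath('//ns:Name/text()',         p, ns))[1]::text,
       (xpath('//ns:Warranty/text()',     p, ns))[1]::text::numeric,
       (xpath('//ns:Price/text()',        p, ns))[1]::text::numeric,
       (xpath('//ns:Quantity/text()',     p, ns))[1]::text::numeric
FROM (SELECT ARRAY[ARRAY['ns', 'http://xxx.yy.zz/']] AS ns) AS m,
     unnest(xpath('//ns:Product', $1::xml,
                  ARRAY[ARRAY['ns', 'http://xxx.yy.zz/']])) AS p;

For the hourly refresh this would run inside a transaction that first clears or
upserts the affected rows; if the server is older than 8.3, parsing in C# and
issuing a COPY is the fallback.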

Re: [GENERAL] Server stops responding in every week

2011-01-23 Thread Andrus Moor

Scott,

Thank you.


I know you're running windows, but if you can get bash working on it,
here's a simple bash script I wrote that when it detects too many
people connected creates a table called pg_stat_bk_20110120140634
(i.e. date and time) so I can then look over what was in
pg_stat_activity when things were acting up.


I can run scripts from the Windows scheduler every 4 minutes, or maybe from
pgAdmin, or maybe from triggers on frequently accessed tables.

What command should be used to detect a hangup (maybe a query running more
than 60 seconds, or too many idle transactions)?
Should the pg_stat_activity.query_start column be used for this?

How to log memory usage?

Is it best to initially run:

create table activity as   select * from pg_stat_activity limit 0;
create table locks as   select * from pg_locks limit 0;

and, if the script detects a hangup, it invokes

insert into activity select * from pg_stat_activity;
insert into locks select * from pg_locks;

How to add a log timestamp column to the activity and locks tables?
How to log memory usage?
Can users notice server performance degradation due to this?
What else should be logged?
How to create a plpgsql procedure for this?
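A minimal sketch of that snapshot idea, assuming a pre-9.2 server where
pg_stat_activity still exposes the current_query and query_start columns; the
extra logged_at column answers the timestamp question:

CREATE TABLE activity AS SELECT now() AS logged_at, * FROM pg_stat_activity LIMIT 0;
CREATE TABLE locks    AS SELECT now() AS logged_at, * FROM pg_locks         LIMIT 0;

-- run every few minutes; snapshot only when some query has run longer than 60 seconds
INSERT INTO activity
SELECT now(), * FROM pg_stat_activity
WHERE EXISTS (SELECT 1 FROM pg_stat_activity
              WHERE current_query NOT LIKE '<IDLE>%'
                AND query_start < now() - interval '60 seconds');

INSERT INTO locks
SELECT now(), * FROM pg_locks
WHERE EXISTS (SELECT 1 FROM pg_stat_activity
              WHERE current_query NOT LIKE '<IDLE>%'
                AND query_start < now() - interval '60 seconds');

Memory usage is easier to log from the operating-system side (Task Manager or a
scheduled script) than from SQL.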

There are a few tables which are used frequently. The hangup probably occurs
when queries are invoked against those tables.
Is it better to implement this as a trigger on those tables?

Andrus.




Re: [GENERAL] Server stops responding in every week

2011-01-22 Thread Andrus Moor

Andy,


How to automatically restart the postgres service every night? Or is it
better to switch to a 32-bit server?


Neither.  You need to fix the problem.


How can Postgres be configured so that the freeze issue can be addressed?
E.g. if a query runs more than 60 seconds, Postgres dumps its status and the
reason the query is slow.

How to implement this?
I have no idea whether log_statement=all can help.

Andrus. 





Re: [GENERAL] Server stops responding in every week

2011-01-22 Thread Andrus Moor

Andy,


Then answer these questions:

Does RAM usage increase over the week?


The server was restarted yesterday and there has been little use since the
restart; the server is idle. Task Manager now shows

PF Usage 1.22 GB

Physical Memory (K)
Total 4191872
Avail 348 (last 4 digits vary)
System cache 35676xx (last 2 digits vary)

How can I check for a memory increase over the week?


Do you see Idle in transaction (probably in task manager)?


Task Manager shows only the image name.
Where should I see this?


Show us:
  select * from pg_stat_activity;


I ran it from pgAdmin. It now shows:

11874;postgres;212;10;postgres;;127.0.0.1;1183;2011-01-22
21:24:51.343+02;;2011-01-22 21:24:51.5+02;f;IDLE
45923;mydb;3080;10;postgres;;127.0.0.1;1184;2011-01-22
21:24:55.25+02;;2011-01-22 21:24:55.281+02;f;IDLE
45923;mydb;4732;10;postgres;;127.0.0.1;1185;2011-01-22
21:24:57.156+02;2011-01-22 21:25:24.109+02;2011-01-22
21:25:24.109+02;f;

I will try to run it on the next crash, if users do not restart the server first.


and
  select * from pg_locks;


I ran it from pgAdmin. It now shows:

relation;45923;109854/241;4732;AccessShareLock;t
virtualxid;4/241;4/241;4732;ExclusiveLock;t

I will try to run it on the next crash, if users do not restart the server first.



A little practice up front might help.  When the system is broken, you can
interrogate it to see what it's doing.  Hopefully we'll see something
locked, and a bunch of things waiting.


Users can restart the server without contacting me. For this reason I
asked for an automated dump.

Andrus. 





Re: [GENERAL] How to generate unique invoice numbers for each day

2011-01-16 Thread Andrus Moor

Andy,

SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7), 
'^[0-9]*'),'')::int),0)+1

FROM invoice
where date= ?invoicedate

is used to get next free invoice number if new invoice is saved.

If multiple invoices are saved concurrently from different processes, 
they will probably get same number.



I understand this is a business rule, and you cant change it.


Yes. This is a customer requirement and I cannot change it.
Is it reasonable (and how) to implement the following:

1. A plpgsql command obtains some lock.
2. It uses

SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7), 
'^[0-9]*'),'')::int),0)+1

FROM invoice
where date= ?invoicedate

to get the next number for the invoice date's day.

3. It adds the new invoice with this number to the database.

4. It releases the lock.

Or is it better to maintain a separate sequence, or a separate table of free
numbers, for every day?


Using the sequence approach:

The application checks for a sequence whose name has the form invoiceno_mmdd;
if the sequence does not exist it is created. For concurrent adds the second
caller gets a "sequence exists" exception, and in that case the query can be
re-executed.

The next value is obtained from the sequence.

Sequences older than 1 year should be purged automatically, since numbers may
be created at most 1 year backward.

Which is the best solution?

Using the new-record approach:

1. Lock the table.
2. If a counter row for this day does not exist, add it.
3. Get the next value for the day and increment the number in this table.
4. Unlock the table.

Which command should be used to obtain exclusive write access to the table
(lock some resource or semaphore)?
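A sketch of that new-record approach (the table and column names here are only
illustrative): keep one counter row per invoice date and lock just that row
with SELECT ... FOR UPDATE, so only invoices of the same day wait on each
other. The day row is assumed to exist already; insert it beforehand or handle
the duplicate-key error when two sessions race to create it.

CREATE TABLE day_invoice_no (
    invoice_date date PRIMARY KEY,
    last_no      integer NOT NULL DEFAULT 0
);

BEGIN;
-- locks this day's row until COMMIT; concurrent savers of the same day block here
SELECT last_no FROM day_invoice_no WHERE invoice_date = DATE '2011-01-16' FOR UPDATE;
UPDATE day_invoice_no SET last_no = last_no + 1 WHERE invoice_date = DATE '2011-01-16';
-- build the yymmddn invoice number from last_no + 1 and insert the invoice here
COMMIT;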


Andrus.




Re: [GENERAL] How to generate unique invoice numbers for each day

2011-01-16 Thread Andrus Moor

Yes. This is customer requirement and I cannot change it.
OR... can you go back to your customer and tell them they wont like this. 
Really really they should let you do it correctly.  I find people dont 
change because they dont have to, not because there is an actual reason. 
Many times, given a description of how hard and how messy something will 
be to code, I have convinced people that a simple business change and 
simple code is really the best approach.  But I have hit walls.  Things I 
could not change, but I did try.


My Visual FoxPro application works OK in this case.
I used FLOCK() to lock the invoice header table (FLOCK() waits indefinitely
until the lock is obtained and reads fresh data from disk),


used

SELECT MAX( CAST( SUBSTRING(invoiceno,8) AS INT ) )+1
FROM invoices
WHERE date= m.invoice_date

to get the next free number, inserted the invoice and unlocked the table.

The customer expects Postgres to be more powerful than FoxPro. He doesn't
understand why this stopped working after the upgrade.


Andrus. 





Re: [GENERAL] How to generate unique invoice numbers for each day

2011-01-16 Thread Andrus Moor

Thank you.


2. In point 2, add FOR UPDATE.
3. Use the READ COMMITTED transaction isolation level.

Don't lock tables. You wrote you can generate invoices a few days backward,
so you don't need to lock the whole table.

Don't use sequences, as a sequence value does not come back when the
transaction is rolled back (you need to prevent gaps).

Locking with UPDATE, or FOR UPDATE, is much more portable.

If you generate invoices in a massive operation, probably no one else will be
able to create an invoice while the process runs, but you don't need to build
a multi-threaded application.



In any approach that prevents gaps, locking is required. This is a real-life
situation: imagine you have two coworkers who need to create invoices, so they
look in the ledger (or a day-old copy of the ledger in their offices;
an international company, but no Internet, only fax and telephone) and check
the last number used. What should be done next?


Using the READ COMMITTED isolation level requires knowing, at the start of the
transaction, whether it will add a new invoice or not. This requires changing
the program logic a lot.
Currently the script which creates the per-day sequence numbers runs inside a
transaction, and the code that starts the transaction does not know whether a
special isolation level is required or not.
Blindly changing all transactions to use this isolation level decreases
performance and may lead to deadlocks.

In my case I can assume that the transaction never fails, since business rules
are verified and this is a simple insert (in rare cases, if it fails due to a
disk failure etc., gaps are allowed).

Can this knowledge be used to create a simpler solution?
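If gaps must be prevented without touching the rest of the program logic, one
simpler sketch is to take a table lock only in the transaction that saves the
invoice. SHARE ROW EXCLUSIVE conflicts with itself and with other writers, so
two sessions cannot compute the same MAX(), while plain readers are not
blocked; the date literal below stands in for the ?invoicedate parameter.

BEGIN;
LOCK TABLE invoice IN SHARE ROW EXCLUSIVE MODE;   -- released automatically at COMMIT
SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7),
       '^[0-9]*'),'')::int),0)+1
FROM invoice
WHERE date = DATE '2011-01-16';
-- insert the new invoice using the number just computed, then
COMMIT;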

Andrus. 





[GENERAL] How to generate unique invoice numbers for each day

2011-01-15 Thread Andrus Moor

Invoice numbers have the format yymmddn,

where n is a sequence number within the day, starting at 1 every day.

The command

SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7), 
'^[0-9]*'),'')::int),0)+1

FROM invoice
where date= ?invoicedate

is used to get the next free invoice number when a new invoice is saved.

If multiple invoices are saved concurrently from different processes, they
will probably get the same number.

How can a unique invoice number for a given day be generated in 8.1+ when
multiple users create new invoices?


Andrus. 





Re: [GENERAL] How to generate unique invoice numbers foreach day

2011-01-15 Thread Andrus Moor
There are 365 days in a year.
Do you really think pre-creating a sequence for every day of every year is the
best solution?

Andrus.
  - Original Message - 
  From: Jorge Godoy 
  To: Andrus Moor 
  Cc: pgsql-general@postgresql.org 
  Sent: Saturday, January 15, 2011 8:41 PM
  Subject: ***SPAM*** Re: [GENERAL] How to generate unique invoice numbers 
foreach day


  Use a sequence.

  --
  Jorge Godoy jgo...@gmail.com



  2011/1/15 Andrus Moor kobrule...@hot.ee

Invoice numbers have format  yymmddn

where n is sequence number in day staring at 1 for every day.

command

SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7), 
'^[0-9]*'),'')::int),0)+1
FROM invoice
where date= ?invoicedate

is used to get next free invoice number if new invoice is saved.

If multiple invoices are saved concurrently from different processes, they 
will probably get same number.

How to get unique invoice number for some day in 8.1+ when multiple users 
create new invoices ?

Andrus. 





Re: [GENERAL] How to generate unique invoice numbers foreach day

2011-01-15 Thread Andrus Moor
Invoices can also be entered some days forward or back. The user enters an
invoice date and expects the program to generate the next sequential number
for this day. Different users can enter invoices for different days.

Andrus.

  - Original Message - 
  From: Jorge Godoy 
  To: Andrus Moor 
  Cc: pgsql-general@postgresql.org 
  Sent: Saturday, January 15, 2011 9:32 PM
  Subject: Re: [GENERAL] How to generate unique invoice numbers foreach day


  Why would you do that?


  You can always reset the sequence at the end of the day.

  --
  Jorge Godoy jgo...@gmail.com



  On Sat, Jan 15, 2011 at 17:09, Andrus Moor kobrule...@hot.ee wrote:

There are 365 days in year.
Do you really think pre-creating sequence for every day for every year is 
best solution ?

Andrus.
  - Original Message - 
  From: Jorge Godoy 
  To: Andrus Moor 
  Cc: pgsql-general@postgresql.org 
  Sent: Saturday, January 15, 2011 8:41 PM
  Subject: ***SPAM*** Re: [GENERAL] How to generate unique invoice numbers 
foreach day


  Use a sequence.

  --
  Jorge Godoy jgo...@gmail.com



  2011/1/15 Andrus Moor kobrule...@hot.ee

Invoice numbers have format  yymmddn

where n is sequence number in day staring at 1 for every day.

command

SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7), 
'^[0-9]*'),'')::int),0)+1
FROM invoice
where date= ?invoicedate

is used to get next free invoice number if new invoice is saved.

If multiple invoices are saved concurrently from different processes, 
they will probably get same number.

How to get unique invoice number for some day in 8.1+ when multiple 
users create new invoices ?

Andrus. 







Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-16 Thread Andrus Moor

Another testimonial to the stability of Postgres :)
We may be arguing semantics
here but I would consider dump/restore an admin function. How do you 
handle a

client restoring a database currently?


The database is 8.0 compliant.
In this case the 8.4 pg_dump/pg_restore is used to dump and restore against
any version 8 server without issues.

This was broken in 9:
Postgres 9 emits an invalid CREATE PROCEDURAL LANGUAGE plpgsql command which
does not work in any other version.

How to fix this without distributing two copies of pg_dump/pg_restore?
Is it reasonable to create the database and the plpgsql language manually
before running pg_restore? In that case the invalid CREATE PROCEDURAL LANGUAGE
plpgsql issued by pg_restore gets ignored, and maybe the restore succeeds?
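One way to keep a single 9.0 pg_restore (a sketch, not verified against this
exact backup; the file and database names below are placeholders) is its
table-of-contents support: pg_restore -l writes the list of archive entries,
lines starting with ; are treated as comments, and -L replays only the
remaining entries, so the PROCEDURAL LANGUAGE plpgsql item can simply be
skipped:

pg_restore -l mybackup.backup > backup.toc
rem put a ";" in front of the "PROCEDURAL LANGUAGE plpgsql" line in backup.toc, then:
pg_restore -L backup.toc --no-owner --no-privileges -d mydb mybackup.backup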



I could see a client connecting to one
of the system dbs and doing a DROP DATABASE.
From your earlier messages the
implication was that you used pg_restore to repopulate the db. My question 
then

is how do the clients make sure that they are not doing this on an active
database
and keep it from going active during the process?


The application asks for a new database name and verifies that it does not
exist before executing pg_restore.

Another requirement is to clone an existing database, with data, on the
server. I posted a question about that, and it seems that PostgreSQL has no
capability to do this quickly on the server side in plpgsql.

So we need to use slow and unsafe dump/restore over the internet for this as
well.

Andrus. 





Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-16 Thread Andrus Moor

Robert,

I'm probably misunderstanding but CREATE DATABASE foo TEMPLATE bar will 
clone bar as foo including data. Of course this only works within the same 
cluster.


Thank you.
You are a genius; I had never thought about this.

Will this work if database bar is being accessed by other users? It probably
fails, so it cannot be used.

Will this command create an exact copy?

Andrus. 





Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-16 Thread Andrus Moor

Loading a dump file into an older server may require manual editing of
the dump file to remove syntax not understood by the older server. 


pg_restore ignores errors during restore (it only returns exit code 1).
So manual editing of the dump file to remove syntax that is not understood is
never required.

Why is this not mentioned in the docs?


 It is the  REPLACE clause that is causing the problem


How can 9.0 be forced to emit the CREATE LANGUAGE command without the REPLACE clause?


Yea the CREATE DATABASE  TEMPLATE option is as close as it gets and you
are already aware of its limitations.


How can CREATE DATABASE ... TEMPLATE be made to work if there are users
connected to the template?
Since pg_dump works while active users are connected, CREATE DATABASE ...
TEMPLATE should also work.

Andrus. 





[GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-15 Thread Andrus Moor

Server is

PostgreSQL 8.4.3 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian
4.3.2-1.1) 4.3.2, 32-bit

The backup is created using the 9.0RC pg_dump.exe.

Trying to restore from this backup to the same server using the 9.0RC
pg_restore.exe causes an error:

..\pg_dump\pg_restore.exe -h mysite.com  -U
eur1_owner -i --no-privileges --no-owner -d eur1 C:\mybackup.backup

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 929; 2612 16389 PROCEDURAL
LANGUAGE plpgsql postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  syntax error at
or near "PROCEDURAL"
LINE 1: CREATE OR REPLACE PROCEDURAL LANGUAGE plpgsql;


How to restore from this backup to 8.4.3 server using 9.0 pg_restore ?

Andrus.




Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-15 Thread Andrus Moor

Adrian,

thank you.


I am not sure I follow. Are you taking a pg_dump of a 9.0 database using a
9.0
version of pg_dump and trying to restore to a 8.4.3 database or are using
the
9.0 pg_dump against the 8.4.3 server and then restoring back to it? In
either
case the problem you see above will probably be only the first.


I used only the 9.0 dump and restore. I did the following:

1. Created a backup copy from 8.4.3 using the 9.0 pg_dump.
2. Restored from this backup to 8.4.3 using the 9.0 pg_restore.

The 9.0 pg_restore fails since the 8.4.3 server reports an invalid SQL command
in the CREATE LANGUAGE plpgsql statement; previous pg_restore versions worked
OK. Is this a 9.0 pg_restore bug?


Going backwards
using pg_dump/pg_restore is not guaranteed to work. Your best hope if you
must
do that is to do the dump in plain text format and change the problems
manually
as you proceed.


The application executes the 9.0 pg_dump/pg_restore.
The application can connect to different servers starting from 8.1 and needs
to be able to back up and restore every such server.
For a single backup copy, dump and restore are executed against the same
server version:

e.g. a backup created from site A using Postgres version x will be used to
restore only to that site, to the same Postgres version x; a backup created
from site B using Postgres version y will be used to restore only to that
site, to the same Postgres version y.

How can backup/restore be supported for all >=8.1 servers using a single
pg_dump/pg_restore?

Currently it looks like the 9.0 backup/restore cannot be used for 8.4 servers.

Andrus.




Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-15 Thread Andrus Moor

No, this is just pilot error.  Any version of pg_dump will produce
output that is meant to be loaded into the matching server version
(or a later version).  If you are intending to load back into 8.4,
use the 8.4 pg_dump.

You may have been reading the recommendation to use the later version's
pg_dump when dumping an older server to perform an upgrade.  It's good
advice, but only for upgrades.


The Windows application needs to support backup/restore for all servers >=8,
always between the same server version.
So it must distribute the 8.0, 8.1, 8.2, 8.3, 8.4 and 9.0 pg_dump/pg_restore
with the application, including all VC++ runtime versions and all DLLs
specific to each version, and invoke the specific pg_dump/pg_restore depending
on the server version?

Do you really think that this is reasonable?
Isn't there a simpler way?

I haven't seen that pgAdmin includes every pg_dump/pg_restore version.

Andrus. 





Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-15 Thread Andrus Moor

Why does it have that requirement?  And why doesn't it use the pg_dump
that came with the server?  It seems pretty lame to assume that your app
has to provide pg_dump and not any other part of the Postgres
installation.


The application is like pgAdmin.
It is a typical client application used to edit data on existing servers
running at different sites over the internet.
The application must have a function to back up and restore the whole database
on the same >=8 server it connects to. Only port 5432 is open to the internet.


Andrus. 





Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-15 Thread Andrus Moor
I got to thinking more about this. How are the databases administered? In 
other

words how are they started/stopped, upgraded, logs read, etc?


The databases have been working for many years in 24x7 mode without
administration. For every new site the newest PostgreSQL was installed.

Andrus. 





[GENERAL] How to change pg_trigger query so that it works in 9.0 without pg_trigger.tgisconstraint does not exist error

2010-10-16 Thread Andrus Moor

In 9.0 the query below returns the error

7/42703: ERROR: column pg_trigger.tgisconstraint does not exist

How can it be changed so that it works on all servers from 8.0 (or at least
from 8.1) up to 9.0?
Or, if this is not possible, how can it be fixed so that it works in 9.0?

Andrus

SELECT
 pg_catalog.pg_get_triggerdef(pg_trigger.oid) as trdef
FROM pg_catalog.pg_trigger
join pg_catalog.pg_class on pg_trigger.tgrelid = pg_class.oid
JOIN pg_catalog.pg_namespace ON pg_namespace.oid=pg_class.relnamespace
WHERE not pg_trigger.tgisconstraint
and pg_namespace.nspname='firma1' 
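One portable workaround is a sketch that relies on the naming convention of
the triggers that implement foreign keys (stable across 8.0 through 9.0) and
filters by trigger name instead of tgisconstraint:

SELECT pg_catalog.pg_get_triggerdef(t.oid) AS trdef
FROM pg_catalog.pg_trigger t
JOIN pg_catalog.pg_class c     ON t.tgrelid = c.oid
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE t.tgname NOT LIKE 'RI_ConstraintTrigger%'   -- skip FK-backing triggers
  AND n.nspname = 'firma1';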





[GENERAL] 9.0 SSL renegotiation failure restoring data

2010-10-02 Thread Andrus Moor
Steps to reproduce:

1. Run the latest pgAdmin on Windows Server 2005 Standard x64 Edition.
2. Restore data to a Postgres 9.0 Linux server from a 450 MB backup file when
only SSL connections are enabled.

After some time pg_restore reports that the connection is closed.
The server log is below.
How can a 450 MB backup copy be restored to a Postgres 9.0 Linux server from a
Windows server using SSL?

Andrus.

LOG:  duration: 2643663.100 ms  statement: COPY artpilt (id, toode, pilt, pildityyp, esipilt) FROM stdin;
LOG:  SSL renegotiation failure
LOG:  SSL failed to send renegotiation request
LOG:  SSL renegotiation failure
LOG:  SSL error: unsafe legacy renegotiation disabled
LOG:  could not send data to client: Connection reset by peer
LOG:  SSL error: unsafe legacy renegotiation disabled
LOG:  could not receive data from client: Connection reset by peer
LOG:  unexpected EOF on client connection

Re: [GENERAL] 9.0 SSL renegotiation failure restoring data

2010-10-02 Thread Andrus Moor
 Either (1) get a non-lobotomized SSL library
I'm using the latest official Postgres 9.0 server and pgAdmin client.
Does one of them contain an SSL bug?
Andrus.

Re: [GENERAL] How to distribute quantity if same product is in multiple rows

2010-07-23 Thread Andrus Moor

Tim,

Thank you.


It can be done in SQL: SUM(kogus) OVER (PARTITION BY toode ORDER BY ID) - kogus (*1)
will give you the running sum of the product up to that row. You can then
subtract that value from the delivered quantity to calculate the delivered
quantity for the current row.


I tried to get the list of undelivered items using the script below.
The second row's value (22) is incorrect (it seems to be a cumulative sum, but
it must be the undelivered quantity for this row).

How can this be fixed so that every row contains the correct undelivered
quantity?

Andrus.

-- Order details
CREATE TEMP TABLE tellrid  (
id serial primary key,
toode char(20), -- product id
kogus numeric(12,5) ) -- ordered quantity
on commit drop;

insert into tellrid (toode,kogus) values ('PRODUCT1', 10 );
insert into tellrid (toode,kogus) values ('PRODUCT1', 20 );

-- Delivery details
CREATE TEMP TABLE rid  (
id serial primary key,
toode char(20), -- product id
kogus numeric(12,5) ) -- delivered quantity
on commit drop;

insert into rid (toode,kogus) values ('PRODUCT1', 8 );

select
 tellrid.id,
 max(tellrid.kogus) as ordered,

GREATEST( 0,

SUM(MAX(tellrid.kogus) )
  OVER (PARTITION BY MAX(tellrid.toode) ORDER BY tellrid.ID)

-  COALESCE( SUM(rid.kogus),0)

)   as not_delivered
from tellrid
LEFT JOIN rid USING (toode)
GROUP BY 1

Observed:

id  ordered  not_delivered
 1     10.0            2.0
 2     20.0           22.0

Expected:

id  ordered  not_delivered
 1     10.0            2.0
 2     20.0           20.0
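A sketch of one way to get the expected output (it keeps a window function, so
it assumes 8.4+ like the original query): aggregate the deliveries per product
first, then clamp the running ordered total per row with LEAST/GREATEST
instead of mixing GROUP BY with the window.

SELECT t.id,
       t.kogus AS ordered,
       LEAST(t.kogus,
             GREATEST(0,
                      SUM(t.kogus) OVER (PARTITION BY t.toode ORDER BY t.id)
                      - COALESCE(d.delivered, 0))) AS not_delivered
FROM tellrid t
LEFT JOIN (SELECT toode, SUM(kogus) AS delivered FROM rid GROUP BY toode) d
       USING (toode);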



[GENERAL] Error inserting data to bytea column in 8.4

2009-08-21 Thread Andrus Moor

In 8.4, script

create temp table test ( test bytea );
insert into test values(E'\274')

Causes error

ERROR:  invalid byte sequence for encoding UTF8: 0xbc
HINT:  This error can also happen if the byte sequence does not match the 
encoding expected by the server, which is controlled by client_encoding.


In 8.2 this script runs OK.

How can data be inserted into the bytea field?
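Two forms that work on 8.4 (a small sketch): double the backslash so the bytea
input routine, not the string parser, interprets the octal escape, or avoid
escape syntax entirely with decode():

INSERT INTO test VALUES (E'\\274');             -- string '\274' -> bytea byte 0xBC
INSERT INTO test VALUES (decode('bc', 'hex'));  -- hex input, often less error-prone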

Andrus

Both server and client are running in windows.

PostgreSQL 8.4.0, compiled by Visual C++ build 1400, 32-bit

show client_encoding
UNICODE 





[GENERAL] pg_restore returns always error if backup is greater than 2 GB

2009-08-19 Thread Andrus Moor
I noticed that backups created by pg_dump are not usable if the backup file
size is greater than 2 GB.

The backups are created in 8.1; I tried to restore them in 8.4. The backup
contains many tables. There is a large table, attachme, containing a bytea
field. This table grows rapidly. If the .backup file size is bigger than 2 GB,
pg_restore always fails for every table restored after that table:


C:/Program Files/PostgreSQL/8.4/bin\pg_restore.exe --host localhost --port 
5432 --username postgres --dbname mydb --verbose \\ls\share\my 
cackups\backups\090703 my backup.backup


pg_restore: connecting to database for restore
pg_restore: creating SCHEMA firma1
pg_restore: creating SCHEMA firma2
pg_restore: creating SCHEMA firma3
pg_restore: creating SCHEMA firma4
pg_restore: creating SCHEMA firma5
pg_restore: creating SCHEMA firma6
pg_restore: creating SCHEMA firma7
pg_restore: creating SCHEMA firma8
pg_restore: creating SCHEMA firma9
pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating PROCEDURAL LANGUAGE plpgsql
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2214; 2612 16787 PROCEDURAL 
LANGUAGE plpgsql postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  language 
plpgsql already exists

   Command was: CREATE PROCEDURAL LANGUAGE plpgsql;
pg_restore: creating DOMAIN ebool
pg_restore: creating FUNCTION dok_seq_trig()

...
pg_restore: restoring data for table attachme
pg_restore: restoring data for table bilkaib
pg_restore: [custom archiver] found unexpected block ID (654399830) when 
reading data -- expected 12781

pg_restore: *** aborted because of error

Process returned exit code 1.

How can data be recovered from the tables that come after the attachme table in the backup?

Andrus.

Environment:

Backups are created  in  PostgreSql 8.1 running in Windows
pg_dump is used to create backup copies to Buffalo Linkstation Pro ls-320 GB 
external disk device ( \\ls ) connected to LAN


backup command: pg_dump.exe -ibv -Z3 -f \\ls\backup\mybackup.backup -Fc -h 
localhost -U admin  mydb


What causes this? Is it the Buffalo device, Windows/LAN, or a PostgreSQL 8.1
issue? Will upgrading to 8.4 fix this?





Re: [GENERAL] index pg_authid_rolname_index is not a btree

2009-08-19 Thread Andrus Moor

Alvaro,


You can get around that particular problem by reindexing the pg_authid
table.  But my guess is that you'll find that there's corruption
elsewhere that's not so easily recoverable ...


Thank you.
Reindexing the system tables and the whole database succeeds.
After that I can connect to the database containing the data to recover.

However, pg_dump fails:

bin\pg_dump -f recover.backup -i -v -F c -h localhost -p 5433 -U postgres 
mydb

pg_dump: reading schemas
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not identify an ordering
operator for type name
HINT:  Use an explicit ordering operator or modify the query.
pg_dump: The command was: SELECT tableoid, oid, conname,
pg_catalog.pg_get_constraintdef(oid) AS consrc FROM pg_catalog.pg_constraint
WHERE contypid = '10635'::
pg_catalog.oid ORDER BY conname
pg_dump: *** aborted because of error

How can data be recovered from this database?
Should I re-create the ordering operator in some way?

Andrus. 





Re: [GENERAL] index pg_authid_rolname_index is not a btree

2009-08-19 Thread Andrus Moor

Tom,

Thank you.


Alvaro was right --- you've got damage in the system catalogs, not just
their indexes.  This looks like missing entries in pg_amop.


postgres -D data mydb

PostgreSQL stand-alone backend 8.1.9
backend select * from pg_amop
1: amopclaid   (typeid = 26, len = 4, typmod = -1, byval = t)
2: amopsubtype (typeid = 26, len = 4, typmod = -1, byval = t)
3: amopstrategy(typeid = 21, len = 2, typmod = -1, byval =
t)
4: amopreqcheck(typeid = 16, len = 1, typmod = -1, byval =
t)
5: amopopr (typeid = 26, len = 4, typmod = -1, byval = t)
   

pg_amop in mydb contains 5 rows.
pg_amop in the template1 database contains a large number of rows.
mydb does not contain user-defined operators.

How can pg_amop in mydb be repaired?


(You did
say you reindexed all the system catalogs, right?  If not it's possible
this is only index damage, but I'm not very hopeful.)


reindex system mydb
reindex database mydb

complete without errors.


I suspect that if you did get to the point of being able to run pg_dump
without error, you'd find just as much damage to the user data.  I'm
afraid this database is toast and you should write it off as a learning
experience.  Hardware fails, you need backups.


The backup is 4.2 GB and is corrupted after 2 GB, as I described in another
thread. Also, the backup is too old.
Most of the backup size comes from a few big tables which do not need to be
recovered.
I ran TRUNCATE commands for those tables. This reduces the whole data
directory size to 1.2 GB in uncompressed form.

I know which tables contain the data to be recovered.

How can those tables be dumped out?

Andrus. 





[GENERAL] index pg_authid_rolname_index is not a btree

2009-08-18 Thread Andrus Moor
The hard disk containing a PostgreSQL 8.1 database on Windows crashed and
there was no recent backup copy.

I installed 8.1.9 on a new computer and copied the data directory from the
crashed disk to it.

The data directory contains a lot of files with a lot of data.

Trying to connect to template0 or any other database in this cluster causes
the error

Error connecting to the server: FATAL:  index "pg_authid_rolname_index" is
not a btree

How can data be recovered from this cluster?

Andrus. 





[GENERAL] Changing referenced primary key column type

2009-08-07 Thread Andrus Moor

8.x Db contains tables

create table ametikoh ( ametikoht numeric(7)  NOT NULL DEFAULT 
nextval('firma1.ametikoh_ametikoht_seq'::regclass) primary key );

create table iandmed ( ametikoht numeric(7) references ametikoh);

I created conversion script which normalizes column types:

alter table ametikoh alter column ametikoht type integer;
alter table iandmed alter column ametikoht type integer;


Running this script causes the error:

ERROR:  foreign key constraint "iandmed_ametikoht_fkey" cannot be implemented
DETAIL:  Key columns "ametikoht" and "ametikoht" are of incompatible types:
numeric and integer.


How can I create a script which performs this change and works on 8.0+?
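A sketch that should work on 8.0+: drop the foreign key (its name is taken from
the error message), change both column types, and re-create the constraint.

BEGIN;
ALTER TABLE iandmed  DROP CONSTRAINT iandmed_ametikoht_fkey;
ALTER TABLE ametikoh ALTER COLUMN ametikoht TYPE integer;
ALTER TABLE iandmed  ALTER COLUMN ametikoht TYPE integer;
ALTER TABLE iandmed  ADD CONSTRAINT iandmed_ametikoht_fkey
    FOREIGN KEY (ametikoht) REFERENCES ametikoh;
COMMIT;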

Andrus. 





[GENERAL] ' and \ characters in passwords

2006-03-18 Thread Andrus Moor
I found that the \ character does not need to be quoted in password strings:

create role test1 password '\'

is OK.
I haven't found any mention of this behaviour in the docs.
Where is this documented?


I created the role

create role test2 password 

Trying to log in using the password ' causes an error from my ODBC client:

Connectivity error: FATAL:  password authentication failed for user test2

The ODBC and Postgres logs do not show anything about this issue. How can I
find out what is wrong?

Is

create role test2 password 

a legal way to set the password to the ' character?

Andrus. 





[GENERAL] Using same database from XP and Linux in notebook

2006-03-04 Thread Andrus Moor
I have a notebook which can be booted into Windows XP/NTFS and into Linux.
The notebook is not connected to any network. There are separate IDE
partitions for both operating systems.

I want the same Postgres 8.1 database to be available in both modes.

I think I must either put the Postgres database cluster on an NTFS partition
which can be written by Postgres running on Linux, or on a Linux partition
which can be written by Postgres running on XP.

Any idea how to implement a dual-OS database? Is there a ready-to-install
Linux distro which supports this?

Andrus. 





[GENERAL] Wish: remove ancient constructs from Postgres

2006-02-26 Thread Andrus Moor
It is difficult to write standard-compliant code in Postgres.
There are a lot of constructs which have SQL equivalents but are still used
widely, even in samples in docs!

For example, there are suggestions using

now()::CHAR != foo

while the correct way is

CAST(CURRENT_DATE AS CHAR) <> foo

The now() function and the :: and != operators should be removed from the
language.

I like the Python logic: there is one way.
Postgres uses the C logic: there are multiple ways.

Bruce seems to have attempted to start this process by trying to implement
escape_string_warning in postgresql.conf. However, this is only a very minor
step. Please clean up Postgres.

Andrus.






[GENERAL] How to read odbc and pgadmin lists from news server

2006-02-24 Thread Andrus Moor
In the past week the following (and other active) newsgroups were removed
from the news.postgresql.org news server without any notice:

pgsql.interfaces.odbc
pgsql.interfaces.pgadmin.hackers

How can those newsgroups be read from the news server?

Andrus. 





[GENERAL] How to load 6 MB file to database

2006-02-24 Thread Andrus Moor
I'm using Postgres 8.1 and the latest ODBC driver on Windows XP with 256 MB RAM.

When trying to insert a 6 MB binary file into a bytea field, an out-of-memory
error is written to the ODBC log file and the insert fails. Smaller files are
inserted OK.

Any idea how to load a 6 MB file into a bytea field, or how to avoid the
out-of-memory error?

Is it possible to add data to a single bytea field incrementally, using
separate insert or update statements each adding, for example, 100 KB of data?

Or is there another solution?
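Appending in chunks is possible with the bytea || operator. A sketch follows,
borrowing the localfil(filename, contents) table from a later post and an
invented file name; note that each UPDATE still rewrites the whole value on
the server, so whether it gets around the driver's memory limit needs testing.

-- first chunk
INSERT INTO localfil (filename, contents) VALUES ('report.bin', E'\\001\\002');
-- repeat for each following chunk of roughly 100 KB of escaped bytes
UPDATE localfil SET contents = contents || E'\\003\\004'
WHERE filename = 'report.bin';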

Andrus.





[GENERAL] Wrong length of bytea field

2006-02-24 Thread Andrus Moor
I inserted two binary files into a bytea field named contents:

file1  size was 7834649 bytes
file2 size was 5888534 bytes

select filename,
octet_length(contents),
length(contents)
from localfil

returns

file1  ;16777184;16777184
file2  ;15768893;15768893

It seems that for some reason Postgres sometimes returns the escaped string
size (e.g. for a single byte 0x00 uploaded as \\000, the size is returned as 5).

Why is the result incorrect?
How can I get the actual size of a bytea field?

Using Postgres 8.1 in XP , cluster encoding is UTF-8

Andrus.






[GENERAL] How to find first non-vacation day

2006-02-03 Thread Andrus Moor
I have a table of vacations

create table vacation (
id integer primary key,
dstart date,
dend date );


I need to find the first non-vacation day before a given date.

This can be done using the following procedural VFP code:

function nonvacation( dbefore )

for i=dbefore to  date(1960,1,1) step -1
  select vacation
  locate for between( i, dstart, dend )
  if not found()
return i
endif
  endfor
return null

but this is very slow.

How can this be implemented as an SQL SELECT statement?
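A set-based sketch, assuming the answer lies within roughly ten years before
the given date (written here as a literal): generate the candidate days and
keep the latest one that falls inside no vacation interval.

SELECT max(DATE '2006-02-03' - i) AS first_free_day
FROM generate_series(0, 3650) AS s(i)
WHERE NOT EXISTS (
      SELECT 1 FROM vacation v
      WHERE DATE '2006-02-03' - i BETWEEN v.dstart AND v.dend);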

Andrus. 





[GENERAL] How to change postgres.conf file from client

2006-01-01 Thread Andrus Moor
I want to change the archive_command parameter in the server's
postgresql.conf file from a client application.

The docs say that set_config() applies to the current session only, and there
is no pg_write_file() function.

How can archive_command be set permanently from a client?

Platoform: PostgreSQL 8.1.0 on i686-pc-mingw32, compiled by GCC gcc.exe 
(GCC) 3.4.2 (mingw-special)

Andrus. 





[GENERAL] Last line is not read from postgres.conf

2006-01-01 Thread Andrus Moor
To reproduce:

1. Add the following line as the last line of the postgresql.conf file

archive_command='copy %p x'

Make sure that there are no CR LF characters after this line.

2. Restart postgres

3.  Issue SHOW archive_command

Observed result:

unset


Note. Adding a newline after this line causes archive_command to be set
correctly.


Using

PostgreSQL 8.1.0 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2
(mingw-special)






[GENERAL] pg_reload_conf() does not unset archive_command

2006-01-01 Thread Andrus Moor
Platform:

PostgreSQL 8.1.0 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2
(mingw-special)

To reproduce:

1. Make sure that the postgresql.conf file contains, on its first line:

archive_command = 'copy %p c:/arhiiv/%f'

2. Edit the postgresql.conf file, adding # before this line:

#archive_command = 'copy %p c:/arhiiv/%f'

3. Run SELECT pg_reload_conf() in pgAdmin or another client, or use Reload
configuration from the XP Start menu. The log file contains "received SIGHUP,
reloading configuration files", as expected.

4. Run SHOW archive_command

Observed:

archive_command value is

copy %p c:/arhiiv/%f

Expected:

archive_command value must be

unset

On reloading the config file, Postgres must set archive_command back to unset.


Andrus.






[GENERAL] find last day of month

2005-12-08 Thread Andrus Moor
I have a table containing a month column in the format mm.yyyy.

create table months ( tmkuu char(7));
insert into months values ('01.2005');
insert into months values ('02.2005');

How can I create a SELECT statement which converts this column to a date type
containing the last day of the month, like

'2005-01-31'
'2005-02-28'
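One way (a sketch): parse the value with to_date(), step to the next month and
go back one day.

SELECT tmkuu,
       (to_date(tmkuu, 'MM.YYYY') + interval '1 month' - interval '1 day')::date
         AS last_day
FROM months;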

Andrus. 





[GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-20 Thread Andrus Moor
I'm using Postgres 8.1 in Windows XP

Sometimes when using pgAdmin the following Guru hint appears suddenly:

Running VACUUM recommended
The estimated rowcount on the table firma1.algsa deviates significantly 
from the actual rowcount. You should run VACUUM ANALYZE on this table. 
Instead of issuing a manual VACUUM ANALYZE command on this table (you can 
use the pgAdmin III maintenance menu for this), running VACUUM ANALYZE on a 
regular or automated basis should be considered. This can be achieved using 
a scheduler. PostgreSQL also supplies the pg_autovacuum daemon, which will 
track changes made to the database and issue vacuum commands as required 
automatically. In most cases, pg_autovacuum will be the best choice. 

I have noticed in the Postgres log that autovacuum processes my cluster
regularly.
So to my knowledge, this hint is wrong.
Please confirm that the guru is stupid.

Also, I followed this hint and got these results:

INFO:  vacuuming firma1.algsa
INFO:  index algsa_pkey now contains 122 row versions in 2 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO:  algsa: found 0 removable, 122 nonremovable row versions in 4 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.06 sec.
INFO:  analyzing firma1.algsa
INFO:  algsa: scanned 4 of 4 pages, containing 122 live rows and 0 dead 
rows; 122 rows in sample, 122 estimated total rows

So it seems that vacuum did not actually change anything.

Andrus. 





[GENERAL] CREATE TEMP TABLE AS ... ON COMMIT DROP fails

2005-11-06 Thread Andrus Moor
I need to create a temporary table, with data, which is dropped at the end of
the transaction.

CREATE TABLE t1 ( c1 INTEGER ) ;
INSERT INTO t1 VALUES (1);
CREATE TEMP TABLE t2 AS SELECT * FROM t1 ON COMMIT DROP;

causes ERROR:  syntax error at or near "ON" at character 104

How can this be implemented?
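Not every version accepts ON COMMIT in the CREATE TABLE ... AS form; a
two-step variant that works at least on 8.x creates the empty temp table first
(LIKE copies the column definitions) and then fills it:

CREATE TEMP TABLE t2 (LIKE t1) ON COMMIT DROP;
INSERT INTO t2 SELECT * FROM t1;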

Andrus. 





[GENERAL] Best way to use indexes for partial match at beginning

2005-11-06 Thread Andrus Moor
I have 10 CHARACTER columns in table mytable. Mytable is indexed by some of 
those columns.

CREATE TABLE mytable ( col1 CHARACTER(10),
col2 CHARACTER(10),col3 CHARACTER(10),col4 CHARACTER(10),col5 
CHARACTER(10),col6 CHARACTER(10),col7 CHARACTER(10),col8 CHARACTER(10),col9 
CHARACTER(10), col10 CHARACTER(10) );

CREATE INDEX i1 ON mytable(col1);
CREATE INDEX i2 ON mytable(col2);

I need to select records knowing some characters from the beginning.
I always know the first 1-10 characters of col1, so my LIKE pattern always
starts with constant characters and ends with %.

I can use LIKE:

SELECT * FROM mytable
  WHERE col1 LIKE 'A%'
  AND col2 LIKE 'BC%'
  AND col3 LIKE 'DEF%'
  AND col4 LIKE 'G%';

or substring():

SELECT * FROM mytable
  WHERE substring(col1 for 1)='A'
  AND substring(col2 for 2)= 'BC'
  AND substring(col3 for 3)='DEF'
  AND substring(col4 for 1) ='G';


Can Postgres 8.1 use indexes to speed up the queries above?

Which is the best way to write the WHERE clause in this case so that an index
is used?
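If the database was initialized with the C locale, the plain indexes above are
already usable for left-anchored LIKE patterns; otherwise a sketch using the
pattern operator classes makes them usable:

CREATE INDEX i1_pattern ON mytable (col1 bpchar_pattern_ops);
CREATE INDEX i2_pattern ON mytable (col2 bpchar_pattern_ops);
-- WHERE col1 LIKE 'A%' AND col2 LIKE 'BC%' can then use these indexes;
-- the substring() form of the WHERE clause is not indexable without an expression index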

Andrus.





[GENERAL] How to implement table caching

2005-08-14 Thread Andrus Moor
To increase performance, I'm thinking about storing copies of less frequently
changed tables on the client computer.
At startup the client application compares last-change times and downloads
newer tables from the server.

CREATE TABLE lastchange (
  tablename CHAR(8) PRIMARY KEY,
  lastchange timestamp without time zone );

INSERT INTO lastchange (tablename) values ('mytable1');

INSERT INTO lastchange (tablename) values ('mytablen');

CREATE OR REPLACE FUNCTION setlastchange() RETURNS trigger
AS $$BEGIN
-- now() instead of the literal 'now', so the timestamp is taken at call time
-- rather than being frozen into the cached plan
UPDATE lastchange SET lastchange = now() WHERE tablename = TG_RELNAME;
RETURN NULL;
END$$  LANGUAGE plpgsql STRICT;

-- statement-level triggers: one update per modifying statement
CREATE TRIGGER mytable1_trig BEFORE INSERT OR UPDATE OR DELETE ON mytable1
   EXECUTE PROCEDURE setlastchange();

CREATE TRIGGER mytablen_trig BEFORE INSERT OR UPDATE OR DELETE ON mytablen
   EXECUTE PROCEDURE setlastchange();

Is table caching a good idea?
Is this the best way to implement table caching?

Andrus.






Re: [GENERAL] Which record causes referential integrity violation on delete

2005-07-03 Thread Andrus Moor
 On Friday 01 July 2005 19:49, you wrote:
 In Postgres 8 I tried commad

 DELETE FROM customer WHERE id=123

 (snip)

 ---(end of broadcast)---
 TIP 8: explain analyze is your friend


 Automatically answered?! :-)

 explain analyze DELETE FROM customer WHERE id=123

Martin,

I tried the explain analyze command, but it produces exactly the same error
message about the referential integrity violation as

DELETE FROM customer WHERE id=123

How can this command help me?

Andrus. 





Re: [GENERAL] Which record causes referential integrity violation on delete

2005-07-03 Thread Andrus Moor
 I forgot to add, this is of course a simplistic approach which:
 1. may be simply wrong
 2. assumes data is available to user in nformation_schema (I guess the
 information schema lists only data owned by user; yet I am not sure
 about that).
 3. assumes foreign keys have really simple set up (no FKs depending on
 FKs depending on FKs...)

 In other words, best approach would be know thy schema in the
 application. ;)))

Tom and Dawid,

I have 60 foreign key tables with an ON DELETE RESTRICT constraint.
I have a primary key column in all of my tables.
I want that, in case of a deletion error, the immediate parent table name and
one primary key which prevents the deletion are displayed to the user. The
user can then correct the bad invoice manually, try the next deletion, proceed
with the next blocking invoice, and so on.
My application runs as a postgres superuser, so it can access all data in the
information schema.

So it seems reasonable to create a generic routine for this.
Unfortunately I don't have enough experience to create such a routine, even
using the sample SELECT statements written by Dawid.

I have seen similar questions asked several times in this newsgroup. This
seems to be a common requirement.

Isn't there really some ready-made generic stored procedure which I can use?

If a delete error occurs, Postgres knows the ctid of the restricting record.
Why can't this ctid be retrieved by the ODBC client?

Andrus. 





Re: [GENERAL] Hot to restrict access to subset of data

2005-07-03 Thread Andrus Moor
 Does the application really need superuser privileges or is that
 just a convenience?  It's usually a good idea to follow the Principle
 of Least Privilege -- do some searches on that phrase to learn
 more about it and the rationale for following it.

 Whether this approach is secure and better depends on the application
 requirements, the threat model, how well the application is written,
 etc.  As Bruno pointed out, if users have enough access to the
 system that they could discover the account name and password, then
 they could easily bypass the application's security.  Another
 potential problem is SQL injection: if the application isn't careful
 with how it handles user input, then specially-crafted data could
 result in the pre-defined queries doing more than intended.  You'll
 have to evaluate the risks and benefits of the various approaches
 in the context of your own environment; there's no universal this
 way is better answer.

My application is a general-purpose accounting and sales application. If the
database does not exist, it prompts the user, creates a new database
containing some hundreds of tables, and upsizes the local data into the
database.
Each database can have a lot of schemas. Each schema represents a single
company. All those schemas have exactly the same tables; each schema contains
80 tables.
In the public schema I store tables common to all companies (60 tables).

So it seems that my application needs to run with superuser privileges in
Postgres.

Andrus 





Re: [GENERAL] Hot to restrict access to subset of data

2005-07-03 Thread Andrus Moor
Greg,

Using views would be nice.

I also have an "add" privilege which allows adding only new documents. I think
that this requires writing triggers in Postgres.

This seems to be a lot of work.
I don't have enough knowledge to implement this in Postgres.

So it seems more reasonable to run my application as a Postgres superuser and
implement security in the application.

Andrus.

Gregory Youngblood [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
I believe you can probably use views to accomplish this.

 You create a view that is populated based on their username. Then you 
 remove access to the actual table, and grant access to the view.

 When people look at the table, they will only see the data in the  view 
 and will not have access to the other.

 Of course, this assumes they do not need to update the data. I've not 
 played around with rules to make a view allow updates. I believe it  is 
 possible, I've just not done it yet. This also assumes you have  data 
 somewhere that maps user names to document types.

 The postgresql docs should provide the syntax and additional details  if 
 you want to try this. I have also found pgAdmin very useful to  create 
 views and other schema related activities as well.

 Hope this helps,
 Greg








[GENERAL] Simple UPDATE runs forever

2005-04-10 Thread Andrus Moor
The command

UPDATE dok SET krdokumnr=NULL WHERE krdokumnr NOT in (select dokumnr from
dok);

runs forever. The postgres.exe process takes 90% of the CPU time and the disk
LED is flashing.
Platform: Win XP Prof SP2, Postgres 8.
The dok table has only 7651 rows.
Killing the client application does not help; only killing the postgres.exe
process stops the activity.

CREATE TABLE ou1.dok
(
  doktyyp char(1),
  dokumnr numeric(12) NOT NULL DEFAULT
nextval('ou1.dok_dokumnr_seq'::text),
  krdokumnr numeric(12),
... a lot of other fields
  CONSTRAINT dok_pkey PRIMARY KEY (dokumnr)
)
WITHOUT OIDS;

Any ideas?
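One thing worth trying (a sketch): in old releases a large NOT IN (subquery)
can degenerate into re-scanning the subquery for every row, while an
equivalent NOT EXISTS can probe the primary key index on dokumnr.

UPDATE dok
   SET krdokumnr = NULL
 WHERE krdokumnr IS NOT NULL
   AND NOT EXISTS (SELECT 1 FROM dok d2 WHERE d2.dokumnr = dok.krdokumnr);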

Andrus.






Re: [GENERAL] Referential integrity using constant in foreign key

2005-03-26 Thread Andrus Moor
Thomas,

thank you for the reply. There was a typo in my code. The second table should be

 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY ('1', code1) REFERENCES classifier,
 FOREIGN KEY ('2', code2) REFERENCES classifier
 );

Let me explain my problem more precisely.

I can implement the referential integrity which I need in the following way:

CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
constant1  CHAR default '1',
constant2  CHAR default '2',
FOREIGN KEY (constant1, code1) REFERENCES classifier,
FOREIGN KEY (constant2, code2) REFERENCES classifier
);

This implementation requires 2 additional columns (constant1 and constant2)
which always have the same values, '1' and '2' respectively, in all info table
rows.

I created those dummy columns since Postgres does not allow writing a
REFERENCES clause like

CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', code1) REFERENCES classifier,
FOREIGN KEY ('2', code2) REFERENCES classifier
);

Is it possible to implement referential integrity without adding the
additional dummy columns to the info table?

 It's somewhat unclear what you're attempting to do, here, but I'll give a 
 shot at interpreting. Referential integrity lets you guarantee that values 
 in a column or columns exist in a column or columns in another table.

 With classifier as you've defined it, if you want referential integrity in 
 the info table, you could do this:

 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY code1 REFERENCES classifier (category),
 FOREIGN KEY code2 REFERENCES classifier (category)
 );

 But I'm not sure what you mean by references to category 1. There is 
 only a single category column in classifier, and referential integrity is 
 not for ensuring that a column in one table contains only values of a 
 single row.

 Regardless, your syntax doesn't seem to reflect reality. Read the CREATE 
 TABLE reference thoroughly.

 http://www.postgresql.org/docs/8.0/static/sql-createtable.html

 -tfo

 --
 Thomas F. O'Connell
 Co-Founder, Information Architect
 Sitening, LLC

 Strategic Open Source — Open Your i™

 http://www.sitening.com/
 110 30th Avenue North, Suite 6
 Nashville, TN 37203-6320
 615-260-0005

 On Mar 25, 2005, at 10:22 AM, Andrus Moor wrote:

 I need to create referential integrity constraints:

 CREATE TABLE classifier (
 category CHAR(1),
 code CHAR(10),
 PRIMARY KEY (category,code)  );

 -- code1 references to category 1,
 -- code2 references to category 2 from classifier table.
 CREATE TABLE info (
 code1 CHAR(10),
 code2 CHAR(10),
 FOREIGN KEY ('1', category1) REFERENCES classifier,
 FOREIGN KEY ('2', category2) REFERENCES classifier
 );

 Unfortunately, second CREATE TABLE causes error

 ERROR:  syntax error at or near '1' at character 171

 Any idea how to implement referential integrity for info table ?
 It seems that this is not possible in Postgres.

 Andrus.





[GENERAL] Merging item codes using referential integrity

2005-03-26 Thread Andrus Moor
I have an item table and many child tables where the items are used.

I want to merge two item codes into a single item in all tables.
It is not nice to write a lot of separate UPDATE statements for each table,
so I want to utilize the REFERENCES clause for merging.

I tried the following code but got a duplicate key error in the UPDATE
statement.

Any idea how to implement this?

CREATE TABLE parent ( code CHAR(10) PRIMARY KEY  );

CREATE TABLE orders ( anyfield CHAR(10) REFERENCES parent ON UPDATE 
CASCADE );
CREATE TABLE invoices ( anyotherfield CHAR(10) REFERENCES parent ON UPDATE 
CASCADE );
-- ... a lot of more child tables with different table and field names 
but -- always with same REFERENCES clause.

INSERT INTO parent VALUES ('1');
INSERT INTO parent VALUES ('2');
INSERT INTO orders VALUES ('1');
INSERT INTO invoices VALUES ('1');
INSERT INTO orders VALUES ('2');
INSERT INTO invoices VALUES ('2');

BEGIN;
-- Direct Postgres to update all child tables. This causes error.
UPDATE parent SET code='1' WHERE code='2';
-- Remove duplicate row
CREATE TABLE parent AS
  SELECT * FROM parent
  GROUP BY CODE ;
COMMIT;
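One sketch that avoids writing each UPDATE by hand (it assumes single-column
foreign keys, as in the example): generate the statements from the catalogs,
run the generated output, then delete the now-unreferenced parent row instead
of rewriting it.

SELECT 'UPDATE ' || c.relname || ' SET ' || a.attname || ' = ''1'''
       || ' WHERE ' || a.attname || ' = ''2'';' AS stmt
FROM pg_constraint con
JOIN pg_class     c ON c.oid = con.conrelid
JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = con.conkey[1]
WHERE con.contype = 'f'
  AND con.confrelid = 'parent'::regclass;
-- after running the generated statements:
-- DELETE FROM parent WHERE code = '2';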

Andrus. 





[GENERAL] Referential integrity using constant in foreign key

2005-03-25 Thread Andrus Moor
I need to create referential integrity constraints:

CREATE TABLE classifier (
category CHAR(1),
code CHAR(10),
PRIMARY KEY (category,code)  );

-- code1 references to category 1,
-- code2 references to category 2 from classifier table.
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', category1) REFERENCES classifier,
FOREIGN KEY ('2', category2) REFERENCES classifier
);

Unfortunately, second CREATE TABLE causes error

ERROR:  syntax error at or near '1' at character 171

Any idea how to implement referential integrity for the info table?
It seems that this is not possible in Postgres.

Andrus. 





[GENERAL] How to get concecutive id values

2005-03-19 Thread Andrus Moor
I need to allocate some number of sequential values for primary keys.

The Postgres nextval() function does not allow allocating more than one number
at a time, so it is not possible to get a range of sequential numbers from a
sequence using nextval().

To solve this, I created a table containing the current id values:

CREATE TABLE appids (
key_name char (2) PRIMARY KEY,
key_value numeric(7) )


In Microsoft SQL server I can use the following stored procedure to
allocate the number of ids:

-- Allocates specified number of keys.
-- Returns last value allocated
create procedure sp_NewID
@Name char(2), @NumberOfKeys NUMERIC(7)
as
set nocount on
declare @KeyValue int
set @Name = upper( @Name )
update appids
set key_value = key_value + @NumberOfKeys
,@KeyValue = key_value
from appids
where key_name = @Name
select @KeyValue
RETURN

It uses a correlated query so that the record that is read is locked and then
updated in the same statement. This eliminates the need for an explicit
transaction.

How can this code be converted to Postgres?

Or is there a better method?
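A rough PL/pgSQL equivalent (a sketch for 8.0+; parameter and table names
follow the original): the UPDATE takes a row lock on the appids row, so
concurrent callers for the same key are serialized and no explicit locking is
needed.

CREATE OR REPLACE FUNCTION sp_newid(p_name char(2), p_numberofkeys numeric)
RETURNS numeric AS $$
DECLARE
    v_last numeric;
BEGIN
    UPDATE appids
       SET key_value = key_value + p_numberofkeys
     WHERE key_name = upper(p_name);
    SELECT key_value INTO v_last FROM appids WHERE key_name = upper(p_name);
    RETURN v_last;   -- last value of the freshly allocated range
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT sp_newid('AB', 10);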

Andrus 





Re: [GENERAL] Using sequence name depending on other column

2005-03-19 Thread Andrus Moor
 I have table containing different types of documents (type A, B and C).

 Each document type must have separate sequential ID starting at 1

 ID of first inserted record of type A must be set to 1
 ID of first inserted record of type B must be also set to 1
 ID of second record of type A must be set to 2
 etc.

 Sequences aren't designed for doing this. If you aren't doing lots of
 updates, just lock the table and assign the next id as the current max id
 of that type + 1.

Bruno,

thank you for the reply.

Document IDs are almost never updated. There are about 25 document updates per
minute in peak hours (users create a copy of a document, and this creation
also writes a reference to the original document). The database can become
quite large (50 documents).

Which indexes should I create to get the max IDs fast (25 different document
types in total)?

I have 120 concurrent users inserting documents. Mostly they are using 10
different document types. Each type should have separate numbering. They
insert 30 documents per minute in peak hours.

Locking the whole table causes a delay for users wanting to insert another
type of document.
Is this reasonable? Is the delay noticeable in this case?

Is it possible to get a number of consecutive IDs from a sequence?
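For the max-ID question, a sketch with hypothetical names (documents, doctype,
doc_id), assuming 8.1+ where MIN/MAX can use an index: a composite index lets
the per-type maximum come from a single index probe.

CREATE INDEX documents_type_id_idx ON documents (doctype, doc_id);
-- then, while holding the lock for that document type:
SELECT max(doc_id) FROM documents WHERE doctype = 'A';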

Andrus.




