[GENERAL] 8.3.0 build fails due to parse error in VERSION script

2008-02-05 Thread peter pilsl


#make

make[3]: Entering directory 
`/opt_noraid/src/postgresql-8.3.0/src/interfaces/libpq'

echo '{ global:' >exports.list
gawk '/^[^#]/ {printf "%s;\n",$1}' exports.txt >>exports.list
echo ' local: *; };' >>exports.list
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
-fno-strict-aliasing  -fpic -shared -Wl,-soname,libpq.so.5 
-Wl,--version-script=exports.list  fe-auth.o fe-connect.o fe-exec.o 
fe-misc.o fe-print.o fe-lobj.o fe-protocol2.o fe-protocol3.o 
pqexpbuffer.o pqsignal.o fe-secure.o md5.o ip.o wchar.o encnames.o 
noblock.o pgstrcasecmp.o thread.o strlcpy.o  -L../../../src/port 
-L/usr/local/lib -lssl -lcrypto -lcrypt 
-Wl,-rpath,'/usr/local/pgsql8.3/lib' -o libpq.so.5.1

/usr/bin/ld:exports.list:1: parse error in VERSION script
collect2: ld returned 1 exit status
make[3]: *** [libpq.so.5.1] Error 1


The machine is very old and uses GNU ld 2.11.90.0.8, but it was able to 
compile and run PostgreSQL 8.0.13 without any trouble.


And I don't want to upgrade binutils (and probably the whole server) 
until I have to.



The problem is probably the format of exports.list. In 8.3.0 the make 
process creates a file exports.list from exports.txt with a gawk command 
if it does not exist yet (see the output of my make command above), and 
this exports.list then causes the trouble.
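For reference, the exports.list generated by those commands should look roughly like this (the function names here are illustrative, not copied from my file):

```
{ global:
PQconnectdb;
PQfinish;
PQexec;
 local: *; };
```

Older GNU ld releases reportedly do not accept such an anonymous version tag and insist on a version name before the opening brace (e.g. `VERS_1 { ... };`), which would explain a parse error on line 1.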


In 8.0.13 there is only exports.txt; no exports.list exists, and none is 
created during make.


To me it seems that exports.list is a kind of new format of exports.txt 
(comparing the two files: same content in a different format), and I 
wonder if I can tell 8.3.0 to use the old exports.txt file/format 
instead of the new exports.list.



I found a similar problem with 8.2beta2 on the pgsql-hackers list at
http://archives.postgresql.org/pgsql-hackers/2006-10/msg01507.php
and Tom Lane suggested in a follow-up that PostgreSQL's requirements 
on exports.list could be stripped down.



thnx,
peter







---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] extend "group by" to include "empty relations" ?

2007-12-13 Thread peter pilsl


I've two tables related via a id-field.


   Table "public.fn_kat"
 Column  |Type |
-+-+-
 id  | integer |
 kategorie   | text|


   Table "public.fn_dokumente"
 Column  |Type |
-+-+-
 kategorie   | integer |
 content | text|


I now want to list all entries in table fn_kat and count the number of
entries in fn_dokumente that reference each id.


# select k.kategorie,count(d.oid) from fn_kat k,fn_dokumente d where
k.id=d.kategorie group by k.kategorie;

kategorie | count
--+---
 1. Forschungsnetzwerk Erwachsenenbildung | 1
 1.1. Protokolle  | 3
 2. Sonstige Dokumente| 1


But there is a problem: there are also entries in fn_kat that don't
have corresponding entries in fn_dokumente, and these entries should be
listed too, with the proper count = 0!

How to achieve this?
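A sketch of what I think should work: an outer join, so categories without documents still show up, with count taken over the joined column so empty categories get 0:

```sql
SELECT k.kategorie, count(d.kategorie) AS count
FROM fn_kat k
LEFT JOIN fn_dokumente d ON d.kategorie = k.id
GROUP BY k.kategorie;
```

Counting d.kategorie (or d.oid) rather than * matters here: count(*) would return 1 for an empty category, because the outer join still produces one all-null row for it.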

thnx a lot,
peter






-- 
mag. peter pilsl - goldfisch.at
IT-Consulting
Tel: +43-650-3574035
Tel: +43-1-8900602
Fax: +43-1-8900602-15
skype: peter.pilsl
[EMAIL PROTECTED]
www.goldfisch.at




[GENERAL] proper export table to csv? multilineproblem.

2007-02-26 Thread peter pilsl


I need to export several tables as CSV. I use the \f command to set the 
separator and pipe the output of my select directly to a file.


Unfortunately that's all the control I have over the created CSV file. I 
cannot set the field delimiter and, which is critical to me, I cannot set an 
alternate record separator (currently a newline). The latter is important 
because many of my field values contain \n or \r, so the CSV import filter 
has a hard time distinguishing the record separator from a newline inside the data.


On the server I have PostgreSQL 7.2, so the COPY command does not know about 
the CSV option yet (nor does postgres 8).


Is there any ready-made tool to create flexible CSV files, or any trick I 
have not found yet?
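One workaround I can think of (table and column names below are placeholders): escape the embedded line breaks in the SELECT itself, so that each record really is one line, and let the import side translate them back:

```sql
SELECT id || E'\t' ||
       replace(replace(content, E'\r', '\\r'), E'\n', '\\n')
FROM mytable;
```

The E'' syntax is from newer PostgreSQL releases; on 7.2 the same backslash escapes work in plain '' strings. If replace() is not available on a version that old, the same substitution can be done in the client before writing the file.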


thnx,
peter

--
mag. peter pilsl - goldfisch.at
IT-Consulting
Tel: +43-650-3574035
Tel: +43-1-8900602
Fax: +43-1-8900602-15
skype: peter.pilsl
[EMAIL PROTECTED]
www.goldfisch.at



Re: [GENERAL] order by text-type : whitespaces ignored??

2006-12-13 Thread peter pilsl

Richard Huxton wrote:

peter pilsl wrote:


I just get my mind crossed here:

I do a simple select ordered on a text-type column. In my opinion entries 
with leading spaces should sort either first (or maybe last) in the list.

But in my select the whitespace just seems to be ignored:


That'll be down to your locale settings. You'll need to re-initdb to 
change them I'm afraid. It's locale=C that you want.




thnx - I can't mess with my locale settings because I also need postgres to 
sort German umlauts stored as Unicode properly, which was a difficult task to 
set up years ago. I had to use de_AT.UTF-8 to get what I want. If I change to 
locale=C again, I'll surely run into my old problems again:


http://www.thescripts.com/forum/thread173467.html  (you have participated in 
that thread also - thnx a lot for your help !!)


/usr/local/pgsql8/bin/pg_controldata /data/postgres/postgres8 | grep LC
LC_COLLATE:   de_AT.UTF-8
LC_CTYPE: de_AT.UTF-8


I worked around my current problem by sorting by replace(traeger,' ','0'), 
which of course is not very elegant, because it will produce strange results as 
soon as other entries contain a 0 in their text, but I can't think of anything 
else right now.


I still find correct sorting a difficult thing to do in postgres :)

thnx
peter


--
mag. peter pilsl - goldfisch.at
IT-Consulting
Tel: +43-650-3574035
Tel: +43-1-8900602
Fax: +43-1-8900602-15
skype: peter.pilsl
[EMAIL PROTECTED]
www.goldfisch.at



[GENERAL] order by text-type : whitespaces ignored??

2006-12-13 Thread peter pilsl


I just get my mind crossed here:

I do a simple select ordered on a text-type column. In my opinion entries with 
leading spaces should sort either first (or maybe last) in the list.

But in my select the whitespace just seems to be ignored:

Note that the second row has a leading space and should IMHO be the first entry.


# select traeger from wb_traeger where id>24 order by traeger;
 traeger
-
 GliA - gehirn | lern | impuls | agentur
  Sonstige Träger
 Volkshochschule Floridsdorf
 Zukunftszentrum Tirol
(4 rows)


Note that this does not only apply to leading spaces:


# select 'x'||traeger from wb_traeger where id>24 order by 'x'||traeger;
 ?column?
--
 xGliA - gehirn | lern | impuls | agentur
 x Sonstige Träger
 xVolkshochschule Floridsdorf
 xZukunftszentrum Tirol
(4 rows)



The type of my column is plain text:


# \d wb_akademie
     Table "public.wb_akademie"
 Column  | Type | Modifiers
---------+------+-----------
 traeger | text |




Is there any way to order so that entries with leading spaces are listed first??
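For what it's worth, on current PostgreSQL releases (9.1 and later) this can be solved per query without a re-initdb, by forcing the C collation for that one ORDER BY:

```sql
SELECT traeger FROM wb_traeger WHERE id > 24
ORDER BY traeger COLLATE "C";
```

That sorts strictly by character value, so leading spaces come first, while the rest of the database keeps its de_AT.UTF-8 behaviour.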

thnx,
peter






[GENERAL] help with rules please

2006-06-07 Thread peter pilsl


I don't succeed in writing my rules properly.

I need the following rules:

when I INSERT/UPDATE a table and a certain condition is true, then a 
special field in this data row should be updated too.



I got this far:

# \d testa
   Table "public.testa"
 Column | Type | Modifiers
+--+---
 x1 | text |
 x2 | text |
Rules:
r1 AS
ON INSERT TO testa
   WHERE new.x1 = 'house'::text DO  UPDATE testa SET x2 = 'mouse'::text


This works fine: when I insert a row with x1='house', then x2 gets set to 
'mouse'.


BUT:

i) it always updates all rows in the table instead of only the current 
row. I guess this is a huge performance hit on big tables :)  I tried 
to add a where oid=new.oid or something like that, but it did not work.


ii) the above rule does not work on UPDATE, because I get a deep 
recursion: each update causes another update on the same table, which 
makes postgres abort with a nested-loop error (happily postgres detects 
the recursion :)
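The usual advice for this kind of thing is a trigger instead of a rule: a BEFORE trigger modifies the row in flight, so there is no extra UPDATE and no recursion. A sketch (requires the plpgsql language to be installed):

```sql
CREATE FUNCTION testa_fixup() RETURNS trigger AS $$
BEGIN
    IF NEW.x1 = 'house' THEN
        NEW.x2 := 'mouse';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER testa_fixup_trig BEFORE INSERT OR UPDATE ON testa
    FOR EACH ROW EXECUTE PROCEDURE testa_fixup();
```

Since the trigger only changes NEW before it is written, it naturally affects just the current row and fires safely on both INSERT and UPDATE.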


thnx for any help
peter





--
mag. peter pilsl
goldfisch.at
IT- & dataconsulting
tel: +43 650 3574035
tel: +43 1 8900602
fax: +43 1 8900602 15
[EMAIL PROTECTED]



Re: [GENERAL] count( only if true)

2005-10-12 Thread peter pilsl

Martín Marqués wrote:


I'm not sure what exactly it is you want, but check this:

SELECT count(*) FROM tab WHERE expresion

There you get a count of tuples that satisfy the expresion. What NULL values 
are you talking about? Can you hand an example?




thnx.



# select * from test2;
 x | id
---+
 a |  2
 b |  1
 c |  4
 d |  6
 e |  3
 e |  6
(6 rows)


knowledge=# select x,count(id<5) from test2 group by x;
 x | count
---+---
 e | 2   <  this is unexpected
 b | 1
 c | 1
 d | 1   < !
 a | 1
(5 rows)

knowledge=# select x,count(case when id<5 then 't' else null end) from 
test2 group by x;

 x | count
---+---
 e | 1  <- thats the result I want !!!
 b | 1
 c | 1
 d | 0
 a | 1
(5 rows)


The problem is that count(id<5) is exactly the same as count(id<10), 
because count counts all values that are not null, and id<5 is a boolean 
expression that only evaluates to null if id is null; otherwise it is 
't' or 'f', both of which are not null.



The where-clause is nice, but not sufficient; for example I also need 
queries like


select x,count(id<5),count(id>15) from test2 group by x;
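Two ways to spell that, sketched below: the portable one sums a CASE, and on PostgreSQL 9.4 or later there is the FILTER clause, which says it directly:

```sql
-- portable
SELECT x,
       sum(CASE WHEN id < 5  THEN 1 ELSE 0 END) AS below_5,
       sum(CASE WHEN id > 15 THEN 1 ELSE 0 END) AS above_15
FROM test2 GROUP BY x;

-- PostgreSQL 9.4 and later
SELECT x,
       count(*) FILTER (WHERE id < 5)  AS below_5,
       count(*) FILTER (WHERE id > 15) AS above_15
FROM test2 GROUP BY x;
```

Both return 0 (not 1) for groups where no row matches, which is exactly the difference to count(boolean-expression).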


thnx a lot,
peter



--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]



[GENERAL] count( only if true)

2005-10-12 Thread peter pilsl


The count aggregate counts every expression that does not evaluate to null.

I would like to have a count that counts all values that are true.

Do I really need to write count(case when expression then 't' else 
null end) to implement this?


thnx,
peter


--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]



Re: [GENERAL] renumber id's in correct order (compact id's)

2005-06-21 Thread peter pilsl

Martijn van Oosterhout wrote:


thnx a lot. But it does not work as expected, because the update statement 
is not committing for the whole table during execution. So the 
resulting order can differ from the original order, which is what 
I try to avoid.



Well, that's because you're typing the query wrong. Because you said:

where t2.id <= voev_content.id

It's going to order them by the id (which you didn't show in your query
which is why it's not obvious). If you want to order by rank you should
do (your query search-replace id for rank):



;) thnx a lot. While I was reading the manuals to reveal the secrets of 
transaction levels in update operations, I simply missed the obvious: a 
typo when moving the command from my test table to the real-world table.


Thnx a lot for your help. Now everything is working perfectly.
peter



Re: [GENERAL] renumber id's in correct order (compact id's)

2005-06-21 Thread peter pilsl

Martijn van Oosterhout wrote:

How about:

update table set id = (select count(*) from table t2 where t2.id <= table.id);

Ugly as hell, but it should work.




thnx a lot. But it does not work as expected, because the update statement 
is not committing for the whole table during execution. So the 
resulting order can differ from the original order, which is what 
I try to avoid.



Example with the real-world database; entries with rank=0 are excluded from 
the query.



knowledge=# select  rank,kategorie,titel from voev_content where 
kategorie=5 order by rank;


 rank | kategorie |titel
--+---+--
0 | 5 | hauptaktivitäten
3 | 5 | test
4 | 5 | startseite
5 | 5 | Salzburger Gespräche
(4 rows)

knowledge=# update voev_content set rank = (select count(*) from 
voev_content t2 where t2.id <= voev_content.id and t2.kategorie=5 and 
t2.id !=0) where kategorie=5 and rank!=0;


UPDATE 3


knowledge=# select  rank,kategorie,titel from voev_content where 
kategorie=5 order by rank;

 rank | kategorie |titel
--+---+--
0 | 5 | hauptaktivitäten
1 | 5 | Salzburger Gespräche
2 | 5 | test
3 | 5 | startseite
(4 rows)


Note that 'test' is now ordered second (excluding the rank=0 entry), 
while it was ordered first in the original configuration.


thnx,
peter



Hope this helps,

On Tue, Jun 21, 2005 at 10:06:40AM +0200, peter pilsl wrote:



I've entries with id's like:

x | id
---+
b |  1
a |  4
e |  5
c | 12
d | 19
(5 rows)


now I'd like to have the id in continuing number to get:

x | id
---+
b |  1
a |  2
e |  3
c |  4
d |  5
(5 rows)


The simplest way would be to create a sequence and update the whole 
table using nextval on the sequence. Unfortunately UPDATE does not know 
about an ORDER BY clause.


Any Idea,
thnx,
peter





--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]







--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]



[GENERAL] renumber id's in correct order (compact id's)

2005-06-21 Thread peter pilsl



I've entries with id's like:

 x | id
---+
 b |  1
 a |  4
 e |  5
 c | 12
 d | 19
(5 rows)


now I'd like to have the id in continuing number to get:

 x | id
---+
 b |  1
 a |  2
 e |  3
 c |  4
 d |  5
(5 rows)


The simplest way would be to create a sequence and update the whole 
table using nextval on the sequence. Unfortunately UPDATE does not know 
about an ORDER BY clause.
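On PostgreSQL 8.4 and later this is a one-statement job with a window function, since row_number() can be computed in the desired order (the table name mytable is a placeholder):

```sql
UPDATE mytable
SET id = r.rn
FROM (SELECT id, row_number() OVER (ORDER BY id) AS rn
      FROM mytable) r
WHERE mytable.id = r.id;
```

Because the numbering subquery is evaluated against the table's state at the start of the statement, the original order cannot be disturbed mid-update.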


Any Idea,
thnx,
peter





--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]



[GENERAL] optimal hardware for postgres?

2005-04-24 Thread peter pilsl
I'm about to buy a new server which will mainly run a PostgreSQL 
database and an Apache 2 with mod_perl, and will do little more than 
deliver loads of index-based select queries. But it will have to do a 
lot of them. And it will be the master for replication to slaves.
As always, we don't know how many selects we'll have to face. 
There will be "many", and the current Athlon 1800+ with 1GB RAM is too 
slow (in its defence: it has to perform loads of other tasks as well).

I was wondering which hardware would fit this server best. I'm thinking 
of 2GB RAM and a fast RAID1, and I'm not sure about the CPU.
I was considering 64-bit AMD: an Athlon 64 FX-51 or an Opteron 2.2GHz.
The host system will be a 64-bit Linux.

Does postgres benefit from 64-bit AMDs, or would I be better off with a 
cheaper AthlonXP, or even a Pentium or a more expensive Xeon?
Or is my question flawed altogether, because the CPU is only 20% of the whole system?

Any comments appreciated,
thnx,
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fae +43 699 4 3574035
[EMAIL PROTECTED]


Re: [GENERAL] Overload

2005-04-16 Thread peter pilsl
Stefan Krompass wrote:
>
Does PostgreSQL offer information
on the additional workload (execution costs) caused by a query? In case 
it does not: Does anybody have an idea how I get an estimate for the 
execution costs before executing a query?
I can't add to your question, but two nightly thoughts:
i) if your SQL server is tortured by some application, it's very likely 
that you have only a limited range of different select types. You could 
measure the exact costs manually and use these values for your problem.

ii) The workload might depend on your specific system: on your RAM, 
hard disk etc. On one system CPU speed might be the bottleneck, on 
others the RAM, and so on. And the cost of a query on your system is not 
always the same, depending on swap, cache and so on.
Maybe you should consider a second SQL server to lower the load.

best,
peter
--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] many similar indexbased selects are extremely slow

2005-01-01 Thread peter pilsl
PostgreSQL 8:
I use a bigger PostgreSQL table to store information and keep the id value 
of each row in my application's memory for faster access.
My application can calculate a list of needed ids in a very short 
time and then wants to retrieve all rows corresponding to these ids.

So in fact I perform a lot of operations like:
select field1,field2,field3 from mytable where id=XX;
There is an index on the id field and the id field is of type OID, so 
everything should be quite fast. Unfortunately it is not.

On a 2GHz machine with 1GB RAM it takes approx. 10 seconds to fetch 10.000 
rows. In this test scenario I only fetch the OID and no other column.
I don't understand this. Am I expecting far too much? Is 10 seconds for the 
retrieval of 10.000 OIDs a fine value? I want it to be less than one 
second, and from my experience with postgres this operation is extremely 
slow compared to the impressive speed of most other operations.

I also tried to use the IN operator, which is much slower. Is there 
any other way to speed things up? I can order the list of ids to 
retrieve in my application if there is a way to tell postgres not to search 
the whole index every time but somehow "do better".

If it is of any interest, here is the table. The id is stored as id_artikel.

              Table "public.artikelindex"
        Column         |            Type             | Modifiers
-----------------------+-----------------------------+-----------
 autor_artikel | text|
 titel_artikel | text|
 jahrgang_zeitschrift  | integer |
 jahr_zeitschrift  | character varying(20)   |
 heftnummer| character varying(30)   |
 seitenzahl_artikel| character varying(30)   |
 bemerkungen_artikel   | text|
 deskriptoren_alt  | text|
 deskriptoren_neu  | text|
 personennamen_artikel | text|
 orte_artikel  | text|
 id_artikel| oid |
 id_titel  | oid |
 cdate | timestamp without time zone | default 
('now'::text)::timestamp(6) with time zone
 udate | timestamp without time zone | default 
('now'::text)::timestamp(6) with time zone
 uid   | oid |
 gid   | oid |
 mod   | boolean |
Indexes:
"id_artikel_idx" btree (id_artikel)
"id_titel_idx" btree (id_titel)
"idx_artikelindeax_autor" btree (autor_artikel)
"idx_artikelindex_fingerprint" btree (id_artikel)
"idx_artikelindex_jahr" btree (jahrgang_zeitschrift)
"idx_artikelindex_jahrgang" btree (jahr_zeitschrift)
"idx_artikelindex_zeitschrift" btree (id_titel)
Rules:
delete_val AS
ON DELETE TO artikelindex DO  UPDATE counter SET val = counter.val + 1
  WHERE counter.tab::text = 'artikelindex'::character varying::text
insert_val AS
ON INSERT TO artikelindex DO  UPDATE counter SET val = counter.val + 1
  WHERE counter.tab::text = 'artikelindex'::character varying::text
update_val AS
ON UPDATE TO artikelindex DO  UPDATE counter SET val = counter.val + 1
  WHERE counter.tab::text = 'artikelindex'::character varying::text

And more: here is my retrieving program. I use perl and the DBI module, 
with the following code snippet:

--
my $sth=$dbh->prepare(
   'select OID from artikelindex where id_artikel=?');
foreach (@id) {
  my $ret=$sth->execute($_);
  my $x=$sth->fetchrow_arrayref;
}
-
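One thing worth trying, sketched against the same table: fetch the whole id list in a single query instead of one execute per id, e.g. with = ANY over an array (newer 8.x syntax; the literal ids are placeholders for the application-computed list):

```sql
SELECT id_artikel, autor_artikel, titel_artikel
FROM artikelindex
WHERE id_artikel = ANY (ARRAY[3492001, 3492002, 3492003]);
```

This turns thousands of round trips, each with its own parse/plan/execute and network latency (usually what eats the seconds), into one statement doing an indexed probe per id.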

thnx a lot for any idea,
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] 7.1.3: dataloss: FATAL 2: XLogFlush: request is not satisfied

2004-12-16 Thread peter pilsl
We had a servercrash the other night and while none of the datadisks was 
affected, the db-server (7.1.3) is not starting again:

Dec 16 14:48:44 alpha postgres[12432]: [1] DEBUG:  database system 
shutdown was interrupted at 2004-12-16 14:46:30 CET
Dec 16 14:48:44 alpha postgres[12432]: [2] DEBUG:  CheckPoint record at 
(8, 836622368)
Dec 16 14:48:44 alpha postgres[12432]: [3] DEBUG:  Redo record at (8, 
836622368); Undo record at (0, 0); Shutdown FALSE
Dec 16 14:48:44 alpha postgres[12432]: [4] DEBUG:  NextTransactionId: 
635329587; NextOid: 3492600
Dec 16 14:48:44 alpha postgres[12432]: [5] DEBUG:  database system was 
not properly shut down; automatic recovery in progre
ss...
Dec 16 14:48:44 alpha postgres[12432]: [6] DEBUG:  redo starts at (8, 
836622432)
Dec 16 14:48:44 alpha postgres[12432]: [7] DEBUG:  ReadRecord: record 
with zero len at (8, 837544592)
Dec 16 14:48:44 alpha postgres[12432]: [8] DEBUG:  redo done at (8, 
837544556)
Dec 16 14:48:44 alpha postgres[12432]: [9] FATAL 2:  XLogFlush: request 
is not satisfied

/usr/local/pgsql/bin/postmaster: Startup proc 13588 exited with status 
512 - abort

I did a new initdb and restored from the backup, but there is one minor 
database that had not been in the backup for weeks (blame me), so I would be 
very happy if there is a way to start postgres on the old data directory and 
retrieve the data.

Isn't there any way to let postgres ignore the whole XLog and just serve 
what it has? The table in question surely was not updated while the 
server crashed.

thnx,
peter
--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] UNION with more restrictive DISTINCT

2004-12-15 Thread peter pilsl
I'd like to UNION two queries, but the distinct criterion for UNION should 
not be all columns in the queries, only one.

Example, two tables:
test=# select id,name from t1;
 id | name
+--
  1 | bob
  2 | mike
(2 rows)
test=# select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)
# select id,name from t1 union select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike
  2 | mike j.
(3 rows)
Now I want a construct that returns only one row for each id. If 
there are different names for an id in the different tables, the 
name from t2 should be chosen.

like:
# select id,name from t1 UNION ON (id) AND SUPERMAGICPREFER (t2.name) 
select id,name from t2;
 id |  name
+-
  1 | bob
  2 | mike j.
(2 rows)

What is an appropriate approach to this? If I use my UNION query as a 
subquery for a SELECT DISTINCT id, I lose the name, which is important.
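PostgreSQL's DISTINCT ON extension can express the preference directly: tag each source with a priority, then keep the first row per id (a sketch):

```sql
SELECT DISTINCT ON (id) id, name
FROM (
    SELECT id, name, 1 AS prio FROM t2
    UNION ALL
    SELECT id, name, 2 AS prio FROM t1
) u
ORDER BY id, prio;
```

DISTINCT ON keeps the first row of each id group in ORDER BY order, so the t2 name wins whenever both tables have the id.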

thnx.
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


Re: [GENERAL] select single entry and its neighbours using direct-acess

2004-12-06 Thread peter pilsl
Pierre-Frédéric Caillaud wrote:

select id from mytable where id=45 order by name,name2;

Why do you want to select id if you already know it ?
Do you not want to specify a starting value for name and name2 ?
I'll presume you want to select a row by its 'id' and then get the  
previous and next ones in the name, name2 order. I'll guess the id is  
UNIQUE so these two other rows won't have the same id.

If I guessed right I have the solution, if I'm not please explain 
what  you wanna do more precisely ;)

Sorry for being unclear, but you guessed right: ID is UNIQUE, and I want 
to select a row by its ID and also get the previous and next ones in the 
name, name2 order.

For the selected row I need all data fields, and for the next and previous 
ones I need only the ID (to reference them on the data output page for a 
certain row).

I'm looking forward to your solution.
thnx a lot,
peter



and then I'd like to select the two entries that would come before 
and  after according to the order "name,name2";
id is not ordered, but there is an index on  (name,name2) so the 
needed  infomation about previous, next should be stored somewhere in 
this index.

My current solution is to read all the data without the WHERE-clause 
and  then fetch the needed ones, which is quite time-demanding.

thnx,
peter



--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] select single entry and its neighbours using direct-acess to index?

2004-12-06 Thread peter pilsl
Is there an easy solution for this?
I'd like to select a single entry from a table plus the entries that 
come before and after it in a certain order.

like
select id from mytable where id=45 order by name,name2;
and then I'd like to select the two entries that would come before and 
after according to the order "name,name2".
id is not ordered, but there is an index on (name,name2), so the needed 
information about previous/next should be stored somewhere in that index.

My current solution is to read all the data without the WHERE clause and 
then fetch the needed rows, which is quite time-consuming.
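A sketch of how the neighbour lookup can be expressed with row-wise comparison, which on PostgreSQL 8.2 and later follows SQL lexicographic semantics and can use the (name, name2) index (:name/:name2 stand for the values of the id=45 row):

```sql
-- next entry in (name, name2) order
SELECT id FROM mytable
WHERE (name, name2) > (:name, :name2)
ORDER BY name, name2
LIMIT 1;

-- previous entry
SELECT id FROM mytable
WHERE (name, name2) < (:name, :name2)
ORDER BY name DESC, name2 DESC
LIMIT 1;
```

Each query touches only one end of the index instead of scanning the whole table.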

thnx,
peter

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] problems with lower() and unicode-databases

2004-09-30 Thread peter pilsl
postgres 7.4 on linux, glibc 2.2.4-6
I have a table containing Unicode data, and the lower() function does not 
work properly. While it lowers standard letters like A->a, B->b ..., it 
fails on special letters like German umlauts (Ä, Ö ...), which are simply 
left untouched.

Everything else (sorting etc.) is working fine, and LC_COLLATE, LC_CTYPE 
and all the other locales were set properly to 'de_AT.UTF-8' (that's what my 
mandrake system calls the needed locale; on most other systems it's 
called 'de_AT.utf8') when doing initdb.

The database encoding is unicode, but I also tried SQL_ASCII (just to 
give it a try), with the same problem.

What's the problem here?
The following output was copied from a unicode terminal into the 
newsreader. It looks fine here, so I think you can all read it.

# select oid,t,lower(t),length(t) from test order by t;
  oid  |   t   | lower | length
---+---+---+
 17257 | a | a |  1
 17268 | A | a |  1
 17291 | ä | ä |  1
 17265 | Ä | Ä |  1
 17269 | B | b |  1
 17275 | ñ | ñ |  1
 17277 | Ñ | Ñ |  1
 17262 | ö | ö |  1
 17266 | Ö | Ö |  1
 17267 | Ü | Ü |  1
# /usr/local/pgsql/bin/pg_controldata /data/postgresql_de/ | grep LC
LC_COLLATE:   de_AT.UTF-8
LC_CTYPE: de_AT.UTF-8

I would be very happy to get a "solution", but a workaround would be 
better than nothing ;)  perl on the same system can read the data from 
the database and lowercase it without any problems, but this is 
too much of a *WORK* *AROUND* :)

thnx a lot,
peter
ps: of course upper() does not work either!!
pps: I looked up the changes in newer postgresql versions, but my topic 
did not appear in the list, so I didn't try the new 7.4.5. I think it's 
merely a problem with settings rather than with postgreSQL (at least I hope so ...)

--
mag. peter pilsl
goldfisch.at
IT-management
tel +43 699 1 3574035
fax +43 699 4 3574035
[EMAIL PROTECTED]


[GENERAL] 'order by' does "wrong" with unicode-chars (german umlauts)

2003-09-19 Thread peter pilsl

postgres 7.3.2

I store Unicode data in postgresql. The data is retrieved via web interfaces,
processed with perl and then stored in postgresql (and vice versa).

All goes well, with one problem: when performing a "select * order by
field" query the result is not what I expected.

German umlauts (e.g. Ö) are stored as double chars ("Ö" comes out as "Ã–") and only the
first char seems to be taken into account when sorting.

So it happens that the order is like:

Österreich
Amerika
Pakistan

instead of

Amerika
Österreich
Pakistan


How can I deal with this problem? Of course converting to latin before storing
would be a solution, but we plan to offer support for many non-latin languages
later, and the point of unicode is to get rid of all this converting stuff
after all.

thnx,
peter



-- 
IT-Consulting
mag. peter  pilsl
tel:+43-699-1-3574035
fax:+43-699-4-3574035
[EMAIL PROTECTED]
http://www.goldfisch.at






[GENERAL] drop column

2001-10-18 Thread Peter Pilsl

Is there a trick to drop a column from a table, or do I need to
recreate the table and drop the old one?
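In case it helps others reading the archive: since PostgreSQL 7.3 this is simply ALTER TABLE ... DROP COLUMN; before that, the recreate dance is the trick (table and column names below are placeholders):

```sql
-- 7.3 and later
ALTER TABLE mytable DROP COLUMN obsolete_col;

-- older versions
BEGIN;
CREATE TABLE mytable_tmp AS SELECT keep_a, keep_b FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_tmp RENAME TO mytable;
COMMIT;
```

Indexes, constraints and grants on the old table are lost on the recreate route and have to be recreated by hand.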

thnx,
peter

-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]

pgp-key available




[GENERAL] query on large tables

2001-09-01 Thread Peter Pilsl

I have a query on a large table. The table consists of approx. 100.000
entries, and the where-clause checks 4 chars against a 100-char text in
the table. So this is hard work, and the query takes about 4 seconds on
my system. This is quite ok, but the problem is that I want to prepare the
result for human readers and therefore split it into several pages. So I
first query once to get the number of results; based on this number I
create a navigation bar and construct a limit clause.
With this limit clause I query a second time to fetch and display the
entries.

In fact I ask postgres the same query twice (the difference is only
the limit section), and this takes 8 instead of 4 seconds.

Any way to do better ?
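One option on newer PostgreSQL (8.4 and later) is to fetch the page and the total in one pass with a window aggregate; a sketch with placeholder names:

```sql
SELECT *, count(*) OVER () AS total_hits
FROM mytable
WHERE description LIKE '%term%'
ORDER BY id
LIMIT 20 OFFSET 40;
```

Every returned row carries the total match count, so the navigation bar can be built from the first page's result without a second scan.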


thnx,
peter


-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]

pgp-key available




[GENERAL] select, where and null-values (or: select null<>'1' is fuzzy)

2001-06-12 Thread Peter Pilsl

I have a problem when selecting values out of a table.

manana=# select * from test;
 l1 | l2 | l3
++
 1  | 2  | 3
 2  | 3  | 4
 3  | 4  | 5
 4  | 5  | 6
| 5  | 6
(5 rows)

where l1 is NULL in the last row. Now I do

manana=# select * from test where l1<>1;
 l1 | l2 | l3
++
 2  | 3  | 4
 3  | 4  | 5
 4  | 5  | 6
(3 rows)
and don't get the row with the null entry!
In fact the problem can also be described like this:

manana=# select NULL<>'1';
 ?column?
----------
 
(1 row)

This yields neither true nor false.

Why does it behave like this, and how can I work around it?
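This is SQL's three-valued logic: any comparison with NULL yields NULL
(unknown), and WHERE keeps only rows whose condition is true. The
workaround is to test for NULL explicitly. A minimal sketch with
Python's sqlite3, which follows the same rule:

```python
import sqlite3

# Recreate the table from the post, with NULL in l1 of the last row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (l1 INTEGER, l2 INTEGER, l3 INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)",
                 [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6), (None, 5, 6)])

# NULL <> 1 is neither true nor false, so the NULL row is filtered out.
plain = conn.execute(
    "SELECT l2 FROM test WHERE l1 <> 1 ORDER BY rowid").fetchall()

# Handle NULL explicitly to include that row as well.
with_null = conn.execute(
    "SELECT l2 FROM test WHERE l1 IS NULL OR l1 <> 1 ORDER BY rowid").fetchall()
```

Modern PostgreSQL also offers `l1 IS DISTINCT FROM 1`, which treats NULL
as an ordinary comparable value.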

thnx,
peter





[GENERAL] find position of an special entry in a select-result

2001-04-23 Thread Peter Pilsl

I perform a complex SELECT on a database with 10,000 entries,
and with various WHERE clauses I get 200 results. Now I want to get
the position of one specific entry (identified by its OID) in this
result set.

Background: the user can fully customize a query in an online database
(sort by, search, entries per page, ...). When he adds a new entry to
that database, the application should automatically show exactly the
page where the new entry is displayed.
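One client-side approach: run the user's customized query once, fetching
only the ordered ids, locate the target OID in that list, and divide by
the page size. A sketch in Python with sqlite3 (the `entries` table and
its ordering are invented stand-ins for the user's query):

```python
import sqlite3

# Hypothetical table standing in for the user's customizable result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (oid INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO entries VALUES (?, ?)",
                 [(i, "title %03d" % i) for i in range(1, 201)])

def page_of_entry(conn, target_oid, per_page):
    # Run the same ORDER BY / WHERE as the page view, ids only,
    # and find the target's rank client-side.
    ids = [r[0] for r in conn.execute(
        "SELECT oid FROM entries ORDER BY title DESC")]
    position = ids.index(target_oid)   # 0-based rank in the result
    return position // per_page        # 0-based page number

page = page_of_entry(conn, target_oid=200, per_page=20)
```

Newer PostgreSQL versions could compute the rank server-side with the
`row_number()` window function instead of transferring all ids.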

thnx,
peter






Re: [GENERAL] bind postmaster to address

2001-04-22 Thread Peter Pilsl

On Sun, Apr 22, 2001 at 03:37:12PM -0400, Tom Lane wrote:
> Peter Pilsl <[EMAIL PROTECTED]> writes:
> > Is there are way to use postmaster with the -i option (accept
> > tcp-ip-connections) and bind only to certain addresses interfaces ?
> 
> In 7.1 there's a postmaster switch to bind only to one specific IP
> address, rather than all addresses of the machine.  See the docs.
> 

Thanx. Do you know about 7.0.2 as well? I didn't find anything, but I
have a server here that is bound somehow (maybe its source was hacked).


peter





[GENERAL] bind postmaster to address

2001-04-22 Thread Peter Pilsl

Is there a way to use postmaster with the -i option (accept
TCP/IP connections) and bind it only to certain addresses/interfaces?
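For reference, in later PostgreSQL versions this became a plain
configuration setting rather than a postmaster switch. A sketch of the
relevant postgresql.conf lines (parameter names as I recall them from
later releases, not available in the 7.0 era this thread is about):

```
# postgresql.conf
# 7.1 introduced virtual_host (single address); 8.0 replaced it with:
listen_addresses = 'localhost, 192.168.0.10'   # bind only these interfaces
port = 5432
```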

thnx,
peter




[GENERAL] Re: problems with pg_dumplo (was Re: backup with blobs)

2001-04-20 Thread Peter Pilsl

On Fri, Apr 20, 2001 at 12:48:36PM +0200, Karel Zak wrote:
> > 
> > When working on my own database things work perfectly. However I have a
> > reproducible problem on a different database when importing blobs
> > into a single table while the old blobs still exist (no matter whether
> > I use -r or not).
> 
>  Do you have right permissions for DB and dirs with LO dumps?
>

yes, I'm in real god-mode.
 
> > However: the problem is not happening on a new created database, so
> > maybe there is a problem on my system. 
> 
> Hmm.. may be, sounds curious if everything is right on the new DB.
> 

It is curious, and I don't want to play around any more on my test
machine. Later in the weekend I will jump onto the production server,
dump all the stuff, and try to restore the data in a freshly installed
7.1 environment.

I'll post my success/failure here then.

thnx, peter






[GENERAL] Re: problems with pg_dumplo (was Re: backup with blobs)

2001-04-20 Thread Peter Pilsl

On Fri, Apr 20, 2001 at 10:27:11AM +0200, Karel Zak wrote:

> > 
> > Thanx to Tom for this great tool. 
> 
>  Sure :-)
>

sorry !! I meant to say: thanks for giving this tool to me. Even more
thanks to you for writing this great tool !! :)
 
> 
>  pg_dumplo -i (without -r) add *new* LO to DB and update LO oid in some
> tab.attr only. The option '-r' remove old LO and import new.
> 
> Try:
> 
>  pg_dumplo -a -d my_db -s /my_dump/dir
>  pg_dump [with relevant options for your DB] > my.dump
>  DROP DATABASE my_db;
>  CREATE DATABASE my_db;
>  psql my_db < my.dump
>  pg_dumplo -i -d my_db -s /my_dump/dir
> 

When working on my own database things work perfectly. However, I have
a reproducible problem on a different database when importing blobs
into a single table while the old blobs still exist (no matter whether
I use -r or not).

However, the problem does not happen on a newly created database, so
maybe there is a problem on my system.

I will install 7.1 on a different machine and try if I can import all
my data before migrating to 7.1 on a production-server.

When I want to migrate _all_ data from one 7.1 to another 7.1: does a
brute-force file copy do it?

Thanx a lot for your help,

peter




[GENERAL] backup with blobs

2001-04-18 Thread Peter Pilsl

I'm currently using postgreSQL 7.0.2 and have big tables with a lot of
blobs. pg_dump does not cover these blobs.

When my system crashes, or when I want to migrate to 7.1, will a pure
file backup be enough, or will I have to write my own tools for backing
up and restoring these blobs and their OIDs in the corresponding tables?

thnx,
peter




Re: [GENERAL] MS-ACCESS -> PGSQL

2000-11-23 Thread Peter Pilsl

On Thu, Nov 23, 2000 at 01:21:14PM +0100, Enrico Comini wrote:
> I have a customer that works on a MDB database.
> 
> There is a simple way to port data in the PGSQL database in his site (i use
> php) ?
> 

there is an ODBC driver for PostgreSQL for Windows applications, which
you can use for copy/paste in Access, I guess ...

hope this helps,
peter




Re: [GENERAL] TEXT and BLOBS

2000-11-09 Thread Peter Pilsl

On Thu, Nov 09, 2000 at 09:00:13AM -0300, Martin A. Marques wrote:
> > I cant speak for PHP, but in perl or in C you have a documented module
> > that will handle this things.  (in perl its DBD::Pg). You dont need to
> > save the text to a file first, you can write directely into the blob.
> >
> > The principal things are explained in the postgres-programmers-guide
> > you can find at
> > http://www.at.postgresql.org/devel-corner/docs/programmer/
> > in chapter 16 'Large Objects'
> >
> > Although, dealing with blobs seems to be a dangerous things, cause I
> > didnt find a proper backup-method until now :-(
> 
> Which would that method be? I know that with Postgres 7.1 we will have a 
> brand new backup system (WAL), so I'm looking forward to it. ;-)
> 

I always thought backing up the files plus pg_dump would be enough. (I
was sure there would be a way to restore the blobs out of the files
when I needed it.)

Now I have a table with blob OIDs and the xin* files, and I can't
restore the blobs :-( and no one seems to be able to help me.

I was thinking about writing some tools that scan the tables for OIDs,
save the corresponding blobs to files, and later restore the blobs and
update the tables. But if WAL will do it for me, I'm also looking
forward to it. When will it come, and where can I find information
about it?

peter







[GENERAL] format of blobs ? backup blobs ?

2000-11-07 Thread Peter Pilsl


When upgrading from 6.5 to 7.0 we backed up all data with pg_dump, and
so we lost our blobs. I just have the xinv and xinx files from a file
backup. The blobs are stored in the xinv files, but there is a long
leading header. Can anyone point me to the exact format of these files,
so I can write a script to restore the blobs?

Second question: how do you deal with blobs when upgrading / backing up
/ restoring?

thanx,
peter




[GENERAL] restore blobs (xinv-files)

2000-11-06 Thread Peter Pilsl

We just updated from 6.5 to 7.0; we made a dumpall and we saved all
the files too. Restoring the data stored with dumpall didn't bring
back the blobs. How can I restore the blobs out of the pure file dump?

thanks,
peter





[GENERAL] blobs dont rollback ?

2000-10-23 Thread Peter Pilsl

I recently ran into a big problem regarding blobs and rollback with postgreSQL 6.5.3.

A table contains many entries, and each entry includes a single blob
OID. In a database-processing Perl script I need to delete such a
blob, so I perform the lo_unlink, and later I roll back the session
without committing (and without autocommitting) in between. In my
opinion this should have left the original blob unchanged.

But it seems that it is badly damaged: it is not deleted, yet it is
not functional, and every future operation causes strange effects.
(The strangest effect is that whenever I try to work with this blob
inside a transaction I don't get a single error, but after committing
the whole stuff there is no change in the database. It's like
accessing the blob prevents committing.)

These effects are completely different from the effects that occur
when just using an invalid blob OID.

---

this is what pgsql tells me at the prompt:

32609 is the OID of the 'damaged' blob;
32600 is no blob OID at all;
32545 is a valid blob OID.

# \lo_unlink 32609;
NOTICE:  UserAbortTransactionBlock and not in in-progress state
ERROR:  mdopen: couldn't open xinv32609: No such file or directory

# \lo_unlink 32600;
NOTICE:  UserAbortTransactionBlock and not in in-progress state
oekoland=# 

# \lo_unlink 32545;
NOTICE:  UserAbortTransactionBlock and not in in-progress state
lo_unlink 32545



if this is of interest, here is the raw Perl code that causes the problem:

...
 $dbh->{AutoCommit} = 0;
...
 show_blob();                                     # works fine !!
...
 $lobj_fd = $dbh->func($blob_oid, 'lo_unlink');   # DBD::Pg large-object unlink
 bloberror('lo_unlink failed') if not defined($lobj_fd);
...
 $dbh->rollback;
...
 show_blob();                                     # gets an error !!
...
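For contrast, this is how a fully transactional delete is expected to
behave: rolling back leaves the data untouched. A minimal sketch with
Python's sqlite3 using an ordinary row as a stand-in for the blob
(which is exactly the distinction that bites here: 6.5-era PostgreSQL
large objects were not fully transaction-safe):

```python
import sqlite3

# Stand-in table: one "blob" row keyed by its OID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blobs (oid INTEGER PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO blobs VALUES (32545, 'payload')")
conn.commit()

# Delete inside a transaction, then roll back instead of committing.
conn.execute("DELETE FROM blobs WHERE oid = 32545")
conn.rollback()

# The row is back, unchanged -- the behaviour the post expected from
# lo_unlink + rollback.
row = conn.execute("SELECT data FROM blobs WHERE oid = 32545").fetchone()
```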


thanx for any help,
peter
