Re: [HACKERS] pg_dump issue : Cannot drop a non-existent(?) trigger

2005-03-26 Thread Devrim GUNDUZ
Hi,

On Fri, 25 Mar 2005, Tom Lane wrote:

> Does "\d pg_trigger" show that the tgargs column is of type bytea?

Umm, no:

  tgnargs | smallint | not null

> tgargs, not tgnargs.

Oops, sorry. OK, tgargs is of type bytea.

> Also, get the OID for this pg_trigger row and see if it shows up in
> objid or refobjid of any rows of pg_depend.

Yes, it is there:

prod=# SELECT * FROM pg_depend WHERE objid = 39053;
-[ RECORD 1 ]-----------
classid     | 16412
objid       | 39053
objsubid    | 0
refclassid  | 1259
refobjid    | 37564
refobjsubid | 0
deptype     | a
-[ RECORD 2 ]-----------
classid     | 16412
objid       | 39053
objsubid    | 0
refclassid  | 1259
refobjid    | 37577
refobjsubid | 0
deptype     | a
> Hmph.  Those should be 'i' references to the foreign key constraint,
> not 'a' references to the relations.  I suspect this database was
> carried forward from an ancient (pre-7.3) dump that defined the triggers
> by "CREATE CONSTRAINT TRIGGER" instead of "ALTER ADD FOREIGN KEY".
I haven't coded the application, but AFAIK it was developed using 7.4 and 
7.5 (CVS); and we installed the database on 8.0.1... This is a new app.

As a reminder, we ran pg_dump successfully before. Since then we have done 
nothing to the schemas, we haven't upgraded the db server, etc.

> Have you ever run contrib/adddepend to update the definitions to be
> proper constraints?
I just did, but the constraints it found are not related to our problem... :(
I'll try to drop the trigger on Monday night and see what happens. We have 
no up-to-date backup, except the WAL logs... :(

Regards,
-- 
Devrim GUNDUZ 
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com http://www.gunduz.org


[HACKERS] understanding pg_stat* numbers

2005-03-26 Thread Oleg Bartunov
Hi there,
I'm investigating a performance issue with a tsearch2 index and trying to 
interpret the I/O statistics from pg_statio_user_tables and 
pg_stat_user_tables. But from the documentation it's not clear which numbers 
I should take into account, and I'm a bit confused :)

I'm looking for blocks *actually* read from disk, since I/O is the most 
important factor.

I reset the stats, ran my query, and then obtained statistics:
=# select pg_stat_reset();
=# explain analyze select 1 from message_parts where  message_parts.index_fts @@ '\'star\'';
 QUERY PLAN 
-
 Index Scan using a_gist_key on message_parts  (cost=0.00..1381.92 rows=469 width=0) (actual time=0.785..1236.086 rows=5142 loops=1)
   Index Cond: (index_fts @@ '\'star\''::tsquery)
 Total runtime: 1240.274 ms
(3 rows)


=# select 'StatB:',heap_blks_read,heap_blks_hit,idx_blks_read, idx_blks_hit from pg_statio_user_tables where relname='message_parts';
 ?column? | heap_blks_read | heap_blks_hit | idx_blks_read | idx_blks_hit 
----------+----------------+---------------+---------------+--------------
 StatB:   |   1888 |  1700 |  1056 | 7226
(1 row)

=# select 'StatR:',seq_scan,seq_tup_read,idx_scan,idx_tup_fetch from pg_stat_user_tables where relname='message_parts';
 ?column? | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch 
----------+----------+--------------+----------+---------------
 StatR:   |0 |0 |1 |  5939
(1 row)

From the documentation:
  total number of disk blocks read from the table  - 1888
  number of buffer hits for the table              - 1700
  number of disk blocks read from indexes          - 1056
  number of buffer hits for indexes                - 7226
  total number of rows fetched by index scans      - 5939
So is the total number of table blocks read (1888+1700), and of index blocks 
(1056+7226)?  Or were 1888 table blocks read while 1700 blocks were already 
in the buffer cache?  But then I don't understand the index stats.

Since disk I/O is the most important performance factor,
should I look mostly at heap_blks_read and idx_blks_read?

My query returns 5142 rows, while idx_tup_fetch says the index returned
5939 rows. Does that mean 5939 table rows were actually read from disk and
rechecked for lossiness (the index is lossy), and that the extra 797 were
false drops?
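
(For reference, a sketch of how these counters are usually combined. This
assumes the standard meaning of the columns: heap_blks_read / idx_blks_read
count block requests that missed PostgreSQL's own shared buffers and had to
be passed to the kernel, while heap_blks_hit / idx_blks_hit count requests
served from shared buffers.

SELECT relname,
       heap_blks_read, heap_blks_hit,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_read + heap_blks_hit, 0), 1) AS heap_hit_pct,
       idx_blks_read, idx_blks_hit,
       round(100.0 * idx_blks_hit
             / nullif(idx_blks_read + idx_blks_hit, 0), 1) AS idx_hit_pct
  FROM pg_statio_user_tables
 WHERE relname = 'message_parts';

On that reading, 1888 and 1700 are not added to get "blocks read": their sum
is the total number of block requests, of which 1888 missed the buffer cache.
Whether a missed request hit the physical disk or the kernel's page cache is
not visible from these views.)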
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Re: [HACKERS] minor windows & cygwin regression failures on stable

2005-03-26 Thread Andrew Dunstan

Tom Lane wrote:
Dunno about the stats failure.  It looks like the stats collector just
isn't working on Cygwin, but AFAIR no one has touched that code lately.
 

Well, it seems at least to be running. When I fire up the postmaster there 
are 4 processes running and no indication of failure that I could see in 
the log. (There is a complaint about failing to dup(0) after 3195 
successes - I assume that has nothing to do with it?)

The good news is that the regression result fixes made Windows native 
green again.

cheers
andrew



Re: [HACKERS] minor windows & cygwin regression failures on stable

2005-03-26 Thread Jim Buttafuoco
Andrew,

I can set up a dedicated Windows XP system on Monday.  I also have some W2K
systems that can be used.  Are there directions anywhere?

Jim


-- Original Message ---
From: Andrew Dunstan <[EMAIL PROTECTED]>
To: Tom Lane <[EMAIL PROTECTED]>
Cc: PostgreSQL-development 
Sent: Fri, 25 Mar 2005 22:19:25 -0500
Subject: Re: [HACKERS] minor windows & cygwin regression failures on stable

> Tom Lane wrote:
> 
> >Andrew Dunstan <[EMAIL PROTECTED]> writes:
> >  
> >
> >>Windows has ordering failures on the join and rules tests - Cygwin has a 
> >>failure on the stats test. See buildfarm for details.
> >>
> >>
> >
> >The ordering failures seem to be because the recent planner hacking has
> >taken us back to preferring merge joins for these tests, and Windows'
> >version of qsort has bizarre behavior for equal keys.
> >
> >I put an ORDER BY in the rules test.  For join, I'm inclined to think
> >that the best bet is to resurrect the join_1.out variant comparison
> >file that we had awhile ago.  Unfortunately, what's in the CVS archives
> >is out of date and can't be used directly.  Could you send me the actual
> >rules.out you get on Windows to use for a comparison file?
> >  
> >
> 
> join.out sent off list
> 
> >Dunno about the stats failure.  It looks like the stats collector just
> >isn't working on Cygwin, but AFAIR no one has touched that code lately.
> >  
> >
> 
> It's worked before, that's the strange thing. I'll check some more.
> 
> >Do these machines fail on HEAD too?  (There don't seem to be any active
> >Windows buildfarm machines for HEAD, which is surely ungood.  Won't
> >someone step up and put one into the regular rotation?)
> >
> >
> >  
> >
> 
> Sufficient unto the day is the evil thereof (appropriate quotation for 
> Good Friday). I will address HEAD in due course.
> 
> The buildfarm members for both of these are in reality my laptop, which 
> doesn't even run Windows all the time, and has lots of other duties 
> anyway. We (or rather Josh Berkus and Bruce, at my request) are looking 
> for a replacement.
> 
> cheers
> 
> andrew
> 
--- End of Original Message ---




Re: [HACKERS] Patch for collation using ICU

2005-03-26 Thread Stephan Szabo
On Sat, 26 Mar 2005, Palle Girgensohn wrote:
> I've noticed a couple of things about using the ICU patch vs. pristine
> pg-8.0.1:
>
> - ORDER BY is case insensitive when using ICU. This might break the SQL
> standard (?), but sure is nice :)

Err, I think if your system implements strcoll correctly 8.0.1 can do this
if the chosen collation is set up that way (or at least naive tests I've
done seem to imply that). Or are you speaking about C locale?



Re: [HACKERS] minor windows & cygwin regression failures on stable

2005-03-26 Thread Andrew Dunstan
Jim,
That is just excellent! Thank you so much! I assume you mean XP Pro - I 
gather that user permissions get in the way on XP Home Edition.

We can make the one machine do double duty for Windows and Cygwin.
You will need installed:
 . cygwin, including whatever it takes to build cygwin postgres
 . native perl (ActiveState perl works just fine)
 . normal native Windows postgres build environment (MSys, MinGW, 
MSysDTK, etc.)

After that we'll explore a bit what it takes to automate the buildfarm 
processes on Windows. You and I can do that offline.

Let's get this going first and then look at W2K.
cheers
andrew

Jim Buttafuoco wrote:

> Andrew,
>
> I can set up a dedicated Windows XP system on Monday.  I also have some
> W2K systems that can be used.  Are there directions anywhere?
>
> Jim


Re: [HACKERS] [PATCHES] problem with CR+LF in files in psql \i command

2005-03-26 Thread Bruce Momjian
Euler Taveira de Oliveira wrote:
> Hi Bruce,
> 
> > Function and comments are just _strings_ to PostgreSQL, so we have no
> > good way of cleaning the output up, unless we hack pg_dump to somehow
> > change line endings when outputting such information, though
> > currently we
> > don't.  That would perhaps cause problems in viewing the functions if
> > restored in Win32.
> > 
> Agreed.
> 
> > Fixing it at the \i level is too crude because it might remove
> > carriage
> > returns that are wanted in the input stream.
> > 
> Agreed.
> 
> > We are open to ideas on how to improve this.
> > 
> What about another option to pg_dump to indicate such a thing? Maybe -l
> or --change-line [m|w|u]

Yea, we could do that, but pg_dump seems like the wrong place to handle
this.  For example, imagine if you have a mix of Win32 and Unix clients
--- if you create a function under Win32, it will look double-spaced to
a Unix client.

Ideally we would auto-convert the function text to the native client
format, but the function text is displayed via a SELECT, so how would we
special-case the function body SELECT output to output using the native
client line endings?

I am thinking we need functions like unix_eol(), dos_eol(), or even
client_eol() to allow conversion of the output to an end-of-line format
matching the client.  One nifty trick would be to modify the data type
of pg_proc.prosrc to auto-convert the end-of-line format to match the
client.  The problem there is that it special-cases a column just to fix
this single issue.  Perhaps the ability to use functions like this is
the proper approach, because they can be used in other places as well.
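
(None of these functions exist today.  As a sketch of the idea -- using only
the built-in replace() and chr() functions, and with myfunc as a placeholder
name -- unix_eol() could be as small as:

CREATE OR REPLACE FUNCTION unix_eol(text) RETURNS text AS $$
    -- strip the CR out of CRLF pairs, leaving bare LF line endings
    SELECT replace($1, chr(13) || chr(10), chr(10));
$$ LANGUAGE sql IMMUTABLE STRICT;

SELECT unix_eol(prosrc) FROM pg_proc WHERE proname = 'myfunc';

dos_eol() would do the reverse substitution, and client_eol() would pick one
of the two based on the client platform.)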

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [HACKERS] minor windows & cygwin regression failures on stable branch

2005-03-26 Thread Tom Lane
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Well, it seems at least to be running. When I fire up postmaster there 
> are 4 processes running and no indication of failure that I could see on 
> the log. (There is a complaint about failing to dup(0) after 3195 
> successes - I assume that has nothing to do with it?)

No, that's some code that's trying to measure the number of files we are
allowed to open.  It expects to fail, it just thought the particular
errno it got was odd enough to report.  Might be worth an #ifdef to tell
it that that errno is expected on Cygwin?

As far as the test failure, maybe we are just not allowing enough time
for the stats collector to run?  The thing sits there for 2 sec, which
theoretically is plenty, but it's a busy-wait loop and if the Cygwin
scheduler is not aggressive about taking away timeslices then maybe
the stats processes don't get to run.  Try doing the test script by
hand, with just a manual delay instead of the sleep function, and see
if it passes.

regards, tom lane



Re: [HACKERS] understanding pg_stat* numbers

2005-03-26 Thread Tom Lane
Oleg Bartunov  writes:
> I'm looking for blocks *actually* read from disk, since IO is the most 
> important factor.

Well, you won't find that out from Postgres, since it has no idea
whether a read() request was satisfied from kernel disk cache or had
to actually go to disk.

You could turn on log_statement_stats to get some possibly-more-reliable
numbers from the kernel via getrusage(), but this will only tell you
about totals across the whole backend process, not per-relation ...
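
(A minimal sketch of that approach for a single session -- the numbers come
from getrusage(), so they land in the server log rather than in a view:

SET log_statement_stats = on;
SELECT 1 FROM message_parts WHERE index_fts @@ '''star''';
SET log_statement_stats = off;

The log then gets a QUERY STATISTICS block for the statement which includes,
among other things, the filesystem blocks in/out that the kernel charged to
the backend -- though on some platforms getrusage() reports zeros there.)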

regards, tom lane



Re: [HACKERS] understanding pg_stat* numbers

2005-03-26 Thread Oleg Bartunov
On Sat, 26 Mar 2005, Tom Lane wrote:
Oleg Bartunov  writes:
I'm looking for blocks *actually* read from disk, since IO is the most
important factor.
Well, you won't find that out from Postgres, since it has no idea
whether a read() request was satisfied from kernel disk cache or had
to actually go to disk.
So, what is the usefulness of pg_stat* then?

From my experiments I gather that heap_blks_read is the number of table 
blocks read from disk (well, sort of) and heap_blks_hit the number found in 
the buffer cache. At least, I observed that when I repeat the same query 
several times, (heap_blks_read + heap_blks_hit) doesn't change while more of 
the blocks come from heap_blks_hit.

You could turn on log_statement_stats to get some possibly-more-reliable
numbers from the kernel via getrusage(), but this will only tell you
about totals across the whole backend process, not per-relation ...
Hmm, that makes it impossible to do research :( We have so many parameters
and almost no reliable stats. How do you convince yourself that you made a
good choice? It's common in the database world to have I/O statistics
(timings are not important) for researching algorithms and implementations.
Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Re: [HACKERS] Bug 1500

2005-03-26 Thread Josh Berkus
Karel,

> > Yeah.  Karel Zak, who wrote that code, is convinced we should remove it,
> > but I don't think anyone else is ...
>
> I think it was Peter and Josh Berkus who convinced me that the code is
> bad. "We should remove..." is only an opinion...

I certainly didn't recommend removing it before we have a replacement ready.  

The complaint, btw, was that the current to_char formats intervals as if they 
were dates.   This results in some rather confusing output.   I wanted to 
improve to_char to support proper interval formatting, but apparently it's 
difficult to do that without breaking other aspects of to_char (at least, I 
was told that).   

What we need is a function or functions which do the following:

SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI' ) || ' min';
2600 min

SELECT to_char( INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI' );
0:1:19:20

SELECT to_char( INTERVAL '3 years 5 months','MM' ) || ' mons';
41 mons

etc.  This would be more sophisticated than the logic employed for the current 
to_char, as the interval would be re-calculated in the units supplied, 
limited by the month/year|day/hour/minute boundary.   
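
(Until something like that exists, a workaround sketch using nothing but
extract() -- fine for hour-and-smaller units, which have a fixed length in
seconds:

SELECT floor(extract(epoch FROM INTERVAL '43 hours 20 minutes') / 60)
       AS minutes;
-- 2600

It breaks down for the month/year part of an interval, which has no fixed
length in seconds -- which is exactly why a smarter to_char would be nice.)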

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] Bug 1500

2005-03-26 Thread lsunley
This has my vote

Lorne

In <[EMAIL PROTECTED]>, on 03/26/05 
   at 02:04 PM, Josh Berkus  said:

>Karel,

>> > Yeah.  Karel Zak, who wrote that code, is convinced we should remove it,
>> > but I don't think anyone else is ...
>>
>> I think it was Peter and Josh Berkus who convinced me that the code is
>> bad. "We should remove..." is only an opinion...

>I certainly didn't recommend removing it before we have a replacement
>ready.  

>The complaint, btw, was that the current to_char formats intervals as if
>they  were dates.   This results in some rather confusing output.   I
>wanted to  improve to_char to support proper interval formatting, but
>apparently it's  difficult to do that without breaking other aspects of
>to_char (at least, I  was told that).   

>What we need is a function or functions which do the following:

>SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI' ) || ' min';
>2600 min

>SELECT to_char( INTERVAL '43 hours 20 minutes', 'WK:DD:HR:MI' );
>0:1:19:20

>SELECT to_char( INTERVAL '3 years 5 months','MM' ) || ' mons';
>41 mons

>etc.  This would be more sophisticated than the logic employed for the
>current  to_char, as the interval would be re-calculated in the units
>supplied,  limited by the month/year|day/hour/minute boundary.   



-- 
---
[EMAIL PROTECTED]
---




Re: [HACKERS] Bug 1500

2005-03-26 Thread Alvaro Herrera
On Sat, Mar 26, 2005 at 02:04:14PM -0800, Josh Berkus wrote:

> SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI' ) || ' min';
> 2600 min

Hmm, what if you wanted more than one literal string?  Say "1 mon 3
days" ... your concatenation idea wouldn't work.  ISTM the format string
should allow unconverted literals, so you would use

SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI min' );

-- 
Alvaro Herrera (<[EMAIL PROTECTED]>)
"Cuando no hay humildad las personas se degradan" (A. Christie)



Re: [HACKERS] Bug 1500

2005-03-26 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> ...  ISTM the format string
> should allow unconverted literals, so you would use

> SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI min' );

... which to_char can do already, IIRC.  The rewrite should define a new
set of format substitution codes, but not otherwise change the behavior
of to_char.
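
(For reference, the literal mechanism to_char() already has: double-quoted
text inside the pattern is copied through untouched, so codes and literal
words can be mixed in one format string -- for the timestamp case, e.g.

SELECT to_char(now(), 'HH24"h "MI"min"');

which also answers how to write a literal that contains a code as a
substring ("min" vs. MI): quote it.)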

regards, tom lane



Re: [HACKERS] Bug 1500

2005-03-26 Thread Josh Berkus
Alvaro,

> On Sat, Mar 26, 2005 at 02:04:14PM -0800, Josh Berkus wrote:
> > SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI' ) || ' min';
> > 2600 min
>
> Hmm, what if you wanted more than one literal string?  Say "1 mon 3
> days" ... your concatenation idea wouldn't work.  ISTM the format string
> should allow unconverted literals, so you would use
>
> SELECT to_char( INTERVAL '43 hours 20 minutes', 'MI min' );

Hmmm, good point.

Question: how does to_char tell the difference between a code ("MI") and a 
code that is also part of a longer word ("MIN")?

--Josh

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco



Re: [HACKERS] Patch for collation using ICU

2005-03-26 Thread Palle Girgensohn

--On Saturday, March 26, 2005 08.16.01 -0800 Stephan Szabo 
<[EMAIL PROTECTED]> wrote:

On Sat, 26 Mar 2005, Palle Girgensohn wrote:
I've noticed a couple of things about using the ICU patch vs. pristine
pg-8.0.1:
- ORDER BY is case insensitive when using ICU. This might break the SQL
standard (?), but sure is nice :)
Err, I think if your system implements strcoll correctly 8.0.1 can do this
if the chosen collation is set up that way (or at least naive tests I've
done seem to imply that). Or are you speaking about C locale?
No, I doubt this.
Example: set up a cluster:
$ initdb -E LATIN1 --locale=sv_SE.ISO8859-1
$ createdb foo
CREATE DATABASE
$ psql foo
foo=# create table bar (val text);
CREATE TABLE
foo=# insert into bar values ('aaa');
INSERT 18354409 1
foo=# insert into bar values ('BBB');
INSERT 18354412 1
foo=# select val from bar order by val;
val
-
BBB
aaa
(2 rows)
Order by is not case insensitive. It shouldn't be for any system, AFAIK. As 
John Hansen noted, this might be a bad thing. I'm not sure about that, 
though...

As for general collation of Unicode, the reason for me to use ICU is that 
my system does not support strcoll correctly for multibyte locales, as I 
mentioned earlier. I also noted that even for systems that do handle 
strcoll correctly for Unicode, ICU claims to be a couple of orders of 
magnitude faster, so this patch might be useful for other systems (read: 
Linux) as well. See previous emails for details.

Regards,
Palle


Re: [HACKERS] Patch for collation using ICU

2005-03-26 Thread Palle Girgensohn

--On Saturday, March 26, 2005 13.59.19 +1100 John Hansen <[EMAIL PROTECTED]> 
wrote:

- ORDER BY is case insensitive when using ICU. This might
break the SQL standard (?), but sure is nice :)
This would mean that indexes are also case insensitive right?
Which makes it a Bad Thing(tm).
Well, no, not really. Indexes use collation rules, yes, but upper- and 
lowercase strings are not considered *equal*, just "closer related". In 
collation, characters are compared at four levels. See [1] for a good 
explanation. This means that indexes will use a case-insensitive sort 
order, but equality will not be different, so it shouldn't break anything.
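
(A quick way to see the distinction between ordering and equality:

foo=# SELECT 'aaa'::text < 'BBB'::text AS lt, 'aaa'::text = 'AAA'::text AS eq;

The lt result depends on the collation in use -- true under a case-grouping
locale like the ones discussed here, false under the C locale's byte order --
while eq stays false either way, which is why a case-insensitive sort order
does not by itself change what counts as a duplicate.)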

- When the database is initialized using the C locale,
upper() and lower() normally does not work at all for
non-ASCII characters even if the database's encoding is say
LATIN1 or UNICODE. (does not work for me anyway, on FreeBSD,
and this is probably correct since the locale is still `C', I
believe?). The ICU patch changes nothing for the LATIN1 case,
since it does not act on single byte encodings, but for the
UNICODE representation, it works and does what I expect it
to, namely upper() and lower() neatly
upper- or lowercase diacritical characters, i.e. lower('ÅÄÖ')
-> 'åäö'.
This is a good thing, although I'm surprised that upper/lower
is dragged along with the LC_COLLATE fixation at initdb. I
never run initdb in the C locale, but only now do I realize
how broken that really is if you need to store anything else
than English :-)
That is what I would have expected. However, it probably won't work for
the more exotic cases, like the Turkish I, which depends on the locale.
Nope, Turkish must of course have its own locale to handle, for example, its 
special capital "i". Let's just say it is less broken :)

/Palle
[1] 




Re: [HACKERS] Patch for collation using ICU

2005-03-26 Thread Stephan Szabo

On Sun, 27 Mar 2005, Palle Girgensohn wrote:

>
>
> --On Saturday, March 26, 2005 08.16.01 -0800 Stephan Szabo
> <[EMAIL PROTECTED]> wrote:
>
> > On Sat, 26 Mar 2005, Palle Girgensohn wrote:
> >> I've noticed a couple of things about using the ICU patch vs. pristine
> >> pg-8.0.1:
> >>
> >> - ORDER BY is case insensitive when using ICU. This might break the SQL
> >> standard (?), but sure is nice :)
> >
> > Err, I think if your system implements strcoll correctly 8.0.1 can do this
> > if the chosen collation is set up that way (or at least naive tests I've
> > done seem to imply that). Or are you speaking about C locale?
>
> No, I doubt this.
>
> Example: set up a cluster:
> $ initdb -E LATIN1 --locale=sv_SE.ISO8859-1
> $ createdb foo
> CREATE DATABASE
> $ psql foo
> foo=# create table bar (val text);
> CREATE TABLE
> foo=# insert into bar values ('aaa');
> INSERT 18354409 1
> foo=# insert into bar values ('BBB');
> INSERT 18354412 1
> foo=# select val from bar order by val;
>  val
> -
>  BBB
>  aaa
> (2 rows)
>
>
> Order by is not case insensitive. It shouldn't be for any system, AFAIK. As

It is on my machine... for the same test:

foo=# select val from bar order by val;
 val
-
 aaa
 BBB
(2 rows)

I think this just implies even greater breakage of either the collation or
strcoll on the system you're trying on. ;)  Which, of course, is a fairly
reasonable reason to offer an alternative.  Especially if it's generically
useful.
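
(For anyone comparing results across machines: the ordering in these tests
depends both on what the cluster was initialized with and on the quality of
the C library's strcoll(), so the first thing worth checking is

foo=# SHOW lc_collate;

A cluster initialized with --locale=C will put BBB first no matter how good
strcoll() is, so it pays to rule that out before blaming the platform.)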



Re: [HACKERS] Patch for collation using ICU

2005-03-26 Thread Palle Girgensohn

--On Saturday, March 26, 2005 17.40.01 -0800 Stephan Szabo 
<[EMAIL PROTECTED]> wrote:

On Sun, 27 Mar 2005, Palle Girgensohn wrote:

--On Saturday, March 26, 2005 08.16.01 -0800 Stephan Szabo
<[EMAIL PROTECTED]> wrote:
> On Sat, 26 Mar 2005, Palle Girgensohn wrote:
>> I've noticed a couple of things about using the ICU patch vs. pristine
>> pg-8.0.1:
>>
>> - ORDER BY is case insensitive when using ICU. This might break the
>> SQL standard (?), but sure is nice :)
>
> Err, I think if your system implements strcoll correctly 8.0.1 can do
> this if the chosen collation is set up that way (or at least naive
> tests I've done seem to imply that). Or are you speaking about C
> locale?
No, I doubt this.
Example: set up a cluster:
$ initdb -E LATIN1 --locale=sv_SE.ISO8859-1
$ createdb foo
CREATE DATABASE
$ psql foo
foo=# create table bar (val text);
CREATE TABLE
foo=# insert into bar values ('aaa');
INSERT 18354409 1
foo=# insert into bar values ('BBB');
INSERT 18354412 1
foo=# select val from bar order by val;
 val
-
 BBB
 aaa
(2 rows)
Order by is not case insensitive. It shouldn't be for any system, AFAIK.
As
It is on my machine... for the same test:
foo=# select val from bar order by val;
 val
-
 aaa
 BBB
(2 rows)
I think this just implies even greater breakage of either the collation or
strcoll on the system you're trying on. ;)  Which, of course, is a fairly
reasonable reason to offer an alternative.  Especially if it's generically
useful.
Interesting! Indeed, I just tried it on an old Red Hat Linux system... BTW, 
that's pretty odd for a Unix system: "ls -l" sorts aaa before BBB, I've 
never seen the likes of it! Call me old-fashioned if you like ;-)

Still, as you say, FreeBSD sorts capital letters first, and does not 
handle Unicode locales' collation, so I need an alternative. Perhaps the 
best way would be to inject ICU into BSD instead :-)

/Palle



Re: [HACKERS] Postgres recoverey for deleted row of data

2005-03-26 Thread Qingqing Zhou
"Srinivasa Perumal" <[EMAIL PROTECTED]> writes
>
> hi, is there is way undelete or recover the deleted row in a table in
postgres.iam layman, can anyone help me.
>

Here is one way, but I am not sure it is a good one. The precondition is that
you haven't scheduled any vacuum on your database.

Since PG never really destroys data when you delete it -- not until you
vacuum -- you still have a chance to get your data back: copy down the
values and re-insert them. PG uses visibility rules to filter out the
garbage data, i.e. the data you deleted. So if you change the visibility
rules, you can see your data again. The visibility rules live in the
function/macro HeapTupleSatisfiesVisibility(): returning true means the
tuple is visible, otherwise it is not.

So if you know how to compile the PG kernel, here is how: (1) shut down your
database and back up your data; (2) change HeapTupleSatisfiesVisibility() to
simply return "true", which means it will treat everything as visible,
including deleted rows, then compile the kernel; (3) restart your database
and find the data you want - you may select it into another table; (4)
revert the change, restart your database, and insert the rows you just
found.

Regards,
Qingqing








[HACKERS] problem with rules - column values lost

2005-03-26 Thread Daniel Schuchardt
Hi list,

There seems to be a bug in the 8.0 rule system when I update a view and do
not give a column a value.

Example:
TEST=# \d abzu_ruletest
View "public.abzu_ruletest"
   Column   |   Type| Modifiers
+---+---
 abz_txt| character varying |
 abz_id | integer   |
 abz_proz   | real  |
 abz_betrag | real  |
View definition:
 SELECT lang_abzu(abzu.abz_id) AS abz_txt, abzu.abz_id, abzu.abz_proz, 
abzu.abz_betrag FROM abzu;

TEST=# SELECT * FROM abzu_ruletest;
 abz_txt | abz_id | abz_proz | abz_betrag
-++--+
 |  9 |6 |  3
(1 row)
TEST=# UPDATE "abzu_ruletest" SET "abz_betrag"=3, abz_txt='test'  WHERE 
"abz_id"=9;
UPDATE 1
TEST=# SELECT * FROM abzu_ruletest;
 abz_txt | abz_id | abz_proz | abz_betrag
-++--+
 test|  9 |6 |  3
(1 row)

TEST=# UPDATE "abzu_ruletest" SET "abz_betrag"=3 WHERE "abz_id"=9;
UPDATE 1
TEST=# SELECT * FROM abzu_ruletest;
 abz_txt | abz_id | abz_proz | abz_betrag
-++--+
 |  9 |6 |  3

As you can see, the value of abz_txt is lost. The reason seems to be the 
ON UPDATE rule: I fully delete the old record of the child table and 
insert a new record there (I do not know whether a record already exists).

RULE:
 UPDATE abzu SET abz_id=new.abz_id, abz_proz=new.abz_proz,  
 abz_betrag=new.abz_betrag WHERE abz_id=old.abz_id; --OK
 DELETE FROM abzutxt WHERE abzl_abz_id=old.abz_id; 

[HACKERS] i want to find

2005-03-26 Thread chamil wijenayake

Hi,

I want to find the last update time of a tuple (row) in a PostgreSQL
database.

Please be kind enough to send me a solution if you know one.

Thanking you in advance,

Chamil

Re: [HACKERS] problem with rules - column values lost

2005-03-26 Thread Tom Lane
Daniel Schuchardt <[EMAIL PROTECTED]> writes:
> there seems to be a bug in the 8.0 rule system when I update a view and 
> do not give a column a value.

You can't seriously expect that example to work.  The DELETE removes the
row that lang_abzu() needs to have in order to yield a non-null result,
and so the reference to new.abz_txt in the next line yields a null.

new.* and old.* in rules are macros; they don't represent some sort of
internally held data, but re-evaluations of the relevant definitions.
In particular, new.abz_txt in the last line of the rule references
the view definition if the invoking UPDATE didn't specify any particular
new value for the column.

It might be that you could get the effect you want with triggers ...

regards, tom lane



Re: [HACKERS] i want to find

2005-03-26 Thread Michael Fuhr
[Reply-To set to pgsql-general]

On Fri, Mar 25, 2005 at 11:11:19PM -0800, chamil wijenayake wrote:

> i want to find the last update time of a tuple(row) in the postgresql data 
> base

You can set up a trigger to maintain a timestamp for the row.  See
the example in the "Trigger Procedures" section of the PL/pgSQL
documentation.

http://www.postgresql.org/docs/8.0/interactive/plpgsql-trigger.html
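
A minimal sketch along those lines (table and column names are made up for
illustration, and plpgsql must be installed in the database):

CREATE TABLE mytable (
    id           integer PRIMARY KEY,
    payload      text,
    last_updated timestamptz
);

CREATE OR REPLACE FUNCTION touch_last_updated() RETURNS trigger AS $$
BEGIN
    -- stamp the row being written with the time of the change
    NEW.last_updated := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_touch
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_last_updated();

After that, "SELECT last_updated FROM mytable WHERE id = ..." returns the
time each row was last inserted or updated; rows written before the trigger
was created will show NULL there.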

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
