Re: [HACKERS] inet to bigint?

2005-12-06 Thread Michael Fuhr
On Tue, Dec 06, 2005 at 03:51:17PM +0800, Christopher Kings-Lynne wrote:
 PL/SQL or PL/PGSQL...

Sheesh, arbitrary restrictions ;-)  Something like this then:

CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
DECLARE
a  text[] := string_to_array(host($1), '.');
BEGIN
RETURN a[1]::numeric * 16777216 + 
   a[2]::numeric * 65536 + 
   a[3]::numeric * 256 + 
   a[4]::numeric;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;
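
For example, one would expect (illustrative):

SELECT inet2num('192.168.0.1');  -- 192*2^24 + 168*2^16 + 1 = 3232235521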

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] inet to bigint?

2005-12-06 Thread Christopher Kings-Lynne

Sheesh, arbitrary restrictions ;-)  Something like this then:

CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
DECLARE
a  text[] := string_to_array(host($1), '.');
BEGIN
RETURN a[1]::numeric * 16777216 + 
   a[2]::numeric * 65536 + 
   a[3]::numeric * 256 + 
   a[4]::numeric;

END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;


Cool, and now the reverse? :D

(I'll credit you in the MySQL Compat Library code btw)  If you're 
interested, you'd be welcome to join the project btw...


Chris


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] inet to bigint?

2005-12-06 Thread Michael Fuhr
On Tue, Dec 06, 2005 at 01:05:12AM -0700, Michael Fuhr wrote:
 CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
 DECLARE
 a  text[] := string_to_array(host($1), '.');
 BEGIN
 RETURN a[1]::numeric * 16777216 + 
a[2]::numeric * 65536 + 
a[3]::numeric * 256 + 
a[4]::numeric;
 END;
 $$ LANGUAGE plpgsql IMMUTABLE STRICT;

I should point out that this is only for IPv4, so a family() check
might be in order.
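
For illustration, a guarded version might look like this (an untested
sketch; the error message wording is mine):

CREATE OR REPLACE FUNCTION inet2num(inet) RETURNS numeric AS $$
DECLARE
    a  text[];
BEGIN
    -- host() yields a dotted quad we can split only for IPv4 input
    IF family($1) <> 4 THEN
        RAISE EXCEPTION 'inet2num: input is not an IPv4 address';
    END IF;
    a := string_to_array(host($1), '.');
    RETURN a[1]::numeric * 16777216 +
           a[2]::numeric * 65536 +
           a[3]::numeric * 256 +
           a[4]::numeric;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;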

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] inet to bigint?

2005-12-06 Thread Michael Fuhr
On Tue, Dec 06, 2005 at 04:10:22PM +0800, Christopher Kings-Lynne wrote:
 Sheesh, arbitrary restrictions ;-)  Something like this then:
 
 CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
 DECLARE
 a  text[] := string_to_array(host($1), '.');
 BEGIN
 RETURN a[1]::numeric * 16777216 + 
a[2]::numeric * 65536 + 
a[3]::numeric * 256 + 
a[4]::numeric;
 END;
 $$ LANGUAGE plpgsql IMMUTABLE STRICT;
 
 Cool, and now the reverse? :D

Tom posted one just a couple of days ago:

http://archives.postgresql.org/pgsql-general/2005-12/msg00191.php
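
For reference, the reverse mapping can be written along the same lines
(a rough sketch, not necessarily Tom's version):

CREATE FUNCTION num2inet(numeric) RETURNS inet AS $$
DECLARE
    n  bigint := $1;  -- assumes an integral value in [0, 4294967295]
BEGIN
    -- plpgsql coerces the returned text to inet via its input function
    RETURN ((n >> 24) & 255)::text || '.' ||
           ((n >> 16) & 255)::text || '.' ||
           ((n >> 8)  & 255)::text || '.' ||
           (n & 255)::text;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;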

 (I'll credit you in the MySQL Compat Library code btw)  If you're 
 interested, you'd be welcome to join the project btw...

I haven't been following it but I might have some time.  Is there
a TODO list?  The one I see on pgfoundry is empty.

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working under Windows in 8.1

2005-12-06 Thread Nicolai Tufar
2005/12/4, Andrew Dunstan [EMAIL PROTECTED]:
 Tom said:

 Would it work to modify c.h so that it #include's libintl.h, then #undefs
 these macros, then #includes port.h to define 'em the way we want?
 Some or all of this might need to be #ifdef WIN32, but that seems like
 a reasonably noninvasive solution if it can work.
 

 IIRC last time I tried this it didn't work too well ;-( I will have
 another go. I think it's the best way to go.

Very well, I will try to put up a patch to implement it in a couple of days.


 cheers

 andrew


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working under Windows in 8.1

2005-12-06 Thread Nicolai Tufar
2005/12/6, Nicolai Tufar [EMAIL PROTECTED]:
 
  IIRC last time I tried this it didn't work too well ;-( I will have
  another go. I think it's the best way to go.

 Very well, I will try to put up a patch to implement it in a couple of days.

Oh boy, it is already fixed. Sorry folks, my error.
Many thanks to Bruce, Tom and Andrew!
Turkish Windows users can breathe easier now.

Sincerely,
Nicolai Tufar

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working

2005-12-06 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Bruce Momjian
 Sent: 06 December 2005 04:40
 To: Andrew Dunstan
 Cc: Tom Lane; [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
 [EMAIL PROTECTED]; pgsql-hackers@postgresql.org
 Subject: Re: [PATCHES] [HACKERS] snprintf() argument 
 reordering not working
 
  We hope to put out a new pginstaller in the next few days
 for testing to make sure this has been resolved before releasing 8.1.1.

http://developer.postgresql.org/~dpage/postgresql-8.1t1.zip

DO NOT use in production as it got virtually no testing. I regenerated
all the GUIDs, so it will install alongside an existing installation.

Regards, Dave.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Gustavo Tonini
But, wouldn't the performance be better? And wouldn't asynchronous messages be better processed?

Thanks for the replies,
Gustavo.

2005/12/6, Jan Wieck [EMAIL PROTECTED]:
 On 12/5/2005 8:18 PM, Gustavo Tonini wrote:
  replication (master/slave, multi-master, etc) implemented inside
  postgres...I would like to know what has been done in this area.

 We do not plan to implement replication inside the backend. Replication
 needs are so diverse that pluggable replication support makes a lot more
 sense. To me it even makes more sense than keeping transaction support
 outside of the database itself and add it via pluggable storage add-on.

 Jan

  Gustavo.
  P.S. Sorry for my bad English.
  2005/12/5, Chris Browne [EMAIL PROTECTED]:
   [EMAIL PROTECTED] (Gustavo Tonini) writes:
    What about replication or data distribution inside the backend.
    This is a valid issue?

   I'm not sure what your question is...
   --
   (reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
   http://www.ntlug.org/~cbbrowne/x.html
   Love is like a snowmobile flying over the frozen tundra that suddenly
   flips, pinning you underneath.  At night, the ice weasels come.
   -- Matt Groening

 --
 #==#
 # It's easier to get forgiveness for being wrong than for being right. #
 # Let's break this rule - forgive me.                                  #
 #== [EMAIL PROTECTED] #


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working

2005-12-06 Thread Andrew Dunstan



Tom Lane wrote:


Please test ...
 



Well, if you look here you'll see a bunch of Turkish messages, because I 
forgot to change the locale back ;-)


http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=loris&dt=2005-12-06%2011:57:20

Which raises another question: can we force the locale on Windows, or 
are we stuck with the locale that the machine is set to? But maybe that 
belongs in another thread.


cheers

andrew



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Markus Schiltknecht
On Tue, 2005-12-06 at 10:03 -0200, Gustavo Tonini wrote:
 But,  wouldn't the performance be better? And wouldn't asynchronous
 messages be better processed?

At least for synchronous multi-master replication, the performance
bottleneck is going to be the interconnect between the nodes -
integration of the replication logic into the backend most probably
doesn't affect performance that much.

I'd rather ask Jan what different needs for replication he has
discovered so far, and how he came to the conclusion that it's not
possible to provide a general solution.

My point for integration into the backend is flexibility: obviously the
replication code can influence the database much more from within the
backend than from the outside. For example running one complex query on
several nodes. I know this is a very advanced feature - currently it's not
even possible to run one query on multiple backends (i.e. processors of
a multi-core system) - but I like to plan ahead instead of throwing away
code later. For such advanced features you simply have to dig around in
the backend code one day. Of course you can always add hooks, but IMHO
that only complicates matters.

Is there some discussion going on about such topics somewhere? What's up
with slony-2? The wiki on slony2.org still doesn't provide a lot of
technical information (and obviously got spammed BTW).

Regards

Markus



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Optimizer oddness, possibly compounded in 8.1

2005-12-06 Thread Hannu Krosing
On Sat, 2005-12-03 at 09:21, Simon Riggs wrote:

 First off, I think we need to do some more work on partitioning so that
 some knowledge about the union set is understood by the optimizer. At
 the moment there is no concept of partition key, so its hard to spot
 when two union sets have the same key to allow pushdown.

One of the easier cases would be non-overlapping (exclusive) constraints
on union subtables on the joined column.

This could serve as a partition key, or in case of many nonoverlapping
columns (ex.: table is partitioned by date and region), as many
partition keys.
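
For example, CHECK constraints like these would make such a partition
key explicit (a sketch with invented names; a UNION ALL view over
similarly constrained tables is the same idea):

CREATE TABLE sales (saledate date, region text, amount numeric);
CREATE TABLE sales_2005q1 (
    CHECK (saledate >= '2005-01-01' AND saledate < '2005-04-01')
) INHERITS (sales);
CREATE TABLE sales_2005q2 (
    CHECK (saledate >= '2005-04-01' AND saledate < '2005-07-01')
) INHERITS (sales);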

-
Hannu



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working

2005-12-06 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Which raises another question: can we force the locale on Windows, or 
 are we stuck with the locale that the machine is set to? But maybe that 
 belongs in another thread.

I thought we'd put in some sort of no-locale switch specifically for
the buildfarm to use on Windows?  I recall talking about it anyway ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Bug in pg_dump -c with casts

2005-12-06 Thread Christopher Kings-Lynne
Actually, scratch that - I'm wrong... It appeared separately from the 
other DROP commands...


Chris

Christopher Kings-Lynne wrote:

Hi,

Playing around with this MySQL compatibility library, I noticed that 
pg_dump -c does not emit DROP commands for casts.  Seems like a bug...?


Chris


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Jan Wieck

On 12/6/2005 8:10 AM, Markus Schiltknecht wrote:


On Tue, 2005-12-06 at 10:03 -0200, Gustavo Tonini wrote:

But,  wouldn't the performance be better? And wouldn't asynchronous
messages be better processed?


At least for synchronous multi-master replication, the performance
bottleneck is going to be the interconnect between the nodes -
integration of the replication logic into the backend most probably
doesn't affect performance that much.


That is exactly right. Thus far, processor, memory and disk speeds have 
always advanced at a faster pace than network speeds. Thus, the few 
percent of performance gain we'd get from moving things into the backend 
will be irrelevant tomorrow with 4x-core and 16x-core CPUs.



I'd rather like to ask Jan what different needs for replication he
discovered so far. And how he came to the conclusion, that it's not
possible to provide a general solution.


  - Asynchronous master to multi-slave. We have a few of those with
Mammoth Replicator and Slony-I being the top players. Slony-I does
need some cleanup and/or reimplementation after we have a general
pluggable replication API in place.

  - Synchronous multimaster. There are certain attempts out there, like
Postgres-R, pgcluster, Slony-II. Some more advanced, some less. But
certainly nothing I would send into the ring against Oracle-Grid.

  - Asynchronous multimaster with conflict resolution. I have not seen
any reasonable attempt on this one yet. Plus, it divides again into
two camps. One is the idea to have one central system with thousands
of satellites (salesman on the street), the other being two or more
central systems doing load balancing (although this competes with
sync-mm).


My point for integration into the backend is flexibility: obviously the
replication code can influence the database much more from within the


We need a general API. It should be possible to define on a per-database 
level which shared replication module to load on connect. The init 
function of that replication module then installs all the required 
callbacks at strategic points (like heap_update(), at_commit() ...) and 
the rest is hidden in the module.



Is there some discussion going on about such topics somewhere? What's up
with slony-2? The wiki on slony2.org still doesn't provide a lot of
technical information (and obviously got spammed BTW).


Slony-II has been slow lately in the Eastern timezone.


Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Markus Schiltknecht
Hello Jan,

On Tue, 2005-12-06 at 10:10 -0500, Jan Wieck wrote:
 We need a general API. It should be possible to define on a per-database 
 level which shared replication module to load on connect. The init 
 function of that replication module then installs all the required 
 callbacks at strategic points (like heap_update(), at_commit() ...) and 
 the rest is hidden in the module.

thank you for your list of replication types. Those still have some
things in common. Thus your approach of providing hooks for different
modules might make sense. Only I fear that I would need way too many
hooks for what I want ;)

 Slony-II has been slow lately in the Eastern timezone.

What is that supposed to mean? Who sits in the eastern timezone?

Regards

Markus


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working

2005-12-06 Thread Andrew Dunstan



Andrew Dunstan wrote:


Tom Lane wrote:


Andrew Dunstan [EMAIL PROTECTED] writes:
 

Which raises another question: can we force the locale on Windows, 
or are we stuck with the locale that the machine is set to? But 
maybe that belongs in another thread.
  


I thought we'd put in some sort of no-locale switch specifically for
the buildfarm to use on Windows?  I recall talking about it anyway ...



Yeah, but I'm not sure it's working. I will look into it.



*sheepish look*

I committed the pg_regress change back in Nov but didn't change 
buildfarm to use it. And now I look at it more closely I think it won't 
work. We have:


# locale
NOLOCALE :=
  ifdef NO_LOCALE
  NOLOCALE += --no-locale
  endif

I think instead of the += line we need:

  override NOLOCALE := --nolocale

The intended effect is that if any NOLOCALE arg is used in invoking 
make, --no-locale gets passed to pg_regress.


cheers

andrew



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne

Does anyone else find this odd:

mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:00.123');
 date_part
-----------
    123000
(1 row)

mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:01.123');
 date_part
-----------
   1123000
(1 row)

No other extracts include other fields.  eg, minutes:

mysql=# select extract(minutes from timestamp '2005-01-01 00:10:00');
 date_part
-----------
        10
(1 row)

mysql=# select extract(minutes from timestamp '2005-01-01 10:10:00');
 date_part
-----------
        10
(1 row)

So how come microseconds includes the microseconds from the 'seconds'
field and not just after the '.'?  And if it's supposed to include
'seconds', then why doesn't it include minutes, hours, etc.?

Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] [HACKERS] snprintf() argument reordering not working

2005-12-06 Thread Andrew Dunstan



Andrew Dunstan wrote:





I committed the pg_regress change back in Nov but didn't change 
buildfarm to use it. And now I look at it more closely I think it 
won't work. We have:


# locale
NOLOCALE :=
  ifdef NO_LOCALE
  NOLOCALE += --no-locale
  endif

I think instead of the += line we need:

  override NOLOCALE := --nolocale




and if I look after I have had some coffee I will see the underscore I 
missed that makes it make sense. We now return you to your regular viewing.


cheers

andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne

OK, AndrewSN just pointed out that it's documented to work like that...

...still seems bizarre...

Chris

Christopher Kings-Lynne wrote:

Does anyone else find this odd:

mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:00.123');
 date_part
-----------
    123000
(1 row)

mysql=# select extract(microseconds from timestamp '2005-01-01
00:00:01.123');
 date_part
-----------
   1123000
(1 row)

No other extracts include other fields.  eg, minutes:

mysql=# select extract(minutes from timestamp '2005-01-01 00:10:00');
 date_part
-----------
        10
(1 row)

mysql=# select extract(minutes from timestamp '2005-01-01 10:10:00');
 date_part
-----------
        10
(1 row)

So how come microseconds includes the microseconds from the 'seconds'
field and not just after the '.'?  And if it's supposed to include
'seconds', then why doesn't it include minutes, hours, etc.?

Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Chris Browne
[EMAIL PROTECTED] (Gustavo Tonini) writes:
 But,  wouldn't the performance be better? And wouldn't asynchronous
 messages be better processed?

Why do you think performance would be materially affected by this?

The MAJOR performance bottleneck is normally the slow network
connection between servers.

When looked at in the perspective of that bottleneck, pretty much
everything else is just noise.  (Sometimes pretty loud noise, but
still noise :-).)
-- 
let name="cbbrowne" and tld="cbbrowne.com" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/spreadsheets.html
When the grammar checker identifies an error, it suggests a
correction and can even makes some changes for you.  
-- Microsoft Word for Windows 2.0 User's Guide, p.35:

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Mario Weilguni

IMO this is not true. You can get affordable 10GBit network adapters, so you 
can have plenty of bandwidth in a DB server pool (if the machines are located 
in the same area). Even 1GBit Ethernet helps greatly here, and would make it 
possible to balance read-intensive (though not write-intensive) applications. 
We are using the Linux bonding interface with 2 GBit NICs, and you need quite 
a few hard disks to reach its 200 MBytes/sec throughput. Latency is not bad 
either.

Regards,
Mario Weilguni


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris Browne
Sent: Tuesday, December 06, 2005 4:43 PM
To: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Replication on the backend

[EMAIL PROTECTED] (Gustavo Tonini) writes:
 But,  wouldn't the performance be better? And wouldn't asynchronous
 messages be better processed?

Why do you think performance would be materially affected by this?

The MAJOR performance bottleneck is normally the slow network
connection between servers.

When looked at in the perspective of that bottleneck, pretty much
everything else is just noise.  (Sometimes pretty loud noise, but
still noise :-).)
-- 
let name="cbbrowne" and tld="cbbrowne.com" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/spreadsheets.html
When the grammar checker identifies an error, it suggests a
correction and can even makes some changes for you.  
-- Microsoft Word for Windows 2.0 User's Guide, p.35:

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Alvaro Herrera
Christopher Kings-Lynne wrote:
 OK, AndrewSN just pointed out that it's documented to work like that...
 
 ...still seems bizarre...

So it's a gotcha!

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Tom Lane
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 OK, AndrewSN just pointed out that it's documented to work like that...
 ...still seems bizarre...

It seems reasonably consistent to me.  extract() doesn't consider
seconds and fractional seconds to be distinct fields: it's all one
value.  The milliseconds and microseconds options just shift the
decimal place for you.
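
For instance (illustrative):

SELECT extract(seconds      from timestamp '2005-01-01 00:00:01.123'); --    1.123
SELECT extract(milliseconds from timestamp '2005-01-01 00:00:01.123'); -- 1123
SELECT extract(microseconds from timestamp '2005-01-01 00:00:01.123'); -- 1123000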

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Bruce Momjian
Tom Lane wrote:
 Christopher Kings-Lynne [EMAIL PROTECTED] writes:
  OK, AndrewSN just pointed out that it's documented to work like that...
  ...still seems bizarre...
 
 It seems reasonably consistent to me.  extract() doesn't consider
 seconds and fractional seconds to be distinct fields: it's all one
 value.  The milliseconds and microseconds options just shift the
 decimal place for you.

I think this illustrates the issue:

test=> SELECT date_part('microseconds', '00:00:01.33'::time);
 date_part
-----------
   1330000
(1 row)

test=> SELECT date_part('microseconds', '00:03:01.33'::time);
 date_part
-----------
   1330000
(1 row)

Why aren't 'minutes' considered too?  Because they aren't 'seconds'. 
Well, seconds aren't microseconds either.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 Why aren't 'minutes' considered too?  Because they aren't 'seconds'. 
 Well, seconds aren't microseconds either.

Yeah, they are: it's just one field.  The other way of looking at it
(that everything is seconds) is served by extract(epoch).
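
For instance (illustrative):

SELECT extract(epoch from timestamp '2005-01-01 00:03:01.33');
-- 1104537781.33: days, minutes and fractional seconds as one value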

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] generalizing the planner knobs

2005-12-06 Thread Rod Taylor
On Fri, 2005-12-02 at 15:49 -0500, Greg Stark wrote:
 Rod Taylor [EMAIL PROTECTED] writes:
 
   In the extreme, no amount of added intelligence in the optimizer is
   going to help it come up with any sane selectivity estimate for
   something like
   
 WHERE radius_authenticate(user) = 'OK'
  
  Why not?
  
  The missing capability in this case is to be able to provide or generate
  (self learning?) statistics for a function that describe a typical result
  and the cost of getting that result.
 
 Ok, try WHERE radius_authenticate(user, (select ...), ?)
 
 The point is that you can improve the estimates the planner gets. But you can
 never make them omniscient. There will always be cases where the user knows
 his data more than the planner. And those hints are still valid when a new
 optimizer has new plans available.

You missed my point. If the user knows their data, there is absolutely no
reason, aside from missing functionality in PostgreSQL, that statistics
cannot be generated to represent what the user knows about their data.

Once the planner knows the statistics it can make the right decision
without any hints.

The missing feature here is the ability to generate or provide
statistics and costs for functions.



-- 


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Reduce NUMERIC size by 2 bytes, reduce max length to 508

2005-12-06 Thread John D. Burger

Tom Lane wrote:

Hm ... between that, the possible crypto connection, and John's personal
testimony that he actually uses PG for calculations in this range, I'm
starting to lean to the idea that we shouldn't cut the range.


Just to be clear, this John has yet to use NUMERIC for any 
calculations, let alone in that range.  (I've only used NUMERIC for 
importing real-valued data where I didn't want to lose precision with a 
floating point representation, for instance, decimal latitude-longitude 
values.)


There was this post, though:

Gregory Maxwell wrote:


I've hesitated commenting, because I think it might be a silly reason,
but perhaps it's one other people share.  ...  I use PG as a
calculator for big numbers because it's the only user friendly thing
on my system that can do factorial(300) - factorial(280). I'd rather
use something like octave, but I've found its pretty easy to escape
its range.   If the range for computation is changed, then I'll
probably keep an old copy around just for this, though I'm not quite
sure how much I'd be affected..


- John D. Burger
  MITRE


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Reduce NUMERIC size by 2 bytes, reduce max length to 508

2005-12-06 Thread Tom Lane
John D. Burger [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Hm ... between that, the possible crypto connection, and John's 
 personal testimony

 Just to be clear, this John has yet to use NUMERIC for any 
 calculations, let alone in that range.

My mistake, got confused as to who had said what.

The point remains though: in discussing this proposed patch, we were
assuming that 10^508 would still be far beyond what people actually
needed.  Even one or two reports from the list membership of actual
use of larger values casts a pretty big shadow on that assumption.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Reduce NUMERIC size by 2 bytes, reduce max length to 508

2005-12-06 Thread Bruce Momjian
Tom Lane wrote:
 John D. Burger [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  Hm ... between that, the possible crypto connection, and John's 
  personal testimony
 
  Just to be clear, this John has yet to use NUMERIC for any 
  calculations, let alone in that range.
 
 My mistake, got confused as to who had said what.
 
 The point remains though: in discussing this proposed patch, we were
 assuming that 10^508 would still be far beyond what people actually
 needed.  Even one or two reports from the list membership of actual
 use of larger values casts a pretty big shadow on that assumption.

Agreed.  I would like to see us hit the big savings first, like merging
cmin/cmax (4 bytes per row) and reducing the varlena header size (2-3
bytes for short values), before we start going after disk savings that
actually limit our capabilities.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Optimizer oddness, possibly compounded in 8.1

2005-12-06 Thread Simon Riggs
On Tue, 2005-12-06 at 16:12 +0200, Hannu Krosing wrote:
 On Sat, 2005-12-03 at 09:21, Simon Riggs wrote:
 
  First off, I think we need to do some more work on partitioning so that
  some knowledge about the union set is understood by the optimizer. At
  the moment there is no concept of partition key, so its hard to spot
  when two union sets have the same key to allow pushdown.
 
 One of the easier cases would be non-overlapping (exclusive) constraints
 on union subtables on the joined column.
 
 This could serve as a partition key, or in case of many nonoverlapping
 columns (ex.: table is partitioned by date and region), as many
 partition keys.

Yes, that's my planned direction.

Best Regards, Simon Riggs


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Bruce Momjian

I have added your suggestions to the 8.1.X release notes.

---

Paul Lindner wrote:
-- Start of PGP signed section.
 On Sat, Dec 03, 2005 at 10:54:08AM -0500, Bruce Momjian wrote:
  Neil Conway wrote:
   On Wed, 2005-11-30 at 10:56 -0500, Tom Lane wrote:
It's been about a month since 8.1.0 was released, and we've found about
the usual number of bugs for a new release, so it seems like it's time
for 8.1.1.
   
   I think one fix that should be made in time for 8.1.1 is adding a note
   to the version migration section of the 8.1 release notes describing
   the invalid UTF-8 byte sequence problems that some people have run
   into when upgrading from prior versions. I'm not familiar enough with
   the problem or its remedies to add the note myself, though.
  
  Agreed, but I don't understand the problem well enough either.  Does
  anyone?
 
 There was a thread a couple of weeks back about this problem.  Here's
 my sample writeup -- I give my permission for anyone to use it as they
 see fit:
 
 
 Upgrading UNICODE databases to 8.1
 
 Postgres 8.1 includes a number of bug-fixes and improvements to
 Unicode and UTF-8 character handling.  Unfortunately previous releases
 would accept character sequences that were not valid UTF-8.  This
 may cause problems when upgrading your database using
 pg_dump/pg_restore resulting in an error message like this:
 
   Invalid UNICODE byte sequence detected near byte ...
 
 To convert your pre-8.1 database to 8.1 you may have to remove and/or
 fix the offending characters.  One simple way to fix the problem is to
 run your pg_dump output through the iconv command like this:
 
   iconv -c -f UTF8 -t UTF8 -o fixed.sql dump.sql
 
 The -c flag tells iconv to omit invalid characters from output.
 
 There is one problem with this.  Most versions of iconv try to read
 the entire input file into memory.  If your dump is quite large you
 will need to split the dump into multiple files and convert each one
 individually.  You must use the -l flag for split to ensure that the
 unicode byte sequences are not split.
 
split -l 1 dump.sql
 
 Another possible solution is to use the --inserts flag to pg_dump.
 When you load the resulting data dump in 8.1 this will result in the
 problem rows showing up in your error log.
 
 -- 
 Paul Lindner| | | | |  |  |  |   |   |
 [EMAIL PROTECTED]
-- End of PGP section, PGP failed!

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 I have added your suggestions to the 8.1.X release notes.

Did you read the followup discussion?  Recommending -c without a large
warning seems a very bad idea.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  I have added your suggestions to the 8.1.X release notes.
 
 Did you read the followup discussion?  Recommending -c without a large
 warning seems a very bad idea.

Well, I said it would remove invalid sequences.  What else should we
say?

This will remove invalid character sequences.

I saw no clear solution that allowed sequences to be corrected.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Jochem van Dieten
On 12/5/05, Hannu Krosing wrote:

 Concurrent CREATE INDEX
 

 Concurrent index NDX1 on table TAB1 is created like this:

 1) start transaction. take a snapshot SNAP1

 1.1) optionally, remove pages for TAB1 from FSM to force (?) all newer
 inserts/updates to happen at end of table (won't work for in-page
 updates without code changes)

 2) create the index as we do now, but only for pages which are visible
 in SNAP1

 3) record the index in pg_class, but mark it as do not use for lookups
 in a new field. Take snapshot SNAP2. commit transaction.

What happens if another transaction takes a snapshot between SNAP2 and
the commit? Wouldn't you need a lock to guard against that? (Not that
I know if that is possible or desirable.)

Jochem

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Michael Meskes
  Postgres-R, pgcluster, Slony-II. Some more advanced, some less. But
  certainly nothing I would send into the ring against Oracle-Grid.

Assuming that you mean Oracle Real Application Cluster (the Grid is more, 
right?) I wonder if this technology technically still counts as replication. 
AFAIK they do not replicate data but share a common data pool among different 
servers. You still have communication overhead but you write a tuple only 
once for all servers involved. Takes away a lot of overhead on a system 
that's heavily written to.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Rick Gigger

Just like MySQL!

On Dec 5, 2005, at 10:35 PM, Jan Wieck wrote:


On 12/5/2005 8:18 PM, Gustavo Tonini wrote:

 replication (master/slave, multi-master, etc) implemented inside
 postgres...I would like to know what has been done in this area.

We do not plan to implement replication inside the backend.
Replication needs are so diverse that pluggable replication support
makes a lot more sense. To me it even makes more sense than keeping
transaction support outside of the database itself and add it via
pluggable storage add-on.

Jan

 Gustavo.
 P.S. Sorry for my bad English.
 2005/12/5, Chris Browne [EMAIL PROTECTED]:

  [EMAIL PROTECTED] (Gustavo Tonini) writes:
   What about replication or data distribution inside the backend.
   This is a valid issue?

  I'm not sure what your question is...
  --
  (reverse (concatenate 'string "gro.gultn" "@" "enworbbc"))
  http://www.ntlug.org/~cbbrowne/x.html
  Love is like a snowmobile flying over the frozen tundra that suddenly
  flips, pinning you underneath.  At night, the ice weasels come.
  -- Matt Groening

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#== [EMAIL PROTECTED] #




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Rick Gigger

  - Asynchronous master to multi-slave. We have a few of those with
Mammoth Replicator and Slony-I being the top players. Slony-I does
need some cleanup and/or reimplementation after we have a general
pluggable replication API in place.


Does this API actually have people working on it, or is it just something
on the todo list?



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Gustavo Tonini
I don't see anything in the TODO list. I'm very interested in working on that, if possible...

Gustavo.


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Tom Lane
Jochem van Dieten [EMAIL PROTECTED] writes:
 On 12/5/05, Hannu Krosing wrote:
 3) record the index in pg_class, but mark it as do not use for lookups
 in a new field. Take snapshot SNAP2. commit transaction.

 What happens if another transaction takes a snapshot between SNAP2 and
 the commit? Wouldn't you need a lock to guard against that? (Not that
 I know if that is possible or desirable.)

It's worse than that, because an updating command that is already
running has already made its list of which indexes to update.  You can't
say commit and expect transactions already in flight to react
magically to the presence of the new index.  If you take a lock that
excludes writes, and then release that lock with your commit (lock
release actually happens after commit btw), then you can be sure that
subsequent write transactions will see your new index, because they take
their writer's lock before they inspect pg_index to see what indexes
they need to update.

Short of taking such a lock, you have a race condition.
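
(For concreteness, such a write-excluding lock could be taken like this;
a sketch, reusing TAB1 from upthread:

BEGIN;
LOCK TABLE tab1 IN EXCLUSIVE MODE;  -- blocks writers, still allows readers
-- ... publish the index entry, take SNAP2 ...
COMMIT;  -- the lock is released after the commit

EXCLUSIVE mode conflicts with every lock mode except ACCESS SHARE,
i.e. plain reads.)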

There's another little problem: it's not clear that "present in SNAP2
but not in SNAP1" has anything to do with the condition you need.  This
would exclude rows made by transactions still in progress as of SNAP2,
but you can't know whether such rows were made before or after your
commit of the index.  It doesn't do the right thing for deleted rows
either (deleted rows may still need to be entered into the index),
though perhaps you could fix that with a creative reinterpretation of
what "present in a snap" means.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Bruce Momjian
Bruce Momjian wrote:
 Tom Lane wrote:
  Bruce Momjian pgman@candle.pha.pa.us writes:
   I have added your suggestions to the 8.1.X release notes.
  
  Did you read the followup discussion?  Recommending -c without a large
  warning seems a very bad idea.
 
 Well, I said it would remove invalid sequences.  What else should we
 say?
 
   This will remove invalid character sequences.
 
 I saw no clear solution that allowed sequences to be corrected.

The release note text is:

Some users are having problems loading <literal>UTF8</literal> data into 8.1.X.
 This is because previous versions allowed invalid <literal>UTF8</literal>
sequences to be entered into the database, and this release properly
accepts only valid <literal>UTF8</literal> sequences.  One way to correct a
dumpfile is to use <command>iconv -c -f UTF-8 -t UTF-8</command>. This will
remove invalid character sequences. <command>iconv</command> reads the entire
input file into memory so it might be necessary to <command>split</command> the
dump into multiple smaller files for processing.

One nice solution would be if iconv would report the lines with errors
and you could correct them, but I see no way to do that.  The only thing
you could do is to diff the old and new files to see the problems.  Is
that helpful?  Here is new text I have used:

Some users are having problems loading <literal>UTF8</literal> data into 8.1.X.
 This is because previous versions allowed invalid <literal>UTF8</literal>
sequences to be entered into the database, and this release properly
accepts only valid <literal>UTF8</literal> sequences.  One way to correct a
dumpfile is to use <command>iconv -c -f UTF-8 -t UTF-8 -o cleanfile.sql
dumpfile.sql</command>.  The <literal>-c</literal> option removes invalid character
sequences.  A diff of the two files will show the sequences that are
invalid.  <command>iconv</command> reads the entire input file into memory so
it might be necessary to <command>split</command> the dump into multiple
smaller files for processing.

It highlights the 'diff' idea.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing

2005-12-06 Thread Hannu Krosing
On Tue, 2005-12-06 at 20:50, Jochem van Dieten wrote:
 On 12/5/05, Hannu Krosing wrote:
 
  Concurrent CREATE INDEX
  
 
  Concurrent index NDX1 on table TAB1 is created like this:
 
  1) start transaction. take a snapshot SNAP1
 
  1.1) optionally, remove pages for TAB1 from FSM to force (?) all newer
  inserts/updates to happen at end of table (won't work for in-page
  updates without code changes)
 
  2) create the index as we do now, but only for pages which are visible
  in SNAP1
 
  3) record the index in pg_class, but mark it as do not use for lookups
  in a new field. Take snapshot SNAP2. commit transaction.
 
 What happens if another transaction takes a snapshot between SNAP2 and
 the commit? 

I'm hoping there is some clever way to circumvent (the effects of) it,
but I can't see it yet.

 Wouldn't you need a lock to guard against that? (Not that
 I don't know if that is possible or desirable.)

That may be needed. At least I hope it is possible in a way that can
guarantee avoiding deadlocks.

What I have in mind would be something like this to get both SNAP2 and
commit between any transactions:

LOOP:
    LOCK AGAINST STARTING NEW TRANSACTIONS
    LOOP UP TO N SEC:
        IF NO OTHER TRANSACTIONS: BREAK
        ELSE: CONTINUE
    IF NO OTHER TRANSACTIONS: BREAK
    ELSE:
        UNLOCK AGAINST STARTING NEW TRANSACTIONS
        SLEEP N SEC
TAKE SNAP2
COMMIT (AND UNLOCK)


This will eventually succeed (given the right value for N) and will
guarantee that SNAP2 and COMMIT are atomic wrt other backends.


--
Hannu



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing

2005-12-06 Thread Hannu Krosing
On Tue, 2005-12-06 at 15:12, Tom Lane wrote:
 Jochem van Dieten [EMAIL PROTECTED] writes:
  On 12/5/05, Hannu Krosing wrote:
  3) record the index in pg_class, but mark it as do not use for lookups
  in a new field. Take snapshot SNAP2. commit transaction.
 
  What happens if another transaction takes a snapshot between SNAP2 and
  the commit? Wouldn't you need a lock to guard against that? (Not that
  I know if that is possible or desirable.)
 
 It's worse than that, because an updating command that is already
 running has already made its list of which indexes to update.  You can't
 say commit and expect transactions already in flight to react
 magically to the presence of the new index.  If you take a lock that
 excludes writes, and then release that lock with your commit (lock
 release actually happens after commit btw),

Is it possible to release a lock without commit?

  then you can be sure that
 subsequent write transactions will see your new index, because they take
 their writer's lock before they inspect pg_index to see what indexes
 they need to update.
 
 Short of taking such a lock, you have a race condition.
 
 There's another little problem: it's not clear that "present in SNAP2
 but not in SNAP1" has anything to do with the condition you need.  This
 would exclude rows made by transactions still in progress as of SNAP2,
 but you can't know whether such rows were made before or after your
 commit of the index.  It doesn't do the right thing for deleted rows
 either (deleted rows may still need to be entered into the index),
 though perhaps you could fix that with a creative reinterpretation of
 what "present in a snap" means.

It seems that taking SNAP1 also needs to be fitted between any other
transactions (i.e. no transaction can be running at the time), which can
hopefully be done as I outlined in my other answer to the grandparent.

-
Hannu



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Weird Grant/Revoke/Usage behavior

2005-12-06 Thread Bruce Momjian

Can someone comment on this?

---

Joshua D. Drake wrote:
 Hello,
 
 The below seems incorrect. If I am in the schema the behavior seems 
 correct. I can't see or select from the table.
 However if I am not in the schema I am able to see the table and its 
 structure. The user jd is not a superuser.
 
 cleancontact=# revoke usage on schema financials from jd;
 REVOKE
 cleancontact=# \c cleancontact jd
 You are now connected to database "cleancontact" as user "jd".
 cleancontact=> \d financials.foo
                          Table "financials.foo"
  Column |  Type  |                        Modifiers
 --------+--------+----------------------------------------------------------
  id     | bigint | not null default nextval('financials.foo_id_seq'::text)
  fname  | text   |
 Indexes:
     "foo_pkey" PRIMARY KEY, btree (id)
 
 cleancontact=> set search_path='financials';
 SET
 cleancontact=> \d
 No relations found.
 cleancontact=> \d foo
 Did not find any relation named "foo".
 cleancontact=>
 
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: explain analyze is your friend


[HACKERS] Slow email caught in the act

2005-12-06 Thread Jim C. Nasby
I just found an email that took 5 days to be delivered. Looking at the
headers below, the holdup was between m2x.hub.org and postgresql.org.
Can someone take a look at the two boxes and see what's going on?

Also, would -www have been the better place for this? I'm not sure if
they handle email stuff...

- Forwarded message from Stephen Slezak [EMAIL PROTECTED] -

Return-Path: [EMAIL PROTECTED]
X-Original-To: [EMAIL PROTECTED]
Delivered-To: [EMAIL PROTECTED]
Received: from mx2.hub.org (mx2.hub.org [200.46.204.254])
by flake.decibel.org (Postfix) with ESMTP id 5439F1527C
for [EMAIL PROTECTED]; Tue,  6 Dec 2005 00:09:27 -0600 (CST)
Received: from postgresql.org (postgresql.org [200.46.204.71])
by mx2.hub.org (Postfix) with ESMTP id 05380514AE8;
Tue,  6 Dec 2005 06:09:12 + (GMT)
X-Original-To: [EMAIL PROTECTED]
Received: from localhost (av.hub.org [200.46.204.144])
by postgresql.org (Postfix) with ESMTP id C14C49DCAD1
for [EMAIL PROTECTED]; Thu,  1 Dec 2005 19:15:17 -0400 (AST)
Received: from postgresql.org ([200.46.204.71])
 by localhost (av.hub.org [200.46.204.144]) (amavisd-new, port 10024)
 with ESMTP id 43083-01
 for [EMAIL PROTECTED];
 Thu,  1 Dec 2005 19:15:13 -0400 (AST)
X-Greylist: from auto-whitelisted by SQLgrey-
Received: from ausimss.pervasive.com (ausimss.pervasive.com [66.45.103.246])
by postgresql.org (Postfix) with ESMTP id A46669DCAB4
for [EMAIL PROTECTED]; Thu,  1 Dec 2005 19:15:11 -0400 (AST)
Received: from ausbayes2.aus.pervasive.com ([172.16.8.6]) by 
ausimss.pervasive.com with InterScan Messaging Security Suite; Thu, 01 Dec 2005 
17:15:11 -0600
Received: from FRASPAM.fra.pervasive.com ([10.1.16.7]) by 
ausbayes2.aus.pervasive.com with Microsoft SMTPSVC(5.0.2195.6713);
 Thu, 1 Dec 2005 17:15:11 -0600
Received: from brumail2.bru.pervasive.com ([10.5.16.61]) by 
FRASPAM.fra.pervasive.com with Microsoft SMTPSVC(5.0.2195.6713);
 Fri, 2 Dec 2005 00:15:06 +0100
X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain;
charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
Subject: Re: [pgsql-advocacy] joint booths at upcoming tradeshows
Date: Fri, 2 Dec 2005 00:15:05 +0100
Message-ID: [EMAIL PROTECTED]
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Thread-Topic: [pgsql-advocacy] joint booths at upcoming tradeshows
Thread-Index: AcX13EA7k+/T1h3MQS+/wKoOVRCB/QA8IzWg
From: Stephen Slezak [EMAIL PROTECTED]
To: Jim C. Nasby [EMAIL PROTECTED], Josh Berkus josh@agliodbs.com
Cc: Jim Nasby [EMAIL PROTECTED],
Bruce Momjian pgman@candle.pha.pa.us,
[EMAIL PROTECTED], [EMAIL PROTECTED]
X-OriginalArrivalTime: 01 Dec 2005 23:15:07.0358 (UTC) 
FILETIME=[11942FE0:01C5F6CD]
X-Virus-Scanned: by amavisd-new at hub.org
X-Mailing-List: pgsql-advocacy
List-Archive: http://archives.postgresql.org/pgsql-advocacy
List-Help: mailto:[EMAIL PROTECTED]
List-ID: pgsql-advocacy.postgresql.org
List-Owner: mailto:[EMAIL PROTECTED]
List-Post: mailto:[EMAIL PROTECTED]
List-Subscribe: mailto:[EMAIL PROTECTED]
List-Unsubscribe: mailto:[EMAIL PROTECTED]
Precedence: bulk
Sender: [EMAIL PROTECTED]
X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on flake.decibel.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham 
version=3.0.4

We also should have some extra space in our booth (right beside the .org 
section) where we could highlight other solutions that run on PostgreSQL

Steve

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Jim C. Nasby
Sent: Wednesday, November 30, 2005 12:31 PM
To: Josh Berkus; Stephen Slezak
Cc: Jim Nasby; Bruce Momjian; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [pgsql-advocacy] joint booths at upcoming tradeshows


On Wed, Nov 30, 2005 at 10:11:53AM -0800, Josh Berkus wrote:
 Jim,
 
 Speaking of which, a coworker (Steve Slezak) has info from IDG; they
 want to know who to talk to about a PostgreSQL .org booth for Linux
 World '06. He emailed this list a while ago but never got a reply.
 
 Ooops, sorry, I must have missed that in the non-member posts.
 
 Which LWE?   Boston, San Francisco, Sydney, what?

Boston, Apr. 4-6.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


- End forwarded message -

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 

Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 What I have in mind would be something like this to get both SNAP2 and
 commit between any transactions:

 LOOP:
 LOCK AGAINST STARTING NEW TRANSACTIONS

I can hardly credit that "let's block startup of ALL new transactions"
is a more desirable restriction than "let's block writers to the table
we wish to reindex".

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 Is it possible to release a lock without commit ?

Yes, but I don't see where that helps you here.

(To do any of this, you'd need to use the same kluge VACUUM does to hold
selected locks across a series of transactions.  So in reality you'd
probably be thinking of committing a startup transaction and letting
some of the locks be released by that.)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing

2005-12-06 Thread Hannu Krosing
On Tue, 2005-12-06 at 15:38, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  What I have in mind would be something like this to get both SNAP2 and
  commit between any transactions:
 
  LOOP:
  LOCK AGAINST STARTING NEW TRANSACTIONS
 
 I can hardly credit that "let's block startup of ALL new transactions"
 is a more desirable restriction than "let's block writers to the table
 we wish to reindex".

If the block is short-lived (will be removed one way or another in a few
(tenths of) seconds), then this is much more desirable than blocking
writers for a few hours.

The scenario where a concurrent CREATE INDEX command is needed is 24/7
OLTP databases, which can't be taken down for maintenance. Usually they
can be arranged to tolerate postponing a few transactions for one
second.

--
Hannu



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Ideas for easier debugging of backend problems

2005-12-06 Thread Bruce Momjian

Added to TODO:

* Add GUC variable to run a command on database panic or
  smart/fast/immediate shutdown

---

Peter Eisentraut wrote:
 Martijn van Oosterhout wrote:
  3. Add either a GUC or a command line switch or PGOPTION switch  to
  automatically invoke and attach gdb on certain types of error.
  Obviously you can only do this where stdin, stdout and stderr have
  not been redirected.
 
 Samba has a configuration parameter that allows you to set an arbitrary 
 command as a panic action script.  This can then be used to gather 
 debugging information or prepare a bug report (see other thread).  This 
 also has the added flexibility that binary packagers can add extra 
 information specific to their environment.  It may be worthwhile to 
 research whether we could do something similar.
 
 -- 
 Peter Eisentraut
 http://developer.postgresql.org/~petere/
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Peter Eisentraut
Bruce Momjian wrote:
 One nice solution would be if iconv would report the lines with
 errors and you could correct them, but I see no way to do that.  The
 only thing you could do is to diff the old and new files to see the
 problems.  Is that helpful?  Here is new text I have used:

I think this is nice.  If users see a big mess, they will have to clean
it up by hand anyway.

How about this for better wording:

diff -u -3 -p -r1.400.2.4 release.sgml
--- doc/src/sgml/release.sgml   6 Dec 2005 20:26:02 -   1.400.2.4
+++ doc/src/sgml/release.sgml   6 Dec 2005 20:44:26 -
@@ -528,15 +528,16 @@ psql -t -f fixseq.sql db1 | psql -e db1

 <listitem>
  <para>
-   Some users are having problems loading <literal>UTF8</> data into
-   8.1.X.  This is because previous versions allowed invalid <literal>UTF8</>
+   Some users are having problems loading UTF-8 data into
+   8.1.X.  This is because previous versions allowed invalid UTF-8 byte
    sequences to be entered into the database, and this release
-   properly accepts only valid <literal>UTF8</> sequences.  One
-   way to correct a dumpfile is to use <command>iconv -c -f UTF-8 -t UTF-8
+   properly accepts only valid UTF-8 sequences.  One
+   way to correct a dumpfile is to run the command <command>iconv -c -f UTF-8 -t UTF-8
    -o cleanfile.sql dumpfile.sql</>.  The <literal>-c</> option removes
    invalid character sequences.  A diff of the two files will show the
    sequences that are invalid.  <command>iconv</> reads the entire input
-   file into memory so it might be necessary to <command>split</> the dump
+   file into memory so it might be necessary to use <command>split</>
+   to break up the dump
    into multiple smaller files for processing.
  </para>
 </listitem>

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Hannu Krosing
On one fine day, Tue, 2005-12-06 at 15:41, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  Is it possible to release a lock without commit ?
 
 Yes, but I don't see where that helps you here.
 
 (To do any of this, you'd need to use the same kluge VACUUM does to hold
 selected locks across a series of transactions.  So in reality you'd
 probably be thinking of committing a startup transaction and letting
 some of the locks be released by that.)

Hmm, that sounds like a plan:

1) run a transaction repeatedly, trying to hit a point of no concurrent
transactions; enhance the odds by locking out the start of other
transactions for a few (tenths or hundredths of) seconds. If it
succeeds, record SNAP1, commit and continue; else roll back, then
sleep a little and retry.

2) build index on all rows inserted before SNAP1

3) run a transaction repeatedly, trying to hit a point of no concurrent
transactions by locking out other transactions for a few (tenths or
hundredths of) seconds; if it succeeds, record SNAP2, mark the index as
visible for inserts, and commit. Now all new transactions see the index
and use it when inserting new tuples.

4) scan over the table, adding all tuples between SNAP1 and SNAP2 to the index

5) mark index as usable for query plans


-
Hannu



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Bruce Momjian

Nice, updated.

---

Peter Eisentraut wrote:
 Bruce Momjian wrote:
  One nice solution would be if iconv would report the lines with
  errors and you could correct them, but I see no way to do that.  The
  only thing you could do is to diff the old and new files to see the
  problems.  Is that helpful?  Here is new text I have used:
 
 I think this is nice.  If users see a big mess, they will have to clean
 it up by hand anyway.
 
 How about this for better wording:
 
 diff -u -3 -p -r1.400.2.4 release.sgml
 --- doc/src/sgml/release.sgml   6 Dec 2005 20:26:02 -   1.400.2.4
 +++ doc/src/sgml/release.sgml   6 Dec 2005 20:44:26 -
 @@ -528,15 +528,16 @@ psql -t -f fixseq.sql db1 | psql -e db1
 
  <listitem>
   <para>
 -   Some users are having problems loading <literal>UTF8</> data into
 -   8.1.X.  This is because previous versions allowed invalid <literal>UTF8</>
 +   Some users are having problems loading UTF-8 data into
 +   8.1.X.  This is because previous versions allowed invalid UTF-8 byte
     sequences to be entered into the database, and this release
 -   properly accepts only valid <literal>UTF8</> sequences.  One
 -   way to correct a dumpfile is to use <command>iconv -c -f UTF-8 -t UTF-8
 +   properly accepts only valid UTF-8 sequences.  One
 +   way to correct a dumpfile is to run the command <command>iconv -c -f UTF-8 -t UTF-8
     -o cleanfile.sql dumpfile.sql</>.  The <literal>-c</> option removes
     invalid character sequences.  A diff of the two files will show the
     sequences that are invalid.  <command>iconv</> reads the entire input
 -   file into memory so it might be necessary to <command>split</> the dump
 +   file into memory so it might be necessary to use <command>split</>
 +   to break up the dump
     into multiple smaller files for processing.
   </para>
  </listitem>
 
 -- 
 Peter Eisentraut
 http://developer.postgresql.org/~petere/
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 1) run a transaction repeatedly, trying to hit a point of no concurrent
 transactions,

In the sort of 24x7 environment that people are arguing this is needed
for, it's entirely possible that that will *never* succeed.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Hannu Krosing
On one fine day, Tue, 2005-12-06 at 16:01, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  1) run a transaction repeatedly, trying to hit a point of no concurrent
  transactions,
 
 In the sort of 24x7 environment that people are arguing this is needed
 for, it's entirely possible that that will *never* succeed.

My OLTP transactions are usually 5-50ms in length. Common sense tells
me that if I disallow new transactions for 100ms, I am more than likely
to have waited for all existing ones to finish and can do my 1 ms
of take snapshot + commit and let all the waiting transactions
commence.

If the database is running longer transactions, there can be a GUC to
adjust the time CONCURRENT CREATE INDEX will wait. For example, for
transactions mostly in the 0.5-2 sec range the wait could be set to 3 sec.

-
Hannu


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Weird Grant/Revoke/Usage behavior

2005-12-06 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 Can someone comment on this?

It's operating as designed.  Schemas you don't have USAGE privilege on
are ignored if listed in your search path.
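
(A minimal illustration, run as a role that lacks USAGE on schema s1; the
schema and table names are made up:)

SET search_path = s1, public;
SELECT * FROM widgets;  -- resolves to public.widgets; s1 is silently skipped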

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[HACKERS] more locale problems on Windows

2005-12-06 Thread Andrew Dunstan


I set my locale to Turkish, then did initdb --no-locale. pg_controldata 
is set up correctly, as is postgresql.conf, but messages still come out 
in Turkish on the log file. So either we aren't doing it right or my 
(modern) libintl is hijacking some more stuff. Same result for French, 
so it's not an anti- (or pro-) Turkish plot.


*sigh*

cheers

andrew



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] more locale problems on Windows

2005-12-06 Thread Magnus Hagander
 I set my locale to Turkish, then did initdb --no-locale. 
 pg_controldata is set up correctly, as is postgresql.conf, 
 but messages still come out in Turkish on the log file. So 
 either we aren't doing it right or my
 (modern) libintl is hijacking some more stuff. Same result 
 for French, so it's not an anti- (or pro-) Turkish plot.
 
 *sigh*

Do you have lc_messages set in initdb.conf? IIRC, you must explicitly
set it to english or else libintl will try to guess from your OS.
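
(For instance, an explicit setting in postgresql.conf; 'C' forces
untranslated messages:)

    lc_messages = 'C'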

(And not only that, it guesses badly in many cases. For example, I have
Windows installed in English, with everything set to English except for
the date/time/currency format, and libintl guesses Swedish.)

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] more locale problems on Windows

2005-12-06 Thread Magnus Hagander
  I set my locale to Turkish, then did initdb --no-locale. 
  pg_controldata is set up correctly, as is postgresql.conf, but 
  messages still come out in Turkish on the log file. So either we 
  aren't doing it right or my
  (modern) libintl is hijacking some more stuff. Same result 
 for French, 
  so it's not an anti- (or pro-) Turkish plot.
  
  *sigh*
 
 Do you have lc_messages set in initdb.conf? IIRC, you must 
 explicitly set it to english or else libintl will try to 
 guess from your OS.

*postgresql.conf* of course..


//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Aly S.P Dharshi
I would classify it as a clustered database system (Oracle 10g that is).
Clustered meaning more than one node in the cluster.

Aly.

On Tue, 6 Dec 2005, Michael Meskes wrote:

  Postgres-R, pgcluster, Slony-II. Some more advanced, some less. But
  certainly nothing I would send into the ring against Oracle-Grid.

Assuming that you mean Oracle Real Application Cluster (the Grid is more,
right?) I wonder if this technology technically still counts as replication.
AFAIK they do not replicate data but share a common data pool among different
servers. You still have communication overhead but you write a tuple only
once for all servers involved. Takes away a lot of overhead on a system
that's heavily written to.

Michael


-- 
Aly S.P Dharshi
[EMAIL PROTECTED]

 A good speech is like a good dress
  that's short enough to be interesting
  and long enough to cover the subject

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] more locale problems on Windows

2005-12-06 Thread Andrew Dunstan



Magnus Hagander wrote:

I set my locale to Turkish, then did initdb --no-locale. 
pg_controldata is set up correctly, as is postgresql.conf, 
but messages still come out in Turkish on the log file. So 
either we aren't doing it right or my
(modern) libintl is hijacking some more stuff. Same result 
for French, so it's not an anti- (or pro-) Turkish plot.


*sigh*

Do you have lc_messages set in initdb.conf? IIRC, you must explicitly
set it to english or else libintl will try to guess from your OS.

(And not only that, it guesses badly in many cases. For example, I have
Windows installed in English, with everything set to English except for
the date/time/currency format, and libintl guesses Swedish.)

Yes. In fact, I can't find *any* combination of environment settings, 
initdb flags or lc_messages setting in postgresql.conf that appears to 
make any difference.


Maybe I'll just leave it for someone who cares more than I do to solve.

cheers

andrew



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Jim C. Nasby
On Sat, Dec 03, 2005 at 03:07:19PM -0800, Uwe C. Schroeder wrote:
 the ctid seems to be the solution to my problem. I'm inserting the record in 
 a 
 transaction so the ctid shouldn't change while the transaction isn't finished 
 (either rolled back or committed).
 One question though. How would I get the ctid of the just inserted record. 
 GET 
 DIAGNOSTICS only handles row count and oid per the docs.

Right now you don't. :( ISTM there should be a way to get back the row
you just inserted. Whether a ctid is the right way to do that I don't
know...
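
(For reference, what GET DIAGNOSTICS did offer at the time; a minimal
PL/pgSQL sketch, the table name is made up:)

CREATE FUNCTION insert_demo() RETURNS void AS $$
DECLARE
    n integer;
    o oid;
BEGIN
    INSERT INTO mytable (val) VALUES ('x');
    GET DIAGNOSTICS n = ROW_COUNT;   -- rows affected by the last statement
    GET DIAGNOSTICS o = RESULT_OID;  -- OID of the new row (WITH OIDS tables only)
END;
$$ LANGUAGE plpgsql;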

I'm going to move this over to -hackers to see what people over there
have to say.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Slow email caught in the act

2005-12-06 Thread Marc G. Fournier


It was sent by someone not subscribed to the mailing list, and was delayed 
for moderator approval ...


On Tue, 6 Dec 2005, Jim C. Nasby wrote:


I just found an email that took 5 days to be delivered. Looking at the
headers below, the holdup was between m2x.hub.org and postgresql.org.
Can someone take a look at the two boxes and see what's going on?

Also, would -www have been the better place for this? I'm not sure if
they handle email stuff...

- Forwarded message from Stephen Slezak [EMAIL PROTECTED] -

Return-Path: [EMAIL PROTECTED]
X-Original-To: [EMAIL PROTECTED]
Delivered-To: [EMAIL PROTECTED]
Received: from mx2.hub.org (mx2.hub.org [200.46.204.254])
by flake.decibel.org (Postfix) with ESMTP id 5439F1527C
for [EMAIL PROTECTED]; Tue,  6 Dec 2005 00:09:27 -0600 (CST)
Received: from postgresql.org (postgresql.org [200.46.204.71])
by mx2.hub.org (Postfix) with ESMTP id 05380514AE8;
Tue,  6 Dec 2005 06:09:12 + (GMT)
X-Original-To: [EMAIL PROTECTED]
Received: from localhost (av.hub.org [200.46.204.144])
by postgresql.org (Postfix) with ESMTP id C14C49DCAD1
for [EMAIL PROTECTED]; Thu,  1 Dec 2005 19:15:17 -0400 (AST)
Received: from postgresql.org ([200.46.204.71])
by localhost (av.hub.org [200.46.204.144]) (amavisd-new, port 10024)
with ESMTP id 43083-01
for [EMAIL PROTECTED];
Thu,  1 Dec 2005 19:15:13 -0400 (AST)
X-Greylist: from auto-whitelisted by SQLgrey-
Received: from ausimss.pervasive.com (ausimss.pervasive.com [66.45.103.246])
by postgresql.org (Postfix) with ESMTP id A46669DCAB4
for [EMAIL PROTECTED]; Thu,  1 Dec 2005 19:15:11 -0400 (AST)
Received: from ausbayes2.aus.pervasive.com ([172.16.8.6]) by 
ausimss.pervasive.com with InterScan Messaging Security Suite; Thu, 01 Dec 2005 
17:15:11 -0600
Received: from FRASPAM.fra.pervasive.com ([10.1.16.7]) by 
ausbayes2.aus.pervasive.com with Microsoft SMTPSVC(5.0.2195.6713);
 Thu, 1 Dec 2005 17:15:11 -0600
Received: from brumail2.bru.pervasive.com ([10.5.16.61]) by 
FRASPAM.fra.pervasive.com with Microsoft SMTPSVC(5.0.2195.6713);
 Fri, 2 Dec 2005 00:15:06 +0100
X-MimeOLE: Produced By Microsoft Exchange V6.5.7226.0
Content-class: urn:content-classes:message
MIME-Version: 1.0
Content-Type: text/plain;
charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
Subject: Re: [pgsql-advocacy] joint booths at upcoming tradeshows
Date: Fri, 2 Dec 2005 00:15:05 +0100
Message-ID: [EMAIL PROTECTED]
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Thread-Topic: [pgsql-advocacy] joint booths at upcoming tradeshows
Thread-Index: AcX13EA7k+/T1h3MQS+/wKoOVRCB/QA8IzWg
From: Stephen Slezak [EMAIL PROTECTED]
To: Jim C. Nasby [EMAIL PROTECTED], Josh Berkus josh@agliodbs.com
Cc: Jim Nasby [EMAIL PROTECTED],
Bruce Momjian pgman@candle.pha.pa.us,
[EMAIL PROTECTED], [EMAIL PROTECTED]
X-OriginalArrivalTime: 01 Dec 2005 23:15:07.0358 (UTC) 
FILETIME=[11942FE0:01C5F6CD]
X-Virus-Scanned: by amavisd-new at hub.org
X-Mailing-List: pgsql-advocacy
List-Archive: http://archives.postgresql.org/pgsql-advocacy
List-Help: mailto:[EMAIL PROTECTED]
List-ID: pgsql-advocacy.postgresql.org
List-Owner: mailto:[EMAIL PROTECTED]
List-Post: mailto:[EMAIL PROTECTED]
List-Subscribe: mailto:[EMAIL PROTECTED]
List-Unsubscribe: mailto:[EMAIL PROTECTED]
Precedence: bulk
Sender: [EMAIL PROTECTED]
X-Spam-Checker-Version: SpamAssassin 3.0.4 (2005-06-05) on flake.decibel.org
X-Spam-Level:
X-Spam-Status: No, score=-2.6 required=5.0 tests=AWL,BAYES_00 autolearn=ham
version=3.0.4

We also should have some extra space in our booth (right beside the .org 
section) where we could highlight other solutions that run on PostgreSQL

Steve

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Jim C. Nasby
Sent: Wednesday, November 30, 2005 12:31 PM
To: Josh Berkus; Stephen Slezak
Cc: Jim Nasby; Bruce Momjian; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [pgsql-advocacy] joint booths at upcoming tradeshows


On Wed, Nov 30, 2005 at 10:11:53AM -0800, Josh Berkus wrote:

Jim,


Speaking of which, a coworker (Steve Slezak) has info from IDG; they
want to know who to talk to about a PostgreSQL .org booth for Linux
World '06. He emailed this list a while ago but never got a reply.


Ooops, sorry, I must have missed that in the non-member posts.

Which LWE?   Boston, San Francisco, Sydney, what?


Boston, Apr. 4-6.
--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


- End forwarded message 

Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Greg Stark

Jim C. Nasby [EMAIL PROTECTED] writes:

 Right now you don't. :( ISTM there should be a way to get back the row
 you just inserted. Whether a ctid is the right way to do that I don't
 know...
 
 I'm going to move this over to -hackers to see what people over there
 have to say.

Perhaps the right thing to provide would be a data structure that bundled up
the ctid and the transaction id. It would only be valid if the stored
transaction id still matched the current transaction id when used. Any
attempt to use it in a later transaction would give an error, much like
using sequence.currval when nextval hasn't been used.
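
(One way to picture that bundle: a hypothetical composite type, nothing
that actually existed:)

CREATE TYPE insert_handle AS (
    row_ctid tid,  -- physical location of the inserted row
    row_xid  xid   -- inserting transaction; the handle would only be
                   -- valid while this is still the current transaction
);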

Many people would suggest the right thing to be using is the primary key. And
there ought to be an interface to fetch the current value (or values) of the
primary key of the last inserted record.

The benefit of providing something based on ctid is to avoid the inefficiency
of the index lookup on the primary key, and it would work on tables without any
primary key. I'm not sure it's worth the effort it would entail for those
narrow use cases, especially since I think some interface to retrieve the
primary key will still be needed anyway.

-- 
greg


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 The benefit of providing something based on ctid is to avoid the inefficiency
 of the index lookup on the primary key, and it would work on tables without any
 primary key. I'm not sure it's worth the effort it would entail for those
 narrow use cases, especially since I think some interface to retrieve the
 primary key will still be needed anyway.

Rather than hard-wiring a special case for any of these things, I'd much
rather see us implement INSERT...RETURNING and UPDATE...RETURNING as per
previous suggestions.  Then you can fetch pkey, ctid, or whatever you
need.
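
(A sketch of the proposed syntax, which did not yet exist when this was
written; RETURNING eventually shipped in PostgreSQL 8.2. Table and column
names are made up:)

INSERT INTO orders (customer_id, total)
VALUES (42, 99.50)
RETURNING id, ctid;

UPDATE orders SET total = 100.00
WHERE id = 7
RETURNING total, ctid;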

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Greg Stark
Hannu Krosing [EMAIL PROTECTED] writes:

 The scenario where a concurrent create index command is needed is 24/7
 OLTP databases, which can't be taken down for maintenance. Usually they
 can be arranged to tolerate postponing a few transactions for one
 second.

Well, the dominant defining characteristic of OLTP is precisely that you do
*not* have the timing requirements under your control and can't make such
arrangements. That is, you have to process requests as fast as they come in,
whatever that rate might be.

But that said, realistically *any* solution has to obtain a lock at some time
to make the schema change. I would say pretty much any O(1) (constant time)
outage is at least somewhat acceptable as contrasted with the normal index
build which locks out other writers for at least O(n lg n) time. Anything on
the order of 100ms is probably as good as it gets here.

-- 
greg


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne
Why aren't 'minutes' considered too?  Because they aren't 'seconds'. 
Well, seconds aren't microseconds either.


Yeah, they are: it's just one field.  The other way of looking at it
(that everything is seconds) is served by extract(epoch).


Well, it's different in MySQL unfortunately - what does the standard 
say?  Out of interest, can someone try this for me in MySQL 5:


SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:10.00123');
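
(For reference, what PostgreSQL itself returns for these; it multiplies the
whole seconds field, fractional part included, by 1,000,000:)

SELECT extract(microseconds FROM timestamp '2003-01-02 10:30:00.00123');
-- 1230
SELECT extract(microseconds FROM timestamp '2003-01-02 10:30:10.00123');
-- 10001230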

Chris


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Optimizer oddness, possibly compounded in 8.1

2005-12-06 Thread Michael Glaesemann


On Dec 7, 2005, at 10:46 , Christopher Kings-Lynne wrote:

In case you didn't know btw, MySQL 5.1 is out with rather extensive  
table partition support.  So get coding :D


You do mean MySQL 5.1 alpha is out, right?

Michael Glaesemann
grzm myrealbox com


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] row is too big: size 8916, maximum size 8136

2005-12-06 Thread Euler Taveira de Oliveira
Hi,

I'm doing some tests with a 700-column table. But when I try to load
some data with INSERT or COPY I get that message. I verified that
BLCKSZ is limiting the tuple size but I don't have a clue why it's
not using TOAST. I'm using PostgreSQL 8.0.3 on a Slackware 10.1 box.
Let me know if you want a test case or other useful information.


Euler Taveira de Oliveira
euler[at]yahoo_com_br


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Michael Fuhr
On Wed, Dec 07, 2005 at 09:43:30AM +0800, Christopher Kings-Lynne wrote:
 Why aren't 'minutes' considered too?  Because they aren't 'seconds'. 
 Well, seconds aren't microseconds either.
 
 Yeah, they are: it's just one field.  The other way of looking at it
 (that everything is seconds) is served by extract(epoch).
 
 Well, it's different in MySQL unfortunately - what does the standard 
 say?

I don't see microseconds as a possible field in SQL:2003 (draft copy).

 Out of interest, can someone try this for me in MySQL 5:
 
 SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
 SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:10.00123');

MySQL 5.0.16 gives an error:

mysql> SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near 'FROM '2003-01-02 10:30:00.00123')' at line 1

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne

MySQL 5.0.16 gives an error:

mysql> SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MySQL server version for the right syntax to use
near 'FROM '2003-01-02 10:30:00.00123')' at line 1


Odd, that example is straight from the MySQL 5 manual:

http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html

Chris


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] row is too big: size 8916, maximum size 8136

2005-12-06 Thread Tom Lane
Euler Taveira de Oliveira [EMAIL PROTECTED] writes:
 I'm doing some tests with a 700-column table.

Redesign your schema...

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne

Looks like MySQL doesn't allow a space before the open parenthesis
(there isn't one in the manual's example):

mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123');
+-------------------------------------------------------+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123') |
+-------------------------------------------------------+
|                                                  1230 |
+-------------------------------------------------------+
1 row in set (0.01 sec)


Ok, and what does this give:

SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');

Chris


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Michael Fuhr
On Wed, Dec 07, 2005 at 10:32:20AM +0800, Christopher Kings-Lynne wrote:
 MySQL 5.0.16 gives an error:
 
 mysql> SELECT EXTRACT (MICROSECOND FROM '2003-01-02 10:30:00.00123');
 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near 'FROM '2003-01-02 10:30:00.00123')' at line 1
 
 Odd, that example is straight from the MySQL 5 manual:
 
 http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html

Looks like MySQL doesn't allow a space before the open parenthesis
(there isn't one in the manual's example):

mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123');
+-------------------------------------------------------+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:00.00123') |
+-------------------------------------------------------+
|                                                  1230 |
+-------------------------------------------------------+
1 row in set (0.01 sec)

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Michael Fuhr
On Wed, Dec 07, 2005 at 10:47:45AM +0800, Christopher Kings-Lynne wrote:
 Ok, and what does this give:
 
 SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');

mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');
+-------------------------------------------------------+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123') |
+-------------------------------------------------------+
|                                                  1230 |
+-------------------------------------------------------+
1 row in set (0.00 sec)

Does contrary behavior from MySQL count as evidence that PostgreSQL's
behavior is correct? :-)

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Oddity with extract microseconds?

2005-12-06 Thread Christopher Kings-Lynne

mysql> SELECT EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123');
+-------------------------------------------------------+
| EXTRACT(MICROSECOND FROM '2003-01-02 10:30:01.00123') |
+-------------------------------------------------------+
|                                                  1230 |
+-------------------------------------------------------+
1 row in set (0.00 sec)

Does contrary behavior from MySQL count as evidence that PostgreSQL's
behavior is correct? :-)


No...I happen to think that their way is more consistent though.  Pity 
it's not in the spec.


At least PostgreSQL is consistent with seconds/microseconds:

mysql=# select extract(seconds from timestamp '2005-01-01 00:00:01.01');
 date_part 
-----------
      1.01
(1 row)

Chris


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Uwe C. Schroeder
Thanks Jim.

Right now I just keep using the OIDs - but it would be nice to eliminate the
need for that completely.

UC

On Tuesday 06 December 2005 15:01, Jim C. Nasby wrote:
 On Sat, Dec 03, 2005 at 03:07:19PM -0800, Uwe C. Schroeder wrote:
  the ctid seems to be the solution to my problem. I'm inserting the record
  in a transaction so the ctid shouldn't change while the transaction isn't
  finished (either rolled back or committed).
  One question though. How would I get the ctid of the just inserted
  record. GET DIAGNOSTICS only handles row count and oid per the docs.

 Right now you don't. :( ISTM there should be a way to get back the row
 you just inserted. Whether a ctid is the right way to do that I don't
 know...

 I'm going to move this over to -hackers to see what people over there
 have to say.

UC

--
Open Source Solutions 4U, LLC   1618 Kelly St
Phone:  +1 707 568 3056 Santa Rosa, CA 95401
Cell:   +1 650 302 2405 United States
Fax:+1 707 568 6416

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Gavin Sherry
Hi,

On Tue, 6 Dec 2005, Bruce Momjian wrote:


 Nice, updated.

 ---


I think my suggestion from the other day is useful also.

---

Omar Kilani and I have spent a few hours looking at the problem. For
situations where there is a lot of invalid encoding, manual fixing is just
not viable. The vim project has a kind of fuzzy encoding conversion which
accounts for a lot of the non-UTF8 sequences in UTF8 data. You can use vim
to modify your text dump as follows:

vim -c :wq! ++enc=utf8 fixed.dump original.dump

---

I think this is a viable option for people who have a non-trivial amount of
data and don't see manual fixing or potentially losing data as a viable
option.

Thanks,

Gavin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] row is too big: size 8916, maximum size 8136

2005-12-06 Thread Michael Fuhr
On Tue, Dec 06, 2005 at 11:03:16PM -0300, Euler Taveira de Oliveira wrote:
 I'm doing some tests with a 700-column table. But when I try to load
 some data with INSERT or COPY I get that message. I verified that
 BLCKSZ is limiting the tuple size but I don't have a clue why it's
 not using TOAST. I'm using PostgreSQL 8.0.3 on a Slackware 10.1 box.

What data types are you using?  Wide fixed-length (i.e., non-TOASTable)
types might be limiting how many columns you can insert, and TOASTed
data still has in-line overhead:

http://www.postgresql.org/docs/8.0/interactive/storage-toast.html

Allowing for the varlena header word, the total size of a TOAST
pointer datum is therefore 20 bytes regardless of the actual size
of the represented value.

With a block size of 8192, if every column is TOASTed then you'll
get a "row is too big" error if you have more than about 405 columns.
With short, non-TOASTed data you'll be able to insert more columns.
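
(Roughly: 8136 bytes of usable space, minus a tuple header on the order of
30 bytes, leaves about 8100 bytes; 8100 / 20 bytes per TOAST pointer comes
to about 405 columns.)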

-- 
Michael Fuhr

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] Feature Request: Multi-octet raw

2005-12-06 Thread Trent Shipley
It would be nice if PostgreSQL supported multi-octet raw data.  Certainly a 
lot of what you would do with it would be similar to bytea, but the basic 
string functions would be overloaded so that the unit of work would be a 
multi-octet word.

Multi-octet instances could be cast to bytea when one wanted to work with the 
data bit-by-bit instead of word-by-word.

the declaration would be something like:

CREATE TABLE acme
 ( ... 
  ,multi_octet_foo  MULTIOCTET (octets-per-word, max-words-per-column)
  ,...
 )

To reuse a declaration you could create a domain.
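
For example (hypothetical, since the MULTIOCTET type is only proposed here):

CREATE DOMAIN utf16_string AS MULTIOCTET(2, 1024);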


MULTIOCTET columns could be loaded using octal, decimal, or hexadecimal text 
input. 


The real advantage of a multi-octet type would be for power users.  The code 
would be reusable.  It would simplify tasks like creating the often-requested 
native support for UTF-16 and the less frequently requested UTF-32.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Jan Wieck

On 12/6/2005 11:23 AM, Mario Weilguni wrote:


IMO this is not true. You can get affordable 10GBit network adapters, so you 
can have plenty of bandwidth in a db server pool (if they are located in the 
same area). Even 1GBit Ethernet greatly helps here, and would make it possible 
to balance read-intensive (and not write-intensive) applications. We are using 
the Linux bonding interface with 2 GBit NICs, and 200 MBytes/sec throughput is 
something you need quite a few hard disks to reach. Latency is not bad either.


It's not so much the bandwidth but more the roundtrips that limit your 
maximum transaction throughput. Remember, whatever the priority, you 
can't increase the speed of light.



Jan




Regards,
Mario weilguni


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris Browne
Sent: Tuesday, December 06, 2005 4:43 PM
To: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Replication on the backend

[EMAIL PROTECTED] (Gustavo Tonini) writes:

But,  wouldn't the performance be better? And wouldn't asynchronous
messages be better processed?


Why do you think performance would be materially affected by this?

The MAJOR performance bottleneck is normally the slow network
connection between servers.

When looked at in the perspective of that bottleneck, pretty much
everything else is just noise.  (Sometimes pretty loud noise, but
still noise :-).)



--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Upcoming PG re-releases

2005-12-06 Thread Bruce Momjian

Exactly what does vim do that iconv does not?  Fuzzy encoding sounds
scary to me.

---

Gavin Sherry wrote:
 Hi,
 
 On Tue, 6 Dec 2005, Bruce Momjian wrote:
 
 
  Nice, updated.
 
  ---
 
 
 I think my suggestion from the other day is useful also.
 
 ---
 
 Omar Kilani and I have spent a few hours looking at the problem. For
 situations where there is a lot of invalid encoding, manual fixing is just
 not viable. The vim project has a kind of fuzzy encoding conversion which
 accounts for a lot of the non-UTF8 sequences in UTF8 data. You can use vim
 to modify your text dump as follows:
 
 vim -c :wq! ++enc=utf8 fixed.dump original.dump
 
 ---
 
 I think this is a viable option for people with a non-trivial amount of
 data and don't see manual fixing or potentially losing data as a viable
 option.
 
 Thanks,
 
 Gavin
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Greg Stark

Tom Lane [EMAIL PROTECTED] writes:

 Rather than hard-wiring a special case for any of these things, I'd much
 rather see us implement INSERT...RETURNING and UPDATE...RETURNING as per
 previous suggestions.  Then you can fetch pkey, ctid, or whatever you
 need.

I happen to think UPDATE RETURNING is one of the coolest things since sliced
bread, but that's because it saved my ass once in my last job.

I wonder whether the UI tools need anything more low-level than that. In
general sticking their grubby fingers in the query the user entered seems
wrong, and they would have to tack on a RETURNING clause. Though I can't
really see it failing in this particular instance.

-- 
greg


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [GENERAL] 8.1, OID's and plpgsql

2005-12-06 Thread Tom Lane
Greg Stark [EMAIL PROTECTED] writes:
 Tom Lane [EMAIL PROTECTED] writes:
 Rather than hard-wiring a special case for any of these things, I'd much
 rather see us implement INSERT...RETURNING and UPDATE...RETURNING as per
 previous suggestions.

 I wonder whether the ui tools need anything more low level than that. In
 general sticking their grubby fingers in the query the user entered seems
 wrong and they would have to tack on a RETURNING clause.

That was mentioned before as a possible objection, but I'm not sure that
I buy it.  The argument seems to be that a client-side driver would
understand the query and table structure well enough to know what to do
with a returned pkey value, but not well enough to understand how to
tack on a RETURNING clause to request that value.  This seems a bit
bogus.

There may be some point in implementing a protocol-level equivalent of
RETURNING just to reduce the overhead on both sides, but I think we
ought to get the RETURNING functionality in place first and then worry
about that...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Gregory Maxwell
On 12/6/05, Jan Wieck [EMAIL PROTECTED] wrote:
  IMO this is not true. You can get affordable 10GBit network adapters, so
  you can have plenty of bandwidth in a db server pool (if they are located in
  the same area). Even 1GBit Ethernet greatly helps here, and would make it
  possible to balance read-intensive (and not write-intensive) applications.
  We are using the Linux bonding interface with 2 GBit NICs, and 200 MBytes/sec
  throughput is something you need quite a few hard disks to reach. Latency is
  not bad either.

 It's not so much the bandwidth but more the roundtrips that limit your
 maximum transaction throughput. Remember, whatever the priority, you
 can't increase the speed of light.

Eh, why would light-limited delay be any slower than a disk on FC the
same distance away? :)

In any case, performance of PG on iSCSI is just fine. You can't blame
the network... Doing multimaster replication is hard because the
locking primitives that are fine on a simple multiprocessor system
(with a very high-bandwidth, very low-latency interconnect between
processors) just don't work across a network, so you're left finding
other methods and making them work...

But again, multimaster isn't hard because of some inherently
slow property of networks.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] row is too big: size 8916, maximum size 8136

2005-12-06 Thread Mark Kirkwood

Euler Taveira de Oliveira wrote:

Hi,

I'm doing some tests with a 700-column table. But when I try to load
some data with INSERT or COPY I get that message. I verified that
BLCKSZ is limiting the tuple size but I don't have a clue why it's
not using TOAST. I'm using PostgreSQL 8.0.3 on a Slackware 10.1 box.
Let me know if you want a test case or other useful information.




700 columns - yuck... recompiling with BLCKSZ=16384 or 32768 (say) might 
allow you this many (will require an initdb).


regards

Mark

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Replication on the backend

2005-12-06 Thread Markus Schiltknecht
On Tue, 2005-12-06 at 23:19 -0500, Jan Wieck wrote:
 It's not so much the bandwidth but more the roundtrips that limit your 
 maximum transaction throughput. 

I completely agree that it's the latency that counts, not the bandwidth.

Does anybody have latency / roundtrip measurements for current hardware?
I'm interested in:
1Gb Ethernet,
10 Gb Ethernet,
InfiniBand,
probably even p2p usb2 or firewire links?

At least Quadrics claims [1] to have measured only 1.38 microseconds.
Assuming real-world conditions would give you 5 microseconds, on a 3 GHz
processor that's 15,000 CPU cycles. Which is IMHO not that much any
more. Or am I wrong (mental arithmetic never was my favourite subject)?
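
(Checking the arithmetic: 5 microseconds * 3e9 cycles/sec = 15,000 cycles,
so the figure holds.)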

Regards
Markus

[1]
http://www.quadrics.com/quadrics/QuadricsHome.nsf/NewsByDate/98FFE60F799AC95180256FEA002A6D9D



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: Concurrent CREATE INDEX, try 2 (was Re: [HACKERS] Reducing relation locking overhead)

2005-12-06 Thread Hannu Krosing
On one fine day, Tue, 2005-12-06 at 19:32, Greg Stark wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
 
  The scenario where a concurrent create index command is needed is 24/7
  OLTP databases, which can't be taken down for maintenance. Usually they
  can be arranged to tolerate postponing a few transactions for one
  second.
 
 Well, the dominant defining characteristic of OLTP is precisely that you do
 *not* have the timing requirements under your control and can't make such
 arrangements. That is, you have to process requests as fast as they come in,
 whatever that rate might be.

While "as fast as possible" is a good goal when designing and optimising
a DB engine proper, you never need to design a real system to a spec of
"as fast as possible" but rather to some given expected performance.

For me a 24/7 OLTP system is more like a real-time system, where all queries
have to be processed in not more than a certain time vs. as fast as
possible. There, "as fast as possible" is a secondary goal, a lot less
important than meeting the deadlines.

For example, one real db usually processes requests in 50-200ms, but the
maximum the client is prepared to wait is set to 20 sec. Anything longer
than that and the bells start ringing.

 But that said, realistically *any* solution has to obtain a lock at some time
 to make the schema change. I would say pretty much any O(1) (constant time)
 outage is at least somewhat acceptable as contrasted with the normal index
 build which locks out other writers for at least O(n lg n) time. Anything on
 the order of 100ms is probably as good as it gets here.

For me any delay less than the client timeout is acceptable and anything
more than that is not. N sec is ok, N+1 is not. It's as simple as that.

And if the CREATE INDEX takes 2 weeks in order to let other OLTP
processing go on uninterrupted then it is completely OK. I can afford to
set the deadline for it accordingly. 

Come to think of it, maybe concurrent CREATE INDEX should also honour the
vacuum_cost_* GUCs and throttle its progress accordingly in order not to
starve others on IO/CPU.


Hannu



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match