Re: [HACKERS] Re: [BUGS] BUG #4796: Recovery followed by backup creates unrecoverable WAL-file

2009-05-16 Thread Simon Riggs

On Fri, 2009-05-15 at 18:03 -0400, Tom Lane wrote:

 I didn't read this thread earlier, but now that I have, it seems to be
 making a mountain out of a molehill.  

We've discussed a complex issue while pursuing other nascent bugs. It's
confused all of us at some point, but it seems we're through that now.

Why do you think the issue on this thread has become a mountain? I don't
see anything other than a docs improvement coming out of it. (The last
thread on pg_standby *was* a mountain IMHO, but that has nothing to do
with this, other than the usual suspects being involved).

 It is entirely false that
 you've got to keep the history files on the live server.

There was a similar suggestion that was already clearly dropped, after
discussion.

I (still) think that keeping the history files that have been used to
build the current timeline would be an important documentary record for
DBAs, especially since we encourage people to add their own notes to
them. The safest place for them would be in the data directory. Keeping
them there would be a minor new feature, not any kind of bug fix.

 I've got no objection to clarifying the documentation's rather offhand
 statement about this, 

Cool

 but let's clarify it correctly.

Of course.

-- 
 Simon Riggs   www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] bytea vs. pg_dump

2009-05-16 Thread Stefan Kaltenbrunner

Bernd Helmle wrote:
--On Wednesday, May 06, 2009 19:04:21 -0400 Tom Lane t...@sss.pgh.pa.us wrote:



So I'm now persuaded that a better textual representation for bytea
should indeed make things noticeably better here.  It would be
useful though to cross-check this thought by profiling a case that
dumps a comparable volume of text data that contains no backslashes...


This is a profiling result for the same data converted into a printable 
text format without any backslashes. The amount of data is about the same 
and, as you already guessed, the calls to appendBinaryStringInfo() and 
friends give the expected numbers:



 %   cumulative   self              self     total
time   seconds   seconds    calls   s/call   s/call  name
35.13     24.67    24.67   134488     0.00     0.00  byteaout
32.61     47.57    22.90   134488     0.00     0.00  CopyOneRowTo
28.92     67.88    20.31    85967     0.00     0.00  pglz_decompress
 0.67     68.35     0.47  4955300     0.00     0.00  hash_search_with_hash_value

 0.28 68.55 0.20 11643046 0.00 0.00  LWLockRelease
 0.28 68.75 0.20  4828896 0.00 0.00  index_getnext
 0.24 68.92 0.17  1208577 0.00 0.00  StrategyGetBuffer
 0.23 69.08 0.16 11643046 0.00 0.00  LWLockAcquire
...
 0.00 70.23 0.00   134498 0.00 0.00  enlargeStringInfo
 0.00 70.23 0.00   134497 0.00 0.00  appendBinaryStringInfo
 0.00 70.23 0.00   134490 0.00 0.00  AllocSetReset
 0.00 70.23 0.00   134490 0.00 0.00  resetStringInfo
 0.00 70.23 0.00   134488 0.00 0.00  CopySendChar
 0.00 70.23 0.00   134488 0.00 0.00  CopySendEndOfRow



While doing some pg_migrator testing I noticed that dumping a database 
seems to be much slower than the IO system is capable of; i.e. I get 100% 
CPU usage with no IO-wait at all, at a 15-30MB/s read rate, if I do, say, 
a pg_dumpall > /dev/null.


The profile for that looks like:


samples  %image name   symbol name
1333764  29.3986  postgres CopyOneRowTo
463205   10.2099  postgres enlargeStringInfo
237117    5.2265  postgres AllocSetAlloc
231017    5.0920  postgres appendBinaryStringInfo
224792    4.9548  postgres heap_deform_tuple
172154    3.7946  postgres AllocSetReset
162434    3.5803  postgres DoCopyTo
149948    3.3051  postgres internal_putbytes
137548    3.0318  postgres OutputFunctionCall
129480    2.8540  postgres heapgettup_pagemode
101017    2.2266  postgres FunctionCall1
93584 2.0628  postgres pq_putmessage
86553 1.9078  postgres timesub
81400 1.7942  postgres CopySendChar
81230 1.7905  postgres int4out
78374 1.7275  postgres localsub
52003 1.1462  postgres MemoryContextAlloc
51265 1.1300  postgres CopySendEndOfRow
49849 1.0988  postgres SPI_push_conditional
48157 1.0615  postgres pg_server_to_client
47670 1.0507  postgres timestamptz_out
42762 0.9426  postgres timestamp2tm


Stefan
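The byteaout hotspot in the first profile is a consequence of the escape-style text output format: every byte must be inspected, and non-printable bytes expand into multi-character octal escapes. A minimal sketch of that per-byte conversion (a simplified stand-in for illustration, not the actual byteaout code):

```python
def bytea_escape(data: bytes) -> str:
    """Simplified stand-in for the "escape" bytea text format:
    printable bytes pass through, a backslash is doubled, and every
    other byte becomes a three-digit octal escape.  The per-byte
    branching and up-to-4x output expansion is what keeps byteaout
    (and the StringInfo appends behind it) at the top of the profile."""
    out = []
    for b in data:
        if b == 0x5C:                 # backslash -> doubled
            out.append("\\\\")
        elif b < 0x20 or b > 0x7E:    # non-printable -> \ooo
            out.append("\\%03o" % b)
        else:
            out.append(chr(b))
    return "".join(out)

print(bytea_escape(b"a\x00\\\xffz"))  # a\000\\\377z
```

With a representation that escapes only rarely, this loop collapses to a near-straight copy, which matches the much flatter numbers in the backslash-free profile above.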



Re: [HACKERS] bytea vs. pg_dump

2009-05-16 Thread Merlin Moncure
On Sat, May 16, 2009 at 11:23 AM, Stefan Kaltenbrunner
ste...@kaltenbrunner.cc wrote:
 Bernd Helmle wrote:

 --On Wednesday, May 06, 2009 19:04:21 -0400 Tom Lane t...@sss.pgh.pa.us
 wrote:

 So I'm now persuaded that a better textual representation for bytea
 should indeed make things noticeably better here.  It would be
 useful though to cross-check this thought by profiling a case that
 dumps a comparable volume of text data that contains no backslashes...

 This is a profiling result for the same data converted into a printable
 text format without any backslashes. The amount of data is about the same
 and, as you already guessed, the calls to appendBinaryStringInfo() and
 friends give the expected numbers:


 time   seconds   seconds    calls   s/call   s/call  name
 35.13     24.67    24.67   134488     0.00     0.00  byteaout
 32.61     47.57    22.90   134488     0.00     0.00  CopyOneRowTo
 28.92     67.88    20.31    85967     0.00     0.00  pglz_decompress
  0.67     68.35     0.47  4955300     0.00     0.00
 hash_search_with_hash_value
  0.28     68.55     0.20 11643046     0.00     0.00  LWLockRelease
  0.28     68.75     0.20  4828896     0.00     0.00  index_getnext
  0.24     68.92     0.17  1208577     0.00     0.00  StrategyGetBuffer
  0.23     69.08     0.16 11643046     0.00     0.00  LWLockAcquire
 ...
  0.00     70.23     0.00   134498     0.00     0.00  enlargeStringInfo
  0.00     70.23     0.00   134497     0.00     0.00
  appendBinaryStringInfo
  0.00     70.23     0.00   134490     0.00     0.00  AllocSetReset
  0.00     70.23     0.00   134490     0.00     0.00  resetStringInfo
  0.00     70.23     0.00   134488     0.00     0.00  CopySendChar
  0.00     70.23     0.00   134488     0.00     0.00  CopySendEndOfRow


 While doing some pg_migrator testing I noticed that dumping a database seems
 to be much slower than the IO system is capable of; i.e. I get 100% CPU usage
 with no IO-wait at all, at a 15-30MB/s read rate, if I do, say, a
 pg_dumpall > /dev/null.

Part of the problem is the decompression.  Can't do much about that
except to not compress your data.

I don't have any hard statistics on hand at the moment, but a while
back we compared COPY against a hand-written SPI routine that got the
tuple data in binary and streamed it out field by field, raw, to a file.
The speed difference was enormous. I don't recall the exact
difference, but COPY was at least 2x slower. This seems to suggest
there are many potential improvements to COPY (my test was mainly
bytea as well).
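A rough sketch of that point: the text path has to run a per-type output function for every value (the int4out and timestamptz_out entries in the profile above), while a binary path can emit fixed-size fields directly. The layout below (each field a big-endian int32) is invented purely for illustration:

```python
import struct

def row_as_text(row):
    # Text path: every value goes through a per-type output function
    # (str() stands in for int4out and friends), then gets delimited.
    return "\t".join(str(v) for v in row) + "\n"

def row_as_binary(row):
    # Hypothetical binary layout: each field a big-endian int32,
    # emitted in one fixed-size pack with no per-value formatting.
    return struct.pack("!%di" % len(row), *row)

row = (1, 42, 7)
print(repr(row_as_text(row)))     # '1\t42\t7\n'
print(len(row_as_binary(row)))    # 12
```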

merlin



[HACKERS] where EXEC_BACKEND?

2009-05-16 Thread abdelhak benmohamed
hi,

Actually I am trying to trace postgres execution step by step (on paper),
but I can't find where EXEC_BACKEND is initialized.
Can anyone help me? It is very important to me.

thanks




Re: [HACKERS] where EXEC_BACKEND?

2009-05-16 Thread Andrew Dunstan



abdelhak benmohamed wrote:

hi,

actually I am trying to trace postgres execution step by step (on paper),
but I can't find where EXEC_BACKEND is initialized.
Can anyone help me? It is very important to me.

thanks




Normally it is added to CPPFLAGS by configure, if needed (i.e. for 
the Windows gcc build), or by the project files (for the MSVC build). 
It is not defined in any include file.

On Unix it is only ever used to test the way the Windows port works, and 
then you have to define it manually, e.g. by passing it in to configure 
via a preset CPPFLAGS. Standard Unix builds don't work this way.



cheers

andrew



Re: [HACKERS] where EXEC_BACKEND?

2009-05-16 Thread Alvaro Herrera
abdelhak benmohamed wrote:
 hi,
 
 actually I am trying to trace postgres execution step by step (on paper),
 but I can't find where EXEC_BACKEND is initialized.
 Can anyone help me? It is very important to me.

Nowhere.  If you want it, you have to define it manually in
pg_config_manual.h.

EXEC_BACKEND is a source code hack that allows the Unix build (which
normally uses only fork() without exec()) to follow the same startup
code as the Windows version (which uses CreateProcess(), equivalent to
both fork() and exec()), allowing for better debuggability for those of
us that do not use Windows.

If you want to follow postmaster initialization on a POSIX platform,
it's easier if you just assume that EXEC_BACKEND is not defined.
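A minimal sketch of the two startup paths Alvaro describes, in Python for brevity; the `--child` marker argument is invented for this sketch and is not the real postmaster protocol:

```python
import os
import sys

def run_backend() -> None:
    # Stand-in for the real backend startup work.
    os.write(1, b"backend started\n")

def spawn_backend(exec_backend: bool) -> int:
    """Fork a child and wait for it.  With exec_backend the child
    re-execs the program so it starts again from the top, the way a
    Windows CreateProcess() child would; without it, the child just
    keeps running the forked image (plain Unix fork())."""
    pid = os.fork()
    if pid == 0:
        if exec_backend:
            # EXEC_BACKEND-style: start over, passing a marker argument.
            os.execv(sys.executable, [sys.executable, sys.argv[0], "--child"])
        run_backend()           # plain fork: everything is inherited
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

if __name__ == "__main__":
    if "--child" in sys.argv:
        run_backend()           # fresh start: nothing inherited from parent
        sys.exit(0)
    print(spawn_backend(exec_backend=False))
```

The exec'd child inherits nothing implicitly, so all shared state must be re-established from scratch; that is exactly what makes the path useful for exercising the Windows port's startup logic on Unix.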

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[HACKERS] generate_series from now to infinity...

2009-05-16 Thread Dickson S. Guedes
Hi all

Is it expected behavior that a simple SELECT generate_series(now(),
CAST('infinity'::date AS timestamp), interval '1 hour'); runs forever?

regards...
-- 
Dickson S. Guedes
-
mail/xmpp: gue...@guedesoft.net - skype: guediz
http://guedesoft.net - http://www.postgresql.org.br



Re: [HACKERS] generate_series from now to infinity...

2009-05-16 Thread Tom Lane
Dickson S. Guedes lis...@guedesoft.net writes:
 Is it expected behavior that a simple SELECT generate_series(now(),
 CAST('infinity'::date AS timestamp), interval '1 hour'); runs forever?

Uh, what were you expecting it to do?

Actually, I believe it will fail eventually when the repeated additions
overflow ... in 294277 AD.  So you've got about 2 billion timestamp
additions to wait through.
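The back-of-envelope arithmetic behind that figure, assuming 64-bit timestamps run out shortly after 294276 AD:

```python
# Rough count of one-hour additions from 2009 until timestamp overflow.
years = 294277 - 2009
hours_per_year = 365.25 * 24          # ~8766, ignoring calendar detail
steps = years * hours_per_year
print(f"{steps:.2e}")                 # 2.56e+09
```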

regards, tom lane



Re: [HACKERS] Optimizing Read-Only Scalability

2009-05-16 Thread Jignesh K. Shah



Simon Riggs wrote:

On Thu, 2009-05-14 at 16:21 -0700, Josh Berkus wrote:


So we can optimize away the scan through the procarray by doing two if
tests, one outside of the lock, one inside. In normal running, both will
be optimized away, though in read-only periods we would avoid much work.

How much work would it be to work up a test patch?



Not much. The most important thing is a place to test it and access to
detailed feedback. Let's see if Dimitri does this.

There are some other tuning aspects to be got right first also, but
those are already known.

I would be interested in testing it out. I have been collecting some 
sysbench read-scalability numbers, plus some other numbers that I can cook 
up with dbt3 and igen, so I have a frame of reference for comparison. 
I am sure we can always use some extra performance.
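The two if-tests Simon describes (one outside the lock, one inside) are a double-checked locking pattern. A minimal sketch with invented names, where a counter stands in for the expensive procarray scan:

```python
import threading

class SnapshotSource:
    """Sketch of the two-test scheme: check a cheap shared flag outside
    the lock, then re-check it under the lock before doing the expensive
    scan.  In a read-only period the first test short-circuits and both
    the lock and the scan are skipped entirely."""

    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.writers_active = False   # cheap flag, maintained elsewhere
        self.scans = 0                # counts expensive scans performed

    def get_snapshot(self) -> str:
        if not self.writers_active:      # test 1: outside the lock
            return "cached-read-only-snapshot"
        with self.lock:
            if not self.writers_active:  # test 2: re-check under the lock
                return "cached-read-only-snapshot"
            self.scans += 1              # expensive scan stands in here
            return "scanned-snapshot"

src = SnapshotSource()
print(src.get_snapshot())   # cached-read-only-snapshot; no scan done
src.writers_active = True
print(src.get_snapshot())   # scanned-snapshot
```

The second test matters because the flag may flip between the unlocked check and lock acquisition; re-checking under the lock keeps the read-only fast path cheap without losing correctness.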


Regards,
Jignesh

--
Jignesh Shah   http://blogs.sun.com/jkshah
The New Sun Microsystems, Inc.   http://sun.com/postgresql




Re: [HACKERS] generate_series from now to infinity...

2009-05-16 Thread Brendan Jurd
On Sun, May 17, 2009 at 1:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Dickson S. Guedes lis...@guedesoft.net writes:
 Is it expected behavior that a simple SELECT generate_series(now(),
 CAST('infinity'::date AS timestamp), interval '1 hour'); runs forever?

 Uh, what were you expecting it to do?

It appears that any generate_series involving infinity is guaranteed to fail.

That being the case, wouldn't it be more useful to throw an error than
to just keep on running until overflow?
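A sketch of the guard being suggested, over plain numbers rather than timestamps; illustrative only, not the server code:

```python
import math

def generate_series(start: float, stop: float, step: float):
    """Sketch of the suggested guard: reject an infinite bound up
    front instead of looping until the additions overflow."""
    if math.isinf(start) or math.isinf(stop) or math.isinf(step):
        raise ValueError("generate_series bound cannot be infinity")
    cur = start
    while cur <= stop:
        yield cur
        cur += step

print(list(generate_series(0, 3, 1)))   # [0, 1, 2, 3]
```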

Cheers,
BJ
