[HACKERS] Error building 32 bit on 64 bit linux system

2008-02-18 Thread Doug Knight
All,
I am trying to build 8.2.5, forcing to a 32 bit build on a 64 bit
system. I have set CFLAGS=-m32, and I run the configure and make/make
install as follows:

setarch i386 ./configure
setarch i386 make
setarch i386 make install

However, I get the following error (using timezone for example):

$ make
gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-I../../src/include -D_GNU_SOURCE   -c -o localtime.o localtime.c
gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-I../../src/include -D_GNU_SOURCE   -c -o strftime.o strftime.c
gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing
-I../../src/include -D_GNU_SOURCE   -c -o pgtz.o pgtz.c
/usr/bin/ld -r -o SUBSYS.o localtime.o strftime.o pgtz.o
/usr/bin/ld: Relocatable linking with relocations from format elf32-i386
(localtime.o) to format elf64-x86-64 (SUBSYS.o) is not supported
make: *** [SUBSYS.o] Error 1

Funny thing is, there is no SUBSYS.o in my current directory. If I build
from the top, I see this same error in each directory/makefile where a
SUBSYS.o is linked. If I search my build tree after a top-down
build, I do not see any SUBSYS.o files at all. Where is this SUBSYS.o
getting created, and why isn't it being created as a 32 bit file instead
of 64 bit?

Doug Knight
WSI Corp
Andover, MA, USA


Re: [HACKERS] Error building 32 bit on 64 bit linux system

2008-02-18 Thread Doug Knight
Thanks Andrew, I missed the little -o in front of the SUBSYS.o. I did
find that if I did export LDEMULATION=elf_i386 I was able to link
successfully. Now I just need to tell configure to use the 32 bit perl
libs, instead of the 64 bit ones it keeps finding via:

$PERL -MConfig -e 'print $Config{archlibexp}'

Both 32 and 64 bit libraries are installed on my system, but the return
from the above command within configure points to the 64 bit libs, as
the perl executable is a 64 bit file. I think my better option is to
build my 32 bit versions on a 32 bit CentOS VM I have set up.

Doug

On Mon, 2008-02-18 at 09:48 -0500, Andrew Dunstan wrote:

 
 Doug Knight wrote:
  All,
  I am trying to build 8.2.5, forcing to a 32 bit build on a 64 bit 
  system. I have set CFLAGS=-m32, and I run the configure and make/make 
  install as follows:
 
  setarch i386 ./configure
  setarch i386 make
  setarch i386 make install
 
  However, I get the following error (using timezone for example):
 
  $ make
  gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
  -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
  -I../../src/include -D_GNU_SOURCE   -c -o localtime.o localtime.c
  gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
  -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
  -I../../src/include -D_GNU_SOURCE   -c -o strftime.o strftime.c
  gcc -m32 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
  -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing 
  -I../../src/include -D_GNU_SOURCE   -c -o pgtz.o pgtz.c
  /usr/bin/ld -r -o SUBSYS.o localtime.o strftime.o pgtz.o
  /usr/bin/ld: Relocatable linking with relocations from format 
  elf32-i386 (localtime.o) to format elf64-x86-64 (SUBSYS.o) is not 
  supported
  make: *** [SUBSYS.o] Error 1
 
  Funny thing is, there is no SUBSYS.o in my current directory. If I 
  build from the top, I see this same error in each directory/makefile 
  where a SUBSYS.o is linked. If I search my build tree after a 
  top-down build, I do not see any SUBSYS.o files at all. Where is this 
  SUBSYS.o getting created, and why isn't it being created as a 32 bit 
  file instead of 64 bit?
 
 
 man ld is your friend.
 
 It looks like you need the --oformat option to tell the linker you want 
 32bit output.
 
 Of course you won't find the SUBSYS.o files - it is the creation of 
 those that is failing.
 
 cheers
 
 andrew
 


Re: [HACKERS] Tuning Postgresql on Windows XP Pro 32 bit

2008-01-15 Thread Doug Knight
We tried reducing the memory footprint of the postgres processes, via
shared_buffers (from 3 on Linux to 3000 on Windows), max_fsm_pages
(from 2000250 on Linux to 10 on Windows), max_fsm_relations (from
2 on Linux to 5000 on Windows), and max_connections (from 222 on
Linux to 100 on Windows). Another variable we played with was
effective_cache_size (174000 on Linux, 43700 on Windows). None of these
reduced memory usage, or improved performance, significantly. We still
see the high page fault rate too. Other things we tried were reducing
the number of WAL buffers, and changing the wal_sync_method to
open_datasync, all with minimal effect. I've attached the latest version
of our Windows postgresql.conf file.

Doug



On Mon, 2008-01-07 at 19:49 +0500, Usama Dar wrote: 

 Doug Knight wrote:
  We are running the binary distribution, version 8.2.5-1, installed on 
  Windows XP Pro 32 bit with SP2. We typically run postgres on linux, 
  but have a need to run it under windows as well. Our typical admin 
  tuning for postgresql.conf doesn't seem to be as applicable for windows.
 
 
 So what have you tuned so far? what are your current postgresql settings 
 that you have modified? What are your system specs for Hardware, RAM , 
 CPU etc?
 
 
# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Please note that re-commenting a setting is NOT sufficient to revert it
# to the default value, unless you restart the postmaster.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# pg_ctl reload. Some settings, such as listen_addresses, require
# a postmaster shutdown and restart to take effect.


#---
# FILE LOCATIONS
#---

# The default values of these variables are driven from the -D command line
# switch or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'   # use data in another directory
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file

# If external_pid_file is not explicitly set, no extra pid file is written.
#external_pid_file = '(none)'   # write an extra pid file


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -

listen_addresses = '*'  # what IP address(es) to listen on; 
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
#port = 5432
max_connections = 100
# note: increasing max_connections costs ~400 bytes of shared memory per 
# connection slot, plus lock space (see max_locks_per_transaction).  You
# might also need to raise shared_buffers to support more connections.
superuser_reserved_connections = 3  # def 2, add one for autovacuum
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal
#bonjour_name = ''  # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60# 1-600, in seconds
#ssl = off
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''
#krb_srvname = 'postgres'
#krb_server_hostname = ''   # empty string matches any keytab entry
#krb_caseins_users = off

# - TCP Keepalives -
# see 'man 7 tcp' for details

#tcp_keepalives_idle = 0# TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0# TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0   # TCP_KEEPCNT;
# 0 selects the system default


#---
# RESOURCE USAGE (except WAL)
#---

# - Memory -

shared_buffers = 3000

Re: [HACKERS] Tuning Postgresql on Windows XP Pro 32 bit

2008-01-07 Thread Doug Knight
We are running the binary distribution, version 8.2.5-1, installed on
Windows XP Pro 32 bit with SP2. We typically run postgres on linux, but
have a need to run it under windows as well. Our typical admin tuning
for postgresql.conf doesn't seem to be as applicable for windows.

Doug

On Sun, 2008-01-06 at 18:23 +0500, Usama Dar wrote:

 
 
 
 On Jan 3, 2008 8:57 PM, Doug Knight [EMAIL PROTECTED] wrote:
 
 All,
 Is there a place where I can find information about tuning
 postgresql running on a Windows XP Pro 32 bit system? I
 installed using the binary installer. I am seeing a high page
 fault delta and total page faults for one of the postgresql
 processes. Any help would be great. 
 
 
 
 Which version of postgres? Is the process you are seeing this for a
 user process? 
 
 
 
 
 -- 
 Usama Munir Dar http://www.linkedin.com/in/usamadar
 Consultant Architect
 Cell:+92 321 5020666
 Skype: usamadar 


[HACKERS] Tuning Postgresql on Windows XP Pro 32 bit

2008-01-03 Thread Doug Knight
All,
Is there a place where I can find information about tuning postgresql
running on a Windows XP Pro 32 bit system? I installed using the binary
installer. I am seeing a high page fault delta and total page faults for
one of the postgresql processes. Any help would be great.

Doug Knight
WSI Corp.


[HACKERS] Windows psql -f load of files with tabs changing to escape sequences

2007-11-13 Thread Doug Knight
All,
I am having a problem loading functions from a text file into postgres
on the Windows platform. If the source code stored in the file contains
tabs, they get changed to \x09, which causes problems when the function
is executed. As an example see attached, which preserves the tabs in the
original input file, and shows a condensed version of the \df+ output.
Is there a way to force postgres to load the file exactly as is? The
command I used to load the function was similar to psql -U postgres -d
test -f func.sql (func.sql was the name of the original file containing
only the CREATE FUNCTION section).

Doug Knight
WSI Corp
Andover, MA USA

As stored in file loaded, with tabs:

CREATE OR REPLACE FUNCTION tcl_module_version(text) RETURNS text
AS $_$
set funcname $1
append funcname Version
elog DEBUG Calling TCL function $funcname
return [$funcname]
$_$
LANGUAGE pltcl;

As stored in database, tabs replaced with \x09:

        Name        |              Source code
--------------------+----------------------------------------------
 tcl_module_version | \r
                    : \x09set funcname $1\r
                    : \x09append funcname Version\r
                    : \x09elog DEBUG Calling TCL function $funcname\r
                    : \x09return [$funcname]\r
                    :
(1 row)
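Not something from the thread itself, but a quick way to inspect exactly which control characters the input file contains before loading it (assuming GNU coreutils' cat):

```shell
# Hypothetical stand-in for func.sql: one line containing a real tab
# and a DOS (CRLF) line ending, like the stored source shown above.
printf 'set funcname $1\tappend funcname Version\r\n' > /tmp/func_sample.sql
# cat -A prints tabs as ^I, carriage returns as ^M, and line ends as $,
# so you can see exactly what psql -f will read.
cat -A /tmp/func_sample.sql
```

If ^I shows up, the \x09 in the \df+ output is consistent with a literal tab character having made it into the stored source.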
---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


[HACKERS] plperl.dll error from create language plperl (XP Pro 64 bit)

2007-11-09 Thread Doug Knight
All,
I am building an XP Pro 64 bit system, where I have installed the latest
PostgreSQL binary distribution (8.2.5-1). Everything loaded fine, I have
postgres up and running as a service, installed ActiveTcl and
ActivePerl, enabled PostGIS, etc. I can do the CREATE LANGUAGE 'pgtcl'
successfully, but I get the following error when I try to CREATE
LANGUAGE 'plperl'

ERROR: could not load library C:/Program Files
(x86)/PostgreSQL/8.2/lib/plperl.dll: %1 is not a valid Win32
application.

Has anyone encountered this error with the binary install in question,
or has anyone attempted on XP Pro 64 bit? If this is a 32 vs 64 bit
issue, our plan is to eventually run on a 32 bit version, but 64 bit is
all I currently have access to.

Thanks,
Doug Knight
WSI Corp
Andover, MA, USA



Re: [HACKERS] plperl.dll error from create language plperl (XP Pro 64 bit)

2007-11-09 Thread Doug Knight
Hi Andrew,
The ActivePerl is 64 bit. Funny how it didn't affect the ActiveTcl and
pltcl load. Everything else seems to work fine, just loading the perl
lib.

Doug
On Fri, 2007-11-09 at 15:35 -0500, Andrew Dunstan wrote:

 
 Doug Knight wrote:
  All,
  I am building an XP Pro 64 bit system, where I have installed the 
  latest PostgreSQL binary distribution (8.2.5-1). Everything loaded 
  fine, I have postgres up and running as a service, installed ActiveTcl 
  and ActivePerl, enabled PostGIS, etc. I can do the CREATE LANGUAGE 
  'pgtcl' successfully, but I get the following error when I try to 
  CREATE LANGUAGE 'plperl'
 
  ERROR: could not load library C:/Program Files 
  (x86)/PostgreSQL/8.2/lib/plperl.dll: %1 is not a valid Win32 application.
 
  Has anyone encountered this error with the binary install in question, 
  or has anyone attempted on XP Pro 64 bit? If this is a 32 vs 64 bit 
  issue, our plan is to eventually run on a 32 bit version, but 64 bit 
  is all I currently have access to.
 
 
 
 It's almost certainly a 32 vs 64 bit issue. Do you have the 32bit or 
 64bit perl installed?
 
 We have not got support for 64-bit binaries on Windows yet.
 
 cheers
 
 andrew
 


[HACKERS] Capturing binary and other output destined for make install

2007-06-27 Thread Doug Knight
Is there a way within the existing installation mechanisms to capture
the files generated by an execution of make, compiling from source,
that are copied to their locations during the make install? For
example, I had considered executing a make -n install, capturing that
output, and turning it into a script. I'm not looking to make an RPM.
I've been tasked to automate our build/install process so that we build
once (including packages related to postgres like PostGIS, and some of
our own applications), and install from the result of the build onto a
number of internal servers. 
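A minimal sketch of the "capture make -n install" idea, using a toy Makefile (the target and paths are made up, not PostgreSQL's real install rules). Note that PostgreSQL's makefiles also honor DESTDIR for staged installs, which may be the simpler route:

```shell
# Stand-in Makefile with a fake install rule.
mkdir -p /tmp/capture_demo && cd /tmp/capture_demo
printf 'install:\n\tcp myprog /usr/local/bin/myprog\n' > Makefile
# make -n prints the commands it would run without executing them,
# so the output can be saved and replayed later as an install script.
make -n install > install.sh
cat install.sh
```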

Thanks,
Doug Knight



Re: [HACKERS] [PATCHES] pg_standby

2007-03-08 Thread Doug Knight
Hi Simon,
I would preserve the existing trigger function as 'little t' (-t), and
maybe implement a catchup trigger function as 'big t' (-T)? Set it up so
that if the first attempt to find the WAL file postgres is currently
requesting succeeds, skip the trigger check. If the first attempt
fails, then do the trigger check. That way, in the OCF script, the
postmaster can be started, the trigger file set, and a connection to the
database retried in a loop until it succeeds, as an indication that the
database is up and available. I think that's cleaner than comparing a
filename from a 'ps' command. Once I've completed the OCF script and
done some testing, I'll forward it to you to review and see if
you want to include it.

Thanks,
Doug

On Thu, 2007-03-08 at 15:37 +, Simon Riggs wrote:
 On Thu, 2007-03-08 at 10:33 -0500, Doug Knight wrote:
  Thanks, Simon. I kind of figured that's how pg_standby would work,
  since it's invoked by postgres once per WAL file. What I was thinking I
  might do in the OCF script is to grab the pg_standby process line from
  a ps, pull out the current WAL file path and filename, then do an
  existence check for the file. If the file exists, then
  pg_standby/postgres is probably processing it. If not, then we're
  probably waiting on it, implying that recovery is complete. Thoughts
  on this process?
 
 I suppose I might be able to have the option to catch up before it
 stops, on the basis that if it can find the file it was looking for
 without waiting then that can override the trigger.
 
 Which way would you like it to work?
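For context, here is a hypothetical recovery.conf wiring pg_standby up with a trigger file. The -t option follows the trigger-file proposal discussed in this thread, and all paths are made up; check your pg_standby's help output for the exact option letters:

```shell
# Write an illustrative warm-standby recovery.conf (paths are made up).
mkdir -p /tmp/standby_demo
cat > /tmp/standby_demo/recovery.conf <<'EOF'
restore_command = 'pg_standby -t /tmp/pgsql.trigger /mnt/archive %f %p'
EOF
cat /tmp/standby_demo/recovery.conf
```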
 


Re: [HACKERS] [PATCHES] pg_standby

2007-03-08 Thread Doug Knight
Excellent. Once you're ready, fire it over and I'll test it on our
config.

Doug
On Thu, 2007-03-08 at 18:34 +, Simon Riggs wrote:
 On Thu, 2007-03-08 at 13:29 -0500, Doug Knight wrote:
 
  I would preserve the existing trigger function as 'little t' (-t), and
  maybe implement a catchup trigger function as 'big t' (-T)? Set it up so
  that if the first attempt to find the WAL file postgres is currently
  requesting succeeds, skip the trigger check. If the first attempt
  fails, then do the trigger check. That way, in the OCF script, the
  postmaster can be started, the trigger file set, and a connection to the
  database retried in a loop until it succeeds, as an indication that the
  database is up and available. I think that's cleaner than comparing a
  filename from a 'ps' command. Once I've completed the OCF script and
  done some testing, I'll forward it to you to review and see if
  you want to include it.
 
 I'm happy to do this, unless other objections.
 
 I'll be doing another version before feature freeze.
 


Re: [HACKERS] Proposal for Implementing read-only queries during wal replay (SoC 2007)

2007-02-23 Thread Doug Knight
Hi,
Here's some feedback, this is a feature that would be very useful to a
project I am currently working on. 

Doug

On Fri, 2007-02-23 at 17:34 +0100, Florian G. Pflug wrote:
 Hi
 
 I plan to submit a proposal for implementing support for
 read-only queries during wal replay as a Google Summer of Code 2007
 project.
 
 I've been browsing the postgres source-code for the last few days,
 and came up with the following plan for an implementation.
 
 I'd be very interested in any feedback on the proposal - especially
 of the 'you overlooked this and that, it can never work that way' kind ;-)
 
 greetings, Florian Pflug
 
 Implementing read-only queries during wal archive replay
 ---
 
 Submitter: Florian Pflug [EMAIL PROTECTED]
 
 Abstract:
 Implementing full support for read-only queries during
 wal archive replay is split into multiple parts, where
 each part offers additional functionality over what
 postgres provides now. This makes tackling this as a
 Google Summer of Code 2007 project feasible, and guarantees
 that at least some progress is made, even if solving the
 whole problem turns out to be harder than previously
 thought.
 
 Parts/Milestones of the implementation:
 A) Allow postgres to be started in read-only mode. After
 initial wal recovery, postgres doesn't perform writes
 anymore. All transactions started are implicitly in
 readonly mode. All transactions will be assigned dummy
 transaction ids, which never make it into the clog.
 B) Split StartupXLOG into two steps. The first (Recovery) will process
 only enough wal to bring the system into a consistent state,
 while the second one (Replay) replays the archive until it finds no
 more wal segments. This replay happens in chunks, such that
 after a chunk all *_safe_restartpoint functions return true.
 C) Combine A) and B), in the simplest possible way.
 Introduce a global R/W lock, which is taken by the Replay part
 of B) in write mode before replaying a chunk, then released,
 and immediately reacquired before replaying the next chunk.
 The startup sequence is modified to do only the Recovery part
 where it is doing StartupXLOG now, and to launch an extra process
 (similar to bgwriter) to do the second (Replay) part in the background.
 The system is then started up in read-only mode, with the addition
 that the global R/W lock is taken in read mode before starting any
 transaction. Thus, while a transaction is running, no archive replay
 happens.
 
 Benefits:
 *) Part A) alone might be of value for some people in the embedded world,
 or people who want to distribute software that uses postgres. You could
 e.g. distribute a CD with a large, read-only database, and your
 application would just need to start postmaster to be able to query it
 directly from the CD.
 *) Read-only hot standby is a rather simple way to do load-balancing, if
 your application doesn't depend on the data being absolutely up-to-date.
 *) Even if this isn't used for load-balancing, it gives the DBA an
 easy way to check how far a PITR slave is lagging behind, therefore
 making PITR replication more user-friendly.
 
 Open Questions/Problems
 *) How do read-only transactions obtain a snapshot? Is it sufficient
 to just create an empty snapshot for them, meaning that they'll
 always look at the clog to obtain a transaction's state?
 *) How many places attempt to issue writes? How hard is it to
 silence them all while in read-only mode?
 *) What does the user interface look like? I'm currently leaning towards
 a postgresql.conf setting read_only=yes. This would put postgres
 into read-only mode, and if a recovery.conf is present, archive
 replay would run as a background process.
 
 Limitations:
 *) The replaying process might be starved, letting the slave fall
 further and further behind the master. Only true if the slave
 executes a lot of queries, though.
 *) Postgres would continue to run in read-only mode, even after finishing
 archive recovery. A restart would be needed to switch it into read-write
 mode again. (It probably wouldn't be too hard to do that switch without
 a restart, but it seems better to tackle this after the basic features
 are working)
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 


Re: [HACKERS] Database backup mechanism

2007-02-09 Thread Doug Knight
I would also be interested in any creative ways to reduce the size and
time to backup databases/clusters. We were just having a conversation
about this yesterday. We were mulling over things like using rsync to
only back up files in the database directory tree that actually changed.
Or maybe doing a selective backup of files based on modified times, etc.,
but we were unsure whether this would be a safe, reliable way to back up a
reduced set of data.
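The modified-times idea can be illustrated with plain find(1). To be clear, this alone is NOT a safe way to back up a live cluster (a consistent base backup needs pg_start_backup()/pg_stop_backup() or a clean shutdown); the sketch below only shows the file-selection mechanics, with made-up paths:

```shell
# Toy data directory and a timestamp marker from the previous backup run.
mkdir -p /tmp/fake_data /tmp/fake_backup
echo one > /tmp/fake_data/a
echo two > /tmp/fake_data/b
touch /tmp/fake_stamp            # records when the last backup ran
sleep 1                          # ensure a visibly newer mtime
echo changed > /tmp/fake_data/b  # only b changes afterwards
# Copy only files modified since the stamp.
find /tmp/fake_data -type f -newer /tmp/fake_stamp -exec cp {} /tmp/fake_backup/ \;
ls /tmp/fake_backup              # only b was copied
```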

Doug Knight
WSI Inc.
Andover, MA
 
On Fri, 2007-02-09 at 12:45 +0530, [EMAIL PROTECTED] wrote:
 
 Hi Folks, 
 
 We have a requirement to deal with large databases of the size
 Terabytes when we go into production. What is the best database
 back-up mechanism and possible issues? 
 
 pg_dump can back up a database but the dump file is limited by the OS
 file-size limit. What about the option of compressing the dump file?
 How much time does it generally take for large databases? I heard
 that it could be way too long (even one or two days). I haven't tried it
 out, though. 
 
 What about taking a zipped back-up of the database directory? We tried
 this out but the checkpoint data in the pg_xlog directory is also being
 backed up. Since these logs keep on increasing from day 1 of database
 creation, the back-up size is increasing drastically. 
 Can we back up certain subdirectories without loss of information or
 consistency..?
 
 Any quick comments/suggestions in this regard would be very helpful. 
 
 Thanks in advance, 
 Ravi Kumar Mandala


Re: [HACKERS] pg_standby and build farm

2006-12-28 Thread Doug Knight
Thanks for the response, Simon. I'm continuing to use your script, so if
there's anything I can help you with (testing, ideas, etc), just let me
know.

Doug

On Thu, 2006-12-28 at 13:40 +, Simon Riggs wrote:

 On Wed, 2006-12-27 at 20:09 +, Simon Riggs wrote:
 
  The reason for the -m option was performance. Recovery is I/O-bound,
  with 50% of the CPU it does use coming from IsRecordValid() - which is
  where the CRC checking takes place. (I can add an option to recover
  without CRC checks, if anyone wants it, once I've rejigged the option
  parsing for recovery.conf.)
 
 Make that 70% of the CPU, for long running recoveries, but the CPU only
 gets as high as 20% on my tests, so still I/O bound.
 
  Should be able to use links, i.e. ln -f -s /archivepath/%f %p
  instead. I'll test that tomorrow then issue a new version.
 
 The ln works, and helps, but not that much. I'll remove the -m option
 and replace it with an -l option. Must be careful to use the -f option.
 
 The majority of the I/O comes from writing dirty buffers, so enabling
 the bgwriter during recovery would probably be more helpful.
 

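To make the ln discussion above concrete: a symlink (ln -s) just points at the archive copy, so nothing is duplicated. A small sketch with made-up paths:

```shell
# Fake archived WAL segment and a fake pg_xlog directory.
mkdir -p /tmp/fake_archive /tmp/fake_pg_xlog
echo WALDATA > /tmp/fake_archive/000000010000000000000001
# Symlink the segment into place instead of copying it; -f mirrors the
# "must be careful to use the -f option" note above (replace if present).
ln -s -f /tmp/fake_archive/000000010000000000000001 \
         /tmp/fake_pg_xlog/000000010000000000000001
cat /tmp/fake_pg_xlog/000000010000000000000001
```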

Re: [HACKERS] pg_standby and build farm

2006-12-26 Thread Doug Knight
Hi all,
I'm new to the forums, so bear with me on my questions. I've set up an
auto-archive and auto-recover pair of databases using pg_standby, on
which I'm prototyping various products for high availability. I've noticed
that when I attempt to fail over from the primary archiver to the secondary
recovery db using the pg_standby trigger file, the secondary detects the
trigger file, flags that it couldn't read the current WAL file
pg_standby was waiting on, then attempts to read in the previous WAL
file. I use the -m option in pg_standby, so the previous WAL file no
longer exists, which causes the secondary postgres to panic on not
being able to open the previous WAL and terminate. Is there a way to
prevent it from looking for the previous WAL file, or to preserve that
file, so that when the trigger file is detected the secondary will come
all the way up, completing its recovery? 

Thanks,
Doug Knight