Re: [GENERAL] DB problem

2006-12-27 Thread Albe Laurenz
 We are using PostgreSQL version 7.3, installed on a Linux 
 system. The DB is getting shut down 3-4 times a day. From my 
 application end I get a "Sorry, too many clients" error. 
 Referring to the FAQ, I raised the DB session limit to 200. 
  
 I think the logs are not enabled in my DB, so I set the 
 following options in the postgresql.conf file:
  
 log_timestamp= true
 log_directory=pg_log  
 log_filename=%Y-%m-%d_%H%M%S.log  
 log_disconnections=true 
 log_hostname=true
 log_statement=true
  
  
 After restarting, the system reported the following errors: 
  FATAL: postgresql.conf:131: syntax error, token=%
  FATAL: postgresql.conf:131: syntax error, token=%
  FATAL: postgresql.conf:131: syntax error, token=%
  FATAL: 'log_directory' is not a valid option name
  FATAL: 'log_filename' is not a valid option name
  FATAL: 'log_disconnections' is not a valid option name
 
 Are the logs enabled by default when the DB is installed?
  
 Would you please tell me what settings still need to be made?

You can see in the documentation:
http://www.postgresql.org/docs/7.3/static/runtime-config.html#LOGGING

The parameters you used are for a different version of PostgreSQL.
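For reference, logging options that did exist in the 7.3 series looked roughly like the fragment below. This is a sketch from memory; verify each name against the 7.3 documentation linked above, since log_directory and log_filename only appeared in 8.0:

```
# Hypothetical postgresql.conf fragment for 7.3-era logging;
# check every option name against the 7.3 docs before using it.
syslog = 0                  # 0 = log to stderr only
log_connections = true
log_timestamp = true        # prefix each log line with a timestamp
log_pid = true
log_statement = true        # log every SQL statement
log_duration = true
```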

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] array quotation problem

2006-12-27 Thread Sim Zacks

I solved my specific problem by converting the array to a Python list as follows:
objlist = arr_text[1:-1].replace('"', '').split(',')
This removes the quotes from any element that had them, and then makes a list out of the string 
separated by commas.
I'm still curious if it is possible to put the quotes into an array, so if you have an idea, give a 
shout.
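For what it's worth, here is a small Python sketch (function name hypothetical) that parses the array's text form more robustly than the replace/split one-liner above; the csv module happens to understand the double-quote and backslash convention the docs describe:

```python
# Hypothetical helper: parse PostgreSQL's text-array output
# ('{a,"b c",d}') into a Python list. A plain replace/split breaks
# if an element itself contains a comma; csv handles the quoting.
import csv
import io

def pg_text_array_to_list(arr_text):
    """Convert a text[] literal such as '{one,"two words"}'
    into a Python list of strings."""
    inner = arr_text.strip()[1:-1]      # drop the surrounding braces
    if not inner:
        return []
    # PostgreSQL quotes elements with '"' and escapes embedded quotes
    # with a backslash, matching csv's quotechar/escapechar handling.
    reader = csv.reader(io.StringIO(inner), delimiter=',',
                        quotechar='"', escapechar='\\')
    return next(reader)

print(pg_text_array_to_list('{plain,"two words","a, comma"}'))
# -> ['plain', 'two words', 'a, comma']
```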


Sim Zacks wrote:

select version();
PostgreSQL 8.0.1 on i686-pc-linux-gnu, compiled by GCC 
i686-pc-linux-gnu-gcc (GCC) 3.3.5  (Gentoo Linux 3.3.5-r1, ssp-3.3.2-3, 
pie-8.7.7.1)


In short:
Does anyone have any idea how to put non-escaped quotes into a text 
array element, in the same way that the system puts non-escaped quotes 
around the text of an element that has whitespace within it?


Details:
I am using a text array and when there are spaces in the text, the 
parser automatically puts double quotes around the element and when 
there are no special characters it doesn't, as is written in the 
documentation.


I would like to put double quotes around all the elements, even the ones 
without special characters and I can't figure out how. The double quotes 
that the system automatically puts in are not escaped with a \, but when 
I add quotes they are escaped (again as written in the documentation). 
I have tried using chr(32) and quote_ident() and they work the same way.


My specific problem is that I am passing the array into a python 
function and converting it to a python list type. The array gets passed 
in as a comma delimited string. If the values are already quoted 
(without escapes) then it can be converted automatically. Otherwise I 
have to write a parsing routine to check which of the elements have 
quotes (the ones that had a space) and which don't and then to put in 
the quotes manually before I can convert it to a list type.


According to the documentation:
"The array output routine will put double quotes around element values 
if they are empty strings, contain curly braces, delimiter characters, 
double quotes, backslashes, or white space, or match the word NULL. 
Double quotes and backslashes embedded in element values will be 
backslash-escaped. For numeric data types it is safe to assume that 
double quotes will never appear, but for textual data types one should 
be prepared to cope with either presence or absence of quotes."


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] table locks

2006-12-27 Thread Ilja Golshtein
[EMAIL PROTECTED] wrote:
The question is *why* you feel you need that, ie what are you using
these for?  As was already mentioned upthread, it's usually better
to avoid explicit locking altogether, if you can.

Scenario 1. One has objects belonging to a metaclass. Certain operations change the 
metaclass, while some other operations are sensitive to these changes and cannot 
be executed while the changes are in progress.
The metaclass corresponds to a table, and it is very convenient to be able to lock 
such a table exclusively or shared.

Scenario 2. One has to calculate the number of items (or the amount of money across 
several accounts) and take some actions based on these calculations. The calculations 
and analysis are done on the user side. Data analysis and data modification should be 
done in the same transaction, for obvious reasons. The problem is that this alone is 
not enough, and the serializable isolation level is [maybe] required here. Another 
problem is that one cannot use PostgreSQL's serializable transaction isolation 
because of the number of conflicts.

My own case is similar to Scenario 2, though it is rather special. In my 
company PostgreSQL is used as a backend (among other DBMSs) by an application. 
This application expects the DBMS to provide a real and proper serializable 
isolation level. Sometimes this problem can be solved (or bypassed), though my 
ability to change the application logic is limited, and sometimes the only 
option is table locks.

I have very long experience with Oracle. In all those years I've never used 
table locks and never thought about such a thing at all, so I understand my 
questions look peculiar. Actually they are not as strange as they probably seem.

 The only drawback - interference with VACUUM and other system processes
 with obvious performance/response time penalty.

I can hardly imagine an ordinary lock type that doesn't conflict with
anything at all ... ALTER/DROP TABLE being the obvious counterexamples.

That's true, and DELETE/UPDATE/INSERT statements are other counterexamples. The 
idea is that I don't care about anything that bypasses the application logic. It is 
not perfect, but it is acceptable. It's the nature of advisory locks, after all.

If you don't want your shared lock to conflict with VACUUM then you
could use ACCESS SHARE instead of SHARE, and it would still block
EXCLUSIVE.

So you suggest I use ACCESS SHARE instead of SHARE. That means I should use 
ACCESS EXCLUSIVE instead of EXCLUSIVE, right? I'm not sure it is a better pair than 
SHARE/EXCLUSIVE, because ACCESS EXCLUSIVE blocks even SELECTs, but I will give it 
a try.

It's quite unlikely to get accepted, considering that advisory locks
already seem to cover the territory.  (8.2 has blocking versions of
those calls BTW.)

Thank you for the information about advisory locks. It is cool that we have blocking 
versions now. The next step is advisory locks with transactional behavior, right? 
That would be very, very nice. 

I'm not sure my English is good enough to put the adjectives in the proper order ;) , 
but what I need is an advisory [table] blocking transactional lock. Table-level is OK 
for me, though a general form is probably better.

-- 
Best regards
Ilja Golshtein

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Is PostgreSQL for this?

2006-12-27 Thread [EMAIL PROTECTED]
Hi everybody,
I'm looking for a database system for a SCADA system. The major 
problem, I think, is performance, because the application is going 
to poll about 4k variables per second from hardware and has to register 
the values in the process table.
I heard that PostgreSQL provides a bulk-loading mechanism called COPY, 
which takes tab-delimited or CSV input from a file. Where COPY can be 
used instead of hundreds or thousands of INSERTs, it can cut execution 
time.
I'm less than a novice, so I'll be thankful for any piece of advice.
Sorry for my poor English.
F.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Is PostgreSQL for this?

2006-12-27 Thread [EMAIL PROTECTED]
Hi everybody,
I'm looking for a database system for a SCADA system. The major
problem, I think, is performance, because the application is going
to poll about 4k variables per second from hardware and has to register
the values in the process table.
I heard that PostgreSQL provides a bulk-loading mechanism called COPY,
which takes tab-delimited or CSV input from a file. Where COPY can be
used instead of hundreds or thousands of INSERTs, it can cut execution
time.
I'm less than a novice, so I'll be thankful for any piece of advice.
Sorry for my poor English.
F.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem with index in OR'd expression

2006-12-27 Thread postgresql . org

Tom Lane wrote:

[EMAIL PROTECTED] writes:
I would submit that in that situation, it would be 
reasonable for a user to expect my suggested syntax to still use the 
indicated indexes.


The only thing that will make that work is if "indexed_col IS NULL" were
an indexable condition, which it isn't, because the PG index API only
supports "indexed_col operator something" as an indexable condition
(IS NULL is not an operator, and even if it were, there's no "something"
on its righthand side).  Fixing this has been on the radar screen for
awhile, but it's not done, largely for lack of agreement about a
reasonably clean way to change that API.


Sorry to keep this issue alive even longer, Tom, but I think I may've 
been unclear with my example.


I was referring to the situation where one has this in a WHERE clause:

  ((vConstant IS NULL) OR (Table.IndexedCol = vConstant))

where vConstant is a *constant* parameter in a pl/pgsql function.

In the latest versions (8.1 *or* 8.2), would you expect this to 
successfully use the index on Table.IndexedCol and not have PG be 
confused (into a sequential scan) by the (vConstant IS NULL) expression?


As I indicated, I'm currently running 8.0.x, and am wondering whether it 
would be worth the effort to upgrade to 8.1 or 8.2 (Gentoo doesn't yet 
have PG at 8.2, and I'm a bit lazy with installing things outside of 
Portage) to solve this issue or whether I should just enable a 
workaround for now and keep an eye on future releases for a better 
solution to this problem.


Thanks again,

John

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Is PostgreSQL for this?

2006-12-27 Thread A. Kretschmer
On Wed, 27.12.2006, at 13:41:24 -0300, [EMAIL PROTECTED] wrote the following:
 Hi everybody,
 I'm looking for a database system for a SCADA system. The major 

I don't know; what is a 'SCADA' system?


 I heard that PostgreSQL provides a bulk loading mechanism called COPY, 
 which takes tab-delimited or CSV input from a file. Where COPY can be 
 used instead of hundreds or thousands of INSERTS, it can cut execution 
 time.

Right.

http://www.postgresql.org/docs/current/static/sql-copy.html


 I'm less than a novice so I'll thank any piece of advice.
 Sorry for my poor english.

No problem, I'm not a native English speaker, but I can read your mail
without problems.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47215,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Problem with index in OR'd expression

2006-12-27 Thread Ragnar
On Wed, 2006-12-27 at 11:02 -0600, [EMAIL PROTECTED] wrote:
 
 I was referring to the situation where one has this in a WHERE clause:
 
((vConstant IS NULL) OR (Table.IndexedCol = vConstant))
 
 where vConstant is a *constant* parameter in a pl/pgsql function.

Regardless of the issue of whether pl/pgsql could be expected to
optimize this case, I find it difficult to imagine a scenario
where this kind of coding makes sense.

I understand that in some cases one would like to do this with
a *variable* to simplify logic, but what possible gain can
be had from doing this with a constant, other than obfuscation?

It does not make sense to spend CPU time optimizing a case
that is not useful or common.

gnari



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problem with index in OR'd expression

2006-12-27 Thread Tom Lane
[EMAIL PROTECTED] writes:
 I was referring to the situation where one has this in a WHERE clause:
((vConstant IS NULL) OR (Table.IndexedCol = vConstant))
 where vConstant is a *constant* parameter in a pl/pgsql function.

My mistake, I was thinking of Table.IndexedCol IS NULL.

 In the latest versions (8.1 *or* 8.2), would you expect this to 
 successfully use the index on Table.IndexedCol and not have PG be 
 confused (into a sequential scan) by the (vConstant IS NULL) expression?

No, and there's no confusion about it: with that WHERE clause, the
plan might have to return every row in the table.  The index is useless.

Since you know that the two OR'd conditions are mutually exclusive,
perhaps you could transform the query into a UNION operation, as in this
example:

regression=# prepare foo(int) as select * from tenk1 where $1 is null union all
select * from tenk1 where unique2 = $1;
PREPARE
regression=# explain analyze execute foo(42);
                                  QUERY PLAN
---------------------------------------------------------------------------------
 Append  (cost=0.00..561.02 rows=10001 width=244) (actual time=0.169..0.201 rows=1 loops=1)
   ->  Result  (cost=0.00..458.00 rows=10000 width=244) (actual time=0.012..0.012 rows=0 loops=1)
         One-Time Filter: ($1 IS NULL)
         ->  Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244) (never executed)
   ->  Index Scan using tenk1_unique2 on tenk1  (cost=0.00..3.01 rows=1 width=244) (actual time=0.142..0.166 rows=1 loops=1)
         Index Cond: (unique2 = $1)
 Total runtime: 1.092 ms
(7 rows)

Because of the one-time filter, the seqscan isn't executed unless
needed.

However I'm not sure that this sort of approach scales up if you have
more than one of these conditions in a query ...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is PostgreSQL for this?

2006-12-27 Thread Richard Broersma Jr
 I'm looking for a database system for a SCADA system. The major
 probles I think it's on performance because the application it's going
 to poll about 4k variables per second from hardware and has to 
 register
 the values on the procces table.
 I heard that PostgreSQL provides a bulk loading mechanism called COPY,
 which takes tab-delimited or CSV input from a file. Where COPY can be
 used instead of hundreds or thousands of INSERTS, it can cut execution
 time.

I think PostgreSQL can do what you want, provided you have hardware that can 
support 4K+ writes/second.  However, if you followed the model that PIE uses you 
could really relieve the write load on the database, since they only write a 
15-second rolling average of the process values every minute.  A good example of 
what not to do is how Siemens PCS7 implemented MS SQL Server as its backend 
historian.  They created a new db every week to hold trended data.  The problem 
they faced was that if you requested data that spanned multiple weeks from their 
front end, they would sometimes lose weeks of data in the trend.  The problem in 
their design was that (for some reason) they would write data to the wrong 
database segment. 

Regards,
Richard Broersma Jr.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Is PostgreSQL for this?

2006-12-27 Thread Tomi N/A

2006/12/27, [EMAIL PROTECTED] [EMAIL PROTECTED]:

Hi everybody,
I'm looking for a database system for a SCADA system. The major
probles I think it's on performance because the application it's going
to poll about 4k variables per second from hardware and has to register
the values on the procces table.
I heard that PostgreSQL provides a bulk loading mechanism called COPY,
which takes tab-delimited or CSV input from a file. Where COPY can be
used instead of hundreds or thousands of INSERTS, it can cut execution
time.
I'm less than a novice so I'll thank any piece of advice.


I believe you could easily simulate the load in a small
fake-SCADA-program and see how the hardware at your disposal handles
it with postgresql, a different RDBMS or simply a flat file. Make a
small program which will generate a set of 4k random values and send
them asynchronously over the network to your data acquisition
application which should store the data in the database. Measure how
fast you can send the data and still record everything.
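The measurement idea above can be sketched roughly as follows in Python (file and table names hypothetical): generate one scan of 4k random values and time how fast they can be flushed to a CSV file suitable for COPY.

```python
# Rough load-simulation sketch: write one "scan" of 4000 random
# variable samples to a CSV file and time it. In a real test this
# would loop once per second and send data over the network.
import csv
import random
import time

N_VARS = 4000  # variables polled per second in the scenario above

def write_one_scan(writer, scan_time):
    # one row per variable: timestamp, variable id, sampled value
    for var_id in range(N_VARS):
        writer.writerow([scan_time, var_id, random.random()])

with open("scan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    start = time.perf_counter()
    write_one_scan(writer, start)
    elapsed = time.perf_counter() - start

print(f"wrote {N_VARS} rows in {elapsed:.4f}s")
# The file could then be bulk-loaded with something like:
#   COPY process_values(ts, var_id, value) FROM 'scan.csv' WITH CSV;
```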

If data acquisition speed is your primary concern (as it seems to be),
you might want to use a simple .csv file: you'll probably beat the
performance of any database management system. You could periodically
move the saved data from the .csv files into a database (say,
postgresql) where you could (I assume) analyze it. You might want to
use a separate machine for the database management system so as to
remove any unnecessary CPU and I/O disturbances from the primary data
storage machine.
I don't think your load (32 kBps if your variables are double
precision float values) is a challenge, but running any kind of
analysis on a basically real-time-response-constrained machine might
cost you data losses and I don't know if you can afford those.

Cheers,
t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Is PostgreSQL for this?

2006-12-27 Thread Joshua D. Drake
On Wed, 2006-12-27 at 18:13 +, Tomi N/A wrote:
 2006/12/27, [EMAIL PROTECTED] [EMAIL PROTECTED]:
  Hi everybody,
  I'm looking for a database system for a SCADA system. The major
  probles I think it's on performance because the application it's going
  to poll about 4k variables per second from hardware and has to register
  the values on the procces table.
  I heard that PostgreSQL provides a bulk loading mechanism called COPY,
  which takes tab-delimited or CSV input from a file. Where COPY can be
  used instead of hundreds or thousands of INSERTS, it can cut execution
  time.
  I'm less than a novice so I'll thank any piece of advice.
 
 I believe you could easily simulate the load in a small
 fake-SCADA-program and see how the hardware at your disposal handles
 it with postgresql, a different RDBMS or simply a flat file. Make a
 small program which will generate a set of 4k random values and send
 them asynchronously over the network to your data acquisition
 application which should store the data in the database. Measure how
 fast you can send the data and still record everything.


If you are going to have a live feed you would probably benefit from the
multi insert capability of 8.2:

insert into foo values (...), (...), (...);
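As a client-side sketch of that multi-row form (table name "foo" and the value tuples are illustrative only; real code should use the driver's parameter binding instead of string formatting):

```python
# Hypothetical sketch: build an 8.2-style multi-row INSERT statement
# client-side from a list of tuples.
rows = [(1, "a"), (2, "b"), (3, "c")]

# NOTE: naive formatting for illustration only; it does not escape
# the string values and is unsafe for untrusted input.
placeholders = ", ".join("(%d, '%s')" % (n, s) for n, s in rows)
sql = "INSERT INTO foo VALUES " + placeholders + ";"
print(sql)
# -> INSERT INTO foo VALUES (1, 'a'), (2, 'b'), (3, 'c');
```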

Sincerely,

Joshua D. Drake




 
 If data acquisition speed is your primary concern (as it seems to be),
 you might want to use a simple .csv file: you'll probably beat the
 performance of any database management system. You could periodically
 move the saved data from the .csv files into a database (say,
 postgresql) where you could (I assume) analyze it. You might want to
 use a separate machine for the database management system so as to
 remove any unnecessary CPU and I/O disturbances from the primary data
 storage machine.
 I don't think your load (32 kBps if your variables are double
 precision float values) is a challenge, but running any kind of
 analysis on a basically real-time-response-constrained machine might
 cost you data losses and I don't know if you can afford those.
 
 Cheers,
 t.n.a.
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/
 
-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] pg_dump using MS-DOS BAT file

2006-12-27 Thread Richard Broersma Jr
Creating shell scripts and cron jobs is a pretty easy task in *nix.  I wanted 
to attach a sample BAT file script for anyone who is interested in getting 
something similar to work under Windows. 
Comments are welcome.

Copy the text below into your own newly created BAT file and then create a 
new scheduled job by:

open Control Panel
open Scheduled Tasks
open Add Scheduled Task
Browse to where you saved your BAT file.
Configure the schedule and how many times a day you would like your db dumped.

I didn't attach my BAT file to this email since I expect most email filters 
would balk at a BAT file.  Let me know what you think.

Regards,
Richard Broersma Jr.



REM Backup BAT file script begins here

REM Set these variables to what is appropriate for your system

SET PGPATH=C:\Program Files\PostgreSQL\8.2rc1\bin\
SET SVPATH=O:\02U20410 Rule 1105\Engr\Controls\Instrument Index\db_backup\
SET ALTPTH=C:\Documents and Settings\BroersR\My Documents\db_backupalt\
SET PRJDB=proj02u20411
SET DBUSR=postgres


REM This is an ugly solution to get a timestamp for the newly created
REM dump file.

FOR /F "TOKENS=2,3,4 DELIMS=/ " %%i IN ('DATE /T') DO SET d=%%k-%%i-%%j
FOR /F "TOKENS=1,2,3 DELIMS=: " %%i IN ('TIME /T') DO SET t=%%i%%j%%k

SET DBDUMP=%PRJDB%-%d%-%t%.dump

"%PGPATH%pg_dump.exe" -C -U %DBUSR% %PRJDB% > "%SVPATH%%DBDUMP%"
copy "%SVPATH%%DBDUMP%" "%ALTPTH%"

REM Backup BAT file script ends here

-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Is PostgreSQL for this?

2006-12-27 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/27/06 11:59, Richard Broersma Jr wrote:
 I'm looking for a database system for a SCADA system. The major
  probles I think it's on performance because the application
 it's going to poll about 4k variables per second from hardware
 and has to register the values on the procces table. I heard
 that PostgreSQL provides a bulk loading mechanism called COPY, 
 which takes tab-delimited or CSV input from a file. Where COPY
 can be used instead of hundreds or thousands of INSERTS, it can
 cut execution time.
 
 I think Postgresql can do what you want provide you have the
 hardware that can support 4K + writes/second.  However, if you

As others have mentioned, DP variables are 8 bytes.  If metadata is
16 bytes, that's a total of (8+16)*4096 = 96KBps.

Trivial for even the slowest home-designed hard disks.

Let's then take the CPU.  A dual-core 2GHz Opteron has 4 gigacycles
per second.  That gives a dedicated machine 1 megacycle to handle
each variable per second.

I certainly think that's achievable...
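The back-of-envelope numbers above check out; assuming 4096 variables per second and 16 bytes of metadata per value as stated:

```python
# Re-check the arithmetic above: bandwidth for 4096 doubles/second
# with ~16 bytes of metadata each, and the per-variable CPU budget
# of a dual-core 2 GHz machine.
BYTES_PER_VALUE = 8 + 16            # double + assumed metadata
VARS_PER_SEC = 4096

bandwidth = BYTES_PER_VALUE * VARS_PER_SEC
print(bandwidth)                    # 98304 bytes/s, i.e. 96 KiB/s

cycles_per_var = (2 * 2_000_000_000) // VARS_PER_SEC
print(cycles_per_var)               # ~976k cycles, i.e. ~1 megacycle
```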

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFkt4iS9HxQb37XmcRAm4GAJ4lsDl1Juw2/VMsaKA+6YlOth9RngCgiMse
PlMJq3loRhVUaBh5y+oDLnM=
=QWBN
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Autovacuum Improvements

2006-12-27 Thread Scott Ribe
 Having a mechanism that requires enormous DBA effort and where there
 is considerable risk of simple configuration errors that will be hard
 to notice may not be the best kind of feature :-).

Why not? It seems to have worked remarkably well for the market leader ;-)

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] libnss_pgsql? anyone have it working?

2006-12-27 Thread Tony Caduto

Does anyone have libnss_pgsql working on RedHat EL 4.x?

I compiled it OK, but it does not seem to work.  I copied the compiled 
libs to /lib and set up all the conf files,
but I keep getting an error when I try to log on that states "Cannot 
connect to database".


I am running pg 8.1.5 (installed via RPM); I'm not sure if it is looking for 
libpq to be named something specific or not (other than the 8.1 default).


Thanks,

--
Tony  



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Need Help

2006-12-27 Thread JM
How can I get the disk usage for each table? Can I do it via SQL? 



Thanks,

Mailing-Lists

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Need Help

2006-12-27 Thread Richard Broersma Jr

--- JM [EMAIL PROTECTED] wrote:

 how can i get the disk usage for each table? can i do it via SQL? 

These functions should do what you want:

pg_relation_size(text) or pg_total_relation_size(text) 

I found it on the following link:

http://www.postgresql.org/docs/8.2/interactive/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Re: [GENERAL] Need Help

2006-12-27 Thread 马庆
Maybe you can't get the disk usage info, but you can try the EXPLAIN command.


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of JM
Sent: 2006-12-28 10:43
To: pgsql-general@postgresql.org
Cc: Pgsql-Performance (E-mail)
Subject: [GENERAL] Need Help

how can i get the disk usage for each table? can i do it via SQL? 



Thanks,

Mailing-Lists

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] OS X Tiger, and PostgreSQL 8.2 don't mix?

2006-12-27 Thread Karen Hill
I have OS X Tiger with all the updates:

uname -r
8.8.0

Here is what I get when I try to initdb on a freshly compiled 8.2:

selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... FATAL:
could not create shared memory segment: Cannot allocate memory
DETAIL:  Failed system call was shmget(key=2, size=1646592, 03600).
HINT:  This error usually means that PostgreSQL's request for a shared
memory segment exceeded available memory or swap space. To reduce the
request size (currently 1646592 bytes), reduce PostgreSQL's
shared_buffers parameter (currently 50) and/or its max_connections
parameter (currently 10).
The PostgreSQL documentation contains more information about
shared memory configuration.
child process exited with exit code 1


I read the documentation
(http://www.postgresql.org/docs/8.2/static/kernel-resources.html) and
added the appropriate items to /etc/sysctl.conf, and I rebooted for it
to take effect.

cat /etc/sysctl.conf
kern.sysv.shmmax=4194304
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=1024
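A quick arithmetic check on those values (assuming shmall is counted in 4 kB pages, which is worth verifying for Tiger): the configured ceiling and the failing request compare like this, so the request itself should fit within the limits shown.

```python
# Sanity-check the sysctl values above against the 1646592-byte
# shmget request from the FATAL message.
PAGE = 4096          # assumed page size for kern.sysv.shmall
shmmax = 4194304     # kern.sysv.shmmax
shmall = 1024        # kern.sysv.shmall, in pages (assumption)
request = 1646592    # size from the failed shmget call

print(shmall * PAGE)        # total bytes shmall permits
print(request <= shmmax)    # does the single segment fit shmmax?
```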

I still get the error when I run initdb.  OS X and PostgreSQL have worked
for me before, compiled from source.

regards,

karen


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq