On Tue, 7 Apr 2009, John Cheng wrote:
One concern I have with SSD drives is that the performance degrades over
time.
The bigger concern I have with them is that even the Intel drives have a
volatile write cache in them. You either have to turn off the write cache
(which degrades performance
Hi,
What does the Tuning Wizard do? The one you can run on PGS? Does it just
touch up the postgresql.conf? Does it play with other stuff as well?
I am asking because every time I run the wizard I cannot log in with the
password I chose when I installed PGS. What is going on? Replacing the
On 30/03/09 05:39, Osvaldo Kussama wrote:
2009/3/29 Bruno Baguette bruno.bague...@gmail.com:
I would like to do a SELECT on that table, but split it up by 24-hour day:
Try:
bdteste=# \set ini '\'2009-03-30 14:50\'::timestamp'
bdteste=# \set fim '\'2009-04-01 19:00\'::timestamp'
bdteste=#
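The example above was cut off; here is a minimal sketch of one way to finish
the idea, bucketing rows by 24-hour day starting from :ini (the table name
events and column name stamp are hypothetical):

bdteste=# SELECT :ini + s.i * '1 day'::interval AS day_start,
bdteste-#        count(e.stamp)
bdteste-#   FROM generate_series(0, 2) AS s(i)
bdteste-#   LEFT JOIN events e
bdteste-#     ON e.stamp >= :ini + s.i * '1 day'::interval
bdteste-#    AND e.stamp <  :ini + (s.i + 1) * '1 day'::interval
bdteste-#  GROUP BY 1
bdteste-#  ORDER BY 1;

The psql variables expand in place, so each bucket covers one 24-hour window
from the :ini timestamp; the 0..2 range is hard-coded here to cover the
quoted two-day window.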
On Wed, Apr 8, 2009 at 12:16 PM, Jennifer Trey jennifer.t...@gmail.com wrote:
Look, I am telling you now what I did.
I don't want to hear claims about how it's not the tuning wizard's fault. It
is!
And it does more than just the postgresql.conf, because that claim is not true.
I'm sorry to hear you think I'm
Hehe, I am not saying that you are untruthful, but obviously there is more
to this issue than just the .conf file.
Because when I return to the old conf the problem is still there, and I am
also certain that this problem was not there before I installed the
tuning wizard.
Would you at least
On Wed, Apr 08, 2009 at 02:16:35PM +0300, Jennifer Trey wrote:
Looking at the pgadmin.log I see this:
2009-04-08 02:03:01 ERROR : server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Why aren't you looking at
On Wed, Apr 8, 2009 at 7:50 AM, Jennifer Trey jennifer.t...@gmail.com wrote:
The service did not report an error.
More help is available by typing NET HELPMSG 3534
Not sure what is going on. Server refuses to start it seems. Please help me
debug this problem :)
My first guess as to why
Look, I am telling you now what I did.
I don't want to hear claims about how it's not the tuning wizard's fault. It
is!
And it does more than just the postgresql.conf, because that claim is not true.
Up until today I have been able to log in and out of my DB using
pgAdmin.
Now, after considering what you
On Wed, Apr 8, 2009 at 10:48 AM, Jennifer Trey jennifer.t...@gmail.com wrote:
Hi,
What does the Tuning Wizard do? The one you can run on PGS? Does it just
touch up the postgresql.conf? Does it play with other stuff as well?
It only touches postgresql.conf, and only after you're given a
Jennifer,
Can you send a copy of both your before and after config files to the list?
On Wednesday 08 April 2009 05:16:35 Jennifer Trey wrote:
Look, I am telling you now what I did.
I don't want to hear claims about how it's not the tuning wizard's fault. It
is!
And it does more than the
Jennifer,
obviously you are on Windows. And the problem you are describing seems to
be:
after running the tuning wizard, the PostgreSQL service does not start any
more.
At the same time you are describing another problem:
you are not able to delete some files / directories connected with
I just tried to change the permissions for all users, admin to user to
postgres. Access Denied for all! The machines have taken over!
It seems like my long message didn't make it :S Here it is again. (Perhaps it
was because of the attached pic, I will leave it out this time)--
On Wed, Apr 8, 2009 at 3:17 PM, Massa, Harald Armin c...@ghum.de wrote:
Jennifer,
obviously you are on windows. And the problem you are describing
Jennifer,
1.) you should state the exact version of your operating system
Windows Web Server 2008 x64
2.) you should state the exact PostgreSQL Version
The latest available, installed this: postgresql-8.3.7-1-windows.exe
3.) exactly describe your Windows Authorization.
Greetings!
On my computer, I have a copy of a customer's database for which a full backup
file would be about 300 megabytes long. There are several history tables that
are not needed for day-to-day operation of our system. I came up with a DOS
batch file that consists of a sequence of
After doing a lot of clicking on the permissions, it automagically fixed
itself. Fantastic!
The root problem was Windows itself!
I would recommend the EnterpriseDB folks consider this though, because in
some way it is affecting your software.
Thank you all. Now join my new thread for real
Ok,
I have left the previous thread. After changing the last permissions, even
though it said Access Denied, suddenly PostgreSQL started to work again. I
will not dig any further into the strangeness.
I copied the contents of the .conf from the tuning wizard and restarted. Still
working!
I want to say
On Fri, 27 Mar 2009, Mauro Bertoli wrote:
SELECT db1.a.id FROM db1.a
UNION
SELECT db2.b.id FROM db2.b
Where db1 is a database and db2 is another database. a is a
table in database db1 and b is a table in database db2
You might be able to create the equivalent of a union by having a
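For reference, a minimal sketch of that approach with the contrib dblink
module (the connection string and the column type here are assumptions):

SELECT id FROM a
UNION
SELECT id
  FROM dblink('dbname=db2', 'SELECT id FROM b') AS remote_b(id integer);

dblink runs the inner query in the other database and returns the rows as a
set of records, which the column alias list then types.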
On Wednesday 08 April 2009 6:51:44 am Radcon Entec wrote:
Greetings!
On my computer, I have a copy of a customer's database for which a full
backup file would be about 300 megabytes long. There are several history
tables that are not needed for day-to-day operation of our system. I came
up
In response to Jennifer Trey jennifer.t...@gmail.com:
I have 8GB memory, an Intel Quad Core 2.4GHz with 8MB L2 cache. I am running
Windows Web Server 2008 x64 and will be running a Java (64-bit)
application.
I want to give the Java app room to work with 2-3GB. The operating system
Bill, Jennifer,
shared_buffers = 1024 # min 128kB or max_connections*16kB ## Also too
low.
Right? I've got 3GB to work with!
Assuming that's equating to 1G, then the value is about right. Common
best practice is to set this value to 1/4 - 1/3 of the memory available
for PostgreSQL.
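As a worked example of that rule of thumb: with 3GB reserved for PostgreSQL,
1/4 to 1/3 comes out to roughly 768MB - 1GB, e.g. in postgresql.conf (8.3
accepts unit suffixes):

shared_buffers = 1024MB   # ~1/3 of the 3GB reserved for PostgreSQL

Note, though, that as others point out later in the thread, Windows does
better with a much smaller setting.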
On Wed, Apr 8, 2009 at 5:23 PM, Bill Moran wmo...@potentialtech.com wrote:
In response to Jennifer Trey jennifer.t...@gmail.com:
I have 8GB memory, an Intel Quad Core 2.4GHz with 8MB L2 cache. I am running
Windows Web Server 2008 x64 and will be running a Java (64-bit)
application.
On Wed, Apr 8, 2009 at 10:23 AM, Bill Moran wmo...@potentialtech.com wrote:
In response to Jennifer Trey jennifer.t...@gmail.com:
maintenance_work_mem = 16384
If your vacuums and/or CREATE INDEX runs are taking ages, a
higher value here may be useful. I would need to know
postgresql suddenly died...
during recovery
2009-04-08 16:35:34 CEST FATAL: the database system is starting up
^^^ several
2009-04-08 16:35:34 CEST LOG: incomplete startup packet
2009-04-08 16:36:53 CEST FATAL: the database system is starting up
2009-04-08 16:36:53 CEST LOG: startup process
On Wed, Apr 8, 2009 at 5:38 PM, Massa, Harald Armin c...@ghum.de wrote:
Bill, Jennifer,
shared_buffers = 1024 # min 128kB or max_connections*16kB ## Also too
low.
Right? I've got 3GB to work with!
Assuming that's equating to 1G, then the value is about right. Common
best practice is
Ivan Sergio Borgonovo m...@webthatworks.it writes:
2009-04-08 16:36:53 CEST LOG: startup process (PID 3176) was
terminated by signal 11: Segmentation fault 2009-04-08 16:36:53 CEST
LOG: aborting startup due to startup process failure
Hmm, what Postgres version is this? Can you get a stack
Greetings!
I'm running PostgreSQL 8.1 under Windows XP, looking at a database hosted on a
machine running PostgreSQL under Windows Server 2003.
The database has a table with three simple columns and 22 million rows. I am
trying to back up that table by itself. However, pg_dump finishes
On Wed, 08 Apr 2009 10:59:54 -0400
Tom Lane t...@sss.pgh.pa.us wrote:
Ivan Sergio Borgonovo m...@webthatworks.it writes:
2009-04-08 16:36:53 CEST LOG: startup process (PID 3176) was
terminated by signal 11: Segmentation fault 2009-04-08 16:36:53
CEST LOG: aborting startup due to startup
Radcon Entec wrote:
Greetings!
I'm running PostgreSQL 8.1 under Windows XP, looking at a database
hosted on a machine running PostgreSQL under Windows Server 2003.
The database has a table with three simple columns and 22 million
rows. I am trying to back up that table by itself.
Hi all,
I'm designing a PostgreSQL database, and would appreciate some design advice.
I've got a fairly straightforward table that's similar to a blog table
(entryId, date, title, author, etc). There is, however, the
requirement to allow a single, fairly bulky binary attachment to
around 1% of
Jennifer Trey wrote:
As far as I know, I have one user and that is the Administrator. Not
sure if that is the commander in chief but obviously not, because he
is screwing with me a lot.
Unlike Unix derivatives, the Windows Administrator account does NOT have
file access override rights,
Ian Mayo ianm...@tesco.net writes:
I've got a fairly straightforward table that's similar to a blog table
(entryId, date, title, author, etc). There is, however, the
requirement to allow a single, fairly bulky binary attachment to
around 1% of the rows.
There will be a few million rows, and
On Wed, 8 Apr 2009, Massa, Harald Armin wrote:
documenting that for the wiki is still on my backlog; so, here:
shared_buffers of PostgreSQL on Windows != shared_buffers of PostgreSQL on Unix
There's already comments about that in the shared_buffers section of
Cheers Tom,
On Wed, Apr 8, 2009 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ian Mayo ianm...@tesco.net writes:
[snip]
No. You'd basically be manually reinventing the TOAST mechanism;
or the large object mechanism, if you choose to store the blob
as a large object rather than a plain
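In other words, a plain bytea column already gets out-of-line, compressed
storage for large values. A minimal sketch, with hypothetical names:

CREATE TABLE entry (
    entryid    serial PRIMARY KEY,
    title      text,
    author     text,
    attachment bytea   -- NULL for the ~99% of rows with no attachment;
                       -- big values are moved out of line by TOAST
);

No manual side-table bookkeeping is needed for this variant; TOAST only
kicks in for the rows that actually carry a large value.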
max_connections = 150 # A compromise :)
effective_cache_size = 2048MB # Old value 439MB -- Even older : 128MB
# Is this too high?
maintenance_work_mem = 96MB # Old 16MB. Would 64MB be better? Updates
and therefore re-indexing of tuples happen quite frequently.
work_mem = 3MB
# Old was 1MB!?
Oleg Bartunov o...@sai.msu.su wrote:
I probably just need to have that Aha! moment, slap my forehead, and move
on; but I'm not quite understanding something. The answer to this question
could be it: Can I use a different set of dictionaries for creating the
tsquery than I did for the
Please remember to reply-all so others can help or see the solution as
well (also, the convention on this list is to bottom-post, not top-post).
Radcon Entec wrote:
Steve,
Here's the exact command and output, taken from the DOS command window:
C:\Documents and Settings\entec\program
On Wed, Apr 8, 2009 at 12:05 PM, Jennifer Trey jennifer.t...@gmail.com wrote:
max_connections = 150 # A compromise :)
Scott, you mentioned:
You can also use the pg_stat_all_indexes view to look at index scans
vs. tuples being read; this can sometimes hint at index 'bloat'. I
would also
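For reference, a look at those counters might be something like this (the
columns are the standard ones in the pg_stat_all_indexes view):

SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read
  FROM pg_stat_all_indexes
 WHERE schemaname = 'public'
 ORDER BY idx_scan DESC;

An index whose idx_tup_read is very high relative to idx_scan can be a hint
worth following up.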
Scott, thank you.
I think I might have misunderstood the effective cache size. It's measured in
8kB blocks, so the old number 449697 equals about 3.5 GB, which is quite a
lot. Should I lower this? I had plans to use 2.75GB max. Can I put 2.75GB
there? Should I leave it?
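Checking the arithmetic directly (8192 bytes is the default block size):

SELECT 449697 * 8192 / 1024.0^3 AS old_gb,       -- ~3.43 GB
       2.75 * 1024^3 / 8192     AS blocks_275gb;  -- 360448 blocks = 2.75GB

So the old value is indeed about 3.5 GB, and 2.75GB would be 360448 blocks
(or simply effective_cache_size = 2816MB in 8.3's unit syntax).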
Also, Greg. Since I use Java,
On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
One more thing: hey, did you hear? I just got some advice from Tom Lane!
Statistically speaking, he's the person most likely to answer you, by
quite a long way. Out of the ~24k emails going back to Oct 2007 I've
got from pgsql-general
On Wed, Apr 8, 2009 at 12:24 PM, Jennifer Trey jennifer.t...@gmail.com wrote:
I think I might have misunderstood the effective cache size. It's measured in
8kB blocks, so the old number 449697 equals about 3.5 GB, which is quite a
lot. Should I lower this? I had plans to use 2.75GB max. Can I put
Well, no... I don't know that. But in a worst-case scenario, where everything
is using the max, there won't be 3.5 GB for the OS. But for the OS + Postgres
(combined) there will be 2.5 + 2.75... But it seems that there is no
great danger in the effective cache, but a good setting would be nice :)
Is
From: Steve Crawford scrawf...@pinpointresearch.com
<snip>And what was the result? Zero-size file? If not, what was in the file?
</snip>
Here is the text that results from dumping my 22-million-row feedback table:
--
-- PostgreSQL database dump
--
-- Started on 2009-04-08 10:10:49 Eastern Daylight
Jennifer Trey wrote:
Scott, thank you.
I think I might have misunderstood the effective cache size. It's
measured in 8kB blocks, so the old number 449697 equals about 3.5 GB, which
is quite a lot. Should I lower this? I had plans to use 2.75GB max. Can
I put 2.75GB there? Should I leave it?
By the way, a full backup and restore using pgAdmin and accepting all default
settings worked successfully, including all 22 million feedback records.
I still would like to understand why the feedback table cannot be backed up by
itself. The technique of backing up and restoring only selected
Radcon Entec wrote:
Here is the text that results from dumping my 22-million-row feedback
table:
...
CREATE RULE feedback_partition_active AS ON INSERT TO feedback WHERE
(new.charge 7000) DO INSTEAD INSERT INTO feedback_active (charge,
elapsed_time, tag_type, stack, tag_value,
On Wed, 8 Apr 2009 17:39:02 +0100
Sam Mason s...@samason.me.uk wrote:
On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
One more thing: hey, did you hear? I just got some advice from
Tom Lane!
Statistically speaking, he's the person most likely to answer you,
by quite a long
From: Steve Crawford scrawf...@pinpointresearch.com
To: Radcon Entec radconen...@yahoo.com
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Sent: Wednesday, April 8, 2009 1:15:55 PM
Subject: Re: [GENERAL] Table has 22 million records, but backup
On Wed, Apr 8, 2009 at 8:01 AM, Jennifer Trey jennifer.t...@gmail.com wrote:
I would like to further tune the tuning wizard's recommendations though. I
think it put itself on the lower scale.
OK, instead of blindly guessing at better values and making a lot of
concurrent changes, you need to
On Wed, 8 Apr 2009, Jennifer Trey wrote:
shared_buffers = 1024MB # Kept it
As mentioned a couple of times here, this is a really large setting for
Windows. Something like 256MB would work better, and you might even find
some people making a case for 64MB or less on Windows. I don't really
On Wednesday 08 April 2009 11:56:35 Ian Mayo wrote:
Cheers Tom,
On Wed, Apr 8, 2009 at 4:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Ian Mayo ianm...@tesco.net writes:
[snip]
No. You'd basically be manually reinventing the TOAST mechanism;
or the large object mechanism, if you choose
Radcon Entec wrote:
You are, of course, correct. select count(*) from only feedback
returns 0. I have never used (or even seen) PostgreSQL rules before.
When I run the query select * from feedback where charge =
On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
xzi...@users.sourceforge.net wrote:
Maybe I've been reading too much Pascal again lately, but if only 1% of your
rows are going to have data in this column, personally, I'd put it in a
separate table.
thanks for that Robert - it does match my
I just need another set of eyes to see whatever it is that I am
overlooking.
This is the function:
CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
RETURNS TRIGGER AS $pg_fn$
-- ROW AFTER TRIGGER
-- trigger passes identifier_type and _description
--
On Wed, Apr 08, 2009 at 03:59:22PM -0400, James B. Byrne wrote:
EXCEPTION
WHEN unique_violation THEN
-- NULL -- do nothing
RETURN NULL; -- AFTER trigger results are ignored anyway
END;
EXCEPTION is part of a BEGIN/EXCEPTION/END block,
so you will
Sam Mason wrote:
On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
One more thing: hey, did you hear? I just got some advice from Tom Lane!
Statistically speaking, he's the person most likely to answer you by
Even so, this might be the #1 advantage of Postgres over Oracle (cost
I'm in need of some psql advice;
I believe it's rather a trivial issue, but confusing for me...
(and I hope this is the correct list for this?)
I'm facing the following issue:
I've got 2 tables like:
CREATE TABLE td_fetch_by_rrd_id (
f_rrd_id numeric NOT NULL,
f_timestamp numeric NOT NULL,
f_ds numeric,
On Wednesday 08 April 2009 at 21:59:22, James B. Byrne wrote:
I just need another set of eyes to see whatever it is that I am
overlooking.
This is the function:
CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
RETURNS TRIGGER AS $pg_fn$
-- ROW AFTER TRIGGER
James B. Byrne byrn...@harte-lyne.ca writes:
I just need another set of eyes to see whatever it is that I am
overlooking.
The RETURN is inside the EXCEPTION clause.
You really need two BEGINs here, one for the outer function body and
one for the exception block around the INSERT.
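To make that concrete, here is a minimal skeleton of the two-BEGIN structure
(the table and column names are guesses based on the comments quoted
earlier, not the original function body):

CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
RETURNS TRIGGER AS $pg_fn$
BEGIN  -- outer block: the function body
    BEGIN  -- inner block: scopes the exception handler to the INSERT
        INSERT INTO identifiers (identifier_type, identifier_description)
        VALUES (NEW.identifier_type, NEW.identifier_description);
    EXCEPTION
        WHEN unique_violation THEN
            NULL;  -- do nothing: swallow the duplicate
    END;
    RETURN NULL;  -- AFTER trigger results are ignored anyway
END;
$pg_fn$ LANGUAGE plpgsql;

The RETURN now belongs to the outer block, so it runs whether or not the
INSERT hit a unique_violation.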
On Wed, April 8, 2009 16:05, hubert depesz lubaczewski wrote:
On Wed, Apr 08, 2009 at 03:59:22PM -0400, James B. Byrne wrote:
EXCEPTION
WHEN unique_violation THEN
-- NULL -- do nothing
RETURN NULL; -- AFTER trigger results are ignored anyway
END;
On Wed, Apr 08, 2009 at 04:13:58PM -0400, James B. Byrne wrote:
Does this mean that the example given on pg. 798 of the manual is in
error, or have I misread it?
BEGIN
EXCEPTION
WHEN UNIQUE_VIOLATION THEN
-- do nothing
END;
please
On Wed, April 8, 2009 16:06, Tom Lane wrote:
James B. Byrne byrn...@harte-lyne.ca writes:
I just need another set of eyes to see whatever it is that I am
overlooking.
The RETURN is inside the EXCEPTION clause.
You really need two BEGINs here, one for the outer function body and
one for
Kashmir kashmir_us_1...@yahoo.com writes:
Being an SQL lamer, I used some query-builder help to build my query (which
has served me quite well in the past for all my 'complicated' SQL), and it
suggested for f_rrd_id=444 using something like:
SELECT
td_fetch1m_by_rrd_id.f_timestamp,
ianm...@tesco.net (Ian Mayo) writes:
On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
xzi...@users.sourceforge.net wrote:
Maybe I've been reading too much Pascal again lately, but if only 1% of your
rows are going to have data in this column, personally, I'd put it in a
separate table.
thanks
On Wednesday 08 April 2009 15:30:28 Ian Mayo wrote:
On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
xzi...@users.sourceforge.net wrote:
Maybe I've been reading too much Pascal again lately, but if only 1% of
your rows are going to have data in this column, personally, I'd put it
in a
On Wed, Apr 08, 2009 at 05:24:08PM +0200, Ivan Sergio Borgonovo wrote:
How on Debian?
Debian does all its automagic stuff in init. I never learned how to
start pg manually.
What might be easier is turning on core dumps (ulimit -S -c unlimited)
and then start postgres and see if it drops a
On Wed, Apr 08, 2009 at 05:06:44PM -0400, Robert Treat wrote:
A slightly longer answer would be that, as a general rule, attributes
of your relations that only apply to 1% of the rows are better
represented as a one to N relationship using a second table.
Have you tried to
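A sketch of that one-to-N layout (here at most one per entry), assuming a
parent table entry(entryid); all names are hypothetical:

CREATE TABLE entry_attachment (
    entryid    integer PRIMARY KEY REFERENCES entry (entryid),
    attachment bytea NOT NULL
);

SELECT e.title, a.attachment
  FROM entry e
  JOIN entry_attachment a USING (entryid);

Only the ~1% of entries that have attachments get a row in the side table,
so the main table stays narrow and free of NULLs.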
Robert Treat wrote on 08.04.2009 23:06:
http://www.databasedesign-resource.com/null-values-in-a-database.html
That is a very - hmm - strange article.
One of the proofs that nulls are bad is that SELECT * FROM theTable (theTable
being empty) returns nothing, whereas SELECT COUNT(*) FROM
Robert Treat wrote:
You can be sure that discussion of this topic in this forum will soon be
visited by religious zealots, but the short answer is nulls are bad, mmkay.
A slightly longer answer would be that, as a general rule, attributes of your
relations that only apply to 1% of the
On Apr 7, 2009, at 7:32 PM, Kenneth Tilton wrote:
Scott Marlowe wrote:
On Tue, Apr 7, 2009 at 7:12 PM, Kenneth Tilton
kentil...@gmail.com wrote:
Scott Marlowe wrote:
You can use a different method if you need a table available to the
same session. Create a schema based on the session id,
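A rough sketch of that schema-per-session idea (the session id 42 is made
up):

CREATE SCHEMA session_42;
CREATE TABLE session_42.scratch (id integer, value text);
SET search_path = session_42, public;  -- unqualified names hit the
                                       -- session schema first
-- ... use scratch as a private work table ...
DROP SCHEMA session_42 CASCADE;        -- clean up when the session ends

Unlike a temp table, the schema survives across connections, so any session
that sets the same search_path can see the table.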
I still would like to understand why the feedback table cannot be backed
up by itself.
Because there is no actual data in the feedback table.
It's being stored in the feedback_active, feedback_archived_7000 and
your other tables instead.
--
Postgresql php tutorials
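That is easy to confirm, since the quoted rule redirects inserts into the
child tables (partition names taken from earlier in the thread):

SELECT count(*) FROM ONLY feedback;           -- 0: the parent holds no rows
SELECT count(*) FROM feedback_active;         -- the rows actually live here
SELECT count(*) FROM feedback_archived_7000;  -- ... and here

pg_dump -t feedback only dumps the parent table's own rows, which is why
the table-level backup comes out empty.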
On Wed, 8 Apr 2009 23:59:43 +0200
Martijn van Oosterhout klep...@svana.org wrote:
What might be easier is turning on core dumps (ulimit -S -c
unlimited) and then start postgres and see if it drops a core
thanks.
Is there a way to just kill recovery for one DB? Just don't
start it at all?
I'm currently porting an Informix-based application to PostgreSQL 8.3.
All of the dates processed by the application are in the mmdd
format. According to the date input table at the link below, the
mmdd format is not listed. The mmdd format is supported, but
I'd have to modify a LOT
Martijn van Oosterhout wrote:
On Wed, Apr 08, 2009 at 05:24:08PM +0200, Ivan Sergio Borgonovo wrote:
How on Debian?
Debian does all its automagic stuff in init. I never learned how to
start pg manually.
What might be easier is turning on core dumps (ulimit -S -c unlimited)
and then start
On Wednesday 8. April 2009, Ron Mayer wrote:
Sam Mason wrote:
On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
One more thing: hey, did you hear? I just got some advice from
Tom Lane!
Statistically speaking, he's the person most likely to answer you by
Even so, this might be the
I've encountered some db corruption after restarting postgres on my
database server running 8.2.4. I think that postgres did not shut down
cleanly. Postgres started appropriately but crashed 45 minutes later.
I used pg_resetxlog after the crash to get the db to start again but it
appears that
This is from the link:

1/8/1999   January 8 in MDY mode; August 1 in DMY mode
1/18/1999  January 18 in MDY mode; rejected in other modes
01/02/03   January 2, 2003 in MDY mode; February 1, 2003 in DMY mode;
           February 3, 2001 in YMD mode
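The ambiguity in that table is controlled by the DateStyle setting; a small
illustration of the quoted rows:

SET datestyle = 'ISO, MDY';
SELECT '01/02/03'::date;   -- 2003-01-02 (January 2, 2003)

SET datestyle = 'ISO, DMY';
SELECT '01/02/03'::date;   -- 2003-02-01 (February 1, 2003)

DateStyle can also be pinned in postgresql.conf or per database, so a ported
application can keep the input interpretation it expects.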
I would imagine you would have better luck dropping the index and
recreating. But considering you're 98% full on that drive, it looks like
you're about to have other problems...
On Wed, Apr 8, 2009 at 8:32 PM, Jeff Brenton jbren...@sandvine.com wrote:
I've encountered some db corruption after
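For reference, a sketch of the drop-and-recreate route for the primary key
index mentioned later in the thread (the key column here is a guess):

ALTER TABLE testrun_log DROP CONSTRAINT testrun_log_pkey;
ALTER TABLE testrun_log ADD CONSTRAINT testrun_log_pkey PRIMARY KEY (id);

Unlike REINDEX, this releases the old index's space up front before the
rebuild starts, which matters on a drive that is already 98% full.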
On Wednesday 08 April 2009 6:32:06 pm Jeff Brenton wrote:
I've encountered some db corruption after restarting postgres on my
database server running 8.2.4. I think that postgres did not shut down
cleanly. Postgres started appropriately but crashed 45 minutes later.
I used pg_resetxlog after
There are no filesystem level content size restrictions that I am aware
of on this system. The user pgsql should have full access to the
filesystems indicated except for the root filesystem.
Where is the temporary location? I am searching around to see if I can
specify it anywhere in the
I was looking at dropping the index and recreating. Part of the reason
that I restarted postgres was to enable WAL archiving so that I can
migrate to a larger filesystem next week. I've got a system with a 1.3T
array that I will be migrating to. This DB has been neglected and I am
trying to fix
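For what it's worth, in 8.2 WAL archiving is enabled simply by setting
archive_command in postgresql.conf (the destination path here is made up):

archive_command = 'cp "%p" /mnt/archive/"%f"'

%p expands to the path of the WAL segment and %f to its file name; the
command must return zero only when the copy truly succeeded.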
On Wed, 2009-04-08 at 22:14 -0400, Jeff Brenton wrote:
There are no filesystem level content size restrictions that I am aware
of on this system. The user pgsql should have full access to the
filesystems indicated except for the root filesystem.
Inodes?
Where is the temporary location?
There are 9 million inodes free on /db. All other partitions have at
least half a million free.
-Original Message-
From: Joshua D. Drake [mailto:j...@commandprompt.com]
Sent: Wednesday, April 08, 2009 10:26 PM
To: Jeff Brenton
Cc: Adrian Klaver; pgsql-general@postgresql.org
Subject: Re:
I'm trying to configure Postgres to connect to my university's LDAP
server to authenticate database users. In my pg_hba.conf, I have:
hostssl all +members 129.21.0.0/16 ldap
ldaps://ldap.rit.edu:636/ou=people,dc=rit,dc=edu;uid=
These are the same connection settings I'm
All,
From the C API, how do I check for the existence of a column name in
a given table?
Thanks,
Eric
Eric Smith wrote:
All,
From the C API, how do I check for the existence of a column name in a
given table?
select data_type from information_schema.columns where
table_schema='public' and table_name='given_table' and
column_name='some_column';
that will return the data_type if the
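A variant that folds the check into a single boolean, which is easy to pick
up from libpq with PQgetvalue (it comes back as the text 't' or 'f'):

SELECT EXISTS (
    SELECT 1
      FROM information_schema.columns
     WHERE table_schema = 'public'
       AND table_name   = 'given_table'
       AND column_name  = 'some_column'
) AS column_exists;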
Jeff Brenton wrote:
I've attempted to re-index the pkey listed but after an hour it fails
with
REINDEX INDEX testrun_log_pkey;
ERROR: could not write block 1832079 of temporary file: No space left
on device
HINT: Perhaps out of disk space?
There is currently 14GB free on the