Re: [GENERAL] 9.1 to 9.2 requires a dump/reload?

2012-10-23 Thread Nikolas Everett
On Mon, Oct 22, 2012 at 6:17 PM, Alan Hodgson  wrote:

> On Monday, October 22, 2012 05:55:07 PM Nikolas Everett wrote:
> > I see that pg_upgrade is an option.  Having never used it, how long should I
> > expect pg_upgrade to take?  Obviously we'll measure it in our
> environment,
> > but it'd be nice to have a ballpark figure.
>
> pg_upgrade using hard links should only take a minute or 2. You'll also
> need to shuffle around packages and services and config files. The
> slowest part for any decent sized database will be doing an analyze
> after bringing it up under 9.2, though. So however long that takes for
> your db, plus maybe 10-15 minutes or so, if you've practiced.
>
>

Yikes!  Analyze will certainly take the longest time - we'll have to build
some kind of strategy for which tables to analyze first and how many to
analyze at once.
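
Something like this rough sketch is what I have in mind (the table names
here are just placeholders for whatever our application hits hardest):

-- Analyze the hottest tables by hand, in priority order.
ANALYZE accounts;
ANALYZE orders;

-- Then generate ANALYZE statements for everything else, biggest first.
SELECT 'ANALYZE ' || schemaname || '.' || relname || ';'
  FROM pg_stat_user_tables
 WHERE relname NOT IN ('accounts', 'orders')
 ORDER BY pg_total_relation_size(relid) DESC;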

Thanks for letting me know.

Nik


[GENERAL] 9.1 to 9.2 requires a dump/reload?

2012-10-22 Thread Nikolas Everett
I was just looking at
http://www.postgresql.org/docs/devel/static/release-9-2.html and it
mentioned that a dump/reload cycle was required to upgrade from a previous
release.  I just got done telling some of my coworkers that PG had been
bitten by this enough times that they were done with it.  Am I wrong?  Is
this normal?

I see that pg_upgrade is an option.  Having never used it, how long should I
expect pg_upgrade to take?  Obviously we'll measure it in our environment,
but it'd be nice to have a ballpark figure.

Nik


Re: [GENERAL] Please help me write a query

2010-05-27 Thread Nikolas Everett
Got it:
SELECT state1, timestamp
  FROM (SELECT state1, timestamp, lag(state1) OVER (ORDER BY timestamp)
          FROM test) AS foo
 WHERE state1 != lag OR lag IS NULL
 ORDER BY timestamp;

 state1 |          timestamp
--------+-----------------------------
      1 | now() - interval '12 hours'
      2 | now() - interval '9 hours'
      1 | now() - interval '8 hours'

Without lag IS NULL I miss the first row.
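
For the archives: IS DISTINCT FROM treats NULL like an ordinary value, so
the same thing can be written without the explicit NULL check:

SELECT state1, timestamp
  FROM (SELECT state1, timestamp, lag(state1) OVER (ORDER BY timestamp)
          FROM test) AS foo
 WHERE state1 IS DISTINCT FROM lag
 ORDER BY timestamp;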

On Thu, May 27, 2010 at 11:44 AM, Nikolas Everett  wrote:

> The 10 and 11 hour interval are being skipped because I'm only interested
> in the transitions of state 1. State 1 only transitioned three times at now
> - 12, now - 9 and now - 8.
>
> The table has both transitions in it because I frequently care about them
> both together.  I just don't in this case.
>
>
> On Thu, May 27, 2010 at 12:36 PM, Justin Graf  wrote:
>
>>  On 5/27/2010 9:45 AM, Nikolas Everett wrote:
>>
>> Sorry.  Here is the setup:
>> CREATE TABLE test (id BIGSERIAL PRIMARY KEY, state1 INT NOT NULL, state2
>> INT NOT NULL, timestamp TIMESTAMP);
>>  INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
>> interval '12 hours');
>>   INSERT INTO test (state1, state2, timestamp) VALUES (1, 2, now() -
>> interval '11 hours');
>>   INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
>> interval '10 hours');
>>  INSERT INTO test (state1, state2, timestamp) VALUES (2, 1, now() -
>> interval '9 hours');
>>  INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
>> interval '8 hours');
>>
>>
>>  I want to write a query that spits out:
>>  state1 |          timestamp
>> --------+-----------------------------
>>       1 | now() - interval '12 hours'
>>       2 | now() - interval '9 hours'
>>       1 | now() - interval '8 hours'
>>
>>
>>  I have a question: what makes these values different other than the
>> timestamp?
>>
>>
>> 1, 1, now() - interval '12 hours'
>> *1, 1, now() - interval '10 hours'*
>>
>> The reason I ask is because you show *1, 1, now() - interval '8 hours'*
>> in the desired output.   What logic keeps the 8 hour and 12 hour but not
>> the 10 hour interval?
>>
>> It's kinda hard to understand why the 10 hour interval is being skipped.
>>
>
>


Re: [GENERAL] Please help me write a query

2010-05-27 Thread Nikolas Everett
The 10 and 11 hour interval are being skipped because I'm only interested in
the transitions of state 1. State 1 only transitioned three times at now -
12, now - 9 and now - 8.

The table has both transitions in it because I frequently care about them
both together.  I just don't in this case.

On Thu, May 27, 2010 at 12:36 PM, Justin Graf  wrote:

>  On 5/27/2010 9:45 AM, Nikolas Everett wrote:
>
> Sorry.  Here is the setup:
> CREATE TABLE test (id BIGSERIAL PRIMARY KEY, state1 INT NOT NULL, state2
> INT NOT NULL, timestamp TIMESTAMP);
>  INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
> interval '12 hours');
>   INSERT INTO test (state1, state2, timestamp) VALUES (1, 2, now() -
> interval '11 hours');
>   INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
> interval '10 hours');
>  INSERT INTO test (state1, state2, timestamp) VALUES (2, 1, now() -
> interval '9 hours');
>  INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() -
> interval '8 hours');
>
>
>  I want to write a query that spits out:
>  state1 |          timestamp
> --------+-----------------------------
>       1 | now() - interval '12 hours'
>       2 | now() - interval '9 hours'
>       1 | now() - interval '8 hours'
>
>
>  I have a question: what makes these values different other than the
> timestamp?
>
>
> 1, 1, now() - interval '12 hours'
> *1, 1, now() - interval '10 hours'*
>
> The reason I ask is because you show *1, 1, now() - interval '8 hours'*
> in the desired output.   What logic keeps the 8 hour and 12 hour but not
> the 10 hour interval?
>
> It's kinda hard to understand why the 10 hour interval is being skipped.
>


Re: [GENERAL] Please help me write a query

2010-05-27 Thread Nikolas Everett
Sorry.  Here is the setup:
CREATE TABLE test (id BIGSERIAL PRIMARY KEY, state1 INT NOT NULL, state2 INT
NOT NULL, timestamp TIMESTAMP);
INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - interval
'12 hours');
INSERT INTO test (state1, state2, timestamp) VALUES (1, 2, now() - interval
'11 hours');
INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - interval
'10 hours');
INSERT INTO test (state1, state2, timestamp) VALUES (2, 1, now() - interval
'9 hours');
INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - interval
'8 hours');


I want to write a query that spits out:
 state1 |          timestamp
--------+-----------------------------
      1 | now() - interval '12 hours'
      2 | now() - interval '9 hours'
      1 | now() - interval '8 hours'

Standard grouping destroys the third row so that's out.  No grouping at all
gives repeats of state1.  Is this what partitioning is for?

Nik

On Thu, May 27, 2010 at 10:20 AM, Ozz Nixon  wrote:

> Lost me a bit, do you mean DISTINCT?
>
> select distinct state1, first(timestamp) from table
>
> On May 27, 2010, at 10:04 AM, Nikolas Everett wrote:
>
> > Say I have a table that stores state transitions over time like so:
> > id, transitionable_id, state1, state2, timestamp
> >
> > I'm trying to write a query that coalesces changes in state2 away to
> produce just a list of transitions of state1.  I guess it would look
> something like
> >
> > SELECT state1, FIRST(timestamp)
> > FROM table
> >
> > but I have no idea how to aggregate just the repeated state1 rows.
>
>


[GENERAL] Please help me write a query

2010-05-27 Thread Nikolas Everett
Say I have a table that stores state transitions over time like so:
id, transitionable_id, state1, state2, timestamp

I'm trying to write a query that coalesces changes in state2 away to produce
just a list of transitions of state1.  I guess it would look something like

SELECT state1, FIRST(timestamp)
FROM table

but I have no idea how to aggregate just the repeated state1 rows.


Re: [GENERAL] 110,000,000 rows

2010-05-27 Thread Nikolas Everett
I've had a reporting database with just about a billion rows.  Each row
was horribly large because the legacy schema had problems.  We partitioned
it out by month and it ran about 30 million rows a month.  With a reasonably
large box you can get that kind of data into memory and indexes are
almost unnecessary.  So long as you have constraint exclusion and a good
partition scheme you should be fine.  Throw in a well designed schema and
you'll be cooking well into the tens of billions of rows.
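
To be concrete, a quick sanity check (using the same made-up table and
column names as the query below) is to EXPLAIN a range query and make sure
only the expected children show up in the plan:

SET constraint_exclusion = on;

EXPLAIN SELECT count(*)
  FROM bigtable
 WHERE timestamp >= '2010-04-01' AND timestamp < '2010-05-01';
-- With a CHECK constraint on each monthly child, the plan should only
-- touch the April 2010 partition (plus the empty parent).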

We ran self joins of that table reasonably consistently by the way:
SELECT lhs.id, rhs.id
FROM bigtable lhs, bigtable rhs
WHERE lhs.id > rhs.id
 AND '' > lhs.timestamp AND lhs.timestamp >= ''
 AND '' > rhs.timestamp AND rhs.timestamp >= ''
 AND lhs.timestamp = rhs.timestamp
 AND lhs.foo = rhs.foo
 AND lhs.bar = rhs.bar

This really liked the timestamp index and we had to be careful to only do it
for a few days at a time.  It took a few minutes each go but it was
definitely doable.

Once you get this large you do have to be careful with a few things though:
*It's somewhat easy to write super long queries or updates.  This can leave
lots of dead rows in your tables.  Limit your longest running queries to a
day or so.  Note that queries are unlikely to take that long but updates with
massive date ranges could.  SELECT COUNT(*) FROM bigtable took about 30
minutes when the server wasn't under heavy load.
*You sometimes get bad plans because:
**You don't or can't get enough statistics about a column (see the sketch
below for one knob you can turn).
**PostgreSQL doesn't capture statistics about two columns together.
 PostgreSQL has no way of knowing that columnA = 'foo' implies columnB =
'bar' about 30% of the time.
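
For the single-column case there is at least a knob to turn -- a sketch,
with made-up names, of raising the statistics target for a troublesome
column and re-analyzing:

ALTER TABLE bigtable ALTER COLUMN foo SET STATISTICS 1000;
ANALYZE bigtable;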

Nik

On Thu, May 27, 2010 at 5:58 AM, Massa, Harald Armin  wrote:

> Dann,
>
> There really are domains that big, so that there is no more normalization
>> or other processes to mitigate the problem.
>>
>> Examples:
>> Microsoft's registered customers database (all MS products bought by any
>> customer, including operating systems)
>> Tolls taken on the New Jersey road system for FY 2009
>> DNA data from the Human Genome Project
>>
>> .
>
> please also think of our most risk-exposed users, the ones using Poker /
> Roulette simulation and analysis software with a PostgreSQL database below.
> There are so many rounds of Poker to play  :)
>
> Harald
>
>
> --
> GHUM Harald Massa
> persuadere et programmare
> Harald Armin Massa
> Spielberger Straße 49
> 70435 Stuttgart
> 0173/9409607
> no fx, no carrier pigeon
> -
> Using PostgreSQL is mostly about sleeping well at night.
>


Re: [GENERAL] Using Postgres to store high volume streams of sensor readings

2008-11-21 Thread Nikolas Everett
You might want to look into how OpenNMS uses RRDTool.  It is able to handle
a huge number of nodes by queuing inserts into the RRDs and using JRobin.

I'm not sure if it is a great solution for what you are looking for, but
I've found its performance scales quite well.  I'm getting well over 500
updates per second using JRobin and an NFS disk.  I'm sure I could do better
but my application is limited by Hibernate.  Each of my files stores an
MRTG's worth of data and keeps the average and max of four points in 136k.

Here is an iostat -dmx 1 of the rrd update:
Device:  rrqm/s  wrqm/s     r/s      w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
xvda       7.00   33.00  122.00  2093.00   0.48   8.34      8.16    150.70  69.92   0.45  100.00

Device:  rrqm/s  wrqm/s     r/s      w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
xvda     493.00   26.00  248.00   540.00   2.84   2.21     13.13     43.70  55.42   1.26   99.60

My big problems with RRD are that you can't query it like you can a
database and that it tends not to keep exact values.  Oh, and it's pretty
cryptic.

Sorry to clutter up the list with RRD stuff.  I just thought it might be
pertinent to Ciprian.

On Fri, Nov 21, 2008 at 8:03 AM, Ciprian Dorin Craciun <
[EMAIL PROTECTED]> wrote:

> On Fri, Nov 21, 2008 at 2:55 PM, Gerhard Heift
> <[EMAIL PROTECTED]> wrote:
> > On Fri, Nov 21, 2008 at 02:50:45PM +0200, Ciprian Dorin Craciun wrote:
> >> Hello all!
> >>
> >> I would like to ask some advice about the following problem
> >> (related to the Dehems project: http://www.dehems.eu/ ):
> >> * there are some clients; (the clients are in fact households;)
> >> * each device has a number of sensors (about 10), and not all the
> >> clients have the same sensor; also sensors might appear and disappear
> >> dynamically; (the sensors are appliances;)
> >> * for each device and each sensor a reading is produced (at about
> >> 6 seconds); (the values could be power consumptions;)
> >> * I would like to store the following data: (client, sensor,
> >> timestamp, value);
> >> * the usual queries are:
> >> * for a given client (and sensor), and time interval, I need
> >> the min, max, and avg of the values;
> >> * for a given time interval (and sensor), I need min, max, and
> >> avg of the values;
> >> * other statistics;
> >>
> >> Currently I'm benchmarking the following storage solutions for this:
> >> * Hypertable (http://www.hypertable.org/) -- which has good insert
> >> rate (about 250k inserts / s), but slow read rate (about 150k reads /
> >> s); (the aggregates are manually computed, as Hypertable does not
> >> support other queries except scanning (in fact min and max are easy,
> >> being the first / last key in the ordered set, but avg must be done
> >> by sequential scan);)
> >> * BerkeleyDB -- quite OK insert rate (about 50k inserts / s), but
> >> fabulous read rate (about 2M reads / s); (the same issue with
> >> aggregates;)
> >> * Postgres -- which behaves quite poorly (see below)...
> >> * MySQL -- next to be tested;
> >
> > For this case I would use RRDTool: http://oss.oetiker.ch/rrdtool/
> >
> > Regards,
> >  Gerhard
> >
>
>Hi Gerhard, I know about RRDTool, but it has some limitations:
>* I must know in advance the number of sensors;
>* I must create a file for each client (and what if I have 10 thousand
> clients?);
>* I have a limited amount of history;
>* (I'm not sure about this one but I think that) I must insert
> each data point by executing a command;
>* and also I cannot replicate (distribute) it easily;
>
>Or have you used RRDTool in a similar context as mine? Do you have
> some benchmarks?
>
>Ciprian.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Table bloat in 8.3

2008-11-13 Thread Nikolas Everett
That is the expected behavior.  Postgres doesn't give back disk space, much
like Java doesn't give back memory.  It keeps a map of where the free space
is so it can use it again.

It does all this so it doesn't have to lock the table to compact it when
VACUUMing.  VACUUM FULL does lock the table to compact it.  In practice, if
you keep your free space map large enough and you have enough rows, your
tables settle down to a size close to what you'd expect.
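
If you want to keep an eye on it, something along these lines shows the
on-disk size next to the live/dead tuple counts so you can see whether a
table has actually settled down (just a sketch using the standard
statistics views):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
       n_live_tup,
       n_dead_tup
  FROM pg_stat_user_tables
 ORDER BY pg_total_relation_size(relid) DESC;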

I hope that helps,

--Nik

On Thu, Nov 13, 2008 at 2:03 PM, <[EMAIL PROTECTED]> wrote:

> I am somewhat new to Postgresql and am trying to figure out if I have a
> problem here.
>
> I have several tables that when I run VACUUM FULL on, they are under 200k,
> but after a day of records getting added they grow to 10 to 20 megabytes.
> They get new inserts and a small number of deletes and updates.
>
> A normal VACUUM does not shrink the table size, but FULL does, or dumping
> and restoring the database to a test server.
>
> I know that some extra space is useful so disk blocks don't need to be
> allocated for every insert, but this seems excessive.
>
> My question is... should I be worrying about this or is this expected
> behaviour?  I can run a daily VACUUM but if this is indicating a
> configuration problem I'd like to know.
>
> Here is an example table.   The disk size is reported at 14,049,280 bytes.
>
> pg_stat_user_tables for the live db...  table size is 14,049,280 bytes.
>
> seq_scan | 32325
> seq_tup_read | 39428832
> idx_scan | 6590219
> idx_tup_fetch| 7299318
> n_tup_ins| 2879
> n_tup_upd| 6829984
> n_tup_del| 39
> n_tup_hot_upd| 420634
> n_live_tup   | 2815
> n_dead_tup   | 0
>
> And after it is dumped and restored... size is now 188,416 bytes.
>
> seq_scan | 8
> seq_tup_read | 22520
> idx_scan | 0
> idx_tup_fetch| 0
> n_tup_ins| 2815
> n_tup_upd| 0
> n_tup_del| 0
> n_tup_hot_upd| 0
> n_live_tup   | 2815
> n_dead_tup   | 0
>
> I checked for outstanding transactions and there are none.
>
> Thanks!
>
> --
> Ian Smith
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] options for launching sql script asynchronously from web app

2008-11-07 Thread Nikolas Everett
Authenticate in the web app, drop a script in a directory, and run it with
cron maybe?
Authenticate in the web app, drop a row in a table, and let a long-running
process suck the row out and kick something off in a thread pool?

I've seen both.  You've got to monitor both somehow.  The second one is much
more work but lets you do more with the output and gives you a nice thread
pool.  There might be a simple bash-y way to make a thread pool, but I
don't know it.
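
The table-based version doesn't need much.  A minimal sketch, with made-up
names, of the kind of thing I mean (a real version needs to be more careful
about concurrent workers and failures):

CREATE TABLE script_jobs (
    id        bigserial PRIMARY KEY,
    script    text NOT NULL,
    status    text NOT NULL DEFAULT 'pending',  -- pending / running / done
    submitted timestamp NOT NULL DEFAULT now(),
    finished  timestamp,
    output    text
);

-- The worker locks one pending job inside a transaction, runs the script
-- outside the database, and then records the result.
BEGIN;
SELECT id, script
  FROM script_jobs
 WHERE status = 'pending'
 ORDER BY submitted
 LIMIT 1
   FOR UPDATE;
-- ... run the script, then, using the id returned above:
UPDATE script_jobs
   SET status = 'done', finished = now(), output = 'whatever the script said'
 WHERE id = 1;
COMMIT;

The web app just inserts a row and polls the status column for its feedback.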

--Nik

On Fri, Nov 7, 2008 at 8:06 PM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]>wrote:

> I'd like to launch some sql script asynchronously from a web app and
> have some kind of feedback later.
>
> Some form of authentication would be a plus.
>
> Is there anything ready?
>
> --
> Ivan Sergio Borgonovo
> http://www.webthatworks.it
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Performance of views

2008-11-02 Thread Nikolas Everett
We've been toying around with reworking our years-old database schema and
replacing the old tables with updatable views into the new schema.  The only
real problem we've had with it is that queries against one of our views join
unnecessary tables because the view does the join.  We don't need the columns
provided by the join and the join is kind of costly, but performance has been
great otherwise.
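
To illustrate with made-up tables (not our actual schema): the view bakes
the join in, so even queries that don't need the joined columns pay for it,
because an inner join can change which rows come back.

CREATE VIEW orders_v AS
SELECT o.id, o.placed_at, c.name AS customer_name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id;

-- Only needs columns from orders, but the join to customers still runs.
SELECT id, placed_at FROM orders_v WHERE placed_at > now() - interval '1 day';
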
On Sun, Nov 2, 2008 at 8:59 PM, Scott Marlowe <[EMAIL PROTECTED]>wrote:

> On Sun, Nov 2, 2008 at 6:39 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> > Simon,
> >
> >>   * Higher overhead mapping to original tables and indexes
> >
> > This just plain isn't true in PG, at least, and I'd think most other
> > sensible databases..
>
> Note that, at least in older versions, MySQL completely materialized a
> temporary table from a view, then used that for the view.  This is
> horribly inefficient, and results in a lot of people thinking views
> are slow.  Not sure if this has been addressed in MySQL yet, don't
> really care anymore, since I rarely use mysql for anything anymore.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] excluding tables from VACUUM ANALYZE

2008-10-30 Thread Nikolas Everett
I generally write bash one-liners for this kind of thing:

for table in $(psql -U postgres --tuples-only -c "SELECT schemaname || '.' || tablename FROM pg_tables WHERE tablename NOT IN ('table1', 'table2')"); do
    psql -U postgres -c "VACUUM ANALYZE $table"
done

This is nice because you can bring all kinds of awk/sed/grep to bear on your
problems.

On Thu, Oct 30, 2008 at 9:17 AM, Igor Neyman <[EMAIL PROTECTED]> wrote:

>  This question didn't get any traction on the "admin" list, so I'll try
> here:
>
> I want to analyze the entire database with the exception of several
> tables.
>  When I run "VACUUM ANALYZE" (or "vacuumdb -z") on the database, how can I
> exclude specific tables from being analyzed?
> Is there any place in the system catalogs where the table could be marked,
> so it's not processed (skipped) by "vacuum analyze"?
>
> Igor
>


Re: [GENERAL] partitioning a table containing millions of records

2008-10-25 Thread Nikolas Everett
There is no fast way to split an existing table into partitions.  Create a
new parent table, create partitions, create the insert trigger, and then
INSERT INTO newparent SELECT * FROM unpartitioned.  You may want to split
that into groups if you have many millions of rows.
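
A rough sketch of the whole dance, with made-up table and column names,
using the inheritance-style partitioning available today:

-- New parent with the same shape as the old table.
CREATE TABLE measurements_new (LIKE measurements INCLUDING DEFAULTS);

-- One child per range, each with a CHECK constraint so constraint
-- exclusion can skip it.
CREATE TABLE measurements_2008_10 (
    CHECK (logdate >= DATE '2008-10-01' AND logdate < DATE '2008-11-01')
) INHERITS (measurements_new);

-- Insert trigger that routes new rows to the right child.
CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2008-10-01' AND NEW.logdate < DATE '2008-11-01' THEN
        INSERT INTO measurements_2008_10 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for date %', NEW.logdate;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert
    BEFORE INSERT ON measurements_new
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert_trigger();

-- Move the existing rows over in manageable chunks, e.g. a month at a time.
INSERT INTO measurements_new
SELECT * FROM measurements
WHERE logdate >= DATE '2008-10-01' AND logdate < DATE '2008-11-01';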


On Sat, Oct 25, 2008 at 6:47 AM, Andreas Jochem <[EMAIL PROTECTED]> wrote:

> Hi,
> I have a question concerning the topic of partitioning.
>
> I have a table with millions of records and I would like to partition it. I
> have already read the documentation but it could not answer my question.
>
> My question is, the following:
> Do I have to create the master and child tables and so  on before inserting
> the data or is it also possible to partition a table which already contains
> millions of records??? And how can I do this??? Has anybody done this
> before???
>
> Thanks.
>
> Andi
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] How to force PostgreSQL to use multiple cores within one connection?

2008-10-01 Thread Nikolas Everett
If you happen to be using JDBC you can also get copy to work:

http://kato.iki.fi/sw/db/postgresql/jdbc/copy/

On Wed, Oct 1, 2008 at 9:24 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:

> On Wed, Oct 1, 2008 at 6:44 AM, Sergey A. <[EMAIL PROTECTED]> wrote:
> > Hello.
> >
> > My application generates a large number of inserts (~ 2000 per second)
> > using one connection to PostgreSQL. All queries are buffered in memory
> > and then the whole buffers are sent to the DB. But when I use two
> > connections to PostgreSQL instead of one on a dual core CPU (i.e. I use
> > two PostgreSQL processes) to insert my buffers I see that things
> > go 1.6 times faster.
> >
> > Using several connections in my application is somewhat tricky, so I
> > want to move this problem to PostgreSQL's side. Is there any method
> > for PostgreSQL to process huge inserts coming from one connection on
> > different cores?
>
> If you are buffering inserts, you can get an easy performance boost by
> using copy as others have suggested.  Another approach is to use a
> multi-row insert statement:
>
> insert into something values (1,2,3), (2,4,6), ...
>
> Using multiple CPUs basically requires multiple connections.  This can
> be easy or difficult depending on how you are connecting to the
> database.
>
> merlin
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Free Cache Memory (Linux) and Postgresql

2008-09-30 Thread Nikolas Everett
If it's the OS cache, the kernel ought to free the memory when there is
something else worth caching.  It's not a big deal if the cache is full so
long as the system still performs well.

On Tue, Sep 30, 2008 at 9:51 AM, Denis Gasparin <[EMAIL PROTECTED]> wrote:

> Tom Lane wrote:
> > Denis Gasparin <[EMAIL PROTECTED]> writes:
> >
> >> I'm evaluating to issue the drop_caches kernel command (echo 3 >
> >> /proc/sys/vm/drop_caches) in order to free unused pagecache, directory
> >> entries and inodes.
> >>
> >
> > Why in the world would you think that's a good idea?
> >
> >   regards, tom lane
> >
> >
> We see cached memory growing on a constant basis, even if there are no
> connections to the database.
>
> We have some tables that are truncated and reloaded with updated data on a
> regular basis (every 3-4 days).
>
> It seems like postgres or the operating system (linux) is keeping that old
> data in cache even if it has been deleted.
>
> We're searching for a way to free that memory without shutting down pgsql.
>
> Thank you for your help,
> Denis
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>