On Mon, Oct 22, 2012 at 6:17 PM, Alan Hodgson wrote:
> On Monday, October 22, 2012 05:55:07 PM Nikolas Everett wrote:
> > I see that pg_upgrade is an option. Having never used it, how long should I
> > expect pg_upgrade to take? Obviously we'll measure it in our
> > environment.
I was just looking at
http://www.postgresql.org/docs/devel/static/release-9-2.html and it
mentioned that a dump/reload cycle was required to upgrade from a previous
release. I just got done telling some of my coworkers that they had been
bitten by this enough times that they were done with it. Am I
hours'
2 | now() - interval '9 hours'
1 | now() - interval '8 hours'
Without lag IS NULL I miss the first row.
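A minimal sketch of the lag() approach, using the test table from the setup
quoted below (window functions need 8.4 or later; the alias and layout are mine):

SELECT state1, timestamp
FROM (SELECT state1,
             timestamp,
             lag(state1) OVER (ORDER BY timestamp) AS prev_state1
      FROM test) AS with_prev
WHERE prev_state1 IS NULL      -- the first row has no predecessor
   OR prev_state1 <> state1;   -- state1 actually changed

Writing the test as prev_state1 IS DISTINCT FROM state1 folds both conditions
into one.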
On Thu, May 27, 2010 at 11:44 AM, Nikolas Everett wrote:
> The 10 and 11 hour intervals are being skipped because I'm only interested
> in
case.
On Thu, May 27, 2010 at 12:36 PM, Justin Graf wrote:
> On 5/27/2010 9:45 AM, Nikolas Everett wrote:
>
> Sorry. Here is the setup:
> CREATE TABLE test (id BIGSERIAL PRIMARY KEY, state1 INT NOT NULL, state2
> INT NOT NULL, timestamp TIMESTAMP);
> INSERT INTO test (state
Grouping destroys the third row so that's out. No grouping at all
gives repeats of state1. Is this what partitioning is for?
Nik
On Thu, May 27, 2010 at 10:20 AM, Ozz Nixon wrote:
> Lost me a bit, do you mean DISTINCT?
>
> select distinct state1, first(timestamp) from table
>
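For the record, Postgres also has DISTINCT ON, but it keeps exactly one row
per state1 value, so a state that is left and later re-entered shows up only
once, which is exactly the problem being described here. A sketch against the
test table from the setup:

SELECT DISTINCT ON (state1) state1, timestamp
FROM test
ORDER BY state1, timestamp;   -- one row per state1 value, not per transition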
Say I have a table that stores state transitions over time like so:
id, transitionable_id, state1, state2, timestamp
I'm trying to write a query that coalesces changes in state2 away to produce
just a list of transitions of state1. I guess it would look something like
SELECT state1, FIRST(timestamp)
I've had a reporting database with just about a billion rows. Each row
was horribly large because the legacy schema had problems. We partitioned
it out by month and it ran about 30 million rows a month. With a reasonably
large box you can get that kind of data into memory and indexes are
almost
You might want to look into how OpenNMS uses RRDTool. It is able to handle
a huge number of nodes by queuing inserts into the RRDs and using JRobin.
I'm not sure if it is a great solution for what you are looking for, but
I've found its performance scales quite well. I'm getting well over 500
up
That is the expected behavior. Postgres doesn't give disk back to the OS, much
as Java doesn't give back memory. It keeps a map of where the free space is so
it can use it again.
It does all this so it doesn't have to lock the table to compact it when
VACUUMing. VACUUM FULL does lock the table to compact it.
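A quick way to see the difference on a scratch table (the table name is made
up; sizes are approximate):

CREATE TABLE bloat_demo AS SELECT generate_series(1, 100000) AS id;
SELECT pg_size_pretty(pg_relation_size('bloat_demo'));  -- baseline

DELETE FROM bloat_demo WHERE id % 2 = 0;  -- dead rows scattered through the file
VACUUM bloat_demo;                        -- maps the free space but keeps the file
SELECT pg_size_pretty(pg_relation_size('bloat_demo'));  -- unchanged; new INSERTs reuse the space

VACUUM FULL bloat_demo;                   -- exclusive lock, rewrites the table compactly
SELECT pg_size_pretty(pg_relation_size('bloat_demo'));  -- roughly half the baseline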
Authenticate in the web app, drop a script in a directory, and run it with
cron, maybe?
Authenticate in the web app, drop a row in a table, and let a long-running
process suck the row out and kick something off in a thread pool?
I've seen both. You've got to monitor both somehow. The second one is
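For the table-as-queue flavor, a minimal sketch (all names are made up):

CREATE TABLE job_queue (
    id        BIGSERIAL PRIMARY KEY,
    payload   TEXT NOT NULL,
    queued_at TIMESTAMP NOT NULL DEFAULT now()
);

-- The web app enqueues:
INSERT INTO job_queue (payload) VALUES ('run-report-42');

-- The worker claims and removes the oldest job in one transaction:
BEGIN;
DELETE FROM job_queue
WHERE id = (SELECT id FROM job_queue ORDER BY queued_at LIMIT 1 FOR UPDATE)
RETURNING payload;
COMMIT;

With several concurrent workers you'd want FOR UPDATE SKIP LOCKED (9.5+) so
they don't pile up behind each other's locks.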
We've been toying around with reworking our years old database schema and
replacing the old tables with updatable views into the new schema. The only
real problem we've had with it is that queries to one of our views seem to
be joining on unnecessary tables because the view does the join. We don'
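The shape of the problem, with made-up names:

CREATE VIEW legacy_orders AS      -- compatibility view over the new schema
SELECT o.id, o.placed_at, c.name AS customer_name
FROM new_orders o
JOIN new_customers c ON c.id = o.customer_id;

-- This only touches columns from new_orders...
SELECT id, placed_at FROM legacy_orders;
-- ...but still pays for the join: the planner only removes joins it can
-- prove don't change the row count (a LEFT JOIN to a unique key, 9.0+).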
I generally write bash one-liners for this kind of thing:
for table in $(psql -U postgres --tuples-only -c "SELECT schemaname || '.' || tablename FROM pg_tables WHERE tablename NOT IN ('table1', 'table2')"); do psql -U postgres -c "VACUUM ANALYZE $table"; done
This is nice because you can bring
There is no fast way to split an existing table into partitions. Create a
new parent table, create partitions, create the insert trigger, and then
INSERT INTO newparent SELECT * FROM unpartitioned. You may want to split
that into groups if you have many millions of rows.
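Roughly like this, in the trigger-based style (all names and the by-month
split are made up for the sketch):

CREATE TABLE events_parent (LIKE events_unpartitioned INCLUDING DEFAULTS);

CREATE TABLE events_2008_09 (
    CHECK (created_at >= DATE '2008-09-01' AND created_at < DATE '2008-10-01')
) INHERITS (events_parent);
CREATE TABLE events_2008_10 (
    CHECK (created_at >= DATE '2008-10-01' AND created_at < DATE '2008-11-01')
) INHERITS (events_parent);

CREATE FUNCTION events_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.created_at >= DATE '2008-10-01' THEN
        INSERT INTO events_2008_10 VALUES (NEW.*);
    ELSE
        INSERT INTO events_2008_09 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- the row was routed to a child; don't keep it in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_insert BEFORE INSERT ON events_parent
FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

-- Then move the data over, one month at a time if the table is huge:
INSERT INTO events_parent
SELECT * FROM events_unpartitioned
WHERE created_at >= DATE '2008-09-01' AND created_at < DATE '2008-10-01';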
If you happen to be using JDBC you can also get copy to work:
http://kato.iki.fi/sw/db/postgresql/jdbc/copy/
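Under the hood that drives the plain COPY command, so the SQL side is just
(hypothetical table):

COPY items (id, name, created_at) FROM STDIN;
-- the driver then streams the rows over the wire;
-- from psql the equivalent is \copy items FROM 'items.tsv'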
On Wed, Oct 1, 2008 at 9:24 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 1, 2008 at 6:44 AM, Sergey A. <[EMAIL PROTECTED]> wrote:
> > Hello.
> >
> > My application generate
If it's the OS cache, the kernel ought to free the memory when there is
something else worth caching. It's not a big deal if the cache is full so
long as the system still performs well.
On Tue, Sep 30, 2008 at 9:51 AM, Denis Gasparin <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Denis Gaspa