> how can I call array_append from a user-defined C function?
> I know the type of the array I'm going to use (int4[]) so if there's an
> equivalent function that can be called without going through
> PG_FUNCTION_ARGS stuff...
I just found "array_set" (the array I'm using is one-dimensional
Hi,
how can I call array_append from a user-defined C function?
I know the type of the array I'm going to use (int4[]) so if there's an
equivalent function that can be called without going through
PG_FUNCTION_ARGS stuff...
Thank you
Leonardo
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
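For what it's worth, array_append is declared with polymorphic argument types (anyarray, anyelement), so it resolves its element type from the call context and is awkward to invoke via DirectFunctionCall. When the type is known to be int4[], one option is to build the result array yourself with construct_array() from the server headers. A minimal sketch (not compilable outside a server extension; make_int4_array is a made-up helper name):

```c
/* Sketch, assuming the PostgreSQL server headers; builds a fresh int4[]
 * from a C array of values rather than appending in place. */
#include "postgres.h"
#include "utils/array.h"
#include "catalog/pg_type.h"    /* INT4OID */

static ArrayType *
make_int4_array(int32 *values, int nvalues)
{
    Datum  *elems = (Datum *) palloc(nvalues * sizeof(Datum));
    int     i;

    for (i = 0; i < nvalues; i++)
        elems[i] = Int32GetDatum(values[i]);

    /* int4: element length 4, pass-by-value, int alignment */
    return construct_array(elems, nvalues, INT4OID, 4, true, 'i');
}
```

Building the whole array once is usually simpler (and cheaper) than repeated appends, since each append would copy the array.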
Hi,
trying to find how to store a large number of rows (>1 rows/sec) in a
table that has indexes on "random values" columns, I found:
http://en.wikipedia.org/wiki/TokuDB
Basically, instead of using btrees (which kill insert performance for
random values on large tables) they use a differ
On 07/06/2011 23.52, Tom Lane wrote:
> Very fast on a very narrow set of use cases ...
Can you explain a little (if possible)?
Thank you
> What about the index size? How much space do they occupy? Analyze the
> table and do this
Of course space is different. That's not the point. The point is: I'm willing
to pay the price for another HD, if that helps with performance. But it doesn't.
>
> The minimal performance difference is
> Yes, I use the same approach, but I'm not aware of any such guideline
> related to fillfactor with indexes. Anyway those guidelines need to be
> written by someone, so you have a great opportunity ;-)
I did a quick test using your example. As in your test, "increasing"
values don't get any g
> It will be really useful to see some test results where you alter the
> fillfactor and report various measurables.
It's not that easy... stressing "only" the index insertion
speed won't be simple. I would have liked some "theory"...
The docs seem to imply there are some guidelines, it's
just
> > I have an index on a timestamp value that is inserted, for 90%
> > of the inserts, in increasing order. No updates, no deletes on the
> > table (appends only).
>
> The bit about "increasing order" is a red herring here. If you have
> no updates, then you can leave the FILLFACTOR alone.
>
Hi,
the doc pages are somewhat "cryptic" regarding FILLFACTOR.
(well, at least they're cryptic to me, since I don't know a lot
of btree stuff...)
I have an index on a timestamp value that is inserted, for 90%
of the inserts, in increasing order. No updates, no deletes on the
table (appends only).
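For reference, the syntax side is simple; only the value is a judgment call. A hedged example (table and column names are made up): the b-tree default fillfactor is 90, and with inserts only (no updates or deletes) an index can even be packed full, since no page ever needs to reserve room for later row versions:

```sql
-- Hypothetical names. Default b-tree fillfactor is 90; with an
-- append-only table, packing leaf pages full wastes no space:
CREATE INDEX events_ts_idx ON events (event_ts) WITH (fillfactor = 100);
```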
> I am not sure if this could apply to your case, but maybe - unless you
> have done it before - you could look at windowing functions
Ciao Gabriele,
the problem is that the only thing the N queries have in
common is the base table; everything else is different,
because the different "gro
Hi,
I'm going to need to GROUP BY the same table
multiple times. That is, something like:
select (some aggregate functions here) from
tableA group by f1, f2
select (some other aggregate functions here) from
tableA group by f3, f4
etc
The table is pretty large; can someone suggest the
best way
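At the time of this thread the usual answer was N separate scans; later releases (PostgreSQL 9.5+) can compute several groupings in a single pass with GROUPING SETS. A sketch using the column names from the question (the aggregate is a placeholder):

```sql
-- One scan of tableA, two groupings (PostgreSQL 9.5+). In the output,
-- rows from the (f1, f2) grouping have f3/f4 NULL, and vice versa.
SELECT f1, f2, f3, f4, count(*)
FROM tableA
GROUP BY GROUPING SETS ((f1, f2), (f3, f4));
```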
> I believe this perception that SSDs are less "safe" than failure-prone
> mechanical hard drives will eventually change.
By "safe" I mean they won't corrupt data in case of crash of the machine.
have a look at
http://postgresql.1045698.n5.nabble.com/Intel-SSDs-that-may-not-suck-td4268261.html
It looks like those are "safe" to use with a db, and aren't that expensive.
> Yes, but its more than fsync.
>
> You'd need to copy the whole table, all indexes and forks to WAL if
> wal_level is set high.
"if" wal_level is set high. If it's set to minimal it would be "simple" fsyncs,
right? I mean: if it's at minimal it wouldn't take long, and it basically won't
re-wr
> I suppose that a similar effect could be achieved by something like
>
> CREATE TABLE mytable_logged AS
> SELECT * FROM mytable_unlogged;
>
> which would not produce WAL:
yes, but it would mean rewriting all the data + re-creating the indexes.
I didn't know that some statements don't
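For reference, the setting being discussed: with minimal, operations such as CREATE TABLE AS (and COPY into a table created in the same transaction) can skip WAL entirely and just fsync the new files at commit. A config sketch (note that newer releases require max_wal_senders = 0 when wal_level is minimal):

```
# postgresql.conf
wal_level = minimal
max_wal_senders = 0    # minimal cannot coexist with WAL streaming
```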
Hi,
we're inserting tons of rows in some partitioned tables, and to get
more performance out of the system we would like to use UNLOGGED
tables that, after some time, become "regular" tables (at that point, we
wouldn't want to write to them anymore; we could only select or drop
them). Would it be
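When this thread was written there was no supported way to flip an unlogged table to logged; PostgreSQL 9.5 later added exactly that. Note it is not free: the table is rewritten into WAL as part of the conversion (table name is a placeholder):

```sql
-- PostgreSQL 9.5+: the table is rewritten and WAL-logged from here on.
ALTER TABLE mytable SET LOGGED;
```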
> One option would be to create a simple 2-node cluster and run your PgSQL
> server in a migrateable VM backed by a SAN or, if your budget is more
> modest, a simple DRBD device.
>
> Personally, I like to use RHCS (Red Hat Cluster Services) with a DRBD
> array backing clustered LVM with Xen VM
> I also recommend reindexing any table that has been VACUUM FULLed.
Mmmh, from the docs I get that in 9.0 a "vacuum full" rewrites the whole table,
so I expect the indexes to be re-created anyway... so a reindexing would
be totally useless.
Am I wrong?
> > May VACUUM FULL on a table improve performance of the system?
>
> No, it will make things worse.
???
Why?
"The FULL option is not recommended for routine use, but might be useful
in special cases. An example is when you have deleted or updated most
of the rows in a table and would like th
> In pl/pgsql at any rate, functions which return a set of rows build up
> the entire result set in memory and then return the set in one go:
Ok, then pl/pgsql and pl/python (which can't return SETOF) are ruled out.
(Thank you for pointing that out).
But pl/perl seems to do the trick:
"PL/Per
Hi,
I need to generate aggregates of data coming from a stream.
I could easily do it by inserting data coming from the stream into a table,
and then query it using something like:
select from atable group by
The problem with this approach is that I would have to wait for the whole
stream to
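The buffering problem goes away if each value is folded into running per-aggregate state as it arrives, instead of storing rows and grouping afterwards. A minimal standalone sketch in C (the names are made up; this is not a PostgreSQL API):

```c
#include <stdint.h>

/* Running state for count/sum/min/max over a stream of int64 values. */
typedef struct
{
    int64_t count;
    int64_t sum;
    int64_t min;
    int64_t max;
} AggState;

static void
agg_init(AggState *s)
{
    s->count = 0;
    s->sum = 0;
    s->min = INT64_MAX;
    s->max = INT64_MIN;
}

/* Fold one stream value into the state: O(1) per value, no buffering. */
static void
agg_step(AggState *s, int64_t v)
{
    s->count++;
    s->sum += v;
    if (v < s->min) s->min = v;
    if (v > s->max) s->max = v;
}
```

With one AggState per group (e.g. in a hash table keyed by the GROUP BY columns), results are available at any point in the stream rather than only after it ends.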
> i thought PG multidimensional arrays were just arrays of arrays, and any
> dimension could be anything.
from:
http://www.postgresql.org/docs/8.4/static/arrays.html
"Multidimensional arrays must have matching extents for each dimension. A
mismatch causes an error"
Hi,
since PostgreSQL multidimensional arrays can't have different sizes per axis, I
was wondering what would happen in case I used an array of, say, 10x10
elements, where only 10x2 elements are filled and the rest are NULL. I guess
the NULL elements take space (and I would have 80% of the space wa
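For what it's worth: since 8.2, an array containing NULLs carries a per-element null bitmap, so a NULL element costs about one bit, not a full 4 bytes. This is easy to check with pg_column_size (using 8.4's array_fill to build the test values):

```sql
-- The all-NULL 10x10 array should be far smaller than the fully
-- populated one, since NULLs live only in the null bitmap.
SELECT pg_column_size(array_fill(NULL::int4, ARRAY[10,10])) AS all_nulls,
       pg_column_size(array_fill(1, ARRAY[10,10]))          AS full_array;
```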
We use continuent at work (albeit for mysql...) on a three node cluster.
That's a good project; the only thing I don't like is that you're forced
to use Java, which is not what I'd like to do (I'd prefer Ruby).
Thank you everybody for your answers.
Leonardo
pg_cluster is an example of a synchronous replication method (although
it's really considered multi-master...not master-slave)
It looks like PGCluster is what I would need; I just don't understand
how it works... aren't there some "good" docs about it?
Which are its limits?
And: is there a w
Hi,
I still don't understand how replication can be used in web applications.
Given this scenario:
1) user updates his profile -> update to the db (master)
2) web app redirects to the "profile page" -> select from db (slave)
Since (2) is a select it is issued to the slave.
How can one be sure
Have a look at MySQL gotchas...
http://sql-info.de/mysql/database-definition.html#2_4
So here's another little gem about our friends from Uppsala: If you create a
table with InnoDB storage and your server does not have InnoDB configured, it
falls back to MyISAM without telling you.
As i
When a data file for a specific table (or index?) is larger than 1GB,
it's split up into several parts. This is probably a leftover from the
time OSs used to have problems with large files.
Thank you.
Is there any documentation I can read about this?
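The segmentation is described in the "Database File Layout" chapter of the docs: each relation's data file is split into 1GB segments (a compile-time constant, RELSEG_SIZE), and the .1, .2 suffixes are simply the second and third segments of the same relation. Newer releases (9.0+) can also show the base file directly ('your_table' is a placeholder):

```sql
-- 9.0+; segments beyond the first 1GB appear as <path>.1, <path>.2, ...
SELECT pg_relation_filepath('your_table'::regclass);
```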
I got a table with oid 25459.
The file is 1073741824 bytes big.
I did some more inserts, and now I have these two new files:
size/name:
1073741824 25459.1
21053440 25459.2
What are they?
The 25459.1 looks exactly like the 25459.
I tried looking at the docs, but searching for ".1" or ".2" wasn't that
"In addition, your original invocation of the postmaster command
must have a shorter ps status display than that provided by each
server process."
Yes, using PGDATA instead of the whole path with the -D option worked:
now I can see the different status displays.
I wrote a function to sum arrays.
It works, but I had to cast the data pointer to int64 (because my arrays
are 'int8[]'):
int64* ptr1 = ARR_DATA_PTR(v1);
What if I want to write a more general function that adds values of 2
arrays of every int type? How could I do it?
Here is the function (if y
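Inside the real function you would branch on ARR_ELEMTYPE(v1) (INT2OID / INT4OID / INT8OID) and walk the data pointer with the matching C type, widening everything to int64 for the total. A standalone sketch of just that dispatch, without the PostgreSQL headers (sum_int_buffer is a made-up name; in an extension, data would be ARR_DATA_PTR(v1) and the width would come from the element type):

```c
#include <stdint.h>
#include <stddef.h>

/* Sum a raw buffer of integers whose element width is chosen at run
 * time: the same switch you would do on the array's element type OID. */
int64_t
sum_int_buffer(const void *data, size_t nelems, int elem_width)
{
    int64_t total = 0;
    size_t  i;

    switch (elem_width)
    {
        case 2:                 /* int2[] */
            for (i = 0; i < nelems; i++)
                total += ((const int16_t *) data)[i];
            break;
        case 4:                 /* int4[] */
            for (i = 0; i < nelems; i++)
                total += ((const int32_t *) data)[i];
            break;
        case 8:                 /* int8[] */
            for (i = 0; i < nelems; i++)
                total += ((const int64_t *) data)[i];
            break;
        default:
            return 0;           /* unsupported element width */
    }
    return total;
}
```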
I read "Chapter 23. Monitoring Database Activity" to monitor postgresql,
but on Solaris it doesn't work. I tried "/usr/ucb/ps", but it doesn't
work either (I only see the postmaster startup parameters). Isn't there
any other solution to see what postgresql instances are doing?
-
A couple of days ago I announced that I wrote a JDBC driver that
adds table partitioning features to databases accessed via JDBC.
I also wrote:
> In case you think this could be of any interest if integrated
> in Postgresql (I mean if it was a core functionality of Postgresql,
> not just a JDBC d
I was reading "Don't be lazy, be consistent: Postgres-R,
a new way to implement Database Replication"
and I found this:
"5.1 General configuration
PostgreSQL uses a force strategy to avoid redo recovery,
flushing all dirty buffer pages at the end of each
transaction. With this strategy, response
With mysql I know how much space a row will take, based on the datatype
of its columns. I also (approximately) know the size of indexes.
Is there a way to know that in postgresql?
Is there a way to pack (compress) data, as with myisampack for mysql?
Thank you
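In PostgreSQL, sizes are easier to measure than to predict from the schema (variable-length types, per-row header, alignment padding). pg_column_size() gives a value's or row's on-disk size, and pg_relation_size() / pg_total_relation_size() give the heap alone or heap plus indexes and TOAST; large values are compressed automatically by TOAST, so there is no myisampack-style tool to run by hand. A sketch with a made-up table name:

```sql
-- 't' is a placeholder table name.
SELECT pg_column_size(t.*) AS one_row_bytes FROM t LIMIT 1;
SELECT pg_relation_size('t')       AS heap_bytes,
       pg_total_relation_size('t') AS with_indexes_and_toast_bytes;
```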