in with a GROUP BY, like ...
>
Wow, David, that is perfect – 0.91 seconds. Problem well and truly solved.
It took me a little while to modify my original query to use that concept, but
now it flies.
Many thanks
Frank
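For readers skimming the archive, a minimal sketch of the GROUP-BY-first pattern David suggested (hedged: the table names are placeholders; the column names follow the query quoted later in this thread). The idea is to aggregate in a derived table first, then join the already-small result, instead of joining and then aggregating over the whole view:

```sql
-- sketch only: aggregate before joining, rather than joining the whole view
SELECT c.cust_row_id, b.balance_curr
FROM customers c
JOIN (
    SELECT cust_row_id,
           SUM(CASE WHEN tran_date > '2015-08-31'
                    THEN balance ELSE 0 END) AS balance_curr
    FROM ar_trans          -- placeholder for the UNION ALL view
    GROUP BY cust_row_id
) b ON b.cust_row_id = c.cust_row_id;
```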
On 5 Oct 2017, at 9:51 AM, Frank Millman wrote:
>
> I should have re-stated the reason for my original post.
>
> Exactly the same query, on exactly the same data, takes 1.8 seconds on Sql
> Server, 1.0 seconds on SQLite3, and 1607 seconds, or 26 minutes, on
> Postgr
On 5 Oct 2017, at 8:20 AM, Frank Millman wrote:
> If anyone wants to take this further, maybe this is a good place to start.
I should have re-stated the reason for my original post.
Exactly the same query, on exactly the same data, takes 1.8 seconds on Sql
Server, 1.0 seconds on SQLi
On 4 Oct 2017, at 9:19 PM, Alban Hertroys wrote:
> On 2 Oct 2017, at 8:32, Frank Millman <fr...@chagford.com> wrote:
>
>
> On Fri, Sep 22, 2017 at 7:34 AM, Frank Millman wrote:
> >
> > On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
> > > So
On Wednesday, October 4, 2017 06:07 PM Jan de Visser wrote:
> On Monday, October 2, 2017 2:32:34 AM EDT Frank Millman wrote:
> >
> > Just checking – is this under investigation, or is this thread considered
> > closed?
>
> That's not how it works. This is a commun
From: Frank Millman
Sent: Friday, September 22, 2017 7:34 AM
To: pgsql-general@postgresql.org
Subject: Re: a JOIN to a VIEW seems slow
On Fri, Sep 22, 2017 at 7:34 AM, Frank Millman wrote:
>
> On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
> > Something is not ad
Frank Millman wrote:
>
> On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
> > Something is not adding up here. Can you EXPLAIN ANALYZE the 26 minute
> > query?
> Here it is -
>
> https://explain.depesz.com/s/cwm
>
There is one thing I have n
Merlin Moncure wrote:
On Thu, Sep 21, 2017 at 12:02 PM, Frank Millman <fr...@chagford.com> wrote:
>
> SELECT q.cust_row_id,
> SUM(CASE WHEN q.tran_date > '2015-08-31' THEN q.balance ELSE 0 END
> ) AS "balance_curr AS [DECTEXT]",
> SUM(CASE WHEN q.tran_da
On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
> Something is not adding up here. Can you EXPLAIN ANALYZE the 26 minute
> query?
Here it is -
https://explain.depesz.com/s/cwm
Frank
On Thu, Sep 21, 2017 at 3:54 PM, Merlin Moncure wrote:
>
> > On Thu, Sep 21, 2017 at 1:51 AM, Frank Millman <fr...@chagford.com> wrote:
> >
> > I did not get any response to this, but I am still persevering, and feel
> > that I am getting closer. Instead of w
On 2017-09-18 Frank Millman wrote:
>
> Here are the timings for running the query on identical data sets using
> Postgresql, Sql Server, and Sqlite3 -
>
> PostgreSQL -
> Method 1 - 0.28 sec
> Method 2 – 1607 sec, or 26 minutes
>
> Sql Server -
> Meth
> (deleted_id = 0) is not too effective - maybe some composite or partial
> > index helps.
>
> In my testing JOINS can push through UNION ALL. Why do we need to
> materialize union first? What version is this?
>
I am using version 9.4.4 on Fedora 22.
Frank Millman
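Pavel's partial-index suggestion above can be sketched as follows (hedged: every name except deleted_id is an assumption). A partial index covers only the live rows, so the (deleted_id = 0) filter can become an indexed read:

```sql
-- partial index over live rows only; names other than deleted_id are assumed
CREATE INDEX ar_trans_live_idx
    ON ar_trans (cust_row_id)
    WHERE deleted_id = 0;
```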
2017-09-14 15:09 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
2017-09-14 14:59 GMT+02:00 Frank Millman <fr...@chagford.com>:
Pavel Stehule wrote:
2017-09-14 10:14 GMT+02:00 Frank Millman <fr...@chagford.com>:
Hi all
This is a follow-up to a
Pavel Stehule wrote:
2017-09-14 10:14 GMT+02:00 Frank Millman <fr...@chagford.com>:
Hi all
This is a follow-up to a recent question I posted regarding a slow query. I
thought that the slowness was caused by the number of JOINs in the query, but
with your assistance I have found th
.
Is this analysis correct?
If so, is there any way to force it to use an indexed read?
Thanks for any pointers.
Frank Millman
From: Chris Travers
Sent: Tuesday, September 12, 2017 3:36 PM
To: Frank Millman
Cc: Postgres General
Subject: Re: [GENERAL] Joining 16 tables seems slow
Chris Travers wrote:
On Tue, Sep 12, 2017 at 3:15 PM, Frank Millman <fr...@chagford.com> wrote:
2017-09-12 12:39 GMT+02:00
2017-09-12 12:39 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
2017-09-12 12:25 GMT+02:00 Frank Millman <fr...@chagford.com>:
Pavel Stehule wrote:
2017-09-12 9:36 GMT+02:00 Frank Millman <fr...@chagford.com>:
Pavel Stehule wrote:
>
>
Pavel Stehule wrote:
2017-09-12 9:36 GMT+02:00 Frank Millman <fr...@chagford.com>:
Pavel Stehule wrote:
>
> 2017-09-12 8:45 GMT+02:00 Frank Millman <fr...@chagford.com>:
I am using 9.4.4 on Fedora 22.
I am experimenting with optimising a SQL statement. One v
Ron Johnson wrote:
>
On 09/12/2017 01:45 AM, Frank Millman wrote:
Hi all
I am using 9.4.4 on Fedora 22.
I am experimenting with optimising a SQL statement. One version uses 4 LEFT
JOINs and a 5-way CASE statement in the body. The second moves the filtering
into the JOIN section, and I
Pavel Stehule wrote:
>
> 2017-09-12 8:45 GMT+02:00 Frank Millman <fr...@chagford.com>:
I am using 9.4.4 on Fedora 22.
I am experimenting with optimising a SQL statement. One version uses 4 LEFT
JOINs and a 5-way CASE statement in the body. The second moves the filtering
i
.
Exactly the same exercise on Sql Server results in 0.06 seconds for both
versions.
I realise that, if I was selecting a large number of rows, 0.23 seconds is
trivial and the overall result could be different. But still, it seems odd.
Is this normal, or should I investigate further?
Frank Millman
Hi,
I have the following question:
Given an empty database with only schema api_dev in it, a table and a
function is created as follows:
CREATE TABLE api_dev.item_texts
(
    item_id integer,
    item_text text
)
WITH (
    OIDS=FALSE
);
CREATE OR REPLACE FUNCTION api_dev.add_texts_to_item(
ht it up, so maybe keeping one of those around
isn't too bad an idea ;)
--
Best,
Frank.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
ng for users that didn't have to pay much attention yet
to handling privileges... i.e. trying even a plain select on table_a in the
public schema as a non-system user returns something like:
ERROR: relation "table_a" does not exist
--
Best,
Frank.
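For reference, the resolution this thread converges on: table-level grants are not enough on their own; the role also needs USAGE on the containing schema. A hedged sketch (the role name is a placeholder):

```sql
GRANT USAGE ON SCHEMA public TO app_user;   -- lets the role resolve names in the schema
GRANT SELECT ON public.table_a TO app_user; -- then the table-level grant takes effect
```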
schema.
Granting 'usage' solves it, but I expect this isn't supposed to happen.
--
Best,
Frank.
ion
--
PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2, 64-bit
(1 row)
--
Best,
Frank.
schema
| | =UC/postgres |
(1 row)
--
Best,
Frank.
From: amul sul
Sent: Monday, January 02, 2017 12:42 PM
To: Frank Millman
Cc: pgsql-general
Subject: Re: [GENERAL] Difficulty modelling sales taxes
> On Mon, Jan 2, 2017 at 4:03 PM, Frank Millman <fr...@chagford.com> wrote:
> > Hi all
> >
> > It is a bit
better ideas?
Thanks
Frank Millman
From: Frank Millman
Sent: Wednesday, October 26, 2016 10:42 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Locking question
> I am designing an inventory application, and I want to ensure that the stock
> level of any item cannot go negative.
Thanks to all for some really
From: hubert depesz lubaczewski
Sent: Wednesday, October 26, 2016 10:46 AM
To: Frank Millman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Locking question
On Wed, Oct 26, 2016 at 10:42:29AM +0200, Frank Millman wrote:
> Hi all
> >
> > I am designing an inventory app
*think* that the solution is to BEGIN the transaction, then perform SELECT
... FOR UPDATE, then proceed with INSERT and COMMIT if ok, else ROLLBACK.
Is this the correct approach, or am I missing something?
Thanks
Frank Millman
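A sketch of the approach discussed above, with assumed table and column names. Locking the row with FOR UPDATE serializes concurrent stock movements; a CHECK constraint is a simpler belt-and-braces alternative that makes a negative level impossible regardless of application logic:

```sql
-- pessimistic approach: hold a row lock for the whole transaction
BEGIN;
SELECT level FROM stock WHERE item_id = 42 FOR UPDATE;
-- application verifies level >= quantity requested, then:
UPDATE stock SET level = level - 5 WHERE item_id = 42;
COMMIT;  -- or ROLLBACK if the check failed

-- declarative alternative: the database itself rejects negative levels
ALTER TABLE stock ADD CONSTRAINT level_non_negative CHECK (level >= 0);
```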
On Tue, Mar 15, 2016 at 12:02 PM, Francisco Olarte wrote:
> Hi Frank:
>
> This may bite you any day, so I would recommend doing
>
> s=> select v, pg_typeof(v) from (select floor(4.725 * power(cast(10 as
> numeric), 2) + 0.5)) as aux(v);
> v | pg_typeof
> -+-
--
numeric
I found that adding a decimal point after the 10 is the easiest way to force it
to return a numeric.
Putting this together, my solution is -
test=> select floor(4.725 * power(10., 2) + 0.5);
 floor
-------
   473
(1 row)
Can anyone see any problems with this?
Thanks
Frank
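As a cross-check outside SQL (an editorial aside, not part of the original thread): Python's decimal module does exact decimal arithmetic, like PostgreSQL's numeric type, while plain Python floats are binary doubles. The same 473-versus-472 split falls out:

```python
from decimal import Decimal
import math

# exact decimal arithmetic (like numeric): 4.725 * 100 is exactly 472.5
exact = Decimal("4.725") * Decimal("100") + Decimal("0.5")
print(math.floor(exact))    # 473

# binary float arithmetic (like double precision): 4.725 * 100 lands just below 472.5
approx = 4.725 * 100 + 0.5
print(math.floor(approx))   # 472
```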
Hi all
I am running PostgreSQL 9.4.4 on Fedora 22.
SELECT floor(4.725 * 100 + 0.5) returns 473, which is what I expected.
SELECT floor(4.725 * power(10, 2) + 0.5) returns 472, which I find surprising.
Please can someone explain the anomaly.
Thanks
Frank Millman
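A hedged explanation (inferred from the behaviour, not verified against the PostgreSQL source): with integer arguments, power() resolves to the double precision variant, so the whole expression is evaluated in binary floating point, where 4.725 has no exact representation. Any binary-float environment reproduces the effect:

```python
import math

# the nearest double to 4.725 is slightly *below* it
print(f"{4.725:.17f}")       # 4.72499999999999964

rounded = 4.725 * 100 + 0.5  # evaluates to just under 473.0
print(math.floor(rounded))   # 472, matching the surprising SQL result
```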
On Thu, 2015-10-29 at 08:33 -0400, Will McCormick wrote:
> Trying to get the bdr-plugin to install make install on RHEL7.1. Having some
> issues with make of the plugin.
>
>
>
> # make -j4 -s all make -s install
> make: *** No rule to make target `make'. Stop.
> make: *** Waiting for
Hi,
Does someone have a success story of using Postgres Full Search Capability
with significant data, lets say 50-100 GB ?
Any pointers would be much appreciated
Thanks
Frank
Thanks
Hello, I'd like to see all the tables in my database,
but can't figure out how to access the information schema.
Thanks
Jeff Janes wrote
That's not really true. There are no per-row WAL records. There is still
a per-transaction WAL record, the commit record. If you only care about
the
timing of the WAL and not the volume, changing to unlogged will not make a
difference. (These commit-only records are
Payal,
I think you completely ignored John R Pierce... any reason you're using
9.2.4? Whoever's going to help you will want to be testing on what you're
eventually going to compile
Frank
On Fri, Aug 8, 2014 at 4:54 PM, Payal Shah payals...@fico.com wrote:
Hello John,
Thank you for your
Looks like you're doing it right, you actually have to specify the user
though:
psql -U postgres
and make sure you restarted the server so your changes take effect.
Frank
On Wed, Aug 6, 2014 at 4:43 PM, Jorge Arevalo jorgearev...@libregis.org
wrote:
Hello,
I want to connect to my local
I've just used a quick
wrapper:
https://gist.github.com/frankpinto/3427cf769a72ef25ffac
It can be modified to accept arguments for the script name, run a sql
script by the same name, have a default environment, etc.
Frank
On Fri, Jul 18, 2014 at 10:43 AM, Martin Gudmundsson
martingudmunds
Hello, I can't seem to find the allowed values for OIDs.
result = PQprepare(parrentcon, insertstmt, insert, 2, /*FIXME*/);
Both arguments are to be of type char * in C and of type varchar(255) in sql.
I looked in the docs but I can't seem to find where OIDs are enumerated, please point me in the
That actually sounds terrifying. I'd throw up a caching layer ASAP to try
to reduce the rate at which those transactions are happening.
Frank
On Mon, Jul 7, 2014 at 2:25 PM, Nicolas Zin
nicolas@savoirfairelinux.com wrote:
Maybe you can prioritize your worker with ionice?
----- Original message -----
how would an instance of your program know what to connect to, or which
previous instance its predecessor was ?
normally, you have ONE database for a given set of applications, and all
the applications share the same database tables and such.
That's the problem. Is there some way to tell
Hi, I'm new to PostgreSQL and SQL in general. I wanted to write a program in C that uses an SQL database for IPC. Because multiple copies of my program might run on the same machine, I wanted a way to ensure that only one copy of each multithreaded program got one database, but I'm uncertain how
the complexities of that approach.
Thanks,
Frank
On Tue, Mar 25, 2014 at 4:46 PM, Raymond O'Donnell r...@iol.ie wrote:
On 25/03/2014 13:56, Frank Foerster wrote:
Hi,
we are currently in the process of upgrading a production/live 1 TB
database from 9.2 to 9.3 via pg_dump
very large files and piping directly into something like
psql/pg_restore on another machine etc. would greatly reduce
upgrade-time/pain.
Thanks and best regards,
Frank
Hi,
please ignore my previous question about piping pg_dump output to
pg_restore. This is already working.
Thanks,
Frank
: postgis_restore.pl is in /usr/local/share/postgis/utils/
perl /usr/local/share/postgis/utils/postgis_restore.pl epc.dump | psql
-U postgres epc
-- uninstall legacy.sql
psql -U postgres -f
/usr/local/share/postgresql/contrib/postgis-2.1/uninstall_legacy.sql
Hth,
Frank
Am 2014-02-12 18:58, schrieb
welcome :-)
Frank
- --
Frank BRONIEWSKI
METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN
tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu
.
-Frank
fine.
Anyone have a clue as to what I'm missing? TIA!
-Frank
{p.s. yes, cm_id won't normally be negative... some negative values
were inserted as part of the unit testing, which avoids confusion
with existing positive value. That shouldn't be a problem, right?}
On Thu, 5 Dec 2013, Andy Colson wrote:
On 12/5/2013 4:05 PM, Frank Miles wrote:
[snip]
Table public.credmisc
Column | Type |Modifiers
On Thu, 5 Dec 2013, Andy Colson wrote:
On 12/5/2013 4:05 PM, Frank Miles wrote:
The table schema is {\d
credmisc}:
And this is all owned by: {\dp credmisc}
You have a table credmisc, in schema credmisc, owned by credmisc?
It could be a path problem. Maybe trigger should be:
Sorry
How can I run a PostgreSQL database independently of the normal packaged
based configuration?
I just want to start up postgres or pg_ctl process using a different port,
pointing to a different data directory and get it running, with permissions
etc working okay.
I don't want it to depend on the
this? I never use this switch
and my .pgpass is used by shell scripts and other programs ...
Frank
--
Frank BRONIEWSKI
METRICO s.à r.l.
géomètres
technologies d'information géographique
rue des Romains 36
L-5433 NIEDERDONVEN
tél.: +352 26 74 94 - 28
fax.: +352 26 74 94 99
http://www.metrico.lu
Am 17.05.2013 21:21, schrieb Alfonso Afonso:
Hi Frank
Although you are thinking in OOP, SQL is itself a definition
model that you should not ignore; IMHO, try to follow the
normalization rules.
You can build a robust and normalized schema (table primarylocation ,
table
) VALUES ('My Place', 5);
having the incrementation and the uniqueness. I was thinking of
creating a number of triggers doing this for me, but wondering whether
there might be a better way.
Cheers,
Frank
BTW: Using Postgres 9.2 and up
check for HowTo for upgrading
postgres.
Cheers,
Frank
Hi folks,
I'm looking for a HowTo for the Foreign Tables feature. We are thinking of
connecting two postgres databases this way, and I wanted to try it
before I say yes or no ;) However, I didn't find any good HowTo
via $search_engine.
Cheers,
Frank
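For the record, a minimal postgres_fdw sketch of the two-database setup being considered (hedged: host, database, credentials, and table names are all placeholders):

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER other_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'otherhost', dbname 'otherdb');

CREATE USER MAPPING FOR CURRENT_USER SERVER other_db
    OPTIONS (user 'frank', password 'secret');

CREATE FOREIGN TABLE remote_items (id integer, name text)
    SERVER other_db OPTIONS (table_name 'items');

SELECT count(*) FROM remote_items;  -- runs against the remote database
```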
the username, password and database again? In Java, we
are using set catalog to do this in MySQL.
2. Based on #1 above, would the latest JDBC driver support the ability to
create this type of connection?
Thanks,
Frank
Database Administrator
if
PostgreSQL has something similar.
Thanks,
Frank
From:
Adrian Klaver adrian.kla...@gmail.com
To:
Frank Cavaliero/Boston/IBM@IBMUS
Cc:
pgsql-general@postgresql.org, pgsql-ad...@postgresql.org
Date:
02/25/2013 01:48 PM
Subject:
Re: [GENERAL] Use, Set Catalog and JDBC questions
On 02/25/2013 10
Hi John,
Thanks for the response. I will look into that as an option.
Thanks,
Frank
From:
John R Pierce pie...@hogranch.com
To:
pgsql-general@postgresql.org
Date:
02/25/2013 01:33 PM
Subject:
Re: [GENERAL] Use, Set Catalog and JDBC questions
Sent by:
pgsql-general-ow...@postgresql.org
Hi Adrian,
Thanks a lot! I will certainly look into the multiple datasources as an
option.
-Frank
From:
Adrian Klaver adrian.kla...@gmail.com
To:
Frank Cavaliero/Boston/IBM@IBMUS
Cc:
pgsql-ad...@postgresql.org, pgsql-general@postgresql.org
Date:
02/25/2013 02:16 PM
Subject:
Re
be applied in case of an upgrade. Is there anything built in for this?
Cheers,
Frank
and
pgcrypto? I see the hash in pg_shadow is prefixed with md5. I was
wondering what other prefixes may exist, say for pgcrypto. If you have any
examples of what a pgcrypto or any other PAM hash would look like (or what
they would at least begin with), that would be great.
Thanks,
Frank
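On the pg_shadow part of the question: PostgreSQL's built-in md5 password format is the literal prefix 'md5' followed by the hex md5 digest of password concatenated with username. A small Python sketch (the credentials are placeholders):

```python
import hashlib

def pg_md5_hash(password: str, username: str) -> str:
    """PostgreSQL's legacy md5 password format:
    'md5' + hex(md5(password || username))."""
    digest = hashlib.md5((password + username).encode("utf-8")).hexdigest()
    return "md5" + digest

# placeholder credentials; real pg_shadow entries look the same: 'md5' + 32 hex chars
print(pg_md5_hash("secret", "frank"))
```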
stability and/or performance, even unrelated to
PostgreSQL, I'd appreciate.
Running a small PG-Server for private purposes on openVZ. Cannot
complain so far.
Cheers,
Frank
or similar setup exists, is there also some database object that
provides a current count of the failed login attempts?
Thanks,
Frank
Frank Cavaliero
Database Administrator
IBM Infosphere Guardium
IBM Software Group, Information Management
978-899-3635 - Direct
For Technical Services Support
FYI
http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html
Am 2012-11-07 10:28, schrieb Achilleas Mantzios:
On Τετ 07 Νοε 2012 09:42:47 Frank Broniewski wrote:
Hey, this is really cool. I directly tried the script and there's a line
from the output that caught
Hey, this is really cool. I directly tried the script and there's a line
from the output that caught my eye:
mem_gap_vm: + 8812892160 ( 8404MB) [ 26%] Memory gap: UNKNOWN
is this the shared buffers? I guess so, but I want to confirm my guess ...
Frank
Am 2012-11-07 09:26, schrieb
# pgtune wizard 2012-04-04
max_connections = 100
synchronous_commit = off
So any help finding out why my system loses some RAM is greatly
appreciated :-) If more information is needed I will gladly provide it.
Frank
--
Frank BRONIEWSKI
METRICO s.à r.l.
géomètres
technologies d'information
be the maintenance_work_mem, which is set to 4GB?
Many thanks,
Frank
Am 2012-11-05 12:14, schrieb Achilleas Mantzios:
ipcs in FreeBSD is a little ... tricky.
ipcs -M
ipcs -m
ipcs -am
could be your friends
On Δευ 05 Νοε 2012 11:22:46 Frank Broniewski wrote:
Hi,
I am running a PostgreSQL server on FreeBSD
?
Take note that shared mem should be recorded for each and every postmaster
running.
On Δευ 05 Νοε 2012 14:36:44 Frank Broniewski wrote:
Hi,
thank you for your feedback. I had a look at those commands and their
output, especially in conjunction with the SEGSZ value from ipcs -am
Here's an example
-end server (12GB of Ram). It was just most of the connections were
idling most the time. Tomcat with a high number of consistent
connections for some reasons and end user stand alone clients which are
establishing a database connection on startup and keeping them until
shutdown.
Cheers,
Frank
or NFS so you have access to them via your
warm-standby-machines. I want to say: this is taken some storage but can
be reviewed kind of independent from database itself.
Cheers,
Frank
the files? (Or
are you talking about incremental changes?)
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
be up there?
How is f_firmen_isKunde() defined?
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
and my condition is if the result after grouping is greater than 1.
I use postgresql 9.1.4 x64
Can anyone tell me what I'm missing?
Not sure I understand you correctly, but maybe count() will work for you.
Maybe you would need some primary key for good values.
cheers,
Frank
Am 23.08.2012 10:45, schrieb Condor:
On , Frank Lanitz wrote:
Am 23.08.2012 09:52, schrieb Condor:
Hello ppl,
I'm trying to make a query to see how many ids have more than one row.
few records is:
ids | val | some
a | 1 | x
a | 1 | v
b | 1 | x
b | 2 | c
I focus on ids
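The count() suggestion can be sketched against the sample rows above (the table name is assumed):

```sql
-- ids that occur in more than one row
SELECT ids, count(*) AS cnt
FROM t
GROUP BY ids
HAVING count(*) > 1;
-- with the sample data above, both a and b come back with cnt = 2
```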
) and do what I need to do, then revert back to
my regular user.
It's not only about the things that can be done from within psql. At
least originally. Some of our currently workflows are basing on real
shell access.
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
you have any best
practice how to manage this? Is there any golden rule for this?
Cheers,
Frank
for the detailed idea how to do it correctly. I'm not thinking
of invoice number handling, but of something where I also don't want gaps.
Cheers,
Frank
Am 02.08.2012 17:15, schrieb Andrew Hastie:
Hi Frank,
I believe this is by design. See the bottom of the documentation on
sequences, where it states:
*Important:* To avoid blocking concurrent transactions that obtain
numbers from the same sequence, a |nextval| operation is never rolled
balance went negative), we can
issue the command ROLLBACK instead of COMMIT, and all our updates so far
will be canceled.
My understanding of all was that it includes sequences. Obviously, I'm
wrong... but how to do it right?
Cheers,
Frank
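Since sequences are deliberately exempt from rollback, the usual pattern for genuinely gapless numbering is a counter kept in an ordinary row and incremented inside the transaction; a ROLLBACK then undoes the increment too. A sketch with assumed names (note that concurrent transactions serialize on the row lock, which is the price of gaplessness):

```sql
CREATE TABLE gapless_counter (
    name       text PRIMARY KEY,
    last_value bigint NOT NULL
);
INSERT INTO gapless_counter VALUES ('invoice', 0);

BEGIN;
UPDATE gapless_counter
   SET last_value = last_value + 1
 WHERE name = 'invoice'
RETURNING last_value;     -- use this number; ROLLBACK also rolls it back
COMMIT;
```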
I am using Django to develop an app and I think I must have done a syncdb
(which deletes all records) without realizing it.
I have not vacuumed that database and I have also made a copy of the data
directory.
Is there some way to recover the deleted records?
--
Frank Church
understanding was, pg_database_size is the database size on disk. Am
I misunderstanding the docs here?
Cheers,
Frank
Am 06.06.2012 17:49, schrieb Tom Lane:
Frank Lanitz fr...@frank.uvena.de writes:
I've got an issue I'm not sure I might have a misunderstanding. When
calling
select sum(pg_database_size(datid)) as total_size from pg_stat_database
the result is much bigger than running a du -s over
On Wed, 6 Jun 2012 20:31:36 +0200
Alban Hertroys haram...@gmail.com wrote:
On 6 Jun 2012, at 16:33, Frank Lanitz wrote:
the result is much bigger than running a du -s over the postgres
folder - it's about a factor of 5 to 10 depending on the database.
Is your du reporting sizes in Bytes
Tom Lane t...@sss.pgh.pa.us writes:
[...] Can you correlate the performance hit with any specific part
of autovacuum? In particular, I'm wondering if it matters whether
vacuum is cleaning tables or indexes [...]
In case it helps, this systemtap run will report on the top few
time-sampled
out that the python code should look similar to
plan = plpy.prepare("SELECT id FROM some_table WHERE date = $1", ["date"])
rv = plpy.execute(plan, [some_date])
But somehow a last piece is missing.
Can anybody help?
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
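Putting the pieces from this exchange together, a hedged sketch of the full PL/Python function (function name, table name, and column names are illustrative; the $Body$ quoting and plpythonu language follow the snippets quoted later in the thread):

```sql
CREATE OR REPLACE FUNCTION ids_for_date(some_date date)
RETURNS void AS $Body$
    # prepare once with a typed parameter, then execute with the value
    date_plan = plpy.prepare(
        "SELECT id FROM some_table WHERE date = $1", ["date"])
    date_rs = plpy.execute(date_plan, [some_date])
    if date_rs:
        plpy.notice(date_rs[0]["id"])
$Body$ LANGUAGE plpythonu;
```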
solution, but it's working ;)
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
(date_plan,[some_date])
plpy.notice(date_rs[0][id_fld])
$Body$
language plpythonu;
Yes. Gave me the missing piece. Thanks a lot!
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
been) an issue that plpythonu has a lot of overhead and is not
able to make use of other functions? I didn't find anything in the docs for
9.1 about that.
Cheers,
Frank
[1] http://spyced.blogspot.de/2005/04/plpython-intro.html
instance, 10 volumes (8G each), (WAL on a separate EBS volume)
with the following setup:
You might want to check with Amazon here.
Cheers,
Frank
--
Frank Lanitz fr...@frank.uvena.de
them:
- infinity (I assume it's treated as NULL inside this column -
unlimited password)
- 1970-01-01 00:00:00+01 (UNIX timestamp 0. I have no f. bloody idea here)
(I'm running 8.4 here)
cheers,
Frank
Am 23.03.2012 06:45, schrieb Gerhard Wiesinger:
With a database admin of a commercial database system I've discussed
that they have to provide and they also achieve 2^31 transactions per
SECOND!
Just curious: what is causing this many transactions?
Cheers,
Frank
Am 23.03.2012 14:23, schrieb Adrian Klaver:
I would say either they got the numbers wrong or someone is pulling
your leg. That rate is not going to happen.
Maybe twitter or facebook all in all...
Cheers,
Frank
when I started to get in contact with LAMP,
mysql just worked. It wasn't fast and didn't have a lot of fancy features,
but it just worked in the default config for day-to-day stuff.
Cheers,
Frank