Hi,
I want to force defaults, and wonder about the performance.
The trigger I use (below) makes the query (also below) take 45% more time.
The result is the same now, but i do have a use for using the trigger (see
"background info").
Isn't there a more efficient way to force the defaults (in the
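A hedged sketch of the two usual options, assuming hypothetical table and column names (the thread's actual trigger is not shown here). A plain column DEFAULT is applied by the server with no per-row trigger overhead; a BEFORE INSERT trigger is only needed if explicit NULLs must also be overridden, which a column DEFAULT does not do:

```sql
-- Option 1: server-applied default, no trigger cost
ALTER TABLE mytable ALTER COLUMN status SET DEFAULT 'new';

-- Option 2: BEFORE INSERT trigger that also overrides explicit NULLs
CREATE OR REPLACE FUNCTION force_defaults() RETURNS trigger AS $$
BEGIN
    NEW.status := COALESCE(NEW.status, 'new');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER force_defaults_trg
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE force_defaults();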
On Mon, Sep 10, 2012 at 9:34 AM, Craig Gibson wrote:
> Hi all
>
> I am no database wizard so I am hoping someone may be able to assist me :)
>
> I get a daily CSV file of 6.5 million records. I create a temporary
> table and COPY them in. On completion I create an index on the mdnid
> column. This
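The load described above can be sketched as follows (file path and column list are placeholders, not from the message). Building the index after COPY means it is built once, rather than maintained row by row during the 6.5-million-row load:

```sql
CREATE TEMPORARY TABLE daily_load (mdnid bigint, payload text);

-- bulk load the daily CSV in one pass
COPY daily_load FROM '/path/to/daily.csv' WITH (FORMAT csv);

-- index once, after the data is in
CREATE INDEX ON daily_load (mdnid);
ANALYZE daily_load;
```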
Em 11/09/2012 14:59, Kevin Grittner escreveu:
Edson Richter wrote:
Em 11/09/2012 14:34, Kevin Grittner escreveu:
Edson Richter wrote:
For storage, du -h --max-depth 1 on data directory gives me the
amount of data.
Biggest objects are just the tables with files.
I've 2 tables th
On Sep 11, 2012, at 22:41, Wells Oliver wrote:
> I have this view which combines a few tables. It's wide-ish, 60 rows, almost
> all columns integer. It joins five tables, all using primary keys. The
> explain output is as follows:
>
> Nested Loop Left Join (cost=0.01..92.38 rows=4 width=222)
Peter Eisentraut writes:
> We could change the tar code to produce POSIX 2001 format archives,
> which don't have that limitation. But if someone wanted to do some work
> in this area, it might be more useful to look into a zip-based format.
I find it doubtful that it's worth spending effort on
I have this view which combines a few tables. It's wide-ish, 60 rows,
almost all columns integer. It joins five tables, all using primary keys.
The explain output is as follows:
Nested Loop Left Join (cost=0.01..92.38 rows=4 width=222) (actual
time=0.291..0.711 rows=4 loops=1)
Join Filter: ((my
On Tue, 2012-09-11 at 01:21 -0400, Tom Lane wrote:
> Mike Christensen writes:
> > Oh reading the online docs, it looks like what I may have wanted was:
> > --format=custom
>
> Right. That does everything tar format does, only better --- the only
> thing tar format beats it at is you can disassem
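A sketch of the custom-format workflow being recommended (database names are placeholders). Custom format is compressed and seekable, so individual objects can be listed and restored selectively or in parallel:

```shell
pg_dump -Fc mydb > mydb.dump          # custom format: compressed, seekable
pg_restore -l mydb.dump               # list archive contents without restoring
pg_restore -d newdb -j 4 mydb.dump    # restore, optionally in parallel
```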
On 09/11/12 12:55 PM, Merlin Moncure wrote:
is there any chance this will ever be addressed?
have you got your aix completely up to date with all patches? I got
burned in a similar fashion a while back on a related call --
getaddrinfo IIRC.
It was updated to 'current' a year or so ago, I beli
On Tue, Sep 11, 2012 at 11:09 AM, John R Pierce wrote:
> we're still having issues with PG on AIX 6.1, our configurations have no
> ipv6 as our WAN is purely ipv4, but if we use listen_addresses='*', we
> consistently get an error...
>
> LOG: could not bind IPv6 socket: Address already in use
On Tue, Sep 11, 2012 at 09:09:22AM -0700, John R Pierce wrote:
>
> is there any chance this will ever be addressed?
The problem you're having is that "*" means "bind to all the addresses
on this machine", and for some reason IBM's strange and wonderful
implementation of the IP layer appears to gi
Hi,
On Mon, 2012-09-10 at 21:43 -0700, Mike Christensen wrote:
> s there a place to download pgAdmin 1.16 for openSuse (or a
> repository I can add?)
>
> All I can find is packages for 1.14, however this version is unable to
> connect to Postgres 9.2 databases.
OpenSuSE has an open build serv
On 09/11/12 10:12 AM, Adrian Klaver wrote:
Did not say what version you were on. But in 9.1:
http://www.postgresql.org/docs/9.1/interactive/runtime-config-connection.html
"The special entry * corresponds to all available IP interfaces. The entry
0.0.0.0
allows listening for all IPv4 addresses
Edson Richter wrote:
> Em 11/09/2012 14:34, Kevin Grittner escreveu:
>> Edson Richter wrote:
>>
>>> For storage, du -h --max-depth 1 on data directory gives me the
>>> amount of data.
>>
>>> Biggest objects are just the tables with files.
>>
>>> I've 2 tables that held all these objects.
Em 11/09/2012 14:34, Kevin Grittner escreveu:
Edson Richter wrote:
For storage, du -h --max-depth 1 on data directory gives me the
amount of data.
Biggest objects are just the tables with files.
I've 2 tables that held all these objects. Structure is
create table MYTABLE (id bigint
Edson Richter wrote:
> For storage, du -h --max-depth 1 on data directory gives me the
> amount of data.
> Biggest objects are just the tables with files.
> I've 2 tables that held all these objects. Structure is
>
> create table MYTABLE (id bigint not null primary key, mimetype
> varchar(
On 09/11/2012 09:09 AM, John R Pierce wrote:
> we're still having issues with PG on AIX 6.1, our configurations have no
> ipv6 as our WAN is purely ipv4, but if we use listen_addresses='*', we
> consistently get an error...
>
> LOG: could not bind IPv6 socket: Address already in use
>
>
Em 11/09/2012 14:00, Kevin Grittner escreveu:
Edson Richter wrote:
there is no problem. Just trying to reduce database size
Actual database size = 8 GB
Backup size = 1.6 GB (5x smaller)
Seems to me (IMHO) that there is room for improvement in database
storage (we don't have many indexes,
Edson Richter wrote:
> there is no problem. Just trying to reduce database size
> Actual database size = 8 GB
> Backup size = 1.6 GB (5x smaller)
>
> Seems to me (IMHO) that there is room for improvement in database
> storage (we don't have many indexes, and biggest tables are just
> the ones
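To see where the 8 GB actually lives, per-relation size functions break it down by heap, TOAST, and indexes ("mytable" is a placeholder for the tables holding the file contents):

```sql
SELECT pg_size_pretty(pg_total_relation_size('mytable'));  -- heap + TOAST + indexes
SELECT pg_size_pretty(pg_relation_size('mytable'));        -- main heap only
SELECT pg_size_pretty(pg_indexes_size('mytable'));         -- indexes only
```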
John R Pierce writes:
> we're still having issues with PG on AIX 6.1, our configurations have no
> ipv6 as our WAN is purely ipv4, but if we use listen_addresses='*', we
> consistently get an error...
> LOG: could not bind IPv6 socket: Address already in use
This was discussed at bug #649
Em 11/09/2012 09:40, Kevin Grittner escreveu:
Edson Richter wrote:
So, should I use
alter table MYTABLE set storage EXTENDED
Don't bother; that is the default.
This should already be happening automatically. Is there some
problem you're seeing that you want to fix? If so, you shoul
we're still having issues with PG on AIX 6.1, our configurations have no
ipv6 as our WAN is purely ipv4, but if we use listen_addresses='*', we
consistently get an error...
LOG: could not bind IPv6 socket: Address already in use
the workaround we've been using is to set listen_addresses
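A sketch of that workaround in postgresql.conf, per the docs quoted elsewhere in the thread ('0.0.0.0' means all IPv4 interfaces; addresses below are illustrative):

```ini
# bind IPv4 only, instead of '*' which also attempts an IPv6 socket
listen_addresses = '0.0.0.0'
# or an explicit list:
# listen_addresses = 'localhost,192.0.2.10'
```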
On 11/09/2012 14:24, Alexander Reichstadt wrote:
>
>
> What am I doing wrong?
>
What version of PG are you using? The default output format of bytea
changed from "escape" to "hex" a version or two ago - try changing it to
"escape" and see if that makes a difference:
ALTER DATABASE x SET
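The truncated command above is presumably setting bytea_output, the parameter whose default changed from "escape" to "hex" in 9.0; a sketch, keeping the message's placeholder database name "x":

```sql
ALTER DATABASE x SET bytea_output = 'escape';
-- or per session, to test quickly:
SET bytea_output = 'escape';
```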
On 09/11/12 6:24 AM, Alexander Reichstadt wrote:
What am I doing wrong?
I'm not a php programmer (hate the sloppy stuff) but nowhere do I see
you specifying a mime image type for your image data being returned to
the browser.
this is a php problem, not a postgres problem.
--
john r pierce
Sorry if this has been asked a thousand times. I read through the web and think
I have done everything I should.
I have a db with a bytea field. In it I stored data from a png file. When I
stored it I used pg_escape_bytea() and it seems it ended up
ok in the database.
I made a test setup, che
Hi All,
Hope you can assist and that I am posting to the right forum.
We currently have multiple Postgresql 9 instances running with warm standby,
and the replication work wonderfully.
The problem is the following, we take the slave database out of recovery and it
works perfectly, but when we
Also, you could try the enterprise package, it does have the pgAdmin and
probably every other bit you could have a need for now or later:
http://enterprisedb.com/downloads/postgres-postgresql-downloads
The installer walks you through all the subpackages and you can confirm for
each if you want
On 09/10/2012 09:43 PM, Mike Christensen wrote:
> Is there a place to download pgAdmin 1.16 for openSuse (or a
> repository I can add?)
>
> All I can find is packages for 1.14, however this version is unable to
> connect to Postgres 9.2 databases. Thanks!
I could find none. You might have to bui
Edson Richter wrote:
> So, should I use
>
> alter table MYTABLE set storage EXTENDED
Don't bother; that is the default.
This should already be happening automatically. Is there some
problem you're seeing that you want to fix? If so, you should
probably describe that.
-Kevin
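For reference, SET STORAGE is a per-column option, so the full syntax (with placeholder table/column names) would be:

```sql
ALTER TABLE mytable ALTER COLUMN filedata SET STORAGE EXTENDED;
```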
Jeff Janes wrote:
> Kevin Grittner wrote:
>> Jeff Janes wrote:
>>> So a server that is completely free of user activity will still
>>> generate an endless stream of WAL files, averaging one file per
>>> max(archive_timeout, checkpoint_timeout). That comes out to one
>>> 16MB file per hour (sin
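The settings being discussed can be sketched in postgresql.conf (values illustrative; a one-hour archive_timeout matches the one-16MB-file-per-hour rate described):

```ini
archive_timeout = 3600        # force a WAL segment switch at least hourly
checkpoint_timeout = 5min
```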