On Thu, May 03, 2012 at 09:39:29AM +0200, Alexander Reichstadt wrote:
> Thanks, that's answering my question. In Objective-C as well as many other
> languages there is the feature to turn on Garbage Collection. It's a separate
> thread that scans memory for strong pointers, their source and origi
On Thu, May 3, 2012 at 9:39 AM, Alexander Reichstadt wrote:
> Thanks, that's answering my question. In Objective-C as well as many other
> languages there is the feature to turn on Garbage Collection. It's a
> separate thread that scans memory for strong pointers, their source and
> origin and "va
Thanks, that's answering my question. In Objective-C as well as many other
languages there is the feature to turn on Garbage Collection. It's a separate
thread that scans memory for strong pointers, their source and origin and
"vacuums" memory so to not have any leaks. Anything unreferenced and
On 05/03/12 12:08 AM, Alexander Reichstadt wrote:
since I got no answer so far I searched through the docu again. I searched for
GC as well as Garbage, and all garbage refers to is with regard to vacuuming a
database. But my question refers to whether or not memory management is with
garbage co
On Thu, May 03, 2012 at 09:08:53AM +0200, Alexander Reichstadt wrote:
>
> Hi,
>
> since I got no answer so far I searched through the docu again. I searched
> for GC as well as Garbage, and all garbage refers to is with regard to
> vacuuming a database. But my question refers to whether or not m
On Mon, Mar 5, 2012 at 6:37 AM, Mike C wrote:
> Hi,
>
> I have been using table 17-2, Postgres Shared Memory Usage
> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
> to calculate approximately how much memory the server will use. I'm
> using Postgres 9.1 on a Linux 2.6 (RHE
Mike C writes:
> Ok, that makes sense. With regards to work_mem, am I right in thinking
> the child processes only allocate enough memory to meet the task at
> hand, rather than the full 16M specified in the config file?
They only allocate what's needed ... but you have to keep in mind that
work_
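Tom's point gets cut off here, but the usual caveat is that work_mem is a per-operation limit, not a per-connection one: a single query with several sorts or hashes can allocate several multiples of it, and that multiplies again by the number of active backends. Illustrative postgresql.conf settings (example values, not recommendations):

```
# postgresql.conf -- example values only
shared_buffers = 2GB          # allocated once at startup, shared by all backends
work_mem = 16MB               # per sort/hash *operation*; one complex query
                              # may use several multiples of this at once
maintenance_work_mem = 256MB  # used by VACUUM, CREATE INDEX, ALTER TABLE ADD FK
```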
On Mon, Mar 5, 2012 at 4:04 PM, Tom Lane wrote:
> Mike C writes:
>> I have been using table 17-2, Postgres Shared Memory Usage
>> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
>> to calculate approximately how much memory the server will use. I'm
>> using Postgres 9.1 on
Mike C writes:
> I have been using table 17-2, Postgres Shared Memory Usage
> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
> to calculate approximately how much memory the server will use. I'm
> using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
> Data
On 2/15/12, Tomas Vondra wrote:
> On 15 February 2012, 15:20, Robert James wrote:
>> What parameters should I change to use the server best? What are good
>> starting points or them? What type of performance increase should I
>> see?
...
> But you haven't
> mentioned which version of PostgreSQL is us
On 15 February 2012, 15:20, Robert James wrote:
> I have a 4 core, 4 GB server dedicated to running Postgres (only other
> thing on it are monitoring, backup, and maintenance programs). It
> runs about 5 databases, backing up an app, mainly ORM queries, but
> some reporting and more complicated SQL JO
On 10/01/12 Simon Riggs said:
> You're missing 2 PQclear() calls on success.
>
> http://www.postgresql.org/docs/devel/static/libpq-exec.html#LIBPQ-EXEC-MAIN
Ah, thanks.
Diffing db.c to db.c@@/main/soulierm_MASTeleworker_dev1/3
--- db.c@@/main/soulierm_MASTeleworker_dev1/3 2011-08-10 07:09:27.
On Tue, Jan 10, 2012 at 6:48 PM, Michael P. Soulier
wrote:
> res = PQexec(conn, "BEGIN");
> if (PQresultStatus(res) != PGRES_COMMAND_OK)
> {
> fprintf(stderr, "DB: BEGIN command failed: %s", PQerrorMessage(conn));
> PQclear(res);
> exit_nicely(conn);
> }
>
> re
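The fix Simon points at is that *every* PQexec() result must be PQclear()ed, including the ones whose status check passes; the quoted error branches clear `res`, but the success paths fall through without doing so. A minimal sketch of the corrected shape (the conninfo string is a placeholder, and this obviously needs a reachable server to actually run):

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void exit_nicely(PGconn *conn)
{
    PQfinish(conn);
    exit(1);
}

int main(void)
{
    /* Placeholder conninfo -- adjust for a real server. */
    PGconn *conn = PQconnectdb("dbname=test");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit_nicely(conn);
    }

    res = PQexec(conn, "BEGIN");
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "BEGIN failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit_nicely(conn);
    }
    PQclear(res);               /* clear the result on success, too */

    res = PQexec(conn, "COMMIT");
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "COMMIT failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit_nicely(conn);
    }
    PQclear(res);               /* clear the result on success, too */

    PQfinish(conn);
    return 0;
}
```

Leaking successful results is easy to miss under valgrind because the total grows only with the number of statements executed, not with data volume.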
Cc: PostgreSQL General
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes wrote:
> After another half an hour almost the entire swap is used and the system
> performs really bad 100 TPS or lower.
> It never ru
- Original Message -
From: Scott Marlowe
To: Ioana Danes
Cc: PostgreSQL General
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes wrote:
> After another half an hour almost the entire swap is used and
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes wrote:
> Hello Everyone,
>
> I have a performance test running with 1200 clients performing this
> transaction every second:
>
>
> begin transaction
> select nextval('sequence1');
> select nextval('sequence2');
> insert into table1;
> insert into table2
Hi all,
I now know it's somewhat an "academic exercise" of little practical
importance, thanks for the clarification!!
Cheers,
Antonio
2011/9/2 Tom Lane :
> Craig Ringer writes:
>> Even better, add a valgrind suppressions file for the warnings and
>> ignore them. They are "leaks" only in the se
Craig Ringer writes:
> Even better, add a valgrind suppressions file for the warnings and
> ignore them. They are "leaks" only in the sense that a static variable
> is a leak, ie not at all.
Yeah, the bottom line here is that valgrind will warn about many things
that are not genuine problems. Yo
On 01/09/11 22:08, Antonio Vieiro wrote:
> Hi all,
>
> I'm running one of my programs with valgrind to check for memory leaks
> and I'm seeing something like this:
You only get the one report, though, right? No matter how many times
PQconnectdb is run in a loop?
It's internal stuff within OpenSS
Antonio Vieiro writes:
> I'm running one of my programs with valgrind to check for memory leaks
> and I'm seeing something like this:
> ==13207== 4 bytes in 1 blocks are still reachable in loss record 1 of 256
These are not bugs; they are just permanent allocations that are still
there when the
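Craig's suggested suppressions file looks roughly like the fragment below. The stanza name is arbitrary and the frame patterns here are illustrative guesses; generate exact ones from your own run with `valgrind --gen-suppressions=all`, then pass the file via `--suppressions=pg.supp`:

```
{
   libpq_ssl_init_still_reachable
   Memcheck:Leak
   match-leak-kinds: reachable
   ...
   obj:*/libssl.so*
}
```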
2011/4/13 Simon Riggs :
> 2011/4/13 Jorge Arévalo :
>>
>> I'm very interested in PostgreSQL memory management, especially in the
>> concept "memory context". I've read the official documentation at
>> http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
>> like to learn more about it.
2011/4/13 Jorge Arévalo :
>
> I'm very interested in PostgreSQL memory management, especially in the
> concept "memory context". I've read the official documentation at
> http://www.postgresql.org/docs/8.4/static/spi-memory.html, but I'd
> like to learn more about it. Do you recommend me any particu
2011/4/5 Jorge Arévalo :
> Hello,
>
> I'm having problems with a PostgreSQL server side C-function. It's not
> an aggregate function (operates over a only row of data). When the
> function is called over tables with ~4000 rows, it causes postgres
> backend crash with SEGFAULT. I know the error is a
Sam Nelson writes:
> Okay, we're finally getting the last bits of corruption fixed, and I finally
> remembered to ask my boss about the kill script.
> The only details I have are these:
> 1) The script does nothing if there are fewer than 1000 locks on tables in
> the database
> 2) If there are
On Tue, Sep 21, 2010 at 12:57 PM, Sam Nelson wrote:
>> On Thu, Sep 9, 2010 at 8:14 AM, Merlin Moncure wrote:
>> Naturally people are going to be skeptical of ec2 since you are so
>> abstracted from the hardware. Maybe all your problems stem from a
>> single explainable incident -- but we definit
Okay, we're finally getting the last bits of corruption fixed, and I finally
remembered to ask my boss about the kill script.
The only details I have are these:
1) The script does nothing if there are fewer than 1000 locks on tables in
the database
2) If there are 1000 or more locks, it will gra
On Wed, Sep 8, 2010 at 6:55 PM, Sam Nelson wrote:
> Even if the corruption wasn't a result of that, we weren't too excited about
> the process being there to begin with. We thought there had to be a better
> solution than just killing the processes. So we had a discussion about the
> intent of t
My (our) complaints about EC2 aren't particularly extensive, but last time I
posted to the mailing list saying they were using EC2, the first reply was
someone saying that the corruption was the fault of EC2.
Not that we don't have complaints at all (there are some aspects that are
very frustratin
Merlin Moncure writes:
> On Wed, Sep 8, 2010 at 4:03 PM, Sam Nelson wrote:
>> So ... yes, it seems that those four id's are somehow part of the problem.
>> They're on amazon EC2 boxes (yeah, we're not too fond of the EC2 boxes
>> either), so memtest isn't available, but no new corruption has crop
On Wed, Sep 8, 2010 at 4:03 PM, Sam Nelson wrote:
> It figures I'd have an idea right after posting to the mailing list.
> Yeah, running COPY foo TO stdout; gets me a list of data before erroring
> out, so I did a copy (select * from foo order by id asc) to stdout; to see
> if I could make some ki
It figures I'd have an idea right after posting to the mailing list.
Yeah, running COPY foo TO stdout; gets me a list of data before erroring
out, so I did a copy (select * from foo order by id asc) to stdout; to see
if I could make some kind of guess as to whether this was related to a
single row
Sam Nelson writes:
> pg_dump: Error message from server: ERROR: invalid memory alloc request
> size 18446744073709551613
> pg_dump: The command was: COPY public.foo () TO stdout;
> That seems like an incredibly large memory allocation request - it shouldn't
> be possible for the table to really
On Wed, Sep 8, 2010 at 12:56 PM, Sam Nelson wrote:
> Hey, a client of ours has been having some data corruption in their
> database. We got the data corruption fixed and we believe we've discovered
> the cause (they had a script killing any waiting queries if the locks on
> their database hit 100
Jeff Ross wrote:
Hopefully if I can get it to run well under pgbench the same setup
will work well with drupal. The site I was worried about when I went
to this bigger server has started a little slower than originally
projected so the old server is handling the load.
The standard TPC-B-like
Greg Smith wrote:
Jeff Ross wrote:
I think I'm doing it right. Here's the whole script. I run it from
another server on the lan.
That looks basically sane--your description was wrong, not your
program, which is always better than the other way around.
Note that everything your script is d
Jeff Ross wrote:
I think I'm doing it right. Here's the whole script. I run it from
another server on the lan.
That looks basically sane--your description was wrong, not your program,
which is always better than the other way around.
Note that everything your script is doing and way more i
Greg Smith wrote:
Jeff Ross wrote:
pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10. I call it
three times for each scale. I've turned on logging to 'all' to try
and help figure out whe
Jeff Ross wrote:
pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10. I call it
three times for each scale. I've turned on logging to 'all' to try
and help figure out where the system panics
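For reference, in that invocation -c sets the number of client connections and -t the transactions per client; the database scaling factor is fixed at initialization time, so raising $SCALE on the run command only adds clients. The usual flow looks like this (host and user copied from the quoted command; values are illustrative):

```
# initialize with scaling factor 10 (pgbench_accounts gets 100,000 rows per unit of scale)
pgbench -i -s 10 -h varley.openvistas.net -U _postgresql pgbench

# then run: 10 clients, 1000 transactions each
pgbench -c 10 -t 1000 -h varley.openvistas.net -U _postgresql pgbench
```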
Tom Lane wrote:
Martijn van Oosterhout writes:
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
Can anybody briefly explain me how one postgres process allocate
memory for it needs?
There's no real maximum, as it depends on the exact usage. However, in
ge
Martijn van Oosterhout writes:
> On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
>> Can anybody briefly explain me how one postgres process allocate
>> memory for it needs?
> There's no real maximum, as it depends on the exact usage. However, in
> general postgres tries to keep
2010/2/10 Martijn van Oosterhout :
>> Can anybody briefly explain me how one postgres process allocate
>> memory for it needs?
>
> There's no real maximum, as it depends on the exact usage. However, in
> general postgres tries to keep below the values in work_mem and
> maintenance_work_mem. Most of
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
> It means that on openbsd i386 we have about 2,2G of virtual space for
> malloc, shm*. So, postgres will use that space.
>
> But mmap() use random addresses. So when you get big chunk of memory
> for shared buffers (say, 2G) then
2010/2/9 Scott Marlowe :
> On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov wrote:
>>> Isn't the usual advice here is to log the ulimit setting from the pg
>>> startup script so you can what it really is for the user at the moment
>> I think that "su" is enough:
> In previous discussions it was m
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov wrote:
> 2010/1/28 Scott Marlowe :
>>> related to maximum per-process data space. I don't know BSD very well
>>> so I can't say if datasize is the only such value for BSD, but it'd be
>>> worth checking. (Hmm, on OS X which is at least partly BSD
2010/1/28 Scott Marlowe :
>> related to maximum per-process data space. I don't know BSD very well
>> so I can't say if datasize is the only such value for BSD, but it'd be
>> worth checking. (Hmm, on OS X which is at least partly BSDish, I see
>> -m and -v in addition to -d, so I'm suspicious Op
On Wed, Jan 27, 2010 at 4:42 PM, Tom Lane wrote:
> related to maximum per-process data space. I don't know BSD very well
> so I can't say if datasize is the only such value for BSD, but it'd be
> worth checking. (Hmm, on OS X which is at least partly BSDish, I see
> -m and -v in addition to -d,
Jeff Ross writes:
> Tom Lane wrote:
>> Better look at the "ulimit" values the postmaster is started with;
> OpenBSD makes a _postgresql user on install and it is in the daemon class
> with
> the following values:
> daemon:\
> :ignorenologin:\
> :datasize=infinity:\
>
Tom Lane wrote:
Jeff Ross writes:
Running a simple select only pgbench test against it will fail with an out of
memory error as it tries to vacuum --analyze the newly created database with
750 tuples.
Better look at the "ulimit" values the postmaster is started with;
you shouldn't be get
Jeff Ross writes:
> Running a simple select only pgbench test against it will fail with an out of
> memory error as it tries to vacuum --analyze the newly created database with
> 750 tuples.
Better look at the "ulimit" values the postmaster is started with;
you shouldn't be getting that out-
On 14/01/2010 4:49 PM, Yan Cheng Cheok wrote:
I encounter a case when I call a stored procedure for the 299166th time (intensive:
I put a non-stop while-true loop to call the stored procedure)
, the following exception will be thrown from PQexec. I am rather sure the
exception are from PQexec, as ther
On Tue, Dec 29, 2009 at 4:09 PM, Alvaro Herrera
wrote:
> It's expecting 85k distinct groups. If that's not accurate, then
> HashAggregate would use more memory than expected.
Great diagnosis. There are actually about 76 million distinct groups.
> See if you can make it work by setting enable
Alvaro Herrera writes:
> It's expecting 85k distinct groups. If that's not accurate, then
> HashAggregate would use more memory than expected. See if you can make
> it work by setting enable_hashagg = off.
> If that works, good -- the real solution is different. Maybe you need
> to ANALYZE mor
Anthony wrote:
> On Tue, Dec 29, 2009 at 3:41 PM, Anthony wrote:
>
> > I'm running a group by query on a table with over a billion rows and my
> > memory usage is seemingly growing without bounds. Eventually the mem usage
> > exceeds my physical memory and everything starts swapping.
> >
>
> I
On Tue, Dec 29, 2009 at 3:41 PM, Anthony wrote:
> I'm running a group by query on a table with over a billion rows and my
> memory usage is seemingly growing without bounds. Eventually the mem usage
> exceeds my physical memory and everything starts swapping.
>
I guess I didn't ask my question.
Craig Ringer writes:
> On Tue, 2009-07-21 at 10:13 -0400, Tom Lane wrote:
>> It's not unusual for "top" to show the postmaster's child processes as
>> "postmaster" as well. Depends on the platform and the options given
>> to top.
> Ah. Thanks for clearing that one up. That'd make more sense, sin
On Tue, 2009-07-21 at 10:13 -0400, Tom Lane wrote:
> Craig Ringer writes:
> > I'm a bit puzzled about why you have three "postmaster" instances shown
> > as running.
>
> It's not unusual for "top" to show the postmaster's child processes as
> "postmaster" as well. Depends on the platform and the
Craig Ringer writes:
> I'm a bit puzzled about why you have three "postmaster" instances shown
> as running.
It's not unusual for "top" to show the postmaster's child processes as
"postmaster" as well. Depends on the platform and the options given
to top.
regards, tom la
On Tue, 2009-07-21 at 19:39 +0900, tanjunhua wrote:
> I'm sorry for sending twice, because the mail server rejected my response.
> I should compress it with cipher code(11) and the executable program is
> compressed also.
When I build your example from source I see no indication of anything
wro
On Tue, 2009-07-21 at 13:53 +0900, tanjunhua wrote:
> I get the memory leak scenario not only from Valgrind, but also from the
> output of top command.
> At first I think the memory leak occur when I disconnect database by
> Valgrind, then I write a test sample that just connect and disconnect
Because of the three-day break, my response is late.
8.1.8 is pretty old.
Also you'll have better luck getting help if you actually include the
output
from Valgrind.
the output from Valgrind is not stored. from now on, I will do it again and
get the result from Valgrind.
PS: the memory leak
Because of the three-day break, my response is late.
Valgrind is a great tool, but you must learn how to identify false
positives and tell the difference between a leak that matters (say 1kb
allocated and not freed in a loop that runs once per second) and a leak
that doesn't.
I get the memory
Sorry for the reply-to-self, but I thought I'd take ecpg out of the
equation:
#include <sys/types.h>
#include <pwd.h>
int main(void)
{
    struct passwd p;
    struct passwd *r;
    char buf[500];
    getpwuid_r(1000, &p, &buf[0], 500, &r);
    return 0;
}
... produces the same leak report.
Since you didn't include information li
Your test case doesn't build, but I've attached a trivially tweaked one
that does.
Valgrind's report (valgrind --leak-check=full ./test) on my Ubuntu 9.04
machine with Pg 8.3.7 is:
==23382== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely
lost in loss record 1 of 4
==23382==at
8.1.8 is pretty old.
Also you'll have better luck getting help if you actually include the output
from Valgrind.
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of tanjunhua
Sent: Friday, July 17, 2009 8:12 AM
To: Postgres G
> These numbers don't even have any demonstrable connection to Postgres,
> let alone to an xpath-related memory leak. You're going to need to come
> up with a concrete test case if you want anyone to investigate.
>
> regards, tom lane
As I said in the start of this thread, t
"Matt Magoffin" writes:
> I'm following up on this thread from a month ago on PG 8.3 memory use. I'm
> afraid even after updating to 8.3.3 + this patch, I still see the same
> overall memory trend. You can see what I'm looking at here with a couple
> of memory graphs.
These num
>> Gregory Stark writes:
>>> That's just a special case of what would be expected to happen with
>>> memory
>>> allocation anyways though. Few allocators return memory to the OS
>>> anyways.
>>
>> Well, that does happen on Linux for instance. Since Matt knew in his
>> original
> Gregory Stark writes:
>> That's just a special case of what would be expected to happen with
>> memory
>> allocation anyways though. Few allocators return memory to the OS
>> anyways.
>
> Well, that does happen on Linux for instance. Since Matt knew in his
> original report t
Gregory Stark writes:
> That's just a special case of what would be expected to happen with memory
> allocation anyways though. Few allocators return memory to the OS anyways.
Well, that does happen on Linux for instance. Since Matt knew in his
original report that the xpath l
"Tom Lane" writes:
> Well, you tell me --- *you* reported a behavior that isn't obviously
> explained by the bug we found.
In case it wasn't clear, the bug found was a intra-transaction memory leak.
When the transaction ended the memory would be reclaimed. That doesn't seem to
Hi,
On Wed, 02 Jul 2008, Tom Lane writes:
> Are there any foreign keys referencing this table? If so, you're
> probably running out of memory for the list of pending trigger events
> (to verify that the FK constraint isn't violated by the delete).
>
> Allowing the triggers to
> Probably the right thing for you to do now is just to install the known
> fix, and keep an eye on your server for awhile to see if you still see
> any indication of the long-term leak behavior.
Certainly, that is my plan. Once I can get the patch rolled out to these
systems, I should be able to
"Matt Magoffin" writes:
>> So there may be a second issue remaining to be found. Can you put
>> together a test case for the long-term small leak?
> Hmm, I'm not sure what else to add to this test case. This test case was a
> good example of what our database is doing with xpa
> This part seems to match the bug though --- the leak is approximately
> the same size as all the text returned by xpath() within the current
> transaction.
>
> So there may be a second issue remaining to be found. Can you put
> together a test case for the long-term small leak?
>
>
"Matt Magoffin" writes:
> Later, I added a large set of plpgsql trigger functions that operate on
> that new xml column data, using the xpath() function to extract bits of
> XML and populate them into normal tables. The server has been running in
> this fashion for many months n
"Matt Magoffin" writes:
>>> I think this should fix it.
>>> Kris Jurka
Confirmed, that makes it go away nicely here:
LibxmlContext: 57344 total in 3 blocks; 55720 free (202 chunks); 1624 used
>> It looks like xml.c source has changed considerably since 8.3 (looking at
>> re
>> I think this should fix it.
>>
>> Kris Jurka
>
> It looks like xml.c source has changed considerably since 8.3 (looking at
> revision 1.68.2.2 from the 8.3.3. release). Do you know where/if this
> patch would apply to the 8.3 branch?
I diff'ed 1.74 and 1.68.2.2, and I'm guessing this new line c
"Matt Magoffin" writes:
> It looks like xml.c source has changed considerably since 8.3
No, hardly at all actually, but this patch happens to be right next door
to one of the lines that did change. cstring_to_text() replaces some
grottier stuff that used to be used for the sam
>> I'm able to duplicate the memory leak in this function with the current
>> Fedora 8 libxml2 (2.6.32). The leak is definitely inside libxml2
>> itself, because the bloat shows up here:
>>
>
> I think this should fix it.
>
> Kris Jurka
It looks like xml.c source has changed considerably since 8.
On Wed, 2 Jul 2008, Tom Lane wrote:
"Matt Magoffin" writes:
Below is a test case that simulates the use of xpath() within a plpgsql
function in my application.
I'm able to duplicate the memory leak in this function with the current
Fedora 8 libxml2 (2.6.32). The leak is
> I looked through the libxml2 sources a little bit but couldn't
> immediately find the problem. I'm fairly confident though that
> this could be reproduced outside Postgres, by replicating the sequence
> of libxml2 calls we make in xpath(). The next step should probably be
> to build a reproduce
"Matt Magoffin" writes:
> Below is a test case that simulates the use of xpath() within a plpgsql
> function in my application.
I'm able to duplicate the memory leak in this function with the current
Fedora 8 libxml2 (2.6.32). The leak is definitely inside libxml2
itself, beca
Volkan YAZICI writes:
> We have an IBM System x3850 machine running on RHEL 4.5 Cluster Suite
> with high-availability enabled. During a huge delete process, PostgreSQL
> (8.3.1) exhausts available memory and receives an OOM kill.
Are there any foreign keys referencing this tab
>> OK, I'll try to come up with something. Do you have a recommended way of
>> capturing the amount memory being used by Postgres related to this? I
>> was
>> thinking I would have a plpgsql function that loops a large number of
>> times, calling a few xpath() calls,
>
> Yeah, that's what I'd try f
"Matt Magoffin" writes:
>> Ugh. Sounds like "small memory leak inside libxml2" --- probably not
>> going to be easy to find. Can you put together a self-contained test
>> case?
> OK, I'll try to come up with something. Do you have a recommended way of
> capturing the amount m
> Ugh. Sounds like "small memory leak inside libxml2" --- probably not
> going to be easy to find. Can you put together a self-contained test
> case?
OK, I'll try to come up with something. Do you have a recommended way of
capturing the amount memory being used by Postgres related to this? I was
"Matt Magoffin" writes:
> Later, I added a large set of plpgsql trigger functions that operate on
> that new xml column data, using the xpath() function to extract bits of
> XML and populate them into normal tables. The server has been running in
> this fashion for many months n
Greg Smith wrote:
On Mon, 7 Jan 2008, Joshua D. Drake wrote:
Certainly and iptables gives you some flexibility in connection
availability "before" it hits the actual database but without having
to jimmy the production firewall.
4) Funky tricks with things like port forwarding and filtering
On Mon, 7 Jan 2008, Joshua D. Drake wrote:
Certainly and iptables gives you some flexibility in connection availability
"before" it hits the actual database but without having to jimmy the
production firewall.
It's worth emphasizing that in many environments, it's far more likely one
will ha
Chris wrote:
Are there any other recommendations whether to use 64bit or 32bit OS
with postgresql? I just want to use 64bit if it as stable as 32bit.
I'm curious as to why you would run iptables on a database server. My
expectation would be that your database machine would be behind a
dedic
Geoffrey wrote:
[EMAIL PROTECTED] wrote:
On Monday, 7 January 2008 12:56, Florian Weimer wrote:
Or would you rather vote for 64bit because there are no problems
anymore and postgresql runs fine on 64bit debian.
We haven't run into any trouble with iptables on Debian etch, running
on amd64 har
[EMAIL PROTECTED] wrote:
On Monday, 7 January 2008 12:56, Florian Weimer wrote:
Or would you rather vote for 64bit because there are no problems
anymore and postgresql runs fine on 64bit debian.
We haven't run into any trouble with iptables on Debian etch, running
on amd64 hardware. But we us
On Monday, 7 January 2008 12:56, Florian Weimer wrote:
> > Or would you rather vote for 64bit because there are no problems
> > anymore and postgresql runs fine on 64bit debian.
>
> We haven't run into any trouble with iptables on Debian etch, running
> on amd64 hardware. But we use only fairly s
> Or would you rather vote for 64bit because there are no problems
> anymore and postgresql runs fine on 64bit debian.
We haven't run into any trouble with iptables on Debian etch, running
on amd64 hardware. But we use only fairly standard iptables
functionality.
--
Florian Weimer
On Monday, 7 January 2008 11:48, Martijn van Oosterhout wrote:
> On Mon, Jan 07, 2008 at 10:32:26AM +0100, [EMAIL PROTECTED] wrote:
> > So assuming i could choose between 4 GB RAM and 8 GB RAM and 32bit debian
> > and 64bit debian. Which option would you choose?
>
> Always go for more RAM, whether
On Mon, Jan 07, 2008 at 10:32:26AM +0100, [EMAIL PROTECTED] wrote:
> So assuming i could choose between 4 GB RAM and 8 GB RAM and 32bit debian and
> 64bit debian. Which option would you choose?
Always go for more RAM, whether you use 64 or 32-bit is really
orthogonal to that, whatever memory you
Sabin Coanda wrote:
[...]
% So, what is better from the postgres memory point of view: to use temporary
% objects, or to use common variables ?
Temp tables can cause serious bloat in some of the system catalog tables.
--
Patrick TJ McPhee
Nor
On Thu, Aug 16, 2007 at 09:17:37AM +0300, Sabin Coanda wrote:
> >>
> >> So, what is better from the postgres memory point of view: to use
> >> temporary
> >> objects, or to use common variables ?
> >
> >A temp table might take *slightly* more room than variables...
> >
> >> Can you suggest me othe
>>
>> So, what is better from the postgres memory point of view: to use
>> temporary
>> objects, or to use common variables ?
>
>A temp table might take *slightly* more room than variables...
>
>> Can you suggest me other point of views to be taken into consideration in
>> my
>> case ?
>
>Code ma
On Wed, Aug 15, 2007 at 10:21:31AM +0300, Sabin Coanda wrote:
> Hi there,
>
> I have a procedure which uses temporary objects (table and sequence). I
> tried to optimize it, using common variables (array and long varchar)
> instead. I didn't found any difference in performance, but I'd like to