Hi,
I've been facing an out-of-memory condition after running Slony for several
hours to replicate a 1TB database with about 23,000 tables. The error occurs
after about 50% of the tables have been replicated.
Most of the 48GB of memory is being used for the file system cache, but for
some reason the initial copy
On Thu, Dec 11, 2014 at 1:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
I've been facing an out of memory condition after running Slony several hours to
get a 1TB database with about 23,000 tables replicated. The error occurs after
Slony version is 2.2.3
On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
Just wondering what slony version you're using?
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera vi...@khera.org wrote:
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane t...@sss.pgh.pa.us wrote:
needed to hold
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
You sure you don't have a ulimit getting in the way?
bytes
Max nice priority        0          0
Max realtime priority    0          0
Max realtime timeout     unlimited  unlimited  us
[root@2-NfseNet-SGDB ~]#
On Thu, Dec 11, 2014 at 6:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Carlos Henrique Reimer carlos.rei
follow to identify the root cause in order to prevent it from happening again?
Thank you!
On Tue, Aug 6, 2013 at 9:14 PM, Sergey Konoplev gray...@gmail.com wrote:
On Tue, Aug 6, 2013 at 4:17 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
I have tried to drop the index and the reindex
directory to the new box.
I hope the error will not be propagated to the new box.
Reimer
On Mon, Aug 5, 2013 at 10:42 AM, Adrian Klaver adrian.kla...@gmail.com wrote:
On 08/05/2013 06:24 AM, Carlos Henrique Reimer wrote:
Hi,
Yes, I agree with you that it must be upgraded to a supported version
, 2013 at 8:35 AM, Craig Ringer cr...@2ndquadrant.com wrote:
On 08/04/2013 02:41 AM, Carlos Henrique Reimer wrote:
Hi,
I have a Windows box running Windows Server 2003 Enterprise Edition
Service Pack 2 with PostgreSQL 8.2.23 and getting a server crash while
trying to select a table
Hi,
I have a Windows box running Windows Server 2003 Enterprise Edition Service
Pack 2 with PostgreSQL 8.2.23 and getting a server crash while trying to
select a table:
select * from TOTALL.tt_est where assina=' kdkd' ;
Dumping the table with pg_dump or creating indexes in this table produce
Hi,
Currently, our application is still using PG 8.2 and we are trying to use
9.2 but there are some problems related with the implicit casts removed on
8.3.
Example:
1) select 'teste'||1;
2) select trim(1);
Selects 1 and 2 run fine on 8.2, but on 9.2 select 1 is OK and select 2
gets an error.
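For reference, 8.3 removed many implicit casts from non-text types to the text operators and functions; a minimal workaround (not from this thread, just a sketch) is to cast explicitly:

```sql
-- On 9.2 these need an explicit cast, because the implicit
-- integer-to-text casts were removed in 8.3:
SELECT 'teste' || 1::text;   -- concatenation with an explicit cast
SELECT trim(1::text);        -- trim() expects text, so cast the integer
```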
, May 15, 2013 at 3:17 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Hi,
Currently, our application is still using PG 8.2 and we are trying to use
9.2 but there are some problems related with the implicit casts removed on
8.3.
Example:
1) select 'teste'||1;
2) select trim(1
version of 9.2 you are working with? I am also at 9.2 and it's
working fine.
Try out using
select 'teste'||1::int;
See if it works or not.
On Wed, May 15, 2013 at 3:41 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Actually, as stated in my first note, this is what I've done
It works if I drop the functions, but then select trim(1) does not work.
On Wed, May 15, 2013 at 5:38 PM, AI Rumman rumman...@gmail.com wrote:
Drop those functions and try again.
On Wed, May 15, 2013 at 4:22 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
The PG version
and ct.casttarget = target_t.oid
and ct.castfunc = proc.oid
I get 144 rows.
http://www.rummandba.com/2013/02/postgresql-type-casting-information.html
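The fragment above appears to come from a catalog query against pg_cast; a hedged reconstruction (the alias names are assumptions) that lists casts with their source type, target type, and cast function:

```sql
-- List casts backed by a function (binary-coercible casts with
-- castfunc = 0 are excluded by the inner join on pg_proc).
SELECT source_t.typname AS source,
       target_t.typname AS target,
       proc.proname     AS cast_function
FROM pg_cast ct
JOIN pg_type source_t ON ct.castsource = source_t.oid
JOIN pg_type target_t ON ct.casttarget = target_t.oid
JOIN pg_proc proc     ON ct.castfunc   = proc.oid;
```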
On Wed, May 15, 2013 at 4:54 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
It works if I drop the functions
Hi,
We are developing a solution which will run on thousands of small cash till
machines running Linux, and we would like to use PostgreSQL, but there is an
insecurity feeling regarding the solution, basically because these boxes
would be exposed to an insecure environment and insecure procedures
Hi,
We're facing a weird performance problem in one of our PostgreSQL servers
running 8.0.26.
What can explain the difference between calling the same query inside and
outside a cursor? If we run the query outside a cursor we get a response
time of 755ms, and 33454ms if we call the same query inside a cursor.
Should it not be the same inside or outside a cursor?
Thank you in advance!
On Wed, Feb 13, 2013 at 11:21 AM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Hi,
We're facing a weird performance problem in one of our PostgreSQL servers
running 8.0.26.
What can explain the difference between
Hi,
I'm trying to figure out why a query runs in 755ms in the morning and in
20054ms (26x slower) in the evening.
=11982.597..11982.597 rows=0 loops=1)
Filter: (sittrib8 = 33)
Total runtime: 11982.654 ms
(3 rows)
Thank you!
On Wed, Feb 13, 2013 at 7:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
Carlos Henrique Reimer carlos.rei...@opendb.com.br wrote:
I'm trying
wrote:
Carlos Henrique Reimer carlos.rei...@opendb.com.br wrote:
Anyway, it does not seem related to statistics, as the query plan
is exactly the same for both scenarios, morning and evening:
Morning:
Index Scan using pagpk_aux_mes, pagpk_aux_mes, pk_cadpag,
pk_cadpag, pk_cadpag
, Nov 13, 2012 at 5:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
That is what I got from gdb:
ExecutorState: 11586756656 total in 1391 blocks; 4938408 free (6
chunks); 11581818248 used
So, query-lifespan memory leak. After
Hi,
That is what I got from gdb:
TopMemoryContext: 88992 total in 10 blocks; 10336 free (7 chunks); 78656
used
Type information cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
Operator class
Hi,
What is the best way to attach a debugger to the SELECT and identify why it
is exhausting server storage?
Thank you in advance!
On Fri, Nov 9, 2012 at 4:10 AM, Craig Ringer cr...@2ndquadrant.com wrote:
On 11/08/2012 11:35 PM, Carlos Henrique Reimer wrote:
Hi Craig,
work_mem
Hi,
The following SQL join command runs the PostgreSQL server out of memory.
The server runs on a box with Red Hat Enterprise Linux Server release 6.3
(Santiago) and PostgreSQL 8.3.21.
select wm_nfsp from 5611_isarq.wm_nfsp
left join 5611_nfarq.nfe on
wm_nfsp.tpdoc = 7 where 1 = 1 and
, 2012 at 10:50 AM, Craig Ringer cr...@2ndquadrant.com wrote:
On 11/08/2012 06:20 PM, Carlos Henrique Reimer wrote:
Is there a way to make the PostgreSQL 8.3.21 server stop memory-bound
backends as PostgreSQL 9.0.0 does?
Are there any triggers on the table?
What's the setting for work_mem
Hi,
We're planning to move our postgreSQL database from one CPU box to another
box.
I'm considering an alternative procedure for the move, as the standard one
(pg_dump from the old box, copy the dump to the new box, psql to restore on
the new) will take about 10 hours to complete. The idea is installing
Hi,
I need to improve performance for a particular SQL command but am having
difficulty understanding the explain results.
Is there somewhere a tool could help on this?
I've stored the SQL code and corresponding explain analyze at
SQL: http://www.opendb.com.br/v1/sql.txt
Explain:
was: COPY brasil.cidade (gid, municpio, municpi0,
uf, longitude, latitude, the_geom) TO stdout;
pg_dump: *** aborted because of error
How can I fix this error?
Thank you!
2010/11/1 Filip Rembiałkowski filip.rembialkow...@gmail.com
2010/11/1 Carlos Henrique Reimer carlos.rei...@opendb.com.br
Hi
Hi,
I currently have my PostgreSQL server running on a Windows box and now we're
migrating it to a Linux operating system.
Current windows configuration:
pg_controldata shows the LC_COLLATE and LC_CTYPE are Portuguese_Brasil.1252
psql \l command shows we have databases with encoding WIN1252
Hi,
After starting the debugger in a PostgreSQL 8.3 running in Windows 2003 SP2
box I'm getting in the log a lot of the following message:
LOG: loaded library $libdir/plugins/plugin_debugger.dll
Configuration option changed to start the debugger:
shared_preload_libraries =
Hi,
Yes, once the correct schema was included in the search_path, VACUUM and ANALYZE
ran fine again.
Thank you!
On Fri, Sep 10, 2010 at 11:38 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
Yes, you're right! I found out a functional index using
ON BRASIL.tt_tit FOR EACH ROW
EXECUTE PROCEDURE BRASIL.tgtit3()
On Thu, Sep 9, 2010 at 10:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Carlos Henrique Reimer carlos.rei...@opendb.com.br writes:
We are facing the following problem in a PG 8.2 server when trying to vacuum
one of our databases
Hi,
We are facing the following problem in a PG 8.2 server when trying to vacuum
one of our databases:
vacuumdb: vacuuming database reimer
INFO: vacuuming pg_catalog.pg_database
INFO: pg_database: found 0 removable, 6 nonremovable row versions in 1
pages
INFO: index pg_database_datname_index
Hi,
I need to shrink a table with 102 GB and approximately 380,000,000 rows.
There is a vacuum full running for 13 hours and the only messages I get are:
INFO: vacuuming public.posicoes_controles
INFO: posicoes_controles: found 43960 removable, 394481459 nonremovable
row versions in 13308976
Henrique Reimer wrote:
Hi,
I need to shrink a table with 102 GB and approximately 380,000,000 rows.
What exactly are you trying to accomplish? You may be saving some space
temporarily by running vacuum full and reindex, but the database size will
probably grow back to its original size quite
that needs to be manually done and, as any manual
operation, is exposed to errors.
Maybe this changed in the new PG releases but it was this way in the past.
Thank you!
On Sun, Sep 5, 2010 at 4:46 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Sep 5, 2010 at 5:09 AM, Carlos Henrique Reimer
-- and use this as a base for a DELETE statement...
2010/8/30, George H george@gmail.com:
On Mon, Aug 30, 2010 at 5:30 AM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Hi,
We had by mistake dropped the referential integrity between two huge
tables and now I'm facing
Hi,
We had by mistake dropped the referential integrity between two huge tables
and now I'm facing the following messages when trying to recreate the
foreign key again:
alter table posicoes_controles add
CONSTRAINT protocolo FOREIGN KEY (protocolo)
REFERENCES posicoes (protocolo) MATCH
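Before the constraint can be recreated, rows violating it have to be found and dealt with; a sketch using an anti-join (table and column names taken from the fragment above):

```sql
-- Find rows in posicoes_controles whose protocolo has no match in
-- posicoes; these would make the ADD CONSTRAINT fail.
SELECT pc.protocolo
FROM posicoes_controles pc
LEFT JOIN posicoes p ON p.protocolo = pc.protocolo
WHERE p.protocolo IS NULL
  AND pc.protocolo IS NOT NULL;  -- NULLs never violate the FK
```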
width=4)
Will this work better than a pl/pgsql function as you suggested? Or is there
something even better?
Thank you!
2010/8/30 George H george@gmail.com
On Mon, Aug 30, 2010 at 5:30 AM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Hi,
We had by mistake dropped the referential
Hi
I have a Linux box running PostgreSQL 8.2.17 and am seeing some strange results
from the to_date function.
As you can see in the following tests the problem occurs when the template
used includes upper and lower case characters for the minute (Mi or mI).
Am I using the incorrect syntax or is it a
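A minimal reproduction (the input values are assumptions, not from the original post) of the template question:

```sql
-- The documented template keyword for minutes is MI (or mi); the
-- original report concerns mixed-case templates such as 'Mi' or 'mI'
-- producing strange results on 8.2.17.
SELECT to_date('15/06/2010 10:30', 'DD/MM/YYYY HH24:MI');
SELECT to_timestamp('15/06/2010 10:30', 'DD/MM/YYYY HH24:MI');
```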
Hi,
We're facing performance problems in a Linux box running CentOS release 5
(Final) and PostgreSQL 8.2.4. I've done some basic checks in the
configuration but everything looks fine to me. One weird behaviour I've
found is the cached size shown by the
top and free Linux commands:
top -
On Fri, Sep 25, 2009 at 3:28 PM, Carlos Henrique Reimer
carlos.rei...@opendb.com.br wrote:
Hi,
We're facing performance problems in a Linux box running CentOS release 5
(Final) and PostgreSQL 8.2.4. I've done some basic checks in the
configuration but everything looks fine to me. One
Hi,
I've a plpgsql function that never ends when called, and I would like
to trace the internal function commands and see where the problem is.
How can I trace what the function is doing?
Thank you!
Carlos
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make
Hi,
When the pg_locks view is used, the internal lock manager data structures are momentarily locked, and that is why I would like to know if some application is reading the pg_locks view, and how many times. Is there a way to discover it?
Thanks in advance!
Reimer
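One way to see who queries pg_locks is to search the statement statistics, assuming the pg_stat_statements extension is available (it is not mentioned in the original post):

```sql
-- Requires the pg_stat_statements extension, loaded via
-- shared_preload_libraries and created in the database.
SELECT query, calls
FROM pg_stat_statements
WHERE query ILIKE '%pg_locks%'
ORDER BY calls DESC;
```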
Hi,
I would like to know how clustered a table is relative to some index. How can I discover this?
Reimer
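The planner's notion of physical ordering is exposed in the pg_stats view as the correlation column; a sketch (the table and column names are placeholders):

```sql
-- correlation ranges from -1 to 1; values near 1 mean the physical
-- row order closely follows the column's sort order.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'my_table'     -- placeholder table name
  AND attname   = 'my_column';   -- leading column of the index
```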
Hello,
Is there a way to discover when a table or database was last vacuumed?
Thanks in advance!
Reimer
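For tables, the statistics collector tracks this (hedged: the columns below assume PostgreSQL 8.2 or later):

```sql
-- last_vacuum / last_autovacuum are NULL if the table has never been
-- vacuumed since the statistics were last reset.
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY relname;
```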
I would like to change to C because it will give us better performance. As it is per-cluster, I would have to initdb again, but what will happen when the dump is reloaded?
Some characters that we have today in the SQL_ASCII database probably cannot be loaded in a LATIN1 database. Am I right?
Hi,
I'm trying to post messages in the performance list but they don't appear in the list.
What can be wrong?
Reimer
Hi,
I'm thinking of testing your suggestion, basically because there are only a few sites to connect, but there are some points that aren't very clear to me.
My doubts:
1. How to make a view updatable? Using the rule system?
2. Why are inserts handled differently from updates?
3. Can I not use the
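Regarding the first doubt: before INSTEAD OF triggers (9.1) and auto-updatable views (9.3), views were made updatable with the rule system; a minimal sketch (the table and view names are assumptions):

```sql
-- A view over a base table, made updatable with a rewrite rule.
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
CREATE VIEW active_accounts AS
    SELECT id, balance FROM accounts;

CREATE RULE active_accounts_upd AS
    ON UPDATE TO active_accounts DO INSTEAD
    UPDATE accounts SET balance = NEW.balance WHERE id = OLD.id;
```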
Exactly!
Jeff Davis [EMAIL PROTECTED] wrote:
Jim C. Nasby wrote: Or, for something far easier, try http://pgfoundry.org/projects/pgcluster/ which provides synchronous multi-master clustering. He specifically said that pgcluster did not work for him because the databases would be at physically
Hello,
Currently we have only one database, accessed by the headquarters and two branches, but performance at the branches is very poor and I was asked to find a way to improve it.
One possible solution is to replicate the headquarters DB to the two branches.
I read about slony-i, but
I read some documents about replication and realized that if you plan on using asynchronous replication, your application should be designed from the outset with that in mind, because asynchronous replication is not something that can be easily added on after the fact.
Am I right?
Reimer