On Mon, Aug 15, 2016 at 11:40 AM, Adrian Klaver wrote:
> On 08/15/2016 08:05 AM, Ioana Danes wrote:
On Mon, Aug 15, 2016 at 10:52 AM, Adrian Klaver wrote:
> On 08/15/2016 07:40 AM, Ioana Danes wrote:
>> Hello Adrian,
Hello Adrian,
On Mon, Aug 15, 2016 at 10:00 AM, Adrian Klaver wrote:
> On 08/15/2016 06:48 AM, Ioana Danes wrote:
>> Hello All,
>> I am running postgres 9.4.8 on CentOS 7 and a few days ago I started to
>> get this error: MultiXactId 32766 has not been created yet
Hello All,
I am running postgres 9.4.8 on CentOS 7 and a few days ago I started to get
this error in two instances: MultiXactId 32766 has not been created yet --
apparent wraparound.
1. On one database server, during the nightly task that runs "vacuum
analyze". I upgraded to postgres 9.4.9 as there
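For anyone hitting the same wraparound message, the per-database (multi)xact ages can be checked from the catalogs. This is a general sketch, not a command from the thread; note that mxid_age() only exists from PostgreSQL 9.5 onward, so on the 9.4.x servers discussed here only the plain age() column is available.

```sql
-- Sketch: how close each database is to xid / multixact wraparound.
-- mxid_age() requires PostgreSQL 9.5+; drop that column on 9.4.
SELECT datname,
       age(datfrozenxid)    AS xid_age,
       mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```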
On Fri, Aug 12, 2016 at 2:50 PM, Adrian Klaver wrote:
> On 08/12/2016 08:30 AM, Ioana Danes wrote:
>> The db3 database is on a different machine from all the other
>> databases you set up, correct?
>> Yes, they
On Fri, Aug 12, 2016 at 12:22 PM, Adrian Klaver wrote:
> On 08/12/2016 08:49 AM, Ioana Danes wrote:
>> On Fri, Aug 12, 2016 at 11:44 AM, Ioana Danes wrote:
> kvm
>> Sorry
On Fri, Aug 12, 2016 at 11:44 AM, Ioana Danes wrote:
> On Fri, Aug 12, 2016 at 11:34 AM, Adrian Klaver wrote:
>> On 08/12/2016 08:30 AM, Ioana Danes wrote:
>>> On Fri, Aug 12, 2016 at 11:26 AM, Adrian Klaver
On Fri, Aug 12, 2016 at 11:34 AM, Adrian Klaver wrote:
> On 08/12/2016 08:30 AM, Ioana Danes wrote:
>> On Fri, Aug 12, 2016 at 11:26 AM, Adrian Klaver wrote:
>> On 08/12/2016 08:10 AM, Ioana
On Fri, Aug 12, 2016 at 11:26 AM, Adrian Klaver wrote:
> On 08/12/2016 08:10 AM, Ioana Danes wrote:
>> On Fri, Aug 12, 2016 at 10:47 AM, Francisco Olarte wrote:
>> CCing to the list...
On Fri, Aug 12, 2016 at 10:47 AM, Francisco Olarte wrote:
> CCing to the list...
> Thanks
> On Fri, Aug 12, 2016 at 4:10 PM, Ioana Danes wrote:
>>> given 318220 and 318216 are just a bit away (4db08/4db0c), and it
>>> repeats sporadically, have you ruled
Hi Melvin,
On Fri, Aug 12, 2016 at 9:36 AM, Melvin Davidson wrote:
> On Fri, Aug 12, 2016 at 9:09 AM, Ioana Danes wrote:
>> Hello Everyone,
>> I have new information on this case. I also opened a post for Bucardo
>> because I am still not sure w
db3, as the record in question (with drawid = *318216*) is retrieved if I
filter by drawid = *318220*.
Any help is greatly appreciated,
Thank you
On Mon, Aug 8, 2016 at 1:25 PM, Adrian Klaver wrote:
> On 08/08/2016 10:06 AM, Ioana Danes wrote:
>> On Mon, Aug 8, 2016 at
On Mon, Aug 8, 2016 at 12:55 PM, Adrian Klaver wrote:
> On 08/08/2016 09:47 AM, Ioana Danes wrote:
>> On Mon, Aug 8, 2016 at 12:37 PM, Adrian Klaver wrote:
>> On 08/08/2016 09:28 AM, Ioana Dane
On Mon, Aug 8, 2016 at 12:37 PM, Adrian Klaver wrote:
> On 08/08/2016 09:28 AM, Ioana Danes wrote:
>> On Mon, Aug 8, 2016 at 12:19 PM, Adrian Klaver wrote:
>> On 08/08/2016 09:11 AM, Ioana Danes
On Mon, Aug 8, 2016 at 12:19 PM, Adrian Klaver wrote:
> On 08/08/2016 09:11 AM, Ioana Danes wrote:
>> Hi,
>> I suspect I am having a case of data corruption. Here are the details:
>> I am running postgres 9.4.8:
>> postgresql94-9.4.8-
Hi,
I suspect I am having a case of data corruption. Here are the details:
I am running postgres 9.4.8:
postgresql94-9.4.8-1PGDG.rhel7.x86_64
postgresql94-contrib-9.4.8-1PGDG.rhel7.x86_64
postgresql94-libs-9.4.8-1PGDG.rhel7.x86_64
postgresql94-server-9.4.8-1PGDG.rhel7.x86_64
on CentOS Linux rel
It depends on the size of the table and the frequency of updates and deletes,
but you could consider an audit table with triggers for UPDATE, DELETE and
TRUNCATE. That way you at least have a way to recover deleted records.
Ioana
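A minimal sketch of that suggestion; every name here (audit_log, audit_changes, my_table) is invented for illustration. One row-level trigger captures the old row for UPDATE/DELETE, and a separate statement-level trigger records TRUNCATE, which has no per-row OLD:

```sql
CREATE TABLE audit_log (
    audited_at timestamptz NOT NULL DEFAULT now(),
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    old_row    json                  -- NULL for TRUNCATE (no row context)
);

CREATE FUNCTION audit_changes() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'TRUNCATE' THEN
        INSERT INTO audit_log (table_name, operation)
        VALUES (TG_TABLE_NAME, TG_OP);
    ELSE
        INSERT INTO audit_log (table_name, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD));
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_audit
    AFTER UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE audit_changes();
CREATE TRIGGER my_table_audit_trunc
    AFTER TRUNCATE ON my_table
    FOR EACH STATEMENT EXECUTE PROCEDURE audit_changes();
```

Deleted rows can then be rebuilt from old_row, e.g. with json_populate_record(NULL::my_table, old_row).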
On Fri, Sep 18, 2015 at 5:52 AM, Leif Jensen wrote:
> Hello Laurenz,
> Tha
/lib/postgresql94/lib64/bdr.so"
Am I missing something, doing the wrong steps? Any help is greatly
appreciated.
Thanks in advance,
Ioana Danes
Ioana Danes wrote
> If I have to filter the tmp_Cashdrawer table then it executes the
> function for all the cash drawers and then filters out the result, which
> again is not efficient...
Hm
SELECT function_call(...)
FROM tbl
WHERE tbl.pk = ...;
That should
Ioana Danes wrote
> Hi All,
> Is there any similar syntax that only invokes the procedure once and
> returns all the columns?
Generic, adapt to fit your needs.
WITH func_call AS (
SELECT function_call(...) AS func_out_col
)
SELECT (func_out_col).*
FROM func_call;
Basically yo
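To make the pattern concrete, here is a self-contained, made-up example: a function with OUT parameters returns a composite value, the CTE invokes it once, and (func_out_col).* expands the composite into ordinary columns:

```sql
-- Hypothetical single-call example of the CTE pattern above.
CREATE FUNCTION drawer_summary(p_drawid int,
                               OUT drawid int,
                               OUT total numeric)
AS $$ SELECT p_drawid, 100.00 $$ LANGUAGE sql;

WITH func_call AS (
    SELECT drawer_summary(1) AS func_out_col
)
SELECT (func_out_col).*        -- expands into columns drawid, total
FROM func_call;
```

On PostgreSQL 9.3+ a LATERAL join gives the same one-invocation-per-row behavior when driving the call from a table, e.g. SELECT s.* FROM cashdrawer c, LATERAL drawer_summary(c.drawid) s; (cashdrawer is likewise an assumed name).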
Hi All,
I would like to ask for some suggestions regarding the following scenario.
I have a cash drawer table and for each cash drawer I have a function that
collects and transforms data from different tables (and a web service using
www_fdw). In normal scenarios I would have a function to r
To: Ioana Danes
Cc: PostgreSQL General
Sent: Thursday, May 16, 2013 9:56:07 AM
Subject: Re: [GENERAL] Running out of memory at vacuum
On Thu, May 16, 2013 at 6:35 AM, Ioana Danes wrote:
Hi Jeff,
> On Tuesday, May 14, 2013, Ioana Danes wrote:
> The fix is t
Hi Jeff,
On Tuesday, May 14, 2013, Ioana Danes wrote:
> Hi all,
> I have a production database that sometimes runs out of memory at nightly
> vacuum.
> The application typically runs with around 40 postgres connections but there
> are times when the connections increa
Thanks a lot for your reply,
Ioana
- Original Message -
From: Scott Marlowe
To: Ioana Danes
Cc: Igor Neyman; PostgreSQL General
Sent: Tuesday, May 14, 2013 6:16:38 PM
Subject: Re: [GENERAL] Running out of memory on vacuum
Meant to add: I'd definitely be looking at using pgbouncer if
Hi all,
I have a production database that sometimes runs out of memory at nightly
vacuum.
The application typically runs with around 40 postgres connections, but there
are times when the connections increase because of some long-running queries.
The reason is that the operations are slow, the termi
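A manual VACUUM's memory use is governed mainly by maintenance_work_mem (allocated per table being vacuumed, and per concurrent worker), which is the first setting to check when the nightly vacuum runs out of memory. A postgresql.conf sketch; the values are illustrative, not taken from the thread:

```
maintenance_work_mem = 256MB   # cap for each VACUUM / CREATE INDEX operation
autovacuum_work_mem  = 256MB   # PostgreSQL 9.4+: separate cap for autovacuum workers
```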
at a first sight...
Thanks
- Original Message -
From: Scott Marlowe
To: Ioana Danes
Cc: Igor Neyman; PostgreSQL General
Sent: Tuesday, May 14, 2013 1:04:06 PM
Subject: Re: [GENERAL] Running out of memory on vacuum
Well definitely look at getting more memory in it if you can. 8G is
response,
Ioana
- Original Message -
From: Scott Marlowe
To: Ioana Danes
Cc: Igor Neyman; PostgreSQL General
Sent: Tuesday, May 14, 2013 12:14:18 PM
Subject: Re: [GENERAL] Running out of memory on vacuum
On Tue, May 14, 2013 at 8:30 AM, Ioana Danes wrote:
> Hi Igor,
> 1. I could remov
Anyhow, at the time of the vacuum there is nothing else going on in the
database. Sales are off.
Thanks,
- Original Message -
From: Igor Neyman
To: Ioana Danes; PostgreSQL General
Cc:
Sent: Tuesday, May 14, 2013 11:06:25 AM
Subject: Re: [GENERAL] Running out of memory on vacuum
...
Thanks for the quick response,
- Original Message -
From: Igor Neyman
To: Ioana Danes; PostgreSQL General
Cc:
Sent: Tuesday, May 14, 2013 10:11:19 AM
Subject: Re: [GENERAL] Running out of memory on vacuum
> Subject: [GENERAL] Running out of memory on vacuum
>
> Hi all,
>
Hi all,
I have a production database that sometimes runs out of memory at nightly
vacuum.
The application typically runs with around 40 postgres connections but there
are times when the connections increase because of some queries going on. The
reason is that the operations are slow, the termin
is interested.
BTW, I read PostgreSQL 9.0 High Performance by Greg Smith yesterday, and he
does cover these parameters. I should have done that earlier; it is a great
book.
Thank you for your help,
Ioana
- Original Message -
From: Scott Marlowe
To: Ioana Danes
>
> pg_dump newdb > /DUMPDIR/newdb.dmp -n dev -T corgi -w -v -F c 2>
> /DUMPDIR/newdb.log
>
Try: -T dev.corgi instead of -T corgi
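In shell form, that correction looks like this; since -n dev restricts the dump to the dev schema, the table-exclusion pattern must be schema-qualified to match. Paths and names are the ones from the quoted command:

```shell
pg_dump -n dev -T dev.corgi -w -v -F c \
        -f /DUMPDIR/newdb.dmp newdb 2> /DUMPDIR/newdb.log
```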
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
- Original Message -
From: Scott Marlowe
To: Ioana Danes
Cc: PostgreSQL General
Sent: Thursday, November 3, 2011 10:30:27 AM
Subject: Re: [GENERAL] Memory Issue
On Thu, Nov 3, 2011 at 7:34 AM, Ioana Danes wrote:
> After another half an hour almost the entire swap is used and
Hello Everyone,
I have a performance test running with 1200 clients performing this transaction
every second:
begin transaction
select nextval('sequence1');
select nextval('sequence2');
insert into table1;
insert into table2;
commit;
Table1 and table2 have no foreign keys and no triggers. Ther
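A load like that can be approximated with a custom pgbench script; the column names below are invented, since the real table definitions never appear in the thread:

```sql
-- bench.sql (hypothetical); run e.g.:  pgbench -n -c 1200 -j 8 -T 60 -f bench.sql mydb
BEGIN;
SELECT nextval('sequence1');
SELECT nextval('sequence2');
INSERT INTO table1 (id, payload) VALUES (currval('sequence1'), 'x');
INSERT INTO table2 (id, payload) VALUES (currval('sequence2'), 'x');
COMMIT;
```

The -n flag skips pgbench's own vacuum of its standard tables; max_connections on the server must of course accommodate the 1200 clients.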
Hi Mauro,
If you are trying to determine which fields changed, you can check this post:
http://jaime2ndquadrant.blogspot.com/
It might work for you.
Ioana
--- On Wed, 8/10/11, Mauro wrote:
Hi, good morning list
I'm writing a generic trigger in plpgsql to provide a system log to my system,
but I'
> Ioana Danes writes:
> > Hi,
> > I am planning to use the contrib module hstore
> > but I would like to install it on a separate schema, not public,
> > and include the schema in the search_path.
> > Do you know if th
Hi,
I am planning to use the contrib module hstore
but I would like to install it on a separate schema, not public,
and include the schema in the search_path.
Do you know if there are any issues with this scenario?
In the hstore.sql script I see it forces it into public:
-- Adjust this settin
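For what it's worth, since PostgreSQL 9.1 the contrib SQL scripts are superseded by extensions, which accept a target schema directly (the schema name below is arbitrary):

```sql
CREATE SCHEMA extensions;
CREATE EXTENSION hstore SCHEMA extensions;
SET search_path = public, extensions;
```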
Thanks a lot, Sebastian
--- On Wed, 7/27/11, Sebastian Jaenicke wrote:
> From: Sebastian Jaenicke
> Subject: Re: [GENERAL] error when compiling a c function
> To: "Ioana Danes"
> Cc: "PostgreSQL General"
> Received: Wednesday, July 27, 2011, 2:12 PM
Hello,
I built a C function that exports data from various tables into a text file,
but when I compile the function I get this error:
In file included from /usr/include/pgsql/server/postgres.h:48,
from myfile.c:1:
/usr/include/pgsql/server/utils/elog.h:69:28: error: utils/errcode
Hi Scott,
Thank you for your answer, this is exactly what happens in this situation.
--- On Fri, 7/22/11, Scott Marlowe wrote:
> From: Scott Marlowe
> Subject: Re: [GENERAL] Why do I have reading from the swap partition?
> To: "Ioana Danes"
> Cc: "PostgreSQL G
Hi Everyone,
I am trying to debug a slowness that is happening on one of my production sites
and I would like to ask you for some help.
This is my environment:
---
Dedicated server running:
SUSE Linux Enterprise Server 11 (x86_64):
VERSION = 11
PATCHLEVEL = 1
RAM = 16GB
Po
I totally agree with you, and the problem is going to be fixed. I just needed
a temporary solution until the patch goes out.
Thank you,
Ioana
--- On Wed, 3/2/11, Tom Lane wrote:
> From: Tom Lane
> Subject: Re: [GENERAL] select DISTINCT not ordering the returned rows
> To: "Ioan
I found it: disabling enable_hashagg
--- On Wed, 3/2/11, Ioana Danes wrote:
> From: Ioana Danes
> Subject: [GENERAL] select DISTINCT not ordering the returned rows
> To: "PostgreSQL General"
> Received: Wednesday, March 2, 2011, 3:35 PM
> Hi Everyone,
>
> I
field1 without changing the statement...
Thank you in advance,
Ioana Danes
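The issue in this thread: SELECT DISTINCT carries no ordering guarantee, and with hash aggregation the rows come back in hash order. Disabling enable_hashagg (the workaround found above) makes the planner fall back to a sort-based plan, so ordered output appears only as a side effect; the guaranteed fix, where the statement can be changed, is an explicit ORDER BY. A sketch with an invented table name:

```sql
SET enable_hashagg = off;                 -- session-level workaround from the thread
SELECT DISTINCT field1 FROM some_table;   -- sorted now, but only as a plan side effect

SELECT DISTINCT field1 FROM some_table ORDER BY field1;  -- the contractual ordering
```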
Hi,
I found the issue and, as you already suspected, it was not postgres related.
Thanks a lot for doing such a great job,
Ioana
--- On Fri, 2/18/11, Ioana Danes wrote:
> From: Ioana Danes
> Subject: [GENERAL] Logged statement apparently did not commited...
> To: pgsq
Hello Everyone,
I would like to kindly ask for your help regarding a strange situation I met
yesterday. It might not be a postgres issue, but I have run out of ideas on
what to test or check, and so far I could not reproduce the behavior.
I have 2 (master - slave) postgres 9.0.3 databases running on
Ioana
--- On Mon, 6/8/09, Ioana Danes wrote:
> From: Ioana Danes
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Scott Marlowe"
> Cc: "Bill Moran" , "Vyacheslav Kalinin"
> , "PostgreSQL General"
> Receiv
wrote:
> From: Scott Marlowe
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Ioana Danes"
> Cc: "Bill Moran" , "Vyacheslav Kalinin"
> , "PostgreSQL General"
> Received: Monday, June 8, 2009, 2:37 PM
> On Mon, Jun 8
minal...
Thanks a lot for your answers,
Ioana Danes
--- On Mon, 6/8/09, Bill Moran wrote:
> From: Bill Moran
> Subject: Re: [GENERAL] Duplicate key issue in a transaction block
> To: "Ioana Danes"
> Cc: "PostgreSQL General"
> Received: Monday, June 8, 2009, 12:33 PM
> In response to Ioana Danes:
connection.close();
}
catch (Exception cnfe)
{
cnfe.printStackTrace();
}
}
public static void main (String args[])
{
if (args == null || args.length != 1 ||
    (!args[0].toLowerCase().equals("
Try working with this:
SELECT m.key AS mailings_key,
m.name AS mailings_name,
COALESCE(u.key,'') AS userdata_key,
COALESCE(u.uid,'') AS userdata_uid,
COALESCE(u.name,'') AS userdata_name
FROM (SELECT m0.key, m0.name, u0.uid
FROM mailings m0, (SELECT DISTINCT uid
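The subquery-plus-COALESCE approach being sketched above can usually be written as a single outer join. The userdata table name and the join condition below are assumptions (only mailings, the m0/u0 aliases, and the uid/key/name columns appear in the fragment):

```sql
SELECT m.key  AS mailings_key,
       m.name AS mailings_name,
       COALESCE(u.key,  '') AS userdata_key,    -- assumes text columns
       COALESCE(u.uid,  '') AS userdata_uid,
       COALESCE(u.name, '') AS userdata_name
FROM mailings m
LEFT JOIN userdata u ON u.uid = m.uid;          -- join condition assumed
```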
tgres.log file for replication
> To: pgsql-general@postgresql.org
> Received: Thursday, November 27, 2008, 6:10 PM
> [EMAIL PROTECTED] (Ioana Danes) writes:
> > I've been wondering if anybody tried to use the
> postgresql csv log
> > file to replicate sql statements. I've bee
t keep up it's
> no big deal to the
> master.
>
> On Thu, Nov 27, 2008 at 11:33 AM, Ioana Danes
> <[EMAIL PROTECTED]> wrote:
> > Thanks for the tip Scott but am looking for an
> asynchronous replication that does not interfere with the
> performance of the app
ived: Thursday, November 27, 2008, 12:34 PM
> If you want the same thing in real time look into pgpool II
>
> On Thu, Nov 27, 2008 at 10:28 AM, Ioana Danes
> <[EMAIL PROTECTED]> wrote:
> > I know there are some limitations abut it:
> > - copy statements cannot be execu
Csaba Nagy <[EMAIL PROTECTED]> wrote:
> From: Csaba Nagy <[EMAIL PROTECTED]>
> Subject: Re: [GENERAL] Using postgres.log file for replication
> To: [EMAIL PROTECTED]
> Cc: "PostgreSQL General"
> Received: Thursday, November 27, 2008, 12:24 PM
> On Thu, 2008-11-2
Hi Everyone,
I've been wondering if anybody has tried to use the postgresql csv log file to
replicate sql statements.
I've been looking into it over the past few days and after some brief testing
it doesn't look bad at all...
Thanks,
Ioana
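The relevant postgresql.conf pieces for that experiment would look roughly like this (a sketch; the thread itself notes limitations, such as COPY statements not being replayable):

```
logging_collector = on
log_destination   = 'csvlog'
log_statement     = 'mod'      # log INSERT/UPDATE/DELETE so they can be replayed
log_directory     = 'pg_log'
```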
57 matches