Hello Rob,
Thanks for your response.
[RS]
I really do not understand "why" you need the SQLSTATE code after
executing a "BEGIN" so as to go into transaction state.
AFAIK you can only retrieve the SQLSTATE error code when an error
actually occurs. So, if your query statement was successful,
http://dba.stackexchange.com/questions/61822/what-happens-in-postgresql-checkpoint
and the now classic :
http://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/
On 12/10/2015 04:39, Richardson Hinestroza wrote:
Victor Blomqvist wrote:
[race condition causes errors due to stale plans immediately after ALTER TABLE
DROP]
> Note that these errors most of the time only happens very briefly at the same
> time as the ALTER is
> run. When I did some experiments today the server in total had around 3k
> req/s
Hello Daniel,
Thanks for your response.
Ahhh -- so after a successful query, the PHP program must keep executing
pg_get_result($cnx) until a NULL result is obtained!
And ONLY THEN does transaction status transition from
PGSQL_TRANSACTION_ACTIVE to PGSQL_TRANSACTION_INTRANS.
OK -- makes
Steve Petrie, P.Eng. wrote:
> And yes, I am using pg_send_query(...) to execute the BEGIN; command, so as
> to get a result resource returned. (Please see my forthcoming emailed
> response to Adrian Klaver, wherein I provide the PHP source code that Adrian
> requests.)
After
On 10/12/15 10:14 AM, Jim Nasby wrote:
On 10/12/15 9:37 AM, Steve Pribyl wrote:
I am loading up a 60G database into a BDR database and these "ERRORS"
are in my logs. Is this normal behavior, or is something going bad?
2015-10-12 09:28:59.389 CDT,,,30371,,561bc17d.76a3,1,,2015-10-12
09:19:41
Yup, there is a disconnect on the other side.
This disconnect is preceded by this:
ERROR,XX000,"invalid memory alloc request size 1073741824","slot
""bdr_16494_6204748238611542317_1_16494__"", output plugin ""bdr"", in the
change callback, associated LSN 2/FD250E48""bdr
Hello,
I'm seeing some inexplicable (to me) behavior in PostgreSQL 9.2. I checked
the archives, but I still can't explain it. Apologies if I missed something.
1. When I join two tables with "WHERE id IN (...)" versus with an explicit
join, and the join column for the inner table is a primary
From: Sean Rhea [mailto:sean.c.r...@gmail.com]
Sent: Friday, October 09, 2015 4:30 PM
To: Igor Neyman
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Merge join vs merge semi join against primary key
It does the merge (not-semi) join:
production=> explain
On 2015-10-12 14:37:07 +, Steve Pribyl wrote:
> I am loading up a 60G database into a BDR database and these "ERRORS" are in my
> logs. Is this normal behavior, or is something going bad?
>
> 2015-10-12 09:28:59.389 CDT,,,30371,,561bc17d.76a3,1,,2015-10-12 09:19:41
>
Hi,
On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
> Typically I have user defined functions for all operations, and my table
> and functions follow this pattern:
>
> CREATE TABLE users (
> id integer PRIMARY KEY,
> name varchar NOT NULL,
> to_be_removed integer NOT NULL
> );
>
>
On 10/12/2015 05:29 AM, Andres Freund wrote:
Hi,
On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
Typically I have user defined functions for all operations, and my table
and functions follow this pattern:
CREATE TABLE users (
id integer PRIMARY KEY,
name varchar NOT NULL,
On 2015-10-12 08:07:54 +, Albe Laurenz wrote:
> Victor Blomqvist wrote:
> [race condition causes errors due to stale plans immediately after ALTER
> TABLE DROP]
> > Note that these errors most of the time only happens very briefly at the
> > same time as the ALTER is
> > run. When I did some
On 2015-10-12 06:42:52 -0700, Adrian Klaver wrote:
> >My guess is that the problem here is that table level locking prevents
> >modification of the "users" type when the table is used, but there's no
> >locking preventing the columns from being dropped while the function is
> >used. So what happens is
Andres Freund writes:
> On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
>> CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
>> $$
>> BEGIN
>> RETURN QUERY SELECT * FROM users WHERE id = id_;
>> END;
>> $$ LANGUAGE plpgsql;
> My guess is that the problem
On 10/12/2015 06:53 AM, Tom Lane wrote:
Andres Freund writes:
On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$
BEGIN
RETURN QUERY SELECT * FROM users WHERE id = id_;
END;
$$ LANGUAGE plpgsql;
My
I am loading up a 60G database into a BDR database and these "ERRORS" are in my
logs. Is this normal behavior, or is something going bad?
2015-10-12 09:28:59.389 CDT,,,30371,,561bc17d.76a3,1,,2015-10-12 09:19:41
CDT,5/0,0,ERROR,XX000,"data stream ended","bdr
Hello,
I am trying to extract ip addresses from golite by joining two tables as
posted below.
Running the queries individually, with a limit on the golite ip db, the
results come back immediately (1-2 ms), but when using the first query it
takes 2-3 seconds.
Is there a way to use a limit in the join?
On 10/12/15 10:03 AM, Alex Magnum wrote:
Is there a way to use a limit in the join?
SELECT ... FROM table_a a
LEFT JOIN ( SELECT ... FROM table_b WHERE ... LIMIT 1) b
ON a.blah = b.blah
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data
Running the queries individually, with a limit on the golite ip db, the
results come back immediately (1-2 ms), but when using the first query it
takes 2-3 seconds.
Is there a way to use a limit in the join?
This sounds like the real issue is a missing/incorrect index, but if
you're on 9.4+ you can
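The 9.4+ feature presumably being alluded to here is a LATERAL join (actually available since 9.3), which lets the subquery reference the outer row so the LIMIT applies per row. A sketch, using the hypothetical table/column names from the reply above:

```sql
-- Sketch only; table_a, table_b and blah are placeholder names.
-- The LATERAL subquery sees the current row of "a", so LIMIT 1
-- picks (at most) one matching row of table_b per row of table_a:
SELECT a.*, b.*
FROM table_a a
LEFT JOIN LATERAL (
    SELECT *
    FROM table_b
    WHERE table_b.blah = a.blah
    LIMIT 1
) b ON true;
```

Adding an ORDER BY inside the LATERAL subquery makes the choice of "which one row" deterministic.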
On 10/12/15 9:37 AM, Steve Pribyl wrote:
I am loading up a 60G database into a BDR database and these "ERRORS" are in my
logs. Is this normal behavior, or is something going bad?
2015-10-12 09:28:59.389 CDT,,,30371,,561bc17d.76a3,1,,2015-10-12 09:19:41
CDT,5/0,0,ERROR,XX000,"data stream
On 10/6/15 12:18 PM, Olivier Dony wrote:
We would happily skip the micro-transactions (as a perf workaround) if
there was a way to detect this situation, but we couldn't find a way to
do that in 9.3. Before 9.3 we used SELECT FOR UPDATE NOWAIT to guard similar
cases.
If there is any way I could help
Greetings.
I'm trying to come up with a way to allow one column to have the same value
up to two times, but no more than two times. I attempted adding a
constraint to check the count of the value in the field -- count(trsqqq)
<= 2 -- but aggregate functions are not allowed in constraints. Is there
The process used to create this:
Start with clean db
Create host A database with bdr
Join host B with bdr
Load database using psql < file.sql
I was able to get it to work if I do the following:
Start with clean db
Create host A database
Load data on host A
Join host A to bdr.
Join host b to bdr.
On 10/4/15 6:18 AM, Karsten Hilbert wrote:
check whether both TZs are equal,
if so
    ignore them
else
    convert both operands to UTC,
    do "time - time",
    return result AT TIME ZONE UTC,
    document that UTC is returned,
such
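One hedged reading of those steps in SQL, with hypothetical "time with time zone" columns t1 and t2 in a table t:

```sql
-- Sketch only; t, t1 and t2 are placeholder names.
-- Convert both operands to UTC, drop the zone, then subtract;
-- the result of time - time is an interval (which carries no zone):
SELECT (t1 AT TIME ZONE 'UTC')::time - (t2 AT TIME ZONE 'UTC')::time AS diff
FROM t;
```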
Is there any way to do a pattern match against the elements of an array in
postgresql (9.4 if the version makes a difference)? I have a grouped query
that, among other things, returns an array of values, like:
SELECT lognum, array_agg(flightnum) as flightnums FROM logs GROUP BY lognum;
Where
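One generic way to filter such groups by a pattern over the array elements -- a sketch against the logs table from the question, not necessarily the answer the thread settled on -- is to wrap the grouped query and test the array with unnest:

```sql
-- Keep only groups where at least one flightnum matches '8%'.
-- The EXISTS subquery unnests the already-aggregated array:
SELECT *
FROM (
    SELECT lognum, array_agg(flightnum) AS flightnums
    FROM logs
    GROUP BY lognum
) s
WHERE EXISTS (
    SELECT 1 FROM unnest(s.flightnums) AS f
    WHERE f LIKE '8%'
);
```

Equivalently, `HAVING bool_or(flightnum LIKE '8%')` on the grouped query avoids the outer wrapper entirely.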
On 10/12/15 12:04 PM, Dara Unglaube wrote:
Greetings.
I'm trying to come up with a way to allow one column to have the same
value up to two times, but no more than two times. I attempted adding a
constraint to check the count of the value in the field - count
(trsqqq) <=2 but aggregate
On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster
wrote:
> Is there any way to do a pattern match against the elements of an array in
> postgresql (9.4 if the version makes a difference)? I have a grouped query
> that, among other things, returns an array of values, like:
On Mon, Oct 12, 2015 at 12:14:10PM -0500, Jim Nasby wrote:
> On 10/4/15 6:18 AM, Karsten Hilbert wrote:
> > check whether both TZs are equal,
> > if so
> > ignore them
> > else
> > convert both operands to UTC,
> > do "time - time",
> > return result AT
On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster
wrote:
> Is there any way to do a pattern match against the elements of an array in
> postgresql (9.4 if the version makes a difference)? I have a grouped query
> that, among other things, returns an array of values, like:
Jeff Janes writes:
> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster
> wrote:
>> My first thought was to do something like this:
>>
>> SELECT * FROM (SELECT lognum,array_agg(flightnum) as flightnums FROM logs
>> GROUP BY lognum) s1 WHERE '8%' like
On 10/12/15 1:41 PM, Dara Unglaube wrote:
I created a view with the column of interest and a column with the count.
How do I put a check constraint on a view, or do it all at once as a
subquery? Could you provide an example of how to create one?
Please don't top-post. And do include the mailing list
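For context, one generic way to enforce an at-most-two rule where a CHECK constraint cannot use aggregates is a trigger. This is a sketch, not necessarily what the thread settled on; the table name is a placeholder and the column follows the question's count(trsqqq) example:

```sql
-- Sketch: reject an INSERT when the value already appears twice.
-- (UPDATE handling and concurrent-insert races are deliberately
-- omitted; a real version needs locking or SERIALIZABLE isolation.)
CREATE OR REPLACE FUNCTION check_max_two() RETURNS trigger AS $$
BEGIN
    IF (SELECT count(*) FROM mytable WHERE trsqqq = NEW.trsqqq) >= 2 THEN
        RAISE EXCEPTION 'value % is already present twice', NEW.trsqqq;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER max_two_per_value
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE check_max_two();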
In the general case, that might work; however, in my actual use case the inner SQL query (and underlying table structure) is rather more complicated, making it so I really want the WHERE clause in an outside query, as in my (non-functional) example. Just to clarify, the actual structure of the
On Oct 12, 2015, at 10:39 AM, Tom Lane wrote:
>
> Jeff Janes writes:
>> On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster
>> wrote:
>>> My first thought was to do something like this:
>>>
>>> SELECT * FROM (SELECT
On Mon, Oct 12, 2015 at 11:39 AM, Tom Lane wrote:
> Jeff Janes writes:
> > On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster >
> > wrote:
> >> My first thought was to do something like this:
> >>
> >> SELECT * FROM (SELECT
On Monday, October 12, 2015 2:52 PM, Lele Gaifax wrote:
> I'm doing some experiments to find the best layout for
> reimplementing an existing db (MySQL, cough!) with PostgreSQL
> 9.4+.
>
> I noticed a strange plan coming out from a simple query joining
> two tables, both
Hi all,
I'm doing some experiments to find the best layout for reimplementing
an existing db (MySQL, cough!) with PostgreSQL 9.4+.
I noticed a strange plan coming out from a simple query joining two tables,
both containing 10M records (and both ANALYZEd):
l10ntest=# \d master;
On 10/12/2015 12:06 PM, Lele Gaifax wrote:
Hi all,
I'm doing some experiments to find the best layout for reimplementing
an existing db (MySQL, cough!) with PostgreSQL 9.4+.
I noticed a strange plan coming out from a simple query joining two tables,
both containing 10M records (and both
On Monday, October 12, 2015, Lele Gaifax wrote:
> Adrian Klaver > writes:
>
> > Off hand I would say it is because of this --> count(m.num). Try
> count(l.num) instead and see
> > what happens. As your queries above show they are the
On 10 October 2015 at 08:52, Sean Rhea wrote:
>
> 1. When I join two tables with "WHERE id IN (...)" versus with an explicit
> join, and the join column for the inner table is a primary key, I would
> expect
> the same behavior in both cases, but the optimizer is choosing
--
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
Adrian Klaver writes:
> Off hand I would say it is because of this --> count(m.num). Try count(l.num)
> instead and see
> what happens. As your queries above show they are the same number.
No, that's another thing I already tried tweaking and should have mentioned.
Do you have some advice how to design my functions to work around this
problem?
If I understand your conversation correctly, the problem is returning the
rowtype users from the function. If so, I can think of two workarounds
(both quite inconvenient and complex):
1. Use RETURNS TABLE(...) together
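Workaround 1 might look like this -- a sketch based on the users table quoted upthread (the original message is truncated here, so this is an illustration, not the author's exact code):

```sql
-- Sketch: returning explicit columns instead of the table's rowtype
-- avoids depending on the cached composite type of "users", which is
-- what breaks when columns are dropped under concurrent use:
CREATE FUNCTION select_users(id_ integer)
RETURNS TABLE (id integer, name varchar) AS $$
BEGIN
    RETURN QUERY SELECT u.id, u.name FROM users u WHERE u.id = id_;
END;
$$ LANGUAGE plpgsql;
```

The cost is that every column must be listed explicitly in both the function signature and the query.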
On 10 October 2015 at 02:53, Selim Tuvi wrote:
> node: deliver_sing (the problem node):
>
> postgres=# SELECT * FROM pg_catalog.pg_replication_identifier;
> riident | riname
> -+
>1 |
Lele Gaifax wrote:
> Hi all,
>
> I'm doing some experiments to find the best layout for reimplementing
> an existing db (MySQL, cough!) with PostgreSQL 9.4+.
>
> I noticed a strange plan coming out from a simple query joining two tables,
> both containing 10M records (and both ANALYZEd):
>
BDR is currently memory-limited for extremely large transactions. At a
guess, I'd say one of your big tables is large enough that the logical
decoding facility BDR uses can't keep track of the transaction
properly.
There's no hard limit; it depends on details of the transaction and a
number of