[GENERAL] Limiting DB access by role after initial connection?

2017-06-08 Thread Ken Tanzer
Hi.  As context, I'm working with an organization with a current production
database.  Organizations in other locations using the same service delivery
model want to share this database, with some but not all of the data
restricted so that people at each site can see only that site's data.  I've
been looking at doing this by creating a role for each location, and then
using RLS to restrict access to some tables.  Currently the database has
one user, the owner, and access is controlled within the application by
usernames and passwords within the DB.

My approach was to have the initial connection made by the owner, and then
after successfully authenticating the user, to switch to the role of the
site they belong to.  After investigation, this still seems feasible but
imperfect.  Specifically, I thought it would be possible to configure such
that after changing to a more restricted role, it would not be possible to
change back.  But after seeing this thread (
http://www.postgresql-archive.org/Irreversible-SET-ROLE-td5828828.html),
I'm gathering that this is not the case.

I can still go this route, either by:

1)  Setting the role as described above, and then trying to be damn sure
that a subsequent query doing a role change never ever slips through the
app. :)

2) After authentication, close the DB connection and reconnect as the site
role.  This seems tighter from a security standpoint, but at the cost of
doubling my # of DB connections, and then also needing the app to manage
passwords for each site.

So before doing either of those, I wanted to confirm whether there is a way
to configure and change roles that reduces privileges and cannot be undone.
Either by changing roles, or via some kind of
"RECONNECT TO db_name AS user" command that would allow a DB owner to
connect without supplying credentials.
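
For concreteness, here is a rough, untested sketch of the SET ROLE approach
from option 1 above, with hypothetical names (a site_a role and a "site"
column on a "clients" table); the last two lines are exactly the weakness
discussed in the linked thread:

CREATE ROLE site_a NOLOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON clients TO site_a;

ALTER TABLE clients ENABLE ROW LEVEL SECURITY;
CREATE POLICY site_a_only ON clients
    TO site_a
    USING (site = 'A');

-- after the application has authenticated a user from site A:
SET ROLE site_a;   -- privileges drop to site_a for subsequent queries...
RESET ROLE;        -- ...but nothing prevents a later RESET ROLE / SET ROLE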

Those might both be wishful thinking.  If so, I'd also welcome any
thoughts, suggestions or feedback about 1) and 2), or better approaches
entirely.  Thanks!

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread John R Pierce

On 6/8/2017 6:36 PM, marcinha rocha wrote:

UPDATE tablea a SET migrated = true WHERE a.id = row.id;
On my original select, the row will have migrated = false. Maybe All I 
need to put is a limit 2000 and the query will do the rest?


 A SELECT does not return data in any determinate order unless you use an 
ORDER BY, so LIMIT 2000 would return some 2000 elements, not 
necessarily the 'first' 2000 elements unless you somehow order them by 
however you feel 'first' is defined.



WITH ids AS (INSERT INTO tableb (id) SELECT id FROM tablea WHERE 
migrated=FALSE ORDER BY id LIMIT 2000 RETURNING id)
UPDATE tablea a SET migrated=TRUE FROM ids WHERE a.id = ids.id 
RETURNING COUNT(a.id);




I'm not 100% sure you can do UPDATE ... RETURNING COUNT(...); worst 
case, the UPDATE ... RETURNING would be a subquery of a SELECT COUNT()...
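
An untested sketch of that workaround, moving the count into an outer
SELECT over a second CTE (UPDATE ... RETURNING can only be nested via a
CTE, not a plain subquery):

WITH ids AS (
    INSERT INTO tableb (id)
    SELECT id FROM tablea WHERE migrated = FALSE ORDER BY id LIMIT 2000
    RETURNING id
), updated AS (
    UPDATE tablea a SET migrated = TRUE
    FROM ids
    WHERE a.id = ids.id
    RETURNING a.id
)
SELECT count(*) FROM updated;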



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread David G. Johnston
On Thursday, June 8, 2017, marcinha rocha 
wrote:

> On Thursday, June 8, 2017, marcinha rocha wrote:
>
>
>> On my original select, the row will have migrated = false. Maybe All I
>> need to put is a limit 2000 and the query will do the rest?
>>
>>
> You should try to avoid the for loop,
>
> Why?
>

Mainly expected performance concerns. The engine is designed to handle
result sets as opposed to single-row iteration.  Whether it's true in your
case I don't know, but I would assume that operating on sets would be faster.


>
> Ok, cool!
>
> Now, how do I tell the function to return the number of touched rows? In
> this case, it should always be 2000.
>
>
Unless there are fewer rows to process.  You could always just do i = i + 1
in the loop.
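
If the loop is replaced by a single set-based UPDATE, an alternative to the
manual counter is plpgsql's GET DIAGNOSTICS. A sketch, assuming num_rows is
declared as integer in the function (the INSERT into tableb is omitted here
for brevity):

UPDATE tablea a SET migrated = true
FROM (SELECT id FROM tablea WHERE migrated = false LIMIT 2000) s
WHERE a.id = s.id;

GET DIAGNOSTICS num_rows = ROW_COUNT;  -- rows touched by the UPDATE above
RETURN num_rows;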

David J.


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread marcinha rocha
On Thursday, June 8, 2017, marcinha rocha wrote:

On my original select, the row will have migrated = false. Maybe All I need to 
put is a limit 2000 and the query will do the rest?

You should try to avoid the for loop,

Why?

but yes, a limit 2000 on the for loop query should work, since the migrated flag 
will ensure the same rows aren't selected again.

David J.


Ok, cool!

Now, how do I tell the function to return the number of touched rows? In this 
case, it should always be 2000.

Thanks!



Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread David G. Johnston
On Thursday, June 8, 2017, marcinha rocha 
wrote:

> On my original select, the row will have migrated = false. Maybe All I
> need to put is a limit 2000 and the query will do the rest?
>
>
You should try to avoid the for loop, but yes, a limit 2000 on the for loop
query should work, since the migrated flag will ensure the same rows aren't
selected again.

David J.


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread marcinha rocha
On 6/8/2017 5:53 PM, marcinha rocha wrote:

> Hi guys! I have the following queries, which will basically select
> data, insert it onto a new table and update a column on the original
> table.


I'm sure your example is a gross simplification of what you're really
doing, but if that's really all you're doing, why not do it all at once,
instead of row at a time?


BEGIN;
 insert into tableb (id) select id from tablea;
 update tablea set migrated=true;
COMMIT;


that's far more efficient than the row-at-a-time iterative solution you
showed.


You're right, that is just an example.

I'm basically using a CTE to select the data and then, inserting some rows onto 
a new table.

I just don't know how to tell my function to process 2000 records at a time, and 
then, when calling it again, to "know" where to start from.

Maybe, I already have everything I need?


UPDATE tablea a SET migrated = true WHERE a.id = row.id;

On my original select, the row will have migrated = false. Maybe All I need to 
put is a limit 2000 and the query will do the rest?

Example:

CREATE or REPLACE FUNCTION migrate_data()
RETURNS integer AS $$

declare
row record;
num_rows integer;  -- how do I set this to the number of processed rows?

BEGIN

FOR row IN EXECUTE '
SELECT
  id
FROM
  tablea
WHERE
  migrated = false
'
LOOP

INSERT INTO tableb (id)
VALUES (row.id);

UPDATE tablea a SET migrated = true WHERE a.id = row.id;

END LOOP;

RETURN num_rows; -- I want it to return the number of processed rows

END

$$ language 'plpgsql';





Re: [GENERAL] How does BDR replicate changes among nodes in a BDR group

2017-06-08 Thread Craig Ringer
On 9 June 2017 at 02:47, Zhu, Joshua  wrote:
> Thanks for the clarification.
>
> A follow-up question, then, given *once joined all nodes are equal*:
>
> if node A dies or is taken out of the group, the remaining three-node group
> (B, C and D) would continue to function properly, correct?

Yes. Though you'll need to tell them node A isn't coming back by
parting it from one of the other nodes at some point, otherwise
they'll accumulate WAL in pg_xlog and run out of disk space eventually
as they hold onto the resources A would need to sync back up.
Monitoring is important.
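
For reference, a rough sketch of that parting step, assuming BDR's
bdr.bdr_part_by_node_names() function and a node named 'node_a' (hypothetical
name); run it on one of the surviving nodes once A is definitely not coming
back:

SELECT bdr.bdr_part_by_node_names(ARRAY['node_a']);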


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread John R Pierce

On 6/8/2017 5:53 PM, marcinha rocha wrote:
Hi guys! I have the following queries, which will basically select 
data, insert it onto a new table and update a column on the original 
table.



I'm sure your example is a gross simplification of what you're really 
doing, but if that's really all you're doing, why not do it all at once, 
instead of row at a time?



BEGIN;
insert into tableb (id) select id from tablea;
update tablea set migrated=true;
COMMIT;


that's far more efficient than the row-at-a-time iterative solution you 
showed.


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread David G. Johnston
On Thursday, June 8, 2017, marcinha rocha 
wrote:

> When I call the function, it must execute 2000 rows and then stop. Then
> when calling it again, it must start from 2001 to 4000, and so on
>
You can do this with plain SQL with the help of a CTE.  Insert into +
Select ... limit 2000 returning id.  Migration done.  Put that in a CTE.
In the outer query perform the update by referencing the returned rows from
the CTE.
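
A minimal, untested sketch of that, using the table and column names from the
thread:

WITH moved AS (
    INSERT INTO tableb (id)
    SELECT id FROM tablea WHERE migrated = false LIMIT 2000
    RETURNING id
)
UPDATE tablea SET migrated = true
FROM moved
WHERE tablea.id = moved.id;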

David J.


[GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread marcinha rocha
Hi guys! I have the following queries, which will basically select data, insert 
it onto a new table and update a column on the original table.


CREATE or REPLACE FUNCTION migrate_data()
RETURNS integer AS $$

declare
row record;
numrows integer;  -- how do I set this to the number of processed rows?

BEGIN

FOR row IN EXECUTE '
SELECT
  id
FROM
  tablea
WHERE
  mig = true
'
LOOP

INSERT INTO tableb (id)
VALUES (row.id);

UPDATE tablea a SET migrated = true WHERE a.id = row.id;

END LOOP;

RETURN numrows; -- I want it to return the number of processed rows

END

$$ language 'plpgsql';


When I call the function, it must process 2000 rows and then stop. Then, when 
calling it again, it must start from 2001 to 4000, and so on.


How can I do that? I couldn't find a solution for this..



Thanks!
Marcia


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Éric



>Have you experimented with other queries that don't involve PostGIS?
>I'm wondering if your hook-installation code fails to work properly
>unless PostGIS was loaded first.  This would be easier to credit if
>there are hooks both extensions try to get into.


I think you're right, Tom. It looks like I cannot reproduce the issue if I 
start by calling a PostGIS function rather than a Pointcloud function. So it 
may well be a conflict between PostGIS and Pointcloud. Both use fn_extra, and 
that makes me wonder. This old thread [*] makes me wonder too! I still need to 
figure out the bug, but I can see some light now! Thanks.

[*] 


-- 
Éric


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Db backup

2017-06-08 Thread John R Pierce

On 6/8/2017 12:29 PM, John R Pierce wrote:

On 6/8/2017 12:10 PM, Jimmy Lunceford wrote:
A windows 7 computer went bad but the hdd is still operable. Is there 
a way to transfer the db data directories to another computer or does 
it require a dump and restore?



the data directories should work as-is on another Windows computer 
with the same major version and 'build' of postgresql and same 32 vs 
64 bitness.   By 'build' I mean like enterpriseDB installer vs BigSQL 
vs custom-built.   if custom built, the build options will need to be 
pretty much the same (visualC vs gcc/mingw vs cygwin, and the same 
general ./configure options, etc etc).



to restore said file system level backup,  I would...

A) install the compatible postgresql build (if the original was 9.3.5, 
it's fine to use 9.3.17)


B) stop the postgresql database service

C) replace the 'data' directory with a copy of the data directory from 
the disk of said dead box


D) restart the postgresql database service

E) verify your databases are intact and complete.


if you used custom tablespaces, be sure to restore all those at step C, 
too, with the same paths as the originals.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Db backup

2017-06-08 Thread John R Pierce

On 6/8/2017 12:10 PM, Jimmy Lunceford wrote:
A windows 7 computer went bad but the hdd is still operable. Is there 
a way to transfer the db data directories to another computer or does 
it require a dump and restore?



the data directories should work as-is on another Windows computer with 
the same major version and 'build' of postgresql and same 32 vs 64 
bitness.   By 'build' I mean like enterpriseDB installer vs BigSQL vs 
custom-built.   if custom built, the build options will need to be 
pretty much the same (visualC vs gcc/mingw vs cygwin, and the same 
general ./configure options, etc etc).




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Db backup

2017-06-08 Thread Joshua D. Drake

On 06/08/2017 12:10 PM, Jimmy Lunceford wrote:
A windows 7 computer went bad but the hdd is still operable. Is there a 
way to transfer the db data directories to another computer or does it 
require a dump and restore?


As long as the data is intact and you use the same major version of 
postgres, yes. Just use robocopy or a standard backup utility.


Thanks,

JD









--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Db backup

2017-06-08 Thread Jimmy Lunceford
A windows 7 computer went bad but the hdd is still operable. Is there a way
to transfer the db data directories to another computer or does it require
a dump and restore?
-- 

Innerspace Strategies

Jimmy Lunceford / HVAC Controls technician
jimmy_luncef...@innerspacecontrols.com
(c) 901-825-7643

485 E. South Street, STE 104, Collierville, TN 38017
http://issic.net


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-08 Thread Tom Lane
Harry Ambrose  writes:
> Please find the jar attached (renamed with a .txt extension as I know some
> email services deem jars a security issue).

Hmm, the output from this script reminds me quite a lot of one I was
sent in connection with bug #1 awhile back:
https://www.postgresql.org/message-id/20161201165505.4360.28203%40wrigleys.postgresql.org
Was that a colleague of yours?

Anyway, the bad news is I couldn't reproduce the problem then and I can't
now.  I don't know if it's a timing issue or if there's something critical
about configuration that I'm not duplicating.  Can you explain what sort
of platform you're testing on, and what nondefault configuration settings
you're using?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does BDR replicate changes among nodes in a BDR group

2017-06-08 Thread Zhu, Joshua
Thanks for the clarification.

A follow-up question, then, given *once joined all nodes are equal*:

if node A dies or is taken out of the group, the remaining three-node group 
(B, C and D) would continue to function properly, correct?
[somewhere I saw the term "downstream" nodes was used, and I am not clear what 
that meant in the context of a mesh-connected group] 

Thanks again

-Original Message-
From: Craig Ringer [mailto:cr...@2ndquadrant.com] 
Sent: Wednesday, June 07, 2017 5:59 PM
To: Zhu, Joshua 
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How does BDR replicate changes among nodes in a BDR group

On 8 June 2017 at 04:50, Zhu, Joshua  wrote:

> How does BDR replicate a change delta on A to B, C, and D?

It's a mesh.

Once joined, it doesn't matter what the join node was, all nodes are equal.

> e.g., A
> replicates delta to B and D, and B to C, or some other way, or not 
> statically determined?

Each node replicates to all other nodes in an undefined order determined by 
network timing etc.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Tom Lane
Eric Lemoine  writes:
> The initial "select pc_typmod_pcid(1)" query completely screws the
> connection.
> "select pc_typmod_pcid(1)" is just an example of a simple query that
> triggers the problem. There are many others. But it has to be a query
> using the Pointcloud extension.

My guess is that it's got nothing to do with the specific query, but
that Pointcloud is installing some hook functions when it's loaded,
and that one or another of those hooks is killing performance for
subsequent queries, either by taking too long in itself or by
defeating some important optimization.  Hard to speculate further
with just this much data.

> I have no problem if I start with the main query (my "select points
> from" query of interest). And running the "select pc_typmod_pcid(1)"
> query in the middle does not cause any problem. It has to be run first
> on the connection to do the harm.

Have you experimented with other queries that don't involve PostGIS?
I'm wondering if your hook-installation code fails to work properly
unless PostGIS was loaded first.  This would be easier to credit if
there are hooks both extensions try to get into.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine
On 06/08/2017 07:27 PM, Moreno Andreo wrote:
> Il 08/06/2017 19:10, Eric Lemoine ha scritto:
>>
>> How can such a thing happen? Thanks for any insight on what could cause
>> this.
>>
>>
> I'd try raising shared_buffers to 1 GB or something near 40% of the
> available memory

I tried to make it 4G, but it does not make a difference. My machine has
16G of RAM.


> 
> If you run the query again, after getting bad results, what do you get?

Always bad results.

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
              1
(1 row)

Time: 4.887 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));


Time: 3522.135 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 3395.672 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));

Time: 3454.466 ms



The initial "select pc_typmod_pcid(1)" query completely screws the
connection.

"select pc_typmod_pcid(1)" is just an example of a simple query that
triggers the problem. There are many others. But it has to be a query
using the Pointcloud extension.

I have no problem if I start with the main query (my "select points
from" query of interest). And running the "select pc_typmod_pcid(1)"
query in the middle does not cause any problem. It has to be run first
on the connection to do the harm. See below.

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 280.117 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 210.080 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 233.095 ms
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
              1
(1 row)

Time: 0.686 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 199.150 ms




-- 
Éric Lemoine
Oslandia


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Moreno Andreo

Il 08/06/2017 19:10, Eric Lemoine ha scritto:


How can such a thing happen? Thanks for any insight on what could cause
this.


I'd try raising shared_buffers to 1 GB or something near 40% of the 
available memory


If you run the query again, after getting bad results, what do you get?

Cheers

Moreno.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Note that the execution time is 46 ms when the query is wrapped in an
> explain analyze (while it's 3 s when it's not!)


Actually, it seems to me that the performance issue is not on the query
itself, it is on the fetching of the data returned by the query. Which
explains why the query is fast when executed in an explain analyze. I've
observed this by using a cursor.

The query returns 2506 rows. I use a cursor to fetch the resulting rows
500 by 500. The fetching of 500 rows (fetch 500 from c) takes about 50
ms in the good/normal case, i.e. when the "select pc_typmod_pcid(1)" is
not executed first. While it takes around 600 ms in the pathological case!

Below is the full test case.


Good case:

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# begin;
BEGIN
Time: 0.373 ms
lopocs=# declare c cursor for select points from public.sthelens where
pc_intersects(points, st_geomfromtext('polygon ((-2357334.41980829
-3742654.00016992, -2356120.91980829 -3742654.00016992,
-2356120.91980829 -3741278.00016992, -2357334.41980829
-3741278.00016992, -2357334.41980829 -3742654.00016992))', 4978));
DECLARE CURSOR
Time: 75.976 ms
lopocs=# fetch 500 from c;


Time: 44.648 ms
lopocs=# fetch 500 from c;
Time: 40.693 ms
lopocs=# fetch 500 from c;
Time: 45.218 ms


Base case:

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);  -- that screws up everything
 pc_typmod_pcid
----------------
              1
(1 row)

Time: 5.702 ms
lopocs=# begin;
BEGIN
Time: 0.234 ms
lopocs=# declare c cursor for select points from public.sthelens where
pc_intersects(points, st_geomfromtext('polygon ((-2357334.41980829
-3742654.00016992, -2356120.91980829 -3742654.00016992,
-2356120.91980829 -3741278.00016992,
-2357334.41980829 -3741278.00016992, -2357334.41980829
-3742654.00016992))', 4978));
DECLARE CURSOR
Time: 76.806 ms
lopocs=# fetch 500 from c;
Time: 669.834 ms
lopocs=# fetch 500 from c;
Time: 652.738 ms
lopocs=# fetch 500 from c;
Time: 604.293 ms




How can such a thing happen? Thanks for any insight on what could cause
this.


-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Lots of missing information here ...
> 
> Is there an index on public.sthelens.points?

Yes, there are.

lopocs=# \d sthelens
                              Table "public.sthelens"
 Column |    Type    |                       Modifiers
--------+------------+--------------------------------------------------------
 id     | integer    | not null default nextval('sthelens_id_seq'::regclass)
 points | pcpatch(2) |
 morton | bigint     |
Indexes:
    "sthelens_pkey" PRIMARY KEY, btree (id)
    "sthelens_pc_envelopegeometry_idx" gist (pc_envelopegeometry(points))

So two indices, one for the primary key, and a Postgis index on the
Postgis geometry returned by the Pointcloud pc_envelopegeometry function.


> How many rows are in that table?

30971



> What are your shared_buffers settings?

128 MB (Debian unstable)


> How much RAM does the server have?

16 GB

> What does EXPLAIN look like for that query? How large (in bytes) are the
> tables in question?


 QUERY PLAN



 Bitmap Heap Scan on sthelens  (cost=383.26..2496.67 rows=2065 width=32)
(actual time=3.213..46.674 rows=2506 loops=1)
   Recheck Cond:
('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry
&& st_geomfromewkb(pc_envelopeasbinary(points)))
   Filter:
_st_intersects('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry,
st_geomfromewkb(pc_envelopeasbinary(points)))
   Heap Blocks: exact=36
   ->  Bitmap Index Scan on sthelens_pc_envelopegeometry_idx
(cost=0.00..382.75 rows=6196 width=0) (actual time=1.626..1.626
rows=2506 loops=1)
 Index Cond:
('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry
&& st_geomfromewkb(pc_envelopeasbinary(points)))
 Planning time: 0.525 ms
 Execution time: 46.999 ms
(8 rows)


Note that the execution time is 46 ms when the query is wrapped in an
explain analyze (while it's 3 s when it's not!)


> What does pc_typmod_pcid() actually do?

It is one of the simplest functions of Pointcloud.


See

and
.


> 
> There are probably lots of other questions I could ask, but those questions
> are based on the fact that this _looks_ like a classic cache blowout. I.e.,
> the query runs quickly when all the related data is in RAM, but is
> significantly slower when the data has to be pulled from disk. Answering
> the questions above will likely help to determine if my guess is correct.


I don't know. The query is fast if I run it first on the database
connection. And it is *always* very slow after the "select
pc_typmod_pcid(1)" query has run.


> 
> If my guess is correct, there are any number of potential ways to improve
> things: Add RAM to the machine, enlarge shared_buffers, put a geo index on
> public.sthelens.points so it doesn't have to scan the entire table; as a
> few examples.


It sounds like a performance issue to you, while it sounds like a bug to
me :)

> 
> Understanding what pc_typmod_pcid() actually does would help, but even 
> without that you can test things in a few ways. One would be to substitute
> a different query in your testing for select pc_typmod_pcid(1) that is
> known to push the contents of public.sthelens out of memory and see if
> the behavior is similar. Any count(*) query on some other large table
> would probably suffice. A better way would probably be to install the
> pg_buffercache module and see what's actually in the cache at each step
> of the testing process.


I'll try to use pg_buffercache.



Thanks a lot for your response. That at least gives me courage in
debugging this :)


-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Adrian Klaver

On 06/08/2017 09:20 AM, Eric Lemoine wrote:



Looks like you also have postgis and pointcloud_postgis in mix. I would
say this may get an answer sooner here:

http://lists.osgeo.org/mailman/listinfo/pgpointcloud/


I am actually one of the developers of the Pointcloud extension. I
haven't been able to debug this up to now.



Oops on my part.


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Looks like you also have postgis and pointcloud_postgis in mix. I would
> say this may get an answer sooner here:
> 
> http://lists.osgeo.org/mailman/listinfo/pgpointcloud/

I am actually one of the developers of the Pointcloud extension. I
haven't been able to debug this up to now.


-- 
Éric Lemoine
Oslandia


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Adrian Klaver

On 06/08/2017 09:00 AM, Eric Lemoine wrote:

Hi

We have a rather strange performance issue with the Pointcloud extension
[*]. The issue/bug may be in the extension, but we don't know for sure
at this point. I'm writing to the list to hopefully get some guidance on
how to further debug this.

[*] 

A query takes around 250 ms when executed first on a database
connection. But it takes like 3 s when executed after a first very
simple Pointcloud query.

Below is a test-case with psql.

Case #1 (works normally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 236.423 ms


Case #2 (works abnormally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
              1
(1 row)

Time: 4.917 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 2987.491 ms


The query takes 236 ms in case #1, and 2987 ms in case #2! Huge difference.

Anyone has any idea where this performance drop may come from? The
problem may be in the Pointcloud in the extension, but I have no idea
where the bug may be.

Any idea? Any suggestion on how to debug this? This has been driving us
crazy for some time now.


Looks like you also have postgis and pointcloud_postgis in mix. I would 
say this may get an answer sooner here:


http://lists.osgeo.org/mailman/listinfo/pgpointcloud/



Thanks.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Bill Moran
On Thu, 8 Jun 2017 18:00:04 +0200
Eric Lemoine  wrote:

> We have a rather strange performance issue with the Pointcloud extension
> [*]. The issue/bug may be in the extension, but we don't know for sure
> at this point. I'm writing to the list to hopefully get some guidance on
> how to further debug this.
> 
> [*] 
> 
> A query takes around 250 ms when executed first on a database
> connection. But it takes like 3 s when executed after a first very
> simple Pointcloud query.
> 
> Below is a test-case with psql.
> 
> Case #1 (works normally):
> 
> psql (9.6.3)
> Type "help" for help.
> 
> lopocs=# \timing
> Timing is on.
> lopocs=# select points from public.sthelens where pc_intersects(points,
> st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
> -2356120.91980829 -3742654.00016992, -2356120.91980829
> -3741278.00016992, -2357334.41980829 -3741278.00016992,
> -2357334.41980829 -3742654.00016992))', 4978));
> Time: 236.423 ms
> 
> 
> Case #2 (works abnormally):
> 
> psql (9.6.3)
> Type "help" for help.
> 
> lopocs=# \timing
> Timing is on.
> lopocs=# select pc_typmod_pcid(1);
>  pc_typmod_pcid
> 
>   1
> (1 row)
> 
> Time: 4.917 ms
> lopocs=# select points from public.sthelens where pc_intersects(points,
> st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
> -2356120.91980829 -3742654.00016992, -2356120.91980829
> -3741278.00016992, -2357334.41980829 -3741278.00016992,
> -2357334.41980829 -3742654.00016992))', 4978));
> Time: 2987.491 ms
> 
> 
> The query takes 236 ms in case #1, and 2987 ms in case #2! Huge difference.
> 
> Anyone has any idea where this performance drop may come from? The
> problem may be in the Pointcloud in the extension, but I have no idea
> where the bug may be.
> 
> Any idea? Any suggestion on how to debug this? This has been driving us
> crazy for some time now.

Lots of missing information here ...

Is there an index on public.sthelens.points? How many rows are in that table?
What are your shared_buffers settings? How much RAM does the server have?
What does EXPLAIN look like for that query? How large (in bytes) are the
tables in question? What does pc_typmod_pcid() actually do?

There are probably lots of other questions I could ask, but those questions
are based on the fact that this _looks_ like a classic cache blowout. I.e.,
the query runs quickly when all the related data is in RAM, but is
significantly slower when the data has to be pulled from disk. Answering
the questions above will likely help to determine if my guess is correct.

If my guess is correct, there are any number of potential ways to improve
things: Add RAM to the machine, enlarge shared_buffers, put a geo index on
public.sthelens.points so it doesn't have to scan the entire table; as a
few examples.

Understanding what pc_typmod_pcid() actually does would help, but even 
without that you can test things in a few ways. One would be to substitute
a different query in your testing for select pc_typmod_pcid(1) that is
known to push the contents of public.sthelens out of memory and see if
the behavior is similar. Any count(*) query on some other large table
would probably suffice. A better way would probably be to install the
pg_buffercache module and see what's actually in the cache at each step
of the testing process.
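
For reference, a sketch of that last check, assuming the pg_buffercache
extension can be installed; the query follows the usual pattern from its
documentation:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- which relations currently occupy the most shared buffers in this database
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;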

In any event, if your testing doesn't help any; you'll probably need to
include answers to at least the above questions before the list will be
much help.

That is, of course, unless someone familiar with pointcloud has seen this
exact problem and already knows the answer ...

-- 
Bill Moran 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine
Hi

We have a rather strange performance issue with the Pointcloud extension
[*]. The issue/bug may be in the extension, but we don't know for sure
at this point. I'm writing to the list to hopefully get some guidance on
how to further debug this.

[*] 

A query takes around 250 ms when executed first on a database
connection. But it takes like 3 s when executed after a first very
simple Pointcloud query.

Below is a test-case with psql.

Case #1 (works normally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 236.423 ms


Case #2 (works abnormally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
              1
(1 row)

Time: 4.917 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 2987.491 ms


The query takes 236 ms in case #1, and 2987 ms in case #2! Huge difference.

Anyone has any idea where this performance drop may come from? The
problem may be in the Pointcloud in the extension, but I have no idea
where the bug may be.

Any idea? Any suggestion on how to debug this? This has been driving us
crazy for some time now.

Thanks.


-- 
Éric Lemoine
Oslandia


Re: [GENERAL] Weirdness with "not in" query

2017-06-08 Thread Adrian Klaver

On 06/08/2017 08:27 AM, greigwise wrote:

So, I'm using postgres version 9.6.3 on a mac and the results to this series
of queries seems very strange to me:

db# select count(*) from table1 where id in
(1706302,1772130,1745499,1704077);
  count
---
  4
(1 row)

db# select count(*) from table2 where table1_id in
(1706302,1772130,1745499,1704077);
  count
---
  0
(1 row)

db# select count(*) from table1 where id not in (select table1_id from
table2);
  count
---
  0
(1 row)

I would expect the "not in" query to return a result of at least 4.  Am I
totally misunderstanding how this should work (I really don't think so) or
is something wrong?


No:

https://www.postgresql.org/docs/9.6/static/functions-comparisons.html#FUNCTIONS-COMPARISONS-IN-SCALAR

Note that if the left-hand expression yields null, or if there are no 
equal right-hand values and at least one right-hand expression yields 
null, the result of the IN construct will be null, not false. This is in 
accordance with SQL's normal rules for Boolean combinations of null values.
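
A two-line illustration of that rule:

SELECT 1 WHERE 1 NOT IN (2, 3);     -- one row: the comparison is true
SELECT 1 WHERE 1 NOT IN (2, NULL);  -- no rows: the result is NULL, not true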



Thanks,
Greig Wise



--
View this message in context: 
http://www.postgresql-archive.org/Weirdness-with-not-in-query-tp5965573.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Weirdness with "not in" query

2017-06-08 Thread greigwise
Wow.  That is exactly it.  Thank you.

I really would not have expected there to be NULLs in that field.  Geez.



--
View this message in context: 
http://www.postgresql-archive.org/Weirdness-with-not-in-query-tp5965573p5965576.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Weirdness with "not in" query

2017-06-08 Thread Alban Hertroys
On 8 June 2017 at 17:27, greigwise  wrote:
> So, I'm using postgres version 9.6.3 on a mac and the results to this series
> of queries seems very strange to me:
>
> db# select count(*) from table1 where id in
> (1706302,1772130,1745499,1704077);
>  count
> ---
>  4
> (1 row)
>
> db# select count(*) from table2 where table1_id in
> (1706302,1772130,1745499,1704077);
>  count
> ---
>  0
> (1 row)
>
> db# select count(*) from table1 where id not in (select table1_id from
> table2);
>  count
> ---
>  0
> (1 row)
>
> I would expect the "not in" query to return a result of at least 4.  Am I
> totally misunderstanding how this should work (I really don't think so) or
> is something wrong?

You probably have table1_id values that are NULL in table2. In that case
the result of NOT IN is NULL as well.

NOT EXISTS is perhaps a better candidate in this case.
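
For example, a NOT EXISTS version of that last query, using the table names
from your post (a sketch):

SELECT count(*)
FROM table1 t1
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.table1_id = t1.id);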

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Weirdness with "not in" query

2017-06-08 Thread greigwise
So, I'm using postgres version 9.6.3 on a mac and the results of this series
of queries seem very strange to me:

db# select count(*) from table1 where id in
(1706302,1772130,1745499,1704077);
 count
---
 4
(1 row)

db# select count(*) from table2 where table1_id in
(1706302,1772130,1745499,1704077);
 count
---
 0
(1 row)

db# select count(*) from table1 where id not in (select table1_id from
table2);
 count
---
 0
(1 row)

I would expect the "not in" query to return a result of at least 4.  Am I
totally misunderstanding how this should work (I really don't think so) or
is something wrong?

Thanks,
Greig Wise



--
View this message in context: 
http://www.postgresql-archive.org/Weirdness-with-not-in-query-tp5965573.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-08 Thread Adrian Klaver

On 06/08/2017 08:13 AM, ADSJ (Adam Sjøgren) wrote:

Achilleas writes:





Anyone has a handy little script lying around?


http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html




   Thanks for the response!

 Adam




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-08 Thread Adam Sjøgren
Harry writes:

> The second vacuum causes an ERROR identical to that you are reporting
> below (unexpected chunk number n (expected n) for toast value...).
> However it may take up to ten attempts to replicate it.

Interesting.

> Out of interest, are you using any tablespaces other than pg_default?
> I can only replicate the issue when using separately mounted
> tablespaces.

No, we are using pg_default only.

I hope your finding can be reproduced, it would be really interesting to
see.


  Best regards,

Adam

-- 
 "Lägg ditt liv i min handAdam Sjøgren
  Sälj din själ till ett band"  a...@novozymes.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100

2017-06-08 Thread Adam Sjøgren
Achilleas writes:

> First try to find which tables those toast relations refer to :
> select 10919630::regclass , 10920100::regclass ;
> Are those critical tables? Can you restore them somehow?

They are our two big tables, containing the bulk of our data (one with
168M rows, the other with 320M rows).

They are constantly being updated, but if I can identify the affected
rows, I can restore a backup on another machine and cherry pick them
from there.

> Also you may consider
> REINDEX TABLE pg_toast.pg_toast_10920100;
> REINDEX TABLE pg_toast.pg_toast_10919630;
> REINDEX TABLE ;
> REINDEX TABLE ;
>
> also VACUUM the above tables.

Yes, but I'd like to find out why it happens, because cleaning up
and having the corruption reoccur is not so fun.

> You might want to write a function which iterates over the damaged
> table's rows in order to identify the damaged row(s). And then do some
> good update to create a new version.

Yes - we started by doing a quick pg_dump, but I guess we should switch
to something that can tell us exactly what rows hit the problem.

Anyone has a handy little script lying around?
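
Something like the following might do, with hypothetical names (a primary-key
column "id" and a toasted column "payload" on the damaged table); it detoasts
each row and reports the ones that error out:

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM damaged_table LOOP
        BEGIN
            -- force detoasting of the suspect column for this row
            PERFORM length(payload::text) FROM damaged_table WHERE id = r.id;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'damaged row id = %: %', r.id, SQLERRM;
        END;
    END LOOP;
END
$$;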


  Thanks for the response!

Adam

-- 
 "Lägg ditt liv i min handAdam Sjøgren
  Sälj din själ till ett band"  a...@novozymes.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Redo the filenode link in tablespace

2017-06-08 Thread Adrian Klaver

On 06/07/2017 06:05 PM, tel medola wrote:

Holy shit! (sorry)

Thanks, thanks!!!

It worked!

My goodness!
After I pointed to the filenode, I did a reindex on the toast and some 
records have already been located.


Great, glad that it worked. Just realize we have been modifying a system 
catalog(pg_class) directly, which is generally a dangerous thing to do. 
I would take a backup of the table and data via pg_dump to be on the 
safe side. This assumes you have the space to store it, though the 
backup when compressed should be a good deal smaller then the actual 
table size. You might also think about moving the tables to a new 
tablespace to get a clean tablespace directory. Again this assumes space 
available.
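
For example, with hypothetical names (the pg_dump line is run from the shell, 
the ALTER TABLE from psql; note SET TABLESPACE moves the table but not its 
indexes, which need their own ALTER INDEX):

-- shell: compressed, single-table backup
--   pg_dump -Fc -t big_table mydb -f big_table.dump

-- psql: move the table to a freshly created tablespace
ALTER TABLE big_table SET TABLESPACE clean_tablespace;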




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_catalog tables don't have constraints?

2017-06-08 Thread Neil Anderson
> The bigger picture here is that catalog changes are supposed to be
> executed by C code in response to DDL commands, and it's the C code
> that is charged with maintaining catalog consistency.  Constraints
> would be useful if we supported updating the catalogs with direct
> SQL manipulations; but we don't really.  You can do that, if you're
> a superuser who's willing to take risks, but our policy is that if
> you break the catalogs that way you get to keep both pieces.

Ah! That makes sense. Constraints are there to help you when inserting
rows and when it comes to the catalogs you shouldn't be doing that
directly. I think for the diagram I'll add the links manually with the
caveat that they are not constraints, just edges in a graph.

Thanks for the details Tom!



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Redo the filenode link in tablespace

2017-06-08 Thread Karsten Hilbert
On Thu, Jun 08, 2017 at 08:11:30AM -0300, tel medola wrote:

> Sure!
> It's going to be a little long,

That's the point :-)

That way, people of the future can benefit from
Adrian's excellent effort.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Redo the filenode link in tablespace

2017-06-08 Thread tel medola
Sure!
It's going to be a little long, but I write with great pleasure.

2017-06-08 8:04 GMT-03:00 Karsten Hilbert :

> On Thu, Jun 08, 2017 at 07:53:01AM -0300, tel medola wrote:
>
> > I would like to thank Adrian very much for his great help and patience.
> > Without your help, most likely I would be looking for another job now,
> > thank you very much !!!
> >
> > Thanks to the database being Postgres and the community being so strong
> and
> > united, everything worked out in the end.
> > Thank you very much the people who maintains Postgres (I could even help
> in
> > some, because I'm a programmer too). And
> > Thanks also to the people who dedicate their time to helping unknow
> people
> > with problems, especially to Adrian.
>
> Hi, I wonder whether you might muster the time to do a
> writeup for the benefit of the list archive -- describe what
> the problem was, how it came about, and how it was solved ?
>
> Thanks,
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Redo the filenode link in tablespace

2017-06-08 Thread Karsten Hilbert
On Thu, Jun 08, 2017 at 07:53:01AM -0300, tel medola wrote:

> I would like to thank Adrian very much for his great help and patience.
> Without your help, most likely I would be looking for another job now,
> thank you very much !!!
> 
> Thanks to the database being Postgres and the community being so strong and
> united, everything worked out in the end.
> Thank you very much the people who maintains Postgres (I could even help in
> some, because I'm a programmer too). And
> Thanks also to the people who dedicate their time to helping unknow people
> with problems, especially to Adrian.

Hi, I wonder whether you might muster the time to do a
writeup for the benefit of the list archive -- describe what
the problem was, how it came about, and how it was solved ?

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Redo the filenode link in tablespace

2017-06-08 Thread tel medola
Hello guys.
I would like to thank Adrian very much for his great help and patience.
Without your help, most likely I would be looking for another job now,
thank you very much !!!

Thanks to the database being Postgres and the community being so strong and
united, everything worked out in the end.
Thank you very much the people who maintains Postgres (I could even help in
some, because I'm a programmer too). And
Thanks also to the people who dedicate their time to helping unknow people
with problems, especially to Adrian.

Thanks
Roberto.

2017-06-07 22:05 GMT-03:00 tel medola :

> Holy shit! (sorry)
>
> Thanks, thanks!!!
>
> It worked!
>
> My goodness!
> After I pointed to the filenode, I did a reindex on the toast and some
> records have already been located.
>
>
>
> 2017-06-07 17:58 GMT-03:00 Adrian Klaver :
>
>> On 06/07/2017 07:53 AM, tel medola wrote:
>>
>>>
>>> Change the relfilenode in above from 13741353 to 5214493
>>> /I've made no change yet, but I will.../
>>>
>>>
>>>
>> What is not clear is what 5214495 is?
>>> /Not to me either/
>>>
>>> select * from pg_class where relfilenode = 5214495;
>>> /returns: none records/
>>>
>>> But I'm worried about the select error. You are returning the
>>> table pg_toast_9277966, not pg_toast_5214489... below
>>> ERROR:  missing chunk number 0 for toast value 10259186 in
>>> pg_toast_9277966
>>>
>>> Could it be because of the filenode link that is still pointing to
>>> another? -> 13741353
>>>
>>
>>
>> That is what I am betting.
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>