Re: PG12 autovac issues

2020-03-26 Thread Michael Paquier
On Thu, Mar 26, 2020 at 09:46:47AM -0500, Justin King wrote:
> Nope, it was just these tables that were looping over and over while
> nothing else was getting autovac'd.  I'm happy to share the full log
> if you'd like.

Thanks, that could help.  If that's very large, it could be a problem
to send that to the lists, but you could send me directly a link to
it and I'll try to extract more information for the lists.  While
testing for reproducing the issue, I have noticed that basically one
set of catalog tables happened to see this "skipping redundant" log. 
And I am wondering if we have a match with the set of catalog tables
looping.

> I did have to remove it from this state, but I can undo my workaround
> and, undoubtedly, it'll end up back there.  Let me know if there's
> something specific you'd like me to provide when it happens!

For now I think it's fine.  Note that Julien and I have an environment
where the issue can be reproduced easily (it takes roughly 12 hours
until the wraparound cutoffs are reached with the benchmark and
settings used), and we are checking things using a patched instance
with 2aa6e33 reverted.  I think that we are accumulating enough
evidence that this change was not a good idea anyway, thanks to the
information you sent, so we will likely first revert 2aa6e33 from the
master and REL_12_STABLE branches, before looking at the issues with
the catalogs for those anti-wraparound and non-aggressive jobs (this
looks like a relcache issue with the catalogs in question).
--
Michael




Re: Ident authentication failed

2020-03-26 Thread Ted To
Thanks for the many responses I've gotten!  What a useful and welcoming 
community!


I finally managed to get it configured following: 
https://github.com/miniflux/miniflux/issues/80#issuecomment-375996546


On 3/26/20 4:24 PM, Adrian Klaver wrote:

On 3/26/20 12:40 PM, Ted To wrote:
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


Alright from here:

https://miniflux.app/docs/configuration.html
The Golang Postgres client pq is being used and:

"The default value for DATABASE_URL is user=postgres password=postgres 
dbname=miniflux2 sslmode=disable."


From here:

https://godoc.org/github.com/lib/pq#hdr-Connection_String_Parameters

"host - The host to connect to. Values that start with / are for unix
  domain sockets. (default is localhost)"

So unless you have specified a host in the conf file you need to be 
paying attention to the lines in pg_hba.conf that start with host.


Were the two lines you added above for host?

You can check what is happening on the server end by looking in the 
Postgres logs. That will also show you the connection string.



Do you have more than one instance of Postgres on the machine?



On 3/26/20 3:35 PM, Ron wrote:
You're only allowing "trust" authentication, not "ident" 
authentication.


On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out 
how to configure postgres for that purpose.  (The miniflux 
instructions are very terse.)  Regardless of whether I run the 
command as the postgres user or the miniflux user, I get similar 
errors.  I am running Centos 7 using the updated scl version of 
postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file 
includes


# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To








Re: Ident authentication failed

2020-03-26 Thread Christoph Moench-Tegeder
## Ted To (t...@theo.to):

> Thank you -- I added two lines to the hba file to allow for ident
> authentication, restarted postgres and still the same errors.

You probably don't want "ident" authentication - that's the thing
with "identd" (see RFC 1413), which even 20 years ago was only used
in connection with IRC, but not really for "authentication".

You'd rather want "scram-sha-256" as the authentication method (hoping
that whatever-go-uses can do that - come on, we got that in version 10),
or (if that's not possible) "md5". See
https://www.postgresql.org/docs/12/auth-methods.html and
https://www.postgresql.org/docs/12/auth-pg-hba-conf.html .
Note that "password_encryption" has to match that authentication method
from pg_hba.conf, that's described in
https://www.postgresql.org/docs/12/auth-password.html .
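
As a rough sketch of what that could look like (illustrative values
only; on 9.6, which the original poster runs, "md5" must be
substituted, since scram-sha-256 only exists from version 10 on):

```
# postgresql.conf -- must match the METHOD used in pg_hba.conf
password_encryption = scram-sha-256

# pg_hba.conf
# TYPE  DATABASE   USER      ADDRESS        METHOD
local   miniflux   miniflux                 scram-sha-256
host    miniflux   miniflux  127.0.0.1/32   scram-sha-256
```

After changing password_encryption, re-set the role's password so it
is stored in the new format, then reload the configuration:

```
ALTER ROLE miniflux WITH PASSWORD 'new-password';
SELECT pg_reload_conf();
```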

Regards,
Christoph

-- 
Spare Space




Re: Ident authentication failed

2020-03-26 Thread Adrian Klaver

On 3/26/20 12:40 PM, Ted To wrote:
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


Alright from here:

https://miniflux.app/docs/configuration.html
The Golang Postgres client pq is being used and:

"The default value for DATABASE_URL is user=postgres password=postgres 
dbname=miniflux2 sslmode=disable."


From here:

https://godoc.org/github.com/lib/pq#hdr-Connection_String_Parameters

"host - The host to connect to. Values that start with / are for unix
  domain sockets. (default is localhost)"

So unless you have specified a host in the conf file you need to be 
paying attention to the lines in pg_hba.conf that start with host.
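
For example (illustrative values only; the actual socket directory
depends on the packaging, and the SCL build in this thread may use a
different path):

```
DATABASE_URL="host=/var/run/postgresql user=miniflux password=secret dbname=miniflux2 sslmode=disable"
```

With a host value starting with "/", pq uses a Unix-domain socket, so
the "local" lines in pg_hba.conf apply rather than the "host" lines.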


Were the two lines you added above for host?

You can check what is happening on the server end by looking in the 
Postgres logs. That will also show you the connection string.



Do you have more than one instance of Postgres on the machine?



On 3/26/20 3:35 PM, Ron wrote:

You're only allowing "trust" authentication, not "ident" authentication.

On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out how 
to configure postgres for that purpose.  (The miniflux instructions 
are very terse.)  Regardless of whether I run the command as the 
postgres user or the miniflux user, I get similar errors.  I am 
running Centos 7 using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Ident authentication failed

2020-03-26 Thread Rob Sargent


> On Mar 26, 2020, at 1:43 PM, Ted To  wrote:
> 
> 
> 
> On 3/26/20 3:35 PM, Adrian Klaver wrote:
>> Can you show the entire connection string used by miniflux? 
> The miniflux command is binary so, no, not easily.
>> 
>> Did you reload/restart the server after making the changes below? 
> Yes.
>> 
>>> 

Do you have psql client?  Connect as postgres user?



Re: Ident authentication failed

2020-03-26 Thread Ted To


On 3/26/20 3:35 PM, Adrian Klaver wrote:

Can you show the entire connection string used by miniflux?

The miniflux command is binary so, no, not easily.


Did you reload/restart the server after making the changes below?

Yes.




My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To






Re: How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread Adrian Klaver

On 3/26/20 11:38 AM, David Gauthier wrote:

sqf-> SELECT sr.project,
sqf->     sr.sqf_id,
sqf->     wa.wa_path,
sqf->     sr.cbwa_type,
sqf->     sr.status,
sqf->     sr.nightly_rg_cl,
sqf->     ( SELECT max(fse.end_datetime) AS max
sqf(>            FROM public.flow_step_events fse
sqf(>           WHERE fse.sqf_id = sr.sqf_id) AS last_sqf_step_end,
sqf->     ( SELECT DISTINCT f.perl_sub_name
sqf(>            FROM public.flows f,
sqf(>             public.flow_step_events fse
sqf(>           WHERE f.flow_type = fse.flow_type AND fse.sqf_id = 
sr.sqf_id AND f.step_number = (( SELECT max(fse2.step_number) AS max

sqf(>                    FROM public.flow_step_events fse2
sqf(>                   WHERE fse2.sqf_id = sr.sqf_id))) AS last_step_run
sqf->    FROM public.sqf_runs sr,
sqf->     public.workareas wa
sqf->   WHERE wa.current_user_sqf_id = sr.sqf_id
sqf->   ORDER BY sr.project, wa.wa_path, (( SELECT max(fse.end_datetime) 
AS max

sqf(>            FROM public.flow_step_events fse
sqf(>           WHERE fse.sqf_id = sr.sqf_id));
CREATE VIEW

sqf=> \d+ current_workarea_users;
                       View "public.current_workarea_users"
       Column       |           Type           | Modifiers | Storage  | Description
--------------------+--------------------------+-----------+----------+-------------
 project            | text                     |           | extended |
 sqf_id             | text                     |           | extended |
 wa_path            | text                     |           | extended |
 cbwa_type          | text                     |           | extended |
 status             | text                     |           | extended |
 nightly_rg_cl      | integer                  |           | plain    |
 last_sqf_step_end  | timestamp with time zone |           | plain    |
 last_step_run      | text                     |           | extended |
View definition:
  SELECT sr.project,
     sr.sqf_id,
     wa.wa_path,
     sr.cbwa_type,
     sr.status,
     sr.nightly_rg_cl,
     ( SELECT max(fse.end_datetime) AS max
            FROM flow_step_events fse
           WHERE fse.sqf_id = sr.sqf_id) AS last_sqf_step_end,
     ( SELECT DISTINCT f.perl_sub_name
            FROM flows f,
             flow_step_events fse
           WHERE f.flow_type = fse.flow_type AND fse.sqf_id = sr.sqf_id 
AND f.step_number = (( SELECT max(fse2.step_number) AS max

                    FROM flow_step_events fse2
                   WHERE fse2.sqf_id = sr.sqf_id))) AS last_step_run
    FROM sqf_runs sr,
     workareas wa
   WHERE wa.current_user_sqf_id = sr.sqf_id
   ORDER BY sr.project, wa.wa_path, (( SELECT max(fse.end_datetime) AS max
            FROM flow_step_events fse
           WHERE fse.sqf_id = sr.sqf_id));

sqf=>


You can see the "public." refs in the create view, but not echoed in the 
stored view def.




See this post:

https://www.postgresql.org/message-id/31367.1572815723%40sss.pgh.pa.us


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Ident authentication failed

2020-03-26 Thread Adrian Klaver

On 3/26/20 12:40 PM, Ted To wrote:
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


Without knowing how the client is trying to connect to the server this 
issue will be difficult to solve.




On 3/26/20 3:35 PM, Ron wrote:

You're only allowing "trust" authentication, not "ident" authentication.

On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out how 
to configure postgres for that purpose.  (The miniflux instructions 
are very terse.)  Regardless of whether I run the command as the 
postgres user or the miniflux user, I get similar errors.  I am 
running Centos 7 using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Ident authentication failed

2020-03-26 Thread Ted To
Thank you -- I added two lines to the hba file to allow for ident 
authentication, restarted postgres and still the same errors.


On 3/26/20 3:35 PM, Ron wrote:

You're only allowing "trust" authentication, not "ident" authentication.

On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out how 
to configure postgres for that purpose.  (The miniflux instructions 
are very terse.)  Regardless of whether I run the command as the 
postgres user or the miniflux user, I get similar errors.  I am 
running Centos 7 using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the 
password are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using 
the same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file 
uses the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To





Re: Ident authentication failed

2020-03-26 Thread Adrian Klaver

On 3/26/20 12:29 PM, Ted To wrote:

Hi,

I'm trying to configure miniflux and am struggling to figure out how to 
configure postgres for that purpose.  (The miniflux instructions are 
very terse.)  Regardless of whether I run the command as the postgres 
user or the miniflux user, I get similar errors.  I am running Centos 7 
using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the password 
are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using the 
same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file uses 
the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"


Can you show the entire connection string used by miniflux?

Did you reload/restart the server after making the changes below?



My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Ident authentication failed

2020-03-26 Thread Ron

You're only allowing "trust" authentication, not "ident" authentication.

On 3/26/20 2:29 PM, Ted To wrote:


Hi,

I'm trying to configure miniflux and am struggling to figure out how to 
configure postgres for that purpose.  (The miniflux instructions are very 
terse.)  Regardless of whether I run the command as the postgres user or 
the miniflux user, I get similar errors.  I am running Centos 7 using the 
updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the password 
are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using the 
same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file uses 
the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To



--
Angular momentum makes the world go 'round.




Ident authentication failed

2020-03-26 Thread Ted To

Hi,

I'm trying to configure miniflux and am struggling to figure out how to 
configure postgres for that purpose.  (The miniflux instructions are 
very terse.)  Regardless of whether I run the command as the postgres 
user or the miniflux user, I get similar errors.  I am running Centos 7 
using the updated scl version of postgres (9.6.10).


With the following, the postgres user "miniflux" along with the password 
are specified in /etc/miniflux.conf.


$ miniflux -c /etc/miniflux.conf -migrate
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "miniflux"

I have no problems logging into postgres as the miniflux user using the 
same password specified in /etc/miniflux.conf.


Running the same command without specifying the configuration file uses 
the postgres user and also fails


$ miniflux -migrate
[INFO] The default value for DATABASE_URL is used
Current schema version: 0
Latest schema version: 26
Migrating to version: 1
[FATAL] [Migrate] pq: Ident authentication failed for user "postgres"

My /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf file includes

# TYPE  DATABASE    USER    ADDRESS METHOD

local   miniflux   miniflux  trust
local   miniflux   postgres  trust

# "local" is for Unix domain socket connections only
local   all all peer
# IPv4 local connections:
host    all all 127.0.0.1/32 trust

I'm at a complete loss.

Any suggestions?

Thanks,

Ted To



Re: Duplicate key violation on upsert

2020-03-26 Thread Matt Magoffin

> On 27/03/2020, at 5:26 AM, Adrian Klaver  wrote:
> 
> Well morning and coffee helped some, but not enough to offer blinding 
> insight. Reviewing the function above, the TimescaleDB insert block function 
> and the overview of the TimescaleDB hypertable architecture leads me to 
> believe there is some sort of conflict between the solarnetwork functions and 
> the TimescaleDB hypertable actions. It is a wishy-washy answer as I do not 
> understand the TimescaleDB architecture well enough. You might want to reach 
> to the TimescaleDB community to see if they can offer any further insight.

Fair enough! Thank you for taking the time to look through this issue with me, 
I really appreciate it. I’ll reach out to the TimescaleDB folks and see what 
they think.

— m@

Re: How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread David Gauthier
sqf-> SELECT sr.project,
sqf-> sr.sqf_id,
sqf-> wa.wa_path,
sqf-> sr.cbwa_type,
sqf-> sr.status,
sqf-> sr.nightly_rg_cl,
sqf-> ( SELECT max(fse.end_datetime) AS max
sqf(>FROM public.flow_step_events fse
sqf(>   WHERE fse.sqf_id = sr.sqf_id) AS last_sqf_step_end,
sqf-> ( SELECT DISTINCT f.perl_sub_name
sqf(>FROM public.flows f,
sqf(> public.flow_step_events fse
sqf(>   WHERE f.flow_type = fse.flow_type AND fse.sqf_id =
sr.sqf_id AND f.step_number = (( SELECT max(fse2.step_number) AS max
sqf(>FROM public.flow_step_events fse2
sqf(>   WHERE fse2.sqf_id = sr.sqf_id))) AS last_step_run
sqf->FROM public.sqf_runs sr,
sqf-> public.workareas wa
sqf->   WHERE wa.current_user_sqf_id = sr.sqf_id
sqf->   ORDER BY sr.project, wa.wa_path, (( SELECT max(fse.end_datetime) AS
max
sqf(>FROM public.flow_step_events fse
sqf(>   WHERE fse.sqf_id = sr.sqf_id));
CREATE VIEW

sqf=> \d+ current_workarea_users;
                       View "public.current_workarea_users"
       Column       |           Type           | Modifiers | Storage  | Description
--------------------+--------------------------+-----------+----------+-------------
 project            | text                     |           | extended |
 sqf_id             | text                     |           | extended |
 wa_path            | text                     |           | extended |
 cbwa_type          | text                     |           | extended |
 status             | text                     |           | extended |
 nightly_rg_cl      | integer                  |           | plain    |
 last_sqf_step_end  | timestamp with time zone |           | plain    |
 last_step_run      | text                     |           | extended |
View definition:
 SELECT sr.project,
sr.sqf_id,
wa.wa_path,
sr.cbwa_type,
sr.status,
sr.nightly_rg_cl,
( SELECT max(fse.end_datetime) AS max
   FROM flow_step_events fse
  WHERE fse.sqf_id = sr.sqf_id) AS last_sqf_step_end,
( SELECT DISTINCT f.perl_sub_name
   FROM flows f,
flow_step_events fse
  WHERE f.flow_type = fse.flow_type AND fse.sqf_id = sr.sqf_id AND
f.step_number = (( SELECT max(fse2.step_number) AS max
   FROM flow_step_events fse2
  WHERE fse2.sqf_id = sr.sqf_id))) AS last_step_run
   FROM sqf_runs sr,
workareas wa
  WHERE wa.current_user_sqf_id = sr.sqf_id
  ORDER BY sr.project, wa.wa_path, (( SELECT max(fse.end_datetime) AS max
   FROM flow_step_events fse
  WHERE fse.sqf_id = sr.sqf_id));

sqf=>


You can see the "public." refs in the create view, but not echoed in the
stored view def.

On Thu, Mar 26, 2020 at 1:58 PM Adrian Klaver 
wrote:

> On 3/26/20 10:55 AM, David Gauthier wrote:
> > Thanks Adrian for the quick reply.
> > I don't have a lot of choice regarding PG version.  I work for a large
> > corp with an IT dept which offers the version I have.  They create VMs
> > which are DB servers and this is the best they offer.  But I could
> > request something newer.  Never hurts to try.
> >
> > Ya, I kinda figured that there's nothing wrong with referencing tables
> > from the default (public) schema.  So I tried to redefine the view by
> > referencing the public tables literally, as in "public.thetable".  The
> > plan was to do some sort of global replace of "public." with "myschem."
> > in the output of pg_dump, maybe with sed or something.  But even after
> > explicitly using "public.", it didn't stick in the view def.
>
> Can we see an example view definition?
>
> >
> > On Thu, Mar 26, 2020 at 1:34 PM Adrian Klaver wrote:
> >
> > On 3/26/20 10:16 AM, David Gauthier wrote:
> >  > Here's an interesting one for you...
> >  > psql (9.6.7, server 11.3) on linux
> >  >
> >  > I have 2 DBs, different servers/instances.  I want to take all the
> >  > metadata and data for a set of tables/views in the public schema
> > of one
> >  > DB and move it all over to be inside a schema of a second
> > DB/instance.
> >
> >
> > Well first, the current minor version of 9.6 is .17 so you are 10
> > releases behind. In fact the 9.6.8 release includes changes that
> impact
> > the below:
> > https://www.postgresql.org/docs/9.6/release-9-6-8.html
> >
> >  >
> >  > I'm using pg_dump to create the script and I believe I can insert
> > a "set
> >  > search_path=myschem" in the output of pg_dump such that when it
> > runs,
> >  > the "CREATE TABLE", "CREATE VIEW", "GRANT...", etc commands,
> > will
> >  > all go into the new schema (which I have prepared).  Problem is
> > the view
> >  > defs.
> >  > The view defs do not prefix the referenced tables with
> > "myschem.", so
> >  > the CREATE VIEW xyx commands fail.
> >  >
> >  > Is there 

Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread Artjom Simon

On 26.03.20 17:05, J2eeInside J2eeInside wrote:
>> P.S. I need to index .pdf, .html and MS Word .doc/.docx files, is
>> there any constraints in Ful Text search regarding those file types?
>
> - Can you recommend those tools you mention above/any useful resource 
on how to do that?



For PDFs, I know of at least two tools that can extract text. Try 
Ghostscript:


gs -sDEVICE=txtwrite -o output.txt input.pdf


or a tool called 'pdftotext':

pdftotext [options] [PDF-file [text-file]]

Both give slightly different results, mainly in terms of indentation and 
layout of the generated plain text, and how they deal with tabular layouts.


Note that PDF is a container format that can embed virtually anything: 
text, images, flash videos, ...
You'll get good results if the PDF input is plain text. If you're 
dealing with embedded images like scanned documents, you'll probably 
need a OCR pass with tools like 'tesseract' to extract the recognized text.


You'll need similar tools to extract the text from DOC and HTML files 
since you're only interested in their plain text representation, not the 
meta data and markup.
Finding converters from HTML/DOC to plain text shouldn't be too hard. 
You could also try to find a commercial document conversion vendor, or 
try to convert HTML and DOC both to PDF so you'll only have to deal with 
PDF-to-text extraction in the end.
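
For the HTML side, a minimal stdlib-only sketch is possible with
Python's html.parser (class and function names here are illustrative,
not from any of the tools mentioned above):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect only the visible text of an HTML document,
    skipping the contents of <script> and <style> elements."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0  # >0 while inside a skipped element

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())


def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


print(html_to_text(
    "<html><head><style>p{color:red}</style></head>"
    "<body><h1>Title</h1><p>Body text.</p></body></html>"
))
# -> Title Body text.
```

The resulting plain text is what would then be fed into to_tsvector
for indexing; real-world HTML may additionally need entity handling
and encoding detection.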


Good luck!

Artjom


--
Artjom Simon




Re: How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread Adrian Klaver

On 3/26/20 10:55 AM, David Gauthier wrote:

Thanks Adrian for the quick reply.
I don't have a lot of choice regarding PG version.  I work for a large 
corp with an IT dept which offers the version I have.  They create VMs 
which are DB servers and this is the best they offer.  But I could 
request something newer.  Never hurts to try.


Ya, I kinda figured that there's nothing wrong with referencing tables 
from the default (public) schema.  So I tried to redefine the view by 
referencing the public tables literally, as in "public.thetable".  The 
plan was to do some sort of global replace of "public." with "myschem." 
in the output of pg_dump, maybe with sed or something.  But even after 
explicitly using "public.", it didn't stick in the view def.


Can we see an example view definition?



On Thu, Mar 26, 2020 at 1:34 PM Adrian Klaver wrote:


On 3/26/20 10:16 AM, David Gauthier wrote:
 > Here's an interesting one for you...
 > psql (9.6.7, server 11.3) on linux
 >
 > I have 2 DBs, different servers/instances.  I want to take all the
 > metadata and data for a set of tables/views in the public schema
of one
 > DB and move it all over to be inside a schema of a second
DB/instance.


Well first, the current minor version of 9.6 is .17 so you are 10
releases behind. In fact the 9.6.8 release includes changes that impact
the below:
https://www.postgresql.org/docs/9.6/release-9-6-8.html

 >
 > I'm using pg_dump to create the script and I believe I can insert
a "set
 > search_path=myschem" in the output of pg_dump such that when it
runs,
 > the "CREATE TABLE", "CREATE VIEW", "GRANT...", etc commands,
will
 > all go into the new schema (which I have prepared).  Problem is
the view
 > defs.
 > The view defs do not prefix the referenced tables with
"myschem.", so
 > the CREATE VIEW xyx commands fail.
 >
 > Is there a way to do this ?

By manually changing the definition? It is not an error for a VIEW in
one schema to refer to tables in other schemas. AFAIK the code has no
way of knowing you want to move the underlying tables just by
specifying
a search_path.

 >
 > Thanks in Advance.


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread David Gauthier
Thanks Adrian for the quick reply.
I don't have a lot of choice regarding PG version.  I work for a large corp
with an IT dept which offers the version I have.  They create VMs which are
DB servers and this is the best they offer.  But I could request something
newer.  Never hurts to try.

Ya, I kinda figured that there's nothing wrong with referencing tables from
the default (public) schema.  So I tried to redefine the view by
referencing the public tables literally, as in "public.thetable".  The plan
was to do some sort of global replace of "public." with "myschem." in the
output of pg_dump, maybe with sed or something.  But even after explicitly
using "public.", it didn't stick in the view def.


On Thu, Mar 26, 2020 at 1:34 PM Adrian Klaver 
wrote:

> On 3/26/20 10:16 AM, David Gauthier wrote:
> > Here's an interesting one for you...
> > psql (9.6.7, server 11.3) on linux
> >
> > I have 2 DBs, different servers/instances.  I want to take all the
> > metadata and data for a set of tables/views in the public schema of one
> > DB and move it all over to be inside a schema of a second DB/instance.
>
>
> Well first, the current minor version of 9.6 is .17 so you are 10
> releases behind. In fact the 9.6.8 release includes changes that impact
> the below:
> https://www.postgresql.org/docs/9.6/release-9-6-8.html
>
> >
> > I'm using pg_dump to create the script and I believe I can insert a "set
> > search_path=myschem" in the output of pg_dump such that when it runs,
> > the "CREATE TABLE", "CREATE VIEW", "GRANT...", etc commands, will
> > all go into the new schema (which I have prepared).  Problem is the view
> > defs.
> > The view defs do not prefix the referenced tables with "myschem.", so
> > the CREATE VIEW xyx commands fail.
> >
> > Is there a way to do this ?
>
> By manually changing the definition? It is not an error for a VIEW in
> one schema to refer to tables in other schemas. AFAIK the code has no
> way of knowing you want to move the underlying tables just by specifying
> a search_path.
>
> >
> > Thanks in Advance.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread Adrian Klaver

On 3/26/20 10:16 AM, David Gauthier wrote:

Here's an interesting one for you...
psql (9.6.7, server 11.3) on linux

I have 2 DBs, different servers/instances.  I want to take all the 
metadata and data for a set of tables/views in the public schema of one 
DB and move it all over to be inside a schema of a second DB/instance.



Well first, the current minor version of 9.6 is .17 so you are 10 
releases behind. In fact the 9.6.8 release includes changes that impact 
the below:

https://www.postgresql.org/docs/9.6/release-9-6-8.html



I'm using pg_dump to create the script and I believe I can insert a "set 
search_path=myschem" in the output of pg_dump such that when it runs, 
the "CREATE TABLE", "CREATE VIEW", "GRANT...", etc commands, will 
all go into the new schema (which I have prepared).  Problem is the view 
defs.
The view defs do not prefix the referenced tables with "myschem.", so 
the CREATE VIEW xyx commands fail.


Is there a way to do this ?


By manually changing the definition? It is not an error for a VIEW in 
one schema to refer to tables in other schemas. AFAIK the code has no 
way of knowing you want to move the underlying tables just by specifying 
a search_path.




Thanks in Advance.



--
Adrian Klaver
adrian.kla...@aklaver.com




How can I recreate a view in a new schema such that the view def references tables in the new schema ?

2020-03-26 Thread David Gauthier
Here's an interesting one for you...
psql (9.6.7, server 11.3) on linux

I have 2 DBs, different servers/instances.  I want to take all the metadata
and data for a set of tables/views in the public schema of one DB and move
it all over to be inside a schema of a second DB/instance.

I'm using pg_dump to create the script and I believe I can insert a "set
search_path=myschem" in the output of pg_dump such that when it runs, the
"CREATE TABLE", "CREATE VIEW", "GRANT...", etc commands, will all go
into the new schema (which I have prepared).  Problem is the view defs.
The view defs do not prefix the referenced tables with "myschem.", so the
CREATE VIEW xyx commands fail.

Is there a way to do this ?

Thanks in Advance.
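The pg_dump-plus-sed plan discussed in this thread can be sketched as follows. Everything here is illustrative: the target schema `myschem`, the file paths, and the sample dump excerpt standing in for real pg_dump output. Real dump scripts need review afterwards, since any identifier that happens to contain the string "public." would also be rewritten.

```shell
# Illustrative only: rewrite "public." qualifications in a dump script.
# A small sample excerpt stands in for real pg_dump output.
cat > /tmp/dump_excerpt.sql <<'EOF'
SET search_path = public;
CREATE TABLE public.thetable (id integer);
CREATE VIEW public.theview AS
 SELECT thetable.id FROM public.thetable;
EOF
# Rewrite schema-qualified names and the search_path line.
sed -e 's/public\./myschem./g' \
    -e 's/search_path = public/search_path = myschem/' \
    /tmp/dump_excerpt.sql > /tmp/dump_myschem.sql
cat /tmp/dump_myschem.sql
```

Note this only rewrites names that are already schema-qualified in the dump; as noted above, view definitions that reference tables without qualification rely on search_path at CREATE VIEW time.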


Re: Duplicate key violation on upsert

2020-03-26 Thread Adrian Klaver

On 3/25/20 5:23 PM, Matt Magoffin wrote:


On 23/03/2020, at 1:10 PM, Adrian Klaver wrote:


So the query is in the function solardatum.store_datum()?

If so what is it doing?


Yes. This function first performs the INSERT INTO the 
solardatum.da_datum table that we’re discussing here; then it inserts 
into two different tables. If it helps, the actual SQL is available here:


https://github.com/SolarNetwork/solarnetwork-central/blob/4fa585929a5526187ade0e842c809837647c6a41/solarnet-db-setup/postgres/postgres-init-generic-datum-x-functions.sql#L203-L242


Well morning and coffee helped some, but not enough to offer blinding 
insight. Reviewing the function above, the TimescaleDB insert block 
function and the overview of the TimescaleDB hypertable architecture 
leads me to believe there is some sort of conflict between the 
solarnetwork functions and the TimescaleDB hypertable actions. It is a 
wishy-washy answer as I do not understand the TimescaleDB architecture 
well enough. You might want to reach to the TimescaleDB community to see 
if they can offer any further insight.





And could you capture the values and pass them to a RAISE NOTICE?


It would take me some time to get that change deployed. If I was able 
to, what information do you think would be helpful here, e.g. that 
jdata_a is NULL or not, or something else?


The duplicate key violation occurs infrequently, and it does seem 
appropriate to drop the UNIQUE constraint on the da_datum_x_acc_idx 
given uniqueness is really only wanted on (node_id, ts, source_id). As 
long as I can confirm that query performance doesn’t decrease, I’d like 
to recreate the index without UNIQUE. Then I’m hoping this problem, 
whatever the cause, goes away.


— m@



--
Adrian Klaver
adrian.kla...@aklaver.com
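If the UNIQUE index really is traded for a plain one, that can be done without blocking writes roughly as below. The index name, column list, and ordering are assumptions pieced together from the thread; check the actual definition with \d solardatum.da_datum before doing anything.

```sql
-- Sketch only, names and columns assumed: build the plain replacement
-- first, then drop the UNIQUE one, both concurrently (each statement
-- must run outside an explicit transaction block).
CREATE INDEX CONCURRENTLY da_datum_x_acc_idx_plain
    ON solardatum.da_datum (node_id, source_id, ts DESC);
DROP INDEX CONCURRENTLY solardatum.da_datum_x_acc_idx;
ALTER INDEX solardatum.da_datum_x_acc_idx_plain
    RENAME TO da_datum_x_acc_idx;
```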




Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread J2eeInside J2eeInside
You are welcome, Andreas, and thanks for the useful answer ;-)

On Thu, Mar 26, 2020 at 4:33 PM Andreas Joseph Krogh 
wrote:

> On Wednesday 25 March 2020 at 13:36:38, J2eeInside J2eeInside <
> j2eeins...@gmail.com> wrote:
>
> Hi all,
>
> I hope someone  can help/suggest:
> I'm currently maintaining a project that uses Apache Solr /Lucene. To be
> honest, I would like to replace Solr with Postgres Full Text Search. However,
> there is a huge amount of documents involved - around 200GB. Wondering,
> can Postgres handle this efficiently?
> Does anyone have specific experience, and what should the infrastructure
> look like?
>
> P.S. Not to be confused, Solr works just fine, I just wanted to
> eliminate one component from the whole system (if Full text search can
> replace Solr at all)
>
>
> I see you've gotten some answers but wanted to chime in...
> We search in ~15 mill. emails and ~10 mill. documents (extracted text from
> Word/PDF etc. using Java-tools), and use PG and FTS (gin, not rum) for the
> exact same reasons as Evergreen (it seems). We have to mix FTS with
> domain-specific logic/filtering and that is based on relational data in the
> database. I don't see how we could have done that using an external
> search-engine. Maybe it's easy, I don't have any experience with it.
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andr...@visena.com
> www.visena.com
> 
>
>


Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread J2eeInside J2eeInside
Thanks again.
One final question:

On Thu, Mar 26, 2020 at 4:18 PM Mike Rylander  wrote:

> On Thu, Mar 26, 2020 at 4:03 AM J2eeInside J2eeInside
>  wrote:
> >
>
> > P.S. I need to index .pdf, .html and MS Word .doc/.docx files, are there
> any constraints in Full Text Search regarding those file types?
> >
>
> It can't handle those without some help -- it supports exactly text --
> but you can extract the text using other tools.
>
>
- Can you recommend the tools you mention above, or any useful resource on
how to do that?
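Beyond the thread's own suggestions: pdftotext (from poppler-utils) is a common choice for PDFs, and Apache Tika covers Word and many other formats. For HTML, even a crude tag strip yields text that to_tsvector() can index. A deliberately naive sketch (file paths and the sample document are made up):

```shell
# Toy illustration: strip markup from an HTML file before feeding the
# text to PostgreSQL's to_tsvector().  Real documents deserve a proper
# extractor (pdftotext, Apache Tika, etc.); this sed is intentionally naive
# and will mangle HTML with '>' inside attributes, entities, scripts, etc.
cat > /tmp/doc.html <<'EOF'
<html><body><h1>Release notes</h1>
<p>Full text search in PostgreSQL.</p></body></html>
EOF
sed -e 's/<[^>]*>//g' /tmp/doc.html
```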


Sv: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread Andreas Joseph Krogh

On Wednesday 25 March 2020 at 13:36:38, J2eeInside J2eeInside <
j2eeins...@gmail.com > wrote: 
Hi all,

 I hope someone can help/suggest:
 I'm currently maintaining a project that uses Apache Solr /Lucene. To be 
honest, I would like to replace Solr with Postgres Full Text Search. However, 
there is a huge amount of documents involved - around 200GB. Wondering, can 
Postgres handle this efficiently?
 Does anyone have specific experience, and what should the infrastructure look 
like?

 P.S. Not to be confused, Solr works just fine, I just wanted to eliminate 
one component from the whole system (if Full text search can replace Solr at 
all) 

I see you've gotten some answers but wanted to chime in... 
We search in ~15 mill. emails and ~10 mill. documents (extracted text from 
Word/PDF etc. using Java-tools), and use PG and FTS (gin, not rum) for the 
exact same reasons as Evergreen (it seems). We have to mix FTS with 
domain-specific logic/filtering and that is based on relational data in the 
database. I don't see how we could have done that using an external 
search-engine. Maybe it's easy, I don't have any experience with it. 
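A minimal sketch of this kind of GIN-backed setup, mixing FTS with relational filtering (the table, column, and filter names are invented; the generated column needs PostgreSQL 12+, and websearch_to_tsquery() needs 11+ — older versions would maintain the tsvector with a trigger and use to_tsquery() instead):

```sql
-- Sketch only: a stored tsvector column plus a GIN index.
ALTER TABLE documents
    ADD COLUMN textsearch tsvector
    GENERATED ALWAYS AS (to_tsvector('english', coalesce(body, ''))) STORED;
CREATE INDEX documents_textsearch_gin
    ON documents USING gin (textsearch);

-- FTS combined with domain-specific relational filtering.
SELECT d.id, ts_rank(d.textsearch, q) AS rank
FROM documents d,
     websearch_to_tsquery('english', 'full text search') q
WHERE d.textsearch @@ q
  AND d.owner_id = 42
ORDER BY rank DESC
LIMIT 10;
```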


-- 
Andreas Joseph Krogh 
CTO / Partner - Visena AS 
Mobile: +47 909 56 963 
andr...@visena.com  
www.visena.com  
  


Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread Mike Rylander
On Thu, Mar 26, 2020 at 4:03 AM J2eeInside J2eeInside
 wrote:
>
> Hi Mike, and thanks for the valuable answer!
> In short, you think PG Full Text Search can do the same as Apache Solr?
>

Can it?  I mean, it does today.  Whether it would work for you depends on
your needs and how much effort you can afford to put into the stuff
that is /not/ the full text engine itself, like document normalizers
and search UIs.

There are trade-offs to be made when choosing any tool.  Solr is
great, and so is Lucene (Solr's heart), and so is Elastic Search.  For
that matter, Zebra is awesome for full text indexing, too.  Those all
make indexing a pile of documents easy.  But, none of those are great
as an authoritative data store, so for instance there will necessarily
be drift between your data and the Solr index requiring a full
refresh.  It's also hard to integrate non-document filtering
requirements like I have in my use case.  Both of those are important
to my use case, so PG's full text is my preference.

Solr also didn't exist (publicly) in 2004 when we started building Evergreen. :)

> > P.S. I need to index .pdf, .html and MS Word .doc/.docx files, are there any 
> > constraints in Full Text Search regarding those file types?
>

It can't handle those without some help -- it supports exactly text --
but you can extract the text using other tools.

--
Mike Rylander
 | Executive Director
 | Equinox Open Library Initiative
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  mi...@equinoxinitiative.org
 | web:  http://equinoxinitiative.org

>
> On Wed, Mar 25, 2020 at 3:36 PM Mike Rylander  wrote:
>>
>> On Wed, Mar 25, 2020 at 8:37 AM J2eeInside J2eeInside
>>  wrote:
>> >
>> > Hi all,
>> >
>> > I hope someone  can help/suggest:
>> > I'm currently maintaining a project that uses Apache Solr /Lucene. To be 
>> > honest, I would like to replace Solr with Postgres Full Text Search. 
>> > However, there is a huge amount of documents involved - around 200GB. 
>> > Wondering, can Postgres handle this efficiently?
>> > Does anyone have specific experience, and what should the infrastructure 
>> > look like?
>> >
>> > P.S. Not to be confused, Solr works just fine, I just wanted to 
>> > eliminate one component from the whole system (if Full text search can 
>> > replace Solr at all)
>>
>> I'm one of the core developers (and the primary developer of the
>> search subsystem) for the Evergreen ILS [1] (integrated library system
>> -- think book library, not software library).  We've been using PGs
>> full-text indexing infrastructure since day one, and I can say it is
>> definitely capable of handling pretty much anything you can throw at
>> it.
>>
>> Our indexing requirements are very complex and need to be very
>> configurable, and need to include a lot more than just "search and
>> rank a text column," so we've had to build a ton of infrastructure
>> around record (document) ingest, searching/filtering, linking, and
>> display.  If your indexing and search requirements are stable,
>> specific, and well-understood it should be straightforward,
>> especially if you don't have to take into account non-document
>> attributes like physical location, availability, and arbitrary
>> real-time visibility rules like Evergreen does.
>>
>> As for scale, it's more about document count than total size.  There
>> are Evergreen libraries with several million records to search, and
>> with proper hardware and tuning everything works well.  Our main
>> performance issue has to do with all of the stuff outside the records
>> (documents) themselves that have to be taken into account during
>> search.  The core full-text search part of our queries is extremely
>> performant, and has only gotten better over the years.
>>
>> [1] http://evergreen-ils.org
>>
>> HTH,
>> --
>> Mike Rylander
>>  | Executive Director
>>  | Equinox Open Library Initiative
>>  | phone:  1-877-OPEN-ILS (673-6457)
>>  | email:  mi...@equinoxinitiative.org
>>  | web:  http://equinoxinitiative.org




Re: PG12 autovac issues

2020-03-26 Thread Justin King
On Wed, Mar 25, 2020 at 8:43 PM Michael Paquier  wrote:
>
> On Wed, Mar 25, 2020 at 10:39:17AM -0500, Justin King wrote:
> > This started happening again.  DEBUG1 is enabled:
>
> Thanks for enabling DEBUG1 logs while this happened.
>
> > Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
> > 14:48:26.329 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_tablespace"
> > Mar 25 14:48:26 cowtn postgres[39875]: [35299-1] 2020-03-25
> > 14:48:26.339 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_auth_members"
> > Mar 25 14:48:26 cowtn postgres[39875]: [35300-1] 2020-03-25
> > 14:48:26.350 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_replication_origin"
>
> Are you seeing such log entries happening multiple times for the same
> relations, meaning that autovacuum workers are just looping on the
> same relations all over again?  This part of the logs point to catalog
> tables, but are there other tables within your system facing the same
> logs, particularly the database "feedi" with some of your own tables?

Nope, it was just these tables that were looping over and over while
nothing else was getting autovac'd.  I'm happy to share the full log
if you'd like.

>
> >  postgres=# SELECT oid::regclass, age(relfrozenxid), relfrozenxid FROM
> > pg_class WHERE relfrozenxid <> 0 ORDER BY age(relfrozenxid) DESC LIMIT
> > 1;
> >     oid    |    age    | relfrozenxid
> > -----------+-----------+--------------
> >  pg_authid | 202793549 |   4284570172
>
> Ugh.  I think that this is exactly the thing I was suspecting
> previously:
> - The database stats look sane.
> - The relation stats don't look good and visibly are put in such a
> state that only one type of jobs gets triggered (non-aggressive but
> anti-wraparound), which just keep being skipped and the relation stats
> don't get refreshed.  (Such autovacuum jobs should never happen and we
> have some underlying issues that will need separate care).
>
> If you still have the cluster in this current state (perhaps you are
> not able to keep it longer), could you provide more data about
> pg_class.relfrozenxid for the tables which are mentioned in the logs
> of the type "skipping redundant vacuum to prevent wraparound of table"?

I did have to remove it from this state, but I can undo my workaround
and, undoubtedly, it'll end up back there.  Let me know if there's
something specific you'd like me to provide when it happens!

>
> > Let me know if there's anything else useful I can provide.
>
> Thanks!
> --
> Michael
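When the cluster is back in that state, a query along the lines Michael asks for, limited to the catalogs named in the log, might look like this (a sketch; extend the IN list to whatever relations actually appear in the "skipping redundant vacuum" messages):

```sql
-- Sketch: per-relation freeze bookkeeping for the catalogs seen
-- looping in the log.
SELECT oid::regclass AS rel, relfrozenxid, age(relfrozenxid)
FROM pg_class
WHERE oid IN ('pg_tablespace'::regclass,
              'pg_auth_members'::regclass,
              'pg_replication_origin'::regclass);
```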




Re: PostgreSQL 10 not archiving some WAL files

2020-03-26 Thread Norberto Dellê
>
> >>On Wed, Mar 18, 2020 at 10:57:22AM -0300, Norberto Dellê wrote:
>> This setup worked very well for a long time, but since we upgraded
>> PostgreSQL to newer versions (10.x), sometimes it just skips archiving
some
>> wal files. This seems to happen mainly when the server is shut down.
>> There's no messages about errors in the logs, and also no corresponding
>> .done file in archive_status, it just goes to the next wal file.
>>
>> I would like to know if any of you had or has a problem like this, and if
>> you had, how you solved it.
>
>Hmm.  I don't recall seeing any of that.  This would mean that we are
>either missing the generation of some .ready file, or that some .done
>file gets generated when they should not in archive_status/.  What
>kind of server shutdown are you doing?  An immediate one, so that recovery
>happens at the follow-up startup?  Or is that a clean service
>shutdown?
>--
>Michael
>...

Hi Michael

Almost daily the server is shut down, because my client wants to save energy.
In the logs, it appears that the service was not shut down properly. Here are
the last lines of one log file:

2020-03-16 18:10:03.130 -04 [1964] LOG:  received fast shutdown request
2020-03-16 18:10:03.132 -04 [2948] ERROR:  canceling statement due to user
request
2020-03-16 18:10:03.192 -04 [1964] LOG:  aborting any active transactions
2020-03-16 18:10:03.194 -04 [1964] LOG:  worker process: logical
replication launcher (PID 2948) exited with exit code 1
2020-03-16 18:10:03.210 -04 [2188] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [6688] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [3348] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [6356] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [5736] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [4028] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.211 -04 [2552] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.212 -04 [4280] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.210 -04 [2336] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.227 -04 [6564] FATAL:  terminating connection due to
administrator command
2020-03-16 18:10:03.255 -04 [2884] LOG:  shutting down

When the service is started the next day, this is what Postgres logs:

2020-03-17 07:47:26.721 -04 [2272] LOG:  database system was interrupted;
last known up at 2020-03-16 17:56:20 -04
2020-03-17 07:47:34.974 -04 [2884] FATAL:  the database system is starting
up
2020-03-17 07:47:39.378 -04 [2272] LOG:  database system was not properly
shut down; automatic recovery in progress
2020-03-17 07:47:39.631 -04 [2272] LOG:  redo starts at 38/F30F5108
2020-03-17 07:47:40.038 -04 [3052] FATAL:  the database system is starting
up
2020-03-17 07:47:40.126 -04 [2272] LOG:  redo done at 38/F4001738
2020-03-17 07:47:40.207 -04 [2272] LOG:  last completed transaction was at
log time 2020-03-16 18:06:53.269798-04
2020-03-17 07:47:42.590 -04 [2080] LOG:  database system is ready to accept
connections

In this shutdown/startup process, this PostgreSQL instance skipped
archiving a WAL file.
Another thing I noticed today: in another instance, about a week after
skipping a WAL file, the file was finally archived.
Both of these instances also have streaming replication configured. But I
don't think that is related, because there are other instances skipping
WAL files that don't have streaming replication configured.

Norberto
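One cheap check after such a shutdown is the archiver's bookkeeping directory. The sketch below only simulates its layout under /tmp (on a real server it lives under $PGDATA/pg_wal/, and these segment names are made up):

```shell
# Simulated archive_status directory; a real one lives under
# $PGDATA/pg_wal/archive_status.  ".ready" marks segments the archiver
# still owes the archive; ".done" marks segments already archived.
mkdir -p /tmp/pg_wal/archive_status
touch /tmp/pg_wal/archive_status/0000000100000038000000F3.done
touch /tmp/pg_wal/archive_status/0000000100000038000000F4.ready
# A segment with neither marker that is already gone from pg_wal would
# match the "silently skipped" symptom described above.
ls /tmp/pg_wal/archive_status
```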


Re: core. files inside base directory

2020-03-26 Thread Peter J. Holzer
On 2020-03-26 10:42:46 +, Daulat Ram wrote:
> Hello, I have core.* files inside the base directory of my Postgres 10
> setup. Would you please let me know the importance of these files, when
> they are generated, and whether we can delete them?

Core files are created by the kernel when a process crashes. Unless you
already know which program crashed and why, it is probably worth
investigating.

As a first step you could just use the file utility to find out which
program crashed.

file core.*

should print something like 

core.26166: ELF 64-bit LSB core file x86-64, version 1 (SYSV),
SVR4-style, from 'sleep 120', real uid: ... execfn: '/bin/sleep' ...

for each file. (Of course the program won't be "sleep" in your case.)

To analyze the coredumps further you would have to use a debugger (e.g.
gdb).
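A typical first look might be (the binary path and core file name are examples only):

```
$ file core.26166
$ gdb /usr/lib/postgresql/10/bin/postgres core.26166
(gdb) bt        # backtrace of the crashed process
(gdb) quit
```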

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


signature.asc
Description: PGP signature


core. files inside base directory

2020-03-26 Thread Daulat Ram
Hello, I have core.* files inside the base directory of my Postgres 10 setup.
Would you please let me know the importance of these files, when they are
generated, and whether we can delete them?
/base/
du -sh *
24M     base
8.1G    core.26166
1.5G    core.26258
8.1G    core.27849
12M     core.27951
4.3G    core.2951
8.1G    core.324
4.9G    core.3241
8.1G    core.3295
5.0G    core.3470
5.2G    core.3485
5.3G    core.3699
5.0G    core.3724
5.3G    core.375
5.0G    core.3935
5.3G    core.3960
1.2G    core.408

Thanks,



Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-26 Thread Dmitry Igrishin
Thu, 26 Mar 2020 at 10:08, Ivan E. Panchenko :
>
>
> On 26.03.2020 03:50, Bruce Momjian wrote:
> > On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:
> >> Thanks Tom, that makes sense. Appreciate your time to explain the context.
> >>
> >> I'll followup with Heroku.
> > Also, I have heard PL/V8 is very hard to build for packagers (because of
> > changes by Google in the way V8 is packaged), which has decreased PL/V8
> > adoption.
>
> True.
>
> Btw, Nginx made their own JS implementation
>
> https://nginx.org/en/docs/njs/index.html
>
> It might be easier to integrate than V8.
Yet another lightweight JavaScript engine - https://bellard.org/quickjs/
"Small and easily embeddable: just a few C files, no external
dependency, 210 KiB of x86 code for a simple hello world program."
"Almost complete ES2019 support including modules, asynchronous
generators and full Annex B support (legacy web compatibility)."




Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-26 Thread Bruce Momjian
On Thu, Mar 26, 2020 at 10:07:48AM +0300, Ivan Panchenko wrote:
> 
> On 26.03.2020 03:50, Bruce Momjian wrote:
> > On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:
> > > Thanks Tom, that makes sense. Appreciate your time to explain the context.
> > > 
> > > I'll followup with Heroku.
> > Also, I have heard PL/V8 is very hard to build for packagers (because of
> > changes by Google in the way V8 is packaged), which has decreased PL/V8
> > adoption.
> 
> True.
> 
> Btw, Nginx made their own JS implementation
> 
> https://nginx.org/en/docs/njs/index.html
> 
> It might be easier to integrate than V8.

Oh, that sounds promising.  :-)

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-26 Thread J2eeInside J2eeInside
Hi Mike, and thanks for the valuable answer!
In short, you think PG Full Text Search can do the same as Apache Solr?

P.S. I need to index .pdf, .html and MS Word .doc/.docx files, are there any
constraints in Full Text Search regarding those file types?


On Wed, Mar 25, 2020 at 3:36 PM Mike Rylander  wrote:

> On Wed, Mar 25, 2020 at 8:37 AM J2eeInside J2eeInside
>  wrote:
> >
> > Hi all,
> >
> > I hope someone  can help/suggest:
> > I'm currently maintaining a project that uses Apache Solr /Lucene. To be
> honest, I would like to replace Solr with Postgres Full Text Search. However,
> there is a huge amount of documents involved - around 200GB. Wondering,
> can Postgres handle this efficiently?
> > Does anyone have specific experience, and what should the infrastructure
> look like?
> >
> P.S. Not to be confused, Solr works just fine, I just wanted to
> eliminate one component from the whole system (if Full text search can
> replace Solr at all)
>
> I'm one of the core developers (and the primary developer of the
> search subsystem) for the Evergreen ILS [1] (integrated library system
> -- think book library, not software library).  We've been using PGs
> full-text indexing infrastructure since day one, and I can say it is
> definitely capable of handling pretty much anything you can throw at
> it.
>
> Our indexing requirements are very complex and need to be very
> configurable, and need to include a lot more than just "search and
> rank a text column," so we've had to build a ton of infrastructure
> around record (document) ingest, searching/filtering, linking, and
> display.  If your indexing and search requirements are stable,
> specific, and well-understood it should be straightforward,
> especially if you don't have to take into account non-document
> attributes like physical location, availability, and arbitrary
> real-time visibility rules like Evergreen does.
>
> As for scale, it's more about document count than total size.  There
> are Evergreen libraries with several million records to search, and
> with proper hardware and tuning everything works well.  Our main
> performance issue has to do with all of the stuff outside the records
> (documents) themselves that have to be taken into account during
> search.  The core full-text search part of our queries is extremely
> performant, and has only gotten better over the years.
>
> [1] http://evergreen-ils.org
>
> HTH,
> --
> Mike Rylander
>  | Executive Director
>  | Equinox Open Library Initiative
>  | phone:  1-877-OPEN-ILS (673-6457)
>  | email:  mi...@equinoxinitiative.org
>  | web:  http://equinoxinitiative.org
>


Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-26 Thread Ivan E. Panchenko



On 26.03.2020 03:50, Bruce Momjian wrote:

On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:

Thanks Tom, that makes sense. Appreciate your time to explain the context.

I'll followup with Heroku.

Also, I have heard PL/V8 is very hard to build for packagers (because of
changes by Google in the way V8 is packaged), which has decreased PL/V8
adoption.


True.

Btw, Nginx made their own JS implementation

https://nginx.org/en/docs/njs/index.html

It might be easier to integrate than V8.