Re: vacuumdb did not analyze all tables?

2023-12-14 Thread Ron Johnson
On Thu, Dec 14, 2023 at 7:51 PM David G. Johnston <david.g.johns...@gmail.com> wrote:

> On Thu, Dec 14, 2023 at 5:46 PM  wrote:
>
>> On Thu, 14 Dec 2023 13:10:16 -0500 Ron Johnson wrote:
>>
>> >> I'm not sure if you kept the line, but you have ellipsed-out ( is that
>> >> a word? )
>>
>> ellipse:  curve
>> ellipsis:  ...
>>
>>
> Though in contect "redacted" makes sense too.
>

Context, not contect.  :D


Re: vacuumdb did not analyze all tables?

2023-12-14 Thread David G. Johnston
On Thu, Dec 14, 2023 at 5:46 PM  wrote:

> On Thu, 14 Dec 2023 13:10:16 -0500 Ron Johnson wrote:
>
> >> I'm not sure if you kept the line, but you have ellipsed-out ( is that
> >> a word? )
>
> ellipse:  curve
> ellipsis:  ...
>
>
Though in contect "redacted" makes sense too.

David J.


Re: vacuumdb did not analyze all tables?

2023-12-14 Thread pf
On Thu, 14 Dec 2023 13:10:16 -0500 Ron Johnson wrote:

>> I'm not sure if you kept the line, but you have ellipsed-out ( is that
>> a word? )  

ellipse:  curve
ellipsis:  ...




Re: vacuumdb did not analyze all tables?

2023-12-14 Thread Ron Johnson
On Thu, Dec 14, 2023 at 12:20 PM Francisco Olarte wrote:

> Ron:
>
> On Thu, 14 Dec 2023 at 03:39, Ron Johnson  wrote:
> ...
> > Three of the 71 tables were not analyzed.  Why would that be?
> ...
> > vacuumdb -U postgres -h $DbServer --analyze -j6 -t ... -t cds.cdstransaction_rp20_y2021 -t ...
> ...
> >  cds.cdstransaction_rp20_y2021   | 2023-12-13 10:42:09.683143-05 | 2023-11-17 04:11:08.761861-05
> >  css.image_annotation_rp20_y2021 | 2023-09-25 20:00:07.831722-04 | 2023-09-25 20:00:07.831793-04
> >  tms.document_rp20_y2021         | 2023-12-13 10:42:03.079969-05 | 2023-11-17 04:11:56.583881-05
>
> I'm not sure if you kept the line, but you have ellipsed-out ( is that
> a word? )


I think so.


> the interesting names, so the quoted vacuumdb line is useless
> for checking.


All 71 tables were listed, and I didn't want to flood my email with a KB or
two of non-essential text.

I verified that all three tables were in the vacuumdb command line.  (The
list was generated by a query, and stdout and stderr were redirected to a
file, and I grepped it for the table names.)

If you want, I can attach the log file.
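
(For the archives: the membership check described above can be scripted. The
sketch below uses a simulated log string purely for illustration, not the
actual log from the run.)

```shell
# Simulated vacuumdb log text, standing in for the real redirected file.
LOG='vacuumdb: vacuuming database "mydb"
processing of table "cds.cdstransaction_rp20_y2021"'

# Collect every expected table name the log never mentions.
MISSING=""
for t in cds.cdstransaction_rp20_y2021 \
         css.image_annotation_rp20_y2021 \
         tms.document_rp20_y2021; do
    case "$LOG" in
        *"$t"*) ;;                      # mentioned: presumably processed
        *) MISSING="$MISSING $t" ;;     # absent: worth investigating
    esac
done
echo "not mentioned in log:$MISSING"
```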


Re: vacuumdb did not analyze all tables?

2023-12-14 Thread Francisco Olarte
Ron:

On Thu, 14 Dec 2023 at 03:39, Ron Johnson  wrote:
...
> Three of the 71 tables were not analyzed.  Why would that be?
...
> vacuumdb -U postgres -h $DbServer --analyze -j6 -t ... -t cds.cdstransaction_rp20_y2021 -t ...
...
>  cds.cdstransaction_rp20_y2021   | 2023-12-13 10:42:09.683143-05 | 2023-11-17 04:11:08.761861-05
>  css.image_annotation_rp20_y2021 | 2023-09-25 20:00:07.831722-04 | 2023-09-25 20:00:07.831793-04
>  tms.document_rp20_y2021         | 2023-12-13 10:42:03.079969-05 | 2023-11-17 04:11:56.583881-05

I'm not sure if you kept the line, but you have ellipsed-out ( is that
a word? ) the interesting names, so the quoted vacuumdb line is useless
for checking.
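
For reference, the last-analyze timestamps for specific tables can be pulled
straight from the statistics views; a sketch along these lines (table names
taken from the quoted output above):

```sql
-- When were these tables last vacuumed/analyzed, manually or by autovacuum?
SELECT schemaname, relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE schemaname || '.' || relname IN
      ('cds.cdstransaction_rp20_y2021',
       'css.image_annotation_rp20_y2021',
       'tms.document_rp20_y2021');
```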

Francisco Olarte.




Re: Increased storage size of jsonb in pg15

2023-12-14 Thread David G. Johnston
On Thu, Dec 14, 2023 at 7:48 AM Sean Flaherty wrote:

> We have a process that runs once an hour to read the .dat file in CSV
> format; then a node script using the pg package (version "8.8.0") creates
> the JSON objects and inserts the data records as jsonb data.
>
> None of the upload process changed during the underlying database upgrade.
>

Basic debugging requires the existence of a self-contained reproducer.  In
this case ideally one that only uses psql and some static (already
processed) data files, and that is known to produce the observed behaviors
on non-RDS PostgreSQL.
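
Sketching what such a reproducer could look like, assuming the processed data
can be dumped to one JSON document per line (the file and table names here
are hypothetical):

```sql
-- Run the same script against a stock 14.x and 15.x server,
-- then compare the per-row jsonb storage the two report.
CREATE TABLE jsonb_repro (payload jsonb);
\copy jsonb_repro (payload) FROM 'processed_rows.json'
SELECT count(*)                            AS rows_loaded,
       sum(pg_column_size(payload))        AS total_payload_bytes,
       round(avg(pg_column_size(payload))) AS avg_payload_bytes
FROM jsonb_repro;
```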

David J.


Re: Increased storage size of jsonb in pg15

2023-12-14 Thread Adrian Klaver

On 12/14/23 06:48, Sean Flaherty wrote:
We have a process that runs once an hour to read the .dat file in CSV
format; then a node script using the pg package (version "8.8.0") creates
the JSON objects and inserts the data records as jsonb data.


Now I am not understanding.

1) In your OP you mentioned checking the size of the column storage using
pg_column_size, yet what you show for the increase in size is datafile.dat.

2) So how is datafile.dat related to this issue?

3) Show how you are determining that the storage in the database has
increased in size.




None of the upload process changed during the underlying database upgrade.


On Wed, Dec 13, 2023 at 4:56 PM Adrian Klaver wrote:


On 12/13/23 15:49, Sean Flaherty wrote:
 > More information needed:
 >

 > 2) An example of reported size for the 14.? and 15.5 cases.
 >
 >    Since upgrading from 14.8 to 15.5, the jsonb data that was
previously
 > written in 14.8 is reporting a smaller size than the same hourly
data
 > written after the upgrade (upgrade indicated in yellow):

What is producing datafile.dat and how?

 >
 > file          hourly_timestamp     filename_bytes  timestamp_bytes  data_filesize  created_at_bytes  updated_at_bytes
 > datafile.dat  2023-10-19 12:00:00  23              8                1682           8                 8
 > datafile.dat  2023-10-19 13:00:00  23              8                1687           8                 8
 > datafile.dat  2023-10-19 14:00:00  23              8                1685           8                 8
 > datafile.dat  2023-10-19 15:00:00  23              8                1668           8                 8
 > datafile.dat  2023-10-19 16:00:00  23              8                2155           8                 8
 > datafile.dat  2023-10-19 17:00:00  23              8                2178           8                 8
 > datafile.dat  2023-10-19 18:00:00  23              8                2199           8                 8
 > datafile.dat  2023-10-19 19:00:00  23              8                2187           8                 8
 > datafile.dat  2023-10-19 20:00:00  23              8                2180           8                 8
 > datafile.dat  2023-10-19 21:00:00  23              8                2176           8                 8
 > datafile.dat  2023-10-19 22:00:00  23              8                2053           8                 8
 > datafile.dat  2023-10-19 23:00:00  23              8                2043           8                 8


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: how can I fix my accent issues?

2023-12-14 Thread Igniris Valdivia Baez
Hello to all, we have found the solution to our accents problem: a
colleague of mine got the idea to use xlsx instead of xls, and the magic
happened. Thanks to all for your support.
Best regards

On Wed, Dec 13, 2023 at 0:19, Adrian Klaver wrote:
>
> On 12/12/23 16:09, Igniris Valdivia Baez wrote:
> > Hello to all, to clarify, the data is moving this way:
> > 1. The data is extracted from a database in postgres using Pentaho (Kettle)
> > 2. Here there is a bifurcation: some data is loaded into the destination
> > database and behaves fine; in the other scenario the data is saved in xls
> > files to be reviewed
>
> How is it saved to xls files?
>
> > 3. After the revision the data is loaded to the destination database, and
> > here is where I believe the issue is, because the data is reviewed in
> > Windows and somehow Pentaho is not correctly handling the
> > interaction between both operating systems.
>
> Define reviewed; in particular, is the data changed?
>
> How is it transferred from xls to the database?
>
> Is the data reviewed in Excel only on one machine or many?
>
> What are the locales/encodings/character sets involved?
>
> >
> > PS: when the whole operation is executed in Windows it never fails
>
> Define what you mean by whole operation done in Windows.
>
> > Thank you all
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>




Re: Increased storage size of jsonb in pg15

2023-12-14 Thread Sean Flaherty
We have a process that runs once an hour to read the .dat file in CSV
format; then a node script using the pg package (version "8.8.0") creates
the JSON objects and inserts the data records as jsonb data.

None of the upload process changed during the underlying database upgrade.


On Wed, Dec 13, 2023 at 4:56 PM Adrian Klaver wrote:

> On 12/13/23 15:49, Sean Flaherty wrote:
> > More information needed:
> >
>
> > 2) An example of reported size for the 14.? and 15.5 cases.
> >
> >Since upgrading from 14.8 to 15.5, the jsonb data that was previously
> > written in 14.8 is reporting a smaller size than the same hourly data
> > written after the upgrade (upgrade indicated in yellow):
>
> What is producing datafile.dat and how?
>
> >
> > file          hourly_timestamp     filename_bytes  timestamp_bytes  data_filesize  created_at_bytes  updated_at_bytes
> > datafile.dat  2023-10-19 12:00:00  23              8                1682           8                 8
> > datafile.dat  2023-10-19 13:00:00  23              8                1687           8                 8
> > datafile.dat  2023-10-19 14:00:00  23              8                1685           8                 8
> > datafile.dat  2023-10-19 15:00:00  23              8                1668           8                 8
> > datafile.dat  2023-10-19 16:00:00  23              8                2155           8                 8
> > datafile.dat  2023-10-19 17:00:00  23              8                2178           8                 8
> > datafile.dat  2023-10-19 18:00:00  23              8                2199           8                 8
> > datafile.dat  2023-10-19 19:00:00  23              8                2187           8                 8
> > datafile.dat  2023-10-19 20:00:00  23              8                2180           8                 8
> > datafile.dat  2023-10-19 21:00:00  23              8                2176           8                 8
> > datafile.dat  2023-10-19 22:00:00  23              8                2053           8                 8
> > datafile.dat  2023-10-19 23:00:00  23              8                2043           8                 8
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>