Re: [SPAM] Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-04 Thread Francisco Olarte
Hi Moreno:

On Wed, Aug 3, 2016 at 1:07 PM, Moreno Andreo  wrote:

It's already been answered, but as it seems to be answering a chunk of
my mail...

> Should I keep fsync off? I'd think it would be better leaving it on, right?

Yes. If you have to ask whether fsync should be on, it should be.

I mean, you only turn it off when you are absolutely sure of what you
are doing; fsync=off goes against the D in ACID.

You normally only turn it off in a few well-defined cases. As an
example, we have a special postgresql.conf for full cluster restores,
with fsync=off. When we need it we stop the cluster, boot it with that,
restore, stop it again and reboot with the normal fsync=on config. In
this case we do not mind losing data, as we are doing a full restore
anyway.
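
For illustration, a minimal sketch of that restore-only cycle (the
paths and dump file are hypothetical, and instead of a separate
postgresql.conf it uses pg_ctl's -o flag to pass one-off settings):

    pg_ctl -D /var/lib/postgresql/data stop
    pg_ctl -D /var/lib/postgresql/data -o "-c fsync=off -c full_page_writes=off" start
    pg_restore -d mydb /backups/mydb.dump
    pg_ctl -D /var/lib/postgresql/data stop
    pg_ctl -D /var/lib/postgresql/data start    # back on the normal fsync=on config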

But normally, it's a bad idea. As a classic photo caption says,
fsync=off => DBAs running with scissors.

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-03 Thread Moreno Andreo

On 03/08/2016 18:01, Jeff Janes wrote:

On Thu, Jul 28, 2016 at 6:25 AM, Moreno Andreo  wrote:

Hi folks! :-)
I'm about to bring up my brand new production server and I was wondering if
it's possible to calculate (approx.) the WAL directory size.
I have to choose what's better in terms of cost vs. performance (we are on
Google Cloud Platform) between a ramdisk or a separate persistent disk.

As others have said, there is almost no point in putting WAL on a
ramdisk.  It will not be there exactly at the time you need it.

OK, got it, as I already stated. That was just a bad thought :-)



Obviously a ramdisk will be many times faster than disk, but having a,
say, 512 GB ramdisk will be a little too expensive :-)
I've read somewhere that the formula for the size should be 16 MB * 3 *
checkpoint_segments. But won't it be different depending on the
/wal_level/ we set? And won't it also depend on the volume of
transactions in the cluster?

Not in usual cases.  If you have more volume, then checkpoint_segments
will get exceeded more frequently and you will have more frequent
checkpoints.  As long as your system can actually keep up with the
checkpoints, the more frequent checkpoints will cancel out the higher
volume, leaving you with the same steady-state number of segments.
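
As a worked example of the old rule of thumb quoted above: with the
pre-9.5 default of checkpoint_segments = 3, the steady state is roughly
16 MB * 3 * 3 = 144 MB of pg_xlog, and with checkpoint_segments = 32 it
is about 16 MB * 3 * 32 = 1.5 GB - independent of transaction volume,
as long as the checkpoints keep up.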


So if I want checkpoints to happen less frequently, the solution is to 
have a bigger checkpoint_segments (or max_wal_size), so the value gets 
exceeded less frequently?



And, in place of the no-longer-used-in-9.5 /checkpoint_segments/, what
should I use? /max_wal_size/?

max_wal_size doesn't just replace "checkpoint_segments" in the formula.
It replaces the entire formula itself.  That was the reason for
introducing it.
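
For reference, a minimal postgresql.conf sketch of the 9.5+ knobs that
replaced it (the values shown are the 9.5/9.6 defaults):

    max_wal_size = 1GB        # soft cap on pg_xlog; checkpoints trigger to stay under it
    min_wal_size = 80MB       # recycled segments are kept down to this floor
    checkpoint_completion_target = 0.5   # spread each checkpoint's writes over time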


Another point cleared. I did not get this from the docs. I'll go and 
read them again.





Aside from this, I have 350 DBs that sum up to a bit more than 1 TB,
and plan to use wal_level=archive because I plan to have a backup
server with barman.

Using the above formula I have:
 16 MB * 3 * 1 GB

If you are getting the "1 GB" from max_wal_size, then see above.


Exactly. I think it's its default value, since I didn't change it.


Note that max_wal_size is not a hard limit.  It will be exceeded if
your system can't keep up with the checkpoint schedule.  Or if
archive_command can't keep up.
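
Since archive_command came up, a minimal sketch of the archiving setup
being discussed (the rsync destination is hypothetical; barman's own
documentation recommends its preferred archive_command wiring):

    wal_level = archive
    archive_mode = on
    archive_command = 'rsync -a %p barman@backup:/var/lib/barman/main/incoming/%f'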

Got it.
Thanks
Moreno


Cheers,

Jeff







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [SPAM] Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-03 Thread Moreno Andreo

On 03/08/2016 14:12, Michael Paquier wrote:

On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo  wrote:

Should I keep fsync off? I'd think it would be better leaving it on, right?

From the docs:
https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
While turning off fsync is often a performance benefit, this can
result in unrecoverable data corruption in the event of a power
failure or system crash. Thus it is only advisable to turn off fsync
if you can easily recreate your entire database from external data.

So if you care about your data, that should be set to on in any case,
as should full_page_writes.


Got it.

Thanks!

Moreno




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-03 Thread Michael Paquier
On Wed, Aug 3, 2016 at 8:07 PM, Moreno Andreo  wrote:
> Should I keep fsync off? I'd think it would be better leaving it on, right?

From the docs:
https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS
While turning off fsync is often a performance benefit, this can
result in unrecoverable data corruption in the event of a power
failure or system crash. Thus it is only advisable to turn off fsync
if you can easily recreate your entire database from external data.

So if you care about your data, that should be set to on in any case,
as should full_page_writes.
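
In postgresql.conf terms, the durability-critical settings being
discussed (these are already the defaults; synchronous_commit is shown
only as the softer knob one might actually relax):

    fsync = on                # flush WAL to durable storage at commit
    full_page_writes = on     # protect against torn pages after a crash
    synchronous_commit = on   # off risks losing recent commits, but not corruption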
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-03 Thread Moreno Andreo

On 29/07/2016 17:26, Francisco Olarte wrote:

Hi:

On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
 wrote:

After Andreas' post and thinking about it a while, I came to the
conclusion that it's better not to use RAM but another persistent disk,
because there can be an instant between when a WAL record is written
and when it's fsync'ed, and if a failure happens in that instant the
data not yet fsync'ed is lost. Am I right?

With the usual configuration, fsync on, etc., what postgres does is
write and sync THE WAL before commit, but it does not sync the table
pages. Should anything bad (tm) happen, it can replay the synced WAL to
recover. If you use a ram disk for WAL and have a large enough ram
cache, you can lose a lot of data, not just what came after the last
sync. At the worst point you could start a transaction, create a
database, fill it and commit, and have everything in the ram-WAL and
the hd cache, then crash and have nothing on reboot.

Francisco Olarte.



This is another good point.

I'm ending up with a 10 GB SSD dedicated to WAL files. I'm starting 
with a small slice of my clients for now, to test the production 
environment, and as traffic grows I'll see whether my choice was good 
or has to be improved.


Should I keep fsync off? I'd think it would be better leaving it on, right?





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-08-01 Thread Moreno Andreo

  
  
On 29/07/2016 15:30, David G. Johnston wrote:

On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo wrote:

Regarding backups I disagree. Files related to the database must be
consistent with the database itself, so backups must save both the
database and the images.

I'd suggest you consider that such binary data be defined as immutable.
Then the only problem you have to worry about is existence - versioning
consistency goes away.  You need only focus on the versioning of
associations - which remains in the database and is very lightweight.
It is then a separate matter to ensure that all documents you require
are accessible given the identifying information stored in the database
and linked to the primary records via those versioned associations.

David J.

I think you are right on this point; there are only some kinds of bytea
that are not immutable, and that's where we store bytea instead of
images (many of these fields have already been converted to the text
type, though).
Moreno

  





Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread Francisco Olarte
Hi:

On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
 wrote:
> After Andreas' post and thinking about it a while, I came to the
> conclusion that it's better not to use RAM but another persistent
> disk, because there can be an instant between when a WAL record is
> written and when it's fsync'ed, and if a failure happens in that
> instant the data not yet fsync'ed is lost. Am I right?

With the usual configuration, fsync on, etc., what postgres does is
write and sync THE WAL before commit, but it does not sync the table
pages. Should anything bad (tm) happen, it can replay the synced WAL to
recover. If you use a ram disk for WAL and have a large enough ram
cache, you can lose a lot of data, not just what came after the last
sync. At the worst point you could start a transaction, create a
database, fill it and commit, and have everything in the ram-WAL and
the hd cache, then crash and have nothing on reboot.
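
To illustrate that ordering, a minimal timeline (assuming fsync=on and
WAL on durable storage; table t is hypothetical):

    BEGIN;
    INSERT INTO t VALUES (1);   -- change hits WAL buffers + shared_buffers
    COMMIT;                     -- WAL record is written AND fsync'ed here
    -- table pages are still dirty in RAM; they get flushed at checkpoint
    CHECKPOINT;                 -- data files now contain the change
    -- crash between COMMIT and CHECKPOINT: replay the synced WAL, nothing lost
    -- WAL on a ramdisk: the replay source is gone, committed data is lost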

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread Moreno Andreo

  
  
On 29/07/2016 15:13, FarjadFarid(ChkNet) wrote:

If you add a URL to an ftp with an SSL certificate, your backup will be
much quicker, and if someone steals the computer the images are still
encrypted as before. It is just the source the data comes from that
changes.

... and if, while working, the Internet connection drops? Or my office
is not covered by broadband at all (and, still in 2016, there are so
many places in Italy not covered by broadband... no ADSL, no WiMAX,
low-performing mobile)?
Local copies of the databases that we synchronize exist to let users
work even when no Internet connection is available (since they're
doctors, they have to have their data available almost all the time).

This architecture is by design. Some years ago, when we started
designing our software, we ran into this situation, and the question
was "Why don't we have just a remote server that users connect to
remotely, instead of having replicas on their premises?" That would
ease updates, troubleshooting, almost everything.
After a while, the answer we arrived at is exactly as above: as long as
we have slow and unreliable Internet connections (fiber coverage is
growing, but it's still very sparse) we can't count on them, and we
can't rely only on a remote server.

Thanks
Moreno

Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread David G. Johnston
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo 
wrote:

> Regarding backups I disagree. Files related to the database must be
> consistent with the database itself, so backups must save both the
> database and the images.
>
>

I'd suggest you consider that such binary data be defined as immutable.
Then the only problem you have to worry about is existence - versioning
consistency goes away.  You need only focus on the versioning of
associations - which remains in the database and is very lightweight.  It
is then a separate matter to ensure that all documents you require are
accessible given the identifying information stored in the database and
linked to the primary records via those versioned associations.

David J.


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread FarjadFarid(ChkNet)
Sorry the URL should have been https://www.maytech.net/

 

Of course there are other companies in this space. 

 



Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread FarjadFarid(ChkNet)
 

Actually, you can increase the overall performance of your system
several-fold by distributing the source of data, with encryption. CDN
services use this old technique consistently, all the time.

If you add a URL to an ftp with an SSL certificate, your backup will be
much quicker, and if someone steals the computer the images are still
encrypted as before. It is just the source the data comes from that
changes.

Of course for small amounts of data, say an encrypted user name,
password or id credential, a db engine is still the best. But for
larger files you could benefit substantially by looking at hybrid
solutions.

Check out companies like www.maytech.com - not related to me at all,
but they have a secure network used by the NHS (UK).

Their ftp service does have user name/password protection which can be
customised per customer. They also have servers distributed around the
world.

Hope this helps.



 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread Moreno Andreo

  
  
On 29/07/2016 11:44, FarjadFarid(ChkNet) wrote:

The question to ask is what benefit would you gain by saving a BLOB
object in a database rather than on, say, a flat file server or a URL
on an ftp server? Especially larger ones.

Privacy. Blobs are stored encrypted, since they are health-related
images or documents.
You would be right if all of this data were resident only on our server
(which can only be accessed by the application), but every user has a
small PG cluster on his PC, with his patients' data and images, that
replicates continuously with our server.
Our application runs on Windows. Getting into patient data from another
user (say, someone who stole the computer) is a bit tricky, because you
have to know how to bypass authentication in postgres, and even after
that you have to know where to search, what to search for, and
sometimes what the encodings mean.
Imagine if we had a folder containing all the images: double click and
open...

Another point is a bit of self-defense. Our users are far from being
smart computer users, and in the past we had some cases in which
someone, trying to clean up a filled-up disk, deleted a directory under
his Paradox database (!!!) and then asked us why the app was not
loading anymore.

BLOBs cause a lot of problems for all DBs, unless the DB engine can
understand their structure and process them. It is not worth the
effort.
It can hit DB performance in indexing, backups, migrations and load
balancing.

Regarding backups I disagree. Files related to the database must be
consistent with the database itself, so backups must save both the
database and the images. AFAIK there's not a big difference between
backing up image files and BLOBs in a database.
I agree about load balancing, but only in the case of a bulk load of
several megabytes. (Our current server got overloaded 2 months ago when
a client we were activating sent a transaction inserting 50 blobs
totalling about 300 megabytes.)

Hope this helps.
 
 
 

  
Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread FarjadFarid(ChkNet)
Another option which is growing in popularity is a distributed
in-memory cache. There are quite a few companies providing such
technology.

 

Pricing can range from free to quite expensive. 

 

One recommendation with these technologies is to test them under heavy load 
conditions. 

 

Good luck.

 

 

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread FarjadFarid(ChkNet)
 

The question to ask is what benefit would you gain by saving a BLOB
object in a database rather than on, say, a flat file server or a URL
on an ftp server? Especially larger ones.

BLOBs cause a lot of problems for all DBs, unless the DB engine can
understand their structure and process them. It is not worth the
effort.

They can hit DB performance in indexing, backups, migrations and load
balancing.

 

Hope this helps. 

 

 

 


Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread Moreno Andreo

  
  
On 29/07/2016 10:43, John R Pierce wrote:

Aside from this, I have 350 DBs that sum up to a bit more than 1 TB,
and plan to use wal_level=archive because I plan to have a backup
server with barman.

With that many databases, that many objects

350 DBs with about 130 tables and a bunch of sequences each, for the
sake of precision.
With extensive use of BLOBs.

and undoubtable client connections,

Yes, that's another big problem... we normally run between 500 and 700
concurrent connections... I had to set max_connections=1000; the whole
thing grew faster than we were prepared for...

I'd want to spread that across a cluster of smaller servers.

That will be step 2... while the migration is running (and it will run
for some months, since we have to plan the migration with users) I'll
test putting another one or two machines in a cluster, make some test
cases, and when ready, databases will be migrated onto the other
machines too.
I posted a question about this some months ago, and I was told that one
solution would be to make each server master for some databases and
slave for others, so we can have better load balancing (instead of
having all writes on the sole master, we split them among all masters
depending on which database is getting the write command, especially
when having to write BLOBs that can be some megabytes in size).
I don't know how to achieve this yet, but I will find a way somewhere.

just sayin...

ideas are always precious and welcome.

-- 
john r pierce, recycling bits in santa cruz




Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread John R Pierce


Aside from this, I have 350 DBs that sum up to a bit more than 1 TB,
and plan to use wal_level=archive because I plan to have a backup
server with barman.


With that many databases, that many objects, and undoubtable client
connections, I'd want to spread that across a cluster of smaller
servers.


just sayin...


--
john r pierce, recycling bits in santa cruz



Re: [SPAM] Re: [GENERAL] WAL directory size calculation

2016-07-29 Thread Moreno Andreo

On 28/07/2016 20:45, Francisco Olarte wrote:

On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo  wrote:

Obviously a ramdisk will be many times faster than disk, but having a,
say, 512 GB ramdisk will be a little too expensive :-)

Besides defeating the purpose of WAL, if you are going to use
non-persistent storage for WAL you could just as well use the minimal
wal_level, fsync=off and friends.
After Andreas' post and thinking about it a while, I came to the
conclusion that it's better not to use RAM but another persistent disk,
because there can be an instant between when a WAL record is written
and when it's fsync'ed, and if a failure happens in that instant the
data not yet fsync'ed is lost. Am I right?



Aside from this, I have 350 DBs that sum up to a bit more than 1 TB,
and plan to use wal_level=archive because I plan to have a backup
server with barman.

Is this why you plan on using RAM for WAL (assuming fast copies to the
archive and relying on it for recovery)?
Yes, but given the risk of data loss, I think I'll go with a bigger
persistent disk, have bigger checkpoint intervals and end up with a
longer recovery time, but the main thing is *no data loss*


Francisco Olarte.



Thanks

Moreno.
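
As a sketch for keeping an eye on that disk while the test runs (for
9.x, where the WAL directory is pg_xlog and segments are 16 MB;
pg_ls_dir requires superuser):

    SELECT count(*)      AS segments,
           count(*) * 16 AS approx_mb
    FROM pg_ls_dir('pg_xlog') AS f
    WHERE f ~ '^[0-9A-F]{24}$';   -- count only WAL segment files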




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general