You have already been informed. PG, as yet, does not allow incremental refresh
of an MV. It allows online refresh of an MV, but it does that by doing a full
table scan of the base table and rebuilding the MV.
From: Krithika Venkatesh
To: John R Pierce
Cc: pgsql
>Those naming conventions are twenty-five years old, and there is an
>astonishing amount of client code that would break if we ran around
>changing existing system catalog column names. It's very unlikely that
>any proposal to do that would even receive serious consideration.
>The bar to using n
>> Yes John I do know about using WAL archive. IMO that will not be as fast as
>> restoring using the incremental backup.
>That's an opinion, have you tried measuring? Because normally I've found that
>1.- Incremental backups are slow and impose a greater runtime penalty
>on the system than log-c
PG does not have a concept of incremental backup. The way it works in Oracle
and other RDBMSs is that an incremental backup only backs up the blocks changed
since the last full backup. So if only 10% of blocks changed since the last
full backup, the incremental backup covers only that 10%.
I am wonderin
>More to the point, whatever "MorphOS" is, it isn't AmigaOS. It was pretty
>clearly stated in the original thread that AmigaOS had no support for
>fork(), without which there was no chance of running Postgres.
Just curious: if PG is completely dependent on fork(), how was it ported to
Windows
> We thought to implement one partition for day.
That would be 365 partitions in a year.
In our experience, INSERTs suffer the most in a partitioned table because
triggers are the only way to route a row to the proper child (inherited)
table.
Question: what is your insert pattern? Do you inse
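For reference, a minimal sketch of the trigger-based routing that makes INSERTs pay a per-row cost (all table, column, and function names here are hypothetical):

```sql
-- Parent and one daily child partition (pre-10 inheritance style)
CREATE TABLE events (logdate date NOT NULL, payload text);
CREATE TABLE events_2016_01_01 (
    CHECK (logdate = DATE '2016-01-01')
) INHERITS (events);

-- Routing trigger: fires for every row inserted into the parent
CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate = DATE '2016-01-01' THEN
        INSERT INTO events_2016_01_01 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for %', NEW.logdate;
    END IF;
    RETURN NULL;  -- row already routed; do not insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert_trigger
    BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_insert_router();
```

With 365 daily children, the IF ladder (or a dynamic-SQL equivalent) runs once per inserted row, which is where the slowdown comes from.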
In the chapter "Using optimistic locking" of the book "PG Cookbook Second
Edition",
it is mentioned that the app can first fetch the row from the table in the form
select a.*::text from table a where ...
then do the work, and when it comes to committing, do it as
update table
set
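A minimal sketch of the optimistic-locking pattern being described (table and column names are hypothetical; the check compares the whole row cast to text, as in the book):

```sql
-- 1. Fetch the row and remember its full text snapshot
SELECT a.id, a.balance, a.*::text AS snapshot
FROM accounts a
WHERE a.id = 42;

-- 2. ... do the work outside the database ...

-- 3. Update only if the row is unchanged since the fetch;
--    :snapshot is the value saved in step 1
UPDATE accounts
SET balance = 100
WHERE id = 42
  AND accounts::text = :snapshot;
-- zero rows updated => someone else changed the row; re-fetch and retry
```

No lock is held between steps 1 and 3; the row-as-text comparison detects any concurrent change.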
https://www.youtube.com/watch?v=8mKpfutwD0U&t=1741s
Somewhere around the 13th minute, Chris Travers mentions this:
1 - In the queuing table, the application deletes a lot of rows (typical for a
queuing table).
2 - The query trying to find rows to be picked from the queue accesses them via
indexes.
3 -
BEGIN;
LOCK TABLE yourtable;
CREATE TEMPORARY TABLE keep AS SELECT * FROM yourtable WHERE ...;
TRUNCATE yourtable;
INSERT INTO yourtable SELECT * FROM keep;
COMMIT;
===
The above snippet assumes TRUNCATE in PG can run inside a transaction. In
other words, while TRUNCATE by itself
is atomic, it can't be rolled b
>> Kill ? You mean "pg_ctl stop -m fast" right ?
Yes.
>Use timeline to resync ? Timeline is an internal mechanism in PostgreSQL, not
>a tool, so I don't get this step...You mean using pg_rewind ?
pg_rewind, which uses the timeline.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
>Sure, but when you are doing a switchover, the standby is supposed to be
>connected to the master when you shutdown the master. So based on the doc,
>the standby should receive **everything** from the master before the master
>actually shutdown.
We use 9.5 and even in that version there is no han
Ver 9.6.1
In streaming replication, can it be assumed that if the primary and standby
are on the same hardware, the rate at which transactions are applied on the
standby will be the same as on the primary? Or are standbys always slower than
the primary in applying transactions because of the wa
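Rather than assuming, the replay delay can be measured directly on the standby; a sketch using the 9.6-era function names:

```sql
-- On the standby: how far behind transaction replay is, in wall-clock terms
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;

-- WAL position received from the primary vs position actually replayed
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed;
```

If the received and replayed positions diverge over time, the standby is falling behind.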
In order to speed up loading a set of 12 tables, I have declared them as
UNLOGGED. As per the documentation, an unlogged table will also have unlogged
indexes.
Question: when the table is converted back to logged, does PG rebuild all
indexes before marking them as logged? Also, what about FKY co
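For reference, the conversion itself is a single DDL statement (9.5+); the table name here is hypothetical:

```sql
-- Load into an unlogged table, then convert it once loading is done
CREATE UNLOGGED TABLE staging_load (id int PRIMARY KEY, payload text);
-- ... bulk load ...
ALTER TABLE staging_load SET LOGGED;  -- table contents are written to WAL as part of the conversion
```

Note that the SET LOGGED step itself generates WAL for the whole table, so the savings come only from the load phase.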
Is there a way to query dead rows (that is, rows which are dead and still not
cleaned up by vacuum) using SQL? I am asking this purely as an academic
question.
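Normal SQL can never return dead tuples because of visibility rules, but the contrib extension pageinspect can expose raw tuple headers; a sketch against a hypothetical table t:

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Raw line pointers and tuple headers of page 0 of t;
-- a non-zero t_xmax marks a tuple deleted or updated by that transaction
SELECT lp, t_xmin, t_xmax
FROM heap_page_items(get_raw_page('t', 0))
WHERE t_xmax <> 0;
```

This requires superuser privileges and reads one page at a time, so it is strictly a diagnostic/academic tool.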
I forgot to mention that the db is replicated synchronously. I think that is
the culprit.
PG 9.5.3
RHEL Linux
To cut the story short and giving as much details as I can remember:
I created 40 databases, db1 .. db40, in a cluster.
On each database I initialized pgbench tables with a scale of 15.
Then I concurrently ran 5 pgbenches for 15 min. Each pgbench connected to one
db. In other w
I need some ideas here.
Let us say we have a bunch of tables, called a,b,c,d. We will get a batch of
data in files which need to be processed. At the end of processing, the tables
will get a fresh set of data. The problem we would like to solve is to allow
access to the tables while they are
Reading the following two:
http://www.greenplumdba.com/partitioning-in-greenplum
http://gpdb.docs.pivotal.io/4350/admin_guide/ddl/ddl-partition.html
It is clear that GP's partitioning is just a skin on top of inheritance. Does
anyone know how GP implements the insert re-routing? Is it done vi
>Other thing, with bucardo what is the difference between in a
>master-master replica vs master-slave, at the end in both ways the
>user has a 2nd DB with R/W permission?
I think in M-M mode, updates from both DBs will be replicated to the other DB.
In other words, they will be eventually consist
>Cores do not help, postgres is single-threaded. RAM MAY help, but I
I hope this is no longer true as of 9.6, for those queries where PG can use
parallelism.
>suspect your operations are IO bound. Of course, with the sparseness
>of the details, one can not say too much.
I posted this in the pgpool mailing list and so far have not received any
response. It looks like that mailing list is hardly active.
=
I am researching pgpool technology. Let us say we want to use it as an
alternative to streaming replication of PG. Apparently
>do you run a separate instance of the app for each tenant, or is there one app
>that identifies the
>tenant and handles them accordingly ?
Each tenant will have a different app server. There will be a persistent
connection for each tenant.
>Are you restoring because your whole system failed, or because one client did
>something
>wrong and needs just their data rolled back?
Chances of restoring just for one client will probably be 99% of use cases.
> Do your clients authenticate directly to the database, or to the app server?
t
I have a bytea col which stores Java UUID data. When I run
select cast(uuid_col as varchar) from table limit 1
the output in pgAdmin is
"A001"
while the output in psql is
\x4130303030303030303030303030303030303030303030303030303030303031
is there some set
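For what it's worth, the psql output above is just the hex (bytea) form of the same bytes, and those bytes look like ASCII text. The readable form can be recovered in SQL; a sketch with hypothetical names:

```sql
-- Decode the bytea as text instead of showing its hex form
SELECT convert_from(uuid_col, 'UTF8') FROM my_table LIMIT 1;

-- Or keep printable characters as-is and escape the rest
SELECT encode(uuid_col, 'escape') FROM my_table LIMIT 1;
```

This suggests the Java app stored the UUID's string representation as bytes, not the raw 16-byte UUID value.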
I don't know if that's helpful to you or not, but hopefully it was at least a
little.
===
Yes, it was. Thanks.
>Then you need different clusters per tenant. Otherwise, the WAL records
> of different tenants are inextricably mingled together.
Yes, we are aware of it. This part is OK as it is not deemed user table data.
> I've been reading this discussion with great interest, to see what other
> Postgres experts think. :-)
I am a bit disappointed that most of the replies question why we are
doing what we are doing. Once again, we (db designers) have no choice
in that. What I would like to know is which on
> ok, thats ridiculous, isn't it. so now its time to find a compromise.
You don't understand how salespeople pitch our products. We deal with
financial data,
and our customers are extremely sensitive to even imagining that their data
will co-reside
with that of their competitors, who also are o
From: Venkata B Nagothi
Sent: Friday, September 30, 2016 02:48
To: Rakesh Kumar
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Multi tenancy : schema vs databases
On Fri, Sep 30, 2016 at 10:16 AM, Rakesh Kumar
mailto:rakeshkumar...@outlook.com
From: Venkata B Nagothi
Sent: Thursday, September 29, 2016 17:25
To: Rakesh Kumar
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Multi tenancy : schema vs databases
On Fri, Sep 30, 2016 at 5:18 AM, Rakesh Kumar
mailto:rakeshkumar
Hi
I would like to know which technique is better for supporting multi-tenancy
applications, going up to hundreds or even thousands of tenants:
1 - One database with different schemas (one schema per tenant)
or
2 - One database per tenant.
The points to be considered are:
1 - which is more l
> Of course 9.5 is the current release so the answer is Yes, since 9.5
https://www.postgresql.org/docs/current/static/sql-creatematerializedview.html
What am I missing? I don't see any support for incremental refresh.
Just in case we are not speaking of the same thing: when an MV is created for the
Does PG support an INCREMENTAL MV? Looks like not (as of 9.5).
>You sound like you think that varchar(50) is somehow cheaper than text.
The biggest impediment to text columns in other RDBMSs is that no index is
allowed on them.
If PG has an elegant solution to that, then yes, I see the point made by the
original poster.
>I have done some research after converting my database from MySQL 5.6 to
>PostgreSQL 9.6 (the best move I have ever made),
>and the consensus I found can be summed up as:
>1. Never, never, never use VARCHAR or even CHAR
>2. Always always, always use TEXT
>Unless, that is, you have some kind of
Hi
I am noticing that if I do this
insert into table values (1,'a');
insert into table values (2,'b');
insert into table values (3,'c');
committing after 500 rows,
it is a lot slower (almost 10x) than
insert into table values (1,'a'),(2,'b'),(3,'c'); -- up to 500 rows per statement
It is obvious that the gain is due to network trip a
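A side-by-side sketch of the two patterns (hypothetical table), plus COPY, which is usually faster still for bulk loads:

```sql
CREATE TABLE t (id int, val text);

-- One statement per row: one network round trip and one parse per row
INSERT INTO t VALUES (1, 'a');
INSERT INTO t VALUES (2, 'b');

-- Multi-row VALUES: one round trip and one parse for the whole batch
INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- COPY streams all rows in a single protocol exchange
COPY t (id, val) FROM STDIN;
```

The per-statement overhead (round trip, parse, plan) dominates small single-row inserts, which is consistent with the ~10x difference observed.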
Is it true that one datafile in PG can only belong to one object (table/index)?
On Tue, Aug 23, 2016 at 9:55 AM, Francisco Olarte
wrote:
> On Tue, Aug 23, 2016 at 2:32 PM, Ilya Kazakevich
> wrote:
>>>does that mean that I should always execute a VACUUM to recover the
>>>wasted space when an error
As per this blog post
http://bonesmoses.org/2016/07/29/pg-phriday-constipated-connections/
I have a question which I asked there too, but unfortunately did not
receive any answer.
The question is regarding the following claim:
- Readers do not block readers.
- Readers do not block writers.
- Writers d
Can an existing stored procedure be modified online while other users
are executing it? In Oracle, the session doing CREATE OR REPLACE
PACKAGE would wait for the other sessions to complete. Once the package is
changed, other sessions will get a "package invalidated" error the first
time. How is it in PG?
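As far as I know, PG has no packages; the closest analogue is CREATE OR REPLACE FUNCTION, which does not wait for in-flight executions. A sketch (function name hypothetical):

```sql
-- Replacing a function does not block sessions currently executing it;
-- calls already in progress finish with the old definition,
-- and subsequent calls pick up the new one
CREATE OR REPLACE FUNCTION add_one(i int) RETURNS int AS $$
    SELECT i + 1;
$$ LANGUAGE sql;
```

There is no "invalidated" state to recompile, unlike Oracle packages.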
> Are you saying that?:
>
> 1) You ran pg_basebackup against a live cluster and sent the output to
> another location.
>
> 2) At the other location the cluster is not in use.
>
> 3) You want to grab the contents of the inactive cluster directly off the
> disk.
>
> If that is the case, then no it is
If a cluster is backed up physically using pg_basebackup, how can we
restore only a particular schema from it? Is it even possible?
Thanks
> And? Oracle and MySql doesnt have it but can downgrade right out the box.
> Quick and easy.
No, it is not for MySQL.
http://dev.mysql.com/doc/refman/5.7/en/downgrading.html
"In-place Downgrade: Involves shutting down the new MySQL version,
replacing the new MySQL binaries or packages with the o
On Thu, Jul 28, 2016 at 10:38 AM, Alex Ignatov wrote:
> Sorry, what? You can rollback to previous version of software what you had.
>
> https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower ver
On Wed, Jul 27, 2016 at 2:07 PM, Andrew Sullivan wrote:
> It almost never happens that a minor version (N.M.x, x is minor)
> requires an upgrade at all. Change your binaries and you're done.
> Catalogue incompatibility historically was the basis for something
> becoming a major version upgrade.
On Wed, Jul 27, 2016 at 1:54 PM, Marc Fournier
wrote:
> Stupid question here, but do we provide any less then what MySQL does? I’m
> reading:
MySQL provides the same rollback functionality as Oracle/DB2. That is,
rollback on a minor version upgrade is possible, but not on a major version
On Wed, Jul 27, 2016 at 12:47 PM, Bruce Momjian wrote:
> Yes. I was saying I don't know how to improve pg_upgrade to address it.
This problem exists even in Oracle/DB2/SQL Server. None of them allows
rollback to the lower version
unless it is a minor version upgrade. A major version upgrade almos
On Wed, Jul 27, 2016 at 11:45 AM, Bruce Momjian wrote:
> I agree, but I am not sure how to improve it. The big complaint I have
> heard is that once you upgrade and open up writes on the upgraded
> server, you can't re-apply those writes to the old server if you need to
> fall back to the old se
--
Sent from mobile.
On Jul 26, 2016, at 5:56 PM, Joshua D. Drake wrote:
> On 07/26/2016 02:49 PM, Rakesh Kumar wrote:
> This is an old news. They are using mysql as a nosql to store schemaless.
> Basically one giant blob col. And thats where the role of mysql ends. The
>
This is old news. They are using MySQL as a NoSQL store for schemaless data:
basically one giant blob column. And that's where the role of MySQL ends. The
bulk of the processing will be in NoSQL.
But then autovacuum avoids this. Granted, it won't give free space back to the
OS, but it won't let it grow either (or am I missing something?).
From: Job
To: Rakesh Kumar ; "pgsql-general@postgresql.org"
Sent: Monday, June 20, 2016 5:39 AM
Subject: R: [GENERAL] Vacuu
Any reason why you need the space back? What is wrong with the space remaining
constant at 4 GB?
From: Job
To: "pgsql-general@postgresql.org"
Sent: Monday, June 20, 2016 5:18 AM
Subject: [GENERAL] Vacuum full: alternatives?
Hello,
we have a table with heavy traffic of pg_bulkload
> 8.2, 8.3, and 8.4 are all Major releases of the PostgreSQL product. For
> most other products it would as if the numbering went from "15, to 16, to
> 17". The last 8.x release was 8.4 (so, there were 5 major releases that all
> shared the same prefix value of 8) and the 9.x series goes from 9.0
http://bdr-project.org/docs/next/logical-vs-physical.html
"It (BDR) has significant advantages - and some disadvantages - when
compared to PostgreSQL's older physical (block-based) streaming or
archive-based replication with warm or hot standby"
What exactly is block-based? Changes are recorded i
> This seems relevant...
>
> http://bdr-project.org/docs/stable/logical-vs-physical.html
thanks. very useful.
Sorry if this question was asked before. As I understand, currently
BDR does not support replicating nodes running different major
versions, like
9.4 <-> 9.5.
Is this in the works?
thanks
> Their main problem to overcome when using them is that they tie PostgreSQL
> much more tightly to the underlying configuration of the operating system
> and thus you need to ensure that your processes and procedures accommodate
> that reality since the tools that PostgreSQL provides can only do s
I saw a slide recently where the use of tablespaces was discouraged.
What does the community think of tablespaces.
thanks
On May 12, 2016, at 4:57 AM, sangeetha wrote:
Currently, I am using a "with hold" cursor. In our case, the with-hold cursor
is used to fetch the next record for the given primary key.
Can you explain your use case? If I understand WITH HOLD correctly, it is
typically used to preserve the cursor ev
Compose (@composeio)
4/26/16, 1:24 PM
You can now upgrade your #PostgreSQL 9.4 to 9.5 easily at Compose.
buff.ly/1WRsFFu #RDBMS
Based on the above tweet it seems that PG has no native way of doing an
in-place upgrade of a db. How do users upgrade DBs of terabyte size?
Pardon me if this has been discussed before.
I believe that PG back-end does not version index rows the way it does the data
rows. Assume that the app updates a row frequently (several times in a second).
For each update, PG will create a new version. However I believe the primary
key index p
I think PG fixes the value of now() within a transaction. Check the output of
the following SQL:
begin;
select now();         -- transaction start time
select pg_sleep(10);
select now();         -- same value: now() is frozen at transaction start
commit;
select now();         -- outside the transaction, a new value
select pg_sleep(10);
select now();         -- and it advances again
(clock_timestamp(), by contrast, advances even inside a transaction.)
On Tue, Apr 12, 2016 at 6:50 AM, Alex Ignatov wrote:
> Hello!
> Is there any method to freeze loc
On Wed, Apr 6, 2016 at 6:33 AM, Albe Laurenz wrote:
> Rakesh Kumar wrote:
> Every row has two system columns associated with it: xmin and xmax
>
> xmin is the transaction ID that created the row, while xmax is
> the transaction ID that removed the row.
>
> So when an update
Hello
I understand that when an update of, say, 100,000 rows is made, PG
writes the updated rows as new rows. These new rows are not visible
to any session except the one creating them. At commit time PG flips
something internally to make these rows visible to all.
My Q: what happens to those rows
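The row versioning can actually be watched from SQL via the hidden system columns; a sketch against a hypothetical table t:

```sql
-- System columns must be named explicitly; ctid is the tuple's physical location
SELECT xmin, xmax, ctid, * FROM t WHERE id = 1;

-- In another session: UPDATE t SET val = 'new' WHERE id = 1; (not yet committed)
-- Re-running the SELECT here still shows the old version, now with xmax set
-- to the updater's transaction ID; after that transaction commits, the new
-- version (with a new xmin and usually a new ctid) becomes the visible one.
```

The old version remains on disk as a dead tuple until vacuum reclaims it.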
On Tue, Apr 5, 2016 at 1:10 PM, Melvin Davidson
wrote:
> FYI, in PgAdmin, there is an Option which determines if a _physical row
> count_ should be performed if the _estimated_ row count is less than a
> specified amount. It defaults to 2000.
> To change it, Click on File, Options and under Brow
This is in the object browser. When you navigate to the table and
click on the table name, you will see the output on the right side;
check the Properties tab.
On Tue, Apr 5, 2016 at 10:37 AM, Adrian Klaver
wrote:
> On 04/05/2016 07:11 AM, Rakesh Kumar wrote:
>>
>> pgA
pgAdmin shows this:
Name: tableA
OID
Owner
Tablespace
Rows (estimated) : 10
Rows (Counted) : not counted
What is Rows (Counted), and why is it showing "not counted" even though
the table has been analyzed?
thanks
On 03/21/2016 10:57 AM, Thomas Kellerer wrote:
So - at least as far as I can tell - it's usually only used where
high-availability is really important, e.g. where zero-downtime is required.
If you can live with a short downtime, a hot standby is much cheaper and
probably not that much slower.
Is there a possibility that there is no index on the FKY column
bravo.alpha_id?
On Wed, Mar 16, 2016 at 11:09 AM, Albe Laurenz
wrote:
> Thomas Kellerer wrote:
> >> Can you determine what statements were executed in these transactions
> before the deadlock?
> >> It was probably one of these that
PG loads data at the block level into shared_buffers. Most likely it is
because the second SQL selects a different set of rows (from different
blocks) than the first SQL.
On Fri, Mar 18, 2016 at 4:24 PM, Paul Jones wrote:
> In Postgres 9.5.1 with a shared_buffer cache of 7Gb, a SELECT from
> a single
temporarily, it will be great.
-Original Message-
From: Melvin Davidson
To: Rakesh Kumar ; pgsql-general
Sent: Thu, Feb 18, 2016 4:12 pm
Subject: Re: [GENERAL] Multiple databases and shared_buffers
>What do you mean that the data is actually not stored in the shared buffer.
>Fr
NOTE: It is very important you provide the version of PostgreSQL and O/S when
addressing this mail list.
Since it is a new project, we are going with:
PG: 9.5.1
OS: RHEL 6.4
What do you mean that the data is actually not stored in the shared buffer?
From the link you provided:
"The “shared_buffers” configuration parameter determines how much memory
is dedicated to PostgreSQL to use for caching data."
This seems to be the same as in other RDBMSs. Typically the termi
It is a business requirement that we store our clients' data in separate
databases. Our sales folks insist it is non-negotiable. Our current technology
does
support that and also maintains buffer pools for each db independently. That's
why I brought this up. Personally I don't think this is a seri