On Mon, Jun 11, 2018 at 11:47:59AM -0400, Alvaro Herrera wrote:
> On 2018-Jun-11, Justin Pryzby wrote:
>
> > I noticed that this is accepted:
> >
> > postgres=# ALTER TABLE t SET (toast.asdf=128);
> > ALTER TABLE
> >
> > I thought since "toast" was a core namespace, it would've been rejected?
>
It is an unfortunate historical naming.
In these conversations I tell people to just always mentally translate
"timestamp with time zone" to "point in time". How it is stored internally
is entirely irrelevant except to the PostgreSQL developers and can
otherwise be ignored. All that matters is tha
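The "point in time" mental model is easy to demonstrate in psql: the stored value never changes, only its rendering follows the session's TimeZone setting. A minimal sketch (table and zone names are illustrative):

```sql
-- timestamptz stores an instant (normalized to UTC internally);
-- the display time zone is purely a session-level rendering choice
CREATE TABLE events (happened_at timestamptz);
INSERT INTO events VALUES ('2018-06-15 12:00:00+00');

SET TimeZone = 'UTC';
SELECT happened_at FROM events;   -- 2018-06-15 12:00:00+00

SET TimeZone = 'America/Chicago';
SELECT happened_at FROM events;   -- 2018-06-15 07:00:00-05, the same instant
```

Note that the input's original offset is not stored anywhere; that is exactly why "point in time" is a better mental name than "timestamp with time zone".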
On Fri, Jun 15, 2018 at 08:54:52 +0200,
Laurenz Albe wrote:
Bruno Wolff III wrote:
I think I know what is happening, but I wanted to see if my understanding
is correct.
I have a Perl after-insert trigger for a table with a non-null column element
and I am getting an occasional error when the
On 06/15/2018 09:59 AM, Jeremy Finzel wrote:
On Fri, Jun 15, 2018 at 11:23 AM, Adrian Klaver wrote:
On 06/15/2018 08:26 AM, Jeremy Finzel wrote:
Several months ago we had some detailed discussions about
whether to use separate date colu
On 06/15/2018 12:24 PM, Jeremy Finzel wrote:
Hello!
We often prefer timestamptz, i.e. "timestamp with time zone", in our
environment because it actually stores "objective time" with
respect to UTC. But in my own work experience, I have scarcely
encountered a case where business users
Jeremy Finzel writes:
> We often prefer timestamptz, i.e. "timestamp with time zone", in our
> environment because it actually stores "objective time" with respect
> to UTC. But in my own work experience, I have scarcely encountered a case
> where business users, and software engineers, d
On Fri, Jun 15, 2018 at 12:24 PM, Jeremy Finzel wrote:
> So it seems to me that "timestamp with time zone" is a misnomer in a big
> way, and perhaps it's worth at least clarifying the docs about this, or
> even renaming the type or providing an aliased type that means the same
> thing, something
Hello!
We often prefer timestamptz, i.e. "timestamp with time zone", in our
environment because it actually stores "objective time" with respect
to UTC. But in my own work experience, I have scarcely encountered a case
where business users, and software engineers, do not actually think it
On Fri, Jun 15, 2018 at 11:23 AM, Adrian Klaver
wrote:
> On 06/15/2018 08:26 AM, Jeremy Finzel wrote:
>
>> Several months ago we had some detailed discussions about whether to use
>> separate date columns to indicate a date range, or to use the daterange
>> data type. We opted for the latter bec
On 14 June 2018 07:28:53 CEST, Atul Kumar wrote:
>Hi,
>
>I have Postgres EDB 9.6, and I have the query below that I need help with.
>
>I have configured streaming replication with a master and a slave node
>on the same server, just to test it.
>
>All worked fine, but when I stopped the slave service and create
On Fri, Jun 15, 2018 at 12:26 PM, Data Ace wrote:
> Well, I think my question strayed from what I intended because of my poor
> understanding and phrasing :(
>
>
>
> Actually, I have 1 TB of data and have hardware specs sufficient to handle
> this amount of data, but the problem is that it needs too
Well, I think my question strayed from what I intended because of my poor
understanding and phrasing :(
Actually, I have 1 TB of data and have hardware specs sufficient to handle
this amount of data, but the problem is that it needs too many join operations
and the analysis process is going too slo
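When a join-heavy analysis is slow, the usual first step is to ask the planner where the time actually goes. A hedged sketch (the table and column names here are made up for illustration, not taken from the thread):

```sql
-- Show the actual plan, per-node timings, and buffer I/O for the slow query
EXPLAIN (ANALYZE, BUFFERS)
SELECT u.id, count(*)
FROM users u
JOIN posts p ON p.user_id = u.id
GROUP BY u.id;

-- If a join column is being sequentially scanned on every pass,
-- an index on it is often the cheapest first fix:
CREATE INDEX ON posts (user_id);
```

Whether an index, more work_mem, or restructuring the joins helps depends entirely on what the plan shows.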
On 06/15/2018 08:26 AM, Jeremy Finzel wrote:
Several months ago we had some detailed discussions about whether to use
separate date columns to indicate a date range, or to use the daterange
data type. We opted for the latter because this type is specifically
designed for this use case - a tabl
On 14.06.2018 at 14:04, Uri Braun wrote:
Hi,
I'm looking to run Postgres -- flexible on exact version -- on some
devices installed in cars and replicated to a central server over cell
phone modems. I expect dropped connections due to: lack of coverage
(remote areas), dead spots, tunnels,
Several months ago we had some detailed discussions about whether to use
separate date columns to indicate a date range, or to use the daterange
data type. We opted for the latter because this type is specifically
designed for this use case - a table that has a range of valid dates for
the data it
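A daterange column for this use case can be sketched as follows; the table and column names are illustrative, and the exclusion constraint (which needs the btree_gist extension) keeps validity periods from overlapping per key:

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE price_history (
    product_id int       NOT NULL,
    valid      daterange NOT NULL,
    price      numeric   NOT NULL,
    -- no two rows for the same product may have overlapping validity ranges
    EXCLUDE USING gist (product_id WITH =, valid WITH &&)
);

-- The containment operator finds the row valid on a given date:
SELECT price
FROM price_history
WHERE product_id = 1 AND valid @> DATE '2018-06-15';
```

This is the kind of guarantee that two separate date columns cannot enforce declaratively.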
On Thu, Jun 14, 2018 at 10:58 AM, Atul Kumar wrote:
> Hi,
>
> I have Postgres EDB 9.6, and I have the query below that I need help with.
>
This is not the right place to ask questions about EDB versions. You need
to check with your vendor about the right place to ask them.
Here, you can ask the questi
On Thu, Jun 14, 2018 at 8:04 AM, Uri Braun wrote:
> To be clear, the car device will surely add data -- append rows -- and may
> very occasionally add a new table. I would expect the only case where a
> delete may occur -- other than culling old data -- is during recovery of a
> partial write or
On Thu, 14 Jun 2018 14:33:54 -0700
Data Ace wrote:
> Hi, I'm new to the community.
>
> Recently, I've been involved in a project that develops a social
> network data analysis service (and my client's DBMS is based on
> PostgreSQL). I need to gather a huge volume of unstructured raw data
> for thi
Ilyeop Yi wrote:
> I have some questions about "cost-based vacuum delay".
>
> Q1. How can I know/check if the autovacuum is actually paused periodically
> according to autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay?
>
> I cannot find such information in the log files.
These pause
Hey Ilyeop,
> Q1. How can I know/check if the autovacuum is actually paused periodically
> according to autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay?
Vacuums that are triggered by the auto-vacuum process will be governed by the
autovacuum cost configuration variables. You won’t
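One way to make the cost-based throttling visible is to log every autovacuum run and inspect the relevant settings; a sketch (superuser required for the ALTER SYSTEM step):

```sql
-- Current cost-based delay settings
SHOW autovacuum_vacuum_cost_delay;
SHOW autovacuum_vacuum_cost_limit;

-- Log every autovacuum run, with its duration, so throttling shows up
-- as longer elapsed times relative to the pages processed:
ALTER SYSTEM SET log_autovacuum_min_duration = 0;
SELECT pg_reload_conf();

-- On 9.6 and later, watch vacuums while they run:
SELECT * FROM pg_stat_progress_vacuum;
```

The individual sleeps are not logged as discrete events; they are only observable indirectly, through elapsed time.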
On Fri, Jun 15, 2018 at 08:54:52 +0200,
Laurenz Albe wrote:
Absolutely, but it should be easy to run a few tests with only a single-row
insert to confirm your theory.
Thanks.
Hi Guys,
I have some questions about "cost-based vacuum delay".
Q1. How can I know/check if the autovacuum is actually paused periodically
according to autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay?
I cannot find such information in the log files.
Q2. Is there any way to manual
On Fri, Jun 15, 2018 at 10:29:02AM +1000, Sam Saffron wrote:
> SELECT pg_database.datname, pg_database_size(pg_database.datname) as
> size FROM pg_database
Consider reading and using approach shown in
https://www.depesz.com/2018/02/17/which-schema-is-using-the-most-disk-space/
Best regards,
depesz
Hi Sam,
When at a terminal I use \l+ to show the size of the databases, since it is
handy to remember. It shows each database's size in "pretty" (human-readable) form.
Timing both commands, I see that \l+ takes more or less the same time your
query takes, but I think your query better fits the monitoring purpose.
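The query quoted above can be made to match \l+'s human-readable output by wrapping the byte count in pg_size_pretty, which is convenient when eyeballing rather than graphing:

```sql
SELECT datname,
       pg_database_size(datname)                 AS size_bytes,
       pg_size_pretty(pg_database_size(datname)) AS size_pretty
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```

For monitoring, keep the raw size_bytes column; the pretty form is only for humans.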
But