services which don't need PostgreSQL (e.g. SSH or X11
login or a web- or mail server) shouldn't depend on it.
One of the purported advantages of systemd over SystemV init is that it
starts up services in parallel, so a service which takes a long (or
infinite) time to start doesn't block other services.
> What exactly it make the difference for
> client
> if i use md5/password in pg_hba.conf file in DB server?.
See
https://www.postgresql.org/docs/10/static/auth-methods.html#AUTH-PASSWORD
With method password, passwords are sent in plain text. With md5, an md5
hash of the password, the username
es, which isn't the case
for PostgreSQL.)
hp
--
_ | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| | | h...@hjp.at | management tools.
__/ | http://www.hjp.at/ | -- Ross Anderson <ht
understand: There are 6
non-null distinct values of arbeitsvolumen in facttable_kon_eh, and each
appears 36958 times. 36958 * 5 + 1 = 184791. So it stops once it reaches
the largest value. (Although now I'm wondering how it knows that this is
the largest value without scanning to the end.)
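For illustration, a hedged sketch of the planner's usual "largest value" trick (this may not be the exact plan in question; index and names assumed): with a btree index on the column, a max can be answered with a single probe at the high end of the index instead of a full scan.

```sql
-- Assumes a btree index on the column:
CREATE INDEX ON facttable_kon_eh (arbeitsvolumen);

-- The planner rewrites max() as ORDER BY ... DESC LIMIT 1 and answers it
-- with one probe at the right-hand end of the btree:
EXPLAIN SELECT max(arbeitsvolumen) FROM facttable_kon_eh;
-- Typically shows a Limit over an Index Only Scan Backward.
```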
lity
POV I think a dictionary lookup in Perl is a lot nicer than 50 joins
(or 300 in your case).
hp
On 2017-09-01 10:29:51 +0200, Peter J. Holzer wrote:
> pglogical supports replication of sequences, and although the way it
> does this suggests that it can't really work in both directions
> (actually I'm sceptical that it works reliably in one direction), of
> course I had to try it.
On 2017-09-01 09:57:52 -0600, Rob Sargent wrote:
> On 09/01/2017 02:29 AM, Peter J. Holzer wrote:
> >TLDR: Don't.
> >
> >I'm currently conducting tests which should eventually lead to a 2 node
> >cluster with working bidirectional logical replication.
> >
> >
ing the whole database.
hp
On 2017-08-22 12:57:15 -0300, marcelo wrote:
> We'll replace those QNX machines with Windows XP ones
The future is already here — it's just not very evenly distributed.
SCNR,
hp
On 2017-08-18 15:57:39 -0500, Justin Pryzby wrote:
> On Fri, Aug 18, 2017 at 10:47:37PM +0200, Peter J. Holzer wrote:
> > On 2017-08-18 06:37:15 -0500, Justin Pryzby wrote:
> > > On Fri, Aug 18, 2017 at 01:01:45PM +0200, Rob Audenaerde wrote:
> > > > Can anyone
tion. So apparently
columnname open-parenthesis tablename closed-parenthesis is a specific
syntactic construct, but I can't find it documented anywhere.
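For reference, a minimal sketch of that construct with a hypothetical table: PostgreSQL accepts functional notation for field selection, so colname(tab) is equivalent to tab.colname.

```sql
-- Hypothetical example: field-selection syntax vs. functional syntax.
CREATE TABLE t (x int);
INSERT INTO t VALUES (42);

SELECT t.x FROM t;   -- ordinary column reference
SELECT x(t) FROM t;  -- same result: functional notation for the column
```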
hp
ber). That way
two transactions won't be able to add a node with the same sequence
number under the same parent. You will have to handle duplicate key
errors, though.
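A minimal sketch of that constraint (table and column names invented):

```sql
-- Hypothetical tree table; the unique constraint guarantees that two
-- concurrent transactions cannot both add a node with the same sequence
-- number under the same parent.
CREATE TABLE node (
    id     serial PRIMARY KEY,
    parent integer REFERENCES node(id),
    seq    integer NOT NULL,
    UNIQUE (parent, seq)
);

-- The losing transaction gets a duplicate key error (SQLSTATE 23505)
-- and must retry with the next sequence number.
INSERT INTO node (parent, seq) VALUES (1, 7);
```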
hp
> No problem. The plan is to perform 2k rows at once, which is not much.
Are rows deleted from tablea after they are migrated? Otherwise you will
have a problem:
select ... limit 2000 offset 1234000
will have to retrieve 1236000 rows and then discard 1234000 of them.
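One common workaround is keyset pagination, sketched here under the assumption of an indexed, unique ordering column (here called id): remember where the previous batch ended instead of counting rows from the start.

```sql
-- OFFSET reads and discards all preceding rows:
SELECT * FROM tablea ORDER BY id LIMIT 2000 OFFSET 1234000;

-- Keyset pagination seeks directly to the last id seen so far:
SELECT * FROM tablea
WHERE id > 1234000          -- last id of the previous batch
ORDER BY id
LIMIT 2000;
-- With an index on id, this reads only the 2000 rows it returns.
```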
hp
like a long time, and the time seems to grow exponentially with
> file size rather than linearly.
>
>
>
> Do these numbers surprise you?
Yes. On my system, storing a 25 MB bytea value takes well under 1 second.
hp
On 2017-05-05 11:46:55 -0700, John R Pierce wrote:
> On 5/5/2017 11:28 AM, Peter J. Holzer wrote:
>
> On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote:
>
> On 03.05.2017 12:57, Thomas Güttler wrote:
>
> Am 02.05.2017 um 05:
resolve a
bottleneck, then by all means separate them.
hp
[1] "I read somewhere on the internet" is usually not a good reason.
--
_ | Peter J. Holzer| A coding theorist is someone who doesn't
|_|_) || think Alice is crazy.
| | | h...@hjp.at
erver, start a screen session
and psql in the screen session. Then if your network connection drops
you can simply login again and resume the screen session. Of course this
only works if you have a shell login on the server which may not be the
case.
hp
--
_ | P
might be
useful in practice, but whichever of them you pick, you've picked the
wrong one with a probability of 2/3. “The first Monday in the year -1 of
the proleptic Gregorian calendar” would be consistent with how
to_timestamp('12:34:56', 'HH24:MI:SS') works, but apart from that and
being in
onstraint you need to be the owner of the referencing table
and have the references privilege on the referenced table. It's not
symmetrical.)
hp
On 2017-03-05 12:01:07 +0100, Peter J. Holzer wrote:
[...]
> At the current rate of inserts, this threshold will be reached on
> March 24th. I'll check whether the table is analyzed then.
It was (a little earlier than expected because pg_class.reltuples didn't
increase in the meantime).
On 2017-03-05 08:39:05 -0800, Adrian Klaver wrote:
> On 03/05/2017 03:01 AM, Peter J. Holzer wrote:
> >So it is likely that something happened on that day (disk full?) which
> >wiped out the contents of pg_stat_user_tables.
>
> Are there any logs from that time, either Postg
On 2017-03-03 06:39:35 -0800, Adrian Klaver wrote:
> On 03/03/2017 12:33 AM, Peter J. Holzer wrote:
> >This is with PostgreSQL 9.5.6 on Debian Linux.
> >
> >I noticed that according to pg_stat_user_tables autoanalyze has never
> >run on a lot of tables. Here is one exa
urrent: I see entries in most_common_vals which were only inserted in
January. Is it possible that autoanalyze runs without updating
pg_stat_user_tables?
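One way to cross-check the two views (a sketch; substitute the real table and column names):

```sql
-- When did (auto)analyze last run according to the statistics collector?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'mytable';

-- Do the planner statistics nonetheless contain recent values?
SELECT most_common_vals
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'mycolumn';
```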
hp
On 2017-02-10 14:24:36 +0100, Thomas Kellerer wrote:
> Peter J. Holzer schrieb am 10.02.2017 um 14:02:
> > So it's doing a sequential scan on the initial select in the recursive
> > CTE, but using the index on the subsequent selects.
> >
> > But why? If it uses the
Heap Fetches: 2
Planning time: 8.883 ms
Execution time: 0.801 ms
(23 rows)
800 times faster :-).
hp
tend to
cache plain-text passwords to resubmit them for each transaction, but to
use something more ethereal, like session cookies or kerberos tickets.
hp
filled. Keeping all this in mind, the limit is between 250
| and 1600.
hp
ally
low (off, sz) value which matches the query. So now the query can return
after checking only a handful of rows.
LIMIT, EXISTS, etc. are awful when you want predictable performance. You
may be lucky and the rows you are looking for are just at the start or
you may be unlucky and you have to s
out if necessary: Currently my database and server
process run on the same machine, but I could distribute them over
several machines with (almost) no change in logic.
hp
just for the changed field) will have to be updated. You
can set fillfactor to a smaller value to make this less likely.
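For example (hypothetical table name; 70 is just an illustrative value):

```sql
-- Leave ~30% of each page empty so an updated row can stay on the same
-- page (a HOT update), avoiding updates to indexes on unchanged columns.
ALTER TABLE mytable SET (fillfactor = 70);

-- This only affects newly written pages; to apply it to existing ones,
-- the table has to be rewritten:
VACUUM FULL mytable;  -- or CLUSTER
```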
hp
> [ASCII art of the PostgreSQL elephant]
> | PostgreSQL 1996-2016 |
> | 20 Years of success  |
> +----------------------+
Nice.
hp
bedded mod_php) under different uids. So while
running everything as nobody is the default, it is possible to use
different users, and I would strongly recommend doing this if you have
multiple customers.
hp
--
_ | Peter J. Holzer| I want to forget all about both belts and
|_|_) |
etablewithareallylongname less readable
than SomeTableWithAReallyLongName.
2) Since case doesn't matter, they might be inconsistent: One programmer
might write MyTable, another mytable, the third MYTABLE, ...
3) You might want to use a tool to automatically generate SQL queries,
but th
SSD and a larger hard disk.
That might be a reason to look for an alternate hoster, but if he's
otherwise happy, switching to an unknown provider might be considered
too large a risk.
hp
the index helped in my case. Still, I find it
worrying if a value which obviously is in the table can't be found using
the index.
hp
On 2016-03-12 21:00:04 +, Geoff Winkless wrote:
> On 12 March 2016 at 18:43, Peter J. Holzer <hjp-pg...@hjp.at> wrote:
> > The question is what can be done to improve the situation.
> >
> > Tom thinks that correlation statistics would help. That seems plausible
>
On 2016-03-08 10:16:57 +, Geoff Winkless wrote:
> On 7 March 2016 at 20:40, Peter J. Holzer <hjp-pg...@hjp.at> wrote:
> > As Tom wrote, the estimate of having to read only about 140 rows is only
> > valid if sc_id and sc_date are uncorrelated. In reality your query has
at knowledge to the planner. (And yes, I
know that quite often the programmer is wrong - but I do believe in
giving people enough rope to hang themselves with)
hp
On 2016-02-18 13:37:37 -0500, Tom Smith wrote:
> it is for reducing index size as the table become huge.
> sorry for confusion, by timestamp, I meant a time series number, not the sql
> timestamp type.
> I need the unique on the column to ensure no duplicate, but the btree index
> is getting
>
of records per block in your second
query (65512/7372 = 8.9). I suspect that the records in your larger
table are sorted by taxiid within each interval.
You can almost certainly get a similar speedup by sorting each 5 minute
interval by taxi id before appending it to the table.
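A sketch of that load step (staging table name invented):

```sql
-- Append each 5-minute interval physically sorted by taxiid, so all
-- rows for one taxi land in a few adjacent blocks of the big table:
INSERT INTO trips
SELECT *
FROM staging_interval
ORDER BY taxiid;
```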
If querying by ta
On 2015-12-03 10:02:18 -0500, Tom Lane wrote:
> "Peter J. Holzer" <hjp-pg...@hjp.at> writes:
> > Can those signals be safely ignored? Just blocking them (so that they
> > are delivered after the UDF finishes) might be safer. But even that may
> > be a problem.
On 2015-12-02 19:07:55 -0600, Jim Nasby wrote:
> On 12/2/15 9:26 AM, Peter J. Holzer wrote:
> >As explained in backend/utils/misc/timeout.c, the timers are never
> >cancelled: If a timeout is cancelled, postgres just sees that it has
> >nothing to do and resumes whatever
On 2015-12-01 20:55:02 +0100, Peter J. Holzer wrote:
> On 2015-12-01 18:58:31 +0100, Peter J. Holzer wrote:
> > I suspect such an interaction because I cannot reproduce the problem
> > outside of a stored procedure. A standalone Perl script doing the same
> > requests
On 2015-12-01 13:13:27 -0500, Tom Lane wrote:
> "Peter J. Holzer" <hjp-pg...@hjp.at> writes:
> > Postgres worker processes are single-threaded, are they? Is there
> > something else which could interact badly with a moderately complex
> > multithreaded I/O
On 2015-12-01 07:16:04 -0800, Adrian Klaver wrote:
> On 12/01/2015 06:51 AM, Peter J. Holzer wrote:
> >A rather weird observation from the log files of our server (9.5 beta1):
> >
> >2015-12-01 09:23:37 CET [26265]: [42-1] user=fragner,db=wds,pid=26265
> >WARNING:
On 2015-12-01 10:20:09 -0800, Adrian Klaver wrote:
> On 12/01/2015 09:58 AM, Peter J. Holzer wrote:
> >On 2015-12-01 07:16:04 -0800, Adrian Klaver wrote:
> >>On 12/01/2015 06:51 AM, Peter J. Holzer wrote:
> >>>A rather weird observation from the log files of our serve
On 2015-12-01 18:58:31 +0100, Peter J. Holzer wrote:
> I suspect such an interaction because I cannot reproduce the problem
> outside of a stored procedure. A standalone Perl script doing the same
> requests doesn't get a timeout.
>
> I guess Alvaro is right: I should strace the
an IPC framework: See http://zeromq.org/ We use it to make
RPC calls from stored procedures to a server process.
and restore yesterday's backup?
* Something else?
hp
On 2015-08-18 20:40:10 +0900, Masahiko Sawada wrote:
On Tue, Aug 18, 2015 at 6:06 PM, Peter J. Holzer hjp-pg...@hjp.at wrote:
It looks like the catalog version has changed between 9.5alpha1 and
9.5alpha2:
[...]
So, what's the best way to do the upgrade?
* Copy the bindir before
pg_indexes AS idx
    ON indexrelname = indexname AND idstat.schemaname = idx.schemaname
JOIN pg_stat_user_tables AS tabstat
    ON idstat.relname = tabstat.relname AND idstat.schemaname = tabstat.schemaname
(for some reason that makes it a lot slower, though)
hp
ON (idx.indexrelid = i.indexrelid )
WHERE i.idx_scan < 200
AND NOT idx.indisprimary
AND NOT idx.indisunique
ORDER BY 1, 2, 3;
is not the query you posted in your original message.
Here is what you posted:
On Mon, May 25, 2015 at 10:41 AM, Peter J. Holzer hjp-pg...@hjp.at wrote