To get the cluster up and running, you only need to move a GB or two.
On 5/1/19 9:24 PM, Igal Sapir wrote:
Thank you both. The symlink sounds like a very good idea. My other disk
is 100GB and the database is already 130GB, so moving the whole thing would
require provisioning that will take more time. I will try the symlinks
first, and possibly move some tables to a tablespace on the other partition
to make more room.
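For the tablespace route I'm thinking of something along these lines
(untested sketch; "spill", "mydb", and "events" are placeholders for my
actual names, and /mnt/data2 for wherever the new partition is mounted):

# directory on the new 100GB partition, empty and owned by postgres
mkdir /mnt/data2/pg_tblspc
chown postgres /mnt/data2/pg_tblspc
psql -c "CREATE TABLESPACE spill LOCATION '/mnt/data2/pg_tblspc'"
# rewrites the table into the new tablespace, freeing space under base/
psql -d mydb -c "ALTER TABLE events SET TABLESPACE spill"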
I have a scheduled process that runs daily to delete old data and do a
full vacuum, so I'm not sure why this happened (again).
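For reference, that nightly job boils down to roughly the following
(the table name and retention interval here are placeholders):

# delete old rows, then reclaim the space with a full vacuum
psql -d mydb -c "DELETE FROM events WHERE created_at < now() - interval '30 days'"
vacuumdb --full --dbname=mydb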
Thanks,
Igal
On Wed, May 1, 2019 at 6:02 PM Michael Loftis <mlof...@wgops.com> wrote:
Best option: copy/move the entire pgdata to a larger space. It may
also be enough to just move the WAL (leaving a symlink), freeing up the
625M, but I doubt it, since VACUUM FULL works in the same tablespace
and can need an equal amount of space (130G) depending on how much it
can actually free up.
You may also get away with just moving the base directory (and leaving
a symlink), but I don't recall whether that works.
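Roughly, for the WAL move (assuming the new partition is mounted at
/mnt/data2, which is a placeholder, and PGDATA is set as in your log):

# with the server stopped, relocate the WAL and leave a symlink behind
mv /pgdata/pg_wal /mnt/data2/pg_wal
ln -s /mnt/data2/pg_wal /pgdata/pg_wal
# then start again and let crash recovery finish
/usr/lib/postgresql/11/bin/pg_ctl start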
On Wed, May 1, 2019 at 18:07 Igal Sapir <i...@lucee.org> wrote:
I have Postgres running in a Docker container with PGDATA mounted
from the host. Postgres has consumed all of the disk space, 130GB [1],
and cannot be started [2]. The database has a lot of bloat due to
many deletions, and the problem is that now I cannot start Postgres
at all.
I mounted an additional 100GB partition, hoping to fix the bloat with
a TABLESPACE on the new mount, but how can I do anything if Postgres
will not start in the first place?
I expected there to be a tool that can defrag the database files,
e.g. a "vacuumdb" utility that can run without Postgres, or maybe a
way to run Postgres with the WAL disabled so that no new disk space
is required.
Surely, I'm not the first one to experience this issue. How can I
fix this?
Thank you,
Igal
[1]
root@ff818ff7550a:/# du -h --max-depth=1 /pgdata
625M /pgdata/pg_wal
608K /pgdata/global
0 /pgdata/pg_commit_ts
0 /pgdata/pg_dynshmem
8.0K /pgdata/pg_notify
0 /pgdata/pg_serial
0 /pgdata/pg_snapshots
16K /pgdata/pg_subtrans
0 /pgdata/pg_twophase
16K /pgdata/pg_multixact
130G /pgdata/base
0 /pgdata/pg_replslot
0 /pgdata/pg_tblspc
0 /pgdata/pg_stat
0 /pgdata/pg_stat_tmp
7.9M /pgdata/pg_xact
4.0K /pgdata/pg_logical
0 /pgdata/tmp
130G /pgdata
[2]
postgres@1efd26b999ca:/$ /usr/lib/postgresql/11/bin/pg_ctl start
waiting for server to start....2019-05-01 20:43:59.301 UTC [34]
LOG: listening on IPv4 address "0.0.0.0", port 5432
2019-05-01 20:43:59.301 UTC [34] LOG: listening on IPv6 address
"::", port 5432
2019-05-01 20:43:59.303 UTC [34] LOG: listening on Unix socket
"/var/run/postgresql/.s.PGSQL.5432"
2019-05-01 20:43:59.322 UTC [35] LOG: database system shutdown was
interrupted; last known up at 2019-05-01 19:37:32 UTC
2019-05-01 20:43:59.863 UTC [35] LOG: database system was not
properly shut down; automatic recovery in progress
2019-05-01 20:43:59.865 UTC [35] LOG: redo starts at 144/4EFFFC18
...2019-05-01 20:44:02.389 UTC [35] LOG: redo done at 144/74FFE060
2019-05-01 20:44:02.389 UTC [35] LOG: last completed transaction
was at log time 2019-04-28 05:05:24.687581+00
.2019-05-01 20:44:03.474 UTC [35] PANIC: could not write to file
"pg_logical/replorigin_checkpoint.tmp": No space left on device
2019-05-01 20:44:03.480 UTC [34] LOG: startup process (PID 35) was
terminated by signal 6: Aborted
2019-05-01 20:44:03.480 UTC [34] LOG: aborting startup due to
startup process failure
2019-05-01 20:44:03.493 UTC [34] LOG: database system is shut down
stopped waiting
pg_ctl: could not start server
Examine the log output.
--
"Genius might be described as a supreme capacity for getting its
possessors into trouble of all kinds."
-- Samuel Butler
--
Angular momentum makes the world go 'round.