Hi.
I have a 1.8TB PG database that we're doing "fairly heavy" batch updates
on, say 2/3 of the database monthly in a background batch process. The
system is working really well and performing well, but we're always
hunting for more speed (and a smaller amount of WAL). So I tried to look
into how
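One generally applicable trick for cutting WAL on bulk updates, sketched
here with hypothetical table names, is to skip rows that would not
actually change; every avoided row version is heap and WAL traffic saved:

  -- Only rewrite rows whose value actually differs (hypothetical tables).
  UPDATE target t
     SET payload = s.payload
    FROM staging s
   WHERE t.id = s.id
     AND t.payload IS DISTINCT FROM s.payload;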
On 23/04/2012, at 19.10, A J wrote:
> In FTS, how do I search for partial substrings that don't form an English word?
> Example, in the text: 'one hundred thirty four' I want to find the records
> based on 'hun'
>
> SELECT to_tsvector('one hundred thirty four') @@ to_tsquery('hun');
> does not
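For what it's worth, to_tsquery supports prefix matching with the :*
modifier (PostgreSQL 8.4 and later), which covers exactly this case:

  -- 'hun:*' matches any lexeme starting with 'hun', e.g. 'hundred'.
  SELECT to_tsvector('one hundred thirty four') @@ to_tsquery('hun:*');
  -- returns true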
Hi.
Is there any way I can force explicit use of transactions on
inserts/updates?
I do have users on the system who may forget to code it that way, and it
would be nice to be able to have the database just kick them out if it
didn't happen.
--
Jesper
Hi List.
I have somehow ended up with a PostgreSQL instance left in this state:
the database crashed today, and I need to do a point-in-time restore
that should stop before 10:30 today. I do have WAL files covering the
last 7 days, and I have a scheduled base backup that runs
pg_start_backup before
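For reference, stopping the restore at a given time is done with
recovery_target_time in recovery.conf (on versions before 12); a minimal
sketch, assuming a hypothetical archive location:

  restore_command = 'cp /mnt/walarchive/%f %p'   # hypothetical archive path
  recovery_target_time = '2012-04-23 10:30:00'   # hypothetical date; stop before the crash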
On 2011-10-09 17:41, Tom Lane wrote:
> Jesper Krogh writes:
>> I have got a corrupt DB, most likely due to an XFS bug.
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: invalid page header in block
>> 14174944 of relation base/16385/58318948
>> Can I somehow get pg_dump to "i
Hi.
I have got a corrupt DB, most likely due to an XFS bug.
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid page header in block
14174944 of relation base/16385/58318948
Can I somehow get pg_dump to "ignore" that block and dump everything else?
Jesper
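For the archives: the usual last resort here is the zero_damaged_pages
setting. It makes the server treat pages with invalid headers as empty,
so the rows on them are lost but everything else dumps. It needs
superuser, and pg_dump can inherit it via
PGOPTIONS='-c zero_damaged_pages=on':

  SET zero_damaged_pages = on;  -- superuser only; bad pages read back as empty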
I was wondering if we can query/obtain the high-water mark of the number
of sessions or connections reached in a Postgres database. Is there a
view or command that can provide this information? pg_stat_database
shows the current number of connections, but not the high-water mark a
database ha
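As far as I know there is no built-in high-water mark; the usual
workaround is to sample the current count periodically (e.g. from cron
or a monitoring agent) and keep the maximum yourself:

  -- Sample this regularly and record the peak externally.
  SELECT count(*) AS current_connections FROM pg_stat_activity;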
On 2011-05-28 15:19, Selva manickaraja wrote:
> Please advise as to how we can solve this problem. We are worried that
> this error could happen over the weekend and without our knowledge the
> db may just shut down again.
Add in 2TB of space for the logs and instruct your monitoring system
to tell yo
On 2010-09-07 22:47, Scott Marlowe wrote:
OK, recently I compared prices on a NexSan SASBeast with 42 15K SAS
drives against an HP MDS600 with 15K SAS drives.
The former is 8Gbit Fibre Channel, the latter is 3Gbit DAS SAS. The
Fibre Channel version is about 20% more expensive per TB.
So of course it
On 2010-09-07 20:42, Scott Marlowe wrote:
> With the right supplier, you can plug in literally 100 hard drives to
> a regular server with DAS, and for a fraction of the cost of a SAN.
OK, recently I compared prices on a NexSan SASBeast with 42 15K SAS
drives against an HP MDS600 with 15K SAS drives
On 2010-08-28 17:44, Joshua D. Drake wrote:
On Sat, 2010-08-28 at 17:25 +0200, David Montoya wrote:
> Hello:
> I have a DB with 90GB in PostgreSQL 8.1 and I want to move to another
> server. The new server has PostgreSQL 8.3 and I don't know if I can use
> PITR to do the replication.
You cannot.
On 2010-05-21 00:04, Greg Smith wrote:
Jesper Krogh wrote:
> A battery-backed RAID controller is not that expensive (in the range
> of 1 or 2 SSD disks), and it is (more or less) a silver bullet for the
> task you describe.
Maybe even less; in order to get an SSD that's reli
On 2010-05-20 22:26, Balkrishna Sharma wrote:
> But if we have a write-through setting, a failure before the cache can
> write to disk will result in an incomplete transaction (i.e. the host
> will know that the transaction was incomplete). Right?
> Two things I need for my system are: 1. Unsuccessful transactio
Renato Oliveira wrote:
> Dear all,
>
> I have been thinking about PostgreSQL backup for some time now.
>
> I can't implement PITR right now on our live systems, for commercial
> reasons.
I think you need to rethink this one...
> 4 - Backup: instead of time, use transaction ID. Do a full backup, mar
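If the goal is recovering to a known transaction boundary, the building
blocks do exist: note the current transaction ID when the backup is
taken, and later recover with recovery_target_xid. A sketch of the
recording side (txid_current is available from 8.3 on):

  SELECT txid_current();  -- store this value next to the base backup;
                          -- it can later serve as recovery_target_xid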
Michael Graziano wrote:
> On Nov 18, 2009, at 2:45 AM, Julius Tuskenis wrote:
>
>> The question is what user should do backups. Is it good practice to
>> use superuser for that?
>
> If you're doing your backup with pg_dump (on an individual DB) you need
> a DB user who has read access to everythi
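A dedicated read-only role avoids using the superuser; a minimal sketch
with hypothetical names (the GRANT ... ON ALL TABLES form needs 9.0+):

  CREATE ROLE backup_user LOGIN PASSWORD 'secret';  -- hypothetical name/password
  GRANT USAGE ON SCHEMA public TO backup_user;
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup_user;
  GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup_user;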
Hi.
I have read the manual chapter a couple of times. I've even done a
restore of my database, and that worked fine. But I'd like to have it
confirmed that the procedure is correct, and that it didn't just work
because I didn't have any real activity on the database in my test
setup.
I have a Before Backu
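For comparison, the basic shape of the SQL-level base backup procedure
of that era looks like this:

  SELECT pg_start_backup('nightly');  -- then copy the data directory
  -- ... filesystem-level copy of $PGDATA runs here ...
  SELECT pg_stop_backup();            -- ends the backup; keep the WAL
                                      -- generated in between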
Hi.
I have a mostly insert/select database, so vacuuming is generally not
much needed, but among the tables I have one which is used as a message
queue, so there are a lot of inserts/updates/deletes on small, simple
records on that one.
How do I check that autovacuum is running at all? (I think I have
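Two quick checks, assuming 8.2 or later (where per-table stats include
the time of the last autovacuum) and a hypothetical table name:

  SHOW autovacuum;  -- should be 'on'
  SELECT relname, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
   WHERE relname = 'message_queue';  -- hypothetical table name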
Tom Lane wrote:
> Jesper Krogh <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> Drop the constraints in the source database.
>
>> That would be my workaround for the problem. But isn't it somehow
>> desirable that pg_dumpall | psql "always
Tom Lane wrote:
> Jesper Krogh <[EMAIL PROTECTED]> writes:
>> The tables are running a "home-made" time-travelling feature where a
>> constraint on the table implements the foreign keys on the table.
>
> You mean you have check constraints that do selects on o
a
constraint on the table implements the foreign keys on the table.
How can I instruct pg_dumpall to turn off these constraints during
dump/restore?
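Since CHECK constraints cannot be disabled, only dropped and re-created,
the usual workaround looks roughly like this toy example (hypothetical
table and constraint):

  ALTER TABLE history DROP CONSTRAINT history_valid_check;  -- before the dump
  -- ... pg_dumpall | psql ...
  ALTER TABLE history ADD CONSTRAINT history_valid_check
    CHECK (valid_from < valid_to);  -- re-create afterwards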
Jesper
--
Jesper Krogh, [EMAIL PROTECTED]
Is it possible to migrate from 7.4.3 - i386 to 7.4.6 - x86-64 without a
dump and restore?
Jesper
--
./Jesper Krogh, [EMAIL PROTECTED]
Jabber ID: [EMAIL PROTECTED]