On 14/11/2008 00:24, Eric Jain wrote:
your own risk. There was also mention that pgAdmin had a function to
batch GRANTs, but I couldn't find any such thing in the current
version...
Right-click on the schema in the tree, select Grant wizard from the
context menu, and you're off.
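For those without pgAdmin at hand, the same effect can be had from psql by generating the GRANT statements out of the catalogs and running the output. A minimal sketch, assuming a placeholder role name "readonly":

```sql
-- Generate one GRANT per table in the public schema; feed the
-- resulting statements back into psql. "readonly" is a placeholder role.
SELECT 'GRANT SELECT ON public.' || quote_ident(tablename) || ' TO readonly;'
FROM pg_tables
WHERE schemaname = 'public';
```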
Ray.
On Fri, Nov 14, 2008 at 12:57:32PM +0800, Phoenix Kiula wrote:
1. If I have a unique index on (user_id, url_encrypted), then will
queries asking only for user_id also use this index? Or should i
simply have separate indexes on user_id and url_encrypted? I vaguely
recall reading somewhere that
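For what it's worth, a btree index on (user_id, url_encrypted) can serve queries on its leading column alone, so user_id-only lookups do use it; a query on url_encrypted alone, however, cannot use it efficiently. A sketch with a hypothetical table:

```sql
-- Hypothetical table: the unique index doubles as a user_id index.
CREATE TABLE links (user_id int, url_encrypted text);
CREATE UNIQUE INDEX links_uid_url ON links (user_id, url_encrypted);

-- Can use links_uid_url via its leading column:
EXPLAIN SELECT * FROM links WHERE user_id = 42;

-- url_encrypted is not the leading column, so this generally
-- cannot use the index efficiently; a separate index would help:
EXPLAIN SELECT * FROM links WHERE url_encrypted = 'abc';
```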
Eric Jain wrote:
I came across some PL/pgSQL procedures for doing batch GRANTs, but
they appear to be outdated (i.e. don't work with 8.3) and are run at
your own risk. There was also mention that pgAdmin had a function to
batch GRANTs, but I couldn't find any such thing in the current
paulo matadr wrote:
What is the best way to control fast growth in my database?
I'm going to assume, based on the config info you quoted and the subject
line, that you mean controlling the size of the archived WAL files on
the warm standby server. If that assumption is incorrect, you might
Scott Marlowe wrote:
On Thu, Nov 13, 2008 at 4:08 PM, Dan Armbrust
[EMAIL PROTECTED] wrote:
Why did those particular tables and indexes take _so_ long to vacuum?
Perhaps we have a disk level IO problem on this system?
Assuming pagesize is 8k, then we're talking about scanning 1303*8192
8.4 seconds is a very long time to spend looking up a single record.
Is this table bloated? What does
vacuum verbose books;
say about it? Look for a line like this:
There were 243 unused item pointers
Thanks, but this table books has autovacuum on, and it's manually
vacuumed every hour!
Hi,
I'm not sure if the following is a bug. I certainly found it
surprising, but maybe more experienced users won't.
I have a table with a trigger on it, designed to run security
invoker. In my real code this accesses a temporary table belonging to
the invoker. Then I have a second table, together
Sorry, Opera removed all the newlines from my last post.
Trying again in Firefox...
Hi,
I'm not sure if the following is a bug. I certainly found it
surprising, but maybe more experienced users won't.
I have a table with a trigger on it, designed to run security
invoker. In my real code this
On Thu, Nov 13, 2008 at 06:39:47PM -0800, Eus wrote:
Isn't something like this better handled at the application level
instead of the DB level?
IOW, isn't the cost of doing the query above far more expensive than
doing a little coding at the application level?
That's
Let me give an example: one insert of the form INSERT INTO table2 (SELECT * FROM
table1), where the table is 26 MB in size, makes 226 MB of archive files.
(I made a test with the wal_buffers parameter; before, at its default value of 64k,
the same command made 640 MB of archives; after modifying it to 1024k a
On Thu, Nov 13, 2008 at 11:31:51AM -0800, Erik Jones wrote:
P.S. To whomever said that Gentoo is for single users running
cutting-edge software: poppycock.
I'm a fellow ex-Gentoo user; moved to Debian and things are much easier.
Any self-respecting company running
Gentoo should be
I'm trying to get PostgreSQL running on FreeBSD with 64 GB of memory.
Postgres will be the only thing running, so I want it to use as much
memory as needed.
So I increased shared_memory to 50GB, and also set:
kern.ipc.semmni=512
kern.ipc.semmns=1024
kern.ipc.semmnu=512
in loader.conf and:
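Two things commonly trip people up with this setup: kern.ipc.shmall is measured in 4 kB pages while kern.ipc.shmmax is in bytes, and shared_buffers is normally set well below total RAM (a commonly cited starting point is around 25%, letting the OS cache do the rest). A sketch with illustrative, untuned values for a 64 GB box:

```
# /boot/loader.conf -- illustrative values, not a recommendation
kern.ipc.shmmax=17179869184    # bytes  (16 GB)
kern.ipc.shmall=4194304        # pages  (16 GB / 4 kB per page)

# postgresql.conf
shared_buffers = 16GB          # ~25% of RAM
```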
Hi!
I am currently struggling with a couple of tainted bytes in one of our
PostgreSQL 8.2 databases which I plan to move to 8.3 soon - so I need to dump and
restore.
I think this problem bit me almost every single time during a major upgrade in
the last couple of years, so I must say that I
On Fri, Nov 14, 2008 at 01:58:04PM +0100, Markus Wollny wrote:
Hi!
I am currently struggling with a couple of tainted bytes in one of
our PostgreSQL 8.2 databases which I plan to move to 8.3 soon - so I
need to dump and restore.
At some point there was a plpgsql function posted that you could
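One blunt but effective fallback, assuming the target database is UTF-8 and you are working from a plain-text dump: filter the dump through iconv, whose -c flag silently drops bytes that are not valid UTF-8. (The database and file names below are placeholders.)

```shell
# Typical use:  pg_dump mydb | iconv -f UTF-8 -t UTF-8 -c > dump.clean.sql
# Demonstration: a string containing a stray Latin-1 byte (0xE9,
# octal \351) comes out with the invalid byte removed.
printf 'caf\351 latte\n' | iconv -f UTF-8 -t UTF-8 -c
```

Note that -c silently loses the tainted bytes; if they are actually Latin-1 text, converting with `-f LATIN1 -t UTF-8` instead preserves them as proper UTF-8.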
On Fri, Nov 14, 2008 at 01:56:31PM +0100, Christiaan Willemsen wrote:
I'm trying to get PostgreSQL running on FreeBSD with 64 GB of memory.
Postgres will be the only thing running, so I want it to use as much
memory as needed.
So I increased shared_memory to 50GB, and also set:
Umm, do
Thanks Tomas.
The table may still be bloated - the default autovacuum parameters may not
be aggressive enough for heavily modified tables.
My autovacuum settings:
autovacuum = on
autovacuum_vacuum_cost_delay = 20
vacuum_cost_delay = 20
autovacuum_naptime
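For reference, the stock 8.3 thresholds only trigger a vacuum after roughly 20% of a table's rows have changed, which can lag badly on heavily updated tables. A sketch of more aggressive postgresql.conf settings (illustrative values, not tuned for this particular workload):

```
# postgresql.conf -- illustrative values
autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% of rows change (default 0.2)
autovacuum_vacuum_cost_delay = 10       # ms; lower delay = faster, more intrusive vacuums
autovacuum_naptime = 30                 # seconds between autovacuum checks (default 60)
```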
Yes, it is a 64 bit machine, I'm sure of that:
file ../bin/postgres
../bin/postgres: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD),
for FreeBSD 7.0 (700055), dynamically linked (uses shared libs),
FreeBSD-style, not stripped
I just set kern.ipc.shmall equal to kern.ipc.shmmax, still
On Fri, Nov 14, 2008 at 9:22 PM, Hoover, Jeffrey [EMAIL PROTECTED] wrote:
There were 2132065 unused item pointers.
Looks to me like a large update or insert failed on this table
Thanks. So what can I do? I have reindexed all indexes already!
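Reindexing rebuilds only the indexes; the two million unused item pointers live in the table heap, which plain VACUUM never shrinks. On 8.3, compacting the heap requires VACUUM FULL or CLUSTER, both of which hold an exclusive lock for the duration. A sketch (the index name is a placeholder):

```sql
-- 8.3 syntax: rewrite the heap in index order (exclusive lock held throughout).
CLUSTER books_pkey ON books;

-- Alternative: compact in place. Slow on large tables, and it bloats
-- the indexes, so follow it with a REINDEX.
VACUUM FULL books;
REINDEX TABLE books;
```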
--
Sent via pgsql-general mailing list
Thanks Tomas.
The table may still be bloated - the default autovacuum parameters may not
be aggressive enough for heavily modified tables.
My autovacuum settings:
autovacuum = on
autovacuum_vacuum_cost_delay = 20
vacuum_cost_delay = 20
autovacuum_naptime
To resolve the issue, a little bit more information would be very useful.
What OS are you using?
What resource did you use to install the software
(tutorial, howto, reference guide, etc.)?
What is the exact error you get, and what program gives that error?
Regards,
Serge Fonville
On Fri, Nov 14, 2008
On Wed, Nov 12, 2008 at 3:59 AM, zxo102 ouyang [EMAIL PROTECTED] wrote:
Hi everyone,
My data, all with the same timestamp 2008-11-12 12:12:12, in PostgreSQL are as
follows:
rowid data unitchannel create_on
Thanks everyone. You have helped back up my suspicions. It is indeed
a Linux system, and it has a RAID IO system, but I don't yet know the
details of that IO system. Time to put them back to work looking at
their hardware, rather than blaming our software :)
Thanks for the extra tips on
On Thu, 2008-11-13 at 17:30 -0700, Scott Marlowe wrote:
I had them run a vacuum analyze verbose on my database, and had these
lines come back which made me suspicious:
INFO: index ix_cpe_ispid now contains 41626 row versions in 13727 pages
DETAIL: 5224 index row versions were
Correction, it does not work. At least not the way I think it should.
The database does not start when I register the service with pg_ctl.
I can only start the cluster with the postgres2 user. I get the prompt back,
but I cannot close the window. The DOS command exit causes all
interaction with
There was concurrent access to the table during VACUUMing, so the long
delay is explainable as long waits for cleanup lock, plus probably
thrashing the cache with bloated indexes. The CPU overhead per row seems
OK. We should instrument the wait time during a VACUUM and report that
also.
--
On Fri, 2008-11-14 at 09:00 -0600, Dan Armbrust wrote:
There was concurrent access to the table during VACUUMing, so the long
delay is explainable as long waits for cleanup lock, plus probably
thrashing the cache with bloated indexes. The CPU overhead per row seems
OK. We should
On Fri, Nov 14, 2008 at 12:57:32PM +0800, Phoenix Kiula wrote:
1. If I have a unique index on (user_id, url_encrypted), then will
queries asking only for user_id also use this index? Or should i
simply have separate indexes on user_id and url_encrypted? I vaguely
recall reading somewhere that
Phoenix Kiula [EMAIL PROTECTED] writes:
2. Is there a production equivalent of REINDEX? Last time I tried
CREATE INDEX CONCURRENTLY overnight, by the morning it had croaked
with these errors:
ERROR: deadlock detected
DETAIL: Process 6663 waits for ShareLock on transaction 999189656;
Dean Rasheed [EMAIL PROTECTED] writes:
I have a table with a trigger on it, designed to run security
invoker. In my real code this accesses a temporary table belonging to
the invoker.
Then I have a second table, together with a foreign key between them and
a delete cascade from the second to
I need to replace the string ( ) in the situation below:
select clie_nmcliente from cadastro.cliente where clie_nmcliente like
'%%';
result:
JOANA DARCALMEIDA
EMLURB P M R.
CECILIA DCAGNO
HELENA FERREIRA DFREITAS
JOSE M. BARRACA DO BOLA
FORTE DUNAS BAR
JOANA DARC R. DE SOUZA
ASSEMBLEIA
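Assuming the goal is to strip a single unwanted character from the names (the character itself was swallowed by the list archive, so '?' below is only a placeholder for it, not a literal question mark), a plain UPDATE with replace() covers every affected row:

```sql
-- '?' is a stand-in for the unwanted character lost in the archive.
UPDATE cadastro.cliente
SET    clie_nmcliente = replace(clie_nmcliente, '?', '')
WHERE  clie_nmcliente LIKE '%?%';
```

Replacing the character with NULL instead of '' would make the whole string NULL, since any string operation involving NULL yields NULL.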
Perhaps you can post the steps you took, since I did not receive the
original message with errors.
What command did you use when you got this error,
exactly (including paths, arguments, etc.)?
Have you created the appropriate users?
Have you assigned the corresponding permissions?
Have you reviewed the
On 14/11/2008 16:17, paulo matadr wrote:
Can anyone help me create a PL/pgSQL function to replace ( ) with a null
value? I have many tables with this.
Do you mean you want to replace just those characters with NULL? Won't
this cause the entire string to be NULL?
postgres=# select 'abc' || NULL;
 ?column?
----------

(1 row)
On Fri, Nov 14, 2008 at 12:17 PM, paulo matadr [EMAIL PROTECTED] wrote:
I need to replace the string ( ) in the situation below:
select clie_nmcliente from cadastro.cliente where clie_nmcliente like
'%%';
result:
JOANA DARCALMEIDA
EMLURB P M R.
CECILIA DCAGNO
HELENA FERREIRA DFREITAS
Tom Lane wrote:
Dean Rasheed [EMAIL PROTECTED] writes:
I have a table with a trigger on it, designed to run security
invoker. In my real code this accesses a temporary table belonging to
the invoker.
Then I have a second table, together with a foreign key between them and
a delete cascade
Craig Ringer [EMAIL PROTECTED] writes:
Is the search_path reset in some way that's not visible in
pg_catalog.pg_settings when the ON DELETE CASCADE is issued?
No, I don't believe so. Perhaps your test case was simply fooled by
plan caching within the trigger function?
In general the solution
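If plan caching inside the trigger function is indeed the culprit, the usual PL/pgSQL workaround is to issue the statement through EXECUTE, so it is re-planned on every call and resolves the temporary table in the current session. A hypothetical sketch (function, table, and column names invented for illustration):

```sql
-- EXECUTE builds the statement as a string, so it is planned fresh on
-- each call and always resolves "temp_audit" to the invoking session's
-- temporary table instead of a previously cached one.
CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    EXECUTE 'INSERT INTO temp_audit(old_id) VALUES ('
            || quote_literal(OLD.id::text) || ')';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql SECURITY INVOKER;
```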