Hello,
I'm still a beginner with databases. I have a question about triggers
that model the behavior of some data:
usually a relational database may contain triggers and declarative SQL
constraints.
My question is: how can I detect these triggers within a database?
A PostgreSQL-specific answer would help.
Hi,
do you want to know how to create a trigger?
http://www.postgresql.org/docs/9.0/interactive/sql-createtrigger.html
Or do you want to know how to list the triggers in your database?
- in pgAdmin they are listed in the tree-view
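To list triggers from SQL rather than the pgAdmin tree-view, the catalogs can be queried; a sketch (view and catalog names are standard, but you may want extra schema filters for your setup):

```sql
-- Standard information_schema view: one row per trigger event
SELECT trigger_name, event_object_table, event_manipulation
FROM information_schema.triggers;

-- Or PostgreSQL's own catalog, skipping the internal triggers
-- that back foreign-key constraints (tgisinternal exists in 9.0+):
SELECT tgname, tgrelid::regclass AS table_name
FROM pg_trigger
WHERE NOT tgisinternal;
```

Neither query requires superuser rights, though you only see triggers on tables you have privileges on.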
I'll leave the can/cannot responses to those more familiar with
high-load situations, but I am curious: what reasons other than cost are
leading you to discontinue using your current database engine? With that
many entities any migration is likely to be quite challenging even if you
maybe this?
http://enterprisedb.com/resources-community/webcasts-podcasts-videos
cheers,
WBL
On Tue, Mar 1, 2011 at 3:34 PM, James B. Byrne byrn...@harte-lyne.ca wrote:
I recently viewed a screen-cast on PostgreSQL developed
also nice:
sysv-rc-conf - SysV init runlevel configuration tool for the terminal
On Tue, Mar 1, 2011 at 2:09 PM, Ray Stell ste...@cns.vt.edu wrote:
On Tue, Mar 01, 2011 at 06:37:35PM +0530, Adarsh Sharma wrote:
But I want to start it after booting automatically.
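If the packaged init script is present, it only needs to be enabled for the default runlevels; a sketch (the service name and which tool exists vary by distribution, so treat these as assumptions):

```shell
sudo update-rc.d postgresql defaults   # Debian/Ubuntu
sudo chkconfig postgresql on           # RHEL/CentOS
sudo sysv-rc-conf postgresql on        # with the tool mentioned above
```

After enabling, `service postgresql status` (or the script in /etc/init.d) should confirm it starts on boot.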
On Thu, Mar 3, 2011 at 6:41 AM, Nick Raj nickrajj...@gmail.com wrote:
Which data type should be used in the above function (in place of the ?)
so that it can collect more than one row (20,000)?
Maybe the id that those 20M records have in common?
hth,
WBL
A developer here accidentally flooded a server with connection opens and
closes, essentially one per transaction during a multi-threaded data
migration process.
We were curious if this suggests that connection clean-up is more
expensive than creation, thereby exhausting resources, or if perhaps
After upgrading to pg 9.0.3 (from 8.4.2) on my Mac OS 10.6.2 machine I find
this in my log file (a lot):
postgres%192.168.254.210%2011-03-03 16:37:30 CET%22021STATEMENT: SELECT
pg_file_read('pg_log/postgresql-2011-03-03_00.log', 25, $
postgres%192.168.254.210%2011-03-03 16:37:32
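Those pg_file_read calls look like what pgAdmin issues through the adminpack contrib module, and contrib modules have to be reinstalled into the new cluster after a major-version upgrade; a sketch, assuming a 9.0 installation whose contrib scripts live under `pg_config --sharedir`:

```shell
psql -U postgres -d postgres -f "$(pg_config --sharedir)/contrib/adminpack.sql"
```

This recreates pg_file_read and friends, which adminpack defines as pgAdmin-compatible aliases for the built-in file-access functions.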
top-posting myself to clarify, since the title is in fact inverted:
close might be more expensive than open.
On 03/03/2011 08:28 AM, Rob Sargent wrote:
A developer here accidentally flooded a server with connection opens and
closes, essentially one per transaction during a multi-threaded data
migration process.
We were curious if this suggests that connection clean-up is more
expensive than creation, thereby exhausting resources, or if perhaps
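Whichever side is more expensive, the usual mitigation is to stop paying the cost per transaction: open a few connections once and reuse them. pgbouncer or the driver's built-in pooling is the real answer; the idea itself can be sketched in a few lines of Python (this ConnectionPool and the stand-in factory are hypothetical illustrations, not a real driver API):

```python
import queue

class ConnectionPool:
    """Toy pool: open N connections up front, reuse them for every transaction."""
    def __init__(self, make_conn, size=5):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_conn())   # pay the open cost only once

    def acquire(self):
        return self._pool.get()           # blocks if all connections are in use

    def release(self, conn):
        self._pool.put(conn)              # return to the pool, don't close

# Usage with a stand-in "connection" factory that records each open:
opened = []
pool = ConnectionPool(lambda: opened.append(object()) or opened[-1], size=2)
for _ in range(100):                      # 100 "transactions"...
    c = pool.acquire()
    pool.release(c)
print(len(opened))                        # -> 2 opens, not 100
```

The migration in question would swap the factory for a real driver connect call and hold each connection only for the duration of one transaction.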
I use pgsql 9.0.3 and I know that PostgreSQL tries to use the fields
in indexes instead of the original table when possible.
But when I run
SELECT COUNT(id) FROM tab
or
SELECT COUNT(*) FROM tab
where id is the PRIMARY KEY and there are other indexes, I get an
execution plan that
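For what it's worth, this is expected on 9.0: index-only scans only arrived in PostgreSQL 9.2, so COUNT(id) and COUNT(*) both have to visit the heap to check row visibility, and the planner usually settles on a sequential scan. A sketch of how to see it (table name taken from the question):

```sql
EXPLAIN SELECT COUNT(*) FROM tab;
-- On 9.0 the plan is typically an Aggregate over a Seq Scan on tab,
-- regardless of the primary key or any other indexes.
```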
Hey folks,
I was looking through the contrib modules with 8.4 and hoping to find
something that satisfies my itch.
http://www.postgresql.org/docs/8.4/static/pgstatstatements.html comes the
closest.
I'm inheriting a database which has mostly unknown usage patterns, and would
like to figure them
On 03/03/2011 05:29 AM, obamaba...@e1.ru wrote:
On Thu, Mar 3, 2011 at 12:34 PM, Andy Colson a...@squeakycode.net wrote:
There are stat tables you can look at:
http://www.postgresql.org/docs/9.0/static/monitoring-stats.html
-Andy
Aha! Thank you.
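Beyond per-query stats, the per-table counters in those views show access patterns directly; a sketch of the sort of query that helps when inheriting an unknown database (view and column names are standard in 8.4/9.0):

```sql
-- Tables ranked by how often they are scanned, plus write activity
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan + COALESCE(idx_scan, 0) DESC;
```

A high seq_scan count on a large table is often the first hint of a missing index.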
Given the below information, is it reasonable to assume that files in pgsql_tmp
dated prior to 10 days ago can be safely removed?
postgres@hw-prod-repdb1 uptime
15:01:35 up 10 days, 13:50, 3 users, load average: 1.23, 1.15, 0.63
postgres@hw-prod-repdb1 pwd
/mnt/iscsi/psql_tmp/tmpdata
Reid Thompson reid.thomp...@ateb.com writes:
Given the below information, is it reasonable to assume that files in
pgsql_tmp dated prior to 10 days ago can be safely removed?
That listing doesn't show anything in pgsql_tmp ...
What the listing looks like to me is a tablespace containing a few
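For background, pgsql_tmp holds sort/hash spill files for queries in flight and is emptied at postmaster start, so anything older than the last restart is leftover. A toy sketch of listing (not yet deleting) old candidates, using a scratch directory as a stand-in for the real path:

```shell
# Stand-in directory playing the role of .../base/pgsql_tmp
demo=$(mktemp -d)
touch "$demo/pgsql_tmp1234.0"                  # fresh spill file
touch -t 202101010000 "$demo/pgsql_tmp5678.0"  # stale one
# List temp files untouched for more than 10 days; verify no
# long-running session still owns them before removing anything.
find "$demo" -type f -name 'pgsql_tmp*' -mtime +10
```

Against a live cluster you would point find at the actual pgsql_tmp directory and only delete files predating the last postmaster start.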
Rayner --
...
I have a database of 1000 tables; 300 of them are growing heavily, by
1 rows daily, and the estimated growth for this database is
2.6 TB every year.
In and of itself, the sheer number of rows only hurts when you need to
read most of them; in that case good hardware
On Tue, Mar 1, 2011 at 7:24 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Adrian Klaver adrian.kla...@gmail.com writes:
Looks like the TOAST compression is not working on the second machine.
Not sure how that could come to be. Further investigation underway :)
Somebody carelessly messed with the
Hi:
I have to update all the records of a table. I'm worried about what the table
will look like in terms of fragmentation when this is finished. Is there some
sort of table healing/reorg/rebuild measure I should take if I want the
resulting table to operate at optimal efficiency? What
On Thu, 2011-03-03 at 20:03 -0700, Gauthier, Dave wrote:
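For the record, the standard measures: a whole-table UPDATE leaves a dead predecessor version of every row, so after it commits you either let plain VACUUM make that space reusable, or rewrite the table to hand space back to the OS. A sketch (mytable and mytable_pkey are placeholder names):

```sql
-- Reclaims dead space for reuse without shrinking the file (non-blocking):
VACUUM ANALYZE mytable;

-- Rewrites the table compactly; takes an exclusive lock while it runs:
VACUUM FULL mytable;
-- or rewrite in index order, which also rebuilds that index:
CLUSTER mytable USING mytable_pkey;
```

If the table will be fully updated again soon, plain VACUUM is usually enough, since the freed space gets reused by the next round of updates.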
Yesterday I had some problems with PostgreSQL 9.0.2. Today I backed up
postgres and got this error:
pg_dump: reading dependency data
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid page header in block 299
of relation pg_depend_depender_index
pg_dump: The command was:
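Since the damaged relation here is an index rather than a table, it can be rebuilt from the table's data; a sketch, assuming superuser access and that the underlying pg_depend table itself is intact (and after ruling out wider disk problems and taking a filesystem-level copy first):

```sql
-- Rebuild the damaged system index; run as superuser.
REINDEX INDEX pg_depend_depender_index;
-- If other pg_depend indexes are also suspect:
REINDEX TABLE pg_depend;
```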