Hi there,
We are considering using a NetApp filer for a very busy 24x7 Postgres database, and the main reason we chose NetApp is the "snapshot" functionality for backing up the database online. The filer would be mounted on a RH Linux server (7.3), 4 GB RAM, dual CPUs, with a dedicated card for file
Hi,
Does anybody know the internal pg_ tables that have the table/column definitions (similar to user_tab_columns in Oracle)?
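Something along these lines is what I'm after - a rough sketch, assuming pg_class/pg_attribute are the catalogs to join ('accounts' is just a stand-in table name; psql's \d reads from these catalogs, and psql -E echoes the queries it runs):

SELECT a.attname                            AS column_name,
       format_type(a.atttypid, a.atttypmod) AS data_type,
       a.attnotnull                         AS not_null
FROM   pg_class c
       JOIN pg_attribute a ON a.attrelid = c.oid
WHERE  c.relname = 'accounts'     -- stand-in table name
  AND  a.attnum > 0               -- skip system columns
ORDER  BY a.attnum;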
thanks,
Shankar
---
> You mean you want to round off to even seconds? Try
> date_trunc,
> or cast to "timestamp(0) with time zone".
Well, I just need to truncate whatever comes after the timestamp, i.e. from 06/10/2003 12:50:19.188 PDT I need to remove the .188 PDT.
I'll see if date_trunc helps me or not. Thanks for the s
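For what it's worth, a minimal sketch of both suggestions (ts_col and some_table are placeholders; note that the precision cast may round to the nearest second rather than truncate):

-- truncate to whole seconds, keeping the time zone
SELECT date_trunc('second', ts_col) FROM some_table;

-- casting to timestamp(0) without time zone drops both the fractional
-- seconds and the zone on output (i.e. the trailing ".188 PDT")
SELECT CAST(ts_col AS timestamp(0)) FROM some_table;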
Oops, forgot to mention: the database version is 7.3.2.
--- Shankar K <[EMAIL PROTECTED]> wrote:
> Folks,
>
> Our database stores most of its dates in timestamp
> with time zone format, so the data looks like this...
>
> 09/12/2003 12:51:31.268 PDT
> 09/12/2003 12:50:20 PDT
Folks,
Our database stores most of its dates in timestamp with time zone format, so the data looks like this...
09/12/2003 12:51:31.268 PDT
09/12/2003 12:50:20 PDT
Some have centiseconds in them along with the TZ. Now I'm wondering: is there a way to just extract the date and timestamp alone, leaving the c
Hi,
I've got a database cluster with multiple databases on it, all with similar tables/indexes etc. Now the question is, how do I identify which database raised a given error (which gets logged in the server logfile)? For example, the snippet below gets logged in the server logfile, which clearly says dummy_id is not a column o
Hi everyone,
I have a table that sees few inserts and mostly updates, which we vacuum every half-hour. Here is the output of VACUUM ANALYZE:
INFO: --Relation public.accounts--
INFO: Index accounts_u1: Pages 1498; Tuples 515:
Deleted 179.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: Index acc
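For context, the page and tuple figures in output like this are also recorded in pg_class by VACUUM/ANALYZE, so they can be compared between runs; a quick sketch, reusing the table and index names above:

-- relpages/reltuples reflect the sizes seen by the last VACUUM or ANALYZE
SELECT relname, relkind, relpages, reltuples
FROM   pg_class
WHERE  relname IN ('accounts', 'accounts_u1');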
Thanks Tom, that was very useful.
Just wondering what "Keep 0, UnUsed 205434" refers to here. Does any of it have an impact on evaluating the vacuum frequency?
thanks,
Shankar
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Shankar K <[EMAIL PROTECTED]> writes:
>
Thanks Jonathan, Tom...
--- Jonathan Gardner <[EMAIL PROTECTED]>
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On Monday 02 June 2003 12:25, Tom Lane wrote:
> > Shankar K <[EMAIL PROTECTED]> writes:
> > > i'm trying to log the verbos
Hi all,
I'm trying to work out how frequently to run VACUUM ANALYZE on key tables, so if anyone could help me interpret the VACUUM ANALYZE VERBOSE output, that would be great. Below is the output for one of our major indexes.
INFO: --Relation public.accounts--
INFO: Index accounts_u1
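One related check that can help with choosing a frequency, assuming the row-level statistics collector is enabled: the per-table activity counters show how many inserts/updates/deletes pile up between vacuums. A rough sketch, with 'accounts' carried over from the output above:

-- updated and deleted rows are what VACUUM eventually has to reclaim
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM   pg_stat_user_tables
WHERE  relname = 'accounts';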
Hi there,
I'm trying to log the verbose output of vacuumdb to a file along with some other details, but I only end up logging this:
-
Begin vacuum/analyze db at Mon Jun 2 10:37:29 PDT
2003
Vacuuming test
SET autocommit TO 'on';SET vacuum
Hi everybody,
I checked through the documentation but couldn't find a list of the bugs fixed in the 7.3 release, the new features introduced in 7.3, or any performance improvements etc. compared to 7.2.
The reason for asking is that I'd like to make a case to my management for usin