On Fri, 31 Oct 2003, Naomi Walker wrote:
> We have a similar issue regarding security. Some of the access to our
> database will be by ODBC connections for reporting purposes (i.e. Actuate
> Report/Crystal Reports). Without creating a zillion or so views (which I
> suspect carries with it a lot
Thanks to all.
I was in the WRONG database. Sorry, and thanks!
-Original Message-
From: Tom Lane <[EMAIL PROTECTED]>
To: Jeff <[EMAIL PROTECTED]>
Cc: "PostgreSQL" <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
Date: Fri, 31 Oct 2003 16:27:31 -0500
Subject: Re: [ADMIN] SELECT COUNT(*)... returns 0
Jeff <[EMAIL PROTECTED]> writes:
> "PostgreSQL" <[EMAIL PROTECTED]> wrote:
>> Why is the result ZERO? The table has 1,400,000 rows!
> 1. did you remember to load data?
> 2. did someone accidentally delete the data?
> 3. are you connected to the correct db (I've panic'd before but realized
> I was
Rajesh Kumar Mallah wrote:
>
> Hi,
>
> I think it's not going to be a trivial task to take out only the LOG duration
> lines from a PostgreSQL logfile. We need to extract the duration and
> the actual statement. I think I will put custom delimiters around the
> statements for the time being so that
Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
> In the explain below are the references
> "outer"."?column2?" = "inner"."?column2?"
> Ok?
Yeah, those are variables that don't have any name because they don't
correspond exactly to table columns. It looks like the plan is
merge-joining (main.id)
On Fri, Oct 31, 2003 at 02:05:57PM -0500, Tom Lane wrote:
>
> You might be able to get past this by starting a standalone postgres
> with the -P command-line option (ignore system indexes). If so, try
> "select relname, relfilenode from pg_class". With luck that will give
> you a list of which f
On Fri, 31 Oct 2003 13:33:09 -0600
"PostgreSQL" <[EMAIL PROTECTED]> wrote:
> I have installed Postgres 7.3.4 on RH 9,
> if I execute:
>
> select count(*) from cj_tranh;
> count
> ---
> 0
> (1 row)
>
> Why is the result ZERO? The table has 1,400,000 rows!
> What is wrong? Can anybody help?
Brian Ristuccia <[EMAIL PROTECTED]> writes:
> The standalone backend errors out with:
> FATAL 1: _mdfd_getrelnfd: cannot open relation pg_trigger: No such file or
> directory
Well, if you can identify which of the lost+found files is pg_trigger,
you can move it back into place and then try again.
I've got a question about references: is it possible
to specify criteria for a reference? For example, Table1 has Field1
that references Field1 in Table2; however, I only want Field1 in
Table1 to reference the records in Table2 that have the Valid field set to
TRUE. In other words,
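One common workaround for this is to give the child table a flag column pinned to TRUE and point a two-column foreign key at a UNIQUE (id, valid) constraint on the parent. The sketch below uses SQLite purely so it is self-contained and runnable; the table and column names mirror the question, and PostgreSQL accepts the same composite-foreign-key DDL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection

# Parent table: the UNIQUE constraint on (id, valid) is what lets a
# composite foreign key target those two columns.
conn.execute("""
    CREATE TABLE table2 (
        id    INTEGER,
        valid BOOLEAN,
        UNIQUE (id, valid)
    )
""")

# Child table: the CHECK constraint pins the referenced 'valid' flag
# to TRUE, so field1 can only match rows where valid is TRUE.
conn.execute("""
    CREATE TABLE table1 (
        field1 INTEGER,
        valid  BOOLEAN DEFAULT 1 CHECK (valid = 1),
        FOREIGN KEY (field1, valid) REFERENCES table2 (id, valid)
    )
""")

conn.execute("INSERT INTO table2 VALUES (1, 1)")  # a valid parent row
conn.execute("INSERT INTO table2 VALUES (2, 0)")  # an invalid parent row

conn.execute("INSERT INTO table1 (field1) VALUES (1)")  # accepted

try:
    conn.execute("INSERT INTO table1 (field1) VALUES (2)")  # rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The cost of this approach is the extra pinned column in the child table; flipping a parent row's Valid flag to FALSE will be blocked (or cascaded, depending on the FK action) while children reference it.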
I have installed Postgres 7.3.4 on RH 9,
if I execute:
select count(*) from cj_tranh;
count
---
0
(1 row)
Why is the result ZERO? The table has 1,400,000 rows!
What is wrong? Can anybody help, please?
Brian Ristuccia <[EMAIL PROTECTED]> writes:
> Recently I had a problem where a system crash scribbled on some directories,
> which landed a bunch of files, including a few of the system table files for
> one of my databases, in lost+found along with a zillion other files.
Ugh.
> I might be able t
Hi ,
In the explain below are the references
"outer"."?column2?" = "inner"."?column2?"
Ok?
rt3=# SELECT version();
version
PostgreSQL 7.4beta5 on i686-pc-linux-gnu, compiled by GCC 2.96
(1 row)
On Friday 31 October 2003 11:19 am, Marek Florianczyk wrote:
> On Fri, 31-10-2003 at 16:51, Mike Rylander wrote:
> > On Friday 31 October 2003 09:59 am, Marek Florianczyk wrote:
> > > On Fri, 31-10-2003 at 15:23, Tom Lane wrote:
> > > > Marek Florianczyk <[EMAIL PROTECTED]>
On 31 Oct 2003, Marek Florianczyk wrote:
> Hi all
>
> We are building hosting with apache + php ( our own mod_virtual module )
> with about 10,000 virtual domains + PostgreSQL.
> PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
> scsi raid 1+0 )
Tom's right, you need more memory
I have a table like the following
create table test (
  id int8,
  lastupdate date,
  balance numeric(12, 2)
);
With an index:
create index ix_test on test (id, lastupdate);
This table currently has 6 million records. I have done a vacuum full
and reindex this morning. The file associated with this table is
Pavel Krivosheev <[EMAIL PROTECTED]> writes:
> I am having problems compiling Postgres 7.3.4 on a Sun Fire V120
> running Solaris 9.
Take out the inclusion of in fe-connect.c. This probably
should be back-patched into 7.3.5:
2003-06-23 13:03 mom
Recently I had a problem where a system crash scribbled on some directories,
which landed a bunch of files, including a few of the system table files for
one of my databases, in lost+found along with a zillion other files.
I might be able to find the file for this table/index in lost+found, but ho
Does xfs_freeze work on red hat 7.3?
Cynthia Leon
-Original Message-
From: Murthy Kambhampaty [mailto:[EMAIL PROTECTED]
Sent: Friday, October 17, 2003 11:34 AM
To: 'Tom Lane'; Murthy Kambhampaty
Cc: 'Jeff'; Josh Berkus; [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PRO
DBA's,
2Wire is looking for a DBA with PostgreSQL Exp...if you know of someone or
if you're looking for a new opportunity then please send your resume or call
so we can discuss the position. Please send your resume to [EMAIL PROTECTED]
Thank you,
Gregg Lynch
Sr. Contract Recruiter
2Wire Inc.
Direct
On Thu, Oct 30, 2003 at 10:28:10AM -0700, [EMAIL PROTECTED] wrote:
> Does xfs_freeze work on red hat 7.3?
It works on any kernel with XFS (it talks directly to XFS).
cheers.
--
Nathan
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
> What I did next is put a trigger on pg_attribute that should, in theory,
> on insert and update, fire up a function that will increment a version
System tables do not use the same process for row insertion / updates as
the rest of the system. Your trigger will rarely be fired.
Hi,
I have a test system that is set up the same as a production system and
would like to frequently copy the database over.
pg_dump takes a few hours and even sometimes hangs.
Are there any reasons not to simply just copy the entire data directory
over to the test system? I could not find any p
[EMAIL PROTECTED] (Jeff) writes:
> On Wed, 29 Oct 2003 11:53:38 -0500
> DHS Webmaster <[EMAIL PROTECTED]> wrote:
>
>> We vacuum our working database nightly. Although this is not a 'full',
>> we don't exclude any tables. We don't do anything with template1
>> (knowingly), so we do not perform any m
Tom Lane <[EMAIL PROTECTED]> writes:
> Returning to the original problem, it seems to me that comparing "pg_dump
> -s" output is a reasonable way to proceed.
I've actually started checking a pg_dump -s output file into my CVS tree.
However I prune a few key lines from it. I prune the TOC O
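Comparing two such schema dumps boils down to a text diff; a minimal sketch in Python, with made-up inline schemas standing in for the real `pg_dump -s` output files:

```python
import difflib

# Hypothetical "pg_dump -s" outputs for the two databases being compared.
schema_a = """\
CREATE TABLE users (
    id integer NOT NULL,
    name text
);
"""

schema_b = """\
CREATE TABLE users (
    id integer NOT NULL,
    name text,
    email text
);
"""

# unified_diff wants sequences of lines; lineterm="" keeps output clean
# when the inputs were split without trailing newlines.
diff = list(difflib.unified_diff(
    schema_a.splitlines(), schema_b.splitlines(),
    fromfile="production.sql", tofile="test.sql", lineterm=""))

print("\n".join(diff))
```

In practice the dump files would be read from disk, and volatile lines (TOC comments, ownership) pruned first, as described above.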
I am having problems compiling Postgres 7.3.4 on a Sun Fire V120
running Solaris 9.
./configure --with-java --with-openssl=/usr/local/ssl --enable-syslog --disable-shared --enable-locale --enable-multibyte
Configure goes OK, but when #gmake following error:
make[2]: Entering directory `~/sr
I am seeing strange behavior with my postmaster logs, it seems to be
writing out each line of a log entry 10 times. This seems to have
started recently without any know config changes.
I am running 7.3.2 on RedHat 7.3 i386. Below is a snippet from the logs.
Thanks in advance for any help.
Tha
We have a similar issue regarding security. Some of the access to our
database will be by ODBC connections for reporting purposes (i.e. Actuate
Report/Crystal Reports). Without creating a zillion or so views (which I
suspect carries with it a lot of overhead), I believe it would be tricky to
ma
Hi,
I think it's not going to be a trivial task to take out only the LOG
duration lines from a PostgreSQL logfile. We need to extract the duration
and the actual statement. I think I will put custom delimiters around the
statements for the time being so that the log parser can parse it
unambiguously
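For the simple case where each statement fits on one line, a first-cut parser might look like the sketch below. The regex assumes `log_min_duration_statement`-style lines and would need adjusting for your `log_line_prefix`; multi-line statements are exactly why custom delimiters help, and this sketch ignores them.

```python
import re

# Assumed line shape: "LOG:  duration: <ms> ms  statement: <sql>"
LOG_RE = re.compile(r"LOG:\s+duration:\s+([0-9.]+)\s+ms\s+statement:\s+(.*)")

# Inline sample standing in for a real logfile.
sample = """\
LOG:  duration: 12.45 ms  statement: SELECT * FROM users WHERE id = 1
LOG:  connection received: host=127.0.0.1 port=5432
LOG:  duration: 230.10 ms  statement: UPDATE accounts SET balance = 0
"""

entries = []
for line in sample.splitlines():
    m = LOG_RE.search(line)
    if m:  # non-duration lines (connections, errors) are skipped
        entries.append((float(m.group(1)), m.group(2)))

for duration_ms, statement in entries:
    print(f"{duration_ms:8.2f} ms  {statement}")
```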
On Fri, 31-10-2003 at 16:51, Mike Rylander wrote:
> On Friday 31 October 2003 09:59 am, Marek Florianczyk wrote:
> > On Fri, 31-10-2003 at 15:23, Tom Lane wrote:
> > > Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > > > We are building hosting with apache + php ( our own
On Friday 31 October 2003 09:59 am, Marek Florianczyk wrote:
> On Fri, 31-10-2003 at 15:23, Tom Lane wrote:
> > Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > > We are building hosting with apache + php ( our own mod_virtual module
> > > ) with about 10,000 virtual domains + PostgreSQ
On Fri, 31-10-2003 at 15:30, Matt Clark wrote:
> Hmm, maybe you need to back off a bit here on your expectations. You said your test
> involved 400 clients simultaneously running
> queries that hit pretty much all the data in each client's DB. Why would you expect
> that to be anyt
On Fri, 31-10-2003 at 15:23, Tom Lane wrote:
> Marek Florianczyk <[EMAIL PROTECTED]> writes:
> > We are building hosting with apache + php ( our own mod_virtual module )
> > with about 10,000 virtual domains + PostgreSQL.
> > PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz
Hmm, maybe you need to back off a bit here on your expectations. You said your test
involved 400 clients simultaneously running
queries that hit pretty much all the data in each client's DB. Why would you expect
that to be anything *other* than slow?
And does it reflect expected production use
Marek Florianczyk <[EMAIL PROTECTED]> writes:
> We are building hosting with apache + php ( our own mod_virtual module )
> with about 10,000 virtual domains + PostgreSQL.
> PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
> scsi raid 1+0 )
> I've made some tests - 3000 databases
On Fri, 31-10-2003 at 13:54, Jamie Lawrence wrote:
> On Fri, 31 Oct 2003, Matt Clark wrote:
>
> > I was more thinking that it might be possible to manage the security at a
> > different level than the DB.
> >
>
>
> We do this with users and permissions.
>
> Each virtual host has
On Fri, 31 Oct 2003, Matt Clark wrote:
> I was more thinking that it might be possible to manage the security at a different
> level than the DB.
>
We do this with users and permissions.
Each virtual host has an apache config include specifying a db user,
pass (and database, although most of
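The per-vhost include Jamie describes might look something like the fragment below (all names hypothetical; this assumes mod_php, with the PHP side reading the variables via getenv() and connecting with the per-client credentials):

```apache
<VirtualHost *:80>
    ServerName domain1.com
    DocumentRoot /var/www/domain1.com

    # Per-vhost database credentials exposed to PHP as environment
    # variables (names are illustrative, not a standard)
    SetEnv DB_NAME domain1db
    SetEnv DB_USER domain1user
    SetEnv DB_PASS secret
</VirtualHost>
```

Each client's PostgreSQL user is then granted rights only on their own database, so the isolation lives in the DB permission system rather than in application code.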
On Fri, 31-10-2003 at 13:06, Gaetano Mendola wrote:
> Marek Florianczyk wrote:
>
> > But my problem is that when I run the command:
> > psql -h 127.0.0.1 dbname dbuser
> > I'm waiting about 3-5 seconds to enter the psql monitor, so every new connection
> > from apache will wait about 3-5 seconds to
On Fri, 31-10-2003 at 13:33, Matt Clark wrote:
> > On Fri, 31-10-2003 at 12:25, Matt Clark wrote:
> > > Ooh, I see. That's a tricky one. Do you really need that level of separation?
> >
> > Well, if you talk with the clients, and they promise that they will not
> > acce
> On Fri, 31-10-2003 at 12:25, Matt Clark wrote:
> > Ooh, I see. That's a tricky one. Do you really need that level of separation?
>
> Well, if you talk with the clients, and they promise that they will not
> access other databases, and especially won't do "drop database
> my_be
On Fri, 31-10-2003 at 12:25, Matt Clark wrote:
> Ooh, I see. That's a tricky one. Do you really need that level of separation?
Well, if you talk with the clients, and they promise that they will not
access other databases, and especially won't do "drop database
my_bes_fried_db"
Marek Florianczyk wrote:
But my problem is that when I run the command:
psql -h 127.0.0.1 dbname dbuser
I'm waiting about 3-5 seconds to enter the psql monitor, so every new connection
from apache will wait about 3-5 seconds to put a query to the server. That's a very
long time...
Why don't you use a connection manager
Ooh, I see. That's a tricky one. Do you really need that level of separation?
> Because every virtual domain has its own database, username and
> password. So one client domain1.com with db: domain1db user: domain1user
> cannot access the second client's database domain2.com db: domain2db user:
>
On Fri, 31-10-2003 at 11:52, Matt Clark wrote:
> > I could make persistent connections, but with 10,000 clients it would kill
> > the server.
>
> But if they're virtual domains, why would you need one connection per domain? You
> should only need one connection per apache
> process...
> I could make persistent connections, but with 10,000 clients it would kill
> the server.
But if they're virtual domains, why would you need one connection per domain? You
should only need one connection per apache
process...
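The "one connection per apache process" idea generalizes to a small pool shared by many requests. A minimal sketch, with a dummy connection factory standing in for a real PostgreSQL connect call:

```python
import queue

class ConnectionPool:
    """Minimal pool sketch: at most max_size connections are ever
    opened; callers reuse them instead of reconnecting per request."""

    def __init__(self, connect, max_size=5):
        # 'connect' is a factory, e.g. a real driver's connect(dsn)
        self._pool = queue.Queue(max_size)
        for _ in range(max_size):
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()   # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)

# Demo with a dummy factory that just records how many "connections"
# were ever created.
opened = []
def dummy_connect():
    opened.append(object())
    return opened[-1]

pool = ConnectionPool(dummy_connect, max_size=2)
c = pool.acquire()
pool.release(c)
c = pool.acquire()   # reused; no third connection is opened
print(len(opened))
```

This sidesteps the per-connection startup cost discussed above, at the price of holding a fixed number of server backends open.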
Hi all
We are building hosting with apache + php ( our own mod_virtual module )
with about 10,000 virtual domains + PostgreSQL.
PostgreSQL is on a different machine ( 2 x intel xeon 2.4GHz 1GB RAM
scsi raid 1+0 )
I've made some tests - 3000 databases and 400 clients connected at the same
time. These