Re: [ADMIN] large table support 32,000,000 rows

2002-03-25 Thread Zhang, Anna
I have multiple tables with over 10,000,000 rows; the biggest one is 70,000,000 rows. For each table there are several indexes, and almost all columns are varchar2. In my experience, with many indexes on a large table, data insertion is a pain. In my case, I have 30,000 rows to be

[ADMIN] seq scan on indexed column

2002-03-14 Thread Zhang, Anna
Hi, I always have questions on SQL tuning, here is one: gtld_analysis=# \d gtld_owner Table gtld_owner Attribute| Type | Modifier ++ owner_name | character varying(100) | netblock_start

FW: [ADMIN] seq scan on indexed column

2002-03-14 Thread Zhang, Anna
Anna Zhang -Original Message- From: Zhang, Anna [mailto:[EMAIL PROTECTED]] Sent: Thursday, March 14, 2002 3:34 PM To: '[EMAIL PROTECTED]' Subject: [ADMIN] seq scan on indexed column Hi, I always have questions on SQL tuning, here is one: gtld_analysis=# \d gtld_owner

Re: [ADMIN] seq scan on indexed column

2002-03-14 Thread Zhang, Anna
, Anna Cc: '[EMAIL PROTECTED]' Subject: Re: [ADMIN] seq scan on indexed column On Thu, 14 Mar 2002, Zhang, Anna wrote: gtld_analysis=# explain SELECT NETBLOCK_START gtld_analysis-# FROM GTLD_OWNER gtld_analysis-# WHERE NETBLOCK_START = -2147483648; You might want to try the same query
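The reply's suggestion can be sketched as follows. On 7.x the planner often refused an index scan when the literal's type did not exactly match the indexed column's type, so quoting or casting the constant was the usual workaround (query shape taken from the thread; the exact cast target is an assumption):

```sql
-- Quote the constant so the parser resolves its type against the column:
EXPLAIN SELECT netblock_start
FROM gtld_owner
WHERE netblock_start = '-2147483648';

-- Or cast explicitly (int8 here is an assumption about the column type):
EXPLAIN SELECT netblock_start
FROM gtld_owner
WHERE netblock_start = -2147483648::int8;
```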

[ADMIN] vacuum error

2002-03-08 Thread Zhang, Anna
Hi, I got an error when vacuuming the database, see below: $vacuumdb xdap ERROR: RelationBuildTriggers: 2 record(s) not found for rel domain. I deleted triggers that referenced domain before the vacuum; is this the cause? How can I fix it? Or does it matter, is it OK to ignore such an error? Thanks in

Re: [ADMIN] vacuum error

2002-03-08 Thread Zhang, Anna
Thanks Stephan! Your suggestion works. I just wonder: if dropping triggers caused such a problem, this might be a Postgres bug. Anna Zhang -Original Message- From: Stephan Szabo [mailto:[EMAIL PROTECTED]] Sent: Friday, March 08, 2002 1:02 PM To: Zhang, Anna Cc: '[EMAIL PROTECTED
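The suggestion that worked is not quoted in this snippet, but the commonly posted repair for this 7.x error was to resync the trigger count cached in pg_class after triggers were removed by hand. A sketch, on the assumption that this is the fix referred to:

```sql
-- Resync pg_class.reltriggers with the actual rows in pg_trigger
-- for the affected relation (here, "domain" from the error message):
UPDATE pg_class
SET reltriggers = (SELECT count(*)
                   FROM pg_trigger
                   WHERE pg_trigger.tgrelid = pg_class.oid)
WHERE relname = 'domain';
```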

[ADMIN] fk constraint can't be dropped

2002-03-06 Thread Zhang, Anna
Hi, I created a foreign key constraint on table referral like this: alter table referral add constraint fk_referral foreign key (handle) references domain (handle); alter table referral drop constraint fk_referral restrict; ERROR: ALTER TABLE / DROP CONSTRAINT: fk_referral does not
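One likely explanation, sketched below: on 7.x, foreign key constraints are implemented as referential-integrity triggers rather than as catalog constraint entries, so ALTER TABLE ... DROP CONSTRAINT cannot find them and they must be dropped as triggers. The trigger name shown is illustrative, not from the thread:

```sql
-- Find the RI triggers attached to the referencing table:
SELECT tgname
FROM pg_trigger
WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = 'referral');

-- Then drop them by name (this name is a made-up example):
DROP TRIGGER "RI_ConstraintTrigger_12345" ON referral;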

[ADMIN] pgaccess of postgres 7.2

2002-03-01 Thread Zhang, Anna
Hi, I installed postgres 7.2 to the default location. I can't find pgaccess. Does this utility have to be installed separately, or did I install it wrong? Does anyone know? For my postgres 7.1.3 database, pgaccess works fine.

[ADMIN] optimizer

2002-02-27 Thread Zhang, Anna
Hi, I have a table named Domain that has 14M rows, here is the definition: xdap_regr=# \d domain Table domain Attribute | Type | Modifier -+---+-- domainhandle| text |not null domainname | text |not null parentdomain| text |

Re: [ADMIN] optimizer

2002-02-27 Thread Zhang, Anna
Is the estimate above (1.5M rows) reasonable? If so, it's probably doing the right thing. If not, what version are you using and are there any very common values that may throw off the estimates; what does select * from pg_statistic where starelid=(select oid from pg_class where
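The reply's diagnostic steps can be sketched like this (table and column names taken from the question upthread; the exact queries are illustrative of the 7.x approach, not quoted from the reply):

```sql
-- Compare the planner's row estimate against reality:
EXPLAIN SELECT * FROM domain WHERE parentdomain = 'example.com';

-- Inspect the statistics the planner is working from:
SELECT *
FROM pg_statistic
WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'domain');

-- If the stats are stale or too coarse, refresh them:
ANALYZE domain;
```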

[ADMIN] shared_buffers and effective_cache_size

2002-02-22 Thread Zhang, Anna
Hi, I just got a new Penguin machine with 2 processors and 2G RAM. After installing postgres 7.2, I started modifying postgresql.conf to get maximum performance. I read the Admin Guide and Bruce's article, PostgreSQL Hardware Performance Tuning; in my understanding, shared_buffers is similar to Oracle's
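A minimal postgresql.conf sketch for a machine of this size. The values are illustrative starting points under the usual 7.2-era advice (shared_buffers a modest slice of RAM, effective_cache_size roughly the OS cache size), not settings taken from the thread:

```
# Both settings are in 8 kB pages on 7.2.
shared_buffers = 32768          # ~256 MB of dedicated buffer cache
effective_cache_size = 131072   # ~1 GB; a planner hint, allocates nothing
```

Note that raising shared_buffers this far also requires raising the kernel's SHMMAX limit.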

Re: [ADMIN] postgresql 7.1.3

2002-02-05 Thread Zhang, Anna
try: $tar xzpf postgresql-7.1.3.tar.gz Anna Zhang -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] Sent: Tuesday, February 05, 2002 12:51 PM To: [EMAIL PROTECTED] Subject: [ADMIN] postgresql 7.1.3 Hi, I have postgresql 7.1.2 on the system now (SunOS 5.6). I downloaded
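The flags in that one-liner: x extracts, z runs the archive through gunzip, p preserves file permissions, f names the archive. A self-contained demonstration with a throwaway archive (file names here are made up for the demo):

```shell
# Build a small gzipped tarball, then extract it with the same flags
# suggested for the postgres source tarball.
mkdir -p demo && echo hello > demo/README
tar czpf demo.tar.gz demo     # c=create, z=gzip, p=permissions, f=file
rm -rf demo                   # remove the original directory
tar xzpf demo.tar.gz          # x=extract; restores demo/README
cat demo/README
```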

[ADMIN] tuning SQL

2002-01-29 Thread Zhang, Anna
Hi, I am running a query on postgres 7.1.3 at Red Hat 7.2 (2 CPUs, 1.5G RAM, 2 drive disk array). select count(*) from contact a, contact_discard b where a.contacthandle b.contacthandle; Table contact has over 9 million rows, contact_discard has around 259,000 rows, both tables define

Re: [ADMIN] tuning SQL

2002-01-29 Thread Zhang, Anna
it takes only a few minues. Thanks! Anna Zhang -Original Message- From: Ross J. Reedstrom [mailto:[EMAIL PROTECTED]] Sent: Tuesday, January 29, 2002 11:39 AM To: Zhang, Anna Cc: [EMAIL PROTECTED] Subject: Re: [ADMIN] tuning SQL On Tue, Jan 29, 2002 at 10:57:01AM -0500, Zhang, Anna wrote
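The rewrite that made the query fast is not quoted in this snippet, and the comparison operator in the original query was lost in the archive. One common rewrite pattern for this kind of two-table count on 7.x, purely as a hedged sketch of the general technique:

```sql
-- Count contacts that have a matching row in contact_discard,
-- letting the planner use an index on contacthandle:
SELECT count(*)
FROM contact a
WHERE EXISTS (SELECT 1
              FROM contact_discard b
              WHERE b.contacthandle = a.contacthandle);
```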

Re: [ADMIN] Increasing Shared Memory - on MacOS X

2002-01-25 Thread Zhang, Anna
see docs: http://www.postgresql.org/idocs/ under Administrator's Guide, section 3.5 Managing Kernel Resources. Anna Zhang -Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED]] Sent: Thursday, January 24, 2002 9:06 PM To: Chris Ruprecht Cc: PostGreSQL Admin Group Subject: Re:

Re: [ADMIN] ERROR: cannot read block

2002-01-24 Thread Zhang, Anna
I think you have experienced data/index block corruption. First drop the indexes (if you have any) and recreate them. If only an index is corrupted, you are lucky; this will fix your problem. But if that doesn't work, it means a data block is corrupted, and you may need to recover your database from a backup. Above
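The drop-and-recreate step can be sketched as below. Table and index names are illustrative; the actual index definition must match whatever the corrupted index was:

```sql
-- Rebuild a suspect index by hand:
DROP INDEX contact_handle_idx;
CREATE INDEX contact_handle_idx ON contact (contacthandle);

-- 7.x also offers REINDEX to rebuild every index on a table:
REINDEX TABLE contact;
```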

Re: [ADMIN] Increasing Shared Memory - on MacOS X

2002-01-24 Thread Zhang, Anna
You need to increase SHMMAX. For Solaris, it's in the /etc/system file; on Linux, /etc/sysctl.conf. You should know the corresponding file for your system, or check with your SA. Anna Zhang -Original Message- From: Chris Ruprecht [mailto:[EMAIL PROTECTED]] Sent: Thursday, January 24,
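The Linux version of the change can be sketched as below; the 128 MB value is an illustrative example, not a figure from the thread, and applying it requires root:

```shell
# /etc/sysctl.conf: raise the max shared memory segment size to 128 MB
kernel.shmmax = 134217728

# then reload kernel parameters without a reboot:
#   sysctl -p
# On Solaris, the equivalent line goes in /etc/system:
#   set shmsys:shminfo_shmmax=134217728
```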

[ADMIN] timing a process

2002-01-23 Thread Zhang, Anna
Hi All, I'd like to know how to time a process. In Oracle sqlplus, we can SET TIMING ON, then execute a statement, so we know how long the process takes. Is there such a feature in psql? Anna Zhang
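psql does have a close equivalent: the \timing meta-command toggles per-statement elapsed-time reporting (available as of 7.2; the query below is just an example):

```
xdap=# \timing
Timing is on.
xdap=# SELECT count(*) FROM domain;
...
Time: ...
```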

[ADMIN] copy command

2002-01-03 Thread Zhang, Anna
I have a problem loading data into a postgres database using the copy command. The problem is that we have one column called address which is multi-line text; the text file looks like this: aab770|awkc.com administration|sultan 23 Bogota, na0|CO The above shows one record with '|' as the delimiter. Column
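COPY expects exactly one record per input line, so a literal newline inside the address column splits the record. The usual fix is to escape embedded newlines as \n in the data file before loading. A sketch, with made-up table and file names:

```sql
-- Input line with the newline escaped (one physical line per record):
--   aab770|awkc.com administration|sultan 23\nBogota, na0|CO

-- 7.x COPY syntax with a pipe delimiter:
COPY contact_addr FROM '/tmp/addresses.txt' USING DELIMITERS '|';
```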