Then do I still need to provide the autovacuum options in the
postgresql.conf file, or are these options automatically taken care of?
Thanks,
Sumeet.
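In 8.1 the integrated autovacuum is off by default and still needs the stats collector, so the options do have to be set by hand. A minimal sketch of the relevant postgresql.conf lines (values shown are the 8.1 defaults, not tuned recommendations):

```
# postgresql.conf -- integrated autovacuum in 8.1
stats_start_collector = on          # autovacuum requires the stats collector
stats_row_level = on                # ...with row-level stats enabled
autovacuum = on                     # off by default in 8.1
autovacuum_naptime = 60             # seconds between autovacuum runs
autovacuum_vacuum_threshold = 1000  # min updated/deleted tuples before vacuum
autovacuum_vacuum_scale_factor = 0.4
```

After editing, the stats settings need a server restart; the autovacuum GUCs themselves can be reloaded.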
On 4/5/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Sumeet wrote:
> Hi all is there a way i can find if the pg_autovacuum
Hi all, is there a way I can find out if the pg_autovacuum module is installed on
my server?
The Postgres version I'm using is 8.1.4.
I tried searching the contrib modules and didn't see the pg_autovacuum
directory there.
I checked m source
Thanks,
Sumeet
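In 8.1, pg_autovacuum is no longer a contrib module — autovacuum was integrated into the server — so rather than looking in contrib/, you can just ask the server from psql (a small sketch):

```sql
-- If this works, the integrated autovacuum exists in this server
SHOW autovacuum;
-- List every autovacuum-related setting and its current value
SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';
```

A server that still shipped the old contrib daemon (8.0 and earlier) would error on `SHOW autovacuum` instead.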
n ma (cost=0.00..2181956.92 rows=48235392 width=76)
(actual time=21985.285..22204.308 rows=10 loops=1)
Total runtime: 22204.476 ms
(3 rows)
--
Thanks,
Sumeet
'default',coalesce(name_first,'') ||' '||
coalesce(name_last,''));
Thanks,
Sumeet
The only info I have is
Apple xRaid drive array with 14 400GB drives for a total of 5 TB storage
I have around 10-15 indexes for each table. Does the number of indexes slow
down the vacuum process?
The indexes are compound indexes on multiple fields.
-Sumeet
On 3/13/07, Andrej Ricnik-Bay <[EM
I'm running postgres on a server with 32 GB of RAM
and 8 processors.
Thanks,
Sumeet.
alyze the time for update queries,
but what I did was an EXPLAIN ANALYZE:
$ explain analyze select to_tsvector(article_title) from master_table limit
1000;
The total runtime was approx. 500 ms.
The server is SunOS 5.10, with around 8 GB of RAM and 6 CPUs.
Thanks,
Sumeet.
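That sample timing can be extrapolated to the full table as a rough sanity check (a back-of-envelope sketch that assumes linear scaling and ignores index maintenance and MVCC dead-tuple overhead, so the real UPDATE will be slower):

```python
# Extrapolate 500 ms per 1000 rows to the full 13,149,741-row table.
rows_total = 13_149_741
ms_per_1000_rows = 500

est_seconds = rows_total / 1000 * ms_per_1000_rows / 1000
print(f"{est_seconds:.0f} s, about {est_seconds / 3600:.1f} h")
```

So the to_tsvector() calls alone account for under two hours; anything much beyond that is coming from the write side of the UPDATE, not the function.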
On 3/7/07, Oleg Bar
Hi All,
I'm trying to update a table containing 13149741 records, and it's taking
forever to complete this process.
The update query I'm trying to run is for full text indexing, similar to
UPDATE tblMessages SET idxFTI=to_tsvector(strMessage);
Below are some of the stats which might be helpful
Thanks buddy, really appreciate your help on this.
Problem solved...
Is there any way this query can be optimized? I'm running it on a huge
table with joins.
- Sumeet
On 2/23/07, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
On 2/24/07, Sumeet <[EMAIL PROTECTED]>
there any other better way of
solving this problem.
Thanks,
Sumeet
lains more about query execution plans, i.e. each of the parameters like bitmap heap scan, index scan, etc.
--
Thanks,
Sumeet
orary tables??? (And the same question as above for temp. tables.)
3) If there are multiple users who connect to my website at the same time and create temporary views or tables with the same name, would there be any overlapping or such???
--
Thanks,
Sumeet.
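On the last point: each session gets its own temporary schema, so same-named temp tables created by concurrent connections do not collide. A small sketch (`session_results` is an illustrative name):

```sql
-- Session A and session B can both run this at the same time;
-- each sees only its own copy, and it is dropped automatically
-- when the session disconnects.
CREATE TEMP TABLE session_results (id integer, score real);
INSERT INTO session_results VALUES (1, 0.5);
SELECT * FROM session_results;  -- only this session's rows
```

The same per-session isolation applies to temporary views.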
",") ){ substr($_,$i,1)="~"; } } }
# this replaces any quotes
s/"//g;
print "$_\n";
}

cat data_file | perl scriptname.pl > outputfile.dat

and when I run the copy command I get messages like "data missing for xyz column". Any possible hints...
--
Thanks,
Sumeet
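"missing data for column" from COPY usually means a line had fewer fields than the table expects, often because COPY's default delimiter is a tab rather than a comma. If the file is comma-separated with quoted fields, COPY's CSV mode (available since 8.0) can handle the quotes itself instead of pre-stripping them with perl. A sketch with illustrative table and path names:

```sql
-- CSV mode parses quoted fields and embedded commas directly
COPY mytable FROM '/path/to/data_file'
  WITH DELIMITER ',' CSV QUOTE '"';
```

That removes the need for the quote-replacing perl pass entirely, as long as the file is well-formed CSV.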
n't forget to increase your free space map settings
--
Thanks,
Sumeet Ambre
Masters of Information Science Candidate,
Indiana University.
On 8/23/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
I've been trying to figure out how to do the following:
Select schedule.* from sch
uthors user enters into an interface against the 20 million records in my db. Can anyone suggest a good way to perform this kind of search?
Thanks,
Sumeet.
Thanks Michael. I don't have dblink installed; I need to find good documentation which will help me do this, and also the place where I can download this module.
Thanks,
Sumeet.
On 8/17/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
On Thu, Aug 17, 2006 at 04:37:03PM -0400, Sumeet wrote:> Im
Hi All,
I'm trying to find documentation for the postgres module named "dblink"; can anyone point me to it? What I want to do is join multiple databases instead of multiple tables.
Thanks,
Sumeet.
l make sure that the next value your sequence generates is
greater than any key that already exists in the table.
>> taken from Tom Lane.
--
Thanks,
Sumeet.
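Concretely, that fix is a setval() call against the max of the key column (a sketch with hypothetical table and sequence names):

```sql
-- Bump the sequence past any existing id so the next nextval() is safe
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));
```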
On 8/15/06, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
On Tue, Aug 15, 2006 at 10:11:41AM -0400, Sumeet Ambre wrote:
> The Design of the database is because our organization wants to split up
> different datasets into different entities, and there might be a
> possibi
Andrew Sullivan wrote:
On Mon, Aug 14, 2006 at 05:26:10PM -0400, Sumeet Ambre wrote:
Hi All,
I have a database which consists of 20 million records and I've split up
the db into 6-7 dbs.
You can do this (as someone suggested with dblink), but I'm wondering
why the
Hi All,
I have a database which consists of 20 million records and I've split up
the db into 6-7 dbs. I have a base database which consists of
the ids which link all the databases. I'm performing the search on this
single base table. After searching I get some ids, which are ids in the other
databases
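Given the ids returned by the search on the base table, the matching rows can then be pulled from one of the other databases with dblink, as suggested. An illustrative sketch (assumes dblink is installed; database, table, and column names are placeholders):

```sql
-- Fetch the full rows for the ids the base-table search returned
SELECT r.*
  FROM dblink('dbname=dataset2',
              'SELECT id, title, body FROM records WHERE id IN (101,205,317)')
       AS r(id integer, title text, body text);
```

The id list would normally be built dynamically by the application from the base-table search results.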