Hi,
I am becoming more and more convinced that in order to achieve the required performance and scalability I need to split my data amongst many backend machines.
Ideally I would start with about 10 machines and have 1/10th of the data on each. As the data set grows I would then buy
After applying the patches supplied so far and also trying the latest stable tar.gz for tsearch2 (downloaded 24th of September),
I am still experiencing the same issue as previously described:
I try to do a
SELECT to_tsvector( 'default', 'some text' )
The backend crashes.
SELECT
I am running a SELECT to get all tuples within a given date range. This query is much slower than I expected - am I missing something?
I have a table 'meta' with a column 'in_date' of type timestamp(0); I am trying to select all records within a given date range. I have an index on 'in_date'
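For reference, a minimal sketch of the kind of schema and query described above (the index name and date bounds here are made up for illustration):

```sql
-- hypothetical index on the timestamp column described above
CREATE INDEX meta_in_date_idx ON meta (in_date);

-- a half-open range (>= / <) is the form the planner can turn
-- into an index range scan; the bounds are examples only
SELECT *
FROM meta
WHERE in_date >= '2003-07-01'
  AND in_date <  '2003-08-01';
```

Running the query under EXPLAIN shows whether the index is actually being used or whether the planner has fallen back to a sequential scan.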
Does PostgreSQL have an existing solution for distributing the database amongst several servers, so that each one holds a subset of the data and queries are passed to each one and then collated by a master server?
I have heard erServer mentioned but I got the impression this might just be for
Tom Lane writes:
That has nothing whatever to do with how much memory the kernel will let any one process have. Check what ulimit settings the postmaster is running under (particularly -d, -m, -v).
The ulimit settings you asked about look OK (others included for info):
ulimit -d, -m, -v :
From: Tom Lane [EMAIL PROTECTED]
[EMAIL PROTECTED] writes:
How do i get the core files to examine? There never seem to be any
produced, even outside the debuggers.
Most likely you have launched the postmaster under ulimit -c 0,
which
prevents core dumps. This seems to be the
[EMAIL PROTECTED] writes:
I have set ulimit -c unlimited as you suggested,
I then copied postmaster to /home/postgres
and ran it as postgres from there...
but still no core files. Where should they appear?
In $PGDATA/base/yourdbnumber/core (under some OSes the file name might be core
First - apologies for my earlier confusion about why there was only one core file; I now have a post-it note saying that ulimit gets reset at reboot (I assume that's what happened).
So please find below a potentially more useful core file gdb output:
Core was generated by `postgres: mat
I have been trying to find out more about the postmaster crashing, but things seem to be getting stranger! I am experiencing problems running postmaster in gdb too (see end of message).
I will put all the information in this posting for completeness; apologies for the duplicated sections.
I am
Hi, I am having problems manipulating bit strings.
CREATE TABLE lookup(
fname TEXT PRIMARY KEY,
digest BIT VARYING
);
I am trying to construct another bit string based on the length of the first:
SELECT b'1'::bit( bit_length( digest ) ) FROM lookup;
This doesn't work as I had hoped, where am
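One likely reason the cast above fails is that the length in a bit(n) cast must be a constant, not a per-row expression. A possible workaround, assuming the goal is a string of 1s as long as each row's digest, is to take a substring of a sufficiently long bit literal (this is a sketch; the literal must be at least as long as the longest digest):

```sql
-- bit(n) needs a constant n, so b'1'::bit(bit_length(digest)) is rejected;
-- substring on a bit string does accept a variable length
SELECT substring(B'11111111111111111111111111111111'
                 FROM 1 FOR bit_length(digest))
FROM lookup;
```

substring here operates directly on the bit string type, so the result is BIT VARYING of the per-row length.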
On Thu, 7 Aug 2003 [EMAIL PROTECTED] wrote:
Part1.
I have created a dictionary called 'webwords' which checks all words and curtails them to 300 chars (for now)
after running
make
make install
I then copied the lib_webwords.so into my $libdir
I have run
psql mydb
I am trying to use the fti module to search my text.
Searching through the raw text using ILIKE takes 3 seconds; searching using fti takes 212 seconds.
Then I tried turning off sequential scans to see what happens, but the planner still does a seq scan.
Why does the planner not use the index?
Are there any
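For what it's worth, a minimal way to check whether the planner can use an index at all is to disable sequential scans for the session and look at the plan (the setting is enable_seqscan; the table and column names below are hypothetical stand-ins for the fti lookup table):

```sql
-- force the planner away from seq scans for this session only
SET enable_seqscan TO off;

-- hypothetical fti lookup table and column names, for illustration
EXPLAIN
SELECT * FROM article_fti WHERE string = 'someword';
```

If the plan still shows a sequential scan with the setting off, the planner considers the index unusable for that query (e.g. a type or operator mismatch), which is a different problem from it merely estimating the seq scan to be cheaper.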
I am trying to setup tsearch2 on postgresql 7.3.4 on a Redhat9 system,
installed from rpms.
Some files required for the tsearch installation seemed to be missing, so I downloaded the src bundle too.
Tsearch2 then compiled ok but now the command:
psql mydb tsearch2.sql
fails with a
Bad form to reply to my own posting I know, but -
I notice that the integer dictionary can accept MAXLEN for the longest number that is considered a valid integer. Can I set MAXLEN for the en dictionary to be the longest word I want indexed?
I think I'd need to create a new dictionary...?
Below is the EXPLAIN ANALYZE output of a typical current query.
I have just begun looking at tsearch2 to index the header and body
fields.
I have also been using 'atop' to see I/O stats on the disk; I am now pretty sure that's where the current bottleneck is. As soon as a query is launched the
I am looking at ways to speed up queries; the most common way for queries to be constrained is by date range. I have indexed the date column. Queries are still slower than I would like.
Would there be any performance increase for these types of queries if the tables were split by month as
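One way to split a table by month on PostgreSQL of this vintage is table inheritance. Note that the planner of this era does not prune child tables automatically, so queries must either target the right month's table directly or go through the parent and scan everything. A sketch with hypothetical table names:

```sql
-- child tables inherit all columns of meta; one table per month
CREATE TABLE meta_2003_07 () INHERITS (meta);
CREATE TABLE meta_2003_08 () INHERITS (meta);

-- querying one child touches only that month's data
SELECT * FROM meta_2003_07 WHERE in_date >= '2003-07-10';

-- querying the parent still visits every child table
SELECT count(*) FROM meta;
```

The win comes from the application knowing which month(s) a query needs and addressing those tables directly, keeping each index and table small.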
Ron, thank you for your comments; sorry for the slow response - I actually replied to you on Saturday but I think the list was having trouble again?!
Your questions are answered below...
On Fri, 2003-07-25 at 07:42, [EMAIL PROTECTED] wrote:
As mentioned previously I have a large text database
As mentioned previously I have a large text database with upwards of
40GB of data and 8 million tuples.
The time has come to buy some real hardware for it.
Having read around the subject online I see the general idea is to get
as much memory and the fastest I/O possible.
The budget for the
Hi,
I'm having trouble with libpq.so.2.
Specifically:
Can't load '/usr/lib/perl5/site_perl/5.8.0/i386-linux-thread-multi/auto/Pg/Pg.so' for module Pg: libpq.so.2: cannot open shared object file: No such file or directory at /usr/lib/perl5/5.8.0/i386-linux-thread-multi/DynaLoader.pm line 229.
Ok - discovered the solution in pgsql-php, repeated below for reference:
From: Peter De Muer (Work) [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: 7.3.1 update gives PHP libpq.so.2 problem
Date: Tue, 4 Feb 2003 14:06:04 +0100
try making a soft link
Apologies if this is a repost - I tried sending it yesterday and haven't seen it in the forum yet.
I am currently writing a perl script to convert the string a user supplies to a search engine into SQL. The user supplies a string in the same format as google uses - e.g. cat -dog finds records
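For illustration, here is the sort of WHERE clause such a script might emit for the input `cat -dog` (the table name documents and column name body are made-up examples, not the poster's actual schema or output):

```sql
-- 'cat' is a required term, '-dog' an excluded one;
-- ILIKE gives case-insensitive matching
SELECT *
FROM documents
WHERE body ILIKE '%cat%'
  AND body NOT ILIKE '%dog%';
```

Each bare word becomes an ILIKE condition and each minus-prefixed word a NOT ILIKE condition, all joined with AND.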