I believe that the biggest cause of data loss from people using the
'cheap' drives is that one 'cheap' drive holds the
capacity of 5 or so 'expensive' drives, and since people don't
realize this, they don't realize that the time to rebuild the failed
drive onto a hot-spare is
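The rebuild-time point above can be made concrete with some back-of-the-envelope arithmetic. The capacities and the sustained rebuild rate below are illustrative assumptions for 2007-era hardware, not figures from the thread:

```python
# Rough rebuild-time comparison: a large "cheap" SATA drive vs. a
# smaller "expensive" SCSI drive, assuming the array rebuilds the
# hot-spare at a sustained 50 MB/s (an assumed figure).

def rebuild_hours(capacity_gb, rate_mb_per_s=50):
    """Hours to copy the full capacity onto a hot-spare."""
    seconds = capacity_gb * 1000 / rate_mb_per_s
    return seconds / 3600

small = rebuild_hours(146)   # e.g. a 146 GB SCSI drive
large = rebuild_hours(750)   # e.g. a 750 GB SATA drive

print(f"146 GB rebuild: {small:.1f} h")
print(f"750 GB rebuild: {large:.1f} h")
# The larger drive takes roughly 5x as long to rebuild, widening the
# window in which a second failure can cause data loss.
```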
On Sat, 7 Apr 2007, Ron wrote:
Ron, why is it that you assume that anyone who disagrees with you doesn't
work in an environment where they care about the datacenter environment,
and isn't in fields like financial services? And why do you think that we
are just trying to save a few pennies? (t
At 05:42 PM 4/7/2007, [EMAIL PROTECTED] wrote:
On Sat, 7 Apr 2007, Ron wrote:
The reality is that all modern HDs are so good that it's actually
quite rare for someone to suffer a data loss event. The
consequences of such are so severe that the event stands out more
than just the statistics would imply.
On Sat, 7 Apr 2007, Ron wrote:
The reality is that all modern HDs are so good that it's actually quite rare
for someone to suffer a data loss event. The consequences of such are so
severe that the event stands out more than just the statistics would imply.
For those using small numbers of HDs
On 7-4-2007 18:24 Tilo Buschmann wrote:
Unfortunately, the query above will definitely not work correctly, if
someone searches for "a" or "the".
Those are two words you may want to consider not searching on at all.
As Tom said, it's not very likely to be fixed in PostgreSQL. But you can
always
Tilo Buschmann <[EMAIL PROTECTED]> writes:
>> Arjen van der Meijden <[EMAIL PROTECTED]> writes:
>>> SELECT ... FROM cd
>>> JOIN tracks ...
>>> WHERE cd.id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t
>>> WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30)
>>> as foo
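A self-contained way to experiment with the shape of this query is to translate it to SQLite, with `LIKE` standing in for the PostgreSQL full-text `@@ plainto_tsquery` match and toy data in place of the real `cd`/`tracks` tables (table and column names beyond those quoted in the thread are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cd (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tracks (id INTEGER PRIMARY KEY,
                         cd_id INTEGER REFERENCES cd(id),
                         tstitle TEXT);
    INSERT INTO cd VALUES (1, 'Album A'), (2, 'Album B'), (3, 'Album C');
    INSERT INTO tracks VALUES
        (1, 1, 'education reform'),
        (2, 1, 'higher education'),
        (3, 2, 'education song'),
        (4, 3, 'unrelated');
""")

# The inner query grabs a bounded number of matching track rows, and
# the outer SELECT DISTINCT collapses duplicate cd_ids, mirroring the
# structure of the query quoted above.
rows = conn.execute("""
    SELECT cd.id, cd.title FROM cd
    JOIN tracks ON tracks.cd_id = cd.id
    WHERE cd.id IN (SELECT DISTINCT cd_id FROM
                      (SELECT t.cd_id FROM tracks t
                       WHERE t.tstitle LIKE '%education%' LIMIT 30) AS foo)
""").fetchall()
print(rows)
```

On the real tables the inner `LIMIT` bounds how many rows the full-text scan must produce before the distinct cd ids are collected, which is the point of nesting it inside the `DISTINCT` rather than after it.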
Hi everyone,
On Sat, 07 Apr 2007 11:54:08 -0400
Tom Lane <[EMAIL PROTECTED]> wrote:
> Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> > If that is your main culprit, you could also use two limits based on the
> > fact that there will be at most X songs per cd which would match your
> title
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> If that is your main culprit, you could also use two limits based on the
> fact that there will be at most X songs per cd which would match your
> title (my not very educated guess is 3x). Its a bit ugly... but if that
> is what it takes to make
Given all the data I have personally + all that I have from NOC
personnel, Sys Admins, Network Engineers, Operations Managers, etc my
experience (I do systems architecture consulting that requires me to
interface with many of these on a regular basis) supports a variation
of hypothesis 2. Let's
Can't you use something like this? Or is the distinct on the t.cd_id
still causing the major slowdown here?
SELECT ... FROM cd
JOIN tracks ...
WHERE cd.id IN (SELECT DISTINCT t.cd_id FROM tracks t
WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 10)
If that is your main culprit
Hello,
I am trying to build a application to search CDs and their tracks and I
am experiencing some performance difficulties.
The database is very simple at the moment, two tables "cd" and "tracks"
contain the CD-information and their respective tracks. A column
"cd_id" in public.tracks is the fo