Gregory Stark <[EMAIL PROTECTED]> writes:
> It may be more right in an abstract ideal world -- the reality is that text
> collation is annoyingly complex. But this may be a case where we can get away
> with just eliding this hassle.
If anyone actually complains about it, I think we can point to th
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> "Tom Lane" <[EMAIL PROTECTED]> writes:
>>> How so? If you think this change is a bad idea you'd better speak up
>>> PDQ.
>
>> Well I think it's fine for 'foo ' != 'foo' even if they sort similarly.
>
>> But I'm
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> How so? If you think this change is a bad idea you'd better speak up
>> PDQ.
> Well I think it's fine for 'foo ' != 'foo' even if they sort similarly.
> But I'm not sure it makes sense for <'foo ','a'> to sort
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> "Tom Lane" <[EMAIL PROTECTED]> writes:
>>> I'm intending to get rid of ~=~ and ~<>~ for 8.4; there's no longer any
>>> reason why those slots in the pattern_ops classes can't be filled by the
>>> plain = and <> operators.
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> I'm intending to get rid of ~=~ and ~<>~ for 8.4; there's no longer any
>> reason why those slots in the pattern_ops classes can't be filled by the
>> plain = and <> operators. (There *was* a reason when they wer
"Tom Lane" <[EMAIL PROTECTED]> writes:
> Gregory Stark <[EMAIL PROTECTED]> writes:
>> Hm, for a simple = or <> I think it doesn't matter which operator class you
>> use. For < or > it would produce different answers. Postgres isn't clever enough
>> to notice that this is equivalent though so I
Gregory Stark <[EMAIL PROTECTED]> writes:
> Hm, for a simple = or <> I think it doesn't matter which operator class you
> use. For < or > it would produce different answers. Postgres isn't clever enough
> to notice that this is equivalent though so I think you would have to do
> something like (
"Kaare Rasmussen" <[EMAIL PROTECTED]> writes:
> Hi
>
> The database is initialized with utf8, so in order for LIKE to use the index
> on a text field, I used text_pattern_ops when I created it. So far so good.
>
> It's in the documentation, but there's no explanation of why this index will
> only work for LIKE searches.
Hi
The database is initialized with utf8, so in order for LIKE to use the index
on a text field, I used text_pattern_ops when I created it. So far so good.
It's in the documentation, but there's no explanation of why this index will
only work for LIKE searches. How come that I have to have tw
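The two-index situation Kaare is describing can be sketched like this (table and column names are illustrative, not from the thread):

```sql
-- Hypothetical table for illustration.
CREATE TABLE docs (title text);

-- A default btree index supports <, <=, =, >=, > and ORDER BY under
-- the database's collation, but the planner will not use it for
-- LIKE 'foo%' in a non-C locale such as a UTF-8 database.
CREATE INDEX docs_title_idx ON docs (title);

-- A text_pattern_ops index compares strings byte by byte, so the
-- planner can use it for left-anchored LIKE patterns.
CREATE INDEX docs_title_like_idx ON docs (title text_pattern_ops);

-- Can use docs_title_like_idx:
SELECT * FROM docs WHERE title LIKE 'abc%';
```

Hence the two indexes: one for ordinary comparison and sorting, one for pattern matching, because in a non-C locale those are two different sort orders.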
Ryan,
I am crossing this discussion to the PGSQL-PERFORMANCE list, which is the
proper place for it. Anyone interested, please follow us there!
>>>Ryan Bradetich said:
> the table would look like:
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.
> 1 | Mon Feb 17 00:3
Curt Sampson wrote:
> On Mon, 16 Feb 2003, Ryan Bradetich wrote:
> > Since my only requirement is that the rows be unique, I have developed a
> > custom MD5 function in C, and created an index on the MD5 hash of the
> concatenation of all the fields.
>
> Well, that won't guarantee uniqueness, si
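One concrete reason a hash of a bare concatenation can't guarantee uniqueness, quite apart from the theoretical possibility of MD5 collisions: without a delimiter, distinct field values can produce the same concatenated string. A sketch:

```sql
-- Distinct field pairs, identical concatenation, identical hash:
SELECT md5('ab' || 'c') = md5('a' || 'bc');          -- true

-- A delimiter distinguishes them (as long as the delimiter
-- cannot itself appear in the field values):
SELECT md5('ab' || '|' || 'c') = md5('a' || '|' || 'bc');  -- false
```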
On Mon, 16 Feb 2003, Ryan Bradetich wrote:
> I am not sure why all the data is duplicated in the index ...
Well, you have to have the full key in the index, or how would you know,
when you look at a particular index item, if it actually matches what
you're searching for?
MS SQL server does have
Ryan Bradetich <[EMAIL PROTECTED]> writes:
> the table would look like:
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.
Ah, I see your point now. (Thinks: what about separating the "anomaly"
column i
> I ended up with only a few indexes on the operations table, because the
> processes that fill it up do minimal lookups to see if data is already in the
> table, if not do inserts. Then at regular intervals, the table is cleaned up -
> that is, a process to remove the duplicates is run. This unfortun
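The periodic duplicate-removal step described above can be done in PostgreSQL with the system column ctid, keeping one arbitrary row per duplicate group (the table and column names here are assumed from the schema discussed in this thread):

```sql
DELETE FROM operations o
WHERE EXISTS (
    SELECT 1
    FROM operations o2
    WHERE o2.host_id     = o.host_id
      AND o2."timestamp" = o."timestamp"
      AND o2.category    = o.category
      AND o2.anomaly     = o.anomaly
      AND o2.ctid < o.ctid   -- keep the physically first copy
);
```

Since ctid uniquely identifies a row's physical location, this deletes every row for which an earlier identical copy exists.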
>>>Ryan Bradetich said:
> the table would look like:
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.
> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has expired password.
> 2 | Mon Feb 17 00:34
On Mon, 2003-02-17 at 00:15, Tom Lane wrote:
> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> > On Sun, 2003-02-16 at 23:34, Tom Lane wrote:
> >> It's not real clear to me why you bother enforcing a constraint that the
> >> complete row be unique. Wouldn't a useful constraint be that the first
> >>
Ryan Bradetich <[EMAIL PROTECTED]> writes:
> On Sun, 2003-02-16 at 23:34, Tom Lane wrote:
>> It's not real clear to me why you bother enforcing a constraint that the
>> complete row be unique. Wouldn't a useful constraint be that the first
>> three columns be unique?
> The table holds system poli
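For concreteness, the constraint Tom is proposing would look something like this (table name assumed):

```sql
-- Sketch of the suggestion: enforce uniqueness on the identifying
-- columns only, rather than on the whole row.
ALTER TABLE operations
    ADD CONSTRAINT operations_host_ts_cat_key
    UNIQUE (host_id, "timestamp", category);
```

Ryan's sample rows earlier in the thread show why this doesn't fit his data as-is: the same host_id, timestamp, and category repeat with different anomaly text, so only the full four-column row is unique.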
On Sun, 2003-02-16 at 23:34, Tom Lane wrote:
> Ryan Bradetich <[EMAIL PROTECTED]> writes:
> > Although the table schema is immaterial, I will provide it so we have a
> > common framework for this discussion:
>
> > host_id integer (not null)
> > timestamp datetime
Ryan Bradetich <[EMAIL PROTECTED]> writes:
> Although the table schema is immaterial, I will provide it so we have a
> common framework for this discussion:
> host_id integer (not null)
> timestamp datetime (not null)
> category text
Hello postgres hackers,
Been a while since I have participated on this list ... but I have a new
itch to scratch
Although the table schema is immaterial, I will provide it so we have a
common framework for this discussion:
host_id integer (not null)
timestamp
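Pieced together from the schema fragments and sample rows in this thread, the table under discussion looks roughly like the following (the anomaly column name is taken from Tom's later reply; the type mapping from "datetime" is an assumption):

```sql
CREATE TABLE anomalies (
    host_id     integer     NOT NULL,
    "timestamp" timestamptz NOT NULL,  -- "datetime" in the original post
    category    text,
    anomaly     text   -- e.g. 'user x has an invalid shell.'
);
```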