Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Devrim GÜNDÜZ


On 17 Jul 2010, at 22:54, Tom Lane wrote:



Jerry LeVan  writes:

On Jul 17, 2010, at 1:21 PM, Tom Lane wrote:

Uhm ... what's wrong with the Fedora-supplied PG RPMs?



For a long time Fedora rpms lagged the rpms that Devrim made for
the yum repository.


Well, I'm not allowed to bump to a new major PG version within a
stable Fedora release, but other than that I believe I've kept up...


That's why I created the PostgreSQL RPM Repository ;)

--
Devrim GÜNDÜZ
PostgreSQL DBA @ Akinon/Markafoni, Red Hat Certified Engineer
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Tom Lane
Jerry LeVan  writes:
> On Jul 17, 2010, at 1:21 PM, Tom Lane wrote:
>> Uhm ... what's wrong with the Fedora-supplied PG RPMs?

> For a long time Fedora rpms lagged the rpms that Devrim made for
> the yum repository.

Well, I'm not allowed to bump to a new major PG version within a
stable Fedora release, but other than that I believe I've kept up...

regards, tom lane



Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Jerry LeVan

On Jul 17, 2010, at 1:21 PM, Tom Lane wrote:

> Jerry LeVan  writes:
>> I wish whoever is in charge of the yum rpm depository would get
>> cracking on building the Fedora 13 repo...
> 
> Uhm ... what's wrong with the Fedora-supplied PG RPMs?
> 
>   regards, tom lane

For a long time Fedora rpms lagged the rpms that Devrim made for
the yum repository.

That is apparently no longer the case so I may well switch back :)

Jerry




Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Thomas Kellerer

Tom Lane wrote on 17.07.2010 19:35:

Thomas Kellerer  writes:

Tom Lane wrote on 17.07.2010 16:36:

Well, nobody's offered any actual *numbers* here.



I measured the runtime as seen from the JDBC client and as reported by explain analyze 
(the last line reading "Total runtime:")


The "runtime" from explain analyze really should not be measurably
different, since it doesn't include parse time or data transmission
time, and you ought to get the same execution plan with or without the
column names.


Interesting.

My intent _was_ to exclude data transmission from the test by using explain
analyze, but I'm surprised that it doesn't include the parsing in the execution
time it reports.



I'd dismiss those numbers as being within experimental
error, except it seems odd that they all differ in the same direction.


And it's reproducible (at least on my computer). As I said, I ran it 20 times 
(each run did it for 5, 10, ... columns) and the values I posted were averages of 
those runs.

Regards
Thomas

 





Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Tom Lane
Thomas Kellerer  writes:
> Tom Lane wrote on 17.07.2010 16:36:
>> Well, nobody's offered any actual *numbers* here.

> I measured the runtime as seen from the JDBC client and as reported by 
> explain analyze (the last line reading "Total runtime:")

The "runtime" from explain analyze really should not be measurably
different, since it doesn't include parse time or data transmission
time, and you ought to get the same execution plan with or without the
column names.  I'd dismiss those numbers as being within experimental
error, except it seems odd that they all differ in the same direction.
The overall times seen from the client seem plausible though;
particularly since we can see an increase in the percentage overhead
as the number of columns increases, which is what you'd expect if
you were accurately measuring a column-name-lookup overhead.

regards, tom lane



Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Tom Lane
Jerry LeVan  writes:
> I wish whoever is in charge of the yum rpm depository would get
> cracking on building the Fedora 13 repo...

Uhm ... what's wrong with the Fedora-supplied PG RPMs?

regards, tom lane



Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Thomas Kellerer

Thomas Kellerer wrote on 17.07.2010 18:29:

Want to do some experiments?


Apparently there *is* a substantial overhead, but I suspected the
sending of the raw SQL literal to be a major factor here.
(Server and JDBC program were running on the same machine.)



In case anyone is interested:

Out of curiosity I ran the same test with a local Oracle installation (10.2) 
and there the overhead is substantially lower (for 20 columns, only 2% slower, 
compared to 26% with Postgres)

I can't run something equivalent to explain analyze in Oracle from within JDBC, 
so I could not compare those figures. But it seems that either the parsing 
overhead in Oracle is lower or the JDBC driver is more efficient...

Regards
Thomas




Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Thomas Kellerer

Tom Lane wrote on 17.07.2010 16:36:

Thomas Kellerer  writes:

I'm still a bit surprised that parsing the statement _with_ a column list is 
measurably slower than without a column list.


Well, nobody's offered any actual *numbers* here.  It's clear that
parsing the column list will take more time than not doing so, but
whether that adds up to anything significant or even measurable
compared to the rest of the statement's cost is not so clear.



Want to do some experiments?


OK, I wrote a small Java program that inserts rows using a column list and 
without a column list.

I did that for 5, 10, 15 and 20 (integer) columns, inserting the rows into 
the empty table each time.

I measured the runtime as seen from the JDBC client and as reported by explain analyze 
(the last line reading "Total runtime:").
All times are in milliseconds and are averaged over 20 runs.

ColumnCount   with columns   without columns
          5        1132.00         1092.60
         10        1288.53         1148.33
         15        1430.00         1215.67
         20        1657.60         1313.20


Apparently there *is* a substiantial overhead, but I suspected the sending of 
the raw SQL literal to be a major factor here.
(Server and JDBC program were running on the same machine)

So I ran "EXPLAIN ANALYZE" on the INSERT to get the processing time of the 
server and remove the JDBC/SQL literal overhead.

ColumnCount   with columns   without columns
          5         116.33          115.30
         10         149.89          128.28
         15         169.94          159.14
         20         197.72          193.66

Which still shows an overhead, but less.

So apparently the "even slower" in the manual *is* right, as the overhead of 
sending the longer SQL statement over the network impacts psql as well.






[GENERAL] How to distribute quantity if same product is in multiple rows

2010-07-17 Thread Andrus

The order contains the same product in multiple rows.
I tried to calculate the undelivered quantity using the script below, but it
produces the wrong result: the delivered quantity is subtracted from both rows
instead of being distributed.

How can the delivered quantity be distributed across the rows according to each
row's quantity? Can it be done in plain SQL, or should a loop in plpgsql be used?

Andrus.

-- Order details
CREATE TEMP TABLE rid  (
id serial primary key,
toode char(20), -- product id
kogus numeric(12,5), -- ordered quantity
taitmata numeric(12,5) ) -- undelivered quantity, which needs to be calculated
on commit drop;

insert into rid (toode,kogus) values ('PRODUCT1', 10 );
insert into rid (toode,kogus) values ('PRODUCT1', 20 );

-- Delivered quantities
CREATE TEMP TABLE mtait  (
toode char(20) primary key, -- product id
taitmkogus numeric(12,5) ) -- delivered quantity
on commit drop;

insert into mtait(toode, taitmkogus) values ('PRODUCT1', 15);

UPDATE rid SET taitmata= rid.kogus -
   COALESCE((SELECT taitmkogus FROM mtait WHERE mtait.toode=rid.toode),0);

select taitmata
 from rid;

Observed:

-5
5

Expected:

0
15

The first row's quantity of 10 is fully delivered, since 10 < 15. Of the second
row's quantity of 20, only 5 is delivered (15 - 10 = 5), leaving 15 undelivered.
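One way to do this in plain SQL is with a window function that computes, for each
row, how much of the product's delivered quantity was already consumed by earlier
rows. This is a sketch, assuming PostgreSQL 8.4 or later (window functions); on
older versions a plpgsql loop would be needed:

```sql
-- prior = total ordered quantity of earlier rows (by id) for the same product.
-- The quantity delivered to a row is whatever remains of the product's
-- delivered total after earlier rows took their share, clamped to [0, kogus].
UPDATE rid
SET taitmata = s.kogus
               - LEAST(GREATEST(COALESCE(m.taitmkogus, 0) - s.prior, 0),
                       s.kogus)
FROM (SELECT id, toode, kogus,
             SUM(kogus) OVER (PARTITION BY toode ORDER BY id) - kogus AS prior
      FROM rid) s
LEFT JOIN mtait m ON m.toode = s.toode
WHERE rid.id = s.id;
```

With the sample data, the first row gets prior = 0, so all 10 of its quantity is
covered (taitmata = 0); the second row gets prior = 10, so only 5 of the 15
delivered remain for it (taitmata = 15), matching the expected output.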




Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Jerry LeVan
On Sat, 2010-07-17 at 08:49 -0700, Joe Conway wrote:
> On 07/17/2010 08:12 AM, Tom Lane wrote:
> > I wrote:
> > Oh, and Theory 3: see if restarting your postgresql server fixes it.
> > If you didn't restart then the old openldap libraries are probably
> > still in the server's address space.  I'm not sure exactly how that
> > might lead to this symptom, but it sure looks like your libldap and
> > liblber are out of sync somehow.
> 
> FWIW, I just tested on fedora 13 and sure enough, I started with the
> same error, and fixed it with a postgres restart.
> 
> Joe
> 
> 
Wow,  Tom  strikes again...

The server restart fixed the problem.

I wish whoever is in charge of the yum rpm depository would get
cracking on building the Fedora 13 repo...

Jerry




Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Joe Conway
On 07/17/2010 08:12 AM, Tom Lane wrote:
> I wrote:
> Oh, and Theory 3: see if restarting your postgresql server fixes it.
> If you didn't restart then the old openldap libraries are probably
> still in the server's address space.  I'm not sure exactly how that
> might lead to this symptom, but it sure looks like your libldap and
> liblber are out of sync somehow.

FWIW, I just tested on fedora 13 and sure enough, I started with the
same error, and fixed it with a postgres restart.

Joe


-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & Support





Re: [GENERAL] cache lookup failed for function 19119

2010-07-17 Thread David Fetter
On Thu, Jul 15, 2010 at 10:21:52AM -0400, Merlin Moncure wrote:
> On Thu, Jul 15, 2010 at 2:34 AM, tamanna madaan
>  wrote:
> > Hi All
> >
> > I am using  postgres-8.1.2 .
> >
> > And getting this error “cache lookup failed for function 19119”.
> >
> > Can anyone please let me know what could have gone wrong.
> >
> > How can a function go missing? And which function
> >
> > is it talking about? Is it some internal postgres function or a
> > user-defined function?
> >
> > How can I get the function name corresponding to 19119?
> 
> The function is either gone (it was deleted manually from pg_proc, for
> example), dropped, added, etc., or there is some other problem. You
> might be able to fix the problem by recreating the function
> (create/replace) that is calling the function in question (your
> database log should be giving you some context).
> 
> You are on 8.1.2, which is crazy. You need to immediately get the
> latest bugfix release for the 8.1 series. You might want to consider
> a dump/reload... read the release notes for the 8.1 series here:
> http://www.postgresql.org/docs/8.1/static/release.html.

You might also want to note that 8.1's end of life is in November, so
start planning the upgrade to 9.0 right now.  You will likely need to
clean up some client code in order for that to work, as modern
versions of PostgreSQL don't allow some of the sloppy and dangerous
things (casting automatically to and from text, e.g.) that former
versions did.
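As an illustration of the removed implicit casts (a hedged sketch; the exact
error text varies by version), the classic case from the 8.3 release notes is
passing a non-text value where text is expected:

```sql
-- Accepted on 8.1 and 8.2 via an implicit integer-to-text cast:
SELECT substr(12345, 3);

-- On 8.3 and later this raises
--   ERROR:  function substr(integer, integer) does not exist
-- and the cast must be written explicitly:
SELECT substr(12345::text, 3);  -- '345'
```

Auditing client code for such calls before the upgrade is much cheaper than
discovering them in production afterwards.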

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Tom Lane
I wrote:
> Theory 2: this week's F-13 update to openldap misfired on your box.
> (If so, forcibly removing and reinstalling openldap might fix it.)

Oh, and Theory 3: see if restarting your postgresql server fixes it.
If you didn't restart then the old openldap libraries are probably
still in the server's address space.  I'm not sure exactly how that
might lead to this symptom, but it sure looks like your libldap and
liblber are out of sync somehow.

regards, tom lane



Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Tom Lane
Jerry LeVan  writes:
> I think that something bad has happened to Fedora 13 this week.
> This morning when I tried the following:

> select dblink_connect('host=mbp user=levan dbname=levan password=xx')

> I got the following error:

> Error: could not load library "/usr/lib/pgsql/dblink.so": 
> /usr/lib/libldap_r-2.4.so.2: undefined symbol: ber_sockbuf_io_udp

> The above worked ok last week...

Works for me, too, using the standard Fedora postgresql RPMs.

Theory 1: there's something wrong with the PGDG RPMs.

Theory 2: this week's F-13 update to openldap misfired on your box.
(If so, forcibly removing and reinstalling openldap might fix it.)

regards, tom lane



Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Tom Lane
Thomas Kellerer  writes:
> I'm till a bit surprised that parsing the statement _with_ a column list is 
> mesurably slower than withou a column list.

Well, nobody's offered any actual *numbers* here.  It's clear that
parsing the column list will take more time than not doing so, but
whether that adds up to anything significant or even measurable
compared to the rest of the statement's cost is not so clear.
Want to do some experiments?

regards, tom lane



Re: [GENERAL] MESSAGE ERROR

2010-07-17 Thread Leif Biberg Kristensen
On Saturday 17. July 2010 15.14.51 Cornelio Royer Climent wrote:
> 
> Hi
> 
>  
> 
> I want to create this table, but i can't, 
> 
>  
> 
> Look this error.
> 
>  
> 
>  
> 
> CREATE TABLE security_info2 (
> 
> window character varying(64) NOT NULL
> 
> );
> 
> ERROR:  syntax error at or near "window"
> 
> LINE 2: window character varying(64) NOT NULL

"window" is a reserved word in PostgreSQL since version 8.3, I believe.

regards,
--
Leif Biberg Kristensen
http://solumslekt.org/blog/



[GENERAL] MESSAGE ERROR

2010-07-17 Thread Cornelio Royer Climent
 

Hi

 

I want to create this table, but I can't.

 

Look this error.

 

 

CREATE TABLE security_info2 (

window character varying(64) NOT NULL

);

ERROR:  syntax error at or near "window"

LINE 2: window character varying(64) NOT NULL

 

 

 

Could anyone help me, please?

 

I'm using postgresql 8.4

 

Thanks..

 

 



Re: [GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Devrim GÜNDÜZ
On 17 Jul 2010, at 16:10, Jerry LeVan wrote:



I think that something bad has happened to Fedora 13 this week.

This morning when I tried the following:

select dblink_connect('host=mbp user=levan dbname=levan  
password=xx')


I got the following error:

Error: could not load library "/usr/lib/pgsql/dblink.so":
/usr/lib/libldap_r-2.4.so.2: undefined symbol: ber_sockbuf_io_udp


PostgreSQL RPMs need to be rebuilt against new ldap RPMs, I think. You  
need to bug the PG repo maintainers.


Still, it is Fedora's fault, too.
--
Devrim GÜNDÜZ
PostgreSQL DBA @ Akinon/Markafoni, Red Hat Certified Engineer
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz


Re: [GENERAL] message errror

2010-07-17 Thread Raymond O'Donnell

On 17/07/2010 14:16, Cornelio Royer Climent wrote:

CREATE TABLE security_info2 (

window character varying(64) NOT NULL

);

ERROR: syntax error at or near "window"

LINE 2: window character varying(64) NOT NULL



"window" is a reserved word:

  http://www.postgresql.org/docs/8.4/static/sql-keywords-appendix.html

Just call your column something else and it should work. Alternatively, 
you could enclose the column name in double-quotes -


  "window" character varying(64)

- but then you'll have to remember to quote it *every* time it is 
referenced in queries or elsewhere.
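Putting both options together (a minimal sketch; the second table name is
hypothetical):

```sql
-- Option 1: rename the column to avoid the reserved word entirely.
CREATE TABLE security_info2 (
    window_name character varying(64) NOT NULL
);

-- Option 2: keep the name, but quote it here and in every query.
CREATE TABLE security_info3 (
    "window" character varying(64) NOT NULL
);
INSERT INTO security_info3 ("window") VALUES ('main');
SELECT "window" FROM security_info3;  -- the quotes are required every time
```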


Ray.





[GENERAL] message errror

2010-07-17 Thread Cornelio Royer Climent
 Hi

 I want to create this table, but I can't.

 Look this error.

 CREATE TABLE security_info2 (

window character varying(64) NOT NULL

);

ERROR:  syntax error at or near "window"

LINE 2: window character varying(64) NOT NULL

 Could anyone help me, please?

 I'm using postgresql 8.4

 Thanks..

 


[GENERAL] Fedora 13 killed dblink this week...

2010-07-17 Thread Jerry LeVan
I think that something bad has happened to Fedora 13 this week.

This morning when I tried the following:

select dblink_connect('host=mbp user=levan dbname=levan password=xx')

I got the following error:

Error: could not load library "/usr/lib/pgsql/dblink.so": 
/usr/lib/libldap_r-2.4.so.2: undefined symbol: ber_sockbuf_io_udp

The above worked ok last week...

I am running postgresql 8.4.4 from the rpms

[je...@bigbox ~]$ rpm -qa | grep postgres
postgresql-8.4.4-1PGDG.fc12.i386
postgresql-server-8.4.4-1PGDG.fc12.i386
postgresql-pltcl-8.4.4-1PGDG.fc12.i386
postgresql-docs-8.4.4-1PGDG.fc12.i386
postgresql-libs-8.4.4-1PGDG.fc12.i386
postgresql-contrib-8.4.4-1PGDG.fc12.i386
postgresql-plpython-8.4.4-1PGDG.fc12.i386
postgresql-devel-8.4.4-1PGDG.fc12.i386

Also:
[je...@bigbox ~]$ ldd /usr/lib/libldap_r-2.4.so.2
linux-gate.so.1 =>  (0x00852000)
liblber-2.4.so.2 => /usr/lib/liblber-2.4.so.2 (0x03c3)
libresolv.so.2 => /lib/libresolv.so.2 (0x009a8000)
libsasl2.so.2 => /usr/lib/libsasl2.so.2 (0x03aaa000)
libssl.so.10 => /usr/lib/libssl.so.10 (0x037fc000)
libcrypto.so.10 => /lib/libcrypto.so.10 (0x034da000)
libpthread.so.0 => /lib/libpthread.so.0 (0x0080f000)
libc.so.6 => /lib/libc.so.6 (0x00681000)
libdl.so.2 => /lib/libdl.so.2 (0x0082c000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00597000)
libgssapi_krb5.so.2 => /lib/libgssapi_krb5.so.2 (0x037cc000)
libkrb5.so.3 => /lib/libkrb5.so.3 (0x036e4000)
libcom_err.so.2 => /lib/libcom_err.so.2 (0x00dfa000)
libk5crypto.so.3 => /lib/libk5crypto.so.3 (0x0379f000)
libz.so.1 => /lib/libz.so.1 (0x0086a000)
/lib/ld-linux.so.2 (0x0065f000)
libfreebl3.so => /lib/libfreebl3.so (0x005c9000)
libkrb5support.so.0 => /lib/libkrb5support.so.0 (0x00646000)
libkeyutils.so.1 => /lib/libkeyutils.so.1 (0x00637000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00981000)

This happens on *both* of my Fedora 13 boxen...

Any suggestions?

Jerry


Re: [GENERAL] Incorrect FTS result with GIN index

2010-07-17 Thread Oleg Bartunov

Artur,

I downloaded your dump and tried your queries with index, I see no problem
so far.

             Table "public.search_tab"
     Column     |  Type   |                        Modifiers
----------------+---------+----------------------------------------------------------
 id             | integer | not null default nextval('search_tab_id_seq1'::regclass)
 keywords       | text    |
 collection_urn | text    |
 bbox           | text    |
 object_urn     | text    | not null
 description    | text    |
 category       | text    |
 summary        | text    |
 priority       | integer |
Indexes:
    "search_tab_pkey1" PRIMARY KEY, btree (id)
    "idx_keywords_ger" gin (to_tsvector('german'::regconfig, keywords))

test=# explain analyze select count(*) from search_tab
where (to_tsvector('german', keywords ) @@ to_tsquery('german', 'ee:*'))
and   (to_tsvector('german', keywords ) @@ to_tsquery('german', 'dd:*'));
                                  QUERY PLAN
------------------------------------------------------------------------------
 Aggregate  (cost=103.87..103.88 rows=1 width=0) (actual time=24.784..24.784 rows=1 loops=1)
   ->  Bitmap Heap Scan on search_tab  (cost=5.21..103.80 rows=25 width=0) (actual time=24.642..24.769 rows=123 loops=1)
         Recheck Cond: ((to_tsvector('german'::regconfig, keywords) @@ '''ee'':*'::tsquery) AND (to_tsvector('german'::regconfig, keywords) @@ '''dd'':*'::tsquery))
         ->  Bitmap Index Scan on idx_keywords_ger  (cost=0.00..5.21 rows=25 width=0) (actual time=24.620..24.620 rows=123 loops=1)
               Index Cond: ((to_tsvector('german'::regconfig, keywords) @@ '''ee'':*'::tsquery) AND (to_tsvector('german'::regconfig, keywords) @@ '''dd'':*'::tsquery))
 Total runtime: 24.830 ms
(6 rows)

see rows=123
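A quick way to check on the failing installation whether the GIN index itself
is returning wrong results (a generic debugging sketch, not specific to this
dump) is to force a sequential scan and compare the counts:

```sql
-- Force the planner to avoid the index, then re-run the query:
SET enable_bitmapscan = off;
SET enable_indexscan = off;
SELECT count(*) FROM search_tab
WHERE (to_tsvector('german', keywords) @@ to_tsquery('german', 'ee:*'))
  AND (to_tsvector('german', keywords) @@ to_tsquery('german', 'dd:*'));
RESET enable_bitmapscan;
RESET enable_indexscan;

-- If the counts differ, the index may be corrupt; rebuilding it is worth a try:
REINDEX INDEX idx_keywords_ger;
```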


On Thu, 15 Jul 2010, Artur Dabrowski wrote:



Hello,

I was trying to use GIN index, but the results seem be incorrect.


1. QUERY WITHOUT INDEX
select count(*) from search_tab where
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'ee:*')) and
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'dd:*'));

count
---
  123
(1 row)


2. CREATING INDEX
create index idx_keywords_ger on search_tab
using gin(to_tsvector('german', keywords));


3. QUERY WITH INDEX
select count(*) from search_tab where
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'ee:*')) and
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'dd:*'));

count
---
  116
(1 row)


The number of rows is different. To make things more interesting, and to ensure
the problem is not caused by dictionary normalisation:

4. EQUIVALENT QUERY WITH INDEX
select count(*) from search_tab where
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'ee:* & dd:*'));

count
---
  123
(1 row)

I tried the same with a simple-based dictionary. The problem is always
reproducible.

Total count of records in my database is 1 006 300 if it matters.

One of the missing results is the following: "lSWN eeIf hInEI IN
SIL3WugEOANcEGVWL1L LBAGAeLlGS ttfL DDhuDEIni9 ce". If the query is more
specifically targeted at this row, then it does find it:

5. MORE DETAILED QUERY WITH INDEX
select keywords from search_tab where
(to_tsvector('german', keywords ) @@ to_tsquery('german', 'eeI:* & dd:*'));

keywords



lSWN eeIf hInEI IN SIL3WugEOANcEGVWL1L LBAGAeLlGS ttfL DDhuDEIni9 ce
tSALWIEEIn-3WNecGAINfLuLAV DDLIWNG E Lt h c8  BiIfgGl1 EeIhulSLenS6LDe5O
hGn DDlhIgGEAcS1O eeiEEI WnILWELS68VBLL AGNIAfINt6 lLuWuNeDc ItLfe SL
hGe WIiI EeItnLLuA1efOh3ALWc  uGINEltcIBE LnegLDNA3 DD SVNG LSSIlWfE
eeIW ItueS W39LnELg-GuDLEhAn8BeFG IVi DDNEfLG1SI 1tNIOA  lAhNLLccfWISE l
6em on.0nsRH nehSA2l1HAsauncu0I65l7 ddnsn1SAS i u0eLAnlr t70gaains w gzsH
eeiog
rfiwgso0g364l1 1wU eei1n 5lL dDA 0
DDInNcEfSWAEAtcL1IeSuAG5LE Lilh8tEGeDg f3B eEIOL7h uWV-L1IGN LINWeIn l S
ils eeiru00ewH.6sgAeHoSlLhglso0 asn0u2a atisA0 ddcngAnzRA Se Au2 nm8ns0
uS8snH
DDD EWlE1GShhLe8L NENI  tuL cgGGInfcBAlLfIO L1S eeIWeAEnILStu AViWNI
n IOLLt 0Alih tuWNE L nAGlVSNSDI DDeW BIegfG EeIhL9ELeScELWGAIfN1uIc
DnSE eeIWLu9tLNhNEuAt I1BelhGGfLWLS nSWINI eiELgAIG DDLEclV7 IO c Af
EeIElfN L4I lE2G cSOLniAWgSVItc ILDN L57BuDfALtSIe-WnGhGIW DDA NE1Lhuee
hNILN DD L6flSEeW1gthfI L1WAlENE eEIGIAt VGBDO uGLeLccAeSuLWIn Ii nS
(14 rows)


Did I misunderstand something, or is it a bug?

Best regards
Artur



Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83


Re: [GENERAL] pg_dump and --inserts / --column-inserts

2010-07-17 Thread Thomas Kellerer

Craig Ringer wrote on 17.07.2010 03:13:

On 17/07/10 04:26, Thomas Kellerer wrote:


Hmm.
For years I have been advocating to always use fully qualified column
lists in INSERTs (for clarity and stability)
And now I learn it's slower when I do so :(


If you're not doing hundreds of thousands of identical ones at a time,
it's still very much a good idea. The costs of parsing and transmission
are usually pretty insignificant, and the readability/maintenance
benefits are huge.

It's only when dealing with bulk loading that this sort of thing starts
to be worth thinking about.


Thanks for the clarification.

I'm still a bit surprised that parsing the statement _with_ a column list is 
measurably slower than without a column list.

Regards
Thomas




Re: [GENERAL] Efficient Way to Merge Two Large Tables

2010-07-17 Thread Daniel Verite
Joshua Rubin wrote:

> I need to figure out why this is slow, and if there is any faster way.

Have you considered INSERTing into a third table that would replace both
source tables when it's over? The target table would initially have no index.
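A sketch of that approach (table, column, and index names are hypothetical):

```sql
-- Build the merged table with no indexes, so the bulk INSERTs are cheap.
CREATE TABLE merged (LIKE table_a INCLUDING DEFAULTS);

INSERT INTO merged SELECT * FROM table_a;
INSERT INTO merged SELECT * FROM table_b;

-- Create indexes only after loading, then swap the tables in one transaction.
CREATE INDEX merged_key_idx ON merged (key_column);

BEGIN;
DROP TABLE table_a;
DROP TABLE table_b;
ALTER TABLE merged RENAME TO table_a;
COMMIT;
```

Loading into an unindexed table and indexing once at the end is usually far
faster than maintaining indexes row by row during the merge.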

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
