Re: [GENERAL] retrieving keys from a GIN index on a tsvector column in Postgres 9.1?

2012-10-18 Thread Craig Ringer

On 10/19/2012 03:13 AM, Chris Esposito wrote:

Hi,
 I've got a collection of documents that I'd like to do some full
text searching on in Postgres 9.1, so adding them as the contents of a
tsvector and then creating a GIN index on this column seems
straightforward enough.


Is this your SO question?

http://stackoverflow.com/questions/12961459/retrieving-keys-from-a-gin-index-on-a-tsvector-column-in-postgres-9-1

If so, please link back to SO questions when posting on the mailing list 
and vice versa, so others who're looking for information later can find 
both.


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] retrieving keys from a GIN index on a tsvector column in Postgres 9.1?

2012-10-18 Thread Tom Lane
Chris Esposito  writes:
> I've got a collection of documents that I'd like to do some full text
> searching on in Postgres 9.1, so adding them as the contents of a tsvector
> and then creating a GIN index on this column seems straightforward enough.
> My understanding of the GIN index is that the keys used are (depending on
> the text search configuration used) either the list of original words or
> their normalized lexemes from the collection of original documents.

If you're storing a tsvector column, it already has done all the text
processing involved --- I don't believe GIN is adding anything, it's
just indexing the elements of the tsvector.
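Since the tsvector column already holds the normalized lexemes, they can be pulled straight from the column rather than from the index — e.g. with ts_stat (table and column names here are illustrative):

```sql
-- ts_stat executes a query returning tsvector values and reports each
-- distinct lexeme along with its document and occurrence counts.
SELECT word, ndoc, nentry
FROM ts_stat('SELECT tsv FROM documents')
ORDER BY ndoc DESC;
```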

regards, tom lane




[GENERAL] oracle_fdw

2012-10-18 Thread hartrc
Environment: PostgreSQL 9.1.6
SLES 11 SP2
Oracle_fdw 0.9.7

I am trying to implement oracle_fdw.
So far I have installed an Oracle client on my postgres server. I can
connect to the Oracle environment from the postgres server (as the postgres OS
user) by navigating to $ORACLE_HOME/bin, then running ./sqlplus /nolog

then:
connect user@instance


I have downloaded and run make and make install for the oracle_fdw - both
executed successfully. I have created the extension successfully. I have
created a foreign server, foreign data wrapper and a foreign table.

When I try a select from the foreign table I get:

ERROR:  error connecting to Oracle: OCIEnvCreate failed to create
environment handle
SQL state: HV00N


From what limited info I can find, this is most likely due to my ORACLE_HOME
environment variable or some other environment setting. I have set ORACLE_HOME
in the postgres user's .bash_profile.

Where should I set the ORACLE_HOME and TNS_ADMIN environment variables? Is there
something else I am missing? Does the database need to be restarted
following any changes to environment variables?
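For what it's worth, the server itself won't see variables set only in the postgres user's .bash_profile: they must be in the environment of the postmaster when it starts, and the cluster restarted afterwards. A sketch — all paths are illustrative, and the right file depends on how the server is started on SLES (e.g. an environment file sourced by the init script):

```shell
# Illustrative environment-file fragment sourced by the PostgreSQL init
# script; adjust the paths to match your Oracle client installation.
export ORACLE_HOME=/opt/oracle/product/11.2/client
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin

# Then restart the cluster so the postmaster inherits the new environment:
# pg_ctl -D /var/lib/pgsql/data restart
```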

Thank you
Rob









Re: [GENERAL] pg_stats in 9.2

2012-10-18 Thread Guillaume Lelarge
On Wed, 2012-10-17 at 11:57 -0400, Tom Lane wrote:
> Guillaume Lelarge  writes:
> > Anyone care to explain all this to me? :)
> 
> Try the stats-slot type specifications in
> src/include/catalog/pg_statistic.h
> 

Oh, OK, got it.

The three extra values are, in order, the minimum frequency, the maximum
frequency, and the frequency of NULL elements.
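For the record, a quick way to look at those trailing values (table and column names are made up):

```sql
-- For array-like columns, most_common_elem_freqs lists the per-element
-- frequencies followed by three extra values: the minimum and maximum
-- element frequency and the frequency of NULL elements.
SELECT most_common_elems, most_common_elem_freqs
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'tags';
```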

Thanks.


-- 
Guillaume
http://blog.guillaume.lelarge.info
http://www.dalibo.com





[GENERAL] retrieving keys from a GIN index on a tsvector column in Postgres 9.1?

2012-10-18 Thread Chris Esposito
Hi,
I've got a collection of documents that I'd like to do some full text
searching on in Postgres 9.1, so adding them as the contents of a tsvector
and then creating a GIN index on this column seems straightforward enough.
My understanding of the GIN index is that the keys used are (depending on
the text search configuration used) either the list of original words or
their normalized lexemes from the collection of original documents.
However, for other functionality we've developed I'd really like to get
that list of keys out of the GIN index to put in another column in a
different table for use. Is this possible?

Thanks,
Chris


Re: [GENERAL] postgresql error while connecting to cluster with pgadmin

2012-10-18 Thread Raymond O'Donnell
On 18/10/2012 15:47, GMAIL wrote:
> I have created two clusters on the same host: the first called "main" with
> port 5432, the second called "remote" with port 5433.
> But when I try to connect with pgadmin3 to the second cluster I get the
> error "authentication failed".
> If I try to connect to the first cluster I don't have any kind of problem.

Yes, they maintain two completely separate lists of users. A successful
login on the first cluster doesn't mean that you can necessarily log in
on the second - they are completely separate... so whatever you did on
the first to be able to log in, you need to do it again on the second.
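For example (the role name and password are placeholders), connect to the second cluster as a superuser and recreate the role there:

```sql
-- Run against the second cluster (e.g. psql -p 5433 -U postgres),
-- recreating whatever role you log in with on the first cluster:
CREATE ROLE myuser LOGIN PASSWORD 'secret';
```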

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




[GENERAL] postgresql error while connecting to cluster with pgadmin

2012-10-18 Thread GMAIL
I have created two clusters on the same host: the first called "main" with
port 5432, the second called "remote" with port 5433.
But when I try to connect with pgadmin3 to the second cluster I get the
error "authentication failed".

If I try to connect to the first cluster I don't have any kind of problem.

I'm using PostgreSQL 9.1 and Ubuntu 10.04.




Re: [GENERAL] Improve MMO Game Performance

2012-10-18 Thread Albe Laurenz
Alvaro Herrera wrote:
>>> Are there any particular settings or methods available to improve Just
>>> insert_table operations
>>
>> The obvious ones:
>> - Wrap several of them in a transaction.
>> - Increase checkpoint_segments.
>> - Set fsync=off and hope you don't crash.
> 
> I think it would work to set asynchronous_commit=off for the
> transactions that insert moves.  That way, the fsync flushes happen in
> the background and are batched.  Raising wal_buffers is probably a good
> idea, and keep an eye on how the walwriter is doing.

I guess you mean synchronous_commit=off :^)

Yes, I forgot that one.

Yours,
Laurenz Albe




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Chris Angelico
On Fri, Oct 19, 2012 at 1:22 AM, Kevin Grittner  wrote:
> Now, if no records are inserted or deleted by another connection, how
> many rows will be deleted by this statement?:
>
> delete from rc where id = (select min(id) from rc);
>
> It's a trick question; the answer depends on a race condition.
>
> Before you run that delete, on a separate connection run this:
>
> begin;
> update rc set id = id - 1;
>
> Run your delete on the first connection and it will block.  Commit
> the transaction on the second connection, and go look at the results
> on the first, and select from the table.

Ah yes, I believe that one can be best explained with the Magic: The
Gathering rules about triggers and the "intervening if" clause. In
M:TG, a trigger might say "At the beginning of your upkeep, if you
have 40 or more life, you win the game". It checks the condition
twice, once when it triggers, and again when it resolves. (Yeah, I'm a
nerd. I know the detailed rules to a nerd's game. So sue me.)

In the first pass, Postgres decides which row(s) should be deleted.
Okay, let's find the one that's the lowest ID. Lowest ID is 1, that
means this record. Hey, mind if I delete this? Oh, it's locked. Hold
on.

Commit the other one.

Okay, let's go delete this one. Check to see that it still matches the
WHERE clause. Does it have ID=1? Nope, ID=0. Don't delete anything!

Yes, it's slightly odd. But really, if you're doing sweeping changes
like that, a table-level massively exclusive lock is probably the best
way to do it. I haven't run into problems with READ COMMITTED myself,
although that's partly because I usually have an immutable ID 'serial
primary key' on every row. Helps a lot.

ChrisA




Re: [GENERAL] Improve MMO Game Performance

2012-10-18 Thread Alvaro Herrera
Albe Laurenz wrote:
> Arvind Singh wrote:
> > Are there any particular settings or methods available to improve Just
> > insert_table operations
> 
> The obvious ones:
> - Wrap several of them in a transaction.
> - Increase checkpoint_segments.
> - Set fsync=off and hope you don't crash.

I think it would work to set asynchronous_commit=off for the
transactions that insert moves.  That way, the fsync flushes happen in
the background and are batched.  Raising wal_buffers is probably a good
idea, and keep an eye on how the walwriter is doing.
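A sketch of both suggestions combined — batching the inserts in one transaction with synchronous commit turned off for just that transaction (the table and columns are illustrative):

```sql
BEGIN;
-- Only this transaction gives up the "wait for WAL flush" guarantee;
-- a crash can lose the last few commits but cannot corrupt the database.
SET LOCAL synchronous_commit = off;
INSERT INTO moves (game_id, player_id, move) VALUES (1, 2, 'e4');
INSERT INTO moves (game_id, player_id, move) VALUES (1, 3, 'e5');
COMMIT;
```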

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Kevin Grittner
Chris Angelico wrote:
 
> Correct me if I'm wrong, but wouldn't:
> 
> update some_table set last_used=last_used+1 returning last_used
> 
> simply block until the other transaction is committed/rolled back?

Only at the READ COMMITTED transaction isolation level.

> That avoids all the issues of serialization AND retrying
> transactions.

If you can tolerate the occasional oddities of READ COMMITTED
transactions, it makes this quite simple, yes.

> Or is the delay itself a problem?

The delay isn't a problem per se; it's hard to get around some form
of blocking if you want transactional integrity and gap-free
assignment of numbers.  If you can do this sort of assignment near
the front of the transaction in REPEATABLE READ or SERIALIZABLE
transactions, it doesn't cost that much more than in READ COMMITTED.

And the manifestations of weirdness in READ COMMITTED can be
daunting.  The simplest example looks pretty contrived, but simple
examples usually do.

Create and populate a simple table:

create table rc (id int primary key);
insert into rc select generate_series(1, 5);

Now, if no records are inserted or deleted by another connection, how
many rows will be deleted by this statement?:

delete from rc where id = (select min(id) from rc);

It's a trick question; the answer depends on a race condition.

Before you run that delete, on a separate connection run this:

begin;
update rc set id = id - 1;

Run your delete on the first connection and it will block.  Commit
the transaction on the second connection, and go look at the results
on the first, and select from the table.

Rather than make sure that every programmer writing code to go
against your database knows exactly where all the problems are and
consistently codes around them, it can be a lot easier to use
serializable transactions.
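The usual companion to that advice is a small client-side wrapper that retries on SQLSTATE 40001 (serialization_failure). A minimal sketch — the exception class below stands in for whatever error type your driver raises, so the names are illustrative:

```python
class SerializationFailure(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""
    pgcode = "40001"

def run_with_retry(txn, max_attempts=5):
    """Run a transaction callable, retrying on serialization failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            # In real code: roll back the transaction here and retry,
            # optionally after a short randomized backoff.

# Demo with a stub "transaction" that fails twice before succeeding.
attempts = {"n": 0}

def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()
    return "committed"

print(run_with_retry(flaky_txn))  # → committed
```

The point is that the retry is invisible to the user: the transaction body simply runs again as though it were the first attempt.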

-Kevin




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Chris Angelico
On Thu, Oct 18, 2012 at 11:26 PM, Kevin Grittner  wrote:
> updating a "last_used" number in a table and
> using the result (if it *is* critical that there are no gaps in
> the numbers).

Correct me if I'm wrong, but wouldn't:

update some_table set last_used=last_used+1 returning last_used

simply block until the other transaction is committed/rolled back?
That avoids all the issues of serialization AND retrying transactions.
Or is the delay itself a problem?

ChrisA




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Kevin Grittner
Chris Angelico wrote:
> Craig Ringer  wrote:
>> BTW, the issue with the underlying question is that their "name"
>> column is unique. They expected to get a serialization failure on
>> duplicate insert into "name", not a unique constraint violation.
>> The question wasn't "why doesn't this fail" but "Why does this
>> fail with a different error than I expected".

I remember reading a paper about an attempt to use software to do a
static analysis of software accessing production databases, to
identify where additional protections (ecplicit locking, conflict
promotion, or conflict materialization) were needed to prevent
serialization anomalies under snapshot isolation. They initially got
a lot of hits for situations where no anomaly could actually happen
due to declarative constraints. Primary keys, unique indexes, and
foreign keys could all prevent anomalies in situations where you
would see them if the logic were left to, for example, trigger code
instead of the declarative constraints. The researchers argued that
in such situations, there was no point generating extra overhead to
use other techniques to redundantly protect data integrity.  I was
persuaded. (I tried to find the paper to reference it here, but
wasn't successful -- I know that Alan Fekete was one of the authors
and the software they were looking at was in production use by the
Indian Institute of Technology for accounting and also a system for
tracking academic information.)

> Sounds to me like it's giving a better error anyway - more helpful
> to know _why_ the second transaction failed than to simply know
> _that_ it failed.

It is a double-edged sword -- you have a more efficient way to
protect the data and a more specific error message; *but*, you don't
have a SQLSTATE on the error message which makes it clear that the
error was due to a race condition and that it is reasonable to retry
the transaction. The application programmer can still find techniques
which will allow automated retry without bugging the user with
spurious error messages which are effectively about hitting a race
condition from which the software can automatically recover, but this
does undercut the mantra that the transaction will do the right thing
or generate a serialization failure.

As an example, if you want to let serialization failures deal with
automatic retries, without pestering the user about having hit a
recoverable race condition, you stay away from the SELECT max(id) + 1
style of key assignment in favor of sequences (if contiguous numbers
are not critical) or updating a "last_used" number in a table and
using the result (if it *is* critical that there are no gaps in
the numbers). That is actually the only "special rule" I've found to
be needed in practice so far to otherwise allow programmers to code
each transaction as though it were going to be run alone, and
otherwise ignore concurrency issues when using serializable
transactions.
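The "last_used" pattern described above might look like this (table and key names are illustrative); concurrent transactions serialize on the row lock, so each caller gets a distinct, gap-free number:

```sql
-- One row per counter; the UPDATE blocks concurrent callers until commit.
CREATE TABLE counters (name text PRIMARY KEY, last_used bigint NOT NULL);
INSERT INTO counters VALUES ('invoice', 0);

-- Inside the transaction that needs the number, as early as possible:
UPDATE counters SET last_used = last_used + 1
WHERE name = 'invoice'
RETURNING last_used;
```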

-Kevin




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Chris Angelico
On Thu, Oct 18, 2012 at 3:08 PM, Craig Ringer  wrote:
> BTW, the issue with the underlying question is that their "name" column is
> unique. They expected to get a serialization failure on duplicate insert
> into "name", not a unique constraint violation. The question wasn't "why
> doesn't this fail" but "Why does this fail with a different error than I
> expected". Not that the question made that particularly clear.

Sounds to me like it's giving a better error anyway - more helpful to
know _why_ the second transaction failed than to simply know _that_ it
failed.

I've actually never used serializable transaction isolation,
preferring more explicit constraints.

ChrisA




Re: [GENERAL] 9.1/9.2 SERIALIZABLE: expected serialization failure between INSERT and SELECT not encountered

2012-10-18 Thread Albe Laurenz
Craig Ringer wrote:
> > Why? They can be serialized. The outcome would be exactly the same
> > if session 2 completed before session 1 began.
> 
> Hmm. Good point; so long as *either* ordering is valid it's fine, it's
> only when *both* orderings are invalid that a serialization failure
> would occur. For some reason I had myself thinking that if a conflict
> could occur in either ordering the tx would fail, which wouldn't really
> be desirable and isn't how it works.
> 
> BTW, the issue with the underlying question is that their "name" column
> is unique. They expected to get a serialization failure on duplicate
> insert into "name", not a unique constraint violation. The question
> wasn't "why doesn't this fail" but "Why does this fail with a different
> error than I expected". Not that the question made that particularly clear.

But the unasked question is also answered, right?

Yours,
Laurenz Albe




Re: [GENERAL] Improve MMO Game Performance

2012-10-18 Thread Albe Laurenz
Chris Angelico wrote:
>> - Set fsync=off and hope you don't crash.
> 
> Ouch. I might consider that for a bulk import operation or something,
> but not for live usage. There's plenty else can be done without
> risking data corruption.

I didn't mean that to be an alternative that anybody should
consider for production use.

I don't think that there are so many ways to speed up
INSERTs.  Can you think of some that I have missed?

Yours,
Laurenz Albe

