Re: [GENERAL] Sharing data between databases

2011-05-11 Thread John R Pierce

On 05/11/11 9:04 PM, Tim Uckun wrote:

> or carefully structure your dblink joins so they can perform efficiently,
> possibly using temp tables as a sort of materialized view.

According to the documentation, unless you are writing procedural code
with cursors, touching the dblink view will pull the entire
table/recordset over.



Well, the idea is, you only query the remote server for the data you 
know you'll need.  Yeah, you can't arbitrarily do complex joins between 
large tables; you want to filter as much as you can with the remote 
query so the dblink only pulls across the data you need.  Procedures 
might help too.
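
For example, something along these lines (a minimal sketch; the
connection string, table and column names are all made up, and it
assumes the dblink module is installed):

-- pull only the rows you need across the link, into a temp table
CREATE TEMP TABLE remote_people AS
SELECT *
FROM dblink('host=central dbname=main',
            'SELECT id, name FROM people WHERE client_id = 42')
     AS t(id integer, name text);

CREATE INDEX remote_people_id_idx ON remote_people (id);
ANALYZE remote_people;

-- the join now runs entirely locally
SELECT o.*, p.name
FROM orders o
JOIN remote_people p ON p.id = o.person_id;

For the rest of the session the temp table behaves like a small
materialized view of the remote data.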








Re: [GENERAL] Sharing data between databases

2011-05-11 Thread Tim Uckun
> or carefully structure your dblink joins so they can perform efficiently,
> possibly using temp tables as a sort of materialized view.

According to the documentation, unless you are writing procedural code
with cursors, touching the dblink view will pull the entire
table/recordset over.

>
> that's really all that the fancier database engines do behind the scenes...
> and even then, distributed joins can be painful.

I am not sure what they do, but I have done this kind of thing in SQL
Server without any problems and with almost no performance penalty when
the two databases were on the same instance.



Re: [GENERAL] Sharing data between databases

2011-05-11 Thread John R Pierce

On 05/11/11 8:10 PM, Tim Uckun wrote:

That's disappointing. I guess I'll have to look towards a replication solution.




or carefully structure your dblink joins so they can perform 
efficiently, possibly using temp tables as a sort of materialized view.


that's really all that the fancier database engines do behind the 
scenes... and even then, distributed joins can be painful.






Re: [GENERAL] Sharing data between databases

2011-05-11 Thread Tim Uckun
That's disappointing. I guess I'll have to look towards a replication solution.



Re: [GENERAL] Sharing data between databases

2011-05-11 Thread Craig Ringer

On 05/12/2011 08:48 AM, Tim Uckun wrote:

What is the preferred way to share tables between databases?

I read about dblink but it doesn't seem optimal, because it needs to
pull in the entire result set every time the view is referenced, so it
might be highly inefficient if I am trying to join a dblinked table
with a "local" table.


There isn't a good one for PostgreSQL, really.

Most people use replication tools like Slony, Londiste or Bucardo to 
replicate shared tables between databases, so each has a copy.


SQL/MED may improve the situation - at some point, possibly.

--
Craig Ringer



Re: [GENERAL] invalid byte sequence for encoding "UTF8": 0xf1612220

2011-05-11 Thread Craig Ringer

On 05/11/2011 03:16 PM, AI Rumman wrote:

I am trying to migrate a database from Postgresql 8.2 to Postgresql 8.3
and getting the following error:

pg_restore: [archiver (db)] Error from TOC entry 2764; 0 29708702 TABLE
DATA originaldata postgres
pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence
for encoding "UTF8": 0xf1612220
HINT:  This error can also happen if the byte sequence does not match
the encoding expected by the server, which is controlled by
"client_encoding".
CONTEXT:  COPY wi_originaldata, line 3592

I took a dump from 8.2 server and then tried to restore at 8.3.

Both the client_encoding and server_encoding are UTF8 at both the servers.


Newer versions of Pg got better at catching bad Unicode. While this helps 
prevent bad data getting into the database, it's a right pain if you're 
moving data over from an older version with less strict checks.


I don't know of any way to relax the checks for the purpose of importing 
dumps. You'll need to fix your dump files before loading them (by 
finding the faulty text and fixing it) or fix it in the origin database 
before migrating the data. Neither approach is nice or easy, but nobody 
has yet stepped up to write a Unicode verifier tool that checks old 
databases' text fields against stricter rules...


--
Craig Ringer



Re: [GENERAL] ERROR: cannot execute nextval() in a read-only transaction

2011-05-11 Thread Craig Ringer

On 05/11/2011 02:29 PM, Dae-man Yang wrote:

I upgrade postgresql from 8.4.2 to 9.0.4.
But I have one problem.
The Error message 'cannot execute nextval() in a read-only transaction'

Please help me.


nextval() modifies a sequence, so you shouldn't be doing it in a 
read-only transaction. Pg 9.0.4 enforces this, but it was already a bug 
in whatever was trying to use nextval() in a read-only transaction 
before 9.0; Pg just didn't notice and warn you.
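
If the session default is read-only but a particular transaction
legitimately needs a sequence value, it can override the default; a
sketch ('my_seq' is a placeholder name, and this won't work on a hot
standby, which is genuinely read-only):

BEGIN;
SET TRANSACTION READ WRITE;  -- overrides default_transaction_read_only
SELECT nextval('my_seq');
COMMIT;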


--
Craig Ringer



Re: [GENERAL] Debug Contrib/cube code

2011-05-11 Thread Joshua Tolley
On Fri, May 06, 2011 at 10:43:23AM +0530, Nick Raj wrote:
> Hi,
> I am using postgresql-8.4.6 and I want to debug the contrib/cube code. Is
> it possible to debug the cube code? There is no ./configure file to enable
> debugging. Is there any way to change the makefile to enable debugging?

If your postgres build uses --enable-debug, cube should use it as well. It
determines this either through pg_config, or by referring to makefiles in the
postgres source directory contrib/cube lives in.

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




[GENERAL] Sharing data between databases

2011-05-11 Thread Tim Uckun
What is the preferred way to share tables between databases?

I read about dblink but it doesn't seem optimal, because it needs to
pull in the entire result set every time the view is referenced, so it
might be highly inefficient if I am trying to join a dblinked table
with a "local" table.

Cheers.



Re: [GENERAL] Urgent Order

2011-05-11 Thread Joshua J. Kugler
On Saturday 07 May 2011, John R Pierce elucidated thus:
> On 05/07/11 6:08 AM, Bob Wilson wrote:
> > Hello
> > This is Bob and I will like to order ( Indexing Table )Do get back
> > to me with the types and cost for the ones you do carry and let me
> > know if there is an extra cost when using visa or master
> > Card.Kindly get back  to me with your name Are you the sales
> > manager or the Owner?
>
> * smallint - $2
> * integer - $4
> * bigint - $8
> * varchar - $1/character
> * boolean - $1 each
> * bytea - $1/byte
> * date - $6
> * timestamp - $8
> * ...

http://www.google.com/search?q=bob+wilson+urgent+order

Makes for some amusing reading.

j

-- 
Joshua Kugler
Part-Time System Admin/Programmer
http://www.eeinternet.com - Fairbanks, AK
PGP Key: http://pgp.mit.edu/  ID 0x73B13B6A



Re: [GENERAL] FILLFACTOR and increasing index

2011-05-11 Thread Tomas Vondra
Hi,

I've studied the implementation of the btree indexes and how exactly the
fillfactor is used, and in general

- when a page split happens, the process needs to obtain more locks
  than with a simple insert, which may result in contention with other
  processes that modify the index (the same pages)

- the fillfactor is used only for the leaf pages, the rest of the index
  does not use it (directly)

So lowering the number of page splits might remove some contention when
there are a lot of processes accessing the same pages.

But that's the theory - I really was not able to come up with a test
that benefits from a lower fillfactor. Actually, the lower the fillfactor,
the larger the index, which may be a significant issue with huge indexes.

So in your case, I'd probably go with the default fillfactor (90), and
maybe I'd consider even using fillfactor=100, to make the index as dense
as possible.
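
For reference, the fillfactor is set per index; a sketch with
hypothetical names:

CREATE INDEX orders_created_idx ON orders (created_at)
    WITH (fillfactor = 100);

-- or change an existing index and rebuild it so the setting takes effect
ALTER INDEX orders_created_idx SET (fillfactor = 90);
REINDEX INDEX orders_created_idx;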

Anyway, I guess the impact of this will be negligible compared to other
parameters (shared_buffers, work_mem, etc.).

regards
Tomas



Re: [GENERAL] PGA

2011-05-11 Thread salah jubeh
Hello  Andrew,

You are right, it is pgaccess


 Thanks



 





From: Andrew Sullivan 
To: pgsql-general@postgresql.org
Sent: Tue, May 10, 2011 4:01:42 PM
Subject: Re: [GENERAL] PGA

On Tue, May 10, 2011 at 06:53:11AM -0700, salah jubeh wrote:
> pga_diagrams  pga_forms  pga_graphs  pga_images  pga_queries  pga_reports
> pga_scripts
>
> I am wondering whether those tables are part of the postgres installation.
> If so, what are they used for? All these tables are empty. If not, do you
> know what application might create/use them.

Looks like you installed pgaccess, is my guess.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] track functions call

2011-05-11 Thread Cédric Villemain
2011/5/10 Mark :
> Thanks for the quick reply,
> but I want to know which of these methods is called in a concrete
> situation. I suppose that ts_rank calls only one of these functions
> (ts_rank_wttf, ts_rank_wtt, ts_rank_ttf, ts_rank_tt). Is that possible?

Yes, same table:
select proname, prosrc, prorettype, proargtypes from pg_proc
where proname = 'ts_rank';

 proname |    prosrc    | prorettype |    proargtypes
---------+--------------+------------+-------------------
 ts_rank | ts_rank_wttf |        700 | 1021 3614 3615 23
 ts_rank | ts_rank_wtt  |        700 | 1021 3614 3615
 ts_rank | ts_rank_ttf  |        700 | 3614 3615 23
 ts_rank | ts_rank_tt   |        700 | 3614 3615

select oid, typname from pg_type
where oid in ('1021', '3614', '3615', '23', '700');

  oid | typname
------+----------
   23 | int4
  700 | float4
 1021 | _float4
 3614 | tsvector
 3615 | tsquery

so you can work out which argument types go with which variant of the
ts_rank function:

ts_rank([ weights float4[], ] vector tsvector,
query tsquery [, normalization integer ]) returns float4
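
For instance (a quick sketch), each call form dispatches to one of those
internal functions:

-- two arguments: resolves to ts_rank_tt (tsvector, tsquery)
SELECT ts_rank(to_tsvector('english', 'the quick brown fox'),
               to_tsquery('english', 'fox'));

-- with a leading weights array: resolves to ts_rank_wtt
-- (float4[], tsvector, tsquery)
SELECT ts_rank('{0.1, 0.2, 0.4, 1.0}'::float4[],
               to_tsvector('english', 'the quick brown fox'),
               to_tsquery('english', 'fox'));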


> Thanks for reply
> Mark



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Returning NULL to a set returning C type function

2011-05-11 Thread Bborie Park

Don't do that ;-).  You could choose either to not return any row
at all when this happens, or to construct an all-nulls row to return.
ExecMakeTableFunctionResult doesn't want to guess which behavior is
appropriate for your use-case, so it just complains.

regards, tom lane



Thanks Tom.  I'll go the "no return any row" route.

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu



Re: [GENERAL] Regexp match not working.. (SQL help)

2011-05-11 Thread David Johnston
> 
> I am using this SQL:
> 
>SELECT id FROM table1
>WHERE   mytext   ~*   E'sub1|sub2|sub3|sub4...'
>LIMIT 10;
> 
> This is basically working, but some of the "mytext" values being returned
> do not contain any of these substrings. Am I doing the POSIX regexp
> wrongly? The same thing works when I try it in PHP with preg_match, but
> not in PostgreSQL. I have tried several variations too:
> 
>WHERE   mytext   ~*   E'(sub1)(sub2)(sub3)(sub4)...'
> 

When requesting help with RegEx you are strongly advised to supply the text
of the records that you are concerned about (in this case you say you are
getting false positives so provide the contents of "mytext" for those
records) AND the exact expression you are using.

You seem to indicate that mytext contains a "text document" and that you
are attempting to find specific words in that document.  The expression
format supplied does not take word boundaries into consideration.  If any
part of a word matches "subX" then the pattern will match.

You may want to consider finding one or more books on RegEx.  The fact that
you consider E'(sub1)(sub2)...' to be a variation of E'sub1|sub2...'
indicates that the issue is likely not PostgreSQL itself but your
understanding of RegEx.

You may also want to try the "regexp_matches(...)" function in PostgreSQL.
Instead of just evaluating true/false it returns an array of all the matches
that were found.  Using this you would be able to see exactly what text
PostgreSQL is matching with your expression.
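
For example (a sketch reusing the table and column names from your query):

-- show the exact text fragments the engine matched, per row;
-- rows with no match produce no output here
SELECT id, regexp_matches(mytext, 'sub1|sub2|sub3|sub4', 'gi') AS matched
FROM table1
LIMIT 10;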

Figuring out why something is matching that should not be (false positive)
is fairly easy since the engine itself will tell you what it matched.  The
hard situation is the false-negative, where you think something should match
and it does not.

David J.







Re: [GENERAL] Returning NULL to a set returning C type function

2011-05-11 Thread Tom Lane
Bborie Park  writes:
> I have a C type function that returns a set of a type.  The problem I 
> have is that the underlying function may return NULL.  When the 
> underlying function returns NULL, I get the error message:

> ERROR:  function returning set of rows cannot return null value

> I'm wondering what is the proper way to correct this problem?

Don't do that ;-).  You could choose either to not return any row
at all when this happens, or to construct an all-nulls row to return.
ExecMakeTableFunctionResult doesn't want to guess which behavior is
appropriate for your use-case, so it just complains.

regards, tom lane



Re: [GENERAL] Query to return every 1st Sat of a month between two dates

2011-05-11 Thread Michael Nolan
On Wed, May 11, 2011 at 10:22 AM, Alex -  wrote:

>  Hi,
> is there an easy way to return the date of every first Saturday of a month
> in a date range, i.e. 2011-2013?
>
>
This is one way to do it; there are others:

select '2011-01-01'::date + s.a as dates
from generate_series(0, 1095) as s(a)
where to_char('2011-01-01'::date + s.a, 'dd') between '01' and '07'
and to_char('2011-01-01'::date + s.a, 'dy') = 'sat';
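
Another sketch, stepping month by month instead of day by day (it
assumes the standard dow numbering, where Saturday = 6, and 8.4+ for the
timestamp form of generate_series):

select d::date + ((6 - extract(dow from d)::int) % 7) as first_saturday
from generate_series('2011-01-01'::date, '2013-12-01'::date,
                     interval '1 month') as g(d);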

--
Mike Nolan


[GENERAL] Returning NULL to a set returning C type function

2011-05-11 Thread Bborie Park
I have a C type function that returns a set of a type.  The problem I 
have is that the underlying function may return NULL.  When the 
underlying function returns NULL, I get the error message:


ERROR:  function returning set of rows cannot return null value

I'm wondering what the proper way to correct this problem is.  Or is 
there a correct workaround?


So, for my sample code:

CREATE TYPE histogram AS (
min double precision,
max double precision,
count integer,
proportion double precision
);

CREATE OR REPLACE FUNCTION _st_histogram(rast raster, nband int, 
hasnodata boolean, sample_percent double precision, bins int, width 
double precision[], right boolean)

RETURNS SETOF histogram
AS '$libdir/rtpostgis-2.0','RASTER_histogram'
LANGUAGE 'C' IMMUTABLE STRICT;

PG_FUNCTION_INFO_V1(RASTER_histogram);
Datum RASTER_histogram(PG_FUNCTION_ARGS)
{
	/* if any of the args are incorrect, return NULL */
	PG_RETURN_NULL();
}

Thanks,
Bborie

--
Bborie Park
Programmer
Center for Vectorborne Diseases
UC Davis
530-752-8380
bkp...@ucdavis.edu



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Oleg Bartunov

On Wed, 11 May 2011, Stanislav Raskin wrote:





Yes, loading a large dictionary is known to be a fairly expensive
operation.  There's been discussions about how to make it cheaper, but
nothing's been done yet.

   regards, tom lane


Hi Tom,

thanks for the quick response. Bad news for me ;(
We develop ajax-driven web apps, which sort of rely on quick calls to data
services. Each call to a service opens a new connection. This makes the
search service, if using fts and ispell, about 100 times slower than a
"dumb" ILIKE-implementation.

Is there any hack or compromise to achieve good performance without
losing fts ability?
I am thinking, for example, of a way to permanently keep a loaded
dictionary in memory instead of loading it for every connection. As I
wrote in response to Pavel Stehule's post, connection pooling is not
really an option.
Our front-end is strictly PHP, so I was thinking about using a single
persistent connection
(http://de.php.net/manual/en/function.pg-pconnect.php) for all calls. Is
there some sort of major disadvantage in this approach from the database
point of view?

Kind regards

--

Stanislav Raskin






Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83



[GENERAL] Read Committed transaction with long query

2011-05-11 Thread Durumdara
Hi!

Two tables:
Main
Lookup

The query is:
select Main.*, Lookup.Name
from Main
left join Lookup on (Main.Type_ID = Lookup.ID)

Lookup:
ID Name
1 Value1
2 Value2
3 Value3

There are many records in the Main table (for example, 1 million).

What happens in this case (C = connection):

C1.) begin read committed
C1.) starting this query
C1.) query running
C2.) begin read committed
C2.) update Lookup set Name = "New2" where ID = 2
C2.) commit
C1.) query running
C1.) query finished

Is it possible that the first joins (before C2's modification) contain
"Value2" at the beginning of the query, while joins at the end of the
query contain "New2"?
That is, can the long query see an inconsistent state because of C2's
change, for example mixing "Value2" and "New2"?

Thanks for your help!

Regards:
dd



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Stanislav Raskin
On 11.05.11 16:42, "Pavel Stehule"  wrote:


>I wrote a
>patch that stores loaded dictionary in shared memory.

Hi Pavel,

very interesting. I will give it a closer look.

What do you think about using ispell to create, store and index tsvectors,
but at the same time using the stemmer to create tsqueries? That way, I hope
to create a fast user experience while searching, but at the same time
have more detailed tsvectors and more matches.

Regards

--

Stanislav Raskin
> 





Re: [GENERAL] Regexp match not working.. (SQL help)

2011-05-11 Thread Tom Lane
Phoenix Kiula  writes:
> I am using this SQL:

>SELECT id FROM table1
>WHERE   mytext   ~*   E'sub1|sub2|sub3|sub4...'
>LIMIT 10;

> This is basically working, but some of the "mytext" values being
> returned do not contain any of these substrings.

[ raised eyebrow... ]  Could we see a concrete example?

One potential issue is that, depending on which PG version and locale
and database encoding you are using, case-insensitive matching might
not work properly on non-ASCII letters.  Other than that, the only
gotcha I can think of is having regexp special characters in the
substrings and failing to escape them properly.

regards, tom lane



Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread Tom Lane
mephysto  writes:
> my goal is for example:  define a typeFoo (id as int, name as varchar) in
> postgres, define an object in java objFoo (id as int, name string), define a
> stored function in posgres return a typeFoo

You'd probably be better off asking about this in the pgsql-jdbc list.

regards, tom lane



Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread Adrian Klaver

On 05/11/2011 08:03 AM, mephysto wrote:

Hmm, no.
My goal is, for example: define a typeFoo (id as int, name as varchar) in
postgres, define an object objFoo in java (id as int, name as String), and
define a stored function in postgres returning a typeFoo:


create or replace function getFoo() returns typeFoo as
$$
begin
..
end;
$$

I would like to connect to postgres via jdbc and call the stored function
getFoo. At this point I would set the properties of objFoo (id and name)
with the id and name retrieved from the stored function.

Is resultset the only way to achieve this goal?


As far as I can tell, yes. If the values for id and name never changed, 
you could just hard-code them as properties of objFoo. Assuming that the 
function getFoo() changes the values, the only way to know what the new 
values are is to query the database and get them from the result set.
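
For what it's worth, a sketch of the SQL side using the getFoo()/typeFoo
example from this thread; the JDBC side then reads the fields from the
ResultSet as usual:

-- expand the composite result into ordinary columns
SELECT id, name FROM getFoo();

-- equivalent, using field selection on the composite value
SELECT (f).id, (f).name FROM getFoo() AS f;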




Thanks.

Meph



--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Tom Lane
Stanislav Raskin  writes:
> Is there any way of hack or compromise to achieve good performance without
> losing fts ability?
> I am thinking, for example, of a way to permanently keep a loaded
> dictionary in memory instead of loading it for every connection. As I
> wrote in response to Pavel Stehule's post, connection pooling is not
> really an option.

You really, really, really need to fix whatever is preventing you from
using pooling.  Opening a database connection to run one query is just
horridly inefficient.  Loading a dictionary has evidently raised it past
the threshold where you couldn't ignore it anymore, but that doesn't
mean you weren't paying through the nose before.  A postgres backend
just isn't a lightweight object --- there is all sorts of stuff it has
to cache before it's really up to speed.

regards, tom lane



[GENERAL] Query to return every 1st Sat of a month between two dates

2011-05-11 Thread Alex -

Hi,
is there an easy way to return the date of every first Saturday of a month
in a date range, i.e. 2011-2013?
Any help would be appreciated.
Thanks
Alex

[GENERAL] Regexp match not working.. (SQL help)

2011-05-11 Thread Phoenix Kiula
I have a text column in a table, which I want to search through --
seeking the occurrence of about 300 small strings in it.

Let's say the table is like this:

table1 (
 id   bigint primary key
,mytext   text
,mydate  timestamp without time zone
);


I am using this SQL:

   SELECT id FROM table1
   WHERE   mytext   ~*   E'sub1|sub2|sub3|sub4...'
   LIMIT 10;

This is basically working, but some of the "mytext" values being
returned do not contain any of these substrings. Am I doing the
POSIX regexp wrongly? The same thing works when I try it in PHP with
preg_match, but not in PostgreSQL. I have tried several variations
too:

   WHERE   mytext   ~*   E'(sub1)(sub2)(sub3)(sub4)...'

None of this is working. I cannot seem to filter out the results that do
NOT contain any of those strings.

Appreciate any pointers!

Thanks!



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Pavel Stehule
2011/5/11 Stanislav Raskin :
> On 11.05.11 16:42, "Pavel Stehule"  wrote:
>
>
>>I wrote a
>>patch that stores loaded dictionary in shared memory.
>
> Hi Pavel,
>
> very interesting. I will give it a closer look.
>
> What do you think about using ispell to create, store and index tsvectors,
> but at the same time to use the stemmer to create tsqueries? Thus, I hope
> to create a fast user experience while searching, but at the same time
> have more detailed tsvectors and more matches.
>

I don't know for certain, but there can be a risk: the two different
methods can return two different results.

Regards

Pavel

> Regards
>
> --
>
> Stanislav Raskin
>>
>
>
>



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread tv
>>
>>
>>
>>Yes, loading a large dictionary is known to be a fairly expensive
>>operation.  There's been discussions about how to make it cheaper, but
>>nothing's been done yet.
>>
>>regards, tom lane
>
> Hi Tom,
>
> thanks for the quick response. Bad news for me ;(
> We develop ajax-driven web apps, which sort of rely on quick calls to data
> services. Each call to a service opens a new connection. This makes the
> search service, if using fts and ispell, about 100 times slower than a
> "dumb" ILIKE-implementation.
>
> Is there any hack or compromise to achieve good performance without
> losing fts ability?
> I am thinking, for example, of a way to permanently keep a loaded
> dictionary in memory instead of loading it for every connection. As I
> wrote in response to Pavel Stehule's post, connection pooling is not
> really an option.
> Our front-end is strictly PHP, so I was thinking about using a single
> persistent connection
> (http://de.php.net/manual/en/function.pg-pconnect.php) for all calls. Is
> there some sort of major disadvantage in this approach from the database
> point of view?
>
> Kind regards

Hi,

opening a completely new connection for each request may be a bit
expensive, so I'd recommend using some kind of connection pooling,
especially when you're doing 'small' transactions (because that's when the
overhead matters).

We had exactly the same problem and persistent connection solved it.

But it has some drawbacks too - each connection has its own copy of the
dictionary. So if the dictionary takes 30MB and you have 10 connections,
then 300MB of memory is used.

regards
Tomas




Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread mephysto
Hmm, no.
My goal is, for example: define a typeFoo (id as int, name as varchar) in
postgres, define an object objFoo in java (id as int, name as String), and
define a stored function in postgres returning a typeFoo:


create or replace function getFoo() returns typeFoo as
$$
begin
..
end;
$$

I would like to connect to postgres via jdbc and call the stored function
getFoo. At this point I would set the properties of objFoo (id and name)
with the id and name retrieved from the stored function.

Is resultset the only way to achieve this goal?

Thanks.

Meph




Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Stanislav Raskin

On 11.05.11 15:45, "Pavel Stehule"  wrote:

>It is expected behaviour :( . Loading an ispell dictionary is very slow.
>
>Use a german snowball stemmer instead.
>
>You can also use some connection pooling software.


Thank you for the response.
Is the german_stem dictionary supplied with postgresql a snowball stemmer?
If yes, it sure is incredibly fast, but it yields much worse results and
thus fewer and worse matches for search queries.

Using connection pooling is... difficult in my situation, to say the
least. We currently use quite a complex pgcluster/corosync setup for
multi-master replication, load balancing and high availability.
Introducing connection pooling into this setup could turn out to be
quite a big project.

--

Stanislav Raskin






Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Stanislav Raskin
>
>
>
>Yes, loading a large dictionary is known to be a fairly expensive
>operation.  There's been discussions about how to make it cheaper, but
>nothing's been done yet.
>
>regards, tom lane

Hi Tom,

thanks for the quick response. Bad news for me ;(
We develop ajax-driven web apps, which sort of rely on quick calls to data
services. Each call to a service opens a new connection. This makes the
search service, if using fts and ispell, about 100 times slower than a
"dumb" ILIKE-implementation.

Is there any hack or compromise to achieve good performance without
losing fts ability?
I am thinking, for example, of a way to permanently keep a loaded
dictionary in memory instead of loading it for every connection. As I
wrote in response to Pavel Stehule's post, connection pooling is not
really an option.
Our front-end is strictly PHP, so I was thinking about using a single
persistent connection
(http://de.php.net/manual/en/function.pg-pconnect.php) for all calls. Is
there some sort of major disadvantage in this approach from the database
point of view?

Kind regards

--

Stanislav Raskin





Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Pavel Stehule
Hello

2011/5/11 Stanislav Raskin :
>
> On 11.05.11 15:45, "Pavel Stehule"  wrote:
>
>>It is expected behaviour :( . Loading an ispell dictionary is very slow.
>>
>>Use a german snowball stemmer instead.
>>
>>You can also use some connection pooling software.
>
>
> Thank you for the response.
> Is the german_stem dictionary supplied with postgresql a snowball stemmer?
> If yes, it sure is incredibly fast, but it yields much worse results and
> thus fewer and worse matches for search queries.
>
> Using connection pooling is... difficult in my situation, to say the
> least. We currently use quite a complex pgcluster/corosync setup for
> multi-master replication, load balancing and high availability.
> Introducing connection pooling into this setup could turn out to be
> quite a big project.
>

German_stem is part of the distribution. I think the stemmer's results
are usable, because reports about slow speed are infrequent.

There is no Czech stemmer, so we have to use ispell. I wrote a patch
that stores the loaded dictionary in shared memory. You can find the
source code in the pgsql-hackers mailing list archive. But it isn't well
tested and it is just a prototype - not accepted into pg. You can test it.
Sometimes people use a >>simple<< configuration here. It isn't the best,
but it is fast.

Regards

Pavel Stehule

> --
>
> Stanislav Raskin
>
>
>
>



[GENERAL] Recursive select / updates

2011-05-11 Thread Alex -


Hi,
I have a problem where I need to calculate totals in a table that are based
on previous values and calculations. I am currently doing that in a
function, which works fine, but I was wondering if there is a more elegant
or more efficient way to do this.

Here is an example table, ordered by row number. The total is based on the
previous rows. Note that the difference between consecutive a values is
not always 1.

 row |  a |   b   |  total
-----+----+-------+--------
   1 |  3 |    11 |      0
   2 |  5 |    34 |     22
   3 |  6 |    64 |     67
   4 |  7 |   525 |    176
   5 |  8 |  9994 |    810
   6 |  9 | 26216 |  11438
   7 | 10 |  3363 |  48282
   8 | 11 |  1147 |  88489
   9 | 12 |  1037 | 129843
  10 | 13 |   200 | 172234

Total row 2: (a2-a1)*b1, i.e. (5-3)*11 = 22
Total row 3: (a3-a2)*b2 + (a3-a1)*b1, i.e. (6-5)*34 + (6-3)*11 = 34+33 = 67
Total row 4: (a4-a3)*b3 + (a4-a2)*b2 + (a4-a1)*b1, i.e. (7-6)*64 + (7-5)*34
+ (7-3)*11 = 64+68+44 = 176

Has anyone a clever solution for that problem?
Thanks for any suggestions.
Alex
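
One set-based approach (a sketch, assuming a table t(rn, a, b) holding the
data above, and PostgreSQL 8.4+ for window functions): the total for row n
is the sum over all earlier rows i of (a_n - a_i)*b_i, which factors into
a_n * sum(b_i) - sum(a_i*b_i), so two running sums over the preceding rows
are enough:

select rn, a, b,
       coalesce(a * sum(b) over w - sum(a * b) over w, 0) as total
from t
window w as (order by rn rows between unbounded preceding and 1 preceding)
order by rn;

For row 4 this gives 7*(11+34+64) - (3*11 + 5*34 + 6*64) = 763 - 587 = 176,
matching the table.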

Re: [GENERAL] temporarily disabling foreign keys

2011-05-11 Thread Seb
On Tue, 10 May 2011 14:29:48 -0400,
Vick Khera  wrote:

> in recent versions of postgres, there is a "replication" mode designed
> specifically for replication software to disable FK's and other
> triggers.  Perhaps investigate that.

> the other option is to make your FK's deferrable, and do all your
> delete in one big transaction with the FK checks deferred.  not sure
> if that will help.

> either that, or order your deletes and loads in a way such that the
> FKs remain satisfied.  unless you have circular dependencies, this
> should be possible.

Excellent, this last suggestion was actually a very simple and efficient
solution.  However, I got curious about the deferrable FK's, which might
make this and other scripts more flexible.
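
For reference, a sketch of what the deferrable variant looks like
(hypothetical table and constraint names):

ALTER TABLE child
  ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent (id)
  DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS child_parent_fk DEFERRED;
-- delete and reload in any convenient order;
-- the FK is only checked at COMMIT
COMMIT;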

Thanks everyone for your feedback,

-- 
Seb




Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread Adrian Klaver
On Wednesday, May 11, 2011 7:07:25 am mephysto wrote:
> This is the point!
> 
> I would like to know if an alternative to a result set exists for
> retrieving custom data types from postgres via jdbc.
> 
> Did I explain myself?

Not sure :) The information is in the server and the JDBC driver is external 
to the server, so a query is involved. If I understand correctly, what you 
want is a query that fetches the data types available in a server directly 
from the server, without parsing the results of arbitrary queries.  I'm not 
sure exactly what information you are looking for, but the system catalog 
pg_type might be a start.
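
For example, a possible starting point (a sketch) for listing user-defined
composite types:

-- typtype = 'c' marks composite types; relkind = 'c' keeps only
-- free-standing composites, excluding table row types
SELECT n.nspname, t.typname
FROM pg_type t
JOIN pg_namespace n ON n.oid = t.typnamespace
JOIN pg_class c ON c.oid = t.typrelid
WHERE t.typtype = 'c'
  AND c.relkind = 'c'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');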

> 
> --

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread mephysto
This is the point!

I would like to know if an alternative to a result set exists for retrieving
custom data types from postgres via jdbc.

Did I explain myself?




Re: [GENERAL] Custom types and JDBC

2011-05-11 Thread Dave Cramer
Hi,

I'm not sure what you mean by "without a result set"? There is no real
way to get information back from the driver except by using a result set.

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca




On Wed, May 11, 2011 at 3:19 AM, mephysto  wrote:
> Hi,
> I would like to use custom types in several stored functions, and I would
> like my stored functions to return these custom types.
> I would like to launch the stored functions and retrieve the results via
> the JDBC interface: is it possible to map postgres custom types to java
> objects without using a resultset?
>
> Thanks in advance.
>
> Meph



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Tom Lane
Stanislav Raskin  writes:
> The problem is, that if I open a new connection to the database and do
> something like this
> SELECT to_tsquery('german_de', 'abcd');
> it takes A LOT of time for the query to complete for the first time. About
> 1-1.5s. If I submit the same query for a second, third, fourth time and so
> on, it takes only some 10-20ms, which is what I would expect.

> It almost seems as if the dictionary is somehow analyzed or indexed and the
> results cached for each connection, which seems counter-intuitive to me.

Yes, loading a large dictionary is known to be a fairly expensive
operation.  There's been discussions about how to make it cheaper, but
nothing's been done yet.

regards, tom lane



Re: [GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Pavel Stehule
Hello

2011/5/11 Stanislav Raskin :
> Hello everybody,
> I was experimenting with the FTS feature on postgres 8.3.4 lately and
> encountered a weird performance issue when using a custom FTS configuration.
> I use this german ispell dictionary, re-encoded to utf8:
> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/dicts/ispell/ispell-german-compound.tar.gz
> With the following configuration:
>
> CREATE TEXT SEARCH CONFIGURATION public.german_de (COPY =
> pg_catalog.german);
>
> CREATE TEXT SEARCH DICTIONARY german_de_ispell (
>
>     TEMPLATE = ispell,
>
>     DictFile = german_de_utf8,
>
>     AffFile = german_de_utf8,
>
>     StopWords = german_de_utf8
>
> );
>
> ALTER TEXT SEARCH CONFIGURATION german_de
>
>     ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
>
>                       word, hword, hword_part
>
>     WITH german_de_ispell, german_stem;
>
> So far so good. Indexing and creation of tsvectors works like a charm.
> The problem is, that if I open a new connection to the database and do
> something like this
> SELECT to_tsquery('german_de', 'abcd');
> it takes A LOT of time for the query to complete for the first time. About
> 1-1,5s. If I submit the same query for a second, third, fourth time and so
> on, it takes only some 10-20ms, which is what I would expect.
> It almost seems as if the dictionary is somehow analyzed or indexed and the
> results cached for each connection, which seems counter-intuitive to me.
> After all, the dictionaries should not change that often.
> Did I miss something or did I do something wrong?
> I'd be thankful for any advice.
> Kind Regards

It is expected behaviour :( . Loading an ispell dictionary is very slow.

Use a german snowball stemmer instead.

You can also use some connection pooling software.

Regards

Pavel Stehule

> --
>
> Stanislav Raskin
>
> livn GmbH
> Campus Freudenberg
> Rainer-Gruenter-Str. 21
> 42119 Wuppertal
>
> +49(0)202-8 50 66 921
> ras...@livn.de
> http://www.livn.de
>
> livn
> local individual video news GmbH
> Registergericht Wuppertal HRB 20086
>
> Geschäftsführer:
> Dr. Stefan Brües
> Alexander Jacob



[GENERAL] ts_rank vs ts_rank_cd

2011-05-11 Thread Mark
Hi, I have tested ts_rank and ts_rank_cd for searching in Wikipedia, and I
would like to know which of them is better for that. I have read that
ts_rank_cd is better for long sentences, whereas ts_rank is better for
single terms. But generally, which of them is better to use?
Thanks for any reply.
Mark




Re: [GENERAL] track functions call

2011-05-11 Thread Mark
Thanks for the quick reply,
but I want to know which of these methods is called in a concrete situation.
I suppose that ts_rank calls only one of these functions (ts_rank_wttf,
ts_rank_wtt, ts_rank_ttf, ts_rank_tt). Is that possible?
Thanks for any reply.
Mark




[GENERAL] full text search to_tsquery performance with ispell dictionary

2011-05-11 Thread Stanislav Raskin
Hello everybody,

I was experimenting with the FTS feature on postgres 8.3.4 lately and
encountered a weird performance issue when using a custom FTS configuration.

I use this german ispell dictionary, re-encoded to utf8:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/dicts/ispell/ispell-german-compound.tar.gz

With the following configuration:

CREATE TEXT SEARCH CONFIGURATION public.german_de (COPY =
pg_catalog.german);



CREATE TEXT SEARCH DICTIONARY german_de_ispell (

TEMPLATE = ispell,

DictFile = german_de_utf8,

AffFile = german_de_utf8,

StopWords = german_de_utf8

);



ALTER TEXT SEARCH CONFIGURATION german_de

ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,

  word, hword, hword_part

WITH german_de_ispell, german_stem;



So far so good. Indexing and creation of tsvectors works like a charm.

The problem is, that if I open a new connection to the database and do
something like this

SELECT to_tsquery('german_de', 'abcd');

it takes A LOT of time for the query to complete for the first time. About
1-1.5s. If I submit the same query for a second, third, fourth time and so
on, it takes only some 10-20ms, which is what I would expect.

It almost seems as if the dictionary is somehow analyzed or indexed and the
results cached for each connection, which seems counter-intuitive to me.
After all, the dictionaries should not change that often.

Did I miss something or did I do something wrong?

I'd be thankful for any advice.

Kind Regards

--

Stanislav Raskin

livn GmbH
Campus Freudenberg
Rainer-Gruenter-Str. 21
42119 Wuppertal

+49(0)202-8 50 66 921
ras...@livn.de
http://www.livn.de

livn
local individual video news GmbH
Registergericht Wuppertal HRB 20086

Geschäftsführer:
Dr. Stefan Brües
Alexander Jacob




[GENERAL] ERROR: cannot execute nextval() in a read-only transaction

2011-05-11 Thread Dae-man Yang
I upgraded postgresql from 8.4.2 to 9.0.4,
but I have one problem:
the error message 'cannot execute nextval() in a read-only transaction'.

Please help me.


[Version 8.4.2]
DEVDB=# select version();
                             version
-----------------------------------------------------------------
 PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 3.4.6 20060404 (Red Hat 3.4.6-10), 64-bit
(1 row)


DEVDB=# SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY;
SET
DEVDB=#  select nextval('ws_blog_sns_inf_sns_inf_master_seq_seq');
 nextval
---------
       5
(1 row)

DEVDB=# SHOW transaction_read_only;
 transaction_read_only
---
 on
(1 row)

[Version 9.0.4]

web_dev=# select version();
  version
---
 PostgreSQL 9.0.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
(1 row)

web_dev=# SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY;
SET
web_dev=# SHOW transaction_read_only;
 transaction_read_only
---
 on
(1 row)

web_dev=#  select nextval('ws_blog_sns_inf_sns_inf_master_seq_seq');
ERROR:  cannot execute nextval() in a read-only transaction



[GENERAL] Custom types and JDBC

2011-05-11 Thread mephysto
Hi,
I would like to use custom types in several stored functions, and I would
like my stored functions to return these custom types.
I would like to launch the stored functions and retrieve the results via the
JDBC interface: is it possible to map postgres custom types to java objects
without using a resultset?

Thanks in advance.

Meph




[GENERAL] Postgres federation

2011-05-11 Thread Tim Uckun
I want to set up a central database and several satellite databases
which use some of the data from the central database.

For example

Say my central database contains people records, with a many to many
relationship with clients records.

Each client has their own database but needs read, write access to
their people. They will have other data of course which they will want
to join with their people table.  Each client has a different business
and different requirements but they are all dealing with humans.

Initially all the databases will be on the same server in the same
postgres cluster.

I have looked into dblink but I don't think it's going to work unless
I keep a local people table and figure out a way to sync the local
table with the central table using dblink queries.

The other way would be set up some sort of bidirectional replication
using an off the shelf product like bucardo or londiste.  When
replicating from central to peripheral databases it would need to use
a query or a view so it doesn't replicate all the records but when
replicating back it could use the entire table.  My main worry in that
scenario is that the same person record could show up in multiple
client databases and therefore can have collision problems.

Has anybody set up anything like this? Is there some other way to do
this that is cleaner?

TIA.



[GENERAL] Re: One-off attempt at catalog hacking to turn bytea column into text

2011-05-11 Thread Noah Misch
On Tue, May 10, 2011 at 04:31:37PM -0400, Vlad Romascanu wrote:
> As a one-off attempt to change a large table's 'bytea' column to
> 'text' with minimal I/O (where the 'bytea' contents is already valid
> UTF8 and the database encoding is also UTF8, and the column is not
> part of any index or anything involving collation), how unsafe is the
> following?
> 
> UPDATE pg_attribute SET atttypid='text'::regtype::oid WHERE
> attrelid=('schema_name.table_name')::regclass AND attname='col_name'
> AND atttypid='bytea'::regtype::oid;

Do a "LOCK TABLE schema_name.table_name" earlier in the same transaction.

Any of the following potentially complicates things:
- data is not valid text in the server encoding, including NUL bytes
- use in an index (you mentioned this does not apply)
- use in a CHECK or FOREIGN KEY constraint
- default expression on the column
- views or rules referencing the column
- referenced directly in a trigger definition (UPDATE OF col_name, WHEN ...)

A useful, though not foolproof, way to check whether you've snagged on any of
those is to take schema dumps of a test database after (a) doing the change
normally and (b) doing it this way, then compare.

> Additionally, if the 'bytea' happenned to also explicitly contain a
> trailing NUL prior to the "conversion" (i.e. after the hack, the last
> byte in the 'text' value would be NUL), would there be any obvious
> problems with the above hack?

Yes; this will break things in the general case.  Two text datums that differ
only in the presence of this trailing NUL will compare as unequal by texteq().
The octet_length() function will include the NUL byte in its count.  If you use
the column in very restricted ways, you might end up okay.  Note that you could
avoid these problems by updating the bytea columns to remove these NUL bytes
before making the catalog update.  It could still be much I/O, but at least you
wouldn't have a table lock while doing it.
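
A sketch of that cleanup, keeping the placeholder names from upthread
(test it on a copy first):

-- strip a single trailing zero byte where present
UPDATE schema_name.table_name
SET col_name = substring(col_name from 1 for octet_length(col_name) - 1)
WHERE octet_length(col_name) > 0
  AND substring(col_name from octet_length(col_name) for 1)
      = decode('00', 'hex');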

nm




Re: Fwd: Re: [GENERAL] SSDD reliability

2011-05-11 Thread Toby Corkindale
BTW, I saw a news article today about a brand of SSD that was claiming 
to have the price effectiveness of MLC-type chips, but with a lifetime of 
4TB/day over 5 years.


http://www.storagereview.com/anobit_unveils_genesis_mlc_enterprise_ssds

which also links to:
http://www.storagereview.com/sandforce_and_ibm_promote_virtues_mlcbased_ssds_enterprise

which is a similar tech - much improved erase-cycle-counts on MLC.

No doubt this'll be common in all SSDs in a year or so then!



[GENERAL] invalid byte sequence for encoding "UTF8": 0xf1612220

2011-05-11 Thread AI Rumman
I am trying to migrate a database from Postgresql 8.2 to Postgresql 8.3 and
getting the following error:

pg_restore: [archiver (db)] Error from TOC entry 2764; 0 29708702 TABLE DATA
originaldata postgres
pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence for
encoding "UTF8": 0xf1612220
HINT:  This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_encoding".
CONTEXT:  COPY wi_originaldata, line 3592

I took a dump from 8.2 server and then tried to restore at 8.3.

Both the client_encoding and server_encoding are UTF8 at both the servers.

Table "public.data"
   Column|  Type
|   Modifiers
-++
 orgid   | integer|
 id  | integer| not null default
nextval(('"data"'::text)::regclass)
 datatypecode| character varying(15)  |
 batchname   | character varying(60)  |
 filename| character varying(60)  |
 encoding| character varying(20)  |
 errormessage| character varying(255) |
 originaldata_backup | bytea  |
 processeddata   | bytea  |
 validatedflag   | smallint   |
 processedflag   | smallint   |
 createddate | date   |
 createdtime | time without time zone |
 modifieddate| date   |
 modifiedtime| time without time zone |
 processeddate   | date   |
 processedtime   | time without time zone |
 deletedflag | smallint   |
 originaldata| text   |
Indexes:
"data_pkey" PRIMARY KEY, btree (id)

Any help will be appreciated.