On Fri, 21 Aug 2009, xaviergxf wrote:
Hi,
I'm using php and full text on postgresql 8.3 for indexing html
descriptions. I have no access to the postgresql server, since I use a
shared hosting service.
To improve search and performance, I want to do the following:
Strip all html tags then use my php script to remove more stop words
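Since the poster can't install anything server-side, the tag stripping has to happen in the application before the text reaches to_tsvector. The thread is about PHP; purely as an illustrative sketch, the same idea in Python using only the standard library:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text nodes and drops all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_tags(html: str) -> str:
    # Feed the HTML through the parser and keep only the text.
    p = TextExtractor()
    p.feed(html)
    return "".join(p.parts)

print(strip_tags("<p>Hello <b>world</b></p>"))  # Hello world
```

Stop-word removal would then run over the plain text this returns.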
good catch - it's because i'm used to working in plperlu.
unfortunately commenting out those lines makes no difference for this
particular data (that i linked in my original email); it's still
corrupted:
# ./bytea.pl
37652cf91fb8d5e41d3a90ea3a22ea61 != ce3fc63b88993af73fb360c70b7ec965
nathan
In these situations I would suggest using a real (not that PG's FT is
not real...) search engine
like MNOGoSearch, Lucene or others...
Ries
On Fri, 2009-08-21 at 11:30 +1000, Yaroslav Tykhiy wrote:
> Hi there,
>
> On 19/08/2009, at 8:38 PM, Craig Ringer wrote:
> > You should also `chkdsk' your file system(s) and use a SMART
> > diagnostic tool to test your hard disk (assuming it's a single ATA
> > disk).
>
> By the way, `chkdsk'
Nathan Jahnke wrote:
> [...]
> my $encodeddata = $data;
> $encodeddata =~ s!(\\|[^ -~])!sprintf("\\%03o",ord($1))!ge; #prepare
> data for bytea column storage
> [...]
> my $insert_sth = $connection->prepare('insert into testtable (data)
> values (?) returning id');
> $insert_sth->execute($encod
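The quoted substitution escapes backslashes and every byte outside the printable ASCII range (0x20 ' ' through 0x7e '~') as a \NNN octal sequence, i.e. the bytea escape input format. For anyone following along who doesn't read Perl, a rough Python equivalent (a sketch of the same logic, not Nathan's actual code):

```python
def bytea_escape(data: bytes) -> str:
    # Mirror of the Perl s!(\\|[^ -~])!sprintf("\\%03o",ord($1))!ge:
    # backslash and anything outside ' '..'~' become \NNN (octal).
    out = []
    for b in data:
        if b == 0x5C or not (0x20 <= b <= 0x7E):
            out.append("\\%03o" % b)
        else:
            out.append(chr(b))
    return "".join(out)

print(bytea_escape(b"a\\b\x00"))  # a\134b\000
```

Note that a string built this way is the bytea *input* format; if it were embedded in an E'' literal instead of bound through a placeholder, each backslash would need doubling again for the SQL lexer.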
On Aug 19, 2009, at 10:33 PM, Craig Ringer wrote:
On Wed, 2009-08-19 at 08:58 -0400, Arturo Pérez wrote:
19-Aug 02:24 bacula-dir JobId 1951: Fatal error: sql_create.c:789
Fill Path table Query failed: INSERT INTO Path (Path) SELECT a.Path
FROM (SELECT DISTINCT Path FROM batch) AS a WHERE NOT
On 21 Aug 2009, at 22:12, Tom Lane wrote:
Alban Hertroys writes:
I have created operators on unit_token for =, <, <=, > and >=, but
either I did something wrong defining my operators or the error is
pointing to some other problem.
The mere fact that the operator is named '=' means nothing to
got some binary data that changes when i insert and retrieve it later
from bytea column:
http://nate.quandra.org/data.bin.0.702601051229191
running 8.3 on debian 5.0.
example:
root=# create database testdb;
CREATE DATABASE
root=# \c testdb
You are now connected to database "testdb".
testdb=# cr
Alban Hertroys writes:
> I defined a type:
> CREATE TYPE unit_token AS (
> base_unit TEXT,
> unit_base INT
> );
> If I try to join on tokens or try to create an index over that column
> I get: "ERROR: could not identify a comparison function for type
> unit_token".
A
thanks that seems to do the trick!
Dave
Miroslav S wrote:
Some time ago, i created this tool: http://apgdiff.sourceforge.net/
Miroslav
David Kerr napsal(a):
Is there a default/standard (free) schema diff tool that's in use in
the community?
I'd like to be able to quickly identify new columns, data changes, new
indexes, etc between 2 schema versions.
Thanks.
Yeah, if it's not free i'll just write my own if it becomes too much of
a pain =)
Dave
Boyd, Craig wrote:
Look here:
http://sqlmanager.net/en/products/postgresql
They aren't cheap, but they seem to work well.
Thanks,
Craig Boyd
David Kerr wrote:
On Fri, Aug 21, 2009 at 12:00:11PM
On Fri, Aug 21, 2009 at 11:43:49AM -0700, David Kerr wrote:
> Is there a default/standard (free) schema diff tool that's in use in the
> community?
check_postgres.pl will compare schemas and report on results.
http://bucardo.org/check_postgres/
--
Joshua Tolley / eggyknap
End Point Corporation
On Fri, Aug 21, 2009 at 01:59:43PM -0500, Boyd, Craig wrote:
- We are on 7.3.0.1666.
-
- ODBC alter scripts do tend to be, um, ugly.
- When you do the CC are restricting the objects you CC? Try to keep it
- as minimal as possible. If I get some time over the weekend I will see
- what I can do.
On Fri, Aug 21, 2009 at 01:58:34PM -0400, Merlin Moncure wrote:
> On Fri, Aug 21, 2009 at 1:13 PM, Sam Mason wrote:
> > On Fri, Aug 21, 2009 at 12:05:51PM -0400, Tom Lane wrote:
> >> We might be able to do that based on the row-returning-subselect
> >> infrastructure being discussed over here:
> >>
On Fri, Aug 21, 2009 at 2:59 PM, Alban Hertroys wrote:
> Hello all,
>
> I'm running into a small problem (while comparing tokenised unit strings in
> case you're interested) with said topic.
>
> I defined a type:
> CREATE TYPE unit_token AS (
> base_unit TEXT,
> unit_base
On Fri, Aug 21, 2009 at 12:00:11PM -0700, Joshua D. Drake wrote:
- On Fri, 2009-08-21 at 11:56 -0700, David Kerr wrote:
- > Is there an easy way, that I'm missing, where I can export a schema from
- > database A and then rename it on load into database B?
-
- pg_dump -s foo|psql bar
Sorry, I was
Hello,
iconv seemed to work fine. I converted the dump file from LATIN1 to UTF8
and kept the changes in the client_encoding (in the dump file) and loaded
them all into the database.
No complaints. I still need to verify the result but at least I got no
restore errors based on character encoding.
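For reference, the conversion iconv performs here is a plain decode/re-encode. A minimal Python sketch of the same transformation (equivalent of `iconv -f LATIN1 -t UTF-8 dump.sql > dump.utf8.sql`, file names hypothetical):

```python
def latin1_to_utf8(data: bytes) -> bytes:
    # Every byte value is a valid LATIN1 character, so decoding
    # cannot fail; bytes >= 0x80 become two-byte UTF-8 sequences.
    return data.decode("latin-1").encode("utf-8")

print(latin1_to_utf8(b"caf\xe9"))  # b'caf\xc3\xa9'
```

Because the byte length can grow, verifying the restored data (as the poster plans to) is still worthwhile.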
Eric Schwarzenbach wrote:
> In other words, if you have
>
> create view C select * from A join B on (A.foo = B.foo);
> create view D select * from C join E on (C.foo= E.foo);
> and you execute some select query on D, does it necessarily join A and B
> before joining the result to E, or might it d
On Fri, 2009-08-21 at 11:56 -0700, David Kerr wrote:
> Is there an easy way, that I'm missing, where I can export a schema from
> database A and then rename it on load into database B?
pg_dump -s foo|psql bar
>
> I use similar functionality in oracle all the time and it's great for
> developme
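`pg_dump -s` by itself recreates the schema under its original name; renaming on load means rewriting the dump text in between. A naive sketch of that rewrite step (the function name and regex are illustrative, and a purely textual replace can clobber strings that merely look like schema references, so review the output before loading):

```python
import re

def rename_schema(dump_sql: str, old: str, new: str) -> str:
    # Rewrite schema-qualified names (old.object -> new.object) in a
    # pg_dump -s text dump. Purely textual: check the result by hand.
    return re.sub(r"\b%s\." % re.escape(old), new + ".", dump_sql)

print(rename_schema("CREATE TABLE appa.t (i int);", "appa", "appb"))
# CREATE TABLE appb.t (i int);
```

A real dump also carries `CREATE SCHEMA`, `SET search_path` and `ALTER ... OWNER` lines mentioning the bare schema name, which would need the same treatment.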
We are on 7.3.0.1666.
ODBC alter scripts do tend to be, um, ugly.
When you do the CC are you restricting the objects you CC? Try to keep it
as minimal as possible. If I get some time over the weekend I will see
what I can do. No promises though as it is already pretty booked. :)
If I do get th
Hello all,
I'm running into a small problem (while comparing tokenised unit
strings in case you're interested) with said topic.
I defined a type:
CREATE TYPE unit_token AS (
base_unit TEXT,
unit_base INT
);
In my table I have:
CREATE TABLE unit (
unitT
Archibald Zimonyi wrote:
>
> Hello,
>
> >Archibald Zimonyi writes:
> >>I went into the generated dump file and (more wish then anything else)
> >>tried to simply change the encoding from LATIN1 to UTF8 and then load the
> >>file, it did not complain about incorrect encoding setting for the load,
Is there an easy way, that I'm missing, where I can export a schema from
database A and then rename it on load into database B?
I use similar functionality in oracle all the time and it's great for
development environments when you're making schema changes or updating a
lot of data. You can me
What version of ERwin are you using?
Thanks,
Craig Boyd
David Kerr wrote:
Is there a default/standard (free) schema diff tool that's in use in
the community?
I'd like to be able to quickly identify new columns, data changes, new
indexes, etc between 2 schema versions.
(and then create an a
we're on v7.2.8
there's no pg specific option so we've been using ODBC as the "database"
type and the alter's it generates are just ugly.
Dave
Boyd, Craig wrote:
What version of ERwin are you using?
Thanks,
Craig Boyd
David Kerr wrote:
Is there a default/standard (free) schema diff tool th
Hello,
Archibald Zimonyi writes:
I went into the generated dump file and (more wish then anything else)
tried to simply change the encoding from LATIN1 to UTF8 and then load the
file, it did not complain about incorrect encoding setting for the load,
however it complained that the characters
Is there a default/standard (free) schema diff tool that's in use in the
community?
I'd like to be able to quickly identify new columns, data changes, new
indexes, etc between 2 schema versions.
(and then create an alter script for the original)
We're using ERWin as our modeling tool, but it
We have run into postgres bug #4907 : stored procedures and changed tables
To say, we have created a function and made changes to the table and the
procedure no longer works giving the error below
ERROR: structure of query does not match function result type
CONTEXT: PL/pgSQL functio
Tom Lane wrote:
> David Waller writes:
>
>> I'm struggling with a database query that under some circumstances returns
>> the error "ERROR: number of columns (2053) exceeds limit (1664)".
>> Confusingly, though, no table is that wide.
>>
>
> This limit would be enforced against the out
On Fri, Aug 21, 2009 at 1:13 PM, Sam Mason wrote:
> On Fri, Aug 21, 2009 at 12:05:51PM -0400, Tom Lane wrote:
>> Sam Mason writes:
>> > ... PG should instead arrange that the expression
>> > "t" is run exactly once and reuse the single result for all columns.
>>
>> We might be able to do that base
On Fri, Aug 21, 2009 at 12:05:51PM -0400, Tom Lane wrote:
> Sam Mason writes:
> > ... PG should instead arrange that the expression
> > "t" is run exactly once and reuse the single result for all columns.
>
> We might be able to do that based on the row-returning-subselect
> infrastructure being
On Fri, Aug 21, 2009 at 12:16:45PM -0400, Eric Comeau wrote:
> In the next release of our software the developers are moving to
> JBoss and have introduced the use of JBoss Messaging. They want to
> change from using the built-in hsqldb to using our PostgreSQL
> database.
>
> What is the best appr
Chris-
did you look at Zdenek Kotala's pgcheck?
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/pgcheck/pgcheck/src/
download the 3 source files and run the makefile
anyone know of a PG integrity checker?
Martin Gainty
Archibald Zimonyi writes:
> I went into the generated dump file and (more wish then anything else)
> tried to simply change the encoding from LATIN1 to UTF8 and then load the
> file, it did not complain about incorrect encoding setting for the load,
> however it complained that the characters d
On Fri, Aug 21, 2009 at 06:54:51PM +0300, Andrus Moor wrote:
> create temp table test ( test bytea );
> insert into test values(E'\274')
>
> Causes error
Yup, you want another backslash in there, something like:
insert into test values(E'\\274');
The first backslash is expanded out during par
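The double escaping can be sanity-checked outside the server: the E'' lexer halves backslashes, and the bytea input function then reads \NNN as an octal byte. A small Python model of those two stages (a model of the parsing only, not PostgreSQL's actual code):

```python
literal = "\\\\274"          # the five characters typed inside E'...': \\274
after_lexer = literal.replace("\\\\", "\\")   # E'' lexing: \\ -> \
assert after_lexer == "\\274"
byte_val = int(after_lexer[1:], 8)            # bytea input: \NNN is octal
print(hex(byte_val))  # 0xbc
```

That 0xbc is exactly the byte the original error complained about: with only one backslash, the lexer itself produced the raw byte and it hit the UTF8 encoding check before bytea ever saw it.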
2009/8/21 Andrus Moor :
> In 8.4, script
>
> create temp table test ( test bytea );
> insert into test values(E'\274')
Try E'\\274'
--
greg
http://mit.edu/~gsstark/resume.pdf
Hello,
I tried changing the client_encoding setting but there was no difference
in the result.
I went into the generated dump file and (more wish then anything else)
tried to simply change the encoding from LATIN1 to UTF8 and then load the
file, it did not complain about incorrect encoding
In the next release of our software the developers are moving to JBoss and
have introduced the use of JBoss Messaging. They want to change from using
the built-in hsqldb to using our PostgreSQL database.
What is the best approach, create a new database or new schema with-in our
current PostgreS
Dilyan Berkovski wrote:
Hi All,
I have a nasty table with many repeating columns of the kind port_ts_{i}_,
where {i} is from 0 to 31, and could be 3 different words.
I have made a pl/pgsql function that checks those columns from port_ts_1_status to
port_ts_31_status and counts something, howe
"Chris Hopkins" writes:
> Thanks Tom. Next question (and sorry if this is an ignorant one)...how
> would I go about doing that?
See the archives for previous discussions of corrupt-data recovery.
Basically it's divide-and-conquer to find the corrupt rows.
regards, tom lane
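The divide-and-conquer Tom describes amounts to a binary search over the key space: if reading one half-range succeeds, the corruption is in the other half. A sketch (the fetch callback is hypothetical; in practice it would be a COPY or SELECT over an id or ctid range that errors out on a corrupt row):

```python
def find_bad_id(lo: int, hi: int, fetch) -> int:
    # Narrow [lo, hi] down to the single id whose row fails to read.
    # fetch(a, b) must raise if the range a..b contains the bad row.
    while lo < hi:
        mid = (lo + hi) // 2
        try:
            fetch(lo, mid)        # is the lower half readable?
            lo = mid + 1          # yes: bad row is in the upper half
        except Exception:
            hi = mid              # no: bad row is in the lower half
    return lo

def fake_fetch(a, b):
    # Stand-in for the real query; pretend id 37 is the corrupt row.
    if a <= 37 <= b:
        raise IOError("corrupt row")

print(find_bad_id(0, 100, fake_fetch))  # 37
```

With multiple corrupt rows the same loop is simply repeated after excluding each row found.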
Sam Mason writes:
> ... PG should instead arrange that the expression
> "t" is run exactly once and reuse the single result for all columns.
We might be able to do that based on the row-returning-subselect
infrastructure being discussed over here:
http://archives.postgresql.org/message-id/4087.12
In 8.4, script
create temp table test ( test bytea );
insert into test values(E'\274')
Causes error
ERROR: invalid byte sequence for encoding "UTF8": 0xbc
HINT: This error can also happen if the byte sequence does not match the
encoding expected by the server, which is controlled by "client_
Thanks Tom. Next question (and sorry if this is an ignorant one)...how
would I go about doing that?
- Chris
On Fri, Aug 21, 2009 at 10:49:52AM -0400, Merlin Moncure wrote:
> On Fri, Aug 21, 2009 at 10:17 AM, Sam Mason wrote:
> > CREATE TYPE foo AS ( i int, j int );
> >
> > SELECT (id((SELECT (1,2)::foo))).*;
> >
> > or am I missing something obvious?
>
> I think that what you are bumping in to is th
Hello all,
I'm struggling with a database query that under some circumstances returns the
error "ERROR: number of columns (2053) exceeds limit (1664)". Confusingly,
though, no table is that wide.
The problem seems to be my use of views. The largest table in the database is
500 columns wide.
"Chris Hopkins" writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.
> Is there an easy way to give pg_dump more memory?
That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fi
On Fri, Aug 21, 2009 at 12:50 AM, stone...@excite.com wrote:
> Hey all,
>
> My company is designing a database in which we intend to store data for
> several customers. We are trying to decide if,
>
> A: we want to store all customer data in one set of tables with customer_id
> fields separating
On Fri, Aug 21, 2009 at 10:17 AM, Sam Mason wrote:
> On Fri, Aug 21, 2009 at 02:22:54PM +0100, Greg Stark wrote:
>> SELECT (r).*
>> FROM (SELECT (SELECT x FROM x WHERE a=id) AS r
>> FROM unnest(array[1,2]) AS arr(id)
>> ) AS subq;
>
> Shouldn't that second inner SELECT unnecessa
On Fri, 21 Aug 2009, Adrian Klaver wrote:
On Thursday 20 August 2009 11:45:30 pm Archibald Zimonyi wrote:
Hello,
I am sitting on version 7.4.x and am going to upgrade to version 8.3.x.
From all I can read I should have no problem with actual format of the
pgdump file (for actual dumping and
On Fri, Aug 21, 2009 at 02:22:54PM +0100, Greg Stark wrote:
> SELECT (r).*
> FROM (SELECT (SELECT x FROM x WHERE a=id) AS r
> FROM unnest(array[1,2]) AS arr(id)
>) AS subq;
Shouldn't that second inner SELECT be unnecessary? I'd be tempted to
write:
SELECT ((SELECT x FROM x WH
On Fri, Aug 21, 2009 at 9:22 AM, Greg Stark wrote:
> Of course immediately upon hitting send I did think of a way:
>
> SELECT (r).*
> FROM (SELECT (SELECT x FROM x WHERE a=id) AS r
> FROM unnest(array[1,2]) AS arr(id)
> ) AS subq;
nice use of composite type in select-list subquery
Ivan Sergio Borgonovo writes:
> I was mainly concerned about assigning ownership of a sequence to a
> column that is not an int.
I think you can get away with that. It's already the case within the
standard usage that the owning column could be either int or bigint.
FWIW, I agree with the idea
On Thursday 20 August 2009 11:45:30 pm Archibald Zimonyi wrote:
> Hello,
>
> I am sitting on version 7.4.x and am going to upgrade to version 8.3.x.
> From all I can read I should have no problem with actual format of the
> pgdump file (for actual dumping and restoring purposes) but I am
> having p
David Waller writes:
> I'm struggling with a database query that under some circumstances returns
> the error "ERROR: number of columns (2053) exceeds limit (1664)".
> Confusingly, though, no table is that wide.
This limit would be enforced against the output rows of any intermediate
join step
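Tom's point is easy to check arithmetically: an intermediate join's output width is the sum of its inputs' widths, so views stacked on wide tables can exceed the limit even though no single table does. Illustrative numbers only (chosen to reproduce the reported 2053; the real per-relation widths are unknown):

```python
# Hypothetical widths of the relations feeding one intermediate join.
widths = [500, 500, 500, 553]

total = sum(widths)
print(total, total > 1664)  # 2053 True
```

Selecting only the needed columns from the views, rather than `*`, keeps the intermediate rows narrow.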
On Fri, Aug 21, 2009 at 2:16 PM, Greg Stark wrote:
> On Fri, Aug 21, 2009 at 1:16 PM, John DeSoi wrote:
>>
>> Yes, this is the best I have come up with so far. I have a set returning
>> function which returns the key and the index number. The implementation with
>> a cursor looks like this:
>>
>> S
On Fri, Aug 21, 2009 at 1:16 PM, John DeSoi wrote:
>
> Yes, this is the best I have come up with so far. I have a set returning
> function which returns the key and the index number. The implementation with
> a cursor looks like this:
>
> SELECT * FROM cursor_pk('c1') c LEFT JOIN foo ON (c.pk = foo
On Thu, Aug 20, 2009 at 2:43 PM, Jasen Betts wrote:
> On 2009-08-19, Stephen Cook wrote:
>
>> Let's say I have a function that needs to collect some data from various
>> tables and process and sort them to be returned to the user.
>
> plpgsql functions don't play well with temp tables IME.
> there
On 20 Aug 2009 13:43:10 GMT
Jasen Betts wrote:
> On 2009-08-19, Stephen Cook wrote:
>
> > Let's say I have a function that needs to collect some data from
> > various tables and process and sort them to be returned to the
> > user.
>
> plpgsql functions don't play well with temp tables IME.
On Aug 21, 2009, at 7:26 AM, Sam Mason wrote:
It may help to wrap the generate_series call into a function so you
don't have to refer to "myPkArray" so many times.
Yes, this is the best I have come up with so far. I have a set
returning function which returns the key and the index number. T
Thom Brown wrote:
> If this results in an unpredictable and non-duplicating loop of generated
> sets of characters, that would be ideal. Would a parallel for this be a
> 5-character code possibly transcoded from a 6-character GUID/UUID? (a-h +
> j-n + p-z + A-H + J-N + P-Z + 2-9 = 56 poss
On Thu, Aug 20, 2009 at 11:15:12PM -0400, John DeSoi wrote:
> Suppose I have an integer array (or cursor with one integer column)
> which represents primary keys of some table. Is there a simple and
> efficient way to return the rows of the table corresponding to the
> primary key values and
"stone...@excite.com" wrote:
>
>
> >stone...@excite.com wrote:
> >> Hey all,
> >>
> >> My company is designing a database in which we intend to store data
> >> for several customers. We are trying to decide if,
> >>
> >> A: we want to store all customer data in one set of tables with
> >> custome
On 2009-08-19, Christophe Pettus wrote:
> In other examples, page-to-page flow is probably not a great candidate
> for encoding in the database; I would think that it makes far more
> sense for the database to store the state of the various business
> objects, and let the PHP application de
On Wed, 2009-08-19 at 12:31 -0400, Tom Lane wrote:
> In the original coding, the first assignment was
>
> PGDATA=/var/lib/pgsql
>
> and thus the if-test did indeed do something useful with setting
> PGDATA
> differently in the two cases. However, there is no reason whatsoever
> for this initscri
Hello,
I am sitting on version 7.4.x and am going to upgrade to version 8.3.x.
From all I can read I should have no problem with actual format of the
pgdump file (for actual dumping and restoring purposes) but I am
having problems with encoding (which I was fairly sure I would). I have
searc
On Thu, 20 Aug 2009 14:31:02 -0400
Alvaro Herrera wrote:
> Ivan Sergio Borgonovo wrote:
> > I've
> >
> > create table pr(
> > code varchar(16) primary key,
> > ...
> > );
> > create sequence pr_code_seq owned by pr.code; -- uh!
> > actually stuff like:
> > alter table pr drop column code;
>