[GENERAL] postgres import

2006-10-31 Thread Antonios Katsikadamos
Hi all, I have a little problem. I have an .sql file (a db dump) and I want to import it into postgres on Linux. Does anyone know how I can do it? Thanks a lot, mates



Re: [GENERAL] postgres import

2006-10-31 Thread Albe Laurenz
 I have a little problem. I have an .sql file (a db dump)
 and I want to import it into postgres on Linux.

 Does anyone know how I can do it?

You feed it to the command line interface psql.

Example:
psql -h host -p port -d database -U user < dump.sql
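A variant, if the dump is large or throws errors: run the file from inside a
psql session with \i, which reports any error as psql:dump.sql:LINE so it is
easy to locate (a sketch, same connection assumptions as above):

database=# \i dump.sql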

Yours,
Laurenz Albe



Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Tim Tassonis

Richard Huxton wrote:

Tim Tassonis wrote:

We use version 8.1.3 and the following archive_command:

archive_command = 'copy %p d:\\backup\\logs\%f'

^^^
Could the lack of a double-backslash be causing the problem?



Sorry, that was a problem with my quoting. The config file reads:


archive_command = 'copy %p d:\\backup\\logs\\%f'


Hmm - in that case I'd be tempted to wrap it in a small script so you 
can log the parameters passed in and return code passed out.




The strange thing is, even with loglevel debug5, I don't get any log 
message indicating that postgres is even trying to call the command. Is 
there only anything in the logfile if the copying succeeds, or are there 
cases where postgres thinks it doesn't have to copy the files? From what I 
understand, the WAL files should be copied fairly often, so that if postgres 
breaks, everything is at the (hopefully still intact) archive location.


Bye
Tim




Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Tom Lane
Tim Tassonis [EMAIL PROTECTED] writes:
 The strange thing is, even with loglevel debug5, I don't get any log 
 message indicating that postgres is even trying to call the command.

Then it isn't, because there are definitely log messages, which were
specifically put there for the purpose of recording the fully-expanded
archive command string:

ereport(DEBUG3,
        (errmsg_internal("executing archive command \"%s\"",
                         xlogarchcmd)));
rc = system(xlogarchcmd);
if (rc != 0)
{
    ereport(LOG,
            (errmsg("archive command \"%s\" failed: return code %d",
                    xlogarchcmd, rc)));
    return false;
}
ereport(LOG,
        (errmsg("archived transaction log file \"%s\"", xlog)));

Better double-check the usual sorts of gotchas, like whether you are
editing the right config file and properly forcing a config reload
afterwards.
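A quick way to double-check both from psql (a sketch; pg_reload_conf() is
available in 8.1):

SELECT pg_reload_conf();  -- signal the postmaster to re-read postgresql.conf
SHOW archive_command;     -- the value the running server actually sees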

regards, tom lane



Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Teodor Sigaev
The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent a 
self-contained test case directly to Teodor which shows the error.


'ERROR:  index row requires 8792 bytes, maximum size is 8191'
Uh, I see. But I'm really surprised: why do you use pg_trgm on big texts? pg_trgm 
is designed to find similar words and uses a technique known as trigrams. This 
works well on small pieces of text such as words or set expressions, but all big 
texts (in the same language) will be similar :(. So, I didn't take care to 
guarantee the index tuple's size limitation. In principle, it's possible to 
modify pg_trgm to make such a guarantee, but the index becomes lossy - all tuples 
fetched from the index have to be rechecked against the table's tuples.
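To make the intended scale concrete, a minimal sketch of pg_trgm usage on
short strings, assuming the contrib/pg_trgm module is installed and a
hypothetical table words(w text):

CREATE INDEX words_w_trgm_idx ON words USING gist (w gist_trgm_ops);
SELECT w FROM words WHERE w % 'postgers';  -- % is the trigram similarity operator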


If you want to search for similar documents, I can recommend having a look at 
the fingerprint technique (http://webglimpse.net/pubs/TR93-33.pdf). It's pretty 
close to trigrams and the similarity metric is the same, but it uses different 
signature calculations. And there are some tips and tricks: removing HTML 
markup, removing punctuation, lowercasing text and so on - it's an interesting 
and complex task.

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



[GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Geoffrey
It appears that upgrading to 7.4.13 helped the problem we were having 
with the postgres process terminating.  We still are having the problem, 
but it does appear to be different, based on the output of backtraces. 
The core files are much larger and there does seem to be a common thread 
amongst most of them. I've attached one to see if anyone has any ideas 
as to what our problem might be.  Suggestions would be appreciated.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin
Using host libthread_db library /lib/tls/libthread_db.so.1.
Core was generated by `postgres: msanchez exp 198.212.166.29 SELECT  '.
Program terminated with signal 11, Segmentation fault.
#0  0x0815d950 in cost_mergejoin (path=0x836f20c, root=0x8370708)
at costsize.c:915
915 if (rescannedtuples < 0)
#0  0x0815d950 in cost_mergejoin (path=0x836f20c, root=0x8370708)
at costsize.c:915
#1  0x0815d98c in cost_mergejoin (path=0x836f20c, root=0x8370300)
at costsize.c:932
#2  0x0815d98c in cost_mergejoin (path=0x836f20c, root=0x836f2ec)
at costsize.c:932
#3  0x0815d8c1 in cost_mergejoin (path=0x836f20c, root=0x775360)
at costsize.c:878
#4  0x0815c428 in clauselist_selectivity (root=0x827d6e4, clauses=0xfeff7798, 
varRelid=-16812072, jointype=135970173) at clausesel.c:203
#5  0x081637f5 in get_cheapest_path_for_pathkeys (paths=0x827d6e4, 
pathkeys=0x882eeb0, cost_criterion=7) at pathkeys.c:586
#6  0x081abd7d in ProcessUtility (parsetree=0x835ea10, dest=0x882ee54, 
completionTag=0xa Address 0xa out of bounds) at utility.c:611
#7  0x081ac1ff in ProcessUtility (parsetree=0xfeff7850, dest=0x88af7e0, 
completionTag=0x88af408 \v) at utility.c:793
#8  0x081082c4 in CreateTrigger (stmt=0x88af588, forConstraint=-100 '\234')
at trigger.c:155
#9  0x08109e30 in CopyTriggerDesc (trigdesc=0x88af588) at trigger.c:922
#10 0x0810aa4b in ExecBSDeleteTriggers (estate=0x88af828, relinfo=0x88af408)
at trigger.c:1324
#11 0x0810ae71 in ExecASUpdateTriggers (estate=0x88af380, relinfo=0x8112de0)
at trigger.c:1462
#12 0x08112ec9 in AlterUserSet (stmt=0x88af380) at user.c:1002
#13 0x08106d66 in createForeignKeyTriggers (rel=0x88af380, fkconstraint=0x1, 
constrOid=141463696) at tablecmds.c:3697
#14 0x08113718 in CreateGroup (stmt=0x88af2f8) at user.c:1273
#15 0x08106e11 in createForeignKeyTriggers (rel=0x88af2f8, fkconstraint=0x413, 
constrOid=4278155864) at tablecmds.c:3714
#16 0x081055fd in AlterTableAddCheckConstraint (rel=0x88af1a8, 
constr=0x88af2f8) at tablecmds.c:2951
#17 0x081049d8 in AlterTableAlterOids (myrelid=139861992, recurse=-88 '¨', 
setOid=-40 'Ø') at tablecmds.c:2513
#18 0x0817c33b in BackendFork (port=0x8360fd8) at postmaster.c:2485
#19 0x0817c113 in BackendFork (port=0x8360fd8) at postmaster.c:2403
#20 0x081788d3 in PGSemaphoreLock (sema=0x835cbc0, interruptOK=0 '\0')
at pg_sema.c:424
#21 0x0817b1c1 in pmdie (postgres_signal_arg=4) at postmaster.c:1701
#22 0x08154c40 in _readConst () at readfuncs.c:377
#23 0x08154633 in _readResdom () at readfuncs.c:311
#24 0x08152b98 in _outAExpr (str=0x2, node=0x1) at outfuncs.c:1366
#25 0x0815225e in _outIndexElem (str=0x5, node=0x830d6b8) at outfuncs.c:1208
#26 0x08121f63 in ExecEndNode (node=0x5) at execProcnode.c:499
#27 0x0065479a in ?? ()
#28 0x0005 in ?? ()
#29 0xfeff8c64 in ?? ()
#30 0xfeff8c7c in ?? ()
#31 0x in ?? ()
#32 0x00774a78 in ?? ()
#33 0x0013a020 in ?? ()
#34 0x081fbd18 in interval_part (fcinfo=0x8121d30) at timestamp.c:3374
#35 0x0806fd51 in nocachegetattr (tuple=) at heaptuple.c:409



Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Richard Huxton

Tim Tassonis wrote:

Hi Tom

Richard Huxton wrote:

Tim Tassonis wrote:

Hi Tom

Tom Lane wrote:

Tim Tassonis [EMAIL PROTECTED] writes:
The strange thing is, even with loglevel debug5, I don't get any 
log message indicating that postgres is even trying to call the 
command.


Then it isn't, because there are definitely log messages, which were
specifically put there for the purpose of recording the fully-expanded
archive command string:


I must be, as I increased the debug level and that actually took 
place. I did get more log messages.


Is there anything else that could prevent the archiving from taking 
place, without giving any message?


Did you check Tom's suggestion that you haven't got a duplicate 
postgresql.conf somewhere?


As I said, other changes in the config file did have an effect, so that 
hardly can be the issue.


Could it be that for some reason the WAL backup config isn't being 
picked up? Stray non-printing character? Typo? Try "show all".


If that fails I'd think the quickest thing might be to initdb an 
alternative installation, run it on a different port with its own config 
file and push a load of rows through it. See if that picks up your WAL 
backup script. If it does, then it must be something with your config 
file. If not, then it's something in the code.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] postgres under Suse linux

2006-10-31 Thread Anastasios Hatzis

Antonios Katsikadamos wrote:

Hi all, I am a new Linux and postgres user and I don't
know how I can configure postgres on SuSE Linux in
order to make it run.

I would be thankful for any tip.



Antonios,

Finally I have successfully installed PostgreSQL 8.1.5 under a clean 
OpenSUSE 10.0 64-bit system. I had some problems since I'm new to Linux 
(at least if you ignore some years of superior-comfort desktop click-click 
on Linux/KDE)... despite that, I could build PostgreSQL even from 
source. And I got it to start at the given runlevels. It cost 
me some days during the last weeks, but I'm a bit proud of it now ;-)


If you still have questions on installation and configuration in more 
detail, please ask on this list (CC me, please). I guess some of the 
problems I had could be the same for your installation.


Anastasios



[GENERAL] Pgsql on Solaris

2006-10-31 Thread vodhner
How widespread is the use of PostgreSQL on Solaris?  I am beginning to sense 
that community support is not currently very strong on this platform, and that 
pgsql may not be the best candidate for my current project -- installing LXR on 
a 64-bit Solaris system.

I had to do a lot of web searching to learn how to get past a compilation 
problem, apparently fixed for the current release; and now it appears (based on 
the reply below) that the configure setup is not ready to recognize a 64-bit 
Solaris environment; the 32-bit executables are incompatible with 64-bit Perl.  
I am not an autoconf hacker, and can't afford in this case to go where none 
have gone before.

This is always a challenge with Free Software, of course:  potential users of a 
given product need to know how to pick their battles.

So, please advise as to the level of 64-bit Solaris 10 activity with pgsql.

Thanks,
Victor Odhner

 Andrew Sullivan [EMAIL PROTECTED] wrote: 
On Thu, Oct 26, 2006 at 12:40:41PM -0700, [EMAIL PROTECTED] wrote:
 How can I get the make to generate ELF 64-bit executables on Solaris 10?
 We're on Fujitsu hardware; uname -a displays this:
   SunOS  5.10 Generic_118822-26 sun4us sparc FJSV,GPUZC-M

Well, to start with, are you using a compiler that can generate 64
bit binaries?  How about your libs?

A





Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Joshua D. Drake
Teodor Sigaev wrote:
 The problem I am after is the 8k index size issue. It is very easy to
 get a GIST index (especially when using tsearch2) that is larger than
 that.
 Hmm, the tsearch2 GIST index is specially designed to support huge index
 entries: first, every lexeme in the tsvector is transformed to a hash value
 (with the help of crc32); second, all position info is stripped; third, if
 the size of the array is greater than TOAST_INDEX_TARGET then tsearch2 will
 make a bit signature of the tsvector. The signature's length is fixed and
 equals 252 bytes by default (+ 8 bytes for the datum header). All values on
 internal pages are represented as signatures below.
 
 So, tsearch2 guarantees that an index entry will be small enough. If that's
 not true, then there is a bug - please make a test case demonstrating the
 problem.
 
 Is recompiling with a different block size an option there?
 What are the downsides, except for the custom build?
 
 Can you send the exact error message?

I am training this week, but Darcy can do it. Can you give them a test
case on what we were working on with that customer?

Joshua D. Drake

 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Tim Tassonis

Hi Tom

Tom Lane wrote:

Tim Tassonis [EMAIL PROTECTED] writes:
The strange thing is, even with loglevel debug5, I don't get any log 
message indicating that postgres is even trying to call the command.


Then it isn't, because there are definitely log messages, which were
specifically put there for the purpose of recording the fully-expanded
archive command string:


I must be, as I increased the debug level and that actually took place. 
I did get more log messages.


Is there anything else that could prevent the archiving from taking 
place, without giving any message?




Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Richard Huxton

Tim Tassonis wrote:

Hi Tom

Tom Lane wrote:

Tim Tassonis [EMAIL PROTECTED] writes:
The strange thing is, even with loglevel debug5, I don't get any log 
message indicating that postgres is even trying to call the command.


Then it isn't, because there are definitely log messages, which were
specifically put there for the purpose of recording the fully-expanded
archive command string:


I must be, as I increased the debug level and that actually took place. 
I did get more log messages.


Is there anything else that could prevent the archiving from taking 
place, without giving any message?


Did you check Tom's suggestion that you haven't got a duplicate 
postgresql.conf somewhere?


--
  Richard Huxton
  Archonet Ltd



Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Darcy Buskermolen
On October 31, 2006 06:42 am, Joshua D. Drake wrote:
 Teodor Sigaev wrote:
  The problem I am after is the 8k index size issue. It is very easy to
  get a GIST index (especially when using tsearch2) that is larger than
  that.


The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent a 
self-contained test case directly to Teodor which shows the error.

'ERROR:  index row requires 8792 bytes, maximum size is 8191'


 
  Hmm, the tsearch2 GIST index is specially designed to support huge index
  entries: first, every lexeme in the tsvector is transformed to a hash value
  (with the help of crc32); second, all position info is stripped; third, if
  the size of the array is greater than TOAST_INDEX_TARGET then tsearch2 will
  make a bit signature of the tsvector. The signature's length is fixed and
  equals 252 bytes by default (+ 8 bytes for the datum header). All values on
  internal pages are represented as signatures below.
 
  So, tsearch2 guarantees that an index entry will be small enough. If that's
  not true, then there is a bug - please make a test case demonstrating the
  problem.
 
  Is recompiling with a different block size an option there?
  What are the downsides, except for the custom build?
 
  Can you send the exact error message?

 I am training this week, but Darcy can do it. Can you give them a test
 case on what we were working on with that customer?

 Joshua D. Drake

-- 
Darcy Buskermolen
Command Prompt, Inc.
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997
http://www.commandprompt.com/



Re: [GENERAL] limit left join rows to 1

2006-10-31 Thread Andreas Kretschmer
Jure Ložar [EMAIL PROTECTED] wrote:

 Hi.
 
 Is it possible to limit the number of left join rows that match the condition
 to 1? I don't want hits from the first table multiplied when more than one
 row matches the left join condition.

I'm not sure if I understand you correctly, but perhaps this is what you
are looking for:

Suppose you have two tables, master and detail:

test=# select * from master;
 id
----
  1
  2
(2 rows)

test=# select * from detail;
 id | val
----+-----
  1 | 200
  2 | 200
  1 | 100
(3 rows)


This is the left join:

test=# select m.id, d.val from master m left join detail d on m.id=d.id;
 id | val
----+-----
  1 | 100
  1 | 200
  2 | 200
(3 rows)


But you need only one row from detail - which one? Suppose it's the one with
the max(val) value:

test=# select m.id, d.val from master m left join (select id, max(val)
as val from detail group by id) d on m.id=d.id;
 id | val
----+-----
  1 | 200
  2 | 200
(2 rows)
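An alternative sketch using PostgreSQL's DISTINCT ON, which keeps exactly one
detail row per id (here the one with the largest val, because of the ORDER BY)
and gives the same result:

test=# select m.id, d.val from master m left join (select distinct on (id)
id, val from detail order by id, val desc) d on m.id=d.id;
 id | val
----+-----
  1 | 200
  2 | 200
(2 rows)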


Is this okay for you?


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] postgresql and reiserfs

2006-10-31 Thread km
Hi all,

Is anyone up on the database features in reiserfs (reiser4) with postgresql 8.x? 

regards,
KM



Re: [GENERAL] postgresql and reiserfs

2006-10-31 Thread Richard Broersma Jr
 Is anyone up on the database features in reiserfs (reiser4) with postgresql
 8.x?

I heard that reiser4 is not yet stable, and that there is a chance that it
won't be, since its author is in detention. Most of the recommendations that
I've seen are to use good-old-reliable EXT3, which keeps your data safe.

Regards,

Richard Broersma Jr.



Re: [GENERAL] WAL Archiving under Windows

2006-10-31 Thread Tim Tassonis

Hi Tom

Richard Huxton wrote:

Tim Tassonis wrote:

Hi Tom

Tom Lane wrote:

Tim Tassonis [EMAIL PROTECTED] writes:
The strange thing is, even with loglevel debug5, I don't get any log 
message indicating that postgres is even trying to call the command.


Then it isn't, because there are definitely log messages, which were
specifically put there for the purpose of recording the fully-expanded
archive command string:


I must be, as I increased the debug level and that actually took 
place. I did get more log messages.


Is there anything else that could prevent the archiving from taking 
place, without giving any message?


Did you check Tom's suggestion that you haven't got a duplicate 
postgresql.conf somewhere?


As I said, other changes in the config file did have an effect, so that 
hardly can be the issue.


Bye
Tim




[GENERAL] limit left join rows to 1

2006-10-31 Thread Jure Ložar

Hi.

Is it possible to limit the number of left join rows that match the condition 
to 1? I don't want hits from the first table multiplied when more than one 
row matches the left join condition.


Thank you
Jure



Re: [GENERAL] postgresql and reiserfs

2006-10-31 Thread Devrim GUNDUZ
Hi,

On Tue, 2006-10-31 at 09:31 -0800, Richard Broersma Jr wrote:

 I heard that reiser4 is not yet stable, and that there is a chance
 that it won't be, since its author is in detention.

Here are the links:

http://linux.slashdot.org/linux/06/10/15/0057203.shtml
http://yro.slashdot.org/yro/06/10/11/0142216.shtml?tid=123

Also, SuSE announced that they will be switching to ext3 in their next
SLES releases:

http://news.com.com/Novell+makes+file-storage+software+shift/2100-1016_3-6125509.html

Red Hat, a major player in the enterprise game, has supported ext* for years;
reiserfs is not enabled by default.

So, IMHO, since fewer people will be using reiser, I would not use it
in my installations.

  Most of the recommendations that I've seen are to use
 good-old-reliable EXT3, which keeps your data safe.

Also ext2 is preferred on many installations, especially when people
want to avoid journals.

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/





Re: [GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Alvaro Herrera
Geoffrey wrote:
 It appears that upgrading to 7.4.13 helped the problem we were having 
 with the postgres process terminating.  We still are having the problem, 
 but it does appear to be different, based on the output of backtraces. 
 The core files are much larger and there does seem to be a common thread 
 amongst most of them. I've attached one to see if anyone has any ideas 
 as to what our problem might be.  Suggestions would be appreciated.

I don't think this backtrace makes much sense.  Did you compile with
--enable-debug?

Are you sure you are passing the same postgres executable to GDB that
was used to actually generate the core (i.e. the one that's running)?
Is this core file generated from exactly that executable, or is it maybe
one that was generated with an older executable?


 Using host libthread_db library /lib/tls/libthread_db.so.1.
 Core was generated by `postgres: msanchez exp 198.212.166.29 SELECT  '.
 Program terminated with signal 11, Segmentation fault.
 #0  0x0815d950 in cost_mergejoin (path=0x836f20c, root=0x8370708)
 at costsize.c:915
 915   if (rescannedtuples < 0)
 #0  0x0815d950 in cost_mergejoin (path=0x836f20c, root=0x8370708)
 at costsize.c:915
 #1  0x0815d98c in cost_mergejoin (path=0x836f20c, root=0x8370300)
 at costsize.c:932
 #2  0x0815d98c in cost_mergejoin (path=0x836f20c, root=0x836f2ec)
 at costsize.c:932
 #3  0x0815d8c1 in cost_mergejoin (path=0x836f20c, root=0x775360)
 at costsize.c:878
 #4  0x0815c428 in clauselist_selectivity (root=0x827d6e4, clauses=0xfeff7798, 
 varRelid=-16812072, jointype=135970173) at clausesel.c:203
 #5  0x081637f5 in get_cheapest_path_for_pathkeys (paths=0x827d6e4, 
 pathkeys=0x882eeb0, cost_criterion=7) at pathkeys.c:586
 #6  0x081abd7d in ProcessUtility (parsetree=0x835ea10, dest=0x882ee54, 
 completionTag=0xa Address 0xa out of bounds) at utility.c:611
 #7  0x081ac1ff in ProcessUtility (parsetree=0xfeff7850, dest=0x88af7e0, 
 completionTag=0x88af408 \v) at utility.c:793
 #8  0x081082c4 in CreateTrigger (stmt=0x88af588, forConstraint=-100 '\234')
 at trigger.c:155
 #9  0x08109e30 in CopyTriggerDesc (trigdesc=0x88af588) at trigger.c:922
 #10 0x0810aa4b in ExecBSDeleteTriggers (estate=0x88af828, relinfo=0x88af408)
 at trigger.c:1324
 #11 0x0810ae71 in ExecASUpdateTriggers (estate=0x88af380, relinfo=0x8112de0)
 at trigger.c:1462
 #12 0x08112ec9 in AlterUserSet (stmt=0x88af380) at user.c:1002
 #13 0x08106d66 in createForeignKeyTriggers (rel=0x88af380, fkconstraint=0x1, 
 constrOid=141463696) at tablecmds.c:3697
 #14 0x08113718 in CreateGroup (stmt=0x88af2f8) at user.c:1273
 #15 0x08106e11 in createForeignKeyTriggers (rel=0x88af2f8, 
 fkconstraint=0x413, 
 constrOid=4278155864) at tablecmds.c:3714
 #16 0x081055fd in AlterTableAddCheckConstraint (rel=0x88af1a8, 
 constr=0x88af2f8) at tablecmds.c:2951
 #17 0x081049d8 in AlterTableAlterOids (myrelid=139861992, recurse=-88 '?', 
 setOid=-40 '?') at tablecmds.c:2513
 #18 0x0817c33b in BackendFork (port=0x8360fd8) at postmaster.c:2485
 #19 0x0817c113 in BackendFork (port=0x8360fd8) at postmaster.c:2403
 #20 0x081788d3 in PGSemaphoreLock (sema=0x835cbc0, interruptOK=0 '\0')
 at pg_sema.c:424
 #21 0x0817b1c1 in pmdie (postgres_signal_arg=4) at postmaster.c:1701
 #22 0x08154c40 in _readConst () at readfuncs.c:377
 #23 0x08154633 in _readResdom () at readfuncs.c:311
 #24 0x08152b98 in _outAExpr (str=0x2, node=0x1) at outfuncs.c:1366
 #25 0x0815225e in _outIndexElem (str=0x5, node=0x830d6b8) at outfuncs.c:1208
 #26 0x08121f63 in ExecEndNode (node=0x5) at execProcnode.c:499
 #27 0x0065479a in ?? ()
 #28 0x0005 in ?? ()
 #29 0xfeff8c64 in ?? ()
 #30 0xfeff8c7c in ?? ()
 #31 0x in ?? ()
 #32 0x00774a78 in ?? ()
 #33 0x0013a020 in ?? ()
 #34 0x081fbd18 in interval_part (fcinfo=0x8121d30) at timestamp.c:3374
 #35 0x0806fd51 in nocachegetattr (tuple=) at heaptuple.c:409



-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread Ray Stell

I was interested in going down that path.  This thread seems to be sparc
oriented.  How about on opteron?  Will pg work on solaris10 without too
much stress?

 isainfo -v
64-bit amd64 applications
sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc cx8
tsc fpu
32-bit i386 applications
sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc cx8
tsc fpu

thanks.



On Tue, Oct 31, 2006 at 10:43:55AM -0700, [EMAIL PROTECTED] wrote:
 So, please advise as to the level of 64-bit Solaris 10 activity with pgsql.
 



Re: [GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Geoffrey

Alvaro Herrera wrote:

Geoffrey wrote:
It appears that upgrading to 7.4.13 helped the problem we were having 
with the postgres process terminating.  We still are having the problem, 
but it does appear to be different, based on the output of backtraces. 
The core files are much larger and there does seem to be a common thread 
amongst most of them. I've attached one to see if anyone has any ideas 
as to what our problem might be.  Suggestions would be appreciated.


I don't think this backtrace makes much sense.  Did you compile with
--enable-debug?


It didn't make much sense to me either, but then, I'm not familiar with 
the postgres code. :(  Is this a gcc flag?  I did compile it with the -g 
option; I don't see an --enable-debug in the gcc man page.



Are you sure you are passing the same postgres executable to GDB that
was used to actually generate the core (i.e. the one that's running)?
Is this core file generated from exactly that executable, or is it maybe
one that was generated with an older executable?


The core files were generated on a machine that does not have postgres 
compiled with debugging information, thus I built from source for the 
same version on another machine and ran gdb against it and the generated 
core file.  I've done this in the past with different applications and 
was successful in debugging the core file.  If you believe this is not 
generating an accurate trace, then I'll need to rebuild postgres on the 
production machine (which is not what I wanted to do).


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread Steve Atkins


On Oct 31, 2006, at 9:43 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

How widespread is the use of PostgreSQL on Solaris?  I am beginning to sense
that community support is not currently very strong on this platform, and
that pgsql may not be the best candidate for my current project -- installing
LXR on a 64-bit Solaris system.

I had to do a lot of web searching to learn how to get past a compilation
problem, apparently fixed for the current release; and now it appears (based
on the reply below) that the configure setup is not ready to recognize a
64-bit Solaris environment; the 32-bit executables are incompatible with
64-bit Perl.  I am not an autoconf hacker, and can't afford in this case to
go where none have gone before.


You should be able to build it just by setting CFLAGS and LDFLAGS
appropriately, e.g. -xarch=v9, but my build scripts actually do it by setting
CC to cc -xarch=v9 - I don't recall why I did that, but it's possible there
was something funky with flags handling somewhere in the build process.

Works fine, and has done for years. I've had 64-bit sparc builds running
happily since 7.2 or so.

That's all assuming you're using Sun compilers on sparc. If not, I've no idea
whether you can even build 64-bit binaries.

Cheers,
  Steve



This is always a challenge with Free Software, of course: potential users of
a given product need to know how to pick their battles.

So, please advise as to the level of 64-bit Solaris 10 activity with pgsql.


Thanks,
Victor Odhner

 Andrew Sullivan [EMAIL PROTECTED] wrote:
On Thu, Oct 26, 2006 at 12:40:41PM -0700, [EMAIL PROTECTED] wrote:
How can I get the make to generate ELF 64-bit executables on  
Solaris 10?

We're on Fujitsu hardware; uname -a displays this:
  SunOS  5.10 Generic_118822-26 sun4us sparc FJSV,GPUZC-M


Well, to start with, are you using a compiler that can generate 64
bit binaries?  How about your libs?

A








Re: [GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Alvaro Herrera
Geoffrey wrote:
 Alvaro Herrera wrote:
 Geoffrey wrote:
 It appears that upgrading to 7.4.13 helped the problem we were having 
 with the postgres process terminating.  We still are having the problem, 
 but it does appear to be different, based on the output of backtraces. 
 The core files are much larger and there does seem to be a common thread 
 amongst most of them. I've attached one to see if anyone has any ideas 
 as to what our problem might be.  Suggestions would be appreciated.
 
 I don't think this backtrace makes much sense.  Did you compile with
 --enable-debug?
 
 It didn't make much sense to me either, but then, I'm not familiar with 
 the postgres code. :(  Is this a gcc flag?   I did compile it with -g 
 option, I don't see an --enable-debug in the gcc man page.

--enable-debug is a flag to configure.  It'll automatically add -g to
CFLAGS (I'm not sure if it does anything else, but it's easier than
specifying that yourself.)

 Are you sure you are passing the same postgres executable to GDB that
 was used to actually generate the core (i.e. the one that's running)?
 Is this core file generated from exactly that executable, or is it maybe
 one that was generated with an older executable?
 
 The core files were generated on a machine that does not have postgres 
 compiled with debugging information, thus I built from source for the 
 same version on another machine and ran gdb against it and the generated 
 core file.  I've done this in the past with different applications and 
 was successful in debugging the core file.  If you believe this is not 
 generating an accurate trace, then I'll need to rebuild postgres on the 
 production machine (which is not what I wanted to do).

I'm not 100% sure what you are saying here, but if it is what I believe,
then you didn't copy the newly compiled executable into the production
machine; that won't work.  You need to use a debug-enabled executable
both to produce the core file, and to pass to GDB for inspection.

On the other hand, if you can reproduce the failure on the development
machine, that core file would serve just fine.  (You'd only need to copy
the tables and relevant data from production to said machine).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Darcy Buskermolen
On October 31, 2006 08:53 am, Teodor Sigaev wrote:
  The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
  a self-contained test case directly to Teodor which shows the error.
 
  'ERROR:  index row requires 8792 bytes, maximum size is 8191'

 Uh, I see. But I'm really surprised: why do you use pg_trgm on big texts?
 pg_trgm is designed to find similar words and uses a technique known as
 trigrams. This works well on small pieces of text such as words or set
 expressions, but all big texts (in the same language) will be similar :(.
 So, I didn't take care to guarantee the index tuple's size limitation.
 In principle, it's possible to modify pg_trgm to make such a guarantee, but
 the index becomes lossy - all tuples fetched from the index have to be
 rechecked against the table's tuples.

The problem is that some of the data we are working with is not strictly text 
but bytea that we've run through encode(bytea, 'escape'), and we've had to 
resort to trigrams in an attempt to mimic LIKE for searches.  From our 
findings, tsearch2 does not match partial words in the way that a LIKE would, 
i.e. col LIKE 'go%' would match good and gopher.  pg_trgm will return those 
with the limit set appropriately, but tsearch2 does not.
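For illustration, a sketch of that LIKE-style pg_trgm lookup with hypothetical
names (t(col text)); the default similarity threshold of 0.3 is too strict for
a two-letter fragment, so it is lowered with set_limit() first:

SELECT set_limit(0.2);
SELECT col FROM t WHERE col % 'go';  -- now matches 'good', 'gopher', ...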



 If you want to search for similar documents, I can recommend having a look
 at the fingerprint technique (http://webglimpse.net/pubs/TR93-33.pdf). It's
 pretty close to trigrams and the similarity metric is the same, but it uses
 different signature calculations. And there are some tips and tricks:
 removing HTML markup, removing punctuation, lowercasing text and so on -
 it's an interesting and complex task.

-- 
Darcy Buskermolen
Command Prompt, Inc.
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997
http://www.commandprompt.com/



Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread Tom Lane
Ray Stell [EMAIL PROTECTED] writes:
 I was interested in going down that path.  This thread seems to be sparc
 oriented.  How about on opteron?  Will pg work on solaris10 without too
 much stress?

It seems like most of the questions in this thread could be answered by
perusing the latest version of FAQ_Solaris:
http://developer.postgresql.org/cvsweb.cgi/~checkout~/pgsql/doc/FAQ_Solaris?rev=1.22

In particular note the advice *against* using 64-bit on Sparc unless you
have a strong reason why you need it (hint: you probably don't).

regards, tom lane



[GENERAL] RAM Based Disk Drive?

2006-10-31 Thread Adam



I recently saw a hard disk drive that is really 4GB 
of RAM with an SATA 1.5Gb/s serial interface. It's basically a hard disk 
drive that uses RAM. It also has a battery backup, so if you lose power, 
you don't lose your data.

Has anyone tried using this, and if so, was there a 
noticeable performance increase?


Re: [GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Geoffrey

Alvaro Herrera wrote:


I'm not 100% sure what you are saying here, but if it is what I believe,
then you didn't copy the newly compiled executable into the production
machine; that won't work.  You need to use a debug-enabled executable
both to produce the core file, and to pass to GDB for inspection.


This is correct, I did not copy the executable to the production 
machine.  I suspect I'll be copying the binary over to the production 
system.


I moved the core file to the development machine where I built the new 
binaries.  Ran gdb against this core file and the postgres binary on 
this machine.  The core was not generated on the development machine.



On the other hand, if you can reproduce the failure on the development
machine, that core file would serve just fine.  (You'd only need to copy
the tables and relevant data from production to said machine).


I have not had any success in duplicating the failure on my development 
environment.  I suspect it's because I can't generate the volume of 
users.  The production system could well have 150-200 users at one time 
and we get a core file generated about 3-4 times a week, generally on 
the busiest days.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Alvaro Herrera
Darcy Buskermolen wrote:
 On October 31, 2006 08:53 am, Teodor Sigaev wrote:
   The problem as I remember it is pg_trgm, not tsearch2 directly; I've sent
   a self-contained test case directly to Teodor which shows the error.
  
   'ERROR:  index row requires 8792 bytes, maximum size is 8191'
 
  Uh, I see. But I'm really surprised: why do you use pg_trgm on big texts?
  pg_trgm is designed to find similar words and uses a technique known as
  trigrams. This works well on small pieces of text such as words or set
  expressions, but all big texts (in the same language) will be similar :(.
  So, I didn't take care to guarantee the index tuple's size limitation.
  In principle, it's possible to modify pg_trgm to make such a guarantee, but
  the index becomes lossy - all tuples fetched from the index have to be
  rechecked against the table's tuples.
 
 The problem is that some of the data we are working with is not strictly
 text but bytea that we've run through encode(bytea, 'escape'),

I think one good question is why you are storing bytea and then
searching it as if it were text.  Why not store the text as text, and put
the extraneous bytes somewhere else?  Certainly you wouldn't expect to
be able to find text among the bytes, would you?

I remember suggesting that you store the Content-type next to each object,
and then create partial trigram indexes where Content-type is text/*.
Did that plan not work for some reason?
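For what it's worth, a sketch of that partial-index idea with hypothetical
table and column names (objects(body bytea, content_type text)); the index
only covers rows whose declared type is textual:

CREATE INDEX objects_body_trgm_idx
    ON objects USING gist ((encode(body, 'escape')) gist_trgm_ops)
    WHERE content_type LIKE 'text/%';

SELECT * FROM objects
WHERE content_type LIKE 'text/%'
  AND encode(body, 'escape') % 'search term';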

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] RAM Based Disk Drive?

2006-10-31 Thread Merlin Moncure

On 10/31/06, Adam [EMAIL PROTECTED] wrote:



I recently saw a hard disk drive that is really 4GB of RAM with an SATA
1.5Gb/s serial interface.  It's basically a hard disk drive that uses RAM.
It also has a battery backup, so if you lose power, you don't lose your
data.

Has anyone tried using this, and if so, was there a noticeable performance
increase?


you are talking about the gigabyte i-ram.  in the database world, you
can achieve the same thing (actually better) by sticking those ram sticks
directly on the motherboard, assuming you are in a 64-bit environment
and the motherboard is decent.

the main advantage of the i-ram that i see is faster boot times (big
woop). call me when they have a version that does 256gb :-)

merlin



Re: [GENERAL] RAM Based Disk Drive?

2006-10-31 Thread Alan Hodgson
On Tuesday 31 October 2006 11:48, Merlin Moncure [EMAIL PROTECTED] 
wrote:
 you are talking about the gigabyte i-ram.  in the database world, you
 can achieve the same thing (actually better) by sticking those ram sticks
 directly on the motherboard, assuming you are in a 64-bit environment
 and the motherboard is decent.

 the main advantage of the i-ram that i see is faster boot times (big
 woop). call me when they have a version that does 256gb :-)

http://www.superssd.com/products_sub.htm

And, of course, the real advantage to a solid-state drive is random access 
speed, which vastly improves both random writes and random reads.

Not that I can afford one, of course ...

-- 
Ginsberg's Theorem:
 1) You can't win.
 2) You can't break even.
 3) You can't quit the game.




Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread Andrew Sullivan
On Tue, Oct 31, 2006 at 10:43:55AM -0700, [EMAIL PROTECTED] wrote:

 How widespread is the use of PostgreSQL on Solaris?  I am beginning

Well, one of the core members (Josh Berkus) actually works for Sun,
and Sun is officially supporting PostgreSQL in some capacity. 
Moreover, I ran our Postgres installations on Solaris for years, and
would have continued to do so if I coulda got Sun to be nice to me as
a customer.  (There's more to that story, though, most of which I
can't discuss.)  I believe there are actually packaged binaries
available for Solaris, but I always built my own using gcc.

So It Worked For Me(tm) is about all I can tell you.

 I had to do a lot of web searching to learn how to get past a
 compilation problem, apparently fixed for the current release; and
 now It appears (based on the reply below) that the configure
 setup is not ready to recognize 64-bit solaris environment; the

I don't see how my reply says anything of the sort.  All you asked
was how you did that, and I asked you what I think are reasonable
questions, like do you have a compiler that produces 64 bit
binaries?  I certainly don't know any of that from what you've told
me so far.  Posting the exact error message you're getting might lead
you to more productive responses from people using Solaris today.  Or
maybe you can use the packages, and you don't need to build it.

A


-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Unfortunately reformatting the Internet is a little more painful 
than reformatting your hard drive when it gets out of whack.
--Scott Morris



[GENERAL] Trouble with plpgsql generic trigger function using special variables

2006-10-31 Thread Lenorovitz, Joel
I'd like to create a trigger function whose use can extend to multiple
tables by employing the special variables available (e.g., TG_RELNAME).
Below is a simple version of such a function that ought to prevent
insertion of greater than 4 total records in the table that calls it.
I'm not sure that I'm using or dereferencing the trigger variables
correctly, however, particularly in the query.  I have tried many
syntax, type casting, and alternate variable assignment variations, but,
aside from parsing successfully, this code does not seem to work as
intended. Can somebody correct this specific example to have it work
properly and/or further explain how to use these variables?  Any advice
on outputting the values of the variables to the console for inspection
during testing would be welcome as well (RAISE EXCEPTION doesn't allow a
variable value in the message string, plus it seems a little harsh).

Thanks,
JL

CREATE OR REPLACE FUNCTION trigger_fxn() RETURNS TRIGGER AS $$
BEGIN
IF ((TG_OP = 'INSERT') AND (TG_WHEN = 'BEFORE')) THEN
IF (SELECT COUNT(*) FROM text(TG_RELNAME)) < 4
THEN
RETURN NEW;
ELSE
RETURN NULL;
END IF;
END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_bi BEFORE INSERT ON test
FOR EACH ROW EXECUTE PROCEDURE trigger_fxn();
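For comparison, a sketch of a version that should work on 8.1 or later, with
two caveats flagged: a table name cannot be a variable in a plain SQL query,
so the count has to be built with EXECUTE (EXECUTE ... INTO is available from
8.1), and RAISE NOTICE with % placeholders prints variable values to the
console without aborting:

CREATE OR REPLACE FUNCTION trigger_fxn() RETURNS TRIGGER AS $$
DECLARE
    row_count integer;
BEGIN
    IF (TG_OP = 'INSERT') AND (TG_WHEN = 'BEFORE') THEN
        -- TG_RELNAME is plain text; interpolate it into a dynamic query
        EXECUTE 'SELECT count(*) FROM ' || quote_ident(TG_RELNAME)
            INTO row_count;
        -- console-friendly inspection of the trigger variables
        RAISE NOTICE 'trigger on %, op %, current rows %',
            TG_RELNAME, TG_OP, row_count;
        IF row_count < 4 THEN
            RETURN NEW;
        END IF;
        RETURN NULL;  -- silently skip the insert once 4 rows exist
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;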



Re: [GENERAL] RAM Based Disk Drive?

2006-10-31 Thread Ron Johnson

On 10/31/06 13:48, Merlin Moncure wrote:
 On 10/31/06, Adam [EMAIL PROTECTED] wrote:


 I recently saw a hard disk drive that is really 4GB of RAM with an SATA
 1.5Gb/s serial interface.  It's basically a hard disk drive that uses
 RAM.
 It also has a battery backup, so if you lose power, you don't lose your
 data.

 Has anyone tried using this, and if so, was there a noticeable performance
 increase?
 
 you are talking about the gigabyte i-ram.  in the database world, you
 can achieve the same thing (actually better) by sticking those ram sticks
 directly on the motherboard, assuming you are in a 64-bit environment
 and the motherboard is decent.

 the main advantage of the i-ram that i see is faster boot times (big
 woop). call me when they have a version that does 256gb :-)

OLTP rates are *much* higher with SSDs.  (Even with lots of system
RAM, you *still* have to write the data back to the disk, and that
takes time.)  But that's only if you've got a small db that needs
*really* high tps rates.

I'd rather spend my money on enough system RAM to keep the active
portion of my DB in the OS cache.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.



Re: [GENERAL] postgresql and reiserfs

2006-10-31 Thread Ben
As I understand reiser4, its features are irrelevant if your filesystem 
will mostly be holding postgres data.


On Tue, 31 Oct 2006, km wrote:


Hi all,

Is anyone up on the database features in reiserfs (reiser4) with postgresql 8.x?

regards,
KM






Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread vodhner
Thanks to all who replied.

The reason I wanted to go 64-bit on our Fujitsu sparc-alikes was that our 
sysadmins switched to 64-bit Perl as part of the standard package, and that 
broke our DBD::Pg interface.  With no warning, we started getting a message 
about Pg.so: "Wrong ELF Class: ELFCLASS32".

My assumption had been that 64-bit was the latest and greatest, so of course 
config should have found the appropriate libraries etc. and set me up for a 
64-bit make.  The answer to my question seems to be yes, I could cause 64-bit 
compilation, but it's not the obvious way to go.  

I don't know if our Fujis might be faster in 64-bit mode, but who cares when 
you're talking about CGIs?

So I think I'll beat on the admins to give me a 32-bit Perl under a different 
name for talking to Pgsql.  (Our Sun operation is heavily administered to 
simulate our production world.  We also have linux boxes where we can play god 
if we feel that need.)

Thanks again,

Victor Odhner




Re: [GENERAL] updating to 7.4.13 helped it appears

2006-10-31 Thread Dimitri Fontaine
Hi list,

On Tuesday 31 October 2006 at 20:00, Geoffrey wrote:
 I have not had any success in duplicating the failure on my development
 environment.  I suspect it's because I can't generate the volume of
 users.  The production system could well have 150-200 users at one time
 and we get a core file generated about 3-4 times a week, generally on
 the busiest days.

You could use pgfouine[1] and tsung[2] to easily reproduce such a load, 
following those steps:
 - activate query logging on your dev env, using syslog
 - use pgfouine to produce a tsung session file [3]
 - make a tsung file configuration using this session file
 - use tsung to simulate as many users as wanted

[1]: http://pgfouine.projects.postgresql.org/
[2]: http://tsung.erlang-projects.org/
[3]: http://pgfouine.projects.postgresql.org/tsung.html

Regards,
-- 
Dimitri Fontaine
http://www.dalibo.com/




Re: [GENERAL] postgres import

2006-10-31 Thread Tomi NA

2006/10/31, Albe Laurenz [EMAIL PROTECTED]:

 I have a little problem. I have an .sql file (a db dump)
 and I want to import it into postgres on Linux.

 Does anyone know how I can do it?

You feed it to the command line interface psql.

Example:
psql -h host -p port -d database -U user < dump.sql


It's a good enough solution in most cases, but when the rowcount
starts to skyrocket, it simply doesn't seem to cut it (at least I
couldn't make it work). To load 1.5M rows (~230MB of INSERT statements),
I used gvim (wonderful tool!) to transform the INSERT statements into
a CSV file and then used an ETL tool (kettle - another wonderful tool)
to import the data into the database.
This could probably have been done much more easily: I'd welcome a
helpful hint so that I know for next time. :)
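For the record, a sketch of the COPY route once the data is in CSV form,
assuming a hypothetical table t whose columns match the file; psql's \copy
streams the file from the client side and is usually far faster than replaying
individual INSERT statements:

test=> \copy t from 'data.csv' with csv

Wrapping a big INSERT-style dump in a single BEGIN/COMMIT also helps a lot,
since otherwise each statement commits individually.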

t.n.a.



Text manipulation tools (was Re: [GENERAL] postgres import)

2006-10-31 Thread Ron Johnson

On 10/31/06 17:17, Tomi NA wrote:
 2006/10/31, Albe Laurenz [EMAIL PROTECTED]:
[snip]
 It's a good enough solution in most cases, but when the rowcount
 starts to skyrocket, it simply doesn't seem to cut it (at least I
 couldn't make it work). To load 1.5M rows (~230MB of INSERT statements),
 I used gvim (wonderful tool!) to transform the INSERT statements into
 a CSV file and then used an ETL tool (kettle - another wonderful tool)
 to import the data into the database.
 This could probably have been done much more easily: I'd welcome a
 helpful hint so that I know for next time. :)

Unix is chock full of streaming text manipulation tools.  In this
case, awk, Perl or Python would work well.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.



Re: [GENERAL] RAM Based Disk Drive?

2006-10-31 Thread Merlin Moncure

On 10/31/06, Alan Hodgson [EMAIL PROTECTED] wrote:

On Tuesday 31 October 2006 11:48, Merlin Moncure [EMAIL PROTECTED]
wrote:
 you are talking about the gigabyte i-ram.  in the database world, you
 can achieve the same thing (actually better) by sticking those ram sticks
 directly on the motherboard, assuming you are in a 64-bit environment
 and the motherboard is decent.

 the main advantage of the i-ram that i see is faster boot times (big
 woop). call me when they have a version that does 256gb :-)

http://www.superssd.com/products_sub.htm

And, of course, the real advantage to a solid-state drive is random access
speed, which vastly improves both random writes and random reads.


well, some motherboards out there, for example the tyan vx50
(http://www.tyan.com/products/html/vx50b4881.html) can stock up to
128gb ram.  For a database server, this will probably outperform the
'ramsan' on many workloads.  the ramsan is easier to stack though.

merlin



[GENERAL] Encoding, Unicode, locales, etc.

2006-10-31 Thread Carlos Moreno


Hi,

Even though I *think* I have a fairly clear understanding of encoding
and locale principles, I'm somewhat unsure of how some of the tools
available with PostgreSQL apply  (or rather, how should they be used).

1)  The way I understand it, encoding (character set) and locale are two
different things.  Yet, I see that initdb allows me to specify the default
encoding (I can override it when creating the database) for the databases
that I create later, and also the locale(s).  Why is it that the database
cluster is restricted to a single locale (or single set of locales) instead
of being configurable on a per-database basis?

2)  By the same token (more or less), I have a test database, for which
I ran initdb without specifying encoding or locale;  then, I created a
database with UTF8 encoding.  Then, from a psql console on the same
Linux machine that is running the server, I try lower() on a string that
contains characters with accents  (e.g., Spanish or French characters),
and it works as it should according to Spanish or French rules --- it
returns a string with the same characters in lowercase, with the same
accents.  Why did that work?  My Linux machine has all en_US.UTF-8
locales, and en_US is not even aware of accented characters, so it
doesn't seem like it's simply taking the encoding from the OS by default.
(Is it simply that the case is too obvious, so by default case conversion
does the obvious thing?)
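As a first check on (2), a sketch of how to see what the cluster is actually
using, from psql:

SHOW lc_ctype;    -- locale governing character classification (lower(), etc.)
SHOW lc_collate;  -- locale governing sort order
SELECT datname, pg_encoding_to_char(encoding) AS encoding FROM pg_database;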

I have several other details in which I'm not too clear, but perhaps
with any responses or pointers that I might get for the above, it would
clarify the whole confusion?

BTW, I did read the online PG documentation --- the section on
localization; the thing is, with everything that I read in there, my
reaction was more or less "ok, I knew that"; that is, it states facts
for which I know (or at least I think I know) the theory, but it did
not clarify how to use the given tools.

Thanks,

Carlos
--




Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Joshua D. Drake
Teodor Sigaev wrote:
 The problem as I remember it is pg_trgm, not tsearch2 directly; I've
 sent a self-contained test case directly to Teodor which shows the
 error.
 'ERROR:  index row requires 8792 bytes, maximum size is 8191'
 Uh, I see. But I'm really surprised: why do you use pg_trgm on big texts?
 pg_trgm is designed to find similar words and uses a technique known as
 trigrams. This works well on small pieces of text such as words or set
 expressions, but all big texts (in the same language) will be similar :(.
 So, I didn't take care to guarantee the index tuple's size limitation.
 In principle, it's possible to modify pg_trgm to make such a guarantee,
 but the index becomes lossy - all tuples fetched from the index have to
 be rechecked against the table's tuples.

We are trying to get something faster than ~ '%foo%', which tsearch2
does not give us :)

Joshua D. Drake



 
 If you want to search for similar documents, I can recommend having a look
 at the fingerprint technique (http://webglimpse.net/pubs/TR93-33.pdf). It's
 pretty close to trigrams and the similarity metric is the same, but it uses
 different signature calculations. And there are some tips and tricks:
 removing HTML markup, removing punctuation, lowercasing text and so on -
 it's an interesting and complex task.


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] Pgsql on Solaris

2006-10-31 Thread Joshua D. Drake
Ray Stell wrote:
 I was interested in going down that path.  This thread seems to be sparc
 oriented.  How about on opteron?  Will pg work on solaris10 without too
 much stress?


PostgreSQL will work wonderfully on Solaris10 Opteron.


Joshua D Drake


 
 isainfo -v
 64-bit amd64 applications
 sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc cx8
 tsc fpu
 32-bit i386 applications
 sse3 sse2 sse fxsr amd_3dnowx amd_3dnow amd_mmx mmx cmov amd_sysc cx8
 tsc fpu
 
 thanks.
 
 
 
 On Tue, Oct 31, 2006 at 10:43:55AM -0700, [EMAIL PROTECTED] wrote:
 So, please advise as to the level of 64-bit Solaris 10 activity with pgsql.

 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Joshua D. Drake
Alvaro Herrera wrote:
 Darcy Buskermolen wrote:
 On October 31, 2006 08:53 am, Teodor Sigaev wrote:
 The problem as I remember it is pg_tgrm not tsearch2 directly, I've sent
 a self contained test case directly to  Teodor  which shows the error.

 'ERROR:  index row requires 8792 bytes, maximum size is 8191'
 Uh, I see. But I'm really surprised why do you use pg_trgm on big text?
 pg_trgm is designed to find similar words and use technique known as
 trigrams. This will work good on small pieces of text such as words or set
 expression. But all big texts (on the same language) will be similar :(.
 So, I didn't take care about guarantee that index tuple's size limitation.
 In principle, it's possible to modify pg_trgm to have such guarantee, but
 index becomes lossy - all tuples gotten  from index should be checked by
 table's tuple evaluation.
 The problem is some of the data we are working with is not strictly text
 but bytea that we've run through encode(bytea, 'escape'),
 
 I think one good question is why are you storing bytea and then
 searching like it were text. 

We are not storing bytea, a customer is. We are trying to work around
customer requirements. The data that is being stored is not always text;
sometimes it is binary (a Flash file or a JPEG). We are using escaped
text to be able to search the string contents of that file.
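
As a sketch, the kind of query this implies (table and column names
hypothetical) is roughly:

SELECT id
  FROM files
 WHERE encode(payload, 'escape') LIKE '%foo%';

hence the interest in an index that can make %foo% searches fast.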

 Why not store the text as text, and put
 the extraneous bytes somewhere else?  Certainly you wouldn't expect to
 be able to find text among the bytes, would you?

Yes we do (and can) expect to find text among the bytes. We have
searches running, we are just running into the maximum size issues for
certain rows.

Sincerely,

Joshua D. Drake




-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Limited availability this week

2006-10-31 Thread Joshua D. Drake
Hello,

I am teaching yet another PostgreSQL class this week. I have limited
availability for those who are trying to reach me. I apologize if my
replies seem staggered and tardy.

Sincerely,

Joshua D. Drake

-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] postgres import

2006-10-31 Thread Tom Lane
Tomi NA [EMAIL PROTECTED] writes:
 2006/10/31, Albe Laurenz [EMAIL PROTECTED]:
 psql -h host -p port -d database -U user dump.sql

 It's a good enough solution in most cases, but when the rowcount
 starts to skyrocket, it simply doesn't seem to cut it (at least I
 couldn't make it work).

It certainly should work.  We've seen some platforms where libreadline
seems to be unable to tell the difference between input from a terminal
and input from a file, and performs a boatload of processing that would
be useful for interactive input but is just overhead here.  If that's
your problem, try this form instead:

psql -h host -p port -d database -U user -f dump.sql

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Gregory S. Williamson
I hesitate to mention it, since it's retrograde, uses OIDS, may not handle your 
locale/encoding correctly, may not scale well for what you need etc., etc.

But we've used fti (in the contrib package) to do fast searches for any bit of 
text in people's names ... we didn't go with tsearch2 because we were a bit 
worried about the need to search for fragments of names, and because names 
don't follow stemming rules and the like very well. Still, it might be a way of 
handling some of the uglier data. It was a bit of a pain to set up but seems to 
work well. Of course, users can ask for something commonplace and get back 
gazillions of rows, but apparently that's OK for the application this is part 
of. Caveat: only about 32 million rows in this dataset, partitioned into 
unequal groupings (about 90 total).

HTH (but I doubt it, for reasons that will undoubtedly be made clear ;-)

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Joshua D. Drake
Sent:   Tue 10/31/2006 7:46 PM
To: Teodor Sigaev
Cc: Darcy Buskermolen; PgSQL General; PostgreSQL-development
Subject:Re: [HACKERS] [GENERAL] Index greater than 8k

Teodor Sigaev wrote:
 The problem as I remember it is pg_tgrm not tsearch2 directly, I've
 sent a self contained test case directly to  Teodor  which shows the
 error.
 'ERROR:  index row requires 8792 bytes, maximum size is 8191'
 Uh, I see. But I'm really surprised why do you use pg_trgm on big text?
 pg_trgm is designed to find similar words and use technique known as
 trigrams. This will  work good on small pieces of text such as words or
 set expression. But all big texts (on the same language) will be similar
 :(. So, I didn't take care about guarantee that index tuple's size
 limitation. In principle, it's possible to modify pg_trgm to have such
 guarantee, but index becomes lossy - all tuples gotten  from index
 should be checked by table's tuple evaluation.

We are trying to get something faster than ~ '%foo%';

Which Tsearch2 does not give us :)

Joshua D. Drake



 
 If you want to search similar documents I can recommend to have a look
 to fingerprint technique (http://webglimpse.net/pubs/TR93-33.pdf). It's
 pretty close to trigrams and metrics of similarity is the same, but uses
 another signature calculations. And, there are some tips and trics:
 removing HTML marking,removing punctuation, lowercasing text and so on -
 it's interesting and complex task.


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Alvaro Herrera
Joshua D. Drake wrote:
 Alvaro Herrera wrote:
  Darcy Buskermolen wrote:
  On October 31, 2006 08:53 am, Teodor Sigaev wrote:
  The problem as I remember it is pg_tgrm not tsearch2 directly, I've sent
  a self contained test case directly to  Teodor  which shows the error.
 
  'ERROR:  index row requires 8792 bytes, maximum size is 8191'
  Uh, I see. But I'm really surprised why do you use pg_trgm on big text?
  pg_trgm is designed to find similar words and use technique known as
  trigrams. This will work good on small pieces of text such as words or set
  expression. But all big texts (on the same language) will be similar :(.
  So, I didn't take care about guarantee that index tuple's size limitation.
  In principle, it's possible to modify pg_trgm to have such guarantee, but
  index becomes lossy - all tuples gotten  from index should be checked by
  table's tuple evaluation.
  The problem is some of the data we are working with is not strictly text
  but bytea that we've run through encode(bytea, 'escape'),
  
  I think one good question is why are you storing bytea and then
  searching like it were text. 
 
 We are not storing bytea, a customer is. We are trying to work around
 customer requirements. The data that is being stored is not always text;
 sometimes it is binary (a Flash file or a JPEG). We are using escaped
 text to be able to search the string contents of that file.

Hmm, have you tried to create a functional trigram index on the
equivalent of strings(bytea_column) or something like that?

I imagine strings(bytea) would be a function that returns the
concatenation of all pure (7 bit) ASCII strings in the byte sequence.

On the other hand, based on Teodor's comment on pg_trgm, maybe this
won't be possible at all.
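
For illustration, one possible shape of such a function and index (a
hand-rolled sketch, not anything in contrib; table and column names are
hypothetical, and per Teodor's caveat a large result would still
overflow the index-tuple size limit):

-- (CREATE LANGUAGE plpgsql first if it's not already installed)
CREATE OR REPLACE FUNCTION strings(data bytea) RETURNS text AS $$
DECLARE
    result text := '';
    i int := 0;
    b int;
BEGIN
    WHILE i < length(data) LOOP
        b := get_byte(data, i);
        -- keep printable 7-bit ASCII, turn everything else into a space
        IF b >= 32 AND b < 127 THEN
            result := result || chr(b);
        ELSE
            result := result || ' ';
        END IF;
        i := i + 1;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

CREATE INDEX files_strings_trgm_idx
    ON files USING gist (strings(payload) gist_trgm_ops);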

  Why not store the text as text, and put
  the extraneous bytes somewhere else?  Certainly you wouldn't expect to
  be able to find text among the bytes, would you?
 
 Yes we do (and can) expect to find text among the bytes. We have
 searches running, we are just running into the maximum size issues for
 certain rows.

Do you mean you actually find stuff based on text attributes in JPEG
images and the like?  I thought those were compressed ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Encoding, Unicode, locales, etc.

2006-10-31 Thread Tom Lane
Carlos Moreno [EMAIL PROTECTED] writes:
 Why is it that the database
 cluster is restricted to a single locale (or single set of locales) instead
 of being configurable on a per-database basis?

Because we depend on libc's locale support, which (on many platforms)
isn't designed to switch between locales cheaply.  The fact that we
allow a per-database encoding spec at all was probably a bad idea in
hindsight --- it's out front of what the code can really deal with.
My recollection is that the Japanese contingent argued for it on the
grounds that they needed to deal with multiple encodings and didn't
care about encoding/locale mismatch because they were going to use
C locale anyway.  For everybody else though, it's a gotcha waiting
to happen.

This stuff is certainly far from ideal, but the amount of work involved
to fix it is daunting; see many past pg-hackers discussions.

 2)  By the same token (more or less), I have a test database for which
 I ran initdb without specifying encoding or locale;  then, I created a
 database with UTF8 encoding.

There's no such thing as "you didn't specify a locale".  If you didn't
specify one on the initdb command line, then it was taken from the
environment.  Try "show lc_collate" and "show lc_ctype" to see what
got used.
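
For example, from psql (the values shown are simply whatever initdb
inherited from the environment):

SHOW lc_collate;
SHOW lc_ctype;

-- probe the case-conversion behaviour directly:
SELECT lower('JOSÉ');   -- 'josé' if the active locale defines the mapping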

 I try lower() on a string that
 contains characters with accents (e.g., Spanish or French characters),
 and it works as it should according to Spanish or French rules --- it
 returns a string with the same characters in lowercase, with the same
 accents.  Why did that work?  My Linux machine's locales are all
 en_US.UTF-8, and en_US is not even aware of accented characters,

You sure?  I'd sort of expect a UTF8 locale to know this stuff anyway.
In any case, Postgres doesn't know anything about case conversion
beyond what toupper/tolower tell it, so your experimental result is
sufficient proof that that locale includes these conversions.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Joshua D. Drake

 We are not storing bytea, a customer is. We are trying to work around
 customer requirements. The data that is being stored is not always text;
 sometimes it is binary (a Flash file or a JPEG). We are using escaped
 text to be able to search the string contents of that file.
 
 Hmm, have you tried to create a functional trigram index on the
 equivalent of strings(bytea_column) or something like that?

I did consider that. I wonder, though, what sizes we are going to have
to deal with. Part of the problem is that some of the data we are
dealing with is quite large.

 
 I imagine strings(bytea) would be a function that returns the
 concatenation of all pure (7 bit) ASCII strings in the byte sequence.
 
 On the other hand, based on Teodor's comment on pg_trgm, maybe this
 won't be possible at all.
 Yes we do (and can) expect to find text among the bytes. We have
 searches running, we are just running into the maximum size issues for
 certain rows.
 
 Do you mean you actually find stuff based on text attributes in JPEG
 images and the like?  I thought those were compressed ...

Well, a JPEG is probably a bad example, but yes, they do search JPEGs, I
am guessing mostly for header information. Better examples would be
PostScript files, Flash files and, of course, large amounts of text +
HTML.

Sincerely,

Joshua D. Drake




-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Do you mean you actually find stuff based on text attributes in JPEG
 images and the like?  I thought those were compressed ...

Typically not --- the design assumption is that the text size wouldn't
amount to anything anyway compared to the image data, and it's better to
be able to pull it out with minimal processing.

I do suggest though that an image containing auxiliary data like text
comments is a multi-part structure, and that dumping it into a single
uninterpreted database field is spectacularly bad schema design.
You should pull the text out into a separate column once when you store
the data, instead of trying to fix things up when you search.
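
As a sketch of that design (names hypothetical; strings() stands in for
whatever text extraction the application actually needs):

CREATE TABLE files (
    id      serial PRIMARY KEY,
    payload bytea,   -- the raw file, stored uninterpreted
    words   text     -- searchable text pulled out of payload at write time
);

CREATE OR REPLACE FUNCTION files_extract_text() RETURNS trigger AS $$
BEGIN
    NEW.words := strings(NEW.payload);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER files_extract_text_trg
    BEFORE INSERT OR UPDATE ON files
    FOR EACH ROW EXECUTE PROCEDURE files_extract_text();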

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Compiling/Installing as a non-admin user

2006-10-31 Thread Ritesh Nadhani

Hello All

My professor and I are planning to work on machine learning in
PostgreSQL on top of tsearch2. So I have some questions:


We have a server where PostgreSQL is running without any problem under 
the postgres username with admin rights. I have a user account on that 
server. I plan to compile and run another PostgreSQL instance for our 
testing, so I was wondering how to do that. My prior experience with 
PostgreSQL has always been as an admin with full rights.


As I see it, the default make install for PostgreSQL will put the data 
directory etc. under /usr/local/pgsql, which I don't have access to as a 
normal user.


So I would like to compile and run PostgreSQL as a normal user, with 
everything (like the data directory) kept in my home directory. I should 
be able to run the instance on a separate port and to start and stop it.


Basically, I want to run the server in user mode.

How should I configure the make and install steps in these 
circumstances? What are your suggestions?


Ritesh

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Compiling/Installing as a non-admin user

2006-10-31 Thread Thomas Kellerer

Ritesh Nadhani wrote on 01.11.2006 07:51:
We have a server where PostgreSQL is running without any problem under 
the postgres username with admin rights. 


Interesting. On Windows, PG will *refuse* to run on an account with admin 
rights.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/