Re: [GENERAL] backup and restore

2008-01-16 Thread Richard Huxton

T.J. Adami wrote:

On 15 jan, 12:29, [EMAIL PROTECTED] (Richard Huxton) wrote:

It's quite simple to have problems with pgAdmin's backup procedure.

Hmm - shouldn't be, and if so then please let the pgAdmin people know!
They're always working to improve the package and it's



I think the same. We (the company I work for) develop ERP software with
more than 200 customers, and each one has an individual and
confidential PostgreSQL database.
Using pgAdmin version 1.6.x or lower with those databases causes
errors when the database is restored from a .backup file created with
compression from pgAdmin. However, to remove any doubt, I reproduced those
errors with the pg_restore utility from the terminal as well.


What puzzles me is that as far as I know, pgAdmin *uses* pg_dump and 
pg_restore to handle its backup needs.



P.S.: I still have to trace exactly these pgAdmin issues. The issues really
exist (I would not post a fake report here), but I
can't send database dumps to reproduce them, according to the contract
with our customers (confidential data).


I'm sure that will be appreciated. Open source projects rely on
knowledgeable users who are prepared to do a little investigating. It's
what helps drive reliability up.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] LIKE and REGEX optimization

2008-01-16 Thread Richard Huxton

Chris Browne wrote:

The only idea that comes to mind to follow that is to look into
tsearch.  On PostgreSQL 8.3, it's built in; on 8.2 and earlier, it's a
contrib module.  It allows indexing on words inside columns, which
would seem to fit your requirement.


You might want to google for postgresql trigram too. Some notes at the 
location below.

http://www.sai.msu.su/~megera/oddmuse/index.cgi/ReadmeTrgm

It's more of a substring search than tsearch2 is, so it might meet your
needs better. It's in the contrib package / source directory.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] backup and restore

2008-01-16 Thread Dave Page
On 16/01/2008, Richard Huxton [EMAIL PROTECTED] wrote:
 T.J. Adami wrote:
 What puzzles me is that as far as I know, pgAdmin *uses* pg_dump and
 pg_restore to handle its backup needs.

It does. Make sure it's using the correct version for your server
though - that's the only reason it might fail that I can think of. In
1.6 it'll use the first version it finds. In 1.8, you can select the
path yourself if you like.

Regards, Dave

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] ATTN: Clodaldo was Performance problem. Could it be related to 8.3-beta4?

2008-01-16 Thread Clodoaldo
2008/1/16, Tom Lane [EMAIL PROTECTED]:
 I went through this thread again, and noticed something that no one
 seems to have remarked on at the time: the vmstat numbers near the
 bottom of this post

 http://archives.postgresql.org/pgsql-general/2008-01/msg00161.php

This post is from January 5.

 show close to 100% I/O wait time (either that or 50% idle 50% I/O wait,
 which I suspect is an artifact).  We subsequently concluded that the
 SELECT side of the INSERT/SELECT command is not where the problem is,
 so all the cycles are going into the actual row insertion part.

 I don't know of any reason to think that insertion is slower in 8.3
 than it was in 8.2, and no one else has reported anything of the sort.
 So I'm leaning to the idea that this suggests some kind of
 misconfiguration of the disk setup in Clodoaldo's new server.  There
 was some earlier discussion about not having the RAID configured right:

 http://archives.postgresql.org/pgsql-general/2008-01/msg00169.php

This post is from January 6, so the RAID problem was presumed fixed
after those vmstat numbers I posted.

 I'm thinking it's still not right :

I will test both 8.2 and 8.3 again on the old box, which is now retired
and doesn't have the RAID factor.

Regards, Clodoaldo Pinto Neto

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] INSERT with a composite columnt from query

2008-01-16 Thread Reg Me Please
Hi all.

I have two table like these:

create table compo (
  t text,
  i int
);

create table tab (
  x int,
  c compo
);

Then I have a function like this:

create or replace function f_compo()
returns setof compo as $body$
...
$body$ language sql stable;


What I'd need to do is to insert the results from f_compo() into
the table TAB along with a value x.

I expected something like this to work:

insert into tab
  select 42,row( c.* ) from f_compo() c;

But I get
ERROR:  cannot cast type record to compo

Any hint?

TALIA

-- 
Reg me, please!

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ATTN: Clodaldo was Performance problem. Could it be related to 8.3-beta4?

2008-01-16 Thread Clodoaldo
2008/1/16, Joshua D. Drake [EMAIL PROTECTED]:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On Wed, 16 Jan 2008 00:29:16 -0500
 Tom Lane [EMAIL PROTECTED] wrote:

  Joshua D. Drake [EMAIL PROTECTED] writes:
   If we are sure that this issue is apparent in actual row insertion it
   should be easy to duplicate.
 
  I think you missed my point entirely: I believe it's specific to

 Wouldn't be the first time.

  Clodoaldo's installation.  Certainly I didn't have any luck
  reproducing a huge 8.2-to-8.3 slowdown here.

 Well I would agree that all the testing that I have done doesn't show
 8.3 being slower. I do see this though in the very first post:

 http://archives.postgresql.org/pgsql-general/2008-01/msg00143.php

 Production: Fedora Core 6, postgresql 8.2.5, AMD XP2600, 2 GB mem, two
 7200 ide disks with pg_xlog alone in the second disk.
 New: Fedora 8, postgresql 8.3-beta4, Core Duo 2.33 GHz, 4 GB mem, two
 7200 sata disks in Raid 1.

 On the old machine pg_xlog is on its own disk but on the new machine he
 is getting random churn on the I/O because he only has the RAID 1 for
 DATA and pg_xlog.

 He tested 8.3 on the old box and had similar results.

No, I tested 8.3 only on the new box, where I also tested 8.2. If it is
a hardware problem, which I don't know how to investigate, it is only
triggered by 8.3. When I tested 8.3 and 8.2 on the new box the RAID
problem was apparently fixed, as you noted.

 Although, I wonder
 if he tested 8.3b4 on the old box in the exact disk layout (with xlog
 on a separate disk).

I didn't test 8.3 on the old box. The new box is now the
production box running 8.2.6 and I can't test on it anymore, but I can
test everything again on the old box. That would rule out the RAID
factor. I will have to wait for the weekend.

Regards, Clodoaldo Pinto Neto

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] INSERT with a composite columnt from query

2008-01-16 Thread Albe Laurenz
Reg Me Please wrote:
 I have two table like these:
 
 create table compo (
   t text,
   i int
 );
 
 create table tab (
   x int,
   c compo
 );
 
 Then I have a function like this:
 
 create or replace function f_compo()
 returns setof compo as $body$
 ...
 $body$ language sql stable;
 
 
 What I'd need to do is to insert the results from f_compo() into
 the table TAB along with a value x.
 
 I expected something like this to work:
 
 insert into tab
   select 42,row( c.* ) from f_compo() c;
 
 But I get
 ERROR:  cannot cast type record to compo

The whole exercise seems a bit pointless, but you could do it like this:

INSERT INTO tab SELECT 42, CAST (c AS compo) FROM f_compo() c;

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] INSERT with a composite columnt from query

2008-01-16 Thread Richard Huxton

Reg Me Please wrote:


What I'd need to do is to insert the results from f_compo() into
the table TAB along with a value x.

I expected something like this to work:

insert into tab
  select 42,row( c.* ) from f_compo() c;

But I get
ERROR:  cannot cast type record to compo


You need to add an explicit cast, I believe.

INSERT INTO tab (x,c)
SELECT 42, ROW(c.*)::compo FROM f_compo() c;

Why you don't need one in the case of INSERT ... VALUES isn't immediately
clear to me.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-16 Thread Stefan Schwarzer



Ok, did what you said: stopping server, deleting newly created
data directory, re-running initdb, starting the server, stopping  
the

server.



Renamed empty data directory.



Restarting server: NOT COMPLAINING that you need to run initdb or
something else. Although it says that it starts, nothing
appears in the process list (ps -ef).


Hmm, you need to stop here and figure out exactly what happens.

What procedure are you using to start the server?  I assume you
are not directly typing postmaster, but using some script,
because the bare postmaster would certainly not act that way.
I guess that either the script silently runs initdb for you
(which is generally thought a bad idea nowadays) or that it
is redirecting the postmaster's log output someplace that you're
not looking.  Anyway, don't go past this step until you understand
what you're seeing.



Uff

Ok, here is what I did after compiling postgres 8.1 (getting it
from MacPorts):


/opt/local/lib/postgresql81/bin/initdb -D Documents/data_postgres
/opt/local/lib/postgresql81/bin/pg_ctl -D Documents/data_postgres -l  
logfile start

ps -ef shows the postmaster process
/opt/local/lib/postgresql81/bin/pg_ctl -D Documents/data_postgres -l  
logfile stop

renaming data_postgres to data_postgres.orig
/opt/local/lib/postgresql81/bin/pg_ctl -D Documents/data_postgres -l  
logfile start

which tells me:
postmaster starting
but in ps -ef there is no process listed

When I re-rename the newly created folder (data_postgres.orig to
data_postgres) the start works again. But it does not work with
the old (backed-up) data folder...


So, as I mentioned before, it seems it is not so simple that I
can just copy the old (backed-up) data folder over the newly created one.


Is there any way I can figure out with which version I created
the old databases? Perhaps, in a worst-case scenario, they were
created with 8.0. I will try...


The logfile is telling me this when I try to start the server with my  
old data folder:


FATAL:  database files are incompatible with server
DETAIL:  The database cluster was initialized with PG_CONTROL_VERSION  
738394112, but the server was compiled with PG_CONTROL_VERSION 812.


What does it mean? I have and had 8.1 installed...

Thanks for any help!!

Stef

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Search connections created per day

2008-01-16 Thread João Paulo Zavanela
 Hi there,

 I want to know how many connections are created simultaneously in the
database per day.
 per day.

Do you want to know the max simo connections at any given time, or the
total connections in a given day?  Assuming you want to know the max
simo connections at any given time, I think you're going to have to
check pg_stat_activity every x minutes and store the number of rows
somewhere to retrieve later.

 Are there some table where i can search it?

pg_stat_activity, but it only has the info for right now in it.

Thanks...

I want to know the max simultaneous connections at any given time, but I only
have a backup of the database, so I can't use pg_stat_activity.

This database is running at a client site, and I need to know whether he is
creating more connections than allowed.

Thanks.
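Since pg_stat_activity cannot be used here, one workaround (my suggestion, not something proposed in the thread) is to enable log_connections and log_disconnections on the client's server and post-process the log afterwards: the maximum number of simultaneous connections is the peak of a running counter over connect (+1) and disconnect (-1) events. A minimal sketch of the counting step:

```python
# Sketch: compute peak simultaneous connections from connect/disconnect
# events, e.g. parsed out of log_connections / log_disconnections lines.
def peak_connections(events):
    """events: iterable of (timestamp, delta), delta +1 on connect and
    -1 on disconnect. Returns the maximum concurrent connection count."""
    current = peak = 0
    for _ts, delta in sorted(events):
        current += delta
        peak = max(peak, current)
    return peak

# Hypothetical sample: two overlapping sessions, then a third one later.
events = [(1, +1), (2, +1), (3, -1), (4, +1), (5, -1), (6, -1)]
print(peak_connections(events))  # 2
```

Parsing the actual log lines into (timestamp, delta) pairs is left out, since the format depends on log_line_prefix.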



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Prepared statement's plan

2008-01-16 Thread Vyacheslav Kalinin
Hello,

I would appreciate it if someone explained to me how exactly prepared, parametrized
statements are planned, i.e. what kind of assumptions the planner makes on parameter
values, selectivity, expected row count etc. that affect in particular
whether indexes will be used or not. For instance, consider the following
case:

create table t1 ( id serial, val int);

insert into t1 (val)
select trunc(10*random())
  from generate_series(1, 1000000);

create index idx_t1 on t1(val);

analyze t1;
--

1.
prepare stmt (int) as
 select * from t1 where val > $1;

explain execute stmt(10);

---
QUERY PLAN
Seq Scan on t1  (cost=0.00..17401.94 rows=32 width=8)
 Filter: (val > $1)

2.
prepare stmt (int, int) as
 select * from t1 where val > $1 and val < $2;

explain execute stmt(2, 3);

---
QUERY PLAN
Bitmap Heap Scan on t1  (cost=151.74..5307.59 rows=5000 width=8)
  Recheck Cond: ((val > $1) AND (val < $2))
  ->  Bitmap Index Scan on idx_t1  (cost=0.00..150.49 rows=5000 width=0)
        Index Cond: ((val > $1) AND (val < $2))

Hmm, why does it expect 5000 rows here? What influences this expectation?

3.
prepare stmt (int) as
 select * from t1 where val = $1 or $1 is null;

explain execute stmt(2);

QUERY PLAN
Seq Scan on t1  (cost=0.00..17401.94 rows=5013 width=8)
  Filter: ((val = $1) OR ($1 IS NULL))

That's the weirdest behavior: where did the 5013-row assumption come from? Why
use a seq scan then? I should mention that the planner refuses to use anything
but a seq scan here even if I explicitly disable it with set enable_seqscan to
off.

In general, I wonder if one can get somewhat predictable planner behavior
in such cases, since we have a lot of code written in plpgsql and the
patterns above are pretty common there.
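On the 5000-row question: when a prepared statement is planned without knowing the parameter values, the planner cannot consult the column statistics for a concrete value and falls back on fixed default selectivities. The arithmetic below is my illustration; the constants mirror the planner's historical defaults (0.005 for a closed range, 1/3 for a single inequality) and should be treated as assumptions rather than documented guarantees:

```python
# Sketch: default selectivities applied to a 1,000,000-row table.
# Constants modeled on PostgreSQL's DEFAULT_RANGE_INEQ_SEL and
# DEFAULT_INEQ_SEL (assumptions here, for illustration only).
ROWS = 1_000_000
DEFAULT_RANGE_INEQ_SEL = 0.005  # "val > $1 AND val < $2", params unknown
DEFAULT_INEQ_SEL = 1.0 / 3.0    # single "val > $1", param unknown

print(int(ROWS * DEFAULT_RANGE_INEQ_SEL))  # 5000, the bitmap-scan estimate
```

That is consistent with the rows=5000 figure in the second plan above.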

Thanks,
Viatcheslav


[GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-16 Thread James B. Byrne
I am prototyping a system migration that is to employ Ruby, Rails and
PostgreSQL.  Rails has the convention that the primary key of a row is an
arbitrary integer value assigned by the database manager through a
sequence.  As it turns out, the legacy application employs essentially the
same convention in most instances.

My question is this: Can one assign an id number to a sequenced key column
on create and override the sequencer?  If one does this then can and, if
so, how does the sequencer in Postgresql handle the eventuality of running
into a block of keys holding previously assigned numbers?

For example.  The existing client master dataset employs an eight digit
account number as primary key.  The values in use tend to cluster in
groups at each thousand increment, thus 1..375, 1001..1288, 2001..2225,
3001..3312, ..., 2001001..2001476, ..., etc.

Assuming that these existing entries were all loaded into the new table
with the values given as their primary keys and given that one could not
simply start the sequencer at a value above the highest existing value: If
one were to add a record and auto-generate a sequence number, can the
sequencer handle looping from 1 to 375 and returning 376, and then continue
until it reaches 1001, when it needs to loop again until 1288 and then
return 1289, and so forth?
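For what it's worth (this is my illustration, not from the thread): a plain PostgreSQL sequence will not do this. nextval() just increments, so once it reaches a block of pre-assigned ids the inserts start failing on the primary-key constraint. The looping behaviour described above needs a custom allocator, typically a PL/pgSQL function; the logic it would implement looks like this:

```python
# Sketch of a "next free id" allocator that skips blocks of
# pre-assigned keys. Plain sequences do not behave this way; in
# PostgreSQL this logic would typically live in a PL/pgSQL function.
def next_free_id(used, start=1):
    """Return the smallest unused id >= start."""
    candidate = start
    while candidate in used:
        candidate += 1
    return candidate

# Existing account numbers cluster in blocks, e.g. 1..375 and 1001..1288.
used = set(range(1, 376)) | set(range(1001, 1289))
print(next_free_id(used))        # 376: first free id after the 1..375 block
print(next_free_id(used, 1001))  # 1289: first free id after 1001..1288
```

Note that a scan like this costs time proportional to the block it skips and is racy under concurrency unless locked properly; setting the sequence past the highest existing value and accepting the gaps is usually simpler.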

During the load of the initial table data it would probably be necessary
to disable the sequencer for this column. Is this in fact the case? If so,
how is this done and how is the sequencer restored after the initial
migration of data is complete?

I presume that I can write my own sequencer function to accomplish this in
any case but I wish to know if the existing method handles this case. 
Another solution is to simply decouple the existing key value from the new one
and treat the existing client number as a piece of data (with or without an
index), but that seems redundant since the new arbitrary key value might
just as well be the existing arbitrary key value.

Dispensing with the existing arbitrary number is another option but, the
fact is that present business practice is for employees to refer to their
clients and vendors by account number. The existing computer system is 25
years old but employs account numbers that predate automation. The firm is
well past the century mark and some of these numbers have been in use with
a few clients since the end of the 1800s.  So, while not strictly a business
case, both the firm and some of its clients have a strong, if irrational,
attachment to preserving the existing scheme.

I regret if these questions appear naive but I am struggling with a lot of
new information on a number of fronts and as usual wish to get quick
answers to questions that may be far more involved than I realize.

Sincerely,

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne  mailto:[EMAIL PROTECTED]
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Postgresql 8.2.4 on linux-sparc problem

2008-01-16 Thread Lew

Lew wrote:

Tom Lane wrote:

Before you get into that, try reducing these parameters:

maintenance_work_mem = 1572864   
work_mem = 1048576


They are way too high, especially the second one.


So if 1.5 MB maintenance_work_mem and 1 MB work_mem are way too high, 
why are the default values in the postgresql.conf on installation 16 MB 
and 1 MB, respectively?


Replying to my own question - I must learn to check docs *before* asking:
http://www.postgresql.org/docs/8.2/interactive/config-setting.html
"Some settings specify a memory or time value. Each of these has an
implicit unit, which is either kilobytes, blocks (typically eight
kilobytes), milliseconds, seconds, or minutes."


I was off by a factor of 1024 in how I read that.  The OP had specified 1.5 
*GB* and 1 *GB* respectively!


--
Lew

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] know the schema name in a trigger

2008-01-16 Thread danilo.juvinao
Hello,

I want to know how I can get the name of the schema in which a trigger
executes.

For example, if I have a schema myschema and a table mytable with
a trigger procedure, I want to know, inside the trigger procedure
(in plpgsql), the schema name myschema.

thanks,

pd. sorry for my english, i'm from Colombia.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Sun acquires MySQL

2008-01-16 Thread Russ Brown
http://blogs.mysql.com/kaj/sun-acquires-mysql.html/

What does this mean for Sun's support of Postgres?

-- 

Russ.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] A counter argument about DISTINCT and GROUP BY in PostgreSQL

2008-01-16 Thread dterrors
On Jan 4, 11:48 am, [EMAIL PROTECTED] (Gregory Stark) wrote:
 [EMAIL PROTECTED] writes:
  I've just spent a few hours searching and reading about the postgres
  way of selecting distinct records.  I understand the points made about
  the ORDER BY limitation of DISTINCT ON, and the ambiguity of GROUP BY,
  but I think there's a (simple, common) case that has been missed in
  the discussion. Here is my situation:

  table projects:
  id  title more stuff (pretend there's 20 more columns.)
  ---
  1   buildrome moredata   inothercolumns
  2   housework   evenmoredata letssay20columns

  table todos:
  id projectid name duedate
  -
  1  1 conquer1pm
  2  1 laybricks  10pm
  3  2 dolaundry  5pm

  In English, I want to select projects and order them by the ones that
  have todos due the soonest.  Does that sound like a reasonable
  request?

 SELECT *
   FROM (
 SELECT DISTINCT ON (projects.id) projects.*
   FROM projects
   JOIN todos ON (todos.projectid = projects.id)
  ORDER BY projects.id, projects.duedate ASC
 )
  ORDER BY duedate ASC
 OFFSET 10
  LIMIT 20

  Option E: I could use a subselect.  But notice my offset, limit.  If I
  use a subselect, then postgresql would have to build ALL of the
  results in memory (to create the subselect virtual table), before I
  apply the offset and limit on the subselect.

 Don't assume Postgres has to do things a particular way just because there's a
 subselect involved. In this case however I expect Postgres would have to build
 the results in memory, but not because of the subselect, just because that's
 the only way to do what you're asking.

When you say it would build the results in memory, do you mean the
entire subselected table? The subselect in your example doesn't do
any offset/limit. (And do you think what I'm asking for is odd or
unusual? I can think of a hundred examples besides a todo list.)

 You're asking for it to pick out distinct values according to one sort key
 then return the results sorted according to another key. Even if you had an
 index for the first key or Postgres used a hash to perform the distinct, the
 ORDER BY will require a sort.

I'm not trying to avoid doing a sort, actually.

  Any suggestion would be appreciated.

  BTW for those of you who are curious, in mysql (that other db), this
  would be:

  select a.* from projects a inner join todos b on b.projectid = a.id
  group by a.id order by b.duedate limit 10,20;

 And what does the plan look like?

It looks great in mysql! It works perfectly: that was from my old mysql
code before I switched, or rather tried to switch, to postgres. I get:

id  title more stuff
 ---
1   buildrome moredata   inothercolumns
2   housework   evenmoredata letssay20columns
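As a side note (my illustration, not part of the thread): the intended semantics, one row per project ordered by that project's earliest todo due date, can be sketched outside SQL like this:

```python
# Sketch of the intended query semantics: one row per project,
# ordered by that project's earliest todo due date.
# Data mirrors the example tables earlier in the thread.
projects = {1: "buildrome", 2: "housework"}
todos = [  # (projectid, name, duedate as a 24h hour)
    (1, "conquer", 13),    # 1pm
    (1, "laybricks", 22),  # 10pm
    (2, "dolaundry", 17),  # 5pm
]

# Earliest due date per project (the DISTINCT ON / GROUP BY step).
earliest = {}
for projectid, _name, due in todos:
    earliest[projectid] = min(earliest.get(projectid, due), due)

# Order projects by that earliest due date (the outer ORDER BY).
ordered = sorted(projects, key=lambda pid: earliest[pid])
print([projects[pid] for pid in ordered])  # ['buildrome', 'housework']
```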


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] backup and restore

2008-01-16 Thread hiddenhippo
Having recently jumped ship from MSSQL to Postgres I'm finding it a
little difficult getting onto the 'correct path.'  Basically I have a
live database and I want to take a copy of that and then restore it
onto a backup server.

Firstly I'm using Postgres 8.2 and pgAdmin 1.6.3

Within the pgAdmin interface I've selected the database I wish to
backup, given it a filename, selected 'Compress' and have chosen blobs
and OIDs.  From the documentation it appears that if you use foreign
keys then you should use OIDs.  Anyway, the backup appears to work
fine and a file is created.  No errors are displayed at any point.

When I come to restore the database I firstly create a blank one, and
then right-click and choose restore. The problem is that errors are
thrown because of foreign-key constraints.

With a little more investigation I changed the backup output to
'plain' so that I could see the script generated.  It appears that the
backup process is created tables and their associated data in the
wrong order.  For example,  at the top of the generated file it
attempts to insert data into a given table, however the table dictates
that some entries must have a corresponding entry in another, for
example a user id (a foreign key saying that UID value must exist on
the user tables primary key).  After detailing the data for the first
table it then details the data that should go into the parent table,
or in my example above, it then attempts to populate the user table.
When you run the restore process this it fails because the user table
doesn't have the corresponding entries.

Could someone please suggest why this is happening and how, if it can
be done, to fix it?

Thanks


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Prepared Statements

2008-01-16 Thread Janning Vygen
Am Sonntag 13 Januar 2008 00:46:50 schrieb Tom Lane:
 Kris Jurka [EMAIL PROTECTED] writes:
  On Thu, 10 Jan 2008, [EMAIL PROTECTED] wrote:
  What do you mean with longer lifespan? Doesn't the JDBC driver uses
  the PREPARE Sql Statement and therefore the prepared Statement has the
  same lifespan as the connection? If so, as connections are pooled and
  never closed, the prepared Statement will last forever. What if the
  table analyzes changes and a better execution plan could be found?
 
  Data and stats changes do not trigger a replan.

 Note that this is no longer true as of 8.3: a stats update from ANALYZE
 (either manual or autovacuum) will trigger invalidation of cached plans.

Great, I'll have to look out for 8.3, but I am quite happy with 8.1 because of
the automatic security updates coming with Debian out of the box. I'll have to
look for another way to replan. Maybe I just have to close and reopen my
connections from time to time.

kind regards
Janning


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] libpq updateable resultset?

2008-01-16 Thread Tristen Ennemuist
Hello, I'm new to libpq and I'm trying to find a
means of updating result-set columns.

I can see there's a getter, PQgetvalue, but no setter
equivalent, PQsetvalue.

I've read the documentation and checked out the
examples, but neither seems to indicate that this is
possible.

I'd appreciate some direction on how I might go about
achieving this, thanks!

Regards
Tristen





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Postgresql 8.2.4 on linux-sparc problem

2008-01-16 Thread Lew

Tom Lane wrote:

Before you get into that, try reducing these parameters:


maintenance_work_mem = 1572864  
work_mem = 1048576


They are way too high, especially the second one.


So if 1.5 MB maintenance_work_mem and 1 MB work_mem are way too high, why 
are the default values in the postgresql.conf on installation 16 MB and 1 MB, 
respectively?


--
Lew

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Robert Treat
On Friday 11 January 2008 13:44, Josh Harrison wrote:
 On Jan 11, 2008 1:22 PM, Erik Jones [EMAIL PROTECTED] wrote:
  On Jan 11, 2008, at 12:14 PM, Scott Marlowe wrote:
   On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
   Hi
    We have an Oracle production database with some terabytes of data.
    We wanted to migrate that to a Postgresql database (right now a test
    database, not production).
   What are the good options to do that?
   Please advise me on where to look for more information on this topic
  
   You have two steps to work on.  The first is the DDL, to create
   equivalent tables in pgsql as in oracle, the second is to migrate over
   your data.
  
 I had done this with the test database. For DDL generation I used XML/XSL
 and for data migration I used JDBC. I can get the DDL generated fine. With
 JDBC the data migration is a bit slow.
 My question is about the data migration. I'm not sure how to try this with an
 online Oracle database. We are required to run both the Postgres and Oracle
 databases simultaneously for a couple of months (at least until we decide
 whether we are going to shut down Oracle for good!). Since the Oracle
 database is a production database, it will have updates/inserts during this
 time. How do you manage that?


About a year ago we converted one of our clients multi-TB ODS systems built in 
Oracle over to PostgreSQL. There's a case study about it you can get from the 
Sun folks at  
http://www.sun.com/third-party/srsc/resources/postgresql/postgre_success_dwp.pdf

Now, due to the size of the project, we had to run both the Oracle and 
Postgres systems in parallel for several months. We kept the data up to date 
using a slew of custom code, designed to replicate data from either the ODS 
system or the OLTP system, depending on various technical and business 
factors.  My guess is that in your case, you'd want a mix of replicating data 
from the current Oracle database and your application, as best possible. 

Figuring out how to go about replicating the data is certainly easier if 
you have been through it before, but I don't think it is anything too 
magical; we went through a number of different ideas and ended up using 
multiple methods depending on the data involved.  HTH. 

-- 
Robert Treat
Database Architect
http://www.omniti.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] know the schema name in a trigger

2008-01-16 Thread danilo.juvinao
I did try it but always get public.

i want get the schema that execute the trigger, because i have a
function that is used by diferents tables in diferents schemas.


thanks for your help,
Danilo


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] LIKE and REGEX optimization

2008-01-16 Thread Kico Zaninetti
Hi all.

I have a database with 62 million records and I have to run a
SELECT using LIKE.

This is my select:
SELECT * FROM phone WHERE name LIKE '%ZANINETTI%' AND city = 'SAO
PAULO' AND state = 'SP'

I have an index created like this:
CREATE INDEX telefones_idx2 ON public.phone
  USING btree (name varchar_pattern_ops, city, state);

When I explain the query I have this:
QUERY PLAN
Bitmap Heap Scan on telefones  (cost=1031528.27..2726942.75 rows=4
width=145)
  Recheck Cond: (((city)::text = 'SAO PAULO'::text) AND ((state)::text
= 'SP'::text))
  Filter: ((name)::text ~~ '%ZANINETTI%'::text)
  ->  Bitmap Index Scan on telefones_idx2  (cost=0.00..1031528.27
rows=1712760 width=0)
Index Cond: (((city)::text = 'SAO PAULO'::text) AND
((state)::text = 'SP'::text))


The cost is over 1 million! That's too high and I have to reduce it.
Does someone know how I can do that?

Thanks in advance.

Kico Zaninetti
carpe diem

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] 8.2.4 serious slowdown

2008-01-16 Thread Lew

(attribution restored)
Clodoaldo wrote:

  I don't know if the plan would be the same but this is a bit clearer:
 
  WHERE COALESCE(b.quantity, 0) > COALESCE(b.deliveredsum, 0)


Sim Zacks wrote:
That should be true, but sometimes we get deliveries of greater quantity 
than we ordered.

I just want to know the times when I haven't gotten the complete order yet.
If we get more than we ordered, I don't want it to be in this query.


Huh?

How does that relate to the suggestion?

The suggested expression is mathematically equivalent to and perfectly 
substitutable for the original.


--
Lew

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] COUNT() with ORDER BY

2008-01-16 Thread Andrus
create temp table test (col char);
select count(col) from test order by col;

causes

ERROR:  column test.col must appear in the GROUP BY clause or be used in 
an aggregate function
SQL state: 42803


How to fix this ?
This statement is generated by the DbLinq driver and it is difficult to rewrite
the driver.

The query looks quite reasonable to me and should be accepted by PostgreSQL 8.3
Beta.


Andrus.



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] varchar sort ordering ignore blanks

2008-01-16 Thread Luca Arzeni
Hi there,
I have a table with a single column, pk of varchar type

The table contains few names, say:

 A
 C
B

In the first two records there is a blank between the opening quote and the
following letter ('A' and 'C'), while the third one has the 'B' immediately
following the quote (without a blank).

In postgres 7.4.7 (debian sarge), if I issue a select to sort the record I 
(correctly) obtain:
 A
 C
B

In postgres 8.1.9 (debian etch), if I issue a select to sort the record I 
(mistakenly) obtain:
 A
B
 C

That is: the sort order in postgres 8.1.9 seems to ignore the blank.

In all cases I'm using LATIN9 during DB creation, but I also tested
with ASCII, UTF8 and LATIN1 encodings.

Can someone help me to get the correct order in postgres 8.1.9 ? 

=== Sample code  ===

CREATE TABLE t_table
(
  c_column varchar(30) NOT NULL,
  CONSTRAINT t_table_pk PRIMARY KEY (c_column)
)
WITHOUT OIDS;

INSERT INTO t_table(c_column) VALUES (' A');
INSERT INTO t_table(c_column) VALUES ('B');
INSERT INTO t_table(c_column) VALUES (' C');

select * from t_table order by c_column asc;

=

Thanks, Luca Arzeni


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-16 Thread Thomas Pundt
On Mittwoch, 16. Januar 2008, Stefan Schwarzer wrote:
| The logfile is telling me this when I try to start the server with my  
| old data folder:
|
| FATAL:  database files are incompatible with server
| DETAIL:  The database cluster was initialized with PG_CONTROL_VERSION  
| 738394112, but the server was compiled with PG_CONTROL_VERSION 812.
|
| What does it mean? I have and had 8.1 installed...

I didn't follow the thread, but look at the output of 

  $ printf "%x\n" 738394112
  2c030000

and 

  $ printf "%x\n" 812
  32c

This looks like an endianness mismatch; did you already mention on what 
architecture you are on?
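To make the byte-swap check concrete: reinterpreting the four bytes of 812 (0x32c) in the opposite byte order yields exactly the 738394112 from the FATAL message, which is why this reads as an endianness mismatch. A small sketch (Python used purely for illustration):

```python
import struct

expected = 812  # PG_CONTROL_VERSION the server was compiled with

# Reinterpret the 4 bytes of the expected version with swapped byte order,
# as a machine of the opposite endianness would read them from disk.
swapped = struct.unpack("<I", struct.pack(">I", expected))[0]

print(hex(expected))  # 0x32c
print(hex(swapped))   # 0x2c030000 == 738394112, the value in the error
```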

Ciao,
Thomas

-- 
Thomas Pundt [EMAIL PROTECTED]  http://rp-online.de/ 

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Segmentation fault with 8.3 FTS ISpell

2008-01-16 Thread Teodor Sigaev

Fixes are committed to CVS, hope, they will help you.
--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Can I create a TYPE (or DOMAIN) with arguments?

2008-01-16 Thread R.A.
I think the function-trigger approach will be useful to me to bypass this 
problem.

Thanks to all again for your suggestions!

--
Free pop3 email with a spam filter.
http://www.bluebottle.com/tag/5


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-16 Thread Martijn van Oosterhout
On Fri, Jan 11, 2008 at 11:43:54AM -0500, James B. Byrne wrote:
 My question is this: Can one assign an id number to a sequenced key column
 on create and override the sequencer?  If one does this then can and, if
 so, how does the sequencer in Postgresql handle the eventuality of running
 into a block of keys holding previously assigned numbers?

You can set the counter during create, or at any time later. However,
the counter is not defined by the column as such and will happily
return numbers already in the table if you screw it up.

The usual process is to insert normally when loading the data and then
do a setval() on the sequence to move it past the values already stored.
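A sketch of that load-then-bump pattern (sqlite3 stands in for the runnable part; the setval() call shown in the comment is the usual PostgreSQL idiom, and the table and sequence names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# Load the data with its pre-assigned ids, bypassing any sequence.
conn.executemany("INSERT INTO items (id, name) VALUES (?, ?)",
                 [(1, "a"), (7, "b"), (42, "c")])

# In PostgreSQL you would then run, once, something like:
#   SELECT setval('items_id_seq', (SELECT max(id) FROM items));
# so the sequence resumes above every id already present.
target = conn.execute("SELECT MAX(id) FROM items").fetchone()[0]
print(target)  # 42 -- the sequence should continue from above this
```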

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] varchar sort ordering ignore blanks

2008-01-16 Thread Csaba Nagy
On Tue, 2008-01-15 at 16:32 +0100, Luca Arzeni wrote:
 In all cases I'm using locale LATIN9 during DB creation, but I tested also 
 with ASCII, UTF8 and LATIN1 encoding.

I guess this has nothing to do with the encoding, but with the collation
rules used, which are governed by the lc_collate parameter. See what you
get on both DBs for:

SHOW lc_collate ;

Quoting from the docs:

The nature of some locale categories is that their value has to be
fixed for the lifetime of a database cluster. That is, once initdb has
run, you cannot change them anymore. LC_COLLATE and LC_CTYPE are those
categories. They affect the sort order of indexes, so they must be kept
fixed, or indexes on text columns will become corrupt. PostgreSQL
enforces this by recording the values of LC_COLLATE and LC_CTYPE that
are seen by initdb. The server automatically adopts those two values
when it is started.

See:
http://www.postgresql.org/docs/8.1/static/charset.html
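The effect is easy to reproduce outside the database: Python's default string sort compares byte-wise, like the C locale, while linguistic collations such as en_US typically ignore leading whitespace at the primary level, which matches the 8.1.9 ordering the poster saw. A sketch of the byte-wise half:

```python
names = [" A", "B", " C"]

# Byte-wise comparison (C-locale behavior): the leading blank (0x20)
# sorts before any letter, so both padded values come first.
print(sorted(names))  # [' A', ' C', 'B']

# Under a linguistic collation such as en_US.UTF-8 the blank is
# typically ignored at the primary level, giving [' A', 'B', ' C'].
```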

HTH,
Csaba.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] libpq updateable resultset?

2008-01-16 Thread Martijn van Oosterhout
On Sun, Jan 13, 2008 at 09:55:08PM +1100, Tristen Ennemuist wrote:
 Hello, I'm new to libpq and I'm trying to discover a
 means of updating resultset columns.
 
 I can see there's a get PQgetvalue but no set
 equivalent PQsetvalue.

Libpq doesn't do this.

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-16 Thread Stefan Schwarzer
| The logfile is telling me this when I try to start the server with my
| old data folder:
|
| FATAL:  database files are incompatible with server
| DETAIL:  The database cluster was initialized with PG_CONTROL_VERSION
| 738394112, but the server was compiled with PG_CONTROL_VERSION 812.
|
| What does it mean? I have and had 8.1 installed...

I didn't follow the thread, but look at the output of

 $ printf "%x\n" 738394112
 2c030000

and

 $ printf "%x\n" 812
 32c

This looks like an endianness mismatch; did you already mention on what
architecture you are on?


MacPro, Leopard

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison


We have an Oracle production database with some terabytes of data.
We wanted
to migrate that to Postgresql (right now...a test database and not
production) database.
What are the good options to do that?
Please advise me on where to look for more information on this
 topic
   
You have two steps to work on.  The first is the DDL, to create
equivalent tables in pgsql as in oracle, the second is to migrate
 over
your data.
   
  I had done this with the test database. For ddl generation I used
 xml/xsl
  and for data migration I used jdbc. I can get the ddl generated fine.
 With
  JDBC the data migration is a bit slow.
  My question is abt the data migration. Im not sure how to try this with
 an
  online oracle database. We are required to run both postgres and oracle
  database simultaneously for a couple of months (atleast till we decide
  whether we are going to shut down oracle for good !!!). Since the oracle
  database is a production database, It will have updates/inserts during
 this
  time. How do you manage that?
 

 About a year ago we converted one of our clients' multi-TB ODS systems
 built in
 Oracle over to PostgreSQL. There's a case study about it you can get from
 the
 Sun folks at

 http://www.sun.com/third-party/srsc/resources/postgresql/postgre_success_dwp.pdf

 Now, due to the size of the project, we had to run both the Oracle and
 Postgres systems in parallel for several months. We kept the data up to
 date
 using a slew of custom code, designed to replicate data from either the
 ODS
 system or the OLTP system, depending on various technical and business
 factors.  My guess is that in your case, you'd want a mix of replicating
 data
 from the current Oracle database and your application, as best possible.

 Figuring out how you go about replicating the data is  certainly easier if
 you've have been through it before, but I don't think it is anything too
 magical; we went through a number of different ideas and ended up using
 multiple methods depending on the data involved.  HTH.

Thanks.
We have around 3TB of data now running in Oracle. I have done replication in
postgresql but not much in Oracle. Is there a way you can replicate between
Oracle and Postgresql? For writing the custom code, do you suggest any
preferred language, like Java, Perl, etc.?

Thanks
Josh


Re: [GENERAL] COUNT() with ORDER BY

2008-01-16 Thread Martijn van Oosterhout
On Mon, Jan 14, 2008 at 09:02:46PM +0200, Andrus wrote:
 create temp table test (col char);
 select count(col) from test order by col;
 
 causes
 
 ERROR:  column test.col must appear in the GROUP BY clause or be used in 
 an aggregate function
 SQL state: 42803
 
 
 How to fix this?
 This statement is generated by the DbLinq driver and it is difficult to rewrite 
 the driver.

Sorry? The query is entirely non-sensical. The output will be a single
row telling you the number of non-null elements in the col column. How
can it be ordered?
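Two hedged ways out, sketched with Python's sqlite3 standing in for the server (table and column names from the thread, data invented): drop the ORDER BY, since a lone aggregate row has nothing to order, or add a GROUP BY so there are per-value counts for ORDER BY to sort.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (col TEXT)")
conn.executemany("INSERT INTO test VALUES (?)", [("b",), ("a",), ("a",)])

# Plain aggregate: exactly one row out, so ORDER BY is meaningless -- drop it.
total = conn.execute("SELECT COUNT(col) FROM test").fetchone()[0]

# If per-value counts were intended, GROUP BY gives ORDER BY something to sort.
per_value = conn.execute(
    "SELECT col, COUNT(col) FROM test GROUP BY col ORDER BY col").fetchall()

print(total)      # 3
print(per_value)  # [('a', 2), ('b', 1)]
```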

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy




[GENERAL] Building 8.1.11 on FC4

2008-01-16 Thread [EMAIL PROTECTED]
Good morning everyone!

As you may have guessed from previous posts, i just recently migrated my
main database server from 7.4.x to 8.1.11.  This part went pretty smoothly.
:)

One of the client machines in my architecture is a Fedora Core 4 box.  I
unfortunately cannot change this so I had to build 8.1.11 packages. When I
was building the packages it would fail unless I passed 'pgfts 0'.

Of course now, I am running into deadlock issues that never existed before.
There have been no code changes and the behavior of the code seems to
suggest to me that threads may be the issue.

Has anyone actually built 8.1.x on FC4?  My guess is that I just need a
little trick to force rpmbuild to finish successfully with thread safety
enabled but it has been suggested that perhaps FC4 is not capable of being
thread safe.  So if anyone could offer any advice that might help me sort
this out, I would truly appreciate it.

TIA

-bill


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Joshua D. Drake

Russ Brown wrote:

http://blogs.mysql.com/kaj/sun-acquires-mysql.html/

What does this mean for Sun's support of Postgres?



Does it matter? :) I am sure OmniTI and Command Prompt will be
happy to help any disgruntled customer :)

Sincerely,

Joshua D. Drake


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:

 On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
  Thanks
 
  On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
 
   On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
Hi
 We have an Oracle production database with some terabytes of data. We
 wanted to migrate that to Postgresql (right now...a test database and
not production) database.  What are the good options to do that?
  
   I have written some Free software, DBI-Link, for just this use case.
   The software is under the BSD license, so you can use it freely.  I
   also offer consulting on such migrations.
  
    I downloaded DBI-Link.
   When I tried to compile postgres8.3 with the --with-perl option it gives me
  this error.

 You may have an old or broken version of perl.  What's the output of
 perl -v?

I use version 5.8.8.
Thanks, I sorted that out. It was a linker problem. I installed binutils
and made gcc use that ld. Now I can compile postgres with the --with-perl option.


Re: [GENERAL] Forgot to dump old data before re-installing machine

2008-01-16 Thread Tom Lane
Stefan Schwarzer [EMAIL PROTECTED] writes:
 This looks like an endianess mismatch; did you already mention on what
 architecture you are on?

 MacPro, Leopard

Did you just move from a PPC-based Mac to an Intel-based one?
If so, you're out of luck --- you need to go back to the PPC
to make a dump of those files.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Gregory Williamson
Joshua Drake shaped the electrons to say:

 
 Russ Brown wrote:
  http://blogs.mysql.com/kaj/sun-acquires-mysql.html/
  
  What does this mean for Sun's support of Postgres?
  
 
 Does it matter? :) I am sure OmniTI and Command Prompt will be
 happy to help any disgruntled customer :)

Well, in the past year or so Sun seemed to have been moving toward support of 
PostgreSQL and there was considerable traffic on Great And Subtle Things Beyond 
My Ken [Jignesh Shah, I think, might be the name of the Sun engineer who was 
working on some issues]. If they own MySQL, support for PostgreSQL might be 
reduced, and perhaps Oracle ? Hard to tell from the blog report. Sun might have 
some specific use for some aspect of MySQL, or maybe it is part of something 
bigger. But potentially it could freeze PostgreSQL out of more Sun-centric 
shops. {locally we use Linux mostly, some Sun, but used to be much more Sun 
oriented; personally from a Sun background and have a faint fondness for their 
servers}. I doubt that any entity other than Sun can provide software fixes for 
issues in Sun kernels that might improve PostgreSQL's performance.

My $0.04 worth (inflation)

Greg Williamson
Senior DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)

*may* not *does*, and how the heck could anyone destroy all copies of the 
original ?!? That's a hoot!   my Corporate Masters did not make me say this, 
too much beer did! 


Re: [GENERAL] know the schema name in a trigger

2008-01-16 Thread Albe Laurenz
danilo.juvinao wrote:
 i want know how can i get the schema name that execute a trigger.
 
 for example, if a have a schema myschema and a table mytable and
 it have a trigger procedure, i want know inside the trigger procedure,
 with plpgsql, the shcema name myschema..

http://www.postgresql.org/docs/current/static/plpgsql-trigger.html

See TG_TABLE_SCHEMA.

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Building 8.1.11 on FC4

2008-01-16 Thread Greg Smith

On Wed, 16 Jan 2008, [EMAIL PROTECTED] wrote:


it has been suggested that perhaps FC4 is not capable of being
thread safe.


FC4 deprecated use of Linux Threads in preference to the Native POSIX 
Thread Library, and I wouldn't be surprised to find that transition wasn't 
perfect (welcome to 2005's bugs on a release that was obsolete in 9 
months).  There are instructions for using the older library at 
http://docs.fedoraproject.org/release-notes/fc4/#sn-nptl and that one may 
work better for you.  Alternately, you might be able to upgrade to a later 
NPTL version.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Weslee Bilodeau
Russ Brown wrote:
 http://blogs.mysql.com/kaj/sun-acquires-mysql.html/
 
 What does this mean for Sun's support of Postgres?
 

Speaking from pure opinion here :)

Oracle for example is buying out the little techs that MySQL relies on -
BDB and InnoDB.

The main company, MySQL AB was all that was left to effectively give
them control of MySQL.

PostgreSQL obviously doesn't have this risk - No one company holds that
much power, and even the -core team is split between the various
supporting companies around PostgreSQL.

Sun wants to support both.
If you wanted to ensure MySQL continued as a company, and you had the
money, its not a bad idea really.

Sun buys MySQL AB, ensures it continues.

I don't see Sun's support of PostgreSQL going away though.
I'm sure they have various support contracts out, not to mention various
employees working on it.

Sun can still contribute equally to PostgreSQL, and it can still make
just as much money on PostgreSQL as it does on MySQL.

PostgreSQL, I imagine, is cheaper for them since the community does more of the
work and they can just provide the additional support. With MySQL they have
additional costs as they do more of the development.

I'm actually very curious now that Sun owns it, will they change how the
community contributes to the database?

I personally prefer the PostgreSQL community, joining and contributing
to the community I've found to be easier.

Then there is - How will Oracle feel about Solaris now?
Before Sun just supported the competition, it didn't own a direct
competitor.


Weslee


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Scott Marlowe
On Jan 16, 2008 7:19 AM, Russ Brown [EMAIL PROTECTED] wrote:
 http://blogs.mysql.com/kaj/sun-acquires-mysql.html/

 What does this mean for Sun's support of Postgres?

I don't see why it should change really, they kind of swim in different waters.

What I do think is interesting is that Sun might actually more fully
open up MySQL than it has been so far.  I.e relax the viral nature of
the connect libs.  Go back to LGPL licensing on them.   Stop trying to
collect licensing fees on an open source database.  Make the money on
consulting instead.

It would also be nice to see them do something to streamline the whole
2^n licensing / build model they currently struggle under.  Taking a
year to fix a fairly innocuous packaging bug, then reintroducing that
bug, then squashing it again is not good.  It would be nice to see
them streamline the development process.  Having 4 or 5 active
development branches is too chaotic.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Building 8.1.11 on FC4

2008-01-16 Thread Tom Lane
[EMAIL PROTECTED] [EMAIL PROTECTED] writes:
 One of the client machines in my architecture is a Fedora Core 4 box.  I
 unfortunately cannot change this so I had to build 8.1.11 packages. When I
 was building the packages it would fail unless I passed 'pgfts 0'.

Fail how?  I've never had to disable pgfts in any Fedora release.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Sun's PostgreSQL contribution?

2008-01-16 Thread dvanatta

How much does Sun currently contribute to the project?  Do they have
designated coders?
-- 
View this message in context: 
http://www.nabble.com/Sun-acquires-MySQL-tp14881966p14884994.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Joshua D. Drake

Scott Marlowe wrote:

On Jan 16, 2008 7:19 AM, Russ Brown [EMAIL PROTECTED] wrote:

http://blogs.mysql.com/kaj/sun-acquires-mysql.html/

What does this mean for Sun's support of Postgres?


I don't see why it should change really, they kind of swim in different waters.

What I do think is interesting is that Sun might actually more fully
open up MySQL than it has been so far.  I.e relax the viral nature of
the connect libs.


To my knowledge that argument is long gone, over and no longer 
relevant. What they do is hold their security fixes back and have 
official packages etc..


Sincerely,

Joshua D. Drake


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Sun's PostgreSQL contribution?

2008-01-16 Thread Joshua D. Drake

dvanatta wrote:

How much does Sun currently contribute to the project?  Do they have
designated coders?


They employ a core member who is not a hacker.
They provide some machines etc..

Sincerely,

Joshua D. Drake

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Sun's PostgreSQL contribution?

2008-01-16 Thread Alvaro Herrera
Joshua D. Drake wrote:
 dvanatta wrote:
 How much does Sun currently contribute to the project?  Do they have
 designated coders?

 They employ a core member who is not a hacker.
 They provide some machines etc..

They contributed a DTrace patch and the Sun hackers can be seen from
time to time.  They're not just marketing ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Sun's PostgreSQL contribution?

2008-01-16 Thread Joshua D. Drake

Alvaro Herrera wrote:

Joshua D. Drake wrote:

dvanatta wrote:

How much does Sun currently contribute to the project?  Do they have
designated coders?

They employ a core member who is not a hacker.
They provide some machines etc..


They contributed a DTrace patch and the Sun hackers can be seen from
time to time.  They're not just marketing ...


I didn't mean to imply that. I took his question as does Sun have 
regularly contributing hackers like yourself or AndrewD.


Sincerely,

Joshua D. Drake



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Gauthier, Dave
If MySQL goes the way of Java, maybe there isn't too much to worry
about.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Weslee Bilodeau
Sent: Wednesday, January 16, 2008 10:56 AM
To: Russ Brown
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Sun acquires MySQL

Russ Brown wrote:
 http://blogs.mysql.com/kaj/sun-acquires-mysql.html/
 
 What does this mean for Sun's support of Postgres?
 

Speaking from pure opinion here :)

Oracle for example is buying out the little techs that MySQL relies on -
BDB and InnoDB.

The main company, MySQL AB was all that was left to effectively give
them control of MySQL.

PostgreSQL obviously doesn't have this risk - No one company holds that
much power, and even the -core team is split between the various
supporting companies around PostgreSQL.

Sun wants to support both.
If you wanted to ensure MySQL continued as a company, and you had the
money, its not a bad idea really.

Sun buys MySQL AB, ensures it continues.

I don't see Sun's support of PostgreSQL going away though.
I'm sure they have various support contracts out, not to mention various
employees working on it.

Sun can still contribute equally to PostgreSQL, and it can still make
just as much money on PostgreSQL as it does on MySQL.

PostgreSQL, I imagine, is cheaper for them since the community does more of the
work and they can just provide the additional support. With MySQL they have
additional costs as they do more of the development.

I'm actually very curious now that Sun owns it, will they change how the
community contributes to the database?

I personally prefer the PostgreSQL community, joining and contributing
to the community I've found to be easier.

Then there is - How will Oracle feel about Solaris now?
Before Sun just supported the competition, it didn't own a direct
competitor.


Weslee


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Dirk Riehle



The main company, MySQL AB was all that was left to effectively give
them control of MySQL.

PostgreSQL obviously doesn't have this risk - No one company holds that
much power, and even the -core team is split between the various
supporting companies around PostgreSQL.
  


Is this up to date?

http://www.postgresql.org/community/contributors/

I'm asking because I was always told EnterpriseDB employs now 5 out of 7 
core committers.


Thanks for the clarification.

Dirk

--
Phone: + 1 (650) 215 3459
Web: http://www.riehle.org



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Dawid Kuroczko
On Jan 11, 2008 7:14 PM, Scott Marlowe [EMAIL PROTECTED] wrote:

 On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
  Hi
  We have an Oracle production database with some terabytes of data. We wanted
  to migrate that to Postgresql (right now...a test database and not
  production) database.
  What are the good options to do that?
  Please advise me on where to look for more information on this topic

 You're going to need to use your brain for a fair portion of this,
 because how you use oracle will be just different enough from everyone
 else that no boxed solution will fit.

 You have two steps to work on.  The first is the DDL, to create
 equivalent tables in pgsql as in oracle, the second is to migrate over
 your data.

 I've generally done the ddl conversion by hand in an editor, and
 migrated data over with some scripting language like perl or php.

If you are migrating terabytes don't use perl.  I did an experimental,
just-for-fun migration some time ago and DBD::Oracle worked remarkably
slowly...  What you need is to get a program which will export data
from Oracle as CSV.  As far as I know Oracle does not provide such
a tool (though it will import CSV happily through sqlldr),
but you can Google out C code which does just that.  I don't remember
where I left it... :-(

From that, you just need to stream CSV into PostgreSQL's COPY
command.  It worked FAST.  Really.

And be wary of data types conversion.
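A minimal sketch of the streaming idea (Python for illustration; the rows, table name, and the psycopg2 call in the comment are assumptions, not from the thread):

```python
import csv
import io

# Pretend these rows were fetched from Oracle in batches.
rows = [(1, "widget"), (2, "gadget, deluxe")]

# Stream them out as CSV, the format COPY ... WITH CSV ingests;
# note the value containing a comma gets quoted automatically.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
buf.seek(0)

# On the PostgreSQL side you would feed the stream to COPY, e.g. with
# psycopg2:  cur.copy_expert("COPY items FROM STDIN WITH CSV", buf)
print(buf.read())
```

In a real migration the buffer would be a pipe between the Oracle reader and the COPY writer rather than an in-memory string.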

  Regards,
 Dawid

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Dave Page
2 out of 7 - which would be Bruce & I.

Regards, Dave

On 1/16/08, Dirk Riehle [EMAIL PROTECTED] wrote:

  The main company, MySQL AB was all that was left to effectively give
  them control of MySQL.
 
  PostgreSQL obviously doesn't have this risk - No one company holds that
  much power, and even the -core team is split between the various
  supporting companies around PostgreSQL.
 

 Is this up to date?

 http://www.postgresql.org/community/contributors/

 I'm asking because I was always told EnterpriseDB employs now 5 out of 7
 core committers.

 Thanks for the clarification.

 Dirk

 --
 Phone: + 1 (650) 215 3459
 Web: http://www.riehle.org
   


 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq


-- 
Sent from my mobile device

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Joshua D. Drake
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Wed, 16 Jan 2008 10:25:45 -0800
Dirk Riehle [EMAIL PROTECTED] wrote:

 
  The main company, MySQL AB was all that was left to effectively give
  them control of MySQL.
 
  PostgreSQL obviously doesn't have this risk - No one company holds
  that much power, and even the -core team is split between the
  various supporting companies around PostgreSQL.

 
 Is this up to date?
 
 http://www.postgresql.org/community/contributors/
 
 I'm asking because I was always told EnterpriseDB employs now 5 out
 of 7 core committers.

What? They do employ more contributors than that (oh and just because
they are core doesn't mean they have commit rights).

They employ Dave Page and Bruce Momjian, who are core members.
They also employ Greg Stark and Heikki, who are fairly visible
contributors.

Sincerely,

Joshua D. Drake



- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHjlBIATb/zqfZUUQRAl1xAJ9S/rm4ex4lC8xTXft5Wm1/qVjg2gCdG4s8
rT/NXbJ2Mad+AMOSNAiQ674=
=FlKG
-----END PGP SIGNATURE-----

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 10:11 AM, Josh Harrison [EMAIL PROTECTED] wrote:



 On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:

  On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
   Thanks
  
   On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
  
On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
 Hi
 We have an Oracle production database with some terabytes of
  data. We
 wanted to migrate that to Postgresql (right now...a test database
  and
 not production) database.  What are the good options to do that?
   
I have written some Free software, DBI-Link, for just this use case.
The software is under the BSD license, so you can use it freely.  I
also offer consulting on such migrations.
   
I downloaded DBI-Link.
   When I tried to compile postgres8.3 with-perl option it gives me this
  error.
 
  You may have an old or broken version of perl.  What's the output of
  perl -v?

 I use version 5.8.8
 Thanks. I sorted out that. That was a linker problem. I installed binutils
 and made gcc use that ld. Now I can compile postgres with perl option.


Now a new problem has come up. When I try the createlang command
createlang plperlu test
I get this error...
createlang: language installation failed: ERROR:  could not load library
/export/home/josh/postgres8.3-perl/lib/plperl.so: ld.so.1: postgres:
fatal: relocation error: file
/export/home/josh/postgres8.3-perl/lib/plperl.so: symbol PL_curpad:
referenced symbol not found

perl information:
 perl -v
This is perl, v5.8.8 built for i86pc-solaris-64int

Can you advise pls
josh


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 1:43 PM, Josh Harrison [EMAIL PROTECTED] wrote:



 On Jan 16, 2008 10:11 AM, Josh Harrison [EMAIL PROTECTED] wrote:

 
 
  On Jan 15, 2008 3:58 PM, David Fetter [EMAIL PROTECTED] wrote:
 
   On Mon, Jan 14, 2008 at 11:42:50AM -0500, Josh Harrison wrote:
Thanks
   
On Jan 12, 2008 9:19 AM, David Fetter [EMAIL PROTECTED] wrote:
   
 On Fri, Jan 11, 2008 at 01:02:01PM -0500, Josh Harrison wrote:
  Hi
  We have an Oracle production database with some terabytes of
   data. We
  wanted to migrate that to Postgresql (right now...a test database
   and
  not production) database.  What are the good options to do that?

 I have written some Free software, DBI-Link, for just this use
   case.
 The software is under the BSD license, so you can use it freely.
I
 also offer consulting on such migrations.

 I downloaded DBI-Link.
When I tried to compile postgres8.3 with-perl option it gives me
   this error.
  
   You may have an old or broken version of perl.  What's the output of
   perl -v?
 
  I use version 5.8.8
  Thanks. I sorted out that. That was a linker problem. I installed
  binutils and made gcc use that ld. Now I can compile postgres with perl
  option.


 Now a new problem has come up. When I try the createlang command
 createlang plperlu test
 I get this error...
 createlang: language installation failed: ERROR:  could not load library
 /export/home/josh/postgres8.3-perl/lib/plperl.so: ld.so.1: postgres:
 fatal: relocation error: file
 /export/home/josh/postgres8.3-perl/lib/plperl.so: symbol PL_curpad:
 referenced symbol not found

 perl information:
  perl -v
 This is perl, v5.8.8 built for i86pc-solaris-64int

 Can you advise pls
 josh

 Forgot to mention..
My perl information
perl -V
Summary of my perl5 (revision 5 version 8 subversion 8) configuration:
  Platform:
osname=solaris, osvers=2.10, archname=i86pc-solaris-64int
uname='sunos aishwarya 5.10 generic_118844-26 i86pc i386 i86pc '
config_args='-Dcc=gcc -Dprefix=/export/home/josh/perl5 -Duse64bitint
-Duseshrplib'
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef use5005threads=undef useithreads=undef
usemultiplicity=undef
useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
use64bitint=define use64bitall=undef uselongdouble=undef
usemymalloc=n, bincompat5005=undef
  Compiler:
cc='gcc', ccflags ='-fno-strict-aliasing -pipe
-Wdeclaration-after-statement -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV
-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV
-DPERL_USE_SAFE_PUTENV',
optimize='-O',
cppflags='-fno-strict-aliasing -pipe -Wdeclaration-after-statement'
ccversion='', gccversion='3.4.5', gccosandvers='solaris2.8'
intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=12345678
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
ivtype='long long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t',
lseeksize=8
alignbytes=4, prototype=define
  Linker and Libraries:
ld='gcc', ldflags =' -L/usr/local/lib '
libpth=/usr/local/lib /usr/lib /usr/ccs/lib
libs=-lsocket -lnsl -ldl -lm -lc
perllibs=-lsocket -lnsl -ldl -lm -lc
libc=/lib/libc.so, so=so, useshrplib=true, libperl=libperl.so
gnulibc_version=''
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' -Wl,-E -R
/export/home/josh/perl5/lib/5.8.8/i86pc-solaris-64int/CORE'
cccdlflags='-fPIC', lddlflags=' -Wl,-E -G -L/usr/local/lib'


Characteristics of this binary (from libperl):
  Compile-time options: PERL_MALLOC_WRAP PERL_USE_SAFE_PUTENV
USE_64_BIT_INT USE_LARGE_FILES USE_PERLIO
  Built under solaris
  Compiled at Jan 16 2008 12:13:26
  @INC:
/export/home/josh/perl5/lib/5.8.8/i86pc-solaris-64int
/export/home/josh/perl5/lib/5.8.8
/export/home/josh/perl5/lib/site_perl/5.8.8/i86pc-solaris-64int
/export/home/josh/perl5/lib/site_perl/5.8.8
/export/home/josh/perl5/lib/site_perl


[GENERAL] SVN event hooks for PL/PGSQL functions and DDL?

2008-01-16 Thread Webb Sprague
In another thread, someone mentioned writing hooks for Subversion that
would grab function definitions and DDL statements from the current
database and push them into the repository.

Does anyone have a few scripts/cookbook examples for this?  Is there
a cookbook section on the postgres wiki where they might go?  I am
sure I can figure this out, but I wouldn't mind cheating...

Thx

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] PL/pgsql function handle CUBE values

2008-01-16 Thread Rajarshi Guha
Hi, I am trying to write a PL/pgsql function that will take a CUBE  
variable (which will be a 1D point) and a double precision variable.


If the input CUBE is defined as

'(x,y,z)'::cube

the function would then return a CUBE value of the form

'(x+R,y+R,z+R),(x-R,y-R,z-R)'::cube

where R is the second argument.

The problem I'm having is actually adding R to the individual
components of the CUBE variable. I can't cast CUBE to float[], and I
don't see any way to get at the individual components of the CUBE.


Any pointers would be appreciated.

---
Rajarshi Guha  [EMAIL PROTECTED]
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04  06F7 1BB9 E634 9B87 56EE
---
355/113 -- Not the famous irrational number PI,
but an incredible simulation!



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] PL/pgsql function handle CUBE values

2008-01-16 Thread Colin Wetherbee

Rajarshi Guha wrote:
Hi, I am trying to write a PL/pgsql function that will take a CUBE 
variable (which will be a 1D point) and a double precision variable.


If the input CUBE is defined as

'(x,y,z)'::cube

the function would then return a CUBE value of the form

'(x+R,y+R,z+R),(x-R,y-R,z-R)'::cube

where R is the second argument.

The problem I'm having is actually adding R to the individual components
of the CUBE variable. I can't cast CUBE to float[], and I don't see
any way to get at the individual components of the CUBE.


I haven't tested this, but it looks like you can use cube_subset() to do 
that.


From [0]:

  cube_subset(cube, int[]) returns cube

  Makes a new cube from an existing cube, using a list of dimension
  indexes from an array. Can be used to find both the LL and UR
  coordinates of a single dimension, e.g.
  cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) = '(3),(7)'. Or can be
  used to drop dimensions, or reorder them as desired, e.g.
  cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]) = '(5, 3, 1,
  1),(8, 7, 6, 6)'.

For each dimension, select cube_subset(your_cube, ARRAY[dimension]) and 
then increment or decrement by R.


Further, it looks like you're actually trying to create a function that 
will do precisely what cube_enlarge() does.


Also from [0]:

  cube_enlarge(cube c, double r, int n) returns cube

  Increases the size of a cube by a specified radius in at least n
  dimensions. If the radius is negative the cube is shrunk instead. This
  is useful for creating bounding boxes around a point for searching for
  nearby points. All defined dimensions are changed by the radius r. LL
  coordinates are decreased by r and UR coordinates are increased by r.
  If a LL coordinate is increased to larger than the corresponding UR
  coordinate (this can only happen when r < 0) then both coordinates are
  set to their average. If n is greater than the number of defined
  dimensions and the cube is being increased (r >= 0) then 0 is used as
  the base for the extra coordinates.
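
Putting that together for the original question, a sketch (untested; it
assumes the contrib cube module is installed in your database) might look
like:

```sql
-- Sketch: expand a point cube '(x,y,z)' by radius R in all dimensions,
-- giving the bounding box '(x-R,y-R,z-R),(x+R,y+R,z+R)'.
-- cube_enlarge decreases each LL coordinate and increases each UR
-- coordinate by R, which is exactly the transformation asked for.
SELECT cube_enlarge('(1,2,3)'::cube, 0.5, 3);
-- should yield the cube (0.5, 1.5, 2.5),(1.5, 2.5, 3.5)
```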

Colin

[0] http://developer.postgresql.org/pgdocs/postgres/cube.html

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PL/pgsql function handle CUBE values

2008-01-16 Thread Rajarshi Guha


On Jan 16, 2008, at 3:41 PM, Colin Wetherbee wrote:


Rajarshi Guha wrote:
Hi, I am trying to write a PL/pgsql function that will take a CUBE  
variable (which will be a 1D point) and a double precision variable.

If the input CUBE is defined as
'(x,y,z)'::cube
the function would then return a CUBE value of the form
'(x+R,y+R,z+R),(x-R,y-R,z-R)'::cube
where R is the second argument.
The problem I'm having is actually adding R to the individual
components of the CUBE variable. I can't cast CUBE to float[] and
I don't see any way to get at the individual components of the CUBE.


I haven't tested this, but it looks like you can use cube_subset()  
to do that.


From [0]:

  cube_subset(cube, int[]) returns cube

Further, it looks like you're actually trying to create a function  
that will do precisely what cube_enlarge() does.


Indeed! Sorry for not giving the docs a thorough reading.

Thanks for the pointer

---
Rajarshi Guha  [EMAIL PROTECTED]
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04  06F7 1BB9 E634 9B87 56EE
---
...but there was no one in it...
- RG



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread dvanatta

What's up with 3 of the 7 being from Pennsylvania?  What's the connection?


Dave Page-3 wrote:
 
 2 out of 7 - which would be Bruce & I.
 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
 
 

-- 
View this message in context: 
http://www.nabble.com/Sun-acquires-MySQL-tp14881966p14895300.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 16 Jan 2008 13:23:35 -0800 (PST)
dvanatta [EMAIL PROTECTED] wrote:

 
 What's up with 3 of the 7 being from Pennsylvania?  What's the
 connection?

It's the closest the cult of the elephant will get to Jersey.

Joshua D. Drake



- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHjneZATb/zqfZUUQRAsjKAKCj3xpZ8NBEvMYKmmDJdiu/6Y50PQCeI2fr
w+d0U+qn8mmvl2ylK2LeI0Q=
=nCyQ
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Bill Moran
In response to dvanatta [EMAIL PROTECTED]:

 
 What's up with 3 of the 7 being from Pennsylvania?  What's the connection?

Well, as everyone knows, Pennsylvania is a haven for brilliant
people.  In fact, simply living in Pennsylvania makes you smarter.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 16 Jan 2008 16:29:01 -0500
Bill Moran [EMAIL PROTECTED] wrote:

 In response to dvanatta [EMAIL PROTECTED]:
 
  
  What's up with 3 of the 7 being from Pennsylvania?  What's the
  connection?
 
 Well, as everyone knows, Pennsylvania is a haven for brilliant
 people.  In fact, simply living in Pennsylvania makes you smarter.

Then why did Ben Franklin attach a key to a kite in the middle of a
thunderstorm?

Joshua D. Drake

- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHjnlTATb/zqfZUUQRAt2HAJ4xJCGVrGnD9ydhSKkg8twAvZaM/QCfUJ7v
VDgpjoFCwcDJryk6+WxZ1CI=
=/IOr
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Geoffrey Gowey
And this is why I live in PA, but make the trek into the netherworld
known as New Jersey.  :D




On 1/16/08, Bill Moran [EMAIL PROTECTED] wrote:
 In response to dvanatta [EMAIL PROTECTED]:

 
  What's up with 3 of the 7 being from Pennsylvania?  What's the connection?

 Well, as everyone knows, Pennsylvania is a haven for brilliant
 people.  In fact, simply living in Pennsylvania makes you smarter.

 --
 Bill Moran
 http://www.potentialtech.com

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



-- 
Kindest Regards,

Geoff

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Otto Hirr
 Russ Brown wrote:
  http://blogs.mysql.com/kaj/sun-acquires-mysql.html/
 
  What does this mean for Sun's support of Postgres?
 

So why not go directly to the source, Sun itself, and ask them?

Someone like Bruce should just knock on the door and ask.

Then you can evaluate the answer. Either a lie, the truth, or somewhere
in-between, and the answer may only have a certain shelf-life, for what is
true today in the tech industry is false later.

..Otto



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Geoffrey Gowey
Sounds reasonable, but what one manager answers today is subject to
change by another tomorrow.



On 1/16/08, Otto Hirr [EMAIL PROTECTED] wrote:
  Russ Brown wrote:
   http://blogs.mysql.com/kaj/sun-acquires-mysql.html/
  
   What does this mean for Sun's support of Postgres?
  

 So why not go directly to the source, Sun itself, and ask them?

 Someone like Bruce should just knock on the door and ask.

 Then you can evaluate the answer. Either a lie, the truth, or somewhere
 in-between, and the answer may only have a certain shelf-life, for what is
 true today in the tech industry is false later.

 ..Otto



 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

http://archives.postgresql.org/



-- 
Kindest Regards,

Geoff

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Otto Hirr
 From: Geoffrey Gowey
 Sounds reasonable, but what one manager answers today is subject to be
 changed by another tomorrow.

The intent is to get out in the open their official statement. That in turn 
may create a discussion inside Sun that may not have taken place.

If the postgres community does not like the stand, then lobby to change it. As
you say, nothing, either in time or by individual, is forever locked in.

If the postgres community likes it, then be sure to continue to support Sun so
that they continue in that direction. If they come out with a favorable
strategy, or had a planned strategy but kept it to themselves and postgres
disses them, then they may take their toys and decide to play in other ways.

But get it out in the open as much as can be done. Simple. Then you know where 
they stand/don't-stand/or remain mute.

Then you can take further action.

Rumors/opinions are just that.

Postgres needs to have an official spokesman make a request to a very
important top official who is responsible for the acquisition.

..Otto



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Matthew T. O'Connor

Joshua D. Drake wrote:

dvanatta [EMAIL PROTECTED] wrote:

What's up with 3 of the 7 being from Pennsylvania?  What's the
connection?


Its the closest the cult of the elephant will get to jersey.


Whoa now, them's fightin' words.  Come on over, and you, me, Tony, Paulie,
and Silvio will have a little chat.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Ted Byers

--- Bill Moran [EMAIL PROTECTED] wrote:

 In response to dvanatta [EMAIL PROTECTED]:
 
  
  What's up with 3 of the 7 being from Pennsylvania?
  What's the connection?
 
 Well, as everyone knows, Pennsylvania is a haven for
 brilliant
 people.  In fact, simply living in Pennsylvania
 makes you smarter.
 
Does it count if I lived there for a year many many
years ago?  ;-)

Ted

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Raymond O'Donnell

On 16/01/2008 23:10, Ted Byers wrote:


Does it count if I lived there for a year many many
years ago?  ;-)


...or if I visited for a day or two in 1986? ;-)

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Help with pre-loaded arbitrary key sequences

2008-01-16 Thread Scott Marlowe
On Jan 11, 2008 10:43 AM, James B. Byrne [EMAIL PROTECTED] wrote:
 I am prototyping a system migration that is to employ Ruby, Rails and
 PostgreSQL.  Rails has the convention that the primary key of a row is an
 arbitrary integer value assigned by the database manager through a
 sequence.  As it turns out, the legacy application employs essentially the
 same convention in most instances.

 My question is this: Can one assign an id number to a sequenced key column
 on create and override the sequencer?  If one does this then can and, if
 so, how does the sequencer in Postgresql handle the eventuality of running
 into a block of keys holding previously assigned numbers?

 For example.  The existing client master dataset employs an eight digit
 account number as primary key.  The values in use tend to cluster in
 groups at each thousand increment, thus 1..375, 1001..1288, 2001..2225,
 3001..3312, ..., 2001001..2001476, ..., etc.

 Assuming that these existing entries were all loaded into the new table
 with the values given as their primary keys and given that one could not
 simply start the sequencer at a value above the highest existing value: If
 one was to add a record and auto-generate a sequence number then can the
 sequencer handle looping from 1 to 375 and returning 376 and then continue
 until it reaches 1001, when it needs be loop again until 1288 and then
 return 1289 and so forth?

You're essentially wanting to fill in the blanks here.  If you need
good performance, then what you'll need to do is to preallocate all
the numbers that haven't been assigned somewhere.  So, we make a table
something like:

create table locatorcodes (i int, count_id serial);

Then we insert an id into that table for everyone that's missing from
the main table:

insert into locatorcodes (i)
select b.i from (
    select * from generate_series(1,100) as i
) as b
left join main_table a on (b.i = a.i)
where a.i is null;

Or something like that.

Now, we've got a table with all the unused ids, and a serial count
assigned to them.  Create another sequence:

create sequence checkout_sequence;

and use that to check out numbers from locatorcodes:

select i from locatorcodes where count_id=nextval('checkout_sequence');

And since the sequence will just count up, there's little or no
problems with performance.

There's lots of ways of handling this.  That's just one of the ones
that doesn't slow your database down a lot.

If you need to, you can shuffle the numbers going into the
locatorcodes table with an order by random() when you create it.
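
That last variation might look something like this (a sketch, reusing the
hypothetical main_table from above):

```sql
-- Sketch: populate locatorcodes in random order, so ids checked out
-- via the sequence come back shuffled rather than ascending.
-- count_id is a serial, so it records the (randomized) insert order.
insert into locatorcodes (i)
select b.i from (
    select * from generate_series(1,100) as i
) as b
left join main_table a on (b.i = a.i)
where a.i is null
order by random();
```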

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Erik Jones


On Jan 16, 2008, at 4:02 PM, Otto Hirr wrote:

Postgres needs to have an official spokesman make a request to a
very important top official who is responsible for the
acquisition.


..Otto


Given that Josh Berkus works for Sun, I'd say we already have that.

Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Sun acquires MySQL

2008-01-16 Thread Peter Eisentraut
Otto Hirr wrote:
 Postgres needs to have an official spokesman make a request to a very
 important top official who is responsible for the acquisition.

Postgres doesn't need to do anything, because the matter at hand does not 
concern Postgres, and I think we shouldn't spend our energy making it so.

Nevertheless, I suggest you follow Josh Berkus's blog, which is as close as 
you will get to someone important from Postgres having access to someone 
important at Sun.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] A counter argument about DISTINCT and GROUP BY in PostgreSQL

2008-01-16 Thread Gregory Stark
[EMAIL PROTECTED] writes:

 On Jan 4, 11:48 am, [EMAIL PROTECTED] (Gregory Stark) wrote:

 And what does the plan look like?

 It looks great in mysql!   

Like what?

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Prepared statement's plan

2008-01-16 Thread Chris

Vyacheslav Kalinin wrote:

Hello,

I would appreciate if someone explained me how exactly prepared 
parametrized statements are planned


http://www.postgresql.org/docs/8.2/static/sql-prepare.html

Under the Notes section it explains that the statement is planned without
knowledge of the actual parameter values, so the planner has to make generic
assumptions and may not choose the plan (such as a particular index scan) it
would pick if it knew the value you are going to put in.
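
As a rough illustration (table and column names here are hypothetical), you
can compare the prepared statement's plan with the plan for a literal value:

```sql
-- Planned once, without knowing $1; the planner must assume an
-- arbitrary pattern, so it may not pick an index scan:
PREPARE find_items (text) AS
    SELECT * FROM items WHERE name LIKE $1;
EXPLAIN EXECUTE find_items('abc%');

-- With a literal left-anchored pattern, the planner can choose an
-- index scan on name (given a suitable index and locale):
EXPLAIN SELECT * FROM items WHERE name LIKE 'abc%';
```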


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Online Oracle to Postgresql data migration

2008-01-16 Thread Josh Harrison
On Jan 16, 2008 1:31 PM, Dawid Kuroczko [EMAIL PROTECTED] wrote:

  On Jan 11, 2008 7:14 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
 
  On Jan 11, 2008 12:02 PM, Josh Harrison [EMAIL PROTECTED] wrote:
   Hi
    We have an Oracle production database with some terabytes of data. We
  wanted
    to migrate that to Postgresql (right now... a test database and not
   production) database.
   What are the good options to do that?
   Please advise me on where to look for more information on this topic
 
  You're going to need to use your brain for a fair portion of this,
  because how you use Oracle will be just different enough from everyone
  else's that no boxed solution will fit.
 
  You have two steps to work on.  The first is the DDL, to create
  equivalent tables in pgsql as in oracle, the second is to migrate over
  your data.
 
  I've generally done the ddl conversion by hand in an editor, and
  migrated data over with some scripting language like perl or php.

 If you are migrating terabytes, don't use perl.  I did an experimental,
 for-fun migration some time ago and DBD::Oracle worked remarkably
 slowly...  What you need is a program which will export data
 from Oracle as CSV.  As far as I know Oracle does not provide such
 a tool (though it will import CSV happily through sqlldr),
 but you can Google out C code which does just that.  I don't remember
 where I left it... :-(

 From that, you just need to stream CSV into PostgreSQL's COPY
 command.  It worked FAST.  Really.

 And be wary of data types conversion.
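
For the bulk-load side, the COPY step Dawid describes might look like this
(the file path and table name are hypothetical):

```sql
-- Sketch: load an Oracle CSV export into PostgreSQL. The file must be
-- readable by the server backend; from a client, psql's \copy
-- meta-command streams the same data over the connection instead.
COPY accounts FROM '/tmp/accounts_export.csv' WITH CSV;
```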

 In this case, how do you migrate the simultaneous
updates/deletes/inserts? Since this is a production system, there are
everyday changes in the database.


[GENERAL] template0(or template1) is a database?

2008-01-16 Thread ivan.hou

As the title says: is it a database? Or what is it?
Can I use the SELECT command to query the data of template0?
How should I understand template0?

-- 
View this message in context: 
http://www.nabble.com/template0%28or-template1%29-is--a-database--tp14906094p14906094.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] template0(or template1) is a database?

2008-01-16 Thread Chris

ivan.hou wrote:
as the tile, it's a database? or what is it? 


http://www.postgresql.org/docs/8.2/static/manage-ag-templatedbs.html

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] postgresql in FreeBSD jails: proposal

2008-01-16 Thread Marc G. Fournier
[EMAIL PROTECTED] (Mischa Sandberg) writes:

Unfortunately, with multiple jails running PG servers and (due to app
limitations) all servers having same PGPORT, you get the situation that
when the jail#2 (jail#3, ...) server comes up, it:
- detects that there is a shm seg with ipc key 5432001
- checks whether the associated postmaster process exists (with kill -0)
- overwrites the segment created and being used by jail #1

Easiest fix: change the UID of the user running the postmaster (ie. pgsql) so 
that each runs as a distinct UID (instead of distinct PGPORT) ... been doing 
this since moving to FreeBSD 6.x ... no patches required ...
-- 

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email . [EMAIL PROTECTED]  MSN . [EMAIL PROTECTED]
Yahoo . yscrappy   Skype: hub.orgICQ . 7615664

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly