Re: [NOVICE] [GENERAL] Connect to postgresql database using Perl

2010-03-31 Thread Sean Davis
On Wed, Mar 31, 2010 at 3:25 AM, dipti shah shahdipti1...@gmail.com wrote:
 Thanks Guys. DBI works fine. I have written the code below, but it executes
 only on the server where I installed PostgreSQL. Is there any way to run
 this code from a remote host? I get an error when trying to run it there,
 which I think is expected, because the code says nothing about where to
 connect. Could you please help me out?

Hi, Dipti.

Have a look at the DBI documentation.  As you suspect, you will need
to specify the host.

Sean
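[For reference, the DBD::Pg data source string can carry the host and port alongside the database name; it looks roughly like this (the hostname here is a placeholder):

```
dbi:Pg:dbname=mydb;host=db.example.com;port=5432
```

The server side must also accept the connection: pg_hba.conf needs a matching host line, and postgresql.conf must listen on TCP (listen_addresses).]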

 use DBI;
 $DB_name = 'mydb';
 $DB_user = 'postgres';
 $DB_pwd  = '';
 $dbh = DBI->connect("dbi:Pg:dbname=$DB_name", $DB_user, $DB_pwd);
 if ( !defined $dbh ) { die "Cannot connect to database!\n"; }
 $sth = $dbh->prepare("SELECT * FROM mytable");
 $sth->execute();
 while ( ($id, $name) = $sth->fetchrow_array() ) {
     print "$id\t\t$name\n";
 }
 $sth->finish();
 $dbh->disconnect();

 remote-host# perl pg-connect.pl
 DBI connect('dbname=sysdb','postgres',...) failed: could not connect to
 server: No such file or directory
     Is the server running locally and accepting
     connections on Unix domain socket "/tmp/.s.PGSQL.5432"? at
 pg-connect.pl line 7
 Cannot connect to database!

 Thanks,
 Dipti

 On Wed, Mar 31, 2010 at 11:53 AM, Pavel Stehule pavel.steh...@gmail.com
 wrote:

 Hello

 2010/3/31 dipti shah shahdipti1...@gmail.com:
  Hi,
 
  Could anyone please provide me an example to connect to a postgresql
  database using Perl language and accessing the tables, schemas, and
  other
  postgresql objects.
 

 http://www.felixgers.de/teaching/perl/perl_DBI.html
 http://structbio.vanderbilt.edu/chazin/wisdom/dbi_howto.html

 Regards

 Pavel Stehule

  Thanks,
  Dipti
 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [INTERFACES] PGSQL and Javascript

2009-01-30 Thread Sean Davis
No.  I'm not sure how that could be done given the security concerns.
Sean

On Fri, Jan 30, 2009 at 6:40 AM, Reg Me Please regmeple...@gmail.com wrote:

 Hello all.

 Is there a way to directly access PGSQL from a Javascript application?
 With no application server intervention, I mean.
 Just like libq allows access from C/C++.

 --
 Fahrbahn ist ein graues Band
 weisse Streifen, grüner Rand

 --
 Sent via pgsql-interfaces mailing list (pgsql-interfa...@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-interfaces



Re: [INTERFACES] [GENERAL] PGSQL and Javascript

2009-01-30 Thread Sean Davis
On Fri, Jan 30, 2009 at 7:09 AM, Reg Me Please regmeple...@gmail.com wrote:

 I'd like to write part of the application in Javascript, server side, not
 in
 the web page.


I missed your point.  Do a Google search for JavaScript interpreters
embeddable in the language of your choice.  For example, there is Rhino for Java.

Sean




 On Friday 30 January 2009 12:53:05 Allan Kamau wrote:
  Maybe JavaScript + JDBC; not sure.
 
  But the question is: why would you want to do so?
  JavaScript can easily be read by any user who has it running in their
  browser. JDBC, or any other database-connecting client, will want to
  authenticate the user somehow; in most cases the username, password, and
  maybe the remote IP are used. This means that your JavaScript would
  probably have to contain the username and password in order to use JDBC
  to connect to the DB. Is that desirable?
 
  Allan.
 



Re: [INTERFACES] [GENERAL] PGSQL and Javascript

2009-01-30 Thread Sean Davis
On Fri, Jan 30, 2009 at 7:43 AM, Reg Me Please regmeple...@gmail.com wrote:

 I already have a JS interpreter (SpiderMonkey), but there is no PGSQL
 interfacing API!
 That's why I'm asking.


Ah.  Sorry again.  No idea if this will be useful, but you might take a look
at:

http://www.whitebeam.org/

Sean







[GENERAL] General data warehousing questions

2008-10-05 Thread Sean Davis
I am looking at the prospect of building a data warehouse of genomic
sequence data.  The machine that produces the data adds about 300
million rows per month to a central fact table, and we will generally
want the data to be online.  We don't need instantaneous queries, but
we would be using the data for data-mining purposes and running some
real-time queries for reporting and research.  I have had the pleasure
of working on a Netezza box, where this type of thing is quite
standard, but we don't have that access anymore, so I'm looking for
hints on using postgres in a data warehousing/mining environment.  Any
suggestions on DDL, loading, backup, indexing, or (to a certain
extent) hardware?

Thanks,
Sean



[GENERAL] MySQL to Postgresql schema conversion

2008-09-30 Thread Sean Davis
There are a number of mysql to postgresql converters available, but
many of them have significant shortcomings.  Has anyone found a tool
that works well?  I am trying to convert a couple of relatively large,
public schemas to postgresql.

Thanks,
Sean



Re: [GENERAL] MySQL to Postgresql schema conversion

2008-09-30 Thread Sean Davis
On Tue, Sep 30, 2008 at 10:08 AM, Sean Davis [EMAIL PROTECTED] wrote:
 There are a number of mysql to postgresql converters available, but
 many of them have significant shortcomings.  Has anyone found a tool
 that works well?  I am trying to convert a couple of relatively large,
 public schema to postgresql.

I started playing with sqlalchemy (python) which can reflect a schema
to python objects.  Those objects can then be used to instantiate
another schema in a different database dialect.  Works like a charm
after modifying a couple of column names.  It mirrors about 4000
tables in about 45 seconds (of course, without the data).

Sean
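[The reflect-and-recreate approach described above can be sketched with SQLAlchemy. The engine URLs below are stand-ins: two in-memory SQLite databases play the roles of the MySQL source and the PostgreSQL target, and the `person` table is a made-up example.]

```python
# Sketch of schema mirroring via reflection (SQLAlchemy).
from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine, inspect)

source = create_engine("sqlite://")   # would be mysql://... in practice
target = create_engine("sqlite://")   # would be postgresql://... in practice

# Give the source something to reflect.
src_meta = MetaData()
Table("person", src_meta,
      Column("id", Integer, primary_key=True),
      Column("name", String(50)))
src_meta.create_all(source)

# Reflect the source schema into Python objects...
reflected = MetaData()
reflected.reflect(bind=source)

# ...then emit equivalent DDL against the target engine.
reflected.create_all(target)

print(inspect(target).get_table_names())   # ['person']
```

[As the thread notes, this copies structure only, not data, and only the features that reflection can see.]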



Re: [GENERAL] MySQL to Postgresql schema conversion

2008-09-30 Thread Sean Davis
On Tue, Sep 30, 2008 at 1:18 PM, Merlin Moncure [EMAIL PROTECTED] wrote:
 On Tue, Sep 30, 2008 at 12:48 PM, Sean Davis [EMAIL PROTECTED] wrote:
 On Tue, Sep 30, 2008 at 10:08 AM, Sean Davis [EMAIL PROTECTED] wrote:
 There are a number of mysql to postgresql converters available, but
 many of them have significant shortcomings.  Has anyone found a tool
 that works well?  I am trying to convert a couple of relatively large,
 public schema to postgresql.

 I started playing with sqlalchemy (python) which can reflect a schema
 to python objects.  Those objects can then be used to instantiate
 another schema in a different database dialect.  Works like a charm
 after modifying a couple of column names.  It mirrors about 4000
 tables in about 45 seconds (of course, without the data).


 Does it get all the various constraints and stuff (if any)?  Simple
 field-to-field copy techniques tend to work only if the database uses
 a small subset of common features.  Great for you if it works, though.

To the extent that the MySQL databases used anything interesting
(defaults, basically), it seems to, yes.  I have used it for other
projects as an ORM and it seems to support pretty much anything I can
dream up on the postgres side for DDL.

Sean



[GENERAL] [OT] RAID controllers blocking one another?

2008-01-17 Thread Sean Davis
We have a machine that serves as a fileserver and a database server.  Our
server hosts a RAID array of 40 disk drives, attached to two 3ware cards,
one 9640SE-24 and one 9640SE-16.  We have noticed that activity on one
controller blocks access on the second controller, not only for disk I/O but
also for the command-line tools, which become unresponsive for the inactive
controller.  The controllers sit in adjacent PCI-Express slots in a
machine with two dual-core AMD CPUs and 16GB of RAM.  Has anyone else noticed
issues like this?  Throughput for either controller is a pretty respectable
150-200MB/s writing and somewhat faster reading, but the blocking is
problematic, as the machine is serving multiple purposes.

I know this is off-topic, but I know lots of folks here deal with very large
disk arrays; it is hard to get real-world input on machines such as these.


Thanks,
Sean


[GENERAL] plpython and error catching

2007-11-19 Thread Sean Davis
What is the expected behavior of a construct like this:

def insert_xml(elem):
id=int(elem.findtext('PMID'))
try:
plpy.execute(plan,[unicode(ET.tostring(elem)),id])
except:
plpy.execute(plan2,[unicode(ET.tostring(elem)),id])

id is a primary key on the table into which I am inserting.  plan is
the execute plan for inserting new data.  plan2 is for updating data
already in the database.  When I run this, I am not able to catch
errors of this type:

WARNING:  plpython: in function insert_medline_file:
DETAIL:  plpy.Error: Unknown error in PLy_spi_execute_plan
ERROR:  duplicate key value violates unique constraint "medlinexml_pkey"
CONTEXT:  SQL statement "insert into medlinexml(content,id) values
(xmlparse(CONTENT $1),$2)"

Why am I not able to catch this error and execute the update plan?
The manual (8.3beta2) implies that errors generated in functions are
genuine python errors that I should be able to catch.

Thanks,
Sean
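[In 8.3-era PL/Python, an error raised inside plpy.execute aborts the function rather than surfacing as a catchable Python exception, so one workaround is to avoid provoking the error at all: look the key up first and pick the insert or update plan accordingly. A rough sketch of that branch in plain Python, with a dict standing in for the table and the two branches standing in for plan and plan2:]

```python
# Stand-in upsert: `table` plays the role of medlinexml keyed by id.
table = {}

def upsert(table, rec_id, content):
    """Insert content under rec_id, or update it if the key already exists."""
    if rec_id in table:           # in PL/Python: SELECT 1 FROM medlinexml WHERE id = $1
        table[rec_id] = content   # plpy.execute(update_plan, [content, rec_id])
        return "updated"
    table[rec_id] = content      # plpy.execute(insert_plan, [content, rec_id])
    return "inserted"

print(upsert(table, 1, "<a/>"))   # inserted
print(upsert(table, 1, "<b/>"))   # updated
```

[The extra SELECT costs one round trip per record, but it sidesteps the uncatchable duplicate-key error entirely.]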

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] plpython array support

2007-11-19 Thread Sean Davis
Just a simple question--does plpythonu (8.3beta) have support for
arrays?  I don't see a specific mention of it in the docs, so I
suppose not.

Thanks,
Sean



Re: [GENERAL] plpython array support

2007-11-19 Thread Sean Davis
On Nov 19, 2007 9:08 PM, Jorge Godoy [EMAIL PROTECTED] wrote:
 On Monday 19 November 2007 19:29:51 Sean Davis wrote:
  Just a simple question--does plpythonu (8.3beta) have support for
  arrays?  I don't see a specific mention of it in the docs, so I
  suppose not.

 Arrays have worked for a long time now.  I've been using them since 8.1, for
 sure, but I think I already had code with arrays inside the database on 7.4...

Thanks, Jorge.  I know that postgresql supports arrays, but I still
don't see how plpython supports arrays.  I noticed this post:

http://archives.postgresql.org/pgsql-general/2007-01/msg01417.php

from January, 2007 that suggests how to use the string representation
of an array to convert to/from python arrays.  This will work, but I
was wondering if there is now a true conversion from postgresql arrays
to python and vice-versa.

Thanks,
Sean
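[As of that era, PL/Python passed arrays in and out in their text form, so the conversion described in the linked post was the practical route. A minimal sketch of both directions for flat one-dimensional integer arrays (quoted elements, NULLs, and nested arrays need more care):]

```python
def pg_array_to_list(s):
    """Parse a flat PostgreSQL array literal like '{1,2,3}' into a Python list."""
    inner = s.strip()[1:-1]            # drop the surrounding braces
    return [int(x) for x in inner.split(",")] if inner else []

def list_to_pg_array(xs):
    """Format a Python list back into PostgreSQL array-literal text."""
    return "{" + ",".join(str(x) for x in xs) + "}"

print(pg_array_to_list("{1,2,3}"))   # [1, 2, 3]
print(list_to_pg_array([4, 5]))      # {4,5}
```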



Re: [GENERAL] XML schema

2007-11-16 Thread Sean Davis
On Nov 15, 2007 6:20 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
 Sean Davis wrote:
  Is the current XML datatype (in 8.3) the direction of the
  future, or is something more akin to a dedicated XML schema (I think
  this is the route that Oracle has gone) going to be important?

 An XML schema is a specification that describes required properties of an XML
 document.  How do you imagine that affecting PostgreSQL database system
 operations?

Thanks, Peter.

An unfortunate choice of words on my part.  I meant a schema that
represents a general mapping between XML and a relational schema.  In
other words, I am looking for tools that use postgresql as the storage
engine for a native XML database.  Examples are given here:

http://www.rpbourret.com/xml/XMLDatabaseProds.htm#native

Sean



Re: [GENERAL] XML schema

2007-11-16 Thread Sean Davis
On Nov 16, 2007 2:14 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:
 Sean Davis wrote:
  I meant a schema that
  represents a general mapping between XML and a relational schema. In
  other words, I am looking for tools that use postgresql as the storage
  engine for a native XML database.

 There are ideas for that, but nothing to be expected any time soon.

Thanks.  That is what I suspected, but just wanted to make sure I
wasn't missing something important.

Sean



[GENERAL] XML schema

2007-11-15 Thread Sean Davis
Sorry for the cross-post, but I wasn't sure where this should go.  I
have a large XML file (about 7.5GB) that is composed of about 17
million individual records.  I have successfully loaded these records
as individual XML records into a table with 17M rows.  However, I
looked around a bit and noticed that there are at least a couple of
XML schemas that have been available in the past (though they don't look
like they are currently maintained) that enable XPath queries directly on
the stored documents.  Is the current XML datatype (in 8.3) the direction of
the future, or is something more akin to a dedicated XML schema (I think
this is the route that Oracle has gone) going to be important?

Thanks,
Sean



[GENERAL] XML database

2007-11-01 Thread Sean Davis
I have a large set of XML files (representing about 18M records) that I 
would like to load into postgres.  I have been loading the records into 
relational tables.  Is this the best way to go?  I am particularly 
interested in full-text searches of a subset of the elements.  I am on 
8.3Beta.


Thanks,
Sean
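[For files in this size range, streaming the records out one element at a time keeps memory flat while you load them. A sketch with the standard library's ElementTree; the tag names and the tiny in-memory document are stand-ins for the real multi-gigabyte file:]

```python
import xml.etree.ElementTree as ET
from io import BytesIO

def iter_records(source, tag):
    """Yield each completed <tag> element from an XML stream, then free it."""
    for event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == tag:
            yield elem
            elem.clear()   # release the subtree so memory stays bounded

# Tiny stand-in for the multi-gigabyte file:
doc = BytesIO(b"<Set><Rec id='1'/><Rec id='2'/></Set>")
ids = [r.get("id") for r in iter_records(doc, "Rec")]
print(ids)   # ['1', '2']
```

[Each yielded element can be serialized with ET.tostring and handed to an INSERT or a COPY stream, which is essentially how the 17M-row load above can be driven.]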




Re: [GENERAL] more problems with count(*) on large table

2007-09-28 Thread Sean Davis
Mike Charnoky wrote:
 Hi,
 
 I am still having problems performing a count(*) on a large table.  This
 is a followup from a recent thread:
 
 http://archives.postgresql.org/pgsql-general/2007-09/msg00561.php
 
 Since the last time these problems happened, we have tweaked some
 postgresql config parameters (fsm, etc).  I also recreated the large
 table, with the assumption it was somehow corrupted.
 
 Now, certain count(*) queries are failing to complete for certain time
 ranges (I killed the query after about 24 hours).  The table is indexed
 on a timestamp field.  Here is one query that hangs:
 
 select count(*) from mytable where evtime between '2007-09-26' and
 '2007-09-27';
 
 However, this query runs successfully and takes 2 minutes:
 
 select count(*) from mytable where evtime between '2007-09-25' and
 '2007-09-26';
 
   count
 --
  14150928
 (1 row)
 
 What is going on?  I analyzed the table before running the query and
 have no reason to believe that the amount of data added to the table
 varies much from day to day.  No data has been deleted from the table
 yet, just added.
 
 Here is some config info:
 
 PostgreSQL 8.1.8 on Fedora 3

Have you vacuumed recently?

Sean




Re: [GENERAL] use COPY TO on normalized database

2007-09-15 Thread Sean Davis

Acm wrote:

I am working with PostgreSQL 8.2.4.

I need to use the SQL COPY (COPY table FROM file) statement to
populate my database.

I have created a normalized data model (up to 3NF).  Is it possible to
use COPY TO on a particular table (that is linked to other tables
using foreign keys) whilst keeping the foreign keys and other tables
updated?

(Also, will de-normalizing the model to 2NF or 1NF help?)

Thank you.

  
Use COPY ... FROM to load your data into a temp table.  Then do your data
manipulation in the database, using INSERT INTO ... SELECT or other
statements, to get the data into the real 3NF tables.  If you do all of this
within a transaction and anything breaks, you can roll back and have no
problems with data integrity.


Sean
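[The staging-table pattern above can be sketched with Python's built-in sqlite3 standing in for psql's COPY; the table and column names are made up for illustration:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE staging (person_name TEXT, city_name TEXT); -- raw load target
    CREATE TABLE city   (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT,
                         city_id INTEGER REFERENCES city(id));
""")

# Step 1: bulk-load the flat file into staging (COPY ... FROM in Postgres).
rows = [("alice", "boston"), ("bob", "boston")]
cur.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# Step 2: populate the normalized tables from staging, in one transaction.
cur.execute("INSERT INTO city (name) SELECT DISTINCT city_name FROM staging")
cur.execute("""
    INSERT INTO person (name, city_id)
    SELECT s.person_name, c.id
    FROM staging s JOIN city c ON c.name = s.city_name
""")
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM person").fetchone()[0])  # 2
print(cur.execute("SELECT COUNT(*) FROM city").fetchone()[0])    # 1
```

[If either INSERT fails, a rollback leaves the normalized tables untouched, which is the integrity property the advice above relies on.]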



Re: [GENERAL] GRANT on group does not give access to group members

2007-09-15 Thread Sean Davis

wild_oscar wrote:

Well, after further searching and reviewing the code I believe the problem
was the NOINHERIT in the login role creation.

So the remaining question is:

On another question, if I want to grant privileges to all tables I have to
do them ONE BY ONE. Granting the privileges on the database or the schema
won't recursively grant them on the tables, am I correct?

Is the only solution the usage of scripts such as this one: 
http://pgedit.com/public/sql/acl_admin/index.html
  


PgAdminIII will do these tasks quite nicely using the grant wizard. 


Sean



[GENERAL] pam authentication

2007-09-11 Thread Sean Davis
I am making a first attempt at getting pam authentication working with a
postgres 8.2.4 installation on suse 10.2.  I have created the file:

/etc/pam.d/postgresql:
authrequired/lib64/security/pam_ldap.so
account required/lib64/security/pam_ldap.so

and in my pg_hba.conf, I have the line:
hostall all 0.0.0.0/0 pam postgresql

When performing login, I get in the server log:
LOG:  pam_authenticate failed: Conversation error
FATAL:  PAM authentication failed for user testuser

Worse yet, I am granted access to the server!

Any suggestions on what is up?

Thanks,
Sean



Re: [GENERAL] [NOVICE] Recursive relationship - preventing cross-index entries.

2007-06-20 Thread Sean Davis
Andrew Maclean wrote:
 I got no answer so I am trying again.
  
 In a nutshell, if I have a recursive relationship as outlined below, how
 do I implement a rule for the adjustments table that prevents the entry
 of an Id into the Ref column if the id exists in the Id column, and vice
 versa?
  
 If I have a payments table which holds an Id and a payment and I also
 have an adjustments table that holds a payment id and a reference id so
 that adjustments can be made to payments.
 So the payments table looks like this:
 Id Payment
 1 500.0
 2 100.0
 3 1000.0
 4 50.0
 5 750.0
 6 50.0
 7 800.0
 8 1200.0
  
 and the adjustments table looks like this:
 Id Ref
 1 2
 3 4
 1 6
 3 5
 The idea is that if, for example, Id=1 is a credit card payment, then
 entries 2 and 6 could be payments that are already included in the
 credit card payment, so we need to adjust the total payment to take
 this into account.
  
 This means that the payment for Id=1 ($500) in the payments table needs
 to be reduced by $100 (ref=2) and $50 (ref=6); similarly, the payment
 for Id=3 ($1000) needs to be reduced by $850.  So the question is:
  
  How do I design the adjustments table to ensure that:
 a) For any value entered in the Id column a check should occur to
 ensure that it does not exist in the Ref column.
 b) For any value entered in the Ref column, a check should occur to
 ensure that it does not exist in the Id column.
  
 In other words, looking at the adjustments table, I should be
 prevented from entering 2,4,6,5 in the Id column and 1, 3 in the Ref
 column.

I think you can put a trigger on the table to check these
constraints.

http://www.postgresql.org/docs/8.2/static/triggers.html

However, I wonder whether it might not make more sense to go with an
account system, with an account balance and credits and debits to the
account.  Is the system you are proposing really the best data model?

Sean
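[The invariant such a trigger would have to enforce can be stated compactly: across the adjustments table, the set of Id values and the set of Ref values must never intersect. A sketch of that check in Python, using the example rows from the question:]

```python
def violates_disjointness(adjustments, new_id, new_ref):
    """True if adding (new_id, new_ref) would let some payment appear in
    both the Id column and the Ref column of the adjustments table."""
    ids  = {i for i, _ in adjustments} | {new_id}
    refs = {r for _, r in adjustments} | {new_ref}
    return bool(ids & refs)

existing = [(1, 2), (3, 4), (1, 6), (3, 5)]
print(violates_disjointness(existing, 7, 8))   # False: fresh pair is fine
print(violates_disjointness(existing, 2, 9))   # True: 2 already appears as a Ref
```

[A BEFORE INSERT trigger would run the equivalent two EXISTS queries and raise an exception when the check returns true.]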



Re: [GENERAL] Distributing PostGres database to various customers

2007-06-11 Thread Sean Davis
Mike Gould wrote:
 All,
 
 I am new to PostGres 8 (using 8.2.4 windows version).  We have for
 several years been using iAnywhere's SQL Anywhere product with our
 commercial transportation software.  With ASA there are 2 files that
 must be distributed for the database, a filename.db and a filename.log. 
 When we do a new installation we normally try and preload the database
 with data used for lookups, some registration data and if a customer is
 moving from another software where we've been contracted to convert
 their old data to our system we preload that.  Once that is done we can
 distribute the database as part of the setup process.
 
 How can we do this with PostGres?  Other than backup and restore or
 creating SQL scripts I haven't been able to find another method.  Some
 of these tables may have over a million rows in them initially if we
 convert old data.

I would give backup/restore a try.  A million rows isn't very big in
postgres terms.  On relatively low-end hardware, I am routinely able to
back up about 300GB across about 35 databases in under 3 hours.

Sean



[GENERAL] Versioning

2007-04-10 Thread Sean Davis
I can think of several ways of versioning objects (modeled as rows of a 
table).  

1)  parent-child based model, where each edit of a row results in a child row
2)  date-based model, where each row is stored with a date and no updates are 
performed, only inserts
3)  Maintain a shadow table with old versions of a row
4)  Maintain a shadow table with only diffs from the original and metadata on 
when the changes took place
5)  Other?

Has anyone suggestions on what might work best?  The rows here are going 
to be simple text fields with a little metadata.

Thanks,
Sean
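[Option 2 (insert-only, date-stamped rows) is often the simplest to query: the current version of each object is just its newest row. A sketch of that model using Python's sqlite3 as a stand-in; the table and column names are invented for illustration:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE doc_version (
        doc_id     INTEGER,
        body       TEXT,
        valid_from TEXT    -- timestamp of the edit; rows are only inserted
    )
""")

versions = [(1, "first draft",  "2007-04-01"),
            (1, "second draft", "2007-04-05"),
            (2, "other doc",    "2007-04-02")]
cur.executemany("INSERT INTO doc_version VALUES (?, ?, ?)", versions)

# Current version of each document = the row with the latest valid_from.
cur.execute("""
    SELECT v.doc_id, v.body
    FROM doc_version v
    JOIN (SELECT doc_id, MAX(valid_from) AS m
          FROM doc_version GROUP BY doc_id) latest
      ON latest.doc_id = v.doc_id AND latest.m = v.valid_from
    ORDER BY v.doc_id
""")
print(cur.fetchall())   # [(1, 'second draft'), (2, 'other doc')]
```

[The trade-off is that every read of "current state" pays for the MAX() subquery; the shadow-table options keep the main table lean at the cost of more moving parts.]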



Re: [GENERAL] Table design - unknown number of column

2006-11-09 Thread Sean Davis
I think the typical way of attacking this problem would be a second and third
table.  They would look like:

flag_type table
  flag_type_id
  flag_type (like the column name in your original table)
  flag_type_description (BONUS:  you can describe each flag)

product_flag table
  product_id (fk to your original table)
  flag_type_id (fk to the flag_type table)
  flag_value (0, 1, or whatever you want to store here)

The advantages with this method are several:
1)  No wasted storage for all those NULLs where a flag is not needed
2)  Should be very fast to lookup by product_id to get all flags
3)  You can expand to an arbitrary number of flags
4)  Your schema remains normalized

Sean
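[A sketch of that layout in SQL, with the names from the description above and sqlite3 used just to exercise it (the widget/fragile data is invented):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE product   (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE flag_type (flag_type_id INTEGER PRIMARY KEY,
                            flag_type TEXT,
                            flag_type_description TEXT);
    CREATE TABLE product_flag (
        product_id   INTEGER REFERENCES product(product_id),
        flag_type_id INTEGER REFERENCES flag_type(flag_type_id),
        flag_value   INTEGER);
""")
cur.execute("INSERT INTO product VALUES (1, 'widget')")
cur.execute("INSERT INTO flag_type VALUES (10, 'fragile', 'needs careful shipping')")
cur.execute("INSERT INTO product_flag VALUES (1, 10, 1)")

# All flags for one product -- only flags that were actually set are stored.
cur.execute("""
    SELECT ft.flag_type, pf.flag_value
    FROM product_flag pf JOIN flag_type ft USING (flag_type_id)
    WHERE pf.product_id = ?
""", (1,))
print(cur.fetchall())   # [('fragile', 1)]
```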



Re: [GENERAL] pgsql vs mysql

2006-07-12 Thread Sean Davis



On 7/12/06 3:32 AM, Rafal Pietrak [EMAIL PROTECTED] wrote:

 On Tue, 2006-07-11 at 15:24 -0500, Ron Johnson wrote:
 Joshua D. Drake wrote:
 It is also something that users are clamoring for (and my customers), to
 the point that I have customers using unions to emulate the behavior. Why?
 Because it is really, really fast.
 
 When inserting multiple rows in the same INSERT statement, how do
 you tell which row fails on a constraint or datatype-mismatch violation?
 
 BTW, COPY from the psql client is quite trivial (we learn that from
 pg_dump), but can someone hint at how to use it from a Perl script? With
 the DBI module?
 

See the DBD::Pg documentation, under copy support.

Sean




Re: [GENERAL] Best open source tool for database design / ERDs?

2006-06-01 Thread Sean Davis
You might look into Eclipse (the java-based IDE).  It has at least one ERD
design plugin that allows graphical layout, editing of schema, and
generation of DDL directly from the schema.  It works with many DB platforms
and is FREE!!!

Sean


On 6/1/06 5:44 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Anyone know if DIA will generate CREATE TABLE statements from an ER
 diagram?
 
 I'd like to have a program where I can create my db design, then be
 able to instantiate the design in PostgreSQL as well as MySQL.
 
 I'll pay for a good commercial tool if it costs less than USD $100.
 
 Thanks.
 




Re: [GENERAL] Best open source tool for database design / ERDs?

2006-06-01 Thread Sean Davis



On 6/1/06 12:29 PM, Tomi NA [EMAIL PROTECTED] wrote:

 On 6/1/06, Sean Davis [EMAIL PROTECTED] wrote:
 You might look into Eclipse (the java-based IDE).  It has at least one ERD
 design plugin that allows graphical layout, editing of schema, and
 generation of DDL directly from the schema.  It works with many DB platforms
 and is FREE!!!
 
 What's it called?

http://www.azzurri.jp/en/software/clay/




[GENERAL] FATAL: could not read statistics message

2006-05-16 Thread Sean Davis
I am using postgresql 8.1.0 on an Xserve running MacOS 10.3.9.  I have been
getting the following in the log every minute for the past couple of days.
The database is otherwise running normally, as far as I can tell:

2006-05-16 07:26:01 EDT   FATAL:  could not read statistics message:
Resource temporarily unavailable
2006-05-16 07:27:01 EDT   FATAL:  could not read statistics message:
Resource temporarily unavailable
2006-05-16 07:28:03 EDT   FATAL:  could not read statistics message:
Resource temporarily unavailable

I saw a previous message in the archives, but it did not appear that any
conclusion was reached.  Tom suggested that an EAGAIN error was being
returned by the system, but I'm not sure what this means exactly or why it
is happening now, as we have had the server running for months.

Any insight?

Thanks,
Sean




Re: [GENERAL] FATAL: could not read statistics message

2006-05-16 Thread Sean Davis



On 5/16/06 3:08 PM, Tony Wasson [EMAIL PROTECTED] wrote:

 On 5/16/06, Sean Davis [EMAIL PROTECTED] wrote:
 I am using postgresql 8.1.0 on an Xserver running MacOS 10.3.9.  I am
 getting the following in the log every minute for the past couple of days.
 The database is otherwise running normally, as far as I can tell:
 
 2006-05-16 07:26:01 EDT   FATAL:  could not read statistics message:
 Resource temporarily unavailable
 2006-05-16 07:27:01 EDT   FATAL:  could not read statistics message:
 Resource temporarily unavailable
 2006-05-16 07:28:03 EDT   FATAL:  could not read statistics message:
 Resource temporarily unavailable
 
 I saw a previous message in the archives, but it did not appear that any
 conclusion was reached.  Tom suggested that an EAGAIN signal was being
 received from the system, but I'm not sure what this means exactly or why it
 is happening now, as we have had the server running for months.
 
 Any insight?
 
 I ran into this problem also on OS X running Postgresql 8.0. When you
 start postgresql you usually see these 4 processes:
 
 /usr/local/pgsql/bin/postmaster
 postgres: writer process
 postgres: stats buffer process
 postgres: stats collector process

 When I saw the same error as you, the stats collector process was
 missing. A few times we also got messages like

Now that I look, I see the same thing.
 
 [KERNEL]: no space in available paging segments; swapon suggested

No such line in the logs
 
 and then a bunch of these:
 
 postgres[13562]: [1-1] FATAL:  could not read statistics message:
 Resource temporarily unavailable
 
 We thought it was our memory tuning of OS X. Since it wasn't a
 production box, we didn't pursue the problem further. What tuning have
 you done to postgresql.conf and the OS X memory settings?

I had cranked things up a bit from the standard install.

shared_buffers = 15000  # min 16 or max_connections*2, 8KB
each 
#temp_buffers = 1000# min 100, 8KB each
#max_prepared_transactions = 50 # can be 0 or more
work_mem = 1# min 64, size in KB
maintenance_work_mem = 128000   # min 1024, size in KB
max_stack_depth = 4096  # min 100, size in KB

Some of these may not be ideal, but it really improved performance for our
needs.

Suggested by the lack of a stats collector process or stats buffer process,
I restarted the server, and it appears to have fixed the issue for now.  At
least I know what to watch for now.

Thanks, Tony, for the reply.

Sean






Re: [GENERAL] FATAL: could not read statistics message

2006-05-16 Thread Sean Davis

Jim C. Nasby wrote:

On Tue, May 16, 2006 at 03:41:07PM -0400, Sean Davis wrote:


I had cranked things up a bit from the standard install.

shared_buffers = 15000  # min 16 or max_connections*2, 8KB each
#temp_buffers = 1000# min 100, 8KB each

#max_prepared_transactions = 50 # can be 0 or more
work_mem = 1# min 64, size in KB
maintenance_work_mem = 128000   # min 1024, size in KB
max_stack_depth = 4096  # min 100, size in KB



You didn't say anything about how much memory you have, but you need to
be careful with *work_mem, as being too aggressive can run the machine
out of memory.


I have 4Gb of memory and the machine is pretty much a devoted database 
server.  We use it mainly for data warehousing and mining; there are 
rarely more than 2 active connections and never more than 5 total, so I 
have felt pretty comfortable with leaving work_mem pretty generous.  I 
will likely have to change that if the machine becomes more active.


Sean

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] FATAL: could not read statistics message

2006-05-16 Thread Sean Davis

Tony Wasson wrote:

On 5/16/06, Tom Lane [EMAIL PROTECTED] wrote:


Tony Wasson [EMAIL PROTECTED] writes:
 When I saw the same error as you, the stats collector process was
 missing.

The collector, or the buffer process?  The reported message would be
emitted by the buffer process, after which it would immediately exit.
(The collector would go away too once it noticed EOF on its input.)
By and by the postmaster should start a fresh pair of processes.



The stats collector was dead and would not respawn. Our options seemed
limited to restarting postmaster or ignoring the error.

Here was what the process list looked like:

kangaroo:~ twasson$ ps waux | grep post
pgsql  574   0.0 -0.0   460104832  p0  SWed06AM  10:26.98
/usr/local/pgsql/bin/postmaster -D /Volumes/Vol0/pgsql-data
pgsql  578   0.0 -5.2   460356 108620  p0  SWed06AM  27:43.68
postgres: writer process
twasson  23844   0.0 -0.018172688 std  S+   10:05AM   0:00.01
grep post


That is what I recalled, also, though I wasn't meticulous enough to hang 
onto the process list.




IIRC, the postmaster's spawning is rate-limited to once a minute,
so if the new buffer were immediately dying with the same error,
that would explain your observation of once-a-minute messages.

This all still leaves us no closer to understanding *why* the recv()
is failing, though.  What it does suggest is that the problem is a
hard, repeatable error when it does occur, which makes me loath to
put in the quick-fix retry on EAGAIN that I previously suggested.
If it is a hard error then that will just convert the problem into
a busy-loop that'll eat all your CPU cycles ... not much of an
improvement ...



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Sean Davis



On 4/6/06 12:12 AM, surabhi.ahuja [EMAIL PROTECTED] wrote:

 i have heard somewhere that writing a stored procedure is much better than
 firing a sql query (such as select * from table_name) onto the database.
 is it true and if yes how?

This isn't going to be true most of the time, I think.  Write SQL where you
can, and where you can't (because you can't express something in SQL), write
a procedure.  There are places where using a stored procedure can be more
efficient, but I think starting with SQL, benchmarking and testing, and then
determining what queries need special attention is the best way to go at the
beginning.

 also i want to know whether the performance in java is slower as compared to
 cpp, given that the same thing is being done.

Java and cpp performance are not really related to postgresql performance.
You will probably need to ask that on another list.  There are many other
reasons to choose one language over another besides speed (in fact, I think
speed is probably not the first thing to think about when choosing a
language).

Sean


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] psql is very slow

2006-02-24 Thread Sean Davis



On 2/22/06 10:23 AM, Michal Merta [EMAIL PROTECTED] wrote:

 I have a psql 7.3.4, apache 2.0.40, perl  v5.8.0. Database is pretty big,
 (dump is about 100Megs).
 But all the operations are very, very slow.
  Is there any way to make postgresql quicker? (don't tell me to cut the
  database :))

I assume that by psql you mean postgresql?

You'll probably have to be more specific about what you are doing, what your
database is like and how it is used, what you want to change, and what kinds
of time you are talking about.

Sean



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Access Problem After Version Upgrade

2006-01-26 Thread Sean Davis



On 1/26/06 11:53 AM, Rich Shepard [EMAIL PROTECTED] wrote:

Last week I upgraded from 7.4.3 to 8.1.2. I had some problems moving data
 because of both my ignorance of the proper syntax and the move from
 /usr/local/pgsql to /var/lib/pgsql. Now I cannot access the server:
 
 [EMAIL PROTECTED] ~]$ psql -l
 psql: could not connect to server: Permission denied
  Is the server running locally and accepting
  connections on Unix domain socket /tmp/.s.PGSQL.5432?
 
This also prevents me from logging in to SQL-Ledger and other applications.
 Now, I don't know that the server is accepting connections, but
 srwxr-xr-x  1 root root 0 2006-01-21 14:53 /tmp/.s.PGSQL.5432=
 exists as a socket.
 
What do I do to trace the source of this problem and fix it?

You did start the server?  Did you fix the pg_hba.conf file?  Does your
postgresql.conf file allow tcp connections?  Those are the places I would
look.
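
Those checks boil down to two files; a minimal sketch of the relevant
settings (the paths and network range here are assumptions, and note that
8.1 replaced the old 7.x `tcpip_socket` setting with `listen_addresses`):

```
# postgresql.conf (8.1): listen on TCP interfaces, not just the Unix socket
listen_addresses = '*'

# pg_hba.conf: allow password-authenticated TCP connections from a LAN
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    all       all   192.168.0.0/24   md5
```

Both files need a server reload (or restart for listen_addresses) to take
effect.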

Sean



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Normalized storage to denormalized report

2006-01-19 Thread Sean Davis
I have a table that stores data like so:

Create table raw_vals (
expt_id int,
clone_idx int,
val numeric,
primary key (expt_id,clone_idx)
);

And I would like to design a query that gives:

Clone_idx   expt_id_1  expt_id_2 
1   0.7834 0.8231
2   0.2832 1.2783

There are several tens-of-thousands of clone_idx entries, so doing this
quickly may not even be possible.  Any suggestions on how to go from this
normalized structure to the denormalized form easily?   Note that this isn't
exactly a crosstab, since there is no counting data, just reshaping.

Thanks,
Sean



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Normalized storage to denormalized report

2006-01-19 Thread Sean Davis



On 1/19/06 7:14 AM, Martijn van Oosterhout kleptog@svana.org wrote:

 On Thu, Jan 19, 2006 at 07:03:47AM -0500, Sean Davis wrote:
 I have a table that stores data like so:
 
 snip
 
 And I would like to design a query that gives:
 
 Clone_idx   expt_id_1  expt_id_2 
 1   0.7834 0.8231
 2   0.2832 1.2783
 
 There are several tens-of-thousands of clone_idx entries, so doing this
 quickly may not even be possible.  Any suggestions on how to go from this
 normalized structure to the denormalized form easily?   Note that this isn't
 exactly a crosstab, since there is no counting data, just reshaping.
 
 Although you may not call it exactly a crosstab, the crosstab
 functions in contrib/tablefunc should solve your problem nicely.
 
 Have a nice day,

Thanks Martijn and Michael.  Crosstab functions will do the trick (though
for production, I think I will have to use some materialized views)
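
For reference, a sketch of the contrib/tablefunc approach against the
raw_vals table from the original post, assuming just two experiments; the
output column list has to be spelled out by hand, one column per expt_id:

```sql
-- Requires contrib/tablefunc to be installed in the database.
SELECT *
FROM crosstab(
    'SELECT clone_idx, expt_id, val FROM raw_vals ORDER BY 1, 2'
) AS ct(clone_idx int, expt_id_1 numeric, expt_id_2 numeric);
```

With tens of thousands of clone_idx rows and a fixed set of experiments,
materializing this result into a table is what makes repeated reporting fast.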

Sean



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Loading large amounts of data in a SQL command

2006-01-05 Thread Sean Davis



On 1/5/06 8:31 AM, frank church [EMAIL PROTECTED] wrote:

 
 
 I am loading lots of data via SQL into a database and wrapping it into
 transactions to speed it up.
 
 This fails a number of times and causes the whole transaction to fail. The
 queries results are logged so it is easy for me to find problem records.
 
 Is there a setting or feature which allows the same performance as
 transactions, without causing the whole process to fail, like a delayed
 update or write mechanism of some sort?

I typically load into a loader table (usually using copy rather than
inserts) that looks like the data rather than what you want the final data
to look like.  For example, if you have an integer field that happens to
contain a couple of non-numeric characters ('127a', for example), then load
this column as a varchar.  Then, you can use all of the various regex
commands, coercion functions, etc that postgres has to offer to select from
the loader table into your clean production table.  This has the advantage
of being VERY fast, allows you to do a lot of data munging very easily, and
avoids having to continually clean the data before it successfully inserts
into the database where you can work with it.

In fact, I am often faced with non-normalized data in one large spreadsheet.
I could use perl or some other client to produce nice inserts into the
appropriate tables, but I find it easier to load the whole thing and then
just do selects to grab the data and put it into normalized form.
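
A sketch of that loader-table pattern; the table and column names here are
made up for illustration:

```sql
-- Loader table mirrors the raw file: everything as varchar, no constraints,
-- so the bulk load never fails on a dirty value.
CREATE TABLE loader (id_raw varchar, val_raw varchar);

-- Bulk-load the file (server-side path; \copy in psql loads client-side).
COPY loader FROM '/path/to/data.txt';

-- Clean and coerce on the way into the production table, e.g. keep only
-- the leading digits of values like '127a' before casting to integer.
INSERT INTO clean_table (id, val)
SELECT substring(id_raw from '^[0-9]+')::int, val_raw
FROM loader;
```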

Hope that helps,
Sean



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] inserting many rows

2006-01-03 Thread Sean Davis



On 1/2/06 5:34 PM, SunWuKung [EMAIL PROTECTED] wrote:

 I will need to insert multiple rows into a table from php.
 The data will come in 'packages' of 50-500 rows (they are responses from
 different questionnaires). As there will be many people sending their
 results in at the same time I need an effective method for this.
 
 What do you suggest is the most effective way to insert this type of
 data into the db? Issuing multiple inserts from php seems to be a waste
 of resources. 
 
 I was thinking of writing the responses into a pg array field with a
 single insert and than explode the content of that field into rows with
 a function.
 
 Could you suggest an efficient aproach?

You could look at using COPY to insert many records very quickly.  However,
inserting inside a transaction may be all that you need.  Have you tried
simulating your application under expected loads so that you are sure that
you are making the right choice?
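
Two hedged variants of the idea, sketched against a hypothetical responses
table:

```sql
-- Variant 1: batch the inserts in one transaction, so the whole
-- questionnaire costs a single commit rather than 500 of them.
BEGIN;
INSERT INTO responses (questionnaire_id, question, answer) VALUES (1, 1, 4);
INSERT INTO responses (questionnaire_id, question, answer) VALUES (1, 2, 2);
-- ... remaining rows ...
COMMIT;

-- Variant 2: stream the rows with COPY (tab-delimited), faster still
-- when the client driver supports it.
COPY responses (questionnaire_id, question, answer) FROM STDIN;
1	1	4
1	2	2
\.
```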

Sean



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Stored Procedure: PL/Perl or PL/SQL?

2005-12-29 Thread Sean Davis



On 12/29/05 4:10 PM, Joshua Kramer [EMAIL PROTECTED] wrote:

 
 Greetings all,
 
 I'm working to integrate an accounting system
 (http://www.linuxcanada.com) with another application.  Fortunately,
 both use PG, so integration should be easy.
 
 I want to be able to insert invoices, and invoice line items, into the
 accounting system.  As you might expect, this requires many selects and
 inserts involving several tables wrapped in a transaction.  As I
 see it, there are a couple of ways to do this:
 
 1. Attach a trigger which runs a Stored Procedure in PL/SQL;
 
 2. Create a Perl Module that connects to a database via DBI and does the
 work; the trigger would then be written in PL/Perl, and would use the
 Perl Module to do the work.
 
 The advantage to #2 is that I'd have invoice migration and a
 general-purpose module for inserting invoices, with the same amount of
 work that I'd have for just invoice migration using PL/SQL.  The
 drawback is the overhead of using Perl inside PG; how much overhead is
 there?
 
 What else should I consider?

There have been a few discussions about this in the recent past (look in the
archives) and the general consensus is that one should use the language that
is most comfortable (known).  After that, use the language that is best
suited to the task.  Array manipulations in PL/PgSQL are given as one
example of where pl/perl might be easier and faster.  As for overhead, there
isn't any more for pl/perl than for pl/pgsql, from what I understand.  One
final point: if you are thinking of writing a perl module for use in pl/perl,
the function will have to be run as untrusted, I think.  This could have
changed recently, but I'm not aware of it.

So, choose whichever is easiest for you for the job.  If performance is the
ONLY issue, then testing under real conditions is probably the only way to
be sure that what you are doing is the right way.

Sean
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Table design

2005-12-02 Thread Sean Davis
This might be a bit off-topic, but I'm curious what folks would do with this
situation:

I have about 6 different tab-delimited file types, all of which store
similar information (microarray gene expression).  However, the files come
from different manufacturers, so each has slightly different fields with
different meanings.  However, there are a few columns that are shared.  I
may need to add table formats in the future (as we get more manufacturers).
I can think of at least three ways to go about storing these data:

1) Create a single table that has as many columns as needed for ALL formats
and make manufacturer-specific views, naming columns in the view as
appropriate.  Then put rules on the view for inserts, updates, etc.  This is
my first choice, I think, but adding a new manufacturer's format means
creating a new view and possibly adding columns; some columns may be NULL for
large portions of the table.

2) Use postgres inheritance, but even shared columns in our data may have
different names depending on the manufacturer, so there may be views
involved anyway.

3) Use a fully-normalized strategy that stacks each column into one very
long table--this would be my last choice.

Thanks for any insight.

(For replies, please try to reply to me directly as well as the list as I
just get digests right now).

Thanks,
Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Table design

2005-12-02 Thread Sean Davis
On 12/2/05 10:21 AM, Guy Rouillier [EMAIL PROTECTED] wrote:

 Sean Davis wrote:
 This might be a bit off-topic, but I'm curious what folks would do
 with this 
 situation:
 
 I have about 6 different tab-delimited file types, all of which store
 similar information (microarray gene expression).  However, the files
 come from different manufacturers, so each has slightly different
 fields with different meanings.  However, there are a few columns
 that are shared.  I may need to add table formats in the future (as
 we get more manufacturers). I can think of at least three ways to go
 about storing these data:
 
 1) Create a single table that has as many columns as needed for ALL
 formats and make manufacturer-specific views, naming columns in the
 view as appropriate.  Then put rules on the view for inserts,
 updates, etc.  This is my first choice, I think, but adding a new
 manufacturer's format means creating a new view and possibly adding
 columns; some columns may be NULL for large portions of the table.
 
 2) Use postgres inheritance, but even shared columns in our data may
 have different names depending on the manufacturer, so there may be
 views involved anyway.
 
 I'm unclear if this is just a naming issue, or if the fields in the
 files have different meaning.  If it's just a case that supplier A names
 a field foo while supplier B names a field with the same meaning
 bar, I would think you'd want to coalesce all these incoming files
 into a single table containing columns that have meaning to your
 organization.  The effort then just becomes one of mapping incoming
 fields into the proper columns, but the end result would be something
 much more consistent and meaningful to your organization.
 
 If on the other hand all these incoming fields have different meaning
 and you need to keep them stored separately, I would look into option
 (4): just keep a separate table for each supplier, since you said that
 even shared fields may have different meaning; then use a view over all
 the tables to answer any queries across suppliers.

The columns are a mixture of both cases, hence the thought about
inheritance.  However, the column names do have meaning for users of the
manufacturer's products, so they need to be present, also.  I'm not the
first to tackle this problem, and most use a common table structure for all
the data, column naming aside.

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Question on Insert / Update

2005-11-09 Thread Sean Davis
On 11/9/05 9:45 AM, Alex [EMAIL PROTECTED] wrote:

 Hi,
 have just a general question...
 
 I have a table of 10M records, unique key on 5 fields.
 I need to update/insert 200k records in one go.
 
 I could do a select to check for existence and then either insert or update.
 Or simply insert, check on the error code an update if required.
 
 The 2nd seems to be to logical choice, but will it actually be faster
 and moreover is that the right way to do it?

Probably the fastest and most robust way to go about this if you have the
records in the form of a tab-delimited file is to COPY or \copy (in psql)
them into a separate loader table and then use SQL to manipulate the records
(check for duplicates, etc) for final insertion into the table.
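
A sketch of that approach with made-up names (the 5-column unique key is
abbreviated to a single column k here):

```sql
-- Stage the 200k incoming rows first; a temp table keeps it cheap.
CREATE TEMP TABLE incoming (k int, v text);
\copy incoming FROM 'batch.txt'

-- Update the rows that already exist in the target...
UPDATE target SET v = incoming.v
FROM incoming
WHERE target.k = incoming.k;

-- ...then insert only the ones that don't.
INSERT INTO target (k, v)
SELECT k, v FROM incoming
WHERE NOT EXISTS (SELECT 1 FROM target WHERE target.k = incoming.k);
```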

Sean


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to get the value in the lastvalue field

2005-10-26 Thread Sean Davis
On 10/26/05 8:23 AM, Venki [EMAIL PROTECTED] wrote:

 
 Hi,
 Can anyone in the list tell me how to get the value in the lastvalue field of
 a sequence. I tried
 select currval('field_seq');
 
 but got an error message
 
  ERROR:  currval of sequence "field_seq" is not yet defined in this session.
 
 I think currval will work only after an insert. I don't want to use nextval as
 this increases the lastvalue value.

That is correct.  You can't call currval until you have called nextval
first.  Why do you want to know?  That might help answer the question.
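
To illustrate the session rule, using the sequence name from the question
(the table and column are assumptions):

```sql
-- In a fresh session this fails: no nextval has been called yet.
-- SELECT currval('field_seq');   -- ERROR: not yet defined in this session

-- After an insert that draws a default from the sequence (or an explicit
-- nextval call), currval returns the value this session just obtained:
INSERT INTO mytable (field) VALUES (DEFAULT);
SELECT currval('field_seq');
```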

Sean


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] How to get the value in the lastvalue field

2005-10-26 Thread Sean Davis
On 10/26/05 8:42 AM, Venki [EMAIL PROTECTED] wrote:

 
 Hi 
 thanks for the replies. The situation is as follows. We get backups from the
 production server and update the local database in the local server but each
 time when we restore the database backup the sequence values are not getting
 updated properly. So what i thought was to write a function which will check
 the last value of the row in the table and compare it with the lastvalue of
 the sequence and if it is not proper then update the lastvalue of the sequence
 to a proper value.
 
 I Hope that i have explained the situation properly. If you have any other
 suggestion it will be greatly helpful as we are new to postgres and might be
 doing something wrong when restoring the database.

I may be wrong, but I thought that sequences were also dumped and restored
with database dumps, unless you are dumping and restoring only data.  Is
that the case?
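
If it is a data-only restore, the usual fix is to resync each sequence from
the table it feeds after the load; a sketch with hypothetical names:

```sql
-- Point the sequence at the highest id currently in the table, so the
-- next nextval() returns max(id) + 1.
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));
```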

Sean


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] querying PostgreSQL version?

2005-10-26 Thread Sean Davis
On 10/26/05 8:52 AM, Zlatko Matić [EMAIL PROTECTED] wrote:

 Hello.
 
 Is there any way to check the version of PostgreSQL by a query? Maybe by
 querying catalog tables?

Select version();


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] newbie question: reading sql commands from script files

2005-10-24 Thread Sean Davis
In psql, look at \i.

Sean

  - Original Message -
  From: basel novo
  To: pgsql-general@postgresql.org
  Sent: Monday, October 24, 2005 8:28 PM
  Subject: [GENERAL] newbie question: reading sql commands from script files

  What is the equivalent of the mysql 'source' command for reading sql
  commands from ascii script files?

  Thanks.

  ---(end of broadcast)---
  TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Tables

2005-10-20 Thread Sean Davis
On 10/20/05 12:37 PM, Bob Pawley [EMAIL PROTECTED] wrote:

 I am new to databases.
 
 I have table 1, a primary source, which generates a serial number to make each
 item unique.
 
 I want to use this number to generate a row in table 2 linking the two rows
 and allowing specific information on each item to be developed..
 
 I have a number of books, including one specifically for Postgres.
 
 What I don't have is the language to look this function up.
 
 Concepts like JOIN appear to used to create views not new rows on other
 tables.

This was confusing to me at first, also.  There is no generic function to
create rows in two tables simultaneously.  One simply creates the row in the
first table and then creates rows in the second table in a second step.  The
link between the two tables is a single column that contains the same id.
See any of MANY online tutorials on SQL for an introduction or get a book on
SQL.  I found that I had to sit and type verbatim from multiple sources
before I really understood what was going on, so you may want to try that.

Google for SQL tutorial for starters.
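
The two-step pattern can be sketched like this, with hypothetical tables
where items.item_id is a serial column:

```sql
BEGIN;
-- Step 1: the parent row; the serial column generates the unique number.
INSERT INTO items (name) VALUES ('widget');

-- Step 2: child rows reuse the id this session just generated, fetched
-- with currval on the serial column's sequence.
INSERT INTO item_details (item_id, note)
VALUES (currval('items_item_id_seq'), 'first detail');
COMMIT;
```

Wrapping both steps in one transaction keeps the pair atomic: either both
rows appear or neither does.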

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] getting around---division by zero on numeric

2005-10-19 Thread Sean Davis
On 10/19/05 8:26 AM, Tim Nelson [EMAIL PROTECTED] wrote:

 I am getting division by zero on a calculated field ( sum(sales) is 0 )
 and I can't find a way around this.  I figured out you can't use an
 aggregate in a where, and using having the parser must (obviously)
 evaluate the select fields before considering teh having clause.
 
 Does anyone have a way around this?  Thanks!
 
 select
 type,
 sum(sales),
 sum(cost),
 (sum(sales) * sum(cost) / sum(sales)) * 100
 from test
 group by 1
 having sum(sales) != 0

Can you use case?

http://www.postgresql.org/docs/8.0/interactive/functions-conditional.html

Or you could create a simple function to do the logic to avoid the division
by zero.
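
Two hedged variants of the query from the question:

```sql
-- CASE guards the division explicitly...
SELECT type, sum(sales), sum(cost),
       CASE WHEN sum(sales) = 0 THEN NULL
            ELSE (sum(sales) * sum(cost) / sum(sales)) * 100
       END
FROM test
GROUP BY 1;

-- ...or NULLIF turns a zero divisor into NULL, so the division yields
-- NULL for those groups instead of raising an error.
SELECT type, sum(sales), sum(cost),
       (sum(sales) * sum(cost) / NULLIF(sum(sales), 0)) * 100
FROM test
GROUP BY 1;
```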

Sean


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Question about stored procedures

2005-10-13 Thread Sean Davis
On 10/13/05 6:36 AM, Josephine E. de Castro [EMAIL PROTECTED]
wrote:

 Hi everyone,  
 
 I just want to know if there is an equivalent method in PostgreSQL that acts
 like SQL Server's extended stored procedure. I want to run a stored procedure
 that can update a file in the file system..  I dont know whether I can do this
 using PL/pgSQL or do i need to use another method.
 
 Someone please enlighten me.

Look at pl/perlu or any of the other untrusted flavors of procedure
language.  Probably all can do what you like as far as file manipulation.

http://www.postgresql.org/docs/8.0/static/server-programming.html

Sean


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] graphs in PostgreSQL

2005-10-13 Thread Sean Davis
On 10/13/05 8:27 AM, Ivan Yu. Zolotukhin [EMAIL PROTECTED] wrote:

 Hello,
 
 I'm trying to organize storage and processing of a graph (pretty spare,
 100,000 vertices and 5,000,000 edges) with PostgreSQL.
 
 I have two main problems:
 - standart problem of finding all shortest paths between two given vertices;
 - search thru vertices' properties with ordering by path lengths from
 given vertix.
 
 So, basically, I need to decide what additional data (some preprocessed
 data about a graph or indexes) I need to store, how to store it, and how
 maintain it when graph changes.
 
 It seems that the second problem (ordering by path length) requires to
 store all path lengths between all vertices pairs (roadmap), that is
 very expensive to maintain.
 
 I would appreciate any suggestions...

Try googling for transitive closure SQL for some hits on the subject.

The following website describes how some folks have stored a DAG for a
biological system:
http://www.godatabase.org/dev/sql/doc/godb-sql-doc.html

I don't know if any folks from the CHADO
(http://www.gmod.org/schema/index.shtml) project can comment on computing
the transitive closure using stored procedures, but I think they do that for
their project.

Sean


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] About transform SQL grammar from postgresql 8.0 to

2005-10-11 Thread Sean Davis
On 10/11/05 11:34 AM, Emi Lu [EMAIL PROTECTED] wrote:

 Hello,
 
 I know there is a script in perl can transform all sql commands in mysql
 into the queries supported by postgreSQL.
 
 Since we have not finished moving all data from mysql into postgresql,
 we need to dump around 60 tables from postgresql 8.0 to mysql 3.0 on a
 daily basis.
 
 So, I am looking for some existing scripts that can change postgresql
 SQL grammar (E.g., filtering set search_path) into the ones supported
 by mysql 3.0?

Look at SQL::Translator.  However, MANY, MANY features from postgres 8.0
will not be available on mysql 3 (really?).  So, I think you may have to do
a bit of editing to make it work correctly, in any case.  If this is a
one-time thing, do it by hand.  If doing on a regular basis, you might need
to script things a bit.

Sean


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Cast to integer

2005-10-05 Thread Sean Davis
On 10/5/05 9:08 AM, Robert Osowiecki [EMAIL PROTECTED] wrote:

 Hello!
 
 Anyone could explain to me, why cast(3.33 to int) works (from float, I
 suppose)  but cast('3.33' to int) (from text) does not? And what if I
 create a cast for that: is it possibly dangerous?

How about:

sdavis=# select (('3.33')::float)::int;
 int4 
--
3
(1 row)

Sean


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Integration with MS Sql Server

2005-10-04 Thread Sean Davis
On 10/3/05 7:41 PM, J B [EMAIL PROTECTED] wrote:

 On 10/3/05, Dann Corbit [EMAIL PROTECTED] wrote:
 Using SQL*Server, and OLEDB or ODBC data source can be connected as a
 linked server.  Then, TSQL queries can go against PostgreSQL tables as
 though they were ordinary SQL*Server tables (but they have 4 part names
 instead of 3 part names).
 
 Unfortunately, mirroring PostgreSQL within MS SQL Server is not an
 option in our case...we need to do it the other way around.
 Interesting info nonetheless...thanks.
 
 Is there no other way?

I'm sorry, but I missed the original post, but have you looked at DBI-Link?

http://www.pgfoundry.org/projects/dbi-link
http://www.pervasivepostgres.com/postgresql/tidbits_June05.asp

Sean


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Query optimization

2005-09-30 Thread Sean Davis
I have a couple of very large tables that I am querying on that gives the
following explain analyze output.  If someone can help out with my mess,
that would be great.

Thanks,
Sean

explain analyze
select e.*,c.* 
from u_all_est_mrna c
join g_rna_acc d on c.accession=d.accession,
(select a.gene_id,
b.db_id,
max(tend-tstart)
from g_rna_acc a
join u_all_est_mrna b on a.accession=b.accession
where gene_id < 200
group by a.gene_id,b.db_id) e
where abs(tstart-tend)=e.max
and d.gene_id=e.gene_id
and c.db_id=e.db_id;
   
QUERY PLAN 


-
 Hash Join  (cost=1054997.38..2472083.62 rows=2 width=219) (actual
time=122796.024..357269.576 rows=327 loops=1)
   Hash Cond: (((outer.accession)::text = (inner.accession)::text) AND
(abs((outer.tstart - outer.tend)) = inner.max) AND
((outer.db_id)::text = (inner.db_id)::text))
   -  Seq Scan on u_all_est_mrna c  (cost=0.00..932582.74 rows=24225174
width=179) (actual time=17.384..302484.904 rows=24225174 loops=1)
   -  Hash  (cost=1054973.98..1054973.98 rows=3119 width=52) (actual
time=11562.968..11562.968 rows=2276 loops=1)
 -  Nested Loop  (cost=1046393.15..1054973.98 rows=3119 width=52)
(actual time=11546.931..11558.704 rows=2276 loops=1)
   -  HashAggregate  (cost=1046393.15..1046395.98 rows=567
width=19) (actual time=11546.892..11547.188 rows=276 loops=1)
 -  Nested Loop  (cost=4.78..1046285.32 rows=14377
width=19) (actual time=0.148..11537.307 rows=1854 loops=1)
   -  Index Scan using g_rna_acc_gene_id on
g_rna_acc a  (cost=0.00..1049.44 rows=519 width=16) (actual
time=0.026..50.006 rows=1139 loops=1)
 Index Cond: (gene_id < 200)
   -  Bitmap Heap Scan on u_all_est_mrna b
(cost=4.78..2007.57 rows=510 width=26) (actual time=7.100..10.068 rows=2
loops=1139)
 Recheck Cond: ((outer.accession)::text =
(b.accession)::text)
 -  Bitmap Index Scan on uaem_accession
(cost=0.00..4.78 rows=510 width=0) (actual time=4.270..4.270 rows=2
loops=1139)
   Index Cond:
((outer.accession)::text = (b.accession)::text)
   -  Index Scan using g_rna_acc_gene_id on g_rna_acc d
(cost=0.00..15.04 rows=6 width=16) (actual time=0.010..0.037 rows=8
loops=276)
 Index Cond: (d.gene_id = outer.gene_id)
 Total runtime: 357270.873 ms
(16 rows)


\d+ u_all_est_mrna
 Table
public.u_all_est_mrna
 Column  |   Type|
Modifiers  |Description
-+---+--
---+
 all_est_mrna_id | integer   | not null default
nextval('public.u_all_est_mrna_all_est_mrna_id_seq'::text) |
 db_id   | character varying |
| 
 seqtype | character varying |
| 
 matches | integer   |
| 
 mismatches  | integer   |
| 
 repmatches  | integer   |
| 
 ncount  | integer   |
| 
 qnuminsert  | integer   |
| 
 qbaseinsert | integer   |
| 
 tnuminsert  | integer   |
| 
 tbaseinsert | integer   |
| 
 strand  | character(1)  |
| 
 accession   | character varying |
| Genbank Accession without version.
 qsize   | integer   |
| 
 qstart  | integer   |
| 
 qend| integer   |
| 
 chrom   | character varying |
| Chromosome, notation like chr1
 tsize   | integer   |
| 
 tstart  | integer   |
| Blat hit start
 tend| integer   |
| Blat hit end
 blockcount  | integer   |
| 
 blocksizes  | character varying |
| 
 qstarts | character varying |
| 
 tstarts | character varying |
| 
Indexes:
u_all_est_mrna_pkey PRIMARY KEY, btree (all_est_mrna_id)
uaem_accession btree (accession)
uaem_chrom btree (chrom)
uaem_db_id btree (db_id)
uaem_seqtype btree (seqtype)
uaem_tend_chrom btree (tend, chrom)
uaem_tstart_chrom btree (tstart, chrom)
Has OIDs: yes


\d+ g_rna_acc
  Table public.g_rna_acc
Column|   Type| Modifiers | Description
--+---+---+-
 gene_id  | integer   |   |
 accession| character varying | not null  |
 version  | integer   |   |
 accession_gi | integer   |   |
Indexes:
g_rna_acc_pkey PRIMARY 

Re: [GENERAL] Perl regular expressions

2005-09-29 Thread Sean Davis
On 9/28/05 6:25 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Is there any support for perl regular expressions
 in Postgresql?

You might want to look at this section of the documentation:

http://www.postgresql.org/docs/8.0/interactive/functions-matching.html

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Index use in BETWEEN statement...

2005-09-27 Thread Sean Davis
On 9/27/05 7:45 AM, Yonatan Ben-Nes [EMAIL PROTECTED] wrote:

 Tom Lane wrote:
 Cristian Prieto [EMAIL PROTECTED] writes:
 
 mydb=# explain analyze select locid from geoip_block where
 '216.230.158.50'::inet between start_block and end_block;
 
 
 As you see it still using a sequential scan in the table and ignores the
 index, any other suggestion?
 
 
 That two-column index is entirely useless for this query; in fact btree
 indexes of any sort are pretty useless.  You really need some sort of
 multidimensional index type like rtree or gist.  There was discussion
 just a week or three ago of how to optimize searches for intervals
 overlapping a specified point, which is identical to your problem.
 Can't remember if the question was about timestamp intervals or plain
 intervals, but try checking the list archives.
 
 regards, tom lane
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org
 
 I think that Tom is talking about a discussion which I started entitled
 Planner create a slow plan without an available index search for it
 maybe it will help you.
 At the end I created an RTREE index and it did solved my problem though
 my data was 2 INT fields and not INET fields as yours so im not sure how
 can you work with that... To solve my problem I created boxes from the 2
 numbers and with them I did overlapping.

There is some code in this thread that shows the box approach explicitly:

http://archives.postgresql.org/pgsql-sql/2005-09/msg00189.php

Sean
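The box/rtree discussion above is about finding the one range that contains a point. The underlying "stab query" is easy to see outside the database; here is a minimal Python sketch using integer-encoded IPv4 addresses (the block data and `locate` helper are hypothetical, standing in for the poster's geoip_block rows; a sequential scan over ranges, which is effectively what the btree-unfriendly query does):

```python
import ipaddress

def ip_to_int(ip: str) -> int:
    """Encode a dotted-quad IPv4 address as a 32-bit integer."""
    return int(ipaddress.IPv4Address(ip))

# Hypothetical (start, end, locid) ranges, analogous to geoip_block rows.
blocks = [
    (ip_to_int("216.230.0.0"), ip_to_int("216.230.255.255"), 42),
    (ip_to_int("10.0.0.0"), ip_to_int("10.255.255.255"), 7),
]

def locate(ip: str):
    """Linear stab query: find the range containing ip (a seq scan, in effect)."""
    n = ip_to_int(ip)
    for start, end, locid in blocks:
        if start <= n <= end:
            return locid
    return None

print(locate("216.230.158.50"))  # -> 42
```

A btree index on (start, end) cannot prune this search because neither column alone bounds the answer; that is why the thread points at rtree/gist over boxes instead.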


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Data Entry Tool for PostgreSQL

2005-09-26 Thread Sean Davis
On 9/25/05 8:49 PM, Brent Wood [EMAIL PROTECTED] wrote:

 
 
 On Fri, 23 Sep 2005, Ian Overton wrote:
 
 Hello,
 
 We have a web application using PHP, Linux and PostgreSQL.  We need a
 simple data entry tool to allow non-IT people to edit data in our database
 through a web-browser.  Are there any free or commercial data management
 tools out there, not database management tools like pgAdmin.
 
 I would have thought a php application would be able to do this fairly
 easily. My data entry scripts insert the new records with just a primary
 key, then iterate through the various fields using an update sql for each
 one which is not null.
 
 This sort of approach could easily be used to populate an on-screen table
 using php, then update any changed fields as required.

These types of applications are typically called CRUD applications
(Create, Read, Update, and Delete).  Perl, Ruby, and Java, at least, have
frameworks for building CRUD applications in a fairly straightforward way
(depending on needs and customization).

Sean
 


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [GENERAL] Index use in BETWEEN statement...

2005-09-26 Thread Sean Davis
On 9/26/05 11:26 AM, Cristian Prieto [EMAIL PROTECTED] wrote:

 
 Hello pals, I have the following table in Postgresql 8.0.1
 
 Mydb# \d geoip_block
 Table public.geoip_block
  Column|  Type  | Modifiers
 -++---
 locid   | bigint |
 start_block | inet   |
 end_block   | inet   |
 
 mydb# explain analyze select locid from geoip_block where
 '216.230.158.50'::inet between start_block and end_block;
 QUERY PLAN
 
 ---
 Seq Scan on geoip_block  (cost=0.00..142772.86 rows=709688 width=8) (actual
 time=14045.384..14706.927 rows=1 loops=1)
  Filter: (('216.230.158.50'::inet >= start_block) AND
 ('216.230.158.50'::inet <= end_block))
 Total runtime: 14707.038 ms
 
 Ok, now I decided to create a index to speed a little the query
 
 Mydb# create index idx_ipblocks on geoip_block(start_block, end_block);
 CREATE INDEX
 
 clickad=# explain analyze select locid from geoip_block where
 '216.230.158.50'::inet between start_block and end_block;
 QUERY PLAN
 
 --
 Seq Scan on geoip_block  (cost=0.00..78033.96 rows=230141 width=8) (actual
 time=12107.919..12610.199 rows=1 loops=1)
  Filter: (('216.230.158.50'::inet >= start_block) AND
 ('216.230.158.50'::inet <= end_block))
 Total runtime: 12610.329 ms
 (3 rows)
 
 I guess the planner is doing a sequential scan in the table, why not use the
 compound index? Do you have any idea in how to speed up this query?

Did you vacuum analyze the table after creating the index?

Sean


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] help me pls

2005-09-19 Thread Sean Davis
On 9/19/05 12:02 AM, suresh ramasamy [EMAIL PROTECTED] wrote:

 thanks for the info Devrim,
 
 by the way i'm newbie, i have followed the steps in the documentation for
 compiling and installation.  I'm using FC4.  ./configure completes immediately
 but the gmake running for nearly the whole day.  so decide to terminate. (my
 system configuration is p-III, 733 with 256 MB Ram and 80GB hard disk)  give
 me a solution.

Did you get errors?  Compiling may take a while (but a whole day does seem
like a long time).  I'm not a  linux user, but I think there is an rpm of
postgresql available.

 if i try the default installation in the FC4, the following error comes.  what
 should i do?
 
 postgres
 FATAL:  "/var/lib/pgsql/data" is not a valid data directory
 DETAIL:  File "/var/lib/pgsql/data/PG_VERSION" is missing.

Did you run initdb?



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] ERROR: database is being accessed by other users

2005-08-24 Thread Sean Davis
On 8/23/05 10:10 PM, wolverine my [EMAIL PROTECTED] wrote:

 Hi!
 
 I got the following psql's error when executing a script,
 ERROR:  database "test" is being accessed by other users
 
 To solve this, I have to disconnect the database connection from the
 pgAdmin III.
 
 May I know how can I configure the database to allow multiple user access?

What were you trying to do when you got this message?  What version of
postgres?  What OS?

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] psql: server closed the connection unexpetedly

2005-08-24 Thread Sean Davis
On 8/24/05 7:24 AM, Phil Thornhill [EMAIL PROTECTED] wrote:

 
 Hi,
 
 I can't connect to a new installation of PostgreSQL 8.1 on a XP SP1 laptop.
 
 Im trying to connect using psql in the laptop's command prompt. i keep
 getting the following error message:
 
  psql: server closed the connection unexpectedly
   this probably means the server terminated
   abnormally before or while processing the request
 
 
  I can start & stop the postgresql service with no problems. and if i try
 to connect while postgresql is not running i get the expected error:
 
 psql: could not connect to server: Connection refused (0x274D/10061)
   Is the server running on host ??? and accepting
   TCP/IP connections on port 5432?
 
 
 the laptop has a trendmicro firewall installed, but this is set to allow
 all connections from the localhost. plus i have other XP machines with
 the same setup and postgresql installation working perfectly.
 
 has anyone else ever come across the problem? or any suggestions as to
 what the problem may be?

I'm not sure what the problem is, but have you looked at the log file?
Also, you ARE running 8.1 (which isn't even in Beta testing yet)?

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Problem finding libpg.3.dylib on macos X in plperlu procedure

2005-08-24 Thread Sean Davis
I ran across this problem while running a plperlu procedure.  I do not have
a libpq.3.dylib, but do have a libpq.4.dylib.  I am running perl 5.8.1,
macos 10.3.8 on an Xserve G5, and Postgres 8.0.2.  Any suggestions?  I
should mention that other db functions seem to work just fine.

From psql:

select utilities.make_mysql_idmap('','','localhost','','','','idmap','idmap','idmap');
psql: NOTICE:  Connecting
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
psql: connection to server was lost

From my server log:

NOTICE:  Connecting
dyld: /usr/local/pgsql/bin/postmaster can't open library:
/usr/local/pgsql/lib/libpq.3.dylib  (No such file or directory, errno = 2)
LOG:  server process (PID 826) was terminated by signal 5
LOG:  terminating any other active server processes
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted at 2005-08-24 16:24:07 EDT
LOG:  checkpoint record is at 18/A35570C0
LOG:  redo record is at 18/A35570C0; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 44610957; next OID: 473189363
LOG:  database system was not properly shut down; automatic recovery in
progress
LOG:  record with zero length at 18/A3557100
LOG:  redo is not required
LOG:  database system is ready

And the offending procedure:

create or replace function utilities.make_mysql_idmap(pg_user text,pg_pass
text,pg_host text,mysql_user text,mysql_pass text,mysql_host text,mysql_db
text,idmaptable text,query text) returns boolean as $$
use strict;
use warnings;
use DBI;

my ($pg_user,$pg_pass,$pg_host,$mysql_user,$mysql_pass,$mysql_host,
    $mysql_db,$idmaptable,$query) = @_;
elog(NOTICE, "Connecting");
my $dbh_pg = DBI->connect("dbi:Pg:dbname=annodb4;host=$pg_host",
                          $pg_user, $pg_pass)
    or elog(ERROR, 'cannot connect to postgres');
elog(NOTICE, "Connected");
my $dbh_mysql = DBI->connect("dbi:mysql:database=$mysql_db;host=$mysql_host",
                             $mysql_user, $mysql_pass)
    or elog(ERROR, 'cannot connect to mysql');
elog(NOTICE, "Connected");

$dbh_mysql->do("DROP TABLE $idmaptable");
elog(NOTICE, "Dropped table $idmaptable");
$dbh_mysql->do(qq/CREATE TABLE $idmaptable (
    id_no integer auto_increment primary key,
    from_id varchar(15),
    from_value varchar(15),
    to_id varchar(200),
    to_value varchar(15))/);
elog(NOTICE, "Created table $idmaptable");
my $sth_pg = $dbh_pg->prepare(qq/
select ug_id,'Hs.data',symbol,'gene' from ug_main where species='Hs'
union
select ug_id,'Mm.data',symbol,'gene' from ug_main where species='Mm'
union
select ug_id,'Mm.data',description,'title' from ug_main where species='Mm'
union
select ug_id,'Hs.data',description,'title' from ug_main where species='Hs'/
);
$sth_pg->execute();
my $sth_mysql = $dbh_mysql->prepare(qq/INSERT into $idmaptable
(from_id,from_value,to_id,to_value) values (?,?,?,?)/);
while (my $row = $sth_pg->fetchrow_arrayref) {
    $sth_mysql->execute(@{$row});
}
$sth_mysql->finish;
$sth_pg->finish;
$dbh_mysql->disconnect;
$dbh_pg->disconnect;
return 1;
$$ language plperlu;


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Problem finding libpg.3.dylib on macos X in plperlu

2005-08-24 Thread Sean Davis
I answered my own question--if one upgrades the server, one must also
consider the consequences for the client (DBD::Pg).  It had been compiled
using an older library that got removed when I upgraded.

Sean




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] SQL error - please help.

2005-08-22 Thread Sean Davis
On 8/22/05 8:24 AM, Bernard [EMAIL PROTECTED] wrote:

 Dear Postgresql specialists
 
 I would like to seek help with a SQL query that was developed and
 tested with other SQL92 compliant databases.
 
 Please examine the following testcase and the result that I get:
 
 # su postgres
 $ psql -d mydb -U postgres
 CREATE TABLE DEPARTMENT(PK INT NOT NULL, NAME TEXT NOT NULL);
 ALTER TABLE DEPARTMENT ADD PRIMARY KEY(PK);
 CREATE TABLE PROJECT(PK INT NOT NULL, DEPARTMENT_FK INT NOT NULL, NAME
 VARCHAR(30) NOT NULL, VALUE INT NOT NULL);
 ALTER TABLE PROJECT ADD PRIMARY KEY(PK);
 INSERT INTO DEPARTMENT(PK,NAME)VALUES(1,'Human Resources');
 INSERT INTO DEPARTMENT(PK,NAME)VALUES(2,'Tax');
 INSERT INTO
 PROJECT(PK,DEPARTMENT_FK,NAME,VALUE)VALUES(1,1,'Head-Hunt',1000);
 INSERT INTO
 PROJECT(PK,DEPARTMENT_FK,NAME,VALUE)VALUES(2,1,'Redundancy',100);
 INSERT INTO
 PROJECT(PK,DEPARTMENT_FK,NAME,VALUE)VALUES(3,2,'Avoidance',1000);
 INSERT INTO
 PROJECT(PK,DEPARTMENT_FK,NAME,VALUE)VALUES(4,2,'Charity',100);
 INSERT INTO
 PROJECT(PK,DEPARTMENT_FK,NAME,VALUE)VALUES(5,2,'Lobbying',1);
 
 SELECT
 DEPARTMENT.PK,
 DEPARTMENT.NAME,
 MIN(PROJECT.VALUE)AS RATING
 FROM DEPARTMENT,
 PROJECT
 WHERE DEPARTMENT.PK=PROJECT.DEPARTMENT_FK
 GROUP BY DEPARTMENT.PK
 ORDER BY DEPARTMENT.PK;
 
 ERROR:  column department.name must appear in the GROUP BY clause or
 be used in an aggregate function

It simply means that you must include department.name in the group by clause
like:

GROUP BY department.pk, department.name

The rest of the query looks OK.

Sean
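The corrected query can be checked standalone with Python's sqlite3 and the schema from the post (sqlite is laxer than Postgres about the GROUP BY rule, so it accepts either form; the point here is just the corrected query's result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE department (pk INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE project (pk INTEGER PRIMARY KEY, department_fk INTEGER NOT NULL,
                      name VARCHAR(30) NOT NULL, value INTEGER NOT NULL);
INSERT INTO department VALUES (1, 'Human Resources'), (2, 'Tax');
INSERT INTO project VALUES (1, 1, 'Head-Hunt', 1000), (2, 1, 'Redundancy', 100),
                           (3, 2, 'Avoidance', 1000), (4, 2, 'Charity', 100),
                           (5, 2, 'Lobbying', 1);
""")

# The corrected query: every non-aggregated output column appears in GROUP BY.
rows = cur.execute("""
SELECT department.pk, department.name, MIN(project.value) AS rating
FROM department JOIN project ON department.pk = project.department_fk
GROUP BY department.pk, department.name
ORDER BY department.pk
""").fetchall()
print(rows)  # -> [(1, 'Human Resources', 100), (2, 'Tax', 1)]
```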


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL]

2005-08-22 Thread Sean Davis
On 8/22/05 9:56 AM, Nigel Horne [EMAIL PROTECTED] wrote:

 On Mon, 2005-08-22 at 14:49, Tino Wildenhain wrote:
 Nigel Horne schrieb:
 On Fri, 2005-08-19 at 16:34, A. Kretschmer wrote:
 
 am  19.08.2005, um 15:58:20 +0100 mailte Nigel Horne folgendes:
 
 I can't work out from that how to return more than one value.
 
 17:35  rtfm_please For information about srf
 17:35  rtfm_please see
 http://techdocs.postgresql.org/guides/SetReturningFunctions
 17:35  rtfm_please or
 http://www.postgresql.org/docs/current/static/xfunc-sql.html#AEN28835
 
 
 How does that help with my problem? I seems to discuss returning more
 than one row of a table which is not the question I asked.
 
 
 try to tell your questions more precisely :-)
 
 I want to return more than one value from a procedure, e.g. a string and
 an integer.
 
 I think you want to return a record or tabletype.
 
 Not really, since those values could be computed on the fly, they may
 not be values in a database.

Actually, that is what you want.  Here is a concrete example:


CREATE OR REPLACE FUNCTION test_return(int,int) RETURNS RECORD AS $$
DECLARE
a alias for $1;
b alias for $2;
ret record;
BEGIN
select into ret a, b, a+b;
RETURN ret;
END;
$$ language plpgsql;

select * from test_return(1,2) as t(a int, b int, s int);
 a | b | s 
---+---+---
 1 | 2 | 3
(1 row)



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Query results caching?

2005-08-22 Thread Sean Davis
On 8/22/05 1:59 PM, Dann Corbit [EMAIL PROTECTED] wrote:

 
 
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Ben-Nes Yonatan
 Sent: Monday, August 22, 2005 9:03 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Query results caching?
 
 Hi all,
 
 I don't know what it's called, but I noticed that when I query the db for
 the first time it gives me the result slower than the next times I'll
 repeat the same exact query. I figure that it's some kind of caching, so
 hence the title of the mail :)
 
 The operating system and the database will both percolate frequently
 used information from disk into memory.  Particularly if they are SELECT
 queries, they will get faster and faster.
 
 Anyway I would want to be able to delete that caching after every
 query test that I run, cause I want to see the real time results for
 my
 queries (its for a searching option for users so it will vary alot).
 
 Those are the real times for your queries.
 
 Is it possible to do it manually each time or maybe only from the
 configuration?
 
 You will have to query a different table each time.

Just to extend this notion a bit, if you want to test your application
speed, you may want to generate real-world input to determine the actual
behavior/speed under real conditions.  As Dann pointed out, the results for
timings are real in that if the user generated the queries as you did, the
timing results would be (nearly) the same as for you.  It seems that your
concern is that the user will not generate the same type of input that you
did (that it will vary more), so the best solution may be to actually
generate some test queries that actually conform to what you think the user
input will look like.

Sean
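The suggestion above — benchmark with generated inputs shaped like real user queries, rather than repeating one warm query — can be sketched with Python's sqlite3 (the `items` table and parameter range are illustrative, not from the thread; any DB-API driver works the same way):

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price INTEGER)")
conn.executemany("INSERT INTO items (price) VALUES (?)",
                 [(random.randrange(10_000),) for _ in range(5_000)])

def timed_search(limit: int) -> float:
    """Run one search with a caller-supplied parameter and return elapsed seconds."""
    t0 = time.perf_counter()
    conn.execute("SELECT count(*) FROM items WHERE price < ?", (limit,)).fetchone()
    return time.perf_counter() - t0

# Vary the input each run, as a real user would, instead of repeating one query.
timings = [timed_search(random.randrange(10_000)) for _ in range(20)]
print(f"mean: {sum(timings) / len(timings):.6f}s")
```

Varying the parameter keeps each run from hitting exactly the same cached pages, which is closer to the real-world behavior the poster wants to measure.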


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Set autocommit to off

2005-08-17 Thread Sean Davis
On 8/17/05 4:24 AM, Aliomar Mariano Rego [EMAIL PROTECTED] wrote:

 Does somebody knows why the Postgresql 7.4.8 or later doesn't supports
 the option SET AUTOCOMMIT TO OFF?

See this message:

http://archives.postgresql.org/pgsql-general/2005-07/msg00064.php

In short, server-side autocommit is not possible after 7.3, I think.

Instead, use a transaction block (BEGIN...COMMIT) or set it on the client
side like this in psql:

http://www.postgresql.org/docs/8.0/interactive/app-psql.html and search for
autocommit.

Hope that helps
Sean
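A sketch of the client-side approach, using Python's sqlite3 as a stand-in for any DB-API driver (psycopg behaves analogously): with driver-level autocommit on, transactions are delimited explicitly with BEGIN / COMMIT / ROLLBACK.

```python
import sqlite3

# isolation_level=None puts the sqlite3 driver in autocommit mode;
# transactions are then controlled explicitly, as in a psql session.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("ROLLBACK")          # the insert is undone...

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("COMMIT")            # ...while this one is kept

print(conn.execute("SELECT x FROM t").fetchall())  # -> [(2,)]
```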


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Finding nearest numeric value

2005-08-17 Thread Sean Davis
On 8/17/05 8:50 AM, Poul Møller Hansen [EMAIL PROTECTED] wrote:

 Does anyone know how to find the row with the nearest numeric value, not
 necessarily an exact match ?

To find the nearest value in number_column to some CONSTANT (where you
replace constant with a number), try:

select *,(number_column - CONSTANT)^2 as d from tablename order by d limit
1;

Does that do it for you?

Sean


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Finding nearest numeric value

2005-08-17 Thread Sean Davis
On 8/17/05 10:01 AM, Poul Møller Hansen [EMAIL PROTECTED] wrote:

 
 To find the nearest value in number_column to some CONSTANT (where you
 replace constant with a number), try:
 
 select *,(number_column - CONSTANT)^2 as d from tablename order by d limit
 1;
 
 Does that do it for you?
 
 Sean
  
 
 It does ideed, not that I understood how, but I will find out.
 Thank you very much.

Just a word (or several) of explanation, then

To compute the distance between two points on a line, you can compute the
absolute value of the difference (4-2 is the same distance as 2-4, while the
latter is negative) or you can square the difference (just to make it
positive).  You could use absolute value in the above query if you like--I
don't know which is faster, but they will give the same result.

As for the query structure, you can select calculations of columns as well
as the columns themselves.  The as d part just gives the calculation a
nice name to use in the rest of the query and in the resulting output.

Sean
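The two distance choices order rows identically, which a quick sketch confirms (sqlite3 standing in for Postgres; sqlite has no `^` operator, so the square is written as a product — `abs()` exists in both engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(1.0,), (4.0,), (9.5,)])

target = 5.0
# Squared difference: always non-negative, monotone in the absolute distance.
by_square = conn.execute(
    "SELECT n FROM t ORDER BY (n - ?) * (n - ?) LIMIT 1", (target, target)
).fetchone()[0]
# Absolute difference: same ordering, arguably clearer.
by_abs = conn.execute(
    "SELECT n FROM t ORDER BY abs(n - ?) LIMIT 1", (target,)
).fetchone()[0]
print(by_square, by_abs)  # both pick 4.0, the value nearest to 5.0
```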


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] createlang

2005-08-15 Thread Sean Davis
On 8/15/05 4:45 PM, eoghan [EMAIL PROTECTED] wrote:

 Hello
 Im trying to load plpgsql into my test db... Im wondering if theres a
 way to check if its loaded... I do:
 test-# createlang plpgsql test
 test-#
 When i try load an example function:
 test-# \i test.sql
 Im getting:
 psql:test.sql:5: ERROR:  language plpgsql does not exist
 HINT:  You need to use createlang to load the language into the
 database.

You need to run createlang from the command line (outside the database).
Here is the documentation:

http://www.postgresql.org/docs/8.0/interactive/app-createlang.html

Sean


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Analyze and number of rows

2005-08-12 Thread Sean Davis
Just a simple question--is there a way to have analyze use a fixed
PROPORTION of rows rather than a fixed number?

Thanks,
Sean


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Cross database queries

2005-08-09 Thread Sean Davis
On 8/9/05 10:21 AM, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:

 Hi
 
 Suppose I have two databases on one server (lets call them DB_A and DB_B)
 and I was to write a cross database query. How do I do this in PostgreSQL?
 
 On MS-SQL I would do something like:
 
 SELECT ta.Field1, ta.Field2, tb.Field2
 FROM DB_A.dbo.SomeTable ta
 JOIN DB_B.dbo.SomeOtherTable tb
 ON ta.Field1 = tb.Field1

See /contrib/dblink in the postgresql source distribution.

Sean
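As an illustration of the cross-database idea (not of dblink's actual API), sqlite's ATTACH gives the same effect in a self-contained sketch, using the table names from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")            # plays the role of DB_A
conn.execute("ATTACH DATABASE ':memory:' AS db_b")

conn.execute("CREATE TABLE some_table (field1 INTEGER, field2 TEXT)")
conn.execute("CREATE TABLE db_b.some_other_table (field1 INTEGER, field2 TEXT)")
conn.execute("INSERT INTO some_table VALUES (1, 'a')")
conn.execute("INSERT INTO db_b.some_other_table VALUES (1, 'b')")

# Cross-database join by qualifying tables with the attached database name.
rows = conn.execute("""
    SELECT ta.field1, ta.field2, tb.field2
    FROM some_table ta JOIN db_b.some_other_table tb ON ta.field1 = tb.field1
""").fetchall()
print(rows)  # -> [(1, 'a', 'b')]
```

With Postgres proper, dblink exposes the remote query as a function call rather than a qualified table name, but the join shape is the same.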


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] best way to reference tables

2005-08-09 Thread Sean Davis
On 8/9/05 10:31 AM, TJ O'Donnell [EMAIL PROTECTED] wrote:

 I have many different tables that I want to keep track of.
 So, I thought of a master table with those table names in it.
 But, to maintain this (suppose a table changes
 its name, gets dropped) I want to have some kind of referential
 integrity - the way foreign keys and constraints do.
 What could I use that would disallow inserting a name into
 the master table unless another table by that name already exists?
 And what could ensure that a table would not be renamed or dropped
 unless the master table is changed?

Good idea, but the table already exists.  Try:

Select * from pg_tables;

And see:

http://www.postgresql.org/docs/8.0/interactive/catalogs.html

Sean
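The same catalog-query idea, sketched against sqlite's catalog (`sqlite_master` plays the role that pg_tables does in the answer; the `table_exists` helper is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracked_table (id INTEGER)")

def table_exists(name: str) -> bool:
    """Check the catalog before accepting a name into a 'master' table."""
    row = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?", (name,)
    ).fetchone()
    return row is not None

print(table_exists("tracked_table"), table_exists("no_such_table"))  # -> True False
```

A CHECK or trigger built on such a lookup would reject inserts naming nonexistent tables, which covers the first half of the poster's request; catching renames and drops needs event hooks the server may not provide.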


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] psqsl - remote db

2005-08-04 Thread Sean Davis
On 8/4/05 2:26 PM, Andrew Stewart [EMAIL PROTECTED] wrote:

 I'm sorry, I'm sure I should be asking this someplace more general, but
 apparently the word 'port' is used in so many different contexts that a
 google search turns up fruitless ;)
 
 I'm just trying to access a remote postgresql database on port 5432,
 which does not appear to be open on the remote computer.  I'm on a
 MacOSX and  cannot figure out for the life of me how to open up this
 port on the computer.
 
 This is probably a unix question if anything, but any help would be
 appreciated.

You'll probably have to let us know some more details.

Can you connect while on the machine that the database is running on?

How are you trying to connect from the MacOS machine (show us the command)?
What output do you get (again, just paste the output of the command)?

Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Alter privileges for all tables

2005-08-01 Thread Sean Davis
Sorry to ask this again, but I seem to have misplaced the original
discussion and can't find it in the archives--could someone point me to the
thread on functions to batch alter privileges on all tables (or a subset) in
a database?

Thanks,
Sean


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Is there something like Mssql GUID in PostgreSql

2005-07-29 Thread Sean Davis

On 7/29/05 8:04 AM, Dragan Matic [EMAIL PROTECTED] wrote:

We are currently migrating a database from Ms Sql server to PostgreSQL. 
So far replication was done using GUID (global unique identifier) fields 
which is MS extension. I am wondering if something similar exists in 
PostgreSQL? 

Tnx in advance 

Dragan 

There is a search function for the list here:

http://archives.postgresql.org/

Searching for GUID brings up a fairly extensive discussion (in just the past 2-3 days):

http://archives.postgresql.org/pgsql-general/2005-07/msg01074.php

Hope this helps

Sean





Re: [GENERAL] get sequence from remote server

2005-07-24 Thread Sean Davis


On Jul 24, 2005, at 3:23 PM, Stefan wrote:


Hi,

is there a way to get a sequence from another Postgres-Server?

I have some data servers an one admin server. When a new row is 
inserted

on one of the data servers, the server should ask automatically for a
sequence number from the admin server.

How can I do this? Maybe using pl/pgsql?


Check out DBI-link:

http://pgfoundry.org/projects/dbi-link

and as part of a tidbits column here:

http://www.pervasive-postgres.com/postgresql/tidbits.asp

Sean


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] temporary tables ?

2005-07-23 Thread Sean Davis




What is the influence on database growth in comparison to a permanent 
table with frequently inserted/deleted rows?


The tables are dropped automatically after the connection is closed.  The 
database doesn't grow because of temporary tables.  As for comparison to a 
frequently inserted/deleted table, that would depend on the time between 
vacuums.  The rows aren't removed from a table until a vacuum is 
performed.



On Jul 22, 2005, at 1:55 PM, Zlatko Matic wrote:


Hello.
I have some tables that are updated by several users in the same time 
and are used in queries for reports. Those tables have rows that are 
actually copied from original tables that are not to be altered. There 
is a procedure that inserts rows for every user when connects, along 
with his username, so different users can't interfere with each other 
because every user has his own copy of rows that he can update, and 
records are filtered by current_user.
Well, it's my heritage from MS Access, before I moved to Postgres, 
because there is no such thing as temporary table in Access...
Now, I'm wondering is there any true advantage to implement temporary 
tables for each user, instead of one table with inserted rows with 
username for every user ?


Temporary tables are not per-user, but per-connection.  A user can be 
connected twice, but a temporary table created on one connection is not 
visible from the other connection.  Also, temporary tables are 
temporary--they disappear after the connection is closed.








---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] temporary tables ?

2005-07-22 Thread Sean Davis


On Jul 22, 2005, at 1:55 PM, Zlatko Matic wrote:


Hello.
I have some tables that are updated by several users in the same time 
and are used in queries for reports. Those tables have rows that are 
actually copied from original tables that are not to be altered. There 
is a procedure that inserts rows for every user when connects, along 
with his username, so different users can't interfere with each other 
because every user has his own copy of rows that he can update, and 
records are filtered by current_user.
Well, it's my heritage from MS Access, before I moved to Postgres, 
because there is no such thing as temporary table in Access...
Now, I'm wondering is there any true advantage to implement temporary 
tables for each user, instead of one table with inserted rows with 
username for every user ?


Temporary tables are not per-user, but per-connection.  A user can be 
connected twice, but a temporary table created on one connection is not 
visible from the other connection.  Also, temporary tables are 
temporary--they disappear after the connection is closed.
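The per-connection scoping can be seen directly; Python's sqlite3 scopes temporary tables the same way (a sketch using a file-backed database so two connections share the same data, with an illustrative `scratch` table):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn1 = sqlite3.connect(path)
conn2 = sqlite3.connect(path)

# A temporary table is visible only on the connection that created it.
conn1.execute("CREATE TEMP TABLE scratch (x INTEGER)")
conn1.execute("INSERT INTO scratch VALUES (1)")
print(conn1.execute("SELECT x FROM scratch").fetchall())  # -> [(1,)]

try:
    conn2.execute("SELECT x FROM scratch")
except sqlite3.OperationalError as e:
    print("second connection:", e)   # no such table: scratch
```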



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Explain Analyse never returns .. maybe a bug

2005-06-29 Thread Sean Davis


- Original Message - 
From: David Gagnon [EMAIL PROTECTED]

Cc: pgsql-general@postgresql.org
Sent: Wednesday, June 29, 2005 9:25 AM
Subject: [GENERAL] Explain Analyse never returns .. maybe a bug



Hi all,

 I stop the following statement after 3 minutes

  explain analyse select *
   From IC
   where IC.ICNUM NOT IN (select IRICNUM from IR WHERE IRSTATUT in (1, 2))



I'm using 8.0.3 on windows.
Is that a know bug ?


David,

Remember that EXPLAIN ANALYZE actually RUNS the query.  It may actually be 
taking 3+ minutes to run the query.  How large are the tables?  What does 
EXPLAIN (without ANALYZE) say?  Have you run the query without EXPLAIN 
ANALYZE to know how long it normally takes--what is the basis for suggesting 
that 3 minutes is too long?


Sean
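The EXPLAIN-without-ANALYZE distinction can be seen in miniature with sqlite3, whose EXPLAIN QUERY PLAN likewise describes a plan without executing the statement (table names borrowed from the post; sqlite stands in for Postgres here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ic (icnum INTEGER)")
conn.execute("CREATE TABLE ir (iricnum INTEGER, irstatut INTEGER)")

# EXPLAIN QUERY PLAN describes the plan without running the query,
# so it returns instantly even when the real query would run for minutes --
# which is exactly why EXPLAIN ANALYZE (which does run it) can appear to hang.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM ic
    WHERE ic.icnum NOT IN (SELECT iricnum FROM ir WHERE irstatut IN (1, 2))
""").fetchall()
for row in plan:
    print(row)
```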



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] MacOSX, fink, missing readline/readline.h

2005-06-26 Thread Sean Davis


- Original Message - 
From: John DeSoi [EMAIL PROTECTED]

To: Teunis Peters [EMAIL PROTECTED]
Cc: PostgreSQL general pgsql-general@postgresql.org
Sent: Sunday, June 26, 2005 6:13 PM
Subject: Re: [GENERAL] MacOSX, fink, missing readline/readline.h




On Jun 26, 2005, at 4:29 PM, Teunis Peters wrote:


This has to do with building postgresql-8.0.3

I'm installing on a MacOSX system (10.3) and configure's failing to  find 
the readline installation.



What does your configure command look like? I have not tried it in a 
while, but this used to work for me:


./configure --with-includes=/sw/include --with-libraries=/sw/lib


This assumes that readline was installed via fink--a likely assumption.  If 
you haven't done that, it is pretty easy to do.


Sean 






Re: [GENERAL] Debugging PL/pgSQL

2005-06-21 Thread Sean Davis

You might want to look at pgEdit.

Sean

- Original Message - 
From: Craig Bryden [EMAIL PROTECTED]

To: pgsql pgsql-general@postgresql.org
Sent: Tuesday, June 21, 2005 3:01 PM
Subject: [GENERAL] Debugging PL/pgSQL 




Hi

Does anyone know of a free SQL Editor that allows you to debug PL/pgSQL
functions?

Thanks
Craig




Re: [GENERAL] Debugging PL/pgSQL

2005-06-21 Thread Sean Davis



Ah, yes, I should have qualified my 
statement a bit.  Sorry if I was misleading.

Sean


  - Original Message - 
  From: Bob
  To: Sean Davis
  Cc: Craig Bryden ; pgsql
  Sent: Tuesday, June 21, 2005 4:57 PM
  Subject: Re: [GENERAL] Debugging PL/pgSQL

  Keep in mind there is no built-in API to debug PL/pgSQL like
  there is for PL/SQL. You will have to use the old tried-and-true output
  statements to debug your stored procs.

  On 6/21/05, Sean Davis [EMAIL PROTECTED] wrote:
  You might want to look at pgEdit.

  Sean


Re: [GENERAL] subqueries

2005-06-19 Thread Sean Davis


- Original Message - 
From: Tomasz Grobelny [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Sunday, June 19, 2005 6:12 PM
Subject: [GENERAL] subqueries



I have such a statement:
select * from (subquery1) as foo, (subquery2) as bar;
Both subqueries are reasonably fast (0.5s) and generate results that have
several (10) rows, but the whole query takes forever to execute. Moreover,
if I simplify those subqueries (limiting functionality) the whole select
clause works just fine. It seems that those subqueries are not executed
separately.

What am I doing wrong?


This is calling out for some EXPLAIN output (perhaps from the two subqueries 
individually and then the full query).


Sean





Re: [GENERAL] Making the DB secure

2005-06-17 Thread Sean Davis


On Jun 17, 2005, at 8:49 AM, Egyd Csaba wrote:


Hi,
We plan to make our database available from the internet (direct tcp/ip
based connections). We want to make it as secure as possible. There are a
few users who should be able to access the database, but we want to block
all other users from accessing it.

Our plans are:
  - using encrypted (ssl) connections - since sensitive (medical) personal
information is stored.
    (How do we set this up? What do we need on the server side, and what
on the client side?)
  - using pg_hba.conf to configure authentication method and IP filters
  - forcing our users to change their passwords frequently
  - applying a strong password policy (long passwords, containing
upper/lowercase characters and numbers)

Could anybody suggest any more valuable features in postgres to improve
the security?
Regarding SSL, I'd like to know how to use it correctly. What do we have
to do on the server to accept ssl connections, and what kind of client
software is required?

Many thanks,

-- Csaba Egyd


It sounds like you might want to think about hiring a consultant to 
help out here--what do others think?  With medical information, this is 
not something you want to get wrong.


Sean




Re: [GENERAL] Multiple COPYs

2005-06-17 Thread Sean Davis


On Jun 16, 2005, at 12:32 PM, [EMAIL PROTECTED] wrote:


Hello,

Having a great time with PG - ported an ERP from Oracle and DB2. First
I tried MySQL but it choked somewhere in the 900-table region. I have a
Python script to translate the syntax, and it loads about 2000 tables.

Now I want to COPY my dumps - I have one data dump for each table. Any
tips on how to read the file name into a variable and pass it as the
file name in the COPY command, so that one script can load all my tables?


Why not use Python or a simple shell script to generate a file like:

\copy table1 from 'table1.txt'
\copy table2 from 'table2.txt'


And then do psql -f filename
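A quick sketch of such a generator in Python (assuming, purely for illustration, that each dump file is named tablename.txt; adjust the pattern to your own naming scheme):

```python
import glob
import os


def make_load_script(dump_dir, out_file="load_all.sql"):
    """Write a psql script with one \\copy line per table dump.

    Assumes each dump is named <tablename>.txt (an assumption for
    illustration only).  Returns the generated lines.
    """
    lines = []
    for path in sorted(glob.glob(os.path.join(dump_dir, "*.txt"))):
        # Derive the table name from the file name, e.g. table1.txt -> table1
        table = os.path.splitext(os.path.basename(path))[0]
        lines.append("\\copy %s from '%s'" % (table, path))
    with open(out_file, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines
```

Run psql -f load_all.sql against the generated file to load everything in one pass.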

Sean




Re: [GENERAL] Executing SQL Script

2005-06-16 Thread Sean Davis


On Jun 15, 2005, at 9:47 PM, Jamie Deppeler wrote:

This is a simple question: what is the best way to execute an SQL 
script?  The script will create schemas and tables for a database I 
created.




Nearly all clients or interfaces provide a means to execute arbitrary 
SQL.  You can do so in PgAdminIII, for example.  For an out-of-the-box 
solution, you can use psql -f.  See documentation here:


http://www.postgresql.org/docs/8.0/static/app-psql.html

Sean




Re: [GENERAL] Dynamic SQL

2005-06-16 Thread Sean Davis


On Jun 16, 2005, at 6:21 AM, Craig Bryden wrote:


Hi

I am trying to get information on how to do dynamic SQL in PostgreSQL
(preferably PL/pgSQL).
Please can someone give an example of how this is done or direct me to
the relevant documentation.


There is searchable documentation here (bookmark it--it is quite good):


http://www.postgresql.org/docs/8.0/static/index.html

And the relevant section is here:

http://www.postgresql.org/docs/8.0/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN



Sean




Re: [GENERAL] [HACKERS] mirroring oracle database in pgsql

2005-06-13 Thread Sean Davis

There is DBI-link, but this probably isn't an enterprise solution:

http://www.pervasive-postgres.com/postgresql/tidbits.asp

Sean

On Jun 13, 2005, at 2:31 PM, Jonah H. Harris wrote:

The contrib/dblink module only works for creating a database link to 
another PostgreSQL database.  I'm working on a dblink_ora which allows 
you to connect to an 8i, 9i, or 10g system the same way.  dblink_ora 
is based on dblink, not dblink_tds (for SQL Server) so it has more 
features.  Also, I'm using the Oracle Instant Client libraries/SDK, so 
you don't need to do the whole Oracle Client install to use 
dblink_ora.


I'm currently doing some alpha testing on it but if you would like to 
use it in beta, let me know.  Also, if anyone has *a lot* of 
experience with OCI, I'd like to talk about a couple things.


-Jonah


Christopher Kings-Lynne wrote:

Check out EnterpriseDB: www.enterprisedb.com
Chris
Edward Peschko wrote:

hey all,


I'm trying to convince some people here to adopt either mysql or 
postgresql
as a relational database here.. However, we can't start from a clean 
slate; we have a very mature oracle database that applications point 
to right now, and so we need a migration path. I went to the mysql 
folks, and it looks
like its going to be quite a while before mysql is up to the task, 
so I thought I'd try pgsql.

Anyways, I was thinking of taking the following steps:


a) finding a Java API that transparently supports both 
postgresql and

Oracle data access and stored procedure calls.

b) instrumenting the Oracle database so that all tables support
timestamps on data rows.

c) mirroring the Oracle database in MySQL.

d) making interface code connecting the MySQL database to the
   Oracle database (and both applying updates to the database
   as well as data.

In other words, I'm looking to make a postgresql - Oracle mirroring 
tool, and syncing the databases on a nightly basis, and I was

wondering if anybody had experience with this sort of thing.

As I see it, if we pull this off we could save quite a bit in 
licensing costs - we'd still have oracle around, but it would only 
be a datastore for talking to other oracle databases, and run by 
batch, not accessed by end users.


However:

a) I'm not sure how well stored procs, views, triggers and
   indexes transfer over from oracle to postgresql.

b) I'm not sure how scalable postgresql is, and how well
   it handles multiprocessor support (we'd be using a
   six-processor box).



As an aside, how much experience do people on the list have with
enterprise db? I was thinking that they might alleviate the 
mirroring headaches quite a bit, but they don't seem to have a 
solaris port.. Anybody have a take on their db?



Ed

(
 ps - if you subscribe to the mysql list, no you're not seeing 
double.

  I posted a very similar message on the mysql lists a couple
  of days ago.. )



--
Jonah H. Harris, UNIX Administrator  | phone: 505.224.4814
Albuquerque TVI  | fax:   505.224.3014
525 Buena Vista SE   | [EMAIL PROTECTED]
Albuquerque, New Mexico 87106| http://w3.tvi.edu/~jharris/

A hacker on a roll may be able to produce, in a period of a few
months, something that a small development group (say, 7-8 people)
would have a hard time getting together over a year.  IBM used to
report that certain programmers might be as much as 100 times as
productive as other workers, or more.

-- Peter Seebach








Re: [GENERAL] [HACKERS] mirroring oracle database in pgsql

2005-06-13 Thread Sean Davis


On Jun 13, 2005, at 6:48 PM, Jonah H. Harris wrote:

I wouldn't say it's enterprise-grade, but one could probably make it 
work.




I totally agree--I use it relatively often.  This single piece of 
software opened my eyes to the extent to which the procedural 
languages can be leveraged.


Sean


Sean Davis wrote:
There is DBI-link, but this probably isn't an enterprise 
solution

http://www.pervasive-postgres.com/postgresql/tidbits.asp
Sean





Re: [GENERAL] return next and pl/perl

2005-06-13 Thread Sean Davis

Eric,

You might check out this thread:

http://archives.postgresql.org/pgsql-general/2005-06/msg00527.php

Someone can almost certainly point to a better thread, but it is a 
start.


Sean

On Jun 13, 2005, at 12:41 PM, Eric E wrote:


Hi David,
	I ended up doing what you suggested, and it seems to work OK right now. 
 I'll definitely look for return_next in 8.1.  On a related subject, 
is there any interest in eliminating the requirement that types be 
specified at runtime, either by the function returning the type, or in 
the select statement?  It would be helpful to me, but it's not a 
deal-killer.


Anyway, thanks a lot for your help, it's much appreciated.

Eric

On Fri, Jun 10, 2005 at 07:24:49PM -0400, Eric E wrote:

Hi all,
I'm working on implementing a function in PL/PERL that will read
many rows.  As such I'd like to use return_next to keep memory usage
down.  When I call return_next, I get the following error message:
ERROR:  error from Perl function: Can't call method return_next on
unblessed reference at (eval 18) line 25.
likewise if I use spi_return_next: ERROR:  error from Perl function:
Can't call method spi_return_next on unblessed reference at (eval
16) line 25.
After some Google and list searching, I inferred that
spi_return_next may not be implemented?  Is this correct?  If so, is
there a workaround?


For 8.0, not exactly, although you can batch the entire rowset into an
array ref of hash references, then return that.

In the forthcoming 8.1 (CVS TIP), there is a return_next() method.

Cheers,
D
--
David Fetter david ( at ) fetter ( dot ) org http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!











[GENERAL] CVS stability

2005-06-12 Thread Sean Davis
I've been noticing some very useful things that are probably already in 
CVS (return_next in pl/perl, IN/OUT parameters, and probably others).  
I don't have a mission-critical production environment and was 
wondering how stable a typical 8.1 CVS checkout is.  I'm not talking 
about feature freeze, but more about uptime and compilation.


Thanks,
Sean




Re: [GENERAL] [SQL] Permission denied for language pltclu

2005-06-10 Thread Sean Davis


On Jun 10, 2005, at 5:51 AM, Dinesh Pandey wrote:



Hi Richard/Michael,
 
Thanks for your great help.
 
I found the problem.
 
I was not seeing the cause of this problem earlier because it was 
working properly at our end.
 
This problem occurs when the function is created by a user who did not 
create the current database.
 
Solution: the database must be created by the user who is creating the 
pltcl function?  Right?


Dinesh,

The user creating the function must be a superuser.  The question to 
ask is: How do I make a user a superuser?  The answer is in the 
documentation at these two links.  It suffices to ALTER USER to have 
CREATEUSER privileges to be a superuser.


http://www.postgresql.org/docs/8.0/static/user-attributes.html
http://www.postgresql.org/docs/8.0/static/sql-alteruser.html

I hope this clarifies things a bit.
Sean



Re: [GENERAL] postgres and xml

2005-06-10 Thread Sean Davis


On Jun 10, 2005, at 12:02 PM, Bruce Momjian wrote:


Hrishikesh Deshmukh wrote:

Hi All,

Is there a way that makes it possible to export sql query results as 
a xml file?

I use postgres 7.4.7 on a debian system. Do i need any packages to
export query results as a xml file if its possible?


No, we have discussed it, but it seems there is no standard XML export
format for databases, so you have to use something to convert the COPY
output into XML.  Perhaps there is a CSV-to-XML conversion tool you can
use.


While there isn't a standard format for XML from databases, it is possible 
to dump data from the database into some form of XML.  You could then use 
XSLT to transform it into other forms as needed.
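For instance, a tiny CSV-to-XML converter along these lines can be sketched in Python (the <table>/<row> element names here are invented for illustration, not part of any standard):

```python
import csv
import io
from xml.sax.saxutils import escape


def csv_to_xml(csv_text, root="table", row_tag="row"):
    """Convert CSV text with a header row into a simple XML string.

    Column names become element names (an illustrative choice); cell
    values are escaped so &, <, and > are XML-safe.  The result can be
    transformed further with XSLT.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    parts = ["<%s>" % root]
    for row in reader:
        fields = "".join(
            "<%s>%s</%s>" % (k, escape(v), k) for k, v in row.items()
        )
        parts.append("  <%s>%s</%s>" % (row_tag, fields, row_tag))
    parts.append("</%s>" % root)
    return "\n".join(parts)
```

Feeding it the output of COPY ... CSV (with a header row) gives one <row> element per table row.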


Here is a link of interest, but there are numerous others like it.

http://www.samspublishing.com/articles/article.asp?p=30081&rl=1

Sean




Re: [GENERAL] return next and pl/perl

2005-06-10 Thread Sean Davis


- Original Message - 
From: Eric E [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Friday, June 10, 2005 7:24 PM
Subject: [GENERAL] return next and pl/perl



Hi all,
   I'm working on implementing a function in PL/PERL that will read 
many rows.  As such I'd like to use return_next to keep memory usage 
down.  When I call return_next, I get the following error message:


ERROR:  error from Perl function: Can't call method return_next on 
unblessed reference at (eval 18) line 25.


likewise if I use spi_return_next:
ERROR:  error from Perl function: Can't call method spi_return_next on 
unblessed reference at (eval 16) line 25.


After some Google and list searching, I inferred that spi_return_next 
may not be implemented?  Is this correct?  If so, is there a workaround?


There is no workaround that I know of.  See here:

http://www.postgresql.org/docs/8.0/static/plperl-missing.html

It's a known limitation.  


Sean





Re: [GENERAL] postgres and ggobi/xgvis

2005-06-09 Thread Sean Davis


On Jun 8, 2005, at 7:39 PM, Hrishikesh Deshmukh wrote:


Hi All,

How easy or difficult is it to get ggobi/xgvis working with 
postgresql?!

Is it possible to write a query and send the output straight to
ggobi/xgvis without much work?
Any pointers.



I would think that you would need to construct the input file for ggobi 
from your query output.  There are many ways to do this, I would 
suppose, but if I were to do it, I would make a perl script that 
accepts some parameters, does the SQL query, and outputs the results 
with appropriate formatting to a file.  Then call ggobi with that file 
as input.  It's pretty simple, but you still have to determine the 
ggobi input format yourself and generate it on your own.
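The formatting step can be sketched in Python as follows (the actual ggobi input format must be checked against its documentation; in practice the rows would come from your DBI/psycopg query rather than being passed in as a list):

```python
def write_delimited(rows, header, path, sep="\t"):
    """Write query results (a list of tuples) to a delimited text file.

    This illustrates only the formatting step: a header line followed
    by one separator-joined line per row.  Whether ggobi accepts this
    exact layout is an assumption to verify against its docs.
    """
    with open(path, "w") as f:
        f.write(sep.join(header) + "\n")
        for row in rows:
            f.write(sep.join(str(v) for v in row) + "\n")
```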


As an alternative, you could use a system like R 
(http://www.r-project.org) that has plugins that allow postgres access 
and access to ggobi.


Sean




Re: [GENERAL] Copying data from int column to array column

2005-06-08 Thread Sean Davis


On Jun 8, 2005, at 8:21 AM, Adam Witney wrote:



Hi,

I am trying to copy the data from an integer column into an array 
column in

the same table. Something like this

CREATE TABLE test (field1 INT, field2 INT, field3 INT[]);

INSERT INTO test VALUES(1);
INSERT INTO test VALUES(2);
INSERT INTO test VALUES(3);
INSERT INTO test VALUES(4);
INSERT INTO test VALUES(5);

UPDATE test SET field2 = field1;
UPDATE test SET field3[1] = field1;

Why does the UPDATE of field2 work, but the UPDATE of field3 does not?


Adam,

I'm not sure what you were expecting, but I tried things here and they 
seemed to do what I expected:


Sean


CREATE TABLE test (field1 INT, field2 INT, field3 INT[]);

INSERT INTO test (field1) VALUES(1);
INSERT INTO test (field1) VALUES(2);
INSERT INTO test (field1) VALUES(3);
INSERT INTO test (field1) VALUES(4);
INSERT INTO test (field1) VALUES(5);
SELECT * FROM test;
UPDATE test SET field2 = field1;
SELECT * FROM test;
UPDATE test set field3[1] = field2;
SELECT * FROM test;
UPDATE test SET field3 = array((select field1 from test));
SELECT * FROM test;


-  OUTPUT 

CREATE TABLE test (field1 INT, field2 INT, field3 INT[]);
CREATE TABLE
INSERT INTO test (field1) VALUES(1);
INSERT 147690348 1
INSERT INTO test (field1) VALUES(2);
INSERT 147690350 1
INSERT INTO test (field1) VALUES(3);
INSERT 147690352 1
INSERT INTO test (field1) VALUES(4);
INSERT 147690353 1
INSERT INTO test (field1) VALUES(5);
INSERT 147690355 1
SELECT * FROM test;
 field1 | field2 | field3
--------+--------+--------
      1 |        |
      2 |        |
      3 |        |
      4 |        |
      5 |        |
(5 rows)

UPDATE test SET field2 = field1;
UPDATE 5
SELECT * FROM test;
 field1 | field2 | field3
--------+--------+--------
      1 |      1 |
      2 |      2 |
      3 |      3 |
      4 |      4 |
      5 |      5 |
(5 rows)

UPDATE test set field3[1] = field2;
UPDATE 5
SELECT * FROM test;
 field1 | field2 | field3
--------+--------+--------
      1 |      1 | {1}
      2 |      2 | {2}
      3 |      3 | {3}
      4 |      4 | {4}
      5 |      5 | {5}
(5 rows)

UPDATE test SET field3 = array((select field1 from test));
UPDATE 5
SELECT * FROM test;
 field1 | field2 |   field3
--------+--------+-------------
      1 |      1 | {1,2,3,4,5}
      2 |      2 | {1,2,3,4,5}
      3 |      3 | {1,2,3,4,5}
      4 |      4 | {1,2,3,4,5}
      5 |      5 | {1,2,3,4,5}
(5 rows)




Re: [GENERAL] Large Object = invalid input syntax for integer:

2005-06-06 Thread Sean Davis


- Original Message - 
From: Adam Witney [EMAIL PROTECTED]
To: grupos [EMAIL PROTECTED]; pgsql-general 
pgsql-general@postgresql.org

Sent: Monday, June 06, 2005 8:17 AM
Subject: Re: [GENERAL] Large Object = invalid input syntax for integer: 



On 6/6/05 4:58 am, grupos [EMAIL PROTECTED] wrote:


I need to use large objects BUT I am having problems... I installed
the PostgreSQL 8.0.3 Windows version with the lo module.

First, I created the table below:
CREATE TABLE test
(
description varchar(20),
picture lo
)
WITHOUT OIDS;

After trying to make one insert without a value for the lo I get the
error below:

INSERT INTO test VALUES (1);

ERROR:  invalid input syntax for integer: 

Which value can I put in the default of the lo to solve this?  I already
tried -1, 0, 1, null but nothing works...

Why this problem?


I think the problem is nothing to do with lo, you are trying to insert an
integer into a varchar field, try

INSERT INTO test VALUES ('1');



And do you need:

INSERT INTO test (description) VALUES ('1');

Sean 






Re: [GENERAL] Test for array slice?

2005-06-03 Thread Sean Davis


On Jun 3, 2005, at 12:32 PM, Joe Conway wrote:


Peter Fein wrote:

I want to do something like this (pardon my pseudocode):
A=ARRAY[4, 5, 6, 7, 8]
B=ARRAY[5, 6]
is_sliceof(A, B), i.e., there exists a slice of A that equals B.  My
best thought ATM is to convert both to strings and use pattern matching
- any better ideas?


I can't think of a really good way to do that directly in Postgres, 
but I'd bet (still not sure though) there is a way in R.

  http://www.r-project.org/index.html
If so, you could use PL/R:
  http://www.joeconway.com/plr/


This is probably also easy in perl and python as well.
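For instance, a minimal Python version (illustrative only; it checks whether B occurs as a contiguous slice of A):

```python
def is_sliceof(a, b):
    """Return True if list b appears as a contiguous slice of list a."""
    n, m = len(a), len(b)
    if m == 0:
        # The empty list is trivially a slice of anything.
        return True
    # Compare b against every length-m window of a.
    return any(a[i:i + m] == b for i in range(n - m + 1))
```

The same windowing idea carries over directly to PL/Perl or PL/Python inside the database.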

Sean




Re: [GENERAL] writting a large store procedure

2005-06-02 Thread Sean Davis
I have to recommend pgEdit (nearly free, with a nearly fully-functional 
trial version).


Sean

On Jun 2, 2005, at 3:57 AM, Rodríguez Rodríguez, Pere wrote:


Hello,

I'm writing a large stored procedure (more than 700 lines) and I am 
having trouble debugging it. How can I debug it easily?

I use pgAdmin; is there another (free software) editor that makes 
writing large stored procedures easier?


Thanks in advance.

pere




