Re: [GENERAL] Recovering data via raw table and field separators

2007-12-06 Thread John Wells
On 12/5/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Not that I know of.
>
> I think the simplest thing to get your tuples back is:
>
> 1. mark the transaction that deleted them as aborted in pg_clog
> 2. reset the hint bits in the deleted tuples, or hack your postgres copy
>    to ignore hint bits
>
> You can figure out which transaction deleted the tuples by seeing
> what their Xmax value is with pg_filedump.
>
> The hint bits part makes it rather messy :-(

A bit beyond me I'm afraid, at least at my current level with
postgresql. Does anyone offer a commercial tool to do this? Or, would
anyone be interested in doing it for a fee?
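
For anyone retracing this later, step one is identifying the deleting
transaction. A hedged sketch, reusing the relfilenode 17741 from the
pg_filedump output quoted below in this thread (the database OID under
base/ is whatever your own cluster uses):

    pg_filedump -i -f $PGDATA/base/<db_oid>/17741 | less

In the pg_filedump versions I have seen, -i prints the XMIN and XMAX of
each item; the XMAX shared by all the deleted tuples is the transaction
to mark aborted in pg_clog.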

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-06 Thread John Wells
On 12/6/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 06, 2007 at 02:35:42PM -0500, John Wells wrote:
> > A bit beyond me I'm afraid, at least at my current level with
> > postgresql. Does anyone offer a commercial tool to do this? Or, would
> > anyone be interested in doing it for a fee?
>
> There was a tool pgfsck which could dump table data, but it's not been
> updated in quite a while so I don't know if it'll work for your
> version...

I'm on 8.1. I'll check and see if I can make it work.

Thanks!
John

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Determining current block size?

2007-12-06 Thread John Wells
I see that BLOCK_SIZE can be set at compile time, but is there a way
to determine what block size is in use in a running system? I've been
searching but have been unsuccessful so far.

Thanks!
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Determining current block size?

2007-12-06 Thread John Wells
On 12/6/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> John Wells <[EMAIL PROTECTED]> writes:
>
> > I see that BLOCK_SIZE can be set at compile time, but is there a way
> > to determine what block size is in use in a running system? I've been
> > searching but have been unsuccessful so far.
>
> postgres=# show block_size;
>  block_size
> ------------
>  8192
> (1 row)

Well *that* was unexpectedly easy.

Thanks!
John
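
A side note for scripts that want the value inside a query rather than via
SHOW: the same setting is readable with current_setting (7.3 and later),
returned as text:

postgres=# select current_setting('block_size');
 current_setting
-----------------
 8192
(1 row)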

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-05 Thread John Wells
On 12/4/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 04, 2007 at 03:05:53PM -0500, John Wells wrote:
> > So, given a database table file that still has records in it, and
> > given the fact that these records could be parsed and displayed if the
> > proper utility knew how to read the various data structures used to
> > denote field and record length, is there no utility to do this? It
> > seems that it would be fairly straightforward to somehow read the
> > records, yet pay no mind to the deleted flag (or whatever mechanism
> > postgresql uses to mark them as deleted).
>
> Of course, see the pg_filedump mentioned at the beginning of this
> thread.

Thanks Martijn,

I have pg_filedump installed, but can't figure out how to dump the
rows themselves. I get the equivalent of the output at the end of this
post. Looking over the --help, there's nothing obvious that has gotten
me further.

Is there a trick I'm missing?

Thanks!
John

*******************************************************************
* PostgreSQL File/Block Formatted Dump Utility - Version 8.1.1
*
* File: 17741
* Options used: -i -f
*
* Dump created on: Wed Dec  5 11:21:07 2007
*******************************************************************

Block    0 ********************************************************
<Header> -----
 Block Offset: 0x00000000         Offsets: Lower     196 (0x00c4)
 Block: Size 8192  Version    3            Upper    8192 (0x2000)
 LSN:  logid      0 recoff 0x0181e758      Special  8192 (0x2000)
 Items:   44                   Free Space: 7996
 Length (including item array): 200

  0000:  58e78101 0100 c420  X..
  0010: 00200320 441f 781e b81d  . . D...x...
  0020: f41c 301c 641b 981a  0...d...
  0030: c419 f418 2418 5417  $...T...
  0040: 8016 ac15 e014 1014  
  0050: 4013 7412 a011 d010  @...t...
  0060: 0410 380f 680e 980d  8...h...
  0070: c40c f80b 280b 540a  (...T...
  0080: 8809 b408 0008 4807  H...
  0090: 9006 d805 2005 6804   ...h...
  00a0: b403 fc02 4802 9001  H...
  00b0: d400 4803 9402 e001  H...
  00c0: 3001 0...

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
Guys,

We had a bit of an unfortunate communication breakdown here at work,
which led to a particular database not being backed up. Before we
recognized this problem, an entire database table was deleted.

I immediately copied the pgdata directory and have been able to find
the file that represents the data that was blown away. Fortunately, I
only really need to restore two pieces of data from the table...the id
field (primary key) and the employee_number field. Once I have this
data, I can re-populate and all my relationships will be honored.

I've been looking through the records with a hex editor, but the
unfortunate thing is that I either don't see consistency with field
separators or I'm overlooking them.

The table definition looks like this:

                                   Table "public.users"
          Column           |            Type             |                     Modifiers
---------------------------+-----------------------------+----------------------------------------------------
 id                        | integer                     | not null default nextval('users_id_seq'::regclass)
 login                     | character varying(255)      |
 email                     | character varying(255)      |
 crypted_password          | character varying(40)       |
 salt                      | character varying(40)       |
 created_at                | timestamp without time zone |
 updated_at                | timestamp without time zone |
 remember_token            | character varying(255)      |
 remember_token_expires_at | timestamp without time zone |
 manager_id                | integer                     |
 employee_number           | integer                     |
 classification_id         | integer                     |
 name                      | character varying(255)      |
Indexes:
    "users_pkey" PRIMARY KEY, btree (id)

Can anyone guide me in how I might parse this out? There has to be a
way...I think I just need a helpful push ;)
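
For reference, a rough sketch of the kind of scanner being asked about.
It is assumption-laden: it presumes 8.1's on-disk layout on little-endian
x86 (a 20-byte page header, 4-byte line pointers packed as
off:15/flags:2/len:15, and t_xmin/t_cmin/t_xmax as the first three 4-byte
fields of the tuple header), so cross-check every offset against
pg_filedump output before trusting it:

BLOCK_SIZE = 8192

File.open(ARGV[0], 'rb') do |f|
  blkno = 0
  while block = f.read(BLOCK_SIZE)
    # pd_lower: uint16 at byte 10, little-endian; marks the end of the
    # line pointer array
    pd_lower = block[10, 2].unpack('v').first
    n_items = (pd_lower - 20) / 4          # line pointers start at byte 20
    n_items.times do |i|
      lp = block[20 + i * 4, 4].unpack('V').first
      lp_off = lp & 0x7fff                 # low 15 bits: offset of the tuple
      lp_len = (lp >> 17) & 0x7fff         # high 15 bits: length of the tuple
      next if lp_off.zero?                 # unused line pointer
      xmin, _cmin, xmax = block[lp_off, 12].unpack('VVV')
      puts "block #{blkno} item #{i + 1}: off=#{lp_off} len=#{lp_len} " +
           "xmin=#{xmin} xmax=#{xmax}"
    end
    blkno += 1
  end
end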

Thanks!
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> There are no field separators.  Perhaps you could extract some useful
> info with pg_filedump, which you can grab at
> http://sources.redhat.com/rhdb

So is it simply field width? Can one count the number of bytes based
on native datatype length and determine field start/end?

Thanks!
John

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Tue, Dec 04, 2007 at 03:38:16PM -0300, Alvaro Herrera wrote:
> > > So is it simply field width? Can one count the number of bytes based
> > > on native datatype length and determine field start/end?
> >
> > Yes.  For variable length types, there is a 4-byte length word at the
> > start of the field (unless you are using 8.3 which introduces more
> > compact representations in some cases).
>
> And NULLs are skipped entirely. They are represented in the null-bitmap
> at the beginning of the tuple.
>
> What sometimes works is creating a new table with the exact same
> structure, shutting down the postmaster and copying the old table over
> the new one. If it's the same cluster and the clog/xlog are still there
> it might work.
>
> Have a nice day,

Martijn,

Wow...interesting idea...but to clarify, I copied the table file
*after* the delete was run on the table. Although the data appears to
still be there, wouldn't they be marked as deleted in some way and not
appear in the new table even if the copy worked?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-04 Thread John Wells
On 12/4/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> Ah sorry, I thought you meant the table was dropped or the database was
> deleted. If you actually ran a DELETE FROM on the table, then yes,
> they'll all be marked deleted.


So, given a database table file that still has records in it, and
given the fact that these records could be parsed and displayed if the
proper utility knew how to read the various data structures used to
denote field and record length, is there no utility to do this? It
seems that it would be fairly straightforward to somehow read the
records, yet pay no mind to the deleted flag (or whatever mechanism
postgresql uses to mark them as deleted).

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-11 Thread John Wells
The version sent yesterday was creating a LOT of unnecessary objects when
running queries with large result sets. I tweaked the postgres-pr classes (via
Ruby's very nice open class mechanism). The updated version is below...it took
queries running 10+ minutes down to around 20 seconds.

Thanks,
John

-
require 'rubygems'
require 'postgres-pr/connection'
require 'postgres-pr/message'


$tupleQuery = "SELECT relname, seq_scan,
                      seq_tup_read, idx_scan,
                      idx_tup_fetch
               FROM pg_stat_all_tables"


$blockQuery = "SELECT relname, heap_blks_read,
                      heap_blks_hit, idx_blks_read,
                      idx_blks_hit
               FROM pg_statio_all_tables"

$use_jdbc = false

def usage
  app = File.basename $0
  usstr = <<-EOL
#{app} "query" [tablename]
  or
#{app} /path/to/file/containing/query.sql [tablename]

Example: #{app} "select * from users" users

Note: the following environment variables MUST be set:
PG_TIMER_DB   - name of the database
PG_TIMER_USER - name of database user
PG_TIMER_PASS - password of database user
PG_TIMER_URI  - connection uri that postgres-pr understands.
  Examples:
tcp://localhost:5432
unix:/tmp/.s.PGSQL.5432
  EOL
  puts usstr
  exit
end

class PostgresPR::Connection::Result
  def get_field_at_row(field, row)  
idx = @fields.collect{|f|f.name}.index field
@rows[row][idx]
  end
end

class PostgresPR::Connection
  def query_no_results(sql)
    puts "Running query in background. Waiting..."
    @conn << PostgresPR::Query.dump(sql)
    loop do
      msg = PostgresPR::Message.read_without_buffer(@conn)
      case msg
      when PostgresPR::ReadyForQuery
        break
      end
    end
  end
end

class PostgresPR::Message
  def self.read_without_buffer(stream, startup=false)
    type = stream.readbytes(1).unpack('C').first unless startup
    # FIXME: length should be signed, not unsigned
    length = stream.readbytes(4).unpack('N').first
    if type == ?Z
      buffer = Buffer.of_size(startup ? length : 1+length)
      buffer.write_byte(type) unless startup
      buffer.write_int32_network(length)
      buffer.copy_from_stream(stream, length-4)
      (startup ? StartupMessage : MsgTypeMap[type]).create(buffer)
    else
      stream.read(length - 4)
      return nil
    end
  end
end

class PureDBUtil
  def initialize()
    @conn = PostgresPR::Connection.new($dbname,
                                       $dbuser,
                                       $dbpass,
                                       $dburi)
  end
  def start_tran
    @conn.query("BEGIN")
  end
  def commit(results=true)
    if results
      @conn.query("COMMIT")
    else
      @conn.query_no_results("COMMIT")
    end
  end
  def exec(query, results=true)
    if results
      @conn.query(query)
    else
      @conn.query_no_results(query)
    end
  end
end

class JDBCDBUtil
  def initialize(dbname=nil)
    raise Exception, "not implemented"
  end
end

def getTupleValues(tran, table_name=nil)
  if table_name.nil?
    return tran.exec($tupleQuery + " ORDER by relname")
  else
    return tran.exec($tupleQuery + " WHERE relname = '#{table_name}'")
  end
end

def getBlockValues(tran, table_name)
  if table_name.nil?
    return tran.exec($blockQuery + " ORDER by relname")
  else
    return tran.exec($blockQuery + " WHERE relname = '#{table_name}'")
  end
end

def getDelta(n, beginning, ending, col)
  endVal = 0
  begVal = 0 
  endVal = ending.get_field_at_row(col, n)
  begVal = beginning.get_field_at_row(col, n)
  return endVal.to_f - begVal.to_f;
end


def gcw(res, col)
  max = 0
  0.upto res.rows.size-1 do |n|
    fld_size = res.get_field_at_row(col, n).size
    if fld_size > max
      max = fld_size
    end
  end
  return max
end

def fill(len, c)
c * len
end


def printDeltas(begTuples, endTuples, begBlocks, endBlocks)
  nameWidth = gcw(begTuples, "relname")
  str = ""
  str << '+' << fill( nameWidth, '-' ) <<
    "+----------------------------------+----------------------------------+\n"
  str << '|' << fill( nameWidth, ' ' ) <<
    "|           SEQUENTIAL I/O         |            INDEXED I/O           |\n"
  str << '|' << fill( nameWidth, ' ' ) <<
    "| scans | tuples |heap_blks |cached| scans | tuples | idx_blks |cached|\n"
  str << '|' << fill( nameWidth, '-' ) <<
    "+-------+--------+----------+------+-------+--------+----------+------+\n"
  totSeqScans  = 0
  totSeqTuples = 0
  totHeapBlks  = 0
  totHeapHits  = 0
  totIdxScans  = 0
  totIdxTuples = 0
  totIdxBlks   = 0
  totIdxHits   = 0
  tableCount   = 0

  0.upto begTuples.rows.size-1 do |row|   

Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-10 Thread John Wells
Could someone explain to me how the connection is initialized below? I'm
re-writing the script in Ruby...but this is a stumbling block for me in the way
the C++ libs work. Does the library initialize conn automagically? How would one
emulate this in another language...would it be to run it as the postgres user
and connect to template1?

Thanks,
John

int main( int argc, char * argv[] )
{
   // how is this being initialized?
   connection conn;

   work tran1( conn, "getBegValues" );

   const result & begTupleValues = getTupleValues( tran1, argc > 2 ?
argv[2] : 0 );
   const result & begBlockValues = getBlockValues( tran1, argc > 2 ?
argv[2] : 0 );

   const result & ignore = tran1.exec( argv[1] );

   tran1.commit();

   sleep( 1 );

   work tran2( conn, "getEndValues" );

   const result & endTupleValues = getTupleValues( tran2, argc > 2 ?
argv[2] : 0 );
   const result & endBlockValues = getBlockValues( tran2, argc > 2 ?
argv[2] : 0 );

   printDeltas( begTupleValues, endTupleValues, begBlockValues,
endBlockValues );

}
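
A note for the archives, as I read the libpqxx documentation rather than
the book: a default-constructed pqxx::connection hands libpq an empty
connect string, so host, port, database and user fall back to the standard
PGHOST, PGPORT, PGDATABASE, PGUSER and PGPASSWORD environment variables --
the same role the PG_TIMER_* variables play in the Ruby port. A minimal
sketch (not the book's code):

#include <pqxx/pqxx>

int main()
{
    // No connect string: libpq resolves host/db/user from the environment.
    pqxx::connection conn;

    // An explicit string overrides the environment, e.g.
    //   pqxx::connection conn( "dbname=mydb user=wellsj" );

    pqxx::work tran( conn, "demo" );
    pqxx::result r = tran.exec( "SELECT 1" );
    tran.commit();
    return 0;
}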

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-10 Thread John Wells
My Ruby version is found below. Note it requires the postgres-pr ruby driver.
Also note I didn't really ruby-ize it too much...for the most part it's a
one-to-one translation.

One different thing to note...this script expects you to have postgres-pr
installed via rubygems. You can modify the require statements (actually just
comment out the rubygems line) if yours is installed in a non-rubygems way.

Also note, this requires you to define your connection parameters in env 
variables, which works in my situation.

PG_TIMER_DB   - name of the database
PG_TIMER_USER - name of database user
PG_TIMER_PASS - password of database user
PG_TIMER_URI  - connection uri that postgres-pr understands.
  Examples:
tcp://localhost:5432
unix:/tmp/.s.PGSQL.5432


Hope it helps someone else.

###
require 'rubygems'
require 'postgres-pr/connection'


$tupleQuery = "SELECT relname, seq_scan,
                      seq_tup_read, idx_scan,
                      idx_tup_fetch
               FROM pg_stat_all_tables"


$blockQuery = "SELECT relname, heap_blks_read,
                      heap_blks_hit, idx_blks_read,
                      idx_blks_hit
               FROM pg_statio_all_tables"

$use_jdbc = false

def usage
  usstr = <<-EOL
#{$0} "query"
Example: #{$0} "select * from users"

Note: the following environment variables MUST be set:
PG_TIMER_DB   - name of the database
PG_TIMER_USER - name of database user
PG_TIMER_PASS - password of database user
PG_TIMER_URI  - connection uri that postgres-pr understands.
  Examples:
tcp://localhost:5432
unix:/tmp/.s.PGSQL.5432
  EOL
  puts usstr
  exit()
end

$dbname = ENV['PG_TIMER_DB']
$dbuser = ENV['PG_TIMER_USER']
$dbpass = ENV['PG_TIMER_PASS']
$dburi  = ENV['PG_TIMER_URI']

[$dbname, $dbuser, $dbpass, $dburi].each do |one|
  one.nil? && usage()
end

class PostgresPR::Connection::Result
  def get_field_at_row(field, row)  
idx = @fields.collect{|f|f.name}.index field
@rows[row][idx]
  end
end


class PureDBUtil
  def initialize()
    @conn = PostgresPR::Connection.new($dbname,
                                       $dbuser,
                                       $dbpass,
                                       $dburi)
  end
  def start_tran
    @conn.query("BEGIN")
  end
  def commit
    @conn.query("COMMIT")
  end
  def exec(query)
    @conn.query(query)
  end
end

class JDBCDBUtil
  def initialize(dbname=nil)
    raise Exception, "not implemented"
  end
end

def getTupleValues(tran, table_name=nil)
  if !table_name.nil?
    return tran.exec($tupleQuery + " ORDER by relname")
  else
    return tran.exec($tupleQuery + " WHERE relname = '#{table_name}'")
  end
end

def getBlockValues(tran, table_name)
  if !table_name.nil?
    return tran.exec($blockQuery + " ORDER by relname")
  else
    return tran.exec($blockQuery + " WHERE relname = '#{table_name}'")
  end
end

def getDelta(n, beginning, ending, col)
  endVal = 0
  begVal = 0 
  endVal = ending.get_field_at_row(col, n)
  begVal = beginning.get_field_at_row(col, n)
  return endVal.to_f - begVal.to_f;
end


def getColumnWidth(res, col)
  max = 0
  res.rows.each do |one|
    if one[col].size > max
      max = one[col].size
    end
  end
  return max
end

def fill(len, c)
c * len
end


def printDeltas(begTuples, endTuples, begBlocks, endBlocks)
  nameWidth = 15
  str = ""
  str << '+' << fill( nameWidth, '-' ) <<
    "+----------------------------------+----------------------------------+\n"
  str << '|' << fill( nameWidth, ' ' ) <<
    "|           SEQUENTIAL I/O         |            INDEXED I/O           |\n"
  str << '|' << fill( nameWidth, ' ' ) <<
    "| scans | tuples |heap_blks |cached| scans | tuples | idx_blks |cached|\n"
  str << '|' << fill( nameWidth, '-' ) <<
    "+-------+--------+----------+------+-------+--------+----------+------+\n"
  totSeqScans  = 0
  totSeqTuples = 0
  totHeapBlks  = 0
  totHeapHits  = 0
  totIdxScans  = 0
  totIdxTuples = 0
  totIdxBlks   = 0
  totIdxHits   = 0
  tableCount   = 0

  0.upto begTuples.rows.size-1 do |row|
    seqScans  = getDelta(row, begTuples, endTuples, "seq_scan")
    seqTuples = getDelta(row, begTuples, endTuples, "seq_tup_read")
    heapBlks  = getDelta(row, begBlocks, endBlocks, "heap_blks_read")
    heapHits  = getDelta(row, begBlocks, endBlocks, "heap_blks_hit")
    idxScans  = getDelta(row, begTuples, endTuples, "idx_scan")
    idxTuples = getDelta(row, begTuples, endTuples, "idx_tup_fetch")
    idxBlks   = getDelta(row, begBlocks, endBlocks, "idx_blks_read")
    idxHits   = getDelta(row, begBlocks, endBlocks, "idx_blks_hit")

if(( seqScans + seqTuples + heapBlks + 
 heapHits + idxScans + idxTuples + 
 

Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-10 Thread John Wells
Sorry...the first version was a bit rash and left out some 
features...particularly filtering by table. Here's the patch to correct:

If anyone wants the corrected version, email me off list.

Thanks,
John


# diff -w pg_timer_old.rb pg_timer.rb
18a19
>    app = File.basename $0
20,21c21,25
< #{$0} "query"
< Example: #{$0} "select * from users"
---
> #{app} "query" [tablename]
>   or
> #{app} /path/to/file/containing/query.sql [tablename]
>
> Example: #{app} "select * from users" users
54a59
>
78c83
<  if !table_name.nil?
---
>    if table_name.nil?
86c91
<  if !table_name.nil?
---
>    if table_name.nil?
194c199,211
<    arg = args[0]
---
>    first = args[0]
>    query = nil
>    if !first.nil? and File.exists?(first)
>      File.open(first, "r") do |f|
>        query = f.read
>      end
>    else
>      query = first
>    end
>
>    table = args[1]
>
>    usage() if args.size < 1
196d212
<    usage() if arg.nil?
201a218
>
204,206c221,223
<    begTupleValues = getTupleValues(tran1, arg)
<    begBlockValues = getBlockValues(tran1, arg)
<    ignore = tran1.exec(args[0])
---
>    begTupleValues = getTupleValues(tran1, table)
>    begBlockValues = getBlockValues(tran1, table)
>    ignore = tran1.exec(query)
212,213c229,232
<    endTupleValues = getTupleValues(tran2, arg)
<    endBlockValues = getBlockValues(tran2, arg)
---
>    tran2.start_tran()
>    endTupleValues = getTupleValues(tran2, table)
>    endBlockValues = getBlockValues(tran2, table)
>    tran2.commit()


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-09 Thread John Wells
- Owen Hartnett <[EMAIL PROTECTED]> wrote:
> At 11:57 PM +0400 10/5/07, John Wells wrote:
> > - Felipe de Jesús Molina Bravo <[EMAIL PROTECTED]> wrote:
> > > Maybe \timing in psql can help you
> > >
> > > El vie, 05-10-2007 a las 21:32 +0400, John Wells escribió:
> > > > Guys,
> > > >
> > > > In the book PostgreSQL (2nd ed) the author mentions a timer script
> > > > he wrote to analyze various performance bits about PostgreSQL. I've
> > > > looked everywhere and can't find it. Does anyone know where I can find
> > > > a copy, or find an equivalent tool?
> >
> > I'm afraid it's not even close in functionality. Here are sample
> > results from the timer tool he mentions. This is the kind of data
> > I'm after:
>
> This was available online at www.developers-library.com, but now it
> looks like that has gone away.  Sometimes an author keeps a web site
> for code he uses to supplement a book, but I can't find one for him.
> Here's the code:

Thanks to all. Having problems building it myself. I plan to reimplement in 
ruby if possible. Will post results.

Thanks,
John

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] timer script from SAMS book or equivalent?

2007-10-05 Thread John Wells
Guys,

In the book PostgreSQL (2nd ed) the author mentions a timer script he wrote to 
analyze various performance bits about PostgreSQL. I've looked everywhere and 
can't find it. Does anyone know where I can find a copy, or find an equivalent 
tool?

Thanks!
John

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] timer script from SAMS book or equivalent?

2007-10-05 Thread John Wells

- Felipe de Jesús Molina Bravo <[EMAIL PROTECTED]> wrote:
> Maybe \timing in psql can help you
>
> El vie, 05-10-2007 a las 21:32 +0400, John Wells escribió:
> > Guys,
> >
> > In the book PostgreSQL (2nd ed) the author mentions a timer script
> > he wrote to analyze various performance bits about PostgreSQL. I've
> > looked everywhere and can't find it. Does anyone know where I can find
> > a copy, or find an equivalent tool?


I'm afraid it's not even close in functionality. Here are sample results from
the timer tool he mentions. This is the kind of data I'm after:

$ timer "SELECT * FROM recalls"

+-------------+--------------------------------+-------------------------------+
|             |         SEQUENTIAL I/O         |          INDEXED I/O          |
|             |scans |tuples |heap_blks |cached|scans |tuples |idx_blks |cached|
|-------------+------+-------+----------+------+------+-------+---------+------|
|pg_aggregate |     0|      0|         1|     0|     1|      1|        2|     0|
|pg_am        |     1|      1|         1|     0|     0|      0|        0|     0|
|pg_amop      |     0|      0|         2|    10|    10|     24|        4|    16|
|pg_amproc    |     0|      0|         1|     5|     6|      6|        2|    10|
|pg_attribute |     0|      0|         8|    14|    21|     65|        6|    57|
|pg_cast      |     0|      0|         2|     6|    60|      8|        2|   118|
|pg_class     |     4|    740|         5|    32|    18|     17|        7|    34|
|pg_database  |     1|      1|         1|     0|     0|      0|        0|     0|
|pg_index     |     2|    146|         3|    11|     8|     12|        4|    12|
|pg_namespace |     2|     10|         1|     2|     2|      1|        2|     2|
|pg_opclass   |     0|      0|         2|    11|     5|     73|        4|     6|
|pg_operator  |     0|      0|         4|     6|    10|     10|        4|    26|
|pg_proc      |     0|      0|         6|     8|    14|     14|       12|    31|
|pg_rewrite   |     0|      0|         1|     1|     2|      2|        2|     2|
|pg_shadow    |     0|      0|         1|     2|     3|      3|        4|     2|
|pg_statistic |     0|      0|         3|     5|    33|      8|        2|    64|
|pg_trigger   |     0|      0|         1|     1|     2|      2|        2|     2|
|pg_type      |     0|      0|         2|     5|     7|      7|        2|    12|
|recalls      |     1|  39241|      4413|     0|     0|      0|        0|     0|
+-------------+------+-------+----------+------+------+-------+---------+------+
|Totals       |    11|  40139|      4458|   119|   202|    253|       61|   394|
+-------------+------+-------+----------+------+------+-------+---------+------+


Thanks,
John

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Drop database/create database broken in 7.4?

2006-01-05 Thread John Wells
Guys,

I'm running PG 7.4 on Ubuntu Breezy for Ruby on Rails development. One of
the things I often have to do is rebuild the database...for which I have a
script that I've created to do it in one step.

I've noticed some oddities. The script connects to template1, drops the
target database, creates the target database, and then rebuilds the tables.
However, two specific tables and sequences are not dropped, although the
drop command completes successfully.

For example, here's the error reproduced manually:

dailystatus=> \d
  List of relations
 Schema |  Name   |   Type   | Owner
+-+--+
 public | garages | table| wellsj
 public | garages_id_seq  | sequence | wellsj
 public | trucks  | table| wellsj
 public | trucks_id_seq   | sequence | wellsj
 public | sites   | table| wellsj
 public | sites_id_seq| sequence | wellsj
 public | statuses| table| wellsj
 public | statuses_id_seq | sequence | wellsj
 public | users   | table| wellsj
 public | users_id_seq| sequence | wellsj
(10 rows)

dailystatus=> \c template1
You are now connected to database "template1".
template1=> drop database dailystatus;
DROP DATABASE
template1=> create database dailystatus;
CREATE DATABASE
template1=> \c dailystatus
You are now connected to database "dailystatus".
dailystatus=> \d
 List of relations
 Schema | Name |   Type   | Owner
+--+--+
 public | sites| table| wellsj
 public | sites_id_seq | sequence | wellsj
 public | users| table| wellsj
 public | users_id_seq | sequence | wellsj
(4 rows)

dailystatus=>

Anyone know what might be happening? I'm at a loss.

Thanks!
John



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Drop database/create database broken in 7.4?

2006-01-05 Thread John Wells
Scott Marlowe said:
> Yep, somewhere along the lines you accidentally created those tables in
> template1

<sheepish grin> yep, sorry guys. I need more sleep </sheepish grin>
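
For anyone else who hits this: the fix is to drop the strays from
template1 itself, so future CREATE DATABASE calls start clean (table names
taken from the \d output above; on 7.3 and later, dropping the tables
takes their serial sequences with them):

template1=> drop table sites;
DROP TABLE
template1=> drop table users;
DROP TABLE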

Thanks!
John


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Returns setof record PG/PLSQL

2005-08-14 Thread John Wells
In my quest to create a function that counts rows for all user tables in
the database, I've written the following:
--
drop function generate_table_count ();
create or replace function generate_table_count () returns setof record
as '
declare   
  tname record;
  count record;
  table text;
begin
for tname in select table_name from information_schema.tables 
   where table_schema = ''public'' loop
for count in execute ''select ''  || 
quote_ident(tname.table_name) || 
'' as name, count(*)  from '' || 
quote_ident(tname.table_name) loop  
table := count.name;
return next;
end loop;
end loop;
return;
end;
' language plpgsql;
--

Problem is, I can't figure out what parameters to pass to return next;
to make this return properly, and can't find an example in the
documentation.  I have it working by defining the function to return
setof text and then do return next as:

--
return next table || '' '' || count.count;
--

However, I really want each result (table name and count) to have its
own column.

Can someone help me out or point me in the direction of documentation
that will show an example?  It shouldn't be this hard, it seems.

Thanks, as always, for your help,
John


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Returns setof record PG/PLSQL

2005-08-14 Thread John Wells
On Sun, 2005-08-14 at 18:56 -0400, John Wells wrote:
> In my quest to create a function that counts rows for all user tables in
> the database, I've written the following:

Based on another example I've found, I've tried the two following
variations (to no avail), getting "ERROR:  wrong record type supplied
in RETURN NEXT" on both counts:

-- Variation 1 --
drop function generate_table_count ();
create TYPE rowcounts_t as (name TEXT, count int);
create or replace function generate_table_count () returns setof
rowcounts_t as '
declare   
  tname record;
  count record;
  table text;
begin
for tname in select table_name from information_schema.tables 
   where table_schema = ''public'' loop
for count in execute ''select ''  || 
quote_ident(tname.table_name)::text || 
'' as name, count(*)  from '' || 
quote_ident(tname.table_name) loop  

return next count;
end loop;
end loop;
return;
end;
' language plpgsql;
-- Variation 2 --
drop function generate_table_count ();
create TYPE rowcounts_t as (name TEXT, count TEXT);
create or replace function generate_table_count () returns setof
rowcounts_t as '
declare   
  tname record;
  count record;
  table text;
begin
for tname in select table_name from information_schema.tables 
   where table_schema = ''public'' loop
for count in execute ''select ''  || 
quote_ident(tname.table_name)::text || 
'' as name, count(*)::text  from '' || 
quote_ident(tname.table_name) loop  

return next count;
end loop;
end loop;
return;
end;
' language plpgsql;
--

Still struggling...any insight you might have is very much appreciated.
Thanks,
John



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Returns setof record PG/PLSQL

2005-08-14 Thread John Wells
On Sun, 2005-08-14 at 20:53 -0400, Tom Lane wrote:
> regression=# create type table_count_result as (table_name text, count
> bigint);
> CREATE TYPE
> regression=# create or replace function generate_table_count ()
> regression-# returns setof table_count_result as $$
> regression$# declare
> regression$#   tname record;
> regression$#   count table_count_result;
> regression$# begin
> regression$#   for tname in select table_name from information_schema.tables
> regression$#        where table_schema = 'public'  loop
> regression$#     for count in execute 'select ' ||
> quote_literal(tname.table_name) || ', count(*) from ' ||
> quote_ident(tname.table_name)  loop
> regression$#       return next count;
> regression$#     end loop;
> regression$#   end loop;
> regression$#   return;
> regression$# end$$ language plpgsql;
> CREATE FUNCTION
> regression=# select * from generate_table_count();

Sweet mercy...success.  Thanks...my forehead was a bit bloody from all
the head bashing going on over here... 

I have much to learn about pl/pgsql programming!

Thanks again!
John


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] pl/pgsql frustration

2005-08-13 Thread John Wells
Guys,

I'm trying to write a simple function that will return a table's actual
row count on PostgreSQL 7.4.  I've tried two different versions (see
below), both with the same result:

ERROR:  syntax error at or near "select" at character 127

I've also tried a cursor example found in the interactive 7.4 docs and
get a similar 'select' error.  Am I missing something here or is
something else going on?  Thanks for any help you can provide.

John

create or replace function count_rows (text) returns integer as '
declare   
  return_val int4;  
  curs1 refcursor;
  table_name ALIAS FOR $1;
begin
OPEN curs1 FOR EXECUTE 'select * from ' || quote_indent(table_name);

fetch curs1 into return_val;
close curs1;
end;
' language 'plpgsql';

create or replace function count_rows (text) returns integer as '
declare   
  return_val int4;  
  curs1 cursor FOR EXECUTE 'select count(*) from ' ||
quote_indent(table_name);
  table_name ALIAS FOR $1;
begin
OPEN curs1;
fetch curs1 into return_val;
close curs1;
end;
' language 'plpgsql';


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pl/pgsql frustration

2005-08-13 Thread John Wells
Guys,

Sorry...never mind.  I had accidentally pulled up the 8.0 documentation and
confused the syntax (single quotes being OK when you're using the $$
quoting, which I'm not).

Thanks,
John
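
For the archives, a variant of the first function that works on 7.4 once
the quoting is sorted out (written for this note, so treat it as a sketch;
7.4 has no EXECUTE ... INTO, so the count comes back through a FOR ...
EXECUTE loop):

create or replace function count_rows (text) returns integer as '
declare
    rec record;
begin
    for rec in execute ''select count(*) as n from '' || quote_ident($1) loop
        return rec.n;
    end loop;
    return 0;  -- not reached: count(*) always yields one row
end;
' language plpgsql;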


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Seeking PL/PGSQL example

2005-08-12 Thread John Wells
Guys,

I'm getting started with PL/PGSQL but want to understand if the following
is even possible:

With PL/PGSQL, how would I roll through all tables in my database and
print the table name, along with the row count of that table?

Sorry if this is too simplistic.  I've googled but must not be phrasing my
question properly.  Currently reading through the manual, but if someone
has any helpful tips I'd appreciate it.

Thanks!
John


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Optimizing large data loads

2005-08-06 Thread John Wells
Richard Huxton said:
> You don't say what the limitations of Hibernate are. Usually you might
> look to:
> 1. Use COPY not INSERTs

Not an option, unfortunately.

> 2. If not, block INSERTS into BEGIN/COMMIT transactions of say 100-1000

We're using 50/commit...we can easily up this I suppose.

> 3. Turn fsync off

Done.

> 4. DROP/RESTORE constraints/triggers/indexes while you load your data

Hmmm...will have to think about this a bit...not a bad idea but not sure
how we can make it work in our situation.

> 5. Increase sort_mem/work_mem in your postgresql.conf when recreating
>    indexes etc.
> 6. Use multiple processes to make sure the I/O is maxed out.

5. falls in line with 4.  6. is definitely doable.
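
In SQL terms, items 1 and 2 come down to the following (the items table
and file path are hypothetical):

BEGIN;
INSERT INTO items VALUES (1, 'first');
INSERT INTO items VALUES (2, 'second');
-- ...hundreds to thousands of inserts per transaction, not 50...
COMMIT;

-- or, fastest of all, a server-side bulk load from a tab-delimited file:
COPY items FROM '/path/on/server/items.dat';

COPY takes one trip through the parser for the whole file, which is where
most of the win over row-at-a-time INSERTs comes from.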

Thanks for the suggestions!

John


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Optimizing large data loads

2005-08-05 Thread John Wells
Hi guys,

We have a Java process that uses Hibernate to load approximately 14 GB of
data.  On a dual-proc 2.4 GHz Xeon with 2048 MB RAM, it's currently
taking over 13 hours to load (PostgreSQL 7.4.8).  We're flushing from
hibernate every 50 records.

I've turned fsync to false in postgresql.conf, and we've turned autocommit
off in our code.  Is there anything else I could try to temporarily speed
up inserts?

Thanks very much for your help.

John


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Copying bytea data out via pgsql

2005-07-23 Thread John Wells
Leonel Nunez said:

> I hope this helps

Leonel,

Worked perfectly...thanks.  I'll do the same thing via Java for my future
use (can't guarantee Perl will be available on all of our target
machines).

I appreciate the help!

John


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Copying bytea data out via pgsql

2005-07-22 Thread John Wells
Guys,

I have a number of jpegs and tiffs that are stored in a bytea field in a
PostgreSQL database by a Java program using Hibernate.

I need to copy these out to a flat file via pgsql for viewing, etc.  I've
tried

psql -c 'select binarydata_field from my_image_table where id=1' mydb >
flatfile

but although that seems to provide the data it does not seem to pull it in
a valid image format.  Is there some special handling I need to do here?
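
One technique that sidesteps psql's escaped-text output entirely is to
base64-encode the column in the query and decode on the client. A sketch,
assuming encode() (available from 7.3 on) and a base64(1)-style decoder on
the client side; -At suppresses psql's headers and column alignment:

psql -At -c "select encode(binarydata_field, 'base64') from my_image_table where id=1" mydb | base64 -d > flatfile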

Thanks!
John


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Consultants

2005-06-15 Thread John Wells
Guys,

My company is migrating from a Progress-based ERP system to a PostgreSQL
based system.

We know PostgreSQL from a development standpoint, but not from an admin
perspective.  I'd like to hire a consultant to help us with some important
projects, specifically benchmarking Progress versus PostgreSQL and helping
to architect our new N-tier system (we're building a large system to host
the ERP application), including the database server.

Can anyone recommend a very knowledgeable, experienced (yet reasonable)
PostgreSQL consulting firm?

Thanks!
John


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Consultants

2005-06-15 Thread John Wells
Richard Huxton said:
> John Wells wrote:
>
> > Can anyone recommend a very knowledgeable, experienced (yet reasonable)
> > PostgreSQL consulting firm?
>
> You'll be wanting to state your country/region too.

The company is located in Greensboro, NC, USA. Although on-site consulting
is preferred, it's not a requirement.

Thanks,
John


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] PostgreSQL-related job posting

2004-10-22 Thread John Wells
Guys,

I have an opening currently for the following position.  If interested,
email me at john.wells__replace_this_with_at__timco.aero.  Although relo
is stated as not covered, I may be able to fight for that given a very
qualified candidate.

Company:TIMCO Aviation Services
Location:   US-NC-Greensboro
Base Pay:   N/A
Employee Type:  Full-Time Employee
Industry:   Airline - Aviation
Computer Software
Manages Others: No

Job Type:   Information Technology
Req'd Education:    4 Year Degree
Req'd Experience:   At Least 1 Year
Req'd Travel:   None
Relocation Covered: No


 DESCRIPTION
Administers all areas of Linux and Unix systems, with specific emphasis on
Linux. Some PostgreSQL database administration also required.

 REQUIREMENTS
1. Bachelor Degree in Computer Science, Information Systems or related
fields.
2. Minimum of 2 years experience administering Linux/UNIX systems.
3. Minimum 1 year experience administering PostgreSQL databases, including
indexing, DBMS tuning, and query optimization.
4. Minimum 1 year experience with database design, including good
foundations in data normalization and referential integrity.
5. 1-3 years experience with Red Hat Linux and one of the following:
Slackware Linux, Debian Linux, SUSE Linux, Gentoo Linux.
6. 1-2 years experience with one of the following: Perl, Python, PHP.
7. Experience with UNIX, TCP/IP, firewalls, SAN, WAN, LAN, and network
monitoring.
8. Strong shell scripting experience required.
9. Must be willing to work in a 24 x 7 environment
10. Good written and verbal communications skills.
11. Must be a team player with advanced interpersonal/customer service
skills and a very positive attitude.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Two questions from the boss (SQL:2003 scalability)

2004-10-21 Thread John Wells
Guys,

My boss has been keeping himself busy reading MySQL marketing pubs,
and came at me with a few questions this morning regarding PostgreSQL
features (we're currently moving to PostgreSQL).

While I don't think either are really that important for our
situation, he wanted to know specifically whether PostgreSQL supported
SQL:2003, and what sort of capabilities PostgreSQL has to scale across
multiple CPUs and hosts (multithreading, load balancing, etc).

I appreciate any input you can provide.

Thanks,
JB

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] Ingres versus PostgreSQL

2004-09-03 Thread John Wells
Guys,

Now that Ingres has been open-sourced, I'm curious about where it
stands with regards to PostgreSQL.  Is it a worthy competitor? 
PostgreSQL has been my database of choice for many years, and I'm very
interested in how Ingres and PostgreSQL stack up in terms of features
and performance.

Any insight you can provide will be very much appreciated.  Thanks!

John

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] A simple question (under pressure :-))....

2004-06-09 Thread John Wells
Guys,

I have approx. 8 hours to meet a deadline, so I can no longer spend time searching
google...haven't found the answer yet.

In PG, is there not a way to select amongst databases?  

In other words, if I have one schema called sch1 and another called sch2, and I'm 
connected to sch2, could I not do:

select fieldname from sch1.tablename

I'm running 7.2.3, so I think there was no such thing as a schema in this
release...just a database, but can't recall.

It seems I've been able to do this in the past, but I may be remembering wrong.
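
For the archives: schema support (and the sch1.tablename syntax above)
only arrived in 7.3, so 7.2.3 has just databases. Crossing databases
requires contrib/dblink; roughly, in its 7.3-and-later form (the 7.2
contrib API was more primitive):

SELECT *
  FROM dblink('dbname=sch1', 'select fieldname from tablename')
       AS t(fieldname text);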

Thanks!
John

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Large table search question

2004-05-30 Thread John Wells
Guys,

I have a general question about designing databases for large data sets.

I was speaking with a colleague about an application we're preparing to
build.  One of the application's tables will potentially contain 2 million
or more names, containing (at least) the fields first_name, last_name,
middle_name and prefix.

A common lookup the application will require is the full name, so prefix +
first_name + middle_name + last_name.

My friend's suggestion was to create a lookup field in the table itself,
which would contain a concatenation of these fields created during insert.
 So, for each record, we'd have each individual field and then a
full_name field that would contain the combination of the ind. fields.
His argument is that this will make lookups in this manner extremely fast
and efficient.

I agree with his assertion, but get the feeling that this is sort of an
ugly design.  Would a compound index on these fields really be less
efficient?
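
For concreteness, the two designs side by side, written against a
hypothetical people table with the columns above (the expression index
form needs 7.4 or later):

-- compound index: one index serving lookups on the component columns
CREATE INDEX people_name_idx
    ON people (last_name, first_name, middle_name, prefix);

-- expression index: indexes the concatenation without storing a
-- redundant full_name column
CREATE INDEX people_full_name_idx
    ON people ((prefix || ' ' || first_name || ' ' ||
                middle_name || ' ' || last_name));

-- a query must repeat the same expression to use the second index:
SELECT * FROM people
 WHERE prefix || ' ' || first_name || ' ' ||
       middle_name || ' ' || last_name = 'Mr John Q Public';

Either way, a NULL in any component makes the whole concatenation NULL, so
real data will likely need COALESCE around the nullable fields.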

Thanks for your help!

John

---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] elog: out of memory

2004-02-29 Thread John Wells
Guys,

One of my scripts (php) is returning "elog: out of memory".  Can anyone tell me what
this means?  Haven't really found anything on the web yet except some code
references.

And wouldn't you know it...the script has functioned fine for some time, but 6.5 hours
before deadline it craps out...

(handlePearError) A Pear database error has been caught:
DB Error: unknown error
db_error Object
(
[error_message_prefix] =>
[mode] => 16
[level] => 1024
[code] => -1
[message] => DB Error: unknown error
[userinfo] => select tblJCSch.jcid, tblJCHead.jcno, tblJCHead.cardtitle,
tblJCSch.checktypeid
FROM (tblJCHead INNER JOIN
_ztwdsjc59345ac000865a6e5ae2e3dd73786707
ON tblJCHead.JCID =
_ztwdsjc59345ac000865a6e5ae2e3dd73786707.JCID)
INNER JOIN tblJCSch ON tblJCHead.JCID = tblJCSch.JCID
where tblJCSch.schtypeid = 47
and   tblJCSch.checktypeid = ID |Check
|Priority|Description  [0]
and   tblJCSch.jcid = tblJCHead.jcid
and   tblJCHead.deckid = 10
order by tblJCHead.jcno [nativecode=ERROR:  elog: out of
memory
]
[callback] => handlePearError

Thanks for your help!

John

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Embedded transactions

2004-01-26 Thread John Wells
On this page: http://www.compiere.org/technology/independence.html, the
project leader of Compiere (a popular ERP package) states that the move to
Postgres failed because of lack of support for embedded
transactions...something both Oracle and DB2 support.

Can someone explain to me exactly what embedded transactions are and why
they're challenging enough to not be included in Postgres or MySQL?  I'm
guessing it's some sort of begin/commit/rollback transaction within a
begin/commit/rollback block, but I may be trivializing it.
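
For later readers: the guess is right. The usual name is nested
transactions, and PostgreSQL 8.0 added the equivalent facility as
savepoints. A sketch against a hypothetical orders table:

BEGIN;
INSERT INTO orders VALUES (1, 'widget');
SAVEPOINT before_risky;
INSERT INTO orders VALUES (1, 'duplicate');  -- suppose this one fails
ROLLBACK TO SAVEPOINT before_risky;          -- undoes only the failed part
COMMIT;                                      -- the first insert still commits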

Thanks,
John


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Want a job?

2004-01-22 Thread John Wells
Guys,

Sent a message out last week asking for info, but wanted to send another
more direct one.

The company I work for has decided to cut ties with Progress Software
Corporation due to their draconian licensing policies (they're bleeding
us).

I'm looking at our available options and am seeking one or more of the below:

- a for-hire development shop that can help us with this effort.
- consultants who are familiar with Progress and know what it will take to
convert to another platform (postgresql + some language?).
- top-notch contractors with Progress experience.

If you fit any of these, or know someone who does, please contact me
off-list.  I'm trying to get as much info as possible as to what's
available to us.

Thanks!

John






---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Pronouncing PostgreSQL

2003-12-03 Thread John Wells
Ok...this has irritated me for some time.  I've seen reference to dividing
it up into proper syllables in the mailing archives, but couldn't find a
pronunciation of the "gres" part...

Is it:

Post - grease - queue - el

Post - greee - es - queue - el

or

Post - gress - queue - el (as in impress)

Thanks,

John



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Pronouncing PostgreSQL

2003-12-03 Thread John Wells
Yeah... I found a link to the sound file in an archived message, but it's
404 now...

John

Dann Corbit said:
> I always say:
>
> Pigee - squeal.
>
> At least it's easy to remember.
>
> None of the below exactly roll off the tongue.  I seem to remember a
> site that had a sound file attachment for how to pronounce PostgreSQL,
> but I can't remember where I saw it.
>
> -----Original Message-----
> From: John Wells [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, December 03, 2003 4:43 PM
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] Pronouncing PostgreSQL
>
>
> Ok...this has irritated me for some time.  I've seen reference
> to dividing it up into proper syllables in the mailing
> archives, but couldn't find a pronunciation of the "gres" part...
>
> Is it:
>
> Post - grease - queue - el
>
> Post - greee - es - queue - el
>
> or
>
> Post - gress - queue - el (as in impress)
>
> Thanks,
>
> John
>
>
>
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
>        subscribe-nomail command to [EMAIL PROTECTED] so that your
>        message can get through to the mailing list cleanly




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Response from MySql AB (Re: [GENERAL] Humor me: Postgresql vs. MySql (esp. licensing))

2003-10-09 Thread John Wells
Here's an interesting response from mysql.com sales.  Frankly, I don't see
how using it on multiple internal servers violates the GPL?!?:

btw, forgive me if this comes through as a repost later.  I'm still
getting used to having multiple email addrs in this client :)

Hi John,

Thank you for your interest in MySQL.  My answers below.

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, October 08, 2003 7:50 AM
> To: [EMAIL PROTECTED]
> Subject: MySQL Licensing Question, US, -unknown-
>
>
> The following form was submitted via MySQL.com feedback engine:
>
>   Name: John Wells
>
>   Email address: [EMAIL PROTECTED]
>
>   Type of interest: for company use
>
>   Country: US
>
>   Support contract: no
>
>   Enter Your Questions, Comments, Feedback or Your Message here:
>
> I'm a little confused on the meaning of "distributing either
> externally or internally" on the license page.  If we develop an
> application that uses MySql as the backend database and use it on our
> internal LAN, do we need to license it commercially?

Internal distribution applies to internally distributing/installing more
than one application.  If you plan to use one installation of MySQL it
would not be considered distribution and you could use it for free with
your internally built application.

> How would this
> be in violation of the GPL?

If your application (internal or external) is licensed under the terms of
the GPL, you are allowed to distribute MySQL, also under the GPL, for
free.

> What do you mean, exactly, by internal
> distribution?
>
> Also, if we write an application that expects MySql (like a web app),
> but we don't package the database with the application and tell the
> customer that they must download and install it, is this a violation of
> the GPL?

Again, if your application is licensed under the terms of the GPL you can
use MySQL for free.  If your application is not licensed under the terms
of the GPL, commercial non-GPL licenses are necessary.  You will need to
purchase licenses under this scenario (unless you go GPL); please review
our licensing examples,
http://www.mysql.com/products/licensing-examples.html.

How are you planning to use MySQL?

Mr. Matt Fredrickson,
Sales Executive
MySQL, Inc. - www.mysql.com
Office: +1 425 390 0154
Fax: +1 425 663 2001

MySQL Featured in the Wall Street Journal:
http://webreprints.djreprints.com/785490482991.html



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Humor me: Postgresql vs. MySql (esp. licensing)

2003-10-08 Thread John Wells
Oliver Elphick said:
> Unless they actually attach extra conditions to the GPL (i.e. "This
> product is licensed under GPL with the following extra conditions...")
> this is rubbish.  The GPL allows you to do what you like with the
> software internally; its restrictions only apply to further
> distribution.  MySQL would _like_ you to pay for support or buy a
> commercial licence, but you only _need_ to buy a licence if you want to
> distribute a modified or linked MySQL without distributing your own
> source code.
>
> If that position changes, we (Debian) will need to move MySQL from
> Debian's main archive to its non-free section, or even drop it
> altogether.  So do let me know if that becomes necessary!

Then read this page: http://www.mysql.com/products/licensing.html

Very carefully.  Particularly the sentence "As long as you never
distribute (internally or externally)...".



---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Humor me: Postgresql vs. MySql (esp. licensing)

2003-10-08 Thread John Wells
Yes, I know you've seen the above subject before, so please be gentle with
the flamethrowers.

I'm preparing to enter a discussion with management at my company
regarding going forward as either a MySql shop or a Postgresql shop.

It's my opinion that we should be using PG, because of the full ACID
support, and the license involved.  A consultant my company hired before
bringing me in is pushing hard for MySql, citing speed and community
support, as well as ACID support.

My biggest concern with MySQL is licensing.  We need to keep costs low,
and last I remember the parent company was being pretty strict on "fair
use" under the GPL.  If I recall, they even said a company would have to
license the commercial version if it were simply used operationally within
the company.

Also, I was under the impression that Postgresql had pretty much caught up
with MySql in the speed category...is this not the case?

Finally, ACID support in mysql always seemed kind of a hack...perhaps
this has changed?

Thanks for any input (armament ;) ) you can provide.

John

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Humor me: Postgresql vs. MySql (esp. licensing)

2003-10-08 Thread John Wells
Sorry for the repost again.  I emailed the Admin asking to cancel it (I
originally posted from a non-subscribed address), but perhaps he missed
it.

John

John Wells said:
> Yes, I know you've seen the above subject before, so please be gentle with
> the flamethrowers.
>
> I'm preparing to enter a discussion with management at my company
> regarding going forward as either a MySql shop or a Postgresql shop.
>
> It's my opinion that we should be using PG, because of the full ACID
> support, and the license involved.  A consultant my company hired before
> bringing me in is pushing hard for MySql, citing speed and community
> support, as well as ACID support.
>
> My biggest concern with MySQL is licensing.  We need to keep costs low,
> and last I remember the parent company was being pretty strict on "fair
> use" under the GPL.  If I recall, they even said a company would have to
> license the commercial version if it were simply used operationally within
> the company.
>
> Also, I was under the impression that Postgresql had pretty much caught up
> with MySql in the speed category...is this not the case?
>
> Finally, ACID support in mysql always seemed kind of a hack...perhaps
> this has changed?
>
> Thanks for any input (armament ;) ) you can provide.
>
> John
>
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>        joining column's datatypes do not match



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] book on advanced postgresql?

2003-10-03 Thread John Wells
Are there any books out there that are fairly current, but go into the
more advanced aspects of Postgresql, such as internals and advanced
programming?  I'd rather avoid the whole "intro to SQL with a few extra
chapters on Postgresql" if I could help it.

Thanks,

John



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] book on advanced postgresql?

2003-10-03 Thread John Wells
Tom,

I'd love to, but I can't read at a terminal for longer than an hour or so.
 I really prefer the dead tree version.

Does anyone provide printed copies of the documentation for a fee?  I know
some companies do this for other apps...

Thanks,

John

Tom Lane said:
> John Wells <[EMAIL PROTECTED]> writes:
> > Are there any books out there that are fairly current, but go into the
> > more advanced aspects of Postgresql, such as internals and advanced
> > programming?  I'd rather avoid the whole "intro to SQL with a few extra
> > chapters on Postgresql" if I could help it.
>
> I'm not aware of any that are tilted towards advanced stuff.  Erm, have
> you tried just reading the documentation :-) ?  I realize it's not
> always easy to find what you need, but we could use suggestions about
> how to make it better ...
> http://developer.postgresql.org/docs/postgres/index.html
>
>                       regards, tom lane
>
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Resources for Progress conversion...

2003-10-03 Thread John Wells
Anyone on the list have experience with this?  I (potentially) could be
hired today to help in an effort to convert from Progress to Postgresql,
and I'm looking for any resources/insights/urls/books/etc that might aid
towards this end.

Anything you can forward will be greatly appreciated (and paid for in beer
if you're in the atlanta area ;-)).

Thanks!
John

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Resources for Progress conversion...

2003-10-03 Thread John Wells
Sorry about the repost here guys... I sent this from my other email
(non-subscribed), but sent an email to the list admin before reposting to
get him/her to cancel it.  Apparently some miscommunication there. :)


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Seeking insight from heavy lifters

2003-09-29 Thread John Wells
A manager friend of mine sent me the following concern.  He's preparing to
shift to Postgresql from a proprietary DB and 4gl system:

---
To that end, I've also started studying up on Postgresql.  It seems to
have all the necessary features for a transaction-heavy DB.  The recent
release is 7.3.  Of course, the proof will be in the pudding.  We
average 2.5 million transactions per day or 800 per second.
Unfortunately, we would have no way of testing that until we committed to
getting the business logic moved over and had something to test it with.
This is a bit of a catch-22 situation.  Just wished I knew of someone
locally who was running Postgresql in such a heavy environment.  I'd love
to find out how it performs for them.
---

While I have a lot of experience with PG, it's not really been in a heavy
processing environment.  Could I get some input to send him from anyone
out in the field using Postgres in a similar environment?

If PG isn't the best option here, what is?

Thanks very much for your input!

John




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Seeking insight from heavy lifters

2003-09-29 Thread John Wells
Bruce,

A system called Progress, which is a 4GL/database combo I'm not really
that familiar with.

They're aware that Postgresql doesn't have a 4gl per se, but I think they
view the savings they'll gain by switching to an open solution as well
worth it.

Thanks,

John

Bruce Momjian said:

> What are they transferring from, exactly?
>
> ---


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

