Re: [GENERAL] is it tcl problem or pg problem?

2001-08-27 Thread newsreader

Interesting.  I think I may have
completely mistaken exec (or eval exec) in
tcl for the power of the backtick operator
in perl.

Let's say my perl script is

--
$a=`$a`;
print $a;
--

Then the tcl $data variable gets not only
$a but also that error message.

However, if I change the perl to

---
$a=`$a`;
print $a;
exit 0;
---

then the tcl $data variable gets only the
error message.

So I must not "exit 0"!  Or else I get
nothing but the error message.

Perhaps the correct way to do this is
to actually learn enough tcl to do what
perl is doing now, but I would prefer
to stick with perl.
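
For reference, here is a minimal sketch of the perl helper I have in mind
(see the quoted messages below for the "at" job plan); the file name and
time spec are hypothetical arguments, and it assumes "at" reports the new
job as "job N at ..." on stderr:

--
#!/usr/bin/perl
# hypothetical helper: queue a job with "at", print only the job number
my ($file, $when) = @ARGV;              # e.g. cmds.txt "13:10 8/31/2001"
my $out = `at -f $file $when 2>&1`;     # "at" announces the job on stderr
my ($jobno) = $out =~ /job\s+(\d+)/;    # pull out the job number
print $jobno if defined $jobno;         # tcl's catch/exec picks this up
# whether to "exit 0" here is exactly what I am unsure about
--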




On Mon, Aug 27, 2001 at 10:14:39AM -0500, Len Morgan wrote:
> Have you tried running a "dummy" script that just returns say a number to
> see if you get the same error?  If I'm not mistaken, the return value from
> "exec" in Tcl is the return value from the command you execute (i.e., 0 if
> successful, etc).  While you can "print" from the program, I'm not sure you
> can capture just that value.  You might also want to make sure that you are
> doing a real "exit" from your script and not just letting it "fall through"
> 
> len morgan
> 
> - Original Message -
> From: <[EMAIL PROTECTED]>
> To: "Len Morgan" <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Monday, August 27, 2001 10:43 AM
> Subject: Re: [GENERAL] is it tcl problem or pg problem?
> 
> 
> > On Mon, Aug 27, 2001 at 09:07:34AM -0500, Len Morgan wrote:
> > > Try:
> > >
> > > catch { eval exec $NEW($1)} data
> > >
> > > I'm not sure that this will solve the problem but executing commands
> > > from
> >
> > It did not :(
> >
> > > commands?  Perhaps your "date" example was just an example (because you
> > > can
> > > use now()::date from within Postgres).
> >
> > What I really want to do is run something like
> > at -f file 13:10 8/31/2001
> > and then capture "at" job number.
> >
> > What I really want to get is "at" job number
> > and because I know perl better I am actually
> > going call a perl script from tcl.  Perl
> > will call "at" and parse job number and
> > print it.  Tcl will catch the number and
> > put in a database column
> >

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] is it tcl problem or pg problem?

2001-08-27 Thread newsreader

On Mon, Aug 27, 2001 at 11:12:40AM -0400, Tom Lane wrote:
> > child process lost (is SIGCHLD ignored or trapped?)
> 
> It's ignored in a backend, see src/backend/tcop/postgres.c.
> 
> Current sources change the SIG_IGN setting to SIG_DFL, which may
> well solve your problem; you could try patching 7.1 sources that way
> and see if it helps.

The real problem for me is that my column
gets filled with the error line as well
as the real data I want.

I will try patching it.  Do you mean to
say that I will not get the error message
with the patched version?

Thanks



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] PL/java?

2001-08-27 Thread newsreader

On Mon, Aug 27, 2001 at 09:40:13AM -0400, Alex Pilosov wrote:
> For the people who really really want PL/java, you can fake it with
> untrusted pl/perl  (in 7.2) and Inline::Java.
> 

Off topic --
I am very interested in this plperlu.

Can plperlu be used in triggers?  Any idea
how I can go about using it before 7.2 is released?

Thanks

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] raw partition

2001-08-26 Thread newsreader

While people are discussing mysql vs pg,
I wonder whether either of the two
supports raw partitions.  If not,
is it on the todo list?

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] FTI is really really slow; what am I doing wrong?

2001-08-22 Thread newsreader


Did you vacuum (ideally VACUUM ANALYZE) after
populating the tables?
If not, you should; the planner's row
estimates depend on it.
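
In case it helps, a minimal sketch of doing that through DBI/DBD::Pg
(the connection parameters are hypothetical; AutoCommit has to be on,
since VACUUM will not run inside a transaction block):

--
use DBI;
my $dbh = DBI->connect('dbi:Pg:dbname=test', 'postgres', '',
                       { AutoCommit => 1, RaiseError => 1 });
# refresh the planner statistics on both tables
$dbh->do('VACUUM ANALYZE st');
$dbh->do('VACUUM ANALYZE st_fti');
$dbh->disconnect;
--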


On Wed, Aug 22, 2001 at 11:08:55AM -0400, Paul C. wrote:
> Greetings,
>   I am trying to test out the performance of the contrib/fulltextindex 
> package and I am getting horrid performance results.
> The Setup:
> I created a simple table, ST (id SERIAL, body varchar(1024), which is to be 
> searched.  I created the ST_FTI table, trigger and indices as per 
> instructions in the FTI readme and C file.  To populate the table, I took a 
> flat text version of 'War and Peace' I found on the net, broke it up into 
> sentences and inserted each sentence into ST as a row.  So I have about 
> 38,000 sentences and my ST_FTI table is about 2 million rows.
> The Test:
> There is exactly one sentence (row) that has the strings 'Newton' and 
> 'Kepler' in it.  That is my target.  For a straight select on ST:
>   select * from st where body ~* 'newton' and body ~* 'kepler';
> the cost is 1100.41
> BUT for an query using the FTI indices:
>   select s.* from st s, st_fti f1, st_fti f2 where f1.string
> ~ '^kepler' and f2.string ~ '^newton' and s.oid = f1.id
> and s.oid = f2.id;
> the cost becomes a staggering 80628.92!!!  The plans are pasted at the end 
> of this message.
> Now, I have all the indices created (on id of st_fti, on string of st_fti 
> and on oid of st).  I cannot figure out why this is so much worse than the 
> straight query.  Indeed, the cost to look up a single string in the st_fti 
> table is way high:
>   select * from st_fti where string ~ '^kepler';
> costs 36703.40, AND its doing a Seq Scan on st_fti, even though an index 
> exists.
> What am I doing wrong?  Is it the sheer size of the st_fti table that is 
> causing problems?  Any help would be greatly appreciated.
> Thanks,
> Paul C.
> 
> FTI search
> NOTICE:  QUERY PLAN:
> Merge Join  (cost=80046.91..80628.92 rows=110 width=28)
>   ->  Sort  (cost=41827.54..41827.54 rows=19400 width=24)
>         ->  Hash Join  (cost=1992.80..40216.39 rows=19400 width=24)
>               ->  Seq Scan on st_fti f2  (cost=0.00..36703.40 rows=19400 width=4)
>               ->  Hash  (cost=929.94..929.94 rows=34094 width=20)
>                     ->  Seq Scan on st s  (cost=0.00..929.94 rows=34094 width=20)
>   ->  Sort  (cost=38219.37..38219.37 rows=19400 width=4)
>         ->  Seq Scan on st_fti f1  (cost=0.00..36703.40 rows=19400 width=4)
> EXPLAIN
> 
> Plain search:
> NOTICE:  QUERY PLAN:
> Seq Scan on st  (cost=0.00..1100.41 rows=1 width=16)
> EXPLAIN
> 
> 
> _
> Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp
> 
> 
> ---(end of broadcast)---
> TIP 2: you can get off all lists at once with the unregister command
> (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] Postgres hangs during VACUUM (autocommit = false)

2001-08-21 Thread newsreader

On Tue, Aug 21, 2001 at 07:19:42PM -0400, Tom Lane wrote:
> 
> What I suspect is that "autocommit off" causes the DBD driver to send a
> fresh BEGIN immediately after the COMMIT.  You might be better off with
> "autocommit on" which I think suppresses any automatic issuance of
> BEGIN/COMMIT.  Then you'd need to issue "BEGIN" and "COMMIT" explicitly
> to turn your module into a transaction block.
> 


$ perldoc DBD::Pg 

- snip
.
.
   According to the DBI specification the default for AutoCommit is TRUE.  In
   this mode, any change to the database becomes valid immediately. Any
   'begin', 'commit' or 'rollback' statement will be rejected.

   If AutoCommit is switched-off, immediately a transaction will be started by
   issuing a 'begin' statement. Any 'commit' or 'rollback' will start a new
   transaction. A disconnect will issue a 'rollback' statement.

-

Suggestion to the original poster: don't use persistent
connections, then, or else temporarily stop the front
ends.  Vacuuming locks the tables anyhow, and
the front ends won't be able to access them during vacuuming.

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] Re: is this possible? it should be!

2001-08-20 Thread newsreader

On Mon, Aug 20, 2001 at 04:56:29PM -0700, Tony Reina wrote:
> Perhaps GROUP BY will get you where you want to go:
> 
> select count(*), a, b, c from a where d=2 group by a, b, c order by e limit 10;
> 
> 

Here count(*) doesn't give the total count, i.e. the grand
total, even if there is no "limit"; with group by it only
counts within each group.


What would be nice is if pg would return the 10 rows but also
report, at the bottom of the display, the total number of
matching rows.  That way DBI could just do
$n=$sql->total_rows;
or something like that.  I suppose that requires a major
hack on postgres?  No?  I don't think it would cost any
additional cpu to return the total number of rows, since
sorting needs to see all the rows and hence already knows
the total.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[GENERAL] is this possible? it should be!

2001-08-19 Thread newsreader

Hello

I have statements (highly simplified just to get
the point across) like

select a,b,c from a where d=2 order by e limit 10;

Now I think that because of "order by" the above query
already "knows" the result of the below query

select count(*) from a where d=2;

The point is that I want to know the total number
of matches and I also want to use "limit".  And
I don't want to do two queries.

If it's impossible, I would like to know whether
the query costs PG the same whether I run it with or
without the limit.

If I use DBI, the simplified query looks like

$s=$dbh->prepare('select a,b,c from a where d=2 order by e ');
$s->execute();

I get the total number of rows by

$n=$s->rows;

I then use perl to implement "limit"
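
Roughly like this (a minimal sketch, reusing the hypothetical table from
above and an already-connected $dbh):

--
my $s = $dbh->prepare('select a,b,c from a where d=2 order by e');
$s->execute();
my $n = $s->rows;                       # total number of matches
my $limit = 10;
my @page;
while (my @row = $s->fetchrow_array) {  # keep only the first $limit rows
    push @page, [ @row ];
    last if @page >= $limit;
}
$s->finish;                             # throw the rest away
--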

Thanks in advance for any hints


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] why sequential scan

2001-08-16 Thread newsreader

On Thu, Aug 16, 2001 at 08:10:41PM -0400, [EMAIL PROTECTED] wrote:
> Ok I set enable_hashjoin and enable_mergejoin to off
> and performance is much much worse: just over 1 second
> job becomes a minute job
> 
> Perhaps I should re-check if the database
> gets bigger.
> 
> Thanks a lot
> 
> On Thu, Aug 16, 2001 at 12:45:28PM -0400, Tom Lane wrote:
> > [EMAIL PROTECTED] writes:
> > > I would then iterate over each id I get and
> > > look up in item like this
> > 
> > > q=> select * from item where item =? order by finish
> > 
> > That's a nestloop join with inner indexscan.  The planner did consider
> > that, and rejected it as slower than the hashjoin it chose.  Now,
> > whether its cost model is accurate for your situation is hard to tell;
> > but personally I'd bet that it's right.  1500 index probes probably
> > are slower than a sequential scan over 5000 items.
> > 
> > You could probably force the planner to choose that plan by setting
> > enable_hashjoin and enable_mergejoin to OFF.  It'd be interesting to
> > see the EXPLAIN result in that situation, as well as actual timings
> > of the query both ways.
> > 
> > regards, tom lane
> > 
> > ---(end of broadcast)---
> > TIP 5: Have you checked our extensive FAQ?
> > 
> > http://www.postgresql.org/users-lounge/docs/faq.html

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] why sequential scan

2001-08-16 Thread newsreader

Two of the estimates, as I understand them, are quite
good.

select distinct id from body_index where string='book'

returns about 1500 rows.  That matches
the bottom line of the plan.

There are 5139 rows in table item.  That is
the same number of rows shown in the plan for the
sequential scan.

If I were doing a manual join I would do

q=> select distinct id from body_index where string='book'

which gives me an index scan.

I would then iterate over each id I get and
look it up in item like this

q=> select * from item where item = ? order by finish

Explain gives me a 1-row estimate for each lookup.
At most 1500 rows.  No?
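
In DBI terms the manual join would look roughly like this (a sketch only,
with a hypothetical $dbh; it reproduces the per-id index probes, not the
final overall ordering):

--
my $ids = $dbh->selectcol_arrayref(
    q{select distinct id from body_index where string = 'book'});
my $lookup = $dbh->prepare(
    q{select * from item where item = ? order by finish});
my @rows;
for my $id (@$ids) {                    # ~1500 probes, one per id
    $lookup->execute($id);
    while (my $row = $lookup->fetchrow_hashref) {
        push @rows, $row;
    }
}
--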

Below is the original plan for easier reference
-
q=> explain select distinct h.id,i.item,i.heading,i.finish from item i, body_index h where h.id=i.item and (h.string='book') order by finish;
NOTICE:  QUERY PLAN:

Unique  (cost=6591.46..6606.51 rows=150 width=24)
  ->  Sort  (cost=6591.46..6591.46 rows=1505 width=24)
        ->  Hash Join  (cost=5323.27..6512.04 rows=1505 width=24)
              ->  Seq Scan on item i  (cost=0.00..964.39 rows=5139 width=20)
              ->  Hash  (cost=5319.51..5319.51 rows=1505 width=4)
                    ->  Index Scan using body_index_string on body_index h  (cost=0.00..5319.51 rows=1505 width=4)
--


Thanks

On Thu, Aug 16, 2001 at 10:59:18AM -0400, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > Can someone explain why pg is doing
> > a sequential scan on table item with the following
> > statement
> 
> Looks like a fairly reasonable plan to me, if the rows estimates are
> accurate.  Are they?
> 
>   regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] why sequential scan

2001-08-16 Thread newsreader

Can someone explain why pg is doing
a sequential scan on table item with the following
statement

-
q=> explain select distinct h.id,i.item,i.heading,i.finish from item i, body_index h where h.id=i.item and (h.string='book') order by finish;
NOTICE:  QUERY PLAN:

Unique  (cost=6591.46..6606.51 rows=150 width=24)
  ->  Sort  (cost=6591.46..6591.46 rows=1505 width=24)
        ->  Hash Join  (cost=5323.27..6512.04 rows=1505 width=24)
              ->  Seq Scan on item i  (cost=0.00..964.39 rows=5139 width=20)
              ->  Hash  (cost=5319.51..5319.51 rows=1505 width=4)
                    ->  Index Scan using body_index_string on body_index h  (cost=0.00..5319.51 rows=1505 width=4)

-

"item" table has integer primary key "item".  It has
15 or so other columns.

The performance is not very impressive with about
5000 records in the item table and 1.5 million records in
body_index, and both are supposed to get
much bigger in the real-life situation.

Is the performance bottleneck that
particular sequential scan?

The database has just been vacuumed.

Thanks in advance

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] I am confused about PointerGetDatum among other things

2001-08-14 Thread newsreader

If anyone cares, I have figured out how to
do this: I use SPI_getbinval,
and it works perfectly.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [GENERAL] createdb confusion

2001-08-08 Thread newsreader

I am using 7.1.2 on red hat 7.1

I compiled postgres myself

On Wed, Aug 08, 2001 at 08:16:35PM -0400, [EMAIL PROTECTED] wrote:
> man creatdb says -D supposed to be specify the
> alternative location
> 
> I try (as postgres user)
> 
> $ createdb -D /bla bla
> 
> and it says
> 
>   absolute path are not allowed.
> 
> Then I read man initlocation.  The example
> I see is
> 
> $ initlocation /opt/postgres/data
> $ createdb -D /opt/postgres/data/testdb testdb
> 
> so I do the same and it fails with the same reason
> 
> Does anyone have any idea?
> 
> thanks
> 
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] very odd behavior

2001-05-10 Thread newsreader

Thanks everyone for the very quick replies.

The reason I found it odd was that I had
created another table with the same
field name and don't recall having
problems at the time with 7.0.3.
Or did I hit the same problem
and just forget?  I dumped
and reloaded the 7.0.3 table into 7.1
without a problem, though.

On Thu, May 10, 2001 at 12:26:53PM -0600, Creager, Robert S wrote:
> 
> desc is a keyword - ORDER BY DESC-ending
> 
> Robert Creager
> StorageTek
> INFORMATION made POWERFUL
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> > 
> > I have 7.1
> > 
> > Can someone take a look the following
> > and tell me why I'm getting errors?
> > I'm completely baffled!
> > 
> > 
> > what=> create table bla(desc text,def text,data text);
> > ERROR:  parser: parse error at or near "desc"
> > what=> create table bla("desc" text,def text,data text);
> > CREATE
> > what=>

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [GENERAL] Re: a primer on trigger?

2001-05-04 Thread newsreader

On Fri, May 04, 2001 at 12:47:02PM -0400, Joel Burton wrote:
> 
> Hmmm... this raises an interesting question.
> 
> Would it be possible to hook into (via trigger or other mechanism) so that
> we could execute a function on commit? There are PG triggers to do things
> like send email, etc., which, yes, can't be undone if the transaction

Could you kindly point me to a reference
for this 'trigger that emails'?  I just
want to see how it's done and see if
I can modify it to my needs.

Thanks

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [GENERAL] disk usage advice needed

2001-05-02 Thread newsreader

On Wed, May 02, 2001 at 10:45:09AM -0400, Bruce Momjian wrote:
> > I wish these two tables to live on two separately.
> 
> I just wrote two articles, one on performance and the other on writing
> PostgreSQL applications.  You can get them at:
> 
>   http://candle.pha.pa.us/main/writings/pgsql/
> 

Thank you very much.  Your article was precisely what
I was looking for.  It pointed out to me that I need this
query
select relfilenode,relname from pg_class where relname !~ '^pg';

Regards




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[GENERAL] anyone else with mod_perl/apache and 7.1

2001-04-23 Thread newsreader

I recently upgraded my production server
to 7.1 because people keep saying how much
better the performance is than 7.0.3,
and because the 8k row length limitation was removed.

postgres is accessed by mod_perl processes
maintaining persistent connections.  Before,
I could count the number of mod_perl processes
and the number of postgres backend processes,
and they were the same.  Now postgres
processes outnumber mod_perl processes by
about 25%, and for some reason the number
of processes (both mod_perl and postgres)
keeps increasing beyond what is normal for the same
amount of traffic at my site.

Anyone else notice that?

Thanks

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[GENERAL] watch your DBI scripts if you are upgrading to 7.1

2001-04-18 Thread newsreader

psql no longer accepts the syntax
$ psql dbname@hostname

In my DBI scripts I had
$d=DBI->connect('dbi:Pg:dbname=dbname@hostname','user');
That must be rewritten as
$d=DBI->connect('dbi:Pg:dbname=dbname;host=hostname','user');
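
For completeness, a fuller version of the new form (the host, port,
user and password here are hypothetical; only the dbname=...;host=...
part is the point):

--
use DBI;
my $d = DBI->connect('dbi:Pg:dbname=dbname;host=hostname;port=5432',
                     'user', 'password')
    or die "connect failed: $DBI::errstr";
--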

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [GENERAL] Select

2001-04-17 Thread newsreader

On Tue, Apr 17, 2001 at 09:23:02AM -0300, Marcelo Pereira wrote:
> Hi All,
> 
> 
> Now I would like to select all employees which name begins with the letter
> "P".
> 
> > Select * from employee where "name-begin-with-letter-P"  :-)
> 

select * from employee where name ~ '^P';

or if case does not matter

select * from employee where upper(name) ~ '^P';

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [GENERAL] man vacuum is broken?? in 7.1

2001-04-15 Thread newsreader

It turns out I don't really need to read man vacuum anyway;
man vacuumdb is good enough.

On Sun, Apr 15, 2001 at 04:12:12PM -0400, [EMAIL PROTECTED] wrote:
> Is it me or man vacumm is broken in 7.1 release?
> Later pages seem to be corrupted.  I don't
> think my download is to be blamed as
> md5sum checks just fine.
> 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] man vacuum is broken?? in 7.1

2001-04-15 Thread newsreader

Is it me, or is man vacuum broken in the 7.1 release?
Later pages seem to be corrupted.  I don't
think my download is to blame, as the
md5sum checks out fine.


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] consider increasing WAL_FILES

2001-04-13 Thread newsreader

I dumped a database from 7.0.3 and am attempting
to restore it into 7.1rc4, and I get the following
messages.  I can reproduce them by
dropping the new db and recreating it.

The line number varies from one restore to
the next, and the 'MoveOfflineLogs' message
was seen only once.

Should I worry?  How do I increase
WAL_FILES?

The largest table has about 500,000 entries.

Thanks in advance



DEBUG:  copy: line 85254, XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  MoveOfflineLogs: remove 0010
DEBUG:  MoveOfflineLogs: remove 0011
DEBUG:  copy: line 305196, XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  copy: line 5129, XLogWrite: new log file created - consider increasing WAL_FILES

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[GENERAL] -F option again

2001-03-12 Thread newsreader


I think I have been using the -F option,
but now I'm not sure.  I start either
with pg_ctl or with postmaster directly.  In any case,
the man pages suggest that you can pass optional parameters
to postgres via the -o switch of postmaster.  I would
think that if I did it correctly such options would
show up on the postgres backend processes.  I do not
see such options in top.  Below is the relevant
snippet of my top output.  I've
made sure there is no trailing command line being
cut off due to screen size limitations.

Why do I not see "postgres -F"?

-
  4:44pm  up 11 days,  9:12,  1 user,  load average: 0.00, 0.00, 0.00
84 processes: 83 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  0.8% user,  1.7% system,  0.0% nice, 97.3% idle
Mem:   385032K av,  339060K used,   45972K free,   0K shrd,2152K buff
Swap:  136512K av,7640K used,  128872K free  100956K cached

  PID PRI  SIZE  RSS SHARE   TIME COMMAND
 1239  15   900  900   684   0:01 top
14287  15   740  624   548   5:55 /usr/local/pgsql/bin/postmaster -o -F -S 2048
20221   9  5048 4988  4516   0:06 /usr/local/pgsql/bin/postgres localhost httpd what 
idle

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [GENERAL] has anybody gotten cygwin1.1.8 to work with postgresql?

2001-02-28 Thread newsreader

I tried at one point, though I don't know the cygwin version.

It didn't work; configure said the C compiler cannot produce
executables.

On Wed, Feb 28, 2001 at 10:43:54PM -0500, Jeff wrote:
> I've tried fruitlessly to install cygwin1.1.8 work with postgresql7.03
> 
> Has any body out there done it?
> 
> Where can I get the latest postgresql7.1?
> 
> Thank you
> 



Re: [GENERAL] DBD::Pg is suddenly acting up!

2001-02-21 Thread newsreader

BTW, what is shown below was done on
a different machine, where postgres is
installed in a user directory.
Just to rule out the confusion I had earlier,
I deleted bin, lib, and include in the user
home directory and reinstalled under
/usr/local/pgsql, and the problem still
remains, namely that I have to supply an extra -o
to make it work.


On Wed, Feb 21, 2001 at 11:43:46PM -0500, [EMAIL PROTECTED] wrote:
> Thank you.  Look what I get..
> --
> $ pg_ctl start -o "-F -S 2048"
> postmaster successfully started up.
> $ usage: /home/newsreader/pgsql/bin/postmaster [options]
> -B nbufsset number of shared buffers
> -D datadir  set data directory
> -S  silent mode (disassociate from tty)
> -a system   use this authentication system
> -b backend  use a specific backend server executable
> -d [1-5]set debugging level
> -i  listen on TCP/IP sockets as well as Unix domain socket
> -N nprocs   set max number of backends (1..1024, default 32)
> -n  don't reinitialize shared memory after abnormal exit
> -o option   pass 'option' to each backend servers
> -p port specify port for postmaster to listen on
> -s  send SIGSTOP to all backend servers if one dies
> -
> 
> 
> I've found that 
>   pg_ctl -o "-o -F -S 2048" start
> works as well as 
>   pg_ctl start -o "-o -F -S 2048"
> 
> --
> If you read man page of pg_ctl you will see that
> it is telling you wrong
> 
> 
> 
> 
> 
> On Wed, Feb 21, 2001 at 11:29:30PM -0500, Tom Lane wrote:
> > [EMAIL PROTECTED] writes:
> > > pg_ctl is completely not working for me. I do
> > >   $ pg_ctl -o "-F -S 2048" start
> > > and it keeps telling me I'm not doing it right.
> > 
> > Indeed, you are not.  Try
> > pg_ctl start -o "-F -S 2048"
> > 
> > regards, tom lane



Re: [GENERAL] DBD::Pg is suddenly acting up!

2001-02-21 Thread newsreader

Thank you.  Look what I get..
--
$ pg_ctl start -o "-F -S 2048"
postmaster successfully started up.
$ usage: /home/newsreader/pgsql/bin/postmaster [options]
    -B nbufs        set number of shared buffers
    -D datadir      set data directory
    -S              silent mode (disassociate from tty)
    -a system       use this authentication system
    -b backend      use a specific backend server executable
    -d [1-5]        set debugging level
    -i              listen on TCP/IP sockets as well as Unix domain socket
    -N nprocs       set max number of backends (1..1024, default 32)
    -n              don't reinitialize shared memory after abnormal exit
    -o option       pass 'option' to each backend servers
    -p port         specify port for postmaster to listen on
    -s              send SIGSTOP to all backend servers if one dies
-


I've found that 
pg_ctl -o "-o -F -S 2048" start
works as well as 
pg_ctl start -o "-o -F -S 2048"

--
If you read the man page of pg_ctl, you will see that
it is telling you the wrong thing.





On Wed, Feb 21, 2001 at 11:29:30PM -0500, Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > pg_ctl is completely not working for me. I do
> > $ pg_ctl -o "-F -S 2048" start
> > and it keeps telling me I'm not doing it right.
> 
> Indeed, you are not.  Try
>   pg_ctl start -o "-F -S 2048"
> 
>   regards, tom lane



Re: [GENERAL] Installing DBI client

2001-02-20 Thread newsreader

If you are going to install DBD::Pg, you need the PostgreSQL lib and include
directories on the client machine just to build the module.
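
Once it is built, something along these lines is enough to check that the
client can reach the remote DB (host name, database and credentials are
hypothetical):

--
use DBI;
my $dbh = DBI->connect('dbi:Pg:dbname=template1;host=dbhost',
                       'user', 'password')
    or die "connect failed: $DBI::errstr";
print "connected ok\n";
$dbh->disconnect;
--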

On Tue, Feb 20, 2001 at 04:29:34PM +0100, Jose Manuel Lorenzo Lopez wrote:
> Hello PG's,
> 
> I have a question concerning the DBI module for postgresql.
> 
> I want to use the DBI interface for accessing a remote postgresql DB.
> There is no postgresql installed on the machine I want to use the DBI
> (client), but of course on the DB machine. 
> 
> Which files am I supposed to copy onto the client machine from the
> DB machine, to install and use the DBI interface on the client?
> 
> Thanks a lot in advance for any suggestion!  
> 
> Best Regards / Un saludo / Mit freundlichen Grüßen / Cordiali Saluti
> 
> José Manuel Lorenzo López 
> 
> -- 
> **
> ** José Manuel Lorenzo López**
> **  **
> ** ICA Informationssysteme Consulting & Anwendungsgesellschaft mbH  **
> ** Dept. SAP Basis R/3  VBue**
> **  **
> ** e-mail to: [EMAIL PROTECTED]**
> **