Re: [HACKERS] Client/Server compression?

2002-03-17 Thread Lincoln Yeoh

You can also use stunnel for SSL. Preferable to having SSL in postgresql 
I'd think.
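For anyone wanting to try the stunnel route, a minimal client-side configuration might look like this (host names and ports are illustrative, not from this thread):

```ini
; stunnel client-mode sketch: wrap local connections to port 5433 in SSL
; and forward them to the server's stunnel listening on port 5432.
client = yes

[postgres]
accept  = 127.0.0.1:5433
connect = db.example.com:5432
```

The client then simply points psql or libpq at localhost:5433.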

Cheerio,
Link.

At 03:38 PM 3/16/02 -0500, Tom Lane wrote:

FWIW, I was not in favor of the SSL addition either, since (just as you
say) it does nothing that couldn't be done with an SSH tunnel.  If I had



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] Another misinformed article

2002-03-17 Thread Gavin Sherry

http://freshmeat.net/articles/view/426/

This article is quite poorly written. I dare say that I expected more from
people who run a site associated with the categorisation of software (how
can one discuss MySQL, Oracle, Postgres and Access in the same article?).

As a point of reference, however, I think Postgres is chugging along
nicely. I would much prefer the author make his point in diff -c format.


Gavin






Re: [HACKERS] Client/Server compression?

2002-03-17 Thread Tom Lane

Greg Copeland [EMAIL PROTECTED] writes:
 Except we seemingly don't see eye to eye on it.  SSH just is not very
 useful in many situations simply because it may not always be
 available.  Now, bring Win32 platforms into the mix and SSH really isn't
 an option at all...not without bringing extra boxes to the mix.  Ack!

Not so.  See http://www.openssh.org/windows.html.

 If I implement compression between the BE and the FE libpq, does that
 mean that it needs to be added to the other interfaces as well?

Yes.
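As a sketch of what FE/BE stream compression involves (illustrative Python, not libpq code): each direction needs a stateful streaming compressor that is flushed at message boundaries, so the dictionary built on earlier messages keeps paying off on later, similar-looking ones.

```python
import zlib

# Toy model of message-level compression on a protocol stream.  Each side
# keeps a *streaming* (de)compressor so compression state carries across
# messages, which is where the wins on repetitive query traffic come from.
def make_channel():
    comp = zlib.compressobj()
    decomp = zlib.decompressobj()

    def send(payload: bytes) -> bytes:
        # Z_SYNC_FLUSH emits a byte-aligned flush point so the peer can
        # decode this message without waiting for end-of-stream.
        return comp.compress(payload) + comp.flush(zlib.Z_SYNC_FLUSH)

    def recv(wire: bytes) -> bytes:
        return decomp.decompress(wire)

    return send, recv

send, recv = make_channel()
for msg in (b"SELECT * FROM t;", b"SELECT * FROM t WHERE id = 1;"):
    assert recv(send(msg)) == msg
```

Doing this inside the protocol (rather than via ssh) would mean maintaining exactly this kind of state in every client interface, which is the maintenance cost Tom alludes to below.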

 Is there any documentation which covers the current protocol
 implementation?

Yes.  See the protocol chapter in the developer's guide.

 Have you never had to support a database via modem?

Yes.  ssh has always worked fine for me ;-)

 You do realize that this situation
 is more common than you seem to think it is?

I was not the person claiming that low-bandwidth situations are of no
interest.  I was the person claiming that the Postgres project should
not expend effort on coding and maintaining our own solutions, when
there are perfectly good solutions available that we can sit on top of.

Yes, a solution integrated into Postgres would be easier to use and
perhaps a bit more efficient --- but do the incremental advantages of
an integrated solution justify the incremental cost?  I don't think so.
The advantages seem small to me, and the long-term costs not so small.

regards, tom lane




Re: [HACKERS] Time for 7.2.1?

2002-03-17 Thread Tom Lane

I believe we've now committed fixes for all the must fix items there
were for 7.2.1.  Does anyone have any reasons to hold up 7.2.1 more,
or are we ready to go?

regards, tom lane




Re: [HACKERS] Time for 7.2.1?

2002-03-17 Thread Bruce Momjian

Tom Lane wrote:
 I believe we've now committed fixes for all the must fix items there
 were for 7.2.1.  Does anyone have any reasons to hold up 7.2.1 more,
 or are we ready to go?

I need to brand 7.2.1 --- will do tomorrow.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026




Re: [HACKERS] User Level Lock question

2002-03-17 Thread Nicolas Bazin


- Original Message -
From: Lance Ellinghaus [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, March 16, 2002 6:54 AM
Subject: Re: [HACKERS] User Level Lock question


 I know it does not sound like something that would need to be done, but
 here is why I am looking at doing this...

 I am trying to replace a low-level ISAM database with PostgreSQL. The
 low-level ISAM db allows locking a record during a read to allow exclusive
 access to the record for that process. If someone tries to do a READ
 operation on that record, it is skipped. I have to duplicate this
 functionality. The application also allows locking multiple records and
 then unlocking individual records or unlocking all of them at once. This
 cannot be done easily with PostgreSQL unless I add a status field to the
 records and manage them. This can be done, but User Level Locks seem like
 a much better solution: they provide faster locking, no writes to the
 database, all locks are released automatically when the backend quits, and
 I could lock multiple records and then clear them as needed. They also
 exist outside of transactions!

 So my idea was to use User Level Locks on records and then include a test
 of the lock status in my SELECT statements to filter out any records that
 have a User Level Lock on them. I don't need to set the lock during the
 query, just test whether there is a lock, to remove those records from the
 query. When I need to do a true lock during the SELECT, I can do it with
 the supplied routines.

In INFORMIX you have a similar option, except that you can choose whether
the other client blocks or continues; in either case it returns an error
status. You can even set a delay for how long you are willing to be
blocked, and the lock can be set at the database, table, or record level.
We use table locking to speed up some time-consuming processing.
I guess it would be better to have at least an error code returned. The
application can then choose to ignore the error code.

 Does this make any more sense now or have I made it that much more
 confusing?

 Lance

 - Original Message -
 From: Tom Lane [EMAIL PROTECTED]
 To: Lance Ellinghaus [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Friday, March 15, 2002 9:11 AM
 Subject: Re: [HACKERS] User Level Lock question


  Lance Ellinghaus [EMAIL PROTECTED] writes:
   Is there an easy way to test the lock on a user level lock without
 actually
   issuing the lock?
 
  Why would you ever want to do such a thing?  If you test the lock but
  don't actually acquire it, someone else might acquire the lock half a
  microsecond after you look at it --- and then what does your test result
  mean?  It's certainly unsafe to take any action based on assuming that
  the lock is free.
 
  I suspect what you really want is a conditional acquire, which you can
  get (in recent versions) using the dontWait parameter to LockAcquire.
 
  regards, tom lane

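Tom's dontWait suggestion quoted above is the familiar non-blocking lock-attempt pattern; a tiny illustration using plain Python threading (not PostgreSQL internals), where the caller gets a status back instead of blocking:

```python
import threading

record_lock = threading.Lock()

# Conditional acquire: ask for the lock but return a status immediately
# instead of blocking, analogous to LockAcquire(..., dontWait = true).
def try_lock():
    return record_lock.acquire(blocking=False)

assert try_lock() is True    # first caller gets the lock
assert try_lock() is False   # a second attempt reports "busy" at once
record_lock.release()
```

The returned status is meaningful precisely because the attempt and the test are one atomic operation, unlike testing first and acquiring later.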








Re: [HACKERS] insert statements

2002-03-17 Thread Vince Vielhaber

On Fri, 15 Mar 2002, Tom Lane wrote:

 Vince Vielhaber [EMAIL PROTECTED] writes:
  On Fri, 15 Mar 2002, Thomas Lockhart wrote:
  But I *really* don't see the benefit of that table(table.col)
  syntax. Especially when it cannot (?? we need a counterexample) lead to
  any additional interesting beneficial behavior.

  The only benefit I can come up with is existing stuff written under
  the impression that it's acceptable.

 That's the only benefit I can see either --- but it's not negligible.
 Especially not if the majority of other DBMSes will take this syntax.

 I was originally against adding any such thing, but I'm starting to
 lean in the other direction.

 I'd want it to error out on INSERT foo (bar.col), though ;-)

So would I.

Vince.
-- 
==
Vince Vielhaber -- KA8CSH    email: [EMAIL PROTECTED]    http://www.pop4.net
 56K Nationwide Dialup from $16.00/mo at Pop4 Networking
Online Campground Directory    http://www.camping-usa.com
   Online Giftshop Superstore    http://www.cloudninegifts.com
==







[HACKERS] Time zone questions

2002-03-17 Thread Christopher Kings-Lynne

I need to do some timezone manipulation, and I was wondering about this
difference:

australia=# select version();
   version
--
 PostgreSQL 7.1.3 on i386--freebsd4.4, compiled by GCC 2.95.3
(1 row)
australia=# select '2002-03-18 00:00:00' at time zone 'Australia/Sydney';
ERROR:  Time zone 'australia/sydney' not recognized
australia=# set time zone 'Australia/Sydney';
SET VARIABLE
australia=# select '2002-03-18 00:00:00';
  ?column?
-
 2002-03-18 00:00:00
(1 row)


Why can't I use 'australia/sydney' as a time zone in 'at time zone'
notation?  Has it been fixed in 7.2?

Now, say I do this:

select '2002-03-18 00:00:00' at time zone 'AEST';

That will give me aussie eastern time quite happily, but what if I don't
know when summer time starts?  I don't want to have to manually choose
between 'AEST' and 'AESST'???  To me, the way to do this would be to use
'Australia/Sydney' as the time zone, but this doesn't work.

7.2 seems to have the same behaviour...

Chris





Re: [HACKERS] Again, sorry, caching.

2002-03-17 Thread mlw

Andrew Sullivan wrote:
 
 On Sat, Mar 16, 2002 at 09:01:28AM -0500, mlw wrote:
 
  If it is mostly static data, why not just make it a static page?
  Because a static page is a maintenance nightmare. One uses a
  database in a web site to allow content to be changed and upgraded
  dynamically and with a minimum of work.
 
 This seems wrong to me.  Why not build an extra bit of functionality
 so that when the admin makes a static-data change, the new static
 data gets pushed into the static files?
 
 I was originally intrigued by the suggestion you made, but the more I
 thought about it (and read the arguments of others) the more
 convinced I became that the MySQL approach is a mistake.  It's
 probably worth it for their users, who seem not to care that much
 about ACID anyway.  But I think for a system that really wants to
 play in the big leagues, the cache is a big feature that requires a
 lot of development, but which is not adequately useful for most
 cases.  If we had infinite developer resources, it might be worth it.
 In the actual case, I think it's too low a priority.

Again, I can't speak to priority, but I can name a few common applications where
caching would be a great benefit. The more I think about it, the more I like
the idea of a 'cacheable' keyword in the select statement.

My big problem with putting the cache outside of the database is that it is now
incumbent on the applications programmer to write a cache. A database should
manage the data, the application should handle how the data is presented.
Forcing the application to implement a cache feels wrong.




Re: [HACKERS] Again, sorry, caching.

2002-03-17 Thread mlw

Greg Copeland wrote:
 
 On Sat, 2002-03-16 at 08:36, mlw wrote:
  Triggers and asynchronous notification are not substitutes for real hard
  ACID-compliant caching. The way you suggest implies only one access model.
  Take the notion of a library: they have both web and application access.
  These should both be able to use the cache.
 
 
 Well, obviously, you'd need to re-implement the client side cache in
 each implementation of the client.  That is a down side and I certainly
 won't argue that.  As for the no substitute comment, I guess I'll
 plead ignorance because I'm not sure what I'm missing here.  What am I
 missing that would not be properly covered by that model?

It would not be guaranteed to be up to date with the state of the database. By
implementing the cache within the database, PostgreSQL could maintain the
consistency.

 
  Also, your suggestion does not address the sub-select case, which I think is
  much bigger, performance wise, and more efficient than MySQL's cache.
 
 I'm really not sure what you mean by that.  Doesn't address it but is
 more efficient?  Maybe it's because I've not had my morning coffee
 yet... ;)

If an internal caching system could be implemented within PostgreSQL (and trust
me, I understand what a hairball it would be with multiversion concurrency), it
could handle complex queries such as:

select * from (select * from mytable where foo = 'bar' cacheable) as subset
where subset.col = 'value'

The 'cacheable' keyword applied to the subquery would mean that PostgreSQL could
keep that result set handy for later use. If that subselect always forces a
table scan on mytable, caching the subquery result could be a huge win.

As a side note, I REALLY like the idea of a keyword for caching as opposed to
automated caching. It would allow the DBA or developer more control over
PostgreSQL's behavior, and potentially make the feature easier to implement.

 
 
  This whole discussion could be moot, and this could be developed as an
  extension, if there were a function API that could return sets of whole rows.
 
 
Currently a function can only return one value or a setof a single type,
implemented as one function call for each entry in a set. If there could be a
function interface which could return a row, and multiple rows similar to the
'setof' return, that would be very cool. That way caching can be implemented
as:

select * from pgcache('select * from mytable where foo = ''bar''') as subset
where subset.col = 'value';




Re: [HACKERS] Again, sorry, caching.

2002-03-17 Thread mlw

I think the notion that this caching must be managed outside of the database is
bogus. Query caching can improve performance in some specific, but popular,
scenarios. Saying it does not belong within the database and is the job of the
application is like saying file caching is not a job of the file system but of
the application.

This is a functionality many users want, and it can be justified by some very
specific, but very common, scenarios. It is not for me to say whether it is
worth the work, or whether it should be done; I only want to make the argument
that, from the user's perspective, having this capability within the database
is an important feature.

Greg Copeland wrote:
 
 I previously replied to you vaguely describing a way you could implement
 this by using a combination of client side caching and database tables
 and triggers to allow you to determine if your cache is still valid.
 Someone came right behind me, Tom maybe??, and indicated that the
 proper/ideal way to do this would be to use postgres' asynchronous
 database notification mechanisms (listen/notify I believe were the
 semantics) to alert your application that your cache has become
 invalid.  Basically, a couple of triggers and the use of the
 listen/notify model, and you should be all set.
 
 Done properly, a client side cache which is asynchronously notified by
 the database when its contents become invalid should be faster than
 relying on MySQL's database caching scheme.  Basically, a strong client
 side cache is going to prevent your database from even having to return
 a cached result set, while a database side cache is always going to
 return a result set.  Of course, one of the extra cool things you can do
 is cache a gzip'd copy of the data contents, which would further act
 as an optimization preventing the client or web server (in case they are
 different) from having to recompress every result set.
 
 In the long run, again, if properly done, you should be able to beat
 MySQL's implementation without too much extra effort.  Why?  Because a
 client side cache can be much smarter in the way it uses its cached
 contents, much in the same way an application is able to cache its data
 better than the file system can.  This is why a client side cache should
 be preferred over a database result set cache.
 
 Greg
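[Greg's model can be sketched concretely; this toy Python cache stores gzip'd result payloads and drops entries when a (simulated) NOTIFY arrives for a table they touch. The class, its method names, and the notification plumbing are illustrative, not psql/libpq API:]

```python
import json
import zlib

class ClientCache:
    """Client-side result cache invalidated by async notifications,
    e.g. PostgreSQL LISTEN/NOTIFY fired by triggers on table writes."""

    def __init__(self):
        self._store = {}                 # query text -> compressed rows

    def put(self, query, rows):
        # Keep the gzip'd copy so a web tier can ship it to browsers
        # without recompressing every result set.
        self._store[query] = zlib.compress(json.dumps(rows).encode())

    def get(self, query):
        blob = self._store.get(query)
        return None if blob is None else json.loads(zlib.decompress(blob))

    def on_notify(self, table):
        # A NOTIFY for `table` drops every cached query naming that table
        # (a crude dependency test -- real code would track tables per query).
        self._store = {q: b for q, b in self._store.items()
                       if table not in q}

cache = ClientCache()
cache.put("select * from news", [[1, "headline"]])
assert cache.get("select * from news") == [[1, "headline"]]
cache.on_notify("news")                  # a write to `news` invalidates
assert cache.get("select * from news") is None
```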

