Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2015-08-24 Thread Thor Lancelot Simon
On Thu, Aug 20, 2015 at 04:32:19PM +0100, Greg Stark wrote:
 
 That's the problem. initdb tests how many connections can start up
 when writing the default config. But we assume that each process can
 use up to the rlimit file descriptors without running into a
 system-wide limit.

That sounds like a fairly bogus assumption -- unless the system-wide
limit is to be meaningless.

The default NetBSD limits on the VAX are probably still too low, however.

Thor


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-07-17 Thread Thor Lancelot Simon
On Wed, Jun 25, 2014 at 10:50:47AM -0700, Greg Stark wrote:
 On Wed, Jun 25, 2014 at 10:17 AM, Robert Haas robertmh...@gmail.com wrote:
  Well, the fact that initdb didn't produce a working configuration and
  that make installcheck failed to work properly are bad.  But, yeah,
  it's not totally broken.
 
 Yeah it seems to me that these kinds of autoconf and initdb tests
 failing are different from a platform where the spinlock code doesn't
 work. It's actually valuable to have a platform where people routinely
 trigger those configuration values. If they're broken there's not much
 value in carrying them.

Well, I have to ask this question: why should there be any vax-specific
code?  What facilities beyond what POSIX with the threading extensions
offers on a modern system do you really need?  Why?

It seems to me that NetBSD/vax is a very good platform for testing one's
assumptions about whether one's code is truly portable -- because it is
a moderately weird architecture, with some resource constraints, but with
a modern kernel and runtime offering everything you'd get from a software
point of view on any other platform.

Except, of course, for IEEE floating point, because the VAX's floating point
unit simply does not provide that.  But if other tests fail on the VAX or
one's source tree is littered with any other kind of VAX-specific code or
special cases for VAX, I would submit that this suggests one's code has
fairly serious architectural or implementation-discipline issues.

Thor



Re: [HACKERS] PostgreSQL for VAX on NetBSD/OpenBSD

2014-07-17 Thread Thor Lancelot Simon
On Thu, Jul 17, 2014 at 07:47:28AM -0400, Robert Haas wrote:
 On Wed, Jul 16, 2014 at 11:45 PM, Thor Lancelot Simon t...@panix.com wrote:
  Well, I have to ask this question: why should there be any vax-specific
  code?  What facilities beyond what POSIX with the threading extensions
  offers on a modern system do you really need?  Why?
 
 We have a spinlock implementation.  When spinlocks are not available,
 we have to fall back to using semaphores, which is much slower.

Neither pthread_mutex nor pthread_rwlock suffices?

Is the spinlock implementation in terms of the primitives provided by
atomic.h?  Could it be?  If so there should really be nothing unusual
about the VAX platform except the FPU.

Thor



Re: [HACKERS] PostgreSQL, NetBSD and NFS

2003-02-06 Thread Thor Lancelot Simon
On Wed, Feb 05, 2003 at 03:09:09PM -0500, Tom Lane wrote:
 D'Arcy J.M. Cain [EMAIL PROTECTED] writes:
  On Wednesday 05 February 2003 13:04, Ian Fry wrote:
  How about adjusting the read and write-size used by the NetBSD machine? I
  think the default is 32k for both read and write on i386 machines now.
  Perhaps try setting them back to 8k (it's the -r and -w flags to mount_nfs,
  IIRC)
 
  Hey!  That did it.
 
 Hot diggety!
 
  So, why does this fix it?

Who knows.  One thing that I'd be interested to know is whether Darcy is
using NFSv2 or NFSv3 -- 32k requests are not, strictly speaking, within
the bounds of the v2 specification.  If he is using UDP rather than TCP
as the transport layer, another potential issue is that 32K requests will
end up as IP packets with a very large number of fragments, potentially
exposing some kind of network stack bug in which the last fragment is
dropped or corrupted (I would suspect that the likelihood of such a bug
in the NetApp stack is quite low, however).  If feasible, it is probably
better to use TCP as the transport and let it handle segmentation whether
the request size is 8K or 32K.

 I think now you file a bug report with the NetBSD kernel folk.  My
 thoughts are running in the direction of a bug having to do with
 scattering a 32K read into multiple kernel disk-cache buffers or
 gathering together multiple cache buffer contents to form a 32K write.

That doesn't make much sense to me.  Pages on i386 are 4K, so whether he
does 8K writes or 32K writes, it will always come from multiple pages in
the pagecache.

 Unless NetBSD has changed from its heritage, the kernel disk cache
 buffers are 8K, and so an 8K NFS read or write would never cross a
 cache buffer boundary.  But 32K would.

I don't know what heritage you're referring to, but it has never been
the case that NetBSD's buffer cache has used fixed-size 8K disk buffers,
and I don't believe that it was ever the case for any Net2 or 4.4-derived
system.

 Or it could be a similar bug on the NFS server's side?

That's conceivable.  Of course, a client bug is quite possible, as well,
but I don't think the mechanism you suggest is likely.

-- 
 Thor Lancelot Simon  [EMAIL PROTECTED]
   But as he knew no bad language, he had called him all the names of common
 objects that he could think of, and had screamed: You lamp!  You towel!  You
 plate! and so on.  --Sigmund Freud

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html