pci_mapreg_map() with disabled device

2011-12-01 Thread Frank Wille
Hi!

Is there a reason why pci_mapreg_map() calls bus_space_map() for a memory
or i/o space, even when all spaces have been disabled on the device?

Yesterday I had a case where an ohci(4) device had no PCI_COMMAND_IO_ENABLE
and no PCI_COMMAND_MEM_ENABLE bit set in the PCI_COMMAND_STATUS_REG.
Nevertheless, pci_mapreg_map() tried to map an illegal address space,
which resulted in a panic when running a kernel with the DEBUG option:

ohci0 at pci1 dev 25 function 0: Apple Computer Intrepid USB Controller (rev. 0x00)
extent_alloc_region: extent 'uninorth mem-space' (0x8000 - 0xf3ff)
extent_alloc_region: start 0x0, end 0xfff
panic: extent_alloc_region: region lies outside extent

Wouldn't it be better to check whether a space is enabled in the device,
before trying to map it?

-- 
Frank Wille
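
A rough sketch of the check being proposed here, using the standard pci(9)
macros (illustrative only, not the actual pci_mapreg_map() code; "pa" and
"type" stand for pci_mapreg_map()'s arguments):

	pcireg_t csr;

	csr = pci_conf_read(pa->pa_pc, pa->pa_tag, PCI_COMMAND_STATUS_REG);
	if (PCI_MAPREG_TYPE(type) == PCI_MAPREG_TYPE_IO) {
		if ((csr & PCI_COMMAND_IO_ENABLE) == 0)
			return EINVAL;	/* I/O space decoding disabled */
	} else {
		if ((csr & PCI_COMMAND_MEM_ENABLE) == 0)
			return EINVAL;	/* memory space decoding disabled */
	}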


Re: pci_mapreg_map() with disabled device

2011-12-01 Thread Jonathan A. Kollasch
On Thu, Dec 01, 2011 at 10:12:44AM +0100, Frank Wille wrote:
> Is there a reason why pci_mapreg_map() calls bus_space_map() for a memory
> or i/o space, even when all spaces have been disabled on the device?

Not necessarily; on x86 it's not uncommon for the I/O or memory enable
bit to be set wrong by the firmware, sometimes even when the firmware
has allocated resources to the BARs.

> Wouldn't it be better to check whether a space is enabled in the device,
> before trying to map it?

You mean we don't anymore?  I thought that didn't change.

Jonathan Kollasch


Re: pci_mapreg_map() with disabled device

2011-12-01 Thread Frank Wille
On Thu, 1 Dec 2011 13:06:33 +
"Jonathan A. Kollasch"  wrote:

> On Thu, Dec 01, 2011 at 10:12:44AM +0100, Frank Wille wrote:
> > Is there a reason why pci_mapreg_map() calls bus_space_map() for a
> > memory or i/o space, even when all spaces have been disabled on the
> > device?
> 
> Not necessarily; on x86 it's not uncommon for the I/O or memory enable
> bit to be set wrong by the firmware, sometimes even when the firmware
> has allocated resources to the BARs.

Hmm, that's bad.
Maybe we could additionally check whether the BAR is 0?


> > Wouldn't it be better to check whether a space is enabled in the
> > device, before trying to map it?
> 
> You mean we don't anymore?  I thought that didn't change.

There is no such check in pci_mapreg_map().


-- 
Frank Wille
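
A sketch of the additional test suggested above: treat an all-zero BAR as
"no resource assigned", even if the enable bit happens to be set.  Again
illustrative only, not existing pci_mapreg_map() code; "reg" is the BAR
offset passed to pci_mapreg_map():

	pcireg_t bar;

	bar = pci_conf_read(pa->pa_pc, pa->pa_tag, reg);
	if (PCI_MAPREG_TYPE(bar) == PCI_MAPREG_TYPE_IO) {
		if (PCI_MAPREG_IO_ADDR(bar) == 0)
			return EINVAL;	/* BAR was never programmed */
	} else {
		if (PCI_MAPREG_MEM_ADDR(bar) == 0)
			return EINVAL;	/* BAR was never programmed */
	}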


USB contigmalloc()

2011-12-01 Thread Thomas Klausner
Matthew Dillon implemented a solution for the USB/graphics-card problem
stemming from the fact that, some time after boot, they can't
acquire enough contiguous memory.

http://thread.gmane.org/gmane.os.dragonfly-bsd.kernel/14431

Anyone interested in porting it or implementing something similar? :)
 Thomas


Re: USB contigmalloc()

2011-12-01 Thread Adam Hoka
On 12/1/2011 5:33 PM, Thomas Klausner wrote:
> Matthew Dillon implemented a solution for the USB/graphics-card problem
> stemming from the fact that, some time after boot, they can't
> acquire enough contiguous memory.
> 
> http://thread.gmane.org/gmane.os.dragonfly-bsd.kernel/14431
> 
> Anyone interested in porting it or implementing something similar? :)
>  Thomas

Do we have the problem?



Re: USB contigmalloc()

2011-12-01 Thread Izumi Tsutsui
wiz@ wrote:

> Matthew Dillon implemented a solution for the USB/graphics-card problem
> stemming from the fact that, some time after boot, they can't
> acquire enough contiguous memory.
> 
> http://thread.gmane.org/gmane.os.dragonfly-bsd.kernel/14431
> 
> Anyone interested in porting it or implementing something similar? :)

I think allocating large, physically contiguous memory is a bad design,
and I doubt that reserving 16MB of physical memory at boot time is
acceptable for all machines with USB.

Anyway, our udl(4) currently doesn't require such an allocation,
but it has other problems:
http://cvsweb.NetBSD.org/bsdweb.cgi/src/sys/dev/usb/udl.c#rev1.1

---
Izumi Tsutsui


Re: USB contigmalloc()

2011-12-01 Thread Jonathan A. Kollasch
On Thu, Dec 01, 2011 at 05:33:27PM +0100, Thomas Klausner wrote:
> Matthew Dillon implemented a solution for the USB/graphics-card problem
> stemming from the fact that, some time after boot, they can't
> acquire enough contiguous memory.
> 
> http://thread.gmane.org/gmane.os.dragonfly-bsd.kernel/14431
> 
> Anyone interested in porting it or implementing something similar? :)
>  Thomas

We have it in the USB stack already, though some cardbus and pci drivers
could still use something similar.

It's really Not Good Enough.  These drivers really need to grow proper
scatter/gather support.

Jonathan Kollasch
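
For reference, a minimal sketch of what scatter/gather support means in
bus_dma(9) terms: passing nsegs > 1 to bus_dmamem_alloc() allows the
allocation to be satisfied by several discontiguous physical chunks
instead of one large contiguous block.  The softc field and sizes below
are hypothetical.

	bus_dma_segment_t segs[8];
	int rsegs, error;

	/* 64 KiB of DMA-able memory in up to 8 discontiguous segments. */
	error = bus_dmamem_alloc(sc->sc_dmat, 64 * 1024, PAGE_SIZE, 0,
	    segs, 8, &rsegs, BUS_DMA_WAITOK);
	/* ... then bus_dmamap_create()/bus_dmamap_load() to hand the
	   resulting segment list to hardware that can do scatter/gather. */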


Re: USB contigmalloc()

2011-12-01 Thread Francois Tigeot
On Thu, Dec 01, 2011 at 05:11:36PM +, Jonathan A. Kollasch wrote:
> On Thu, Dec 01, 2011 at 05:33:27PM +0100, Thomas Klausner wrote:
> > Matthew Dillon implemented a solution for the USB/graphics-card problem
> > stemming from the fact that, some time after boot, they can't
> > acquire enough contiguous memory.
> 
> It's really Not Good Enough.  These drivers really need to grow proper
> scatter/gather support.

IIRC, contigmalloc was implemented mainly for the radeon drm driver.

-- 
Francois Tigeot


Re: USB contigmalloc()

2011-12-01 Thread Thomas Klausner
On Fri, Dec 02, 2011 at 01:51:42AM +0900, Izumi Tsutsui wrote:
> Anyway, our udl(4) currently doesn't require such an allocation,

It's also a problem for radeondrm (no USB involved).
 Thomas


Re: SHMMAX runtime setting for Postgres Installation

2011-12-01 Thread Emmanuel Kasper
On 24/11/2011 19:21, Eric Haszlakiewicz wrote:
> On Thu, Nov 24, 2011 at 04:57:39PM +0100, Emmanuel Kasper wrote:
>> Is that Postgres Documentation outdated ?
> 
> Yes.  If you have a few minutes, it would be great if you could submit
> a request with the Postgres guys to have them update it.
> 
> eric
With the help of Marc Balmer, I wrote to the Postgres team, and they
updated the documentation to mention the sysctl call for NetBSD > 5.0:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=bc9306f4c5e55042e20c8d8b308e573478f26e34
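
For anyone hitting the same limit, the runtime setting the updated
documentation refers to is along these lines (the value is only an
example):

	# sysctl -w kern.ipc.shmmax=1073741824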



Re: pool_cache interface and memory constraints

2011-12-01 Thread Sverre Froyen
On 2011-11-28 at 15:14, David Young wrote:
> On Sat, Nov 26, 2011 at 11:04:56AM -0700, Sverre Froyen wrote:
> > Ideally, we would need separate pool caches for PCI, ISA, etc., each with
> > its own set of constraints. These constraints are part of the DMA tag
> > structures. It would therefore be nice to be able to access the pool
> > cache by the DMA tag and the size rather than a pool cache name string.
> > Should pool cache (and pool) be extended to support this?
> 
> It may help to have a look at the way that I handle this in ixgbe(4).
> Start at ixgbe_jcl_reinit() and then have a look at ixgbe_getjcl().
> 
> Many improvements to ixgbe(4)'s scheme can be made.  For example,
> instead of a freelist protected by a mutex (ixgbe_extmem_head_t), a
> pool_cache of ixgbe_extmem_t's could be used.  Also, the system should
> arbitrate between drivers for access to DMA buffers---that would be a
> big improvement over the current mbuf-cluster scheme where a few driver
> instances can starve all others of clusters.

Thanks for the comments! I decided to try to write some code (attached)
to illustrate my ideas. The code is incomplete and untested but I think
that it can provide an idea of what I'm thinking about. I would appreciate 
comments about its feasibility. In particular, will pool_cache and bus_dma 
work together the way I have it?

Individual drivers would request the pools they need in their attach methods
and pass the pool handles around as needed.

Regards,
Sverre
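
To make the intended usage concrete, a driver might use the interface
proposed in the attachment below roughly as follows.  The sc_buf_cache
field and the MCLBYTES buffer size are hypothetical; "pa" is the usual
pci_attach_args and PR_NOWAIT the usual pool flag.

	/* In the attach method: one DMA-safe cache per (DMA tag, size). */
	sc->sc_buf_cache = mbuf_pool_cache_init(pa->pa_dmat, MCLBYTES);

	/* In the receive path: a buffer plus its physical address. */
	paddr_t pa_buf;
	void *buf = mbuf_pool_cache_get_paddr(sc->sc_buf_cache, PR_NOWAIT,
	    &pa_buf);
	if (buf != NULL) {
		/* ... hand pa_buf to the hardware, wrap buf in an mbuf ... */
		mbuf_pool_cache_put_paddr(sc->sc_buf_cache, buf, pa_buf);
	}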

/*

Mbuf_pool_cache_init sets up a DMA-safe pool_cache for the
specified bus and size. The pool_cache will use bus_dmamem_alloc
as its memory allocator. Mbuf_pool_cache_init may be called
multiple times for a given bus and size. Subsequent calls
return the original pool_cache and increment a reference count.
Mbuf_pool_cache_init should be called from bus or device attach
methods as needed.

Mbuf_pool_cache_destroy should similarly be called from a bus or
device detach method.  The reference counter is used to destroy
the pool_cache when appropriate.

*/

#include <sys/param.h>
#include <sys/queue.h>
#include <sys/mutex.h>
#include <sys/pool.h>
#include <sys/bus.h>

/* The mbuf_pool_item list */
static TAILQ_HEAD(, mbuf_pool_item) mbuf_pool_head =
    TAILQ_HEAD_INITIALIZER(mbuf_pool_head);

struct mbuf_pool_item {
	TAILQ_ENTRY(mbuf_pool_item) mbuf_pool_list;
	bus_dma_tag_t mpi_bus_tag;
	unsigned int mpi_size;
	char *mpi_name;
	pool_cache_t mpi_pc;
	unsigned int mpi_refcnt;
};

static bool mbuf_pool_initialized = false;
static kmutex_t mbuf_pool_lock;

static struct pool_allocator mbuf_pool_allocator;

#define MBUF_POOL_TO_MPI(pool) ((struct mbuf_pool_item *)(pool->pr_qcache))

struct mbuf_pool_item *
mbuf_pool_get_pool_item(pool_cache_t pc, bus_dma_tag_t tag, unsigned int size);

char *
mbuf_pool_get_pool_name(bus_dma_tag_t tag, unsigned int size);

pool_cache_t
mbuf_pool_cache_init(bus_dma_tag_t tag, unsigned int size);

void
mbuf_pool_cache_destroy(pool_cache_t pc);

void *
mbuf_pool_cache_get_paddr(pool_cache_t pc, int flags, paddr_t *pap);

void
mbuf_pool_cache_put_paddr(pool_cache_t pc, void *object, paddr_t pa);

/*
 * Custom pool alloc and free methods.
 */

static void *
mbuf_pool_poolpage_alloc(struct pool *pool, int prflags)
{
	struct mbuf_pool_item *mpi;
	int error, nsegs;
	bus_dma_segment_t seg;
	void *kva;

	mpi = MBUF_POOL_TO_MPI(pool);

	/* XXX verify alignment arg (mpi->mpi_size) */
	error = bus_dmamem_alloc(mpi->mpi_bus_tag, pool->pr_alloc->pa_pagesz,
	    mpi->mpi_size, 0, &seg, 1, &nsegs, BUS_DMA_NOWAIT);
	if (error != 0 || nsegs != 1)
		return NULL;

	/* Map the segment; the pool needs a kernel virtual address, not a
	   pointer to the on-stack segment.  XXX the segment must be kept
	   somewhere so mbuf_pool_poolpage_free() can unmap and free it. */
	error = bus_dmamem_map(mpi->mpi_bus_tag, &seg, 1,
	    pool->pr_alloc->pa_pagesz, &kva, BUS_DMA_NOWAIT);
	if (error != 0) {
		bus_dmamem_free(mpi->mpi_bus_tag, &seg, 1);
		return NULL;
	}
	return kva;
}

static void
mbuf_pool_poolpage_free(struct pool *pool, void *addr)
{
	struct mbuf_pool_item *mpi;

	mpi = MBUF_POOL_TO_MPI(pool);

	/* ... */
}

/*
 * Return the mbuf_pool_item struct that matches pc or tag and size.
 * Must be called with mutex held.
 */

struct mbuf_pool_item *
mbuf_pool_get_pool_item(pool_cache_t pc, bus_dma_tag_t tag, unsigned int size)
{
	struct mbuf_pool_item *mpi = NULL, *mpi1;

	TAILQ_FOREACH(mpi1, &mbuf_pool_head, mbuf_pool_list) {
		if (mpi1->mpi_pc == pc ||
		    (mpi1->mpi_size == size && mpi1->mpi_bus_tag == tag)) {
			mpi = mpi1;
			break;
		}
	}

	return mpi;
}

char *
mbuf_pool_get_pool_name(bus_dma_tag_t tag, unsigned int size)
{
	/* ... */
	return NULL;
}

pool_cache_t
mbuf_pool_cache_init(bus_dma_tag_t tag, unsigned int size)
{
	pool_cache_t pc = NULL;
	char *name;
	struct mbuf_pool_item *mpi;

	if (! mbuf_pool_initialized) {
		/* XXX Racy code. Need a proper constructor? */
		/* XXX IPL_NONE implies: cannot use in
		   an interrupt handler. Verify! */
		mutex_init(&mbuf_pool_lock, MUTEX_DEFAULT, IPL_NONE);
		mbuf_pool_initialized = true;
	}

	mutex_enter(&mbuf_pool_lock);