Re: [RFC] PCI bridge driver rewrite
On Thu Feb 24 2005 - 01:33:38 Adam Belay wrote:

> The basic flow of the new code is as follows:
> 1.) A standard "driver core" driver binds to a bridge device.
> 2.) When "*probe" is called it sets up the hardware and allocates a
> "struct pci_bus".
> 3.) The "struct pci_bus" is filled with information about the detected
> bridge.
> 4.) The driver then registers the "struct pci_bus" with the PCI Bus
> Class.
> 5.) The PCI Bus Class makes the bridge available to sysfs.
> 6.) It then detects hardware attached to the bridge.
> 7.) Each new PCI bridge device is registered with the driver model.
> 8.) All remaining PCI devices are registered with the driver model.
>
> +static void pci_enable_crs(struct pci_dev *dev)
> +{
> +	u16 cap, rpctl;
> +	int rpcap = pci_find_capability(dev, PCI_CAP_ID_EXP);
> +	if (!rpcap)
> +		return;
> +
> +	pci_read_config_word(dev, rpcap + PCI_CAP_FLAGS, &cap);
> +	if (((cap & PCI_EXP_FLAGS_TYPE) >> 4) != PCI_EXP_TYPE_ROOT_PORT)
> +		return;
> +
> +	pci_read_config_word(dev, rpcap + PCI_EXP_RTCTL, &rpctl);
> +	rpctl |= PCI_EXP_RTCTL_CRSSVE;
> +	pci_write_config_word(dev, rpcap + PCI_EXP_RTCTL, rpctl);
> +}

Adam,

We need to coordinate your work with the PCI Express Port Bus driver that
was accepted into the 2.6.x kernel.  The PCI Express Port Bus driver
claims all PCI-PCI bridges that implement the PCI Express Capability
Structure.  Please refer to PCIEBUS-HOWTO.txt for why we developed the
PCI Express Port Bus driver to support PCI Express features.

Your current patch will claim PCI Express root ports, preventing the PCI
Express Port Bus driver from loading.  Given the many advanced features
of PCI Express, a separate bus driver was required.  Can you change the
patch so it only loads on standard PCI bridges and not PCI Express
devices?

Thanks,
Long
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [RFC] PCI bridge driver rewrite
On Monday, February 28, 2005 4:13 pm, Adam Belay wrote:
> On Mon, 2005-02-28 at 15:38 -0800, Jesse Barnes wrote:
> > On Monday, February 28, 2005 3:27 pm, Adam Belay wrote:
> > > How can we specify which bus to target?
> >
> > Maybe we could have a list of legacy (ISA?) devices for drivers like
> > vgacon to attach to?  The bus info could be stuffed into the legacy
> > device structure itself so that the platform code would know what to do.
>
> Are these devices actually legacy, or PCI with compatibility interfaces?

So far I've only tried VGA cards, like radeons and r128s.

> I think a "struct isa_device" would be useful.  Would a pointer to
> the "struct pci_bus" do the trick?

Yeah, that would work for me.

> I was just wondering if we have to reserve a memory range for this?

Sure, each bus can have that address range reserved.  The ia64 specific
HAVE_PCI_LEGACY code (in arch/ia64/sn/pci/pci_dma.c I think) might
illuminate things a bit.  Basically, each bus has legacy base addresses;
we could reserve 64k for port space and 1M for memory.

Jesse
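The per-bus reservation Jesse describes could look roughly like the sketch below. This is illustrative only and will not build outside a kernel tree; in particular, `legacy_io_base` and `legacy_mem_base` are invented field names standing in for whatever per-bus legacy base addresses the platform code would provide, not real `struct pci_bus` members.

```c
/* Hypothetical sketch: claim each bus's legacy windows -- 64K of port
 * space and 1M of ISA memory -- so nothing else grabs them.  The
 * legacy_io_base/legacy_mem_base fields are assumptions. */
static int reserve_legacy_ranges(struct pci_bus *bus)
{
	if (!request_region(bus->legacy_io_base, 0x10000, "legacy I/O"))
		return -EBUSY;
	if (!request_mem_region(bus->legacy_mem_base, 0x100000, "legacy mem")) {
		/* undo the port-space claim on failure */
		release_region(bus->legacy_io_base, 0x10000);
		return -EBUSY;
	}
	return 0;
}
```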
Re: [RFC] PCI bridge driver rewrite
On Mon, 2005-02-28 at 15:38 -0800, Jesse Barnes wrote:
> On Monday, February 28, 2005 3:27 pm, Adam Belay wrote:
> > How can we specify which bus to target?
>
> Maybe we could have a list of legacy (ISA?) devices for drivers like
> vgacon to attach to?  The bus info could be stuffed into the legacy
> device structure itself so that the platform code would know what to do.

Are these devices actually legacy, or PCI with compatibility interfaces?

I think a "struct isa_device" would be useful.  Would a pointer to the
"struct pci_bus" do the trick?

> > Also is the legacy IO space mapped to IO Memory on the other side of
> > the bridge?
>
> How do you mean?  Legacy I/O port accesses just become strongly ordered
> memory transactions, afaik, and legacy memory accesses are dealt with
> the same way.
>
> Jesse

I was just wondering if we have to reserve a memory range for this?

Thanks,
Adam
Re: [RFC] PCI bridge driver rewrite
On Fri, 2005-02-25 at 15:38 -0800, Greg KH wrote:
> On Thu, Feb 24, 2005 at 01:22:01AM -0500, Adam Belay wrote:
> > I look forward to any comments or suggestions.
>
> I like it all :)
>
> If you want to submit patches now that rearrange the code to make it
> easier for you to modify in the future to achieve the above goals, feel
> free, I'll gladly take them.
>
> thanks,
>
> greg k-h

I'm going to do an updated release soon.  It should take care of some of
the issues on the TODO list and also will be based on previous feedback.
From there, I'll start planning a strategy for merging with mainline.

I appreciate the comments.

Thanks,
Adam
Re: [RFC] PCI bridge driver rewrite
On Thu, 2005-02-24 at 10:03 +0000, Russell King wrote:
> On Thu, Feb 24, 2005 at 01:22:01AM -0500, Adam Belay wrote:
> > 5.) write a bridge driver for Cardbus hardware
>
> We have this already - it's called "yenta".

Yes, I'm aware.  It should read:

5.) adapt the Yenta driver to the new PCI bus class :)

> What you need to be aware of is that cardbus hardware is special - it
> may change its resource requirements at any time, both in terms of the
> number of BUS IDs it wishes to consume, and the number and size of
> IO and memory resources.

We can have default sizes allocated for these windows.  Maybe we'll even
have rebalancing at some point.

As for BUS IDs, I'm not sure about the best behavior.  I don't really
like reserving 4 positions like we do now.  It has a tendency to create
conflicts, and seems to be unnecessary.  How common are PCI bridge
devices that attach to cardbus controllers?  Does the BIOS ever
preconfigure the cardbus bridge for this situation?  I think it's
important that we get bus numbering correct.  Some hardware has problems
now.

> Note also that if a cardbus bridge isn't on the root bus (it happens on
> some laptops) these resource changes may impact on upstream bridges and
> devices.

Yeah, also legacy resources can't pass through properly if the parent
bridge isn't transparent.  Complex bus topologies make the problem much
more difficult when legacy hardware is involved.

Thanks,
Adam
Re: [RFC] PCI bridge driver rewrite
On Thu, 2005-02-24 at 02:25 -0500, Jon Smirl wrote:
> When you start writing the PCI root bridge driver you'll run into the
> AGP drivers that are already attached to the bridge.  I was surprised
> by this since I expected AGP to be attached to the AGP bridge but now
> I learned that it is a root bridge function.

I'm going to have the PCI root bridge driver bind to a device on the
primary side of the bridge.  The device could be enumerated by ACPI or
created manually when the bridge is detected.  It will not, however, be
a PCI device.

> An ISA LPC bridge driver would be nice too.  It would let you turn off
> serial ports, etc. and let other systems know how many ports there are.
> No real need for this, just a nice toy.

I think this would make a lot of sense.  ACPI could be used to enumerate
child devices for this bridge.  I'd like to begin work on a generic ISA
bus driver soon.

> Does this work to cause a probe based on PCI class?
>
> static struct pci_device_id p2p_id_tbl[] = {
>	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI << 8, 0x00) },
>	{ 0 },
> };

Yes, the macro is used when matching against only a class of device.

> I would like to install a driver that gets called whenever new
> CLASS_VGA hardware shows up via hotplug.  It won't attach to the
> device, it will just add some sysfs attributes.  The framebuffer
> drivers need to attach the device.  If I add attributes this way how
> can I remove them?

It would be possible, but probably not a clean solution.  Ideally we
want one driver to bind to the graphics controller and remain bound.
It will then create class devices for each graphics subsystem, such as
framebuffer.  Much work remains to be done before this can happen.

Thanks,
Adam
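On Jon's "how can I remove them?" question: attributes added to a device can be removed symmetrically. device_create_file() and device_remove_file() are the real 2.6 driver-core calls for this; everything else in the sketch below (the attribute, its value, and the vga_device_* hook names) is invented for illustration, not code from the patch:

```c
/* Hypothetical sysfs attribute for a CLASS_VGA device */
static ssize_t vga_routing_show(struct device *dev, char *buf)
{
	return sprintf(buf, "default\n");	/* placeholder value */
}
static DEVICE_ATTR(vga_routing, S_IRUGO, vga_routing_show, NULL);

/* called when a CLASS_VGA device appears (hook name invented) */
static void vga_device_added(struct pci_dev *pdev)
{
	device_create_file(&pdev->dev, &dev_attr_vga_routing);
}

/* symmetric removal when the device goes away */
static void vga_device_removed(struct pci_dev *pdev)
{
	device_remove_file(&pdev->dev, &dev_attr_vga_routing);
}
```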
Re: [RFC] PCI bridge driver rewrite
On Monday, February 28, 2005 3:27 pm, Adam Belay wrote:
> How can we specify which bus to target?

Maybe we could have a list of legacy (ISA?) devices for drivers like
vgacon to attach to?  The bus info could be stuffed into the legacy
device structure itself so that the platform code would know what to do.

> Also is the legacy IO space mapped to IO Memory on the other side of
> the bridge?

How do you mean?  Legacy I/O port accesses just become strongly ordered
memory transactions, afaik, and legacy memory accesses are dealt with
the same way.

Jesse
Re: [RFC] PCI bridge driver rewrite
On Thu, 2005-02-24 at 15:02 -0800, Jesse Barnes wrote:
> On Wednesday, February 23, 2005 11:03 pm, Adam Belay wrote:
> > > Jesse can comment on the specific support needed for multiple legacy
> > > IO spaces.
> >
> > That would be great.  Most of my experience has been with only a
> > couple legacy IO port ranges passing through the bridge.
>
> Well, I'll give you one, somewhat perverse, example.  On SGI sn2
> machines, each host<->pci bridge (either xio<->pci or numalink<->pci)
> has two pci busses and some additional host bus ports.  The bridges are
> capable of generating low address bus cycles on both busses
> simultaneously, so we can do ISA memory access and legacy port I/O on
> every bus in the system at the same time.
>
> The main host chipset has no notion of VGA or legacy routing though, so
> doing a port access to say 0x3c8 is ambiguous--we need a bus to target
> (though the platform code could provide a 'default' bus for such
> accesses to go to, this may be what VGA or legacy routing means for us
> under your scheme).  Likewise, accessing ISA memory space like 0xa0000
> needs a bus to target.
>
> It would be nice if this sort of thing was taken into account in your
> new model, so that for example we could have the vgacon driver talking
> to multiple different VGA cards at the same time.
>
> Thanks,
> Jesse

How can we specify which bus to target?  Also, is the legacy IO space
mapped to IO Memory on the other side of the bridge?

Thanks,
Adam
Re: [RFC] PCI bridge driver rewrite
On Thu, Feb 24, 2005 at 01:22:01AM -0500, Adam Belay wrote:
> Hi all,
>
> For the past couple weeks I have been reorganizing the PCI subsystem to
> better utilize the driver model.  Specifically, the bus detection code
> is now using a standard PCI driver.  It turns out to be a major
> undertaking, as the PCI probing code is closely tied into a lot of other
> PCI components, and is spread throughout various architecture specific
> areas.  I'm hoping that these changes will allow for a much cleaner and
> more functional PCI implementation.
>
> The basic flow of the new code is as follows:
> 1.) A standard "driver core" driver binds to a bridge device.
> 2.) When "*probe" is called it sets up the hardware and allocates a
> "struct pci_bus".
> 3.) The "struct pci_bus" is filled with information about the detected
> bridge.
> 4.) The driver then registers the "struct pci_bus" with the PCI Bus
> Class.
> 5.) The PCI Bus Class makes the bridge available to sysfs.
> 6.) It then detects hardware attached to the bridge.
> 7.) Each new PCI bridge device is registered with the driver model.
> 8.) All remaining PCI devices are registered with the driver model.
>
> Steps 7 and 8 allow for better resource management.
>
> I've attached an early version of my code.  It has most of the new PCI
> bus class registration code in place, and an early implementation of the
> PCI-to-PCI bridge driver.  The following remains to be done:
>
> 1.) refine and cleanup the new PCI Bus API
> 2.) export the new API in "linux/pci.h", and cleanup any users of the
> old code.
> 3.) fix every PCI hotplug driver.
> 4.) write a bridge driver for the PCI root bridge
> 5.) write a bridge driver for Cardbus hardware
> 6.) refine device registration order
> 7.) redesign PCI bus number assignment and support bus renumbering
> 8.) redesign PCI resource management to be compatible with the new code
> 9.) testing on various architectures
> 10.) Write "*suspend" and "*resume" routines for PCI bridges.  Any
> ideas on what needs to be done?
> 11.) fix "PCI_LEGACY" (I may have broke it, but it should be trivial)
>
> I look forward to any comments or suggestions.

I like it all :)

If you want to submit patches now that rearrange the code to make it
easier for you to modify in the future to achieve the above goals, feel
free, I'll gladly take them.

thanks,

greg k-h
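Steps 1-4 of the flow quoted above could be sketched roughly as below. This is a kernel-style illustration, not code from the attached patch: pci_bus_class_register() stands in for whatever registration call the RFC's PCI Bus Class actually exports, and error handling is abbreviated.

```c
/* Sketch of steps 1-4; pci_bus_class_register() is a hypothetical API. */
static int p2p_bridge_probe(struct pci_dev *dev,
			    const struct pci_device_id *id)
{
	struct pci_bus *bus;

	/* step 2: set up the hardware and allocate the bus */
	pci_enable_device(dev);
	bus = kmalloc(sizeof(*bus), GFP_KERNEL);
	if (!bus)
		return -ENOMEM;
	memset(bus, 0, sizeof(*bus));

	/* step 3: fill in what was detected about the bridge */
	bus->self = dev;
	bus->parent = dev->bus;

	/* step 4: hand the bus to the PCI Bus Class, which then performs
	 * steps 5-8 (sysfs, child detection, device registration) */
	return pci_bus_class_register(bus);
}
```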
Re: [RFC] PCI bridge driver rewrite
On Wednesday, February 23, 2005 11:03 pm, Adam Belay wrote:
> Yeah, actually I've been thinking about this issue a lot.  I think it
> would make a lot of sense to export this sort of thing under the
> "pci_bus" class in sysfs.  The ISA enable bit should probably also be
> exported.  Furthermore, we should be verifying the BIOS's configuration
> of VGA and ISA.  I'll try to integrate this in my future releases.  I
> appreciate the code.
>
> I also have a number of resource management plans for the VGA enable
> bit that I'll get into in my next set of patches.

Keep in mind that the interface above is probably specific to PCI to PCI
bridges since there's a spec for that.  Host to PCI bridges may implement
their own methods for VGA routing and legacy port access.

> > Jesse can comment on the specific support needed for multiple legacy
> > IO spaces.
>
> That would be great.  Most of my experience has been with only a couple
> legacy IO port ranges passing through the bridge.

Well, I'll give you one, somewhat perverse, example.  On SGI sn2
machines, each host<->pci bridge (either xio<->pci or numalink<->pci)
has two pci busses and some additional host bus ports.  The bridges are
capable of generating low address bus cycles on both busses
simultaneously, so we can do ISA memory access and legacy port I/O on
every bus in the system at the same time.

The main host chipset has no notion of VGA or legacy routing though, so
doing a port access to say 0x3c8 is ambiguous--we need a bus to target
(though the platform code could provide a 'default' bus for such
accesses to go to, this may be what VGA or legacy routing means for us
under your scheme).  Likewise, accessing ISA memory space like 0xa0000
needs a bus to target.

It would be nice if this sort of thing was taken into account in your
new model, so that for example we could have the vgacon driver talking
to multiple different VGA cards at the same time.

Thanks,
Jesse
Re: [RFC] PCI bridge driver rewrite
On Thu, Feb 24, 2005 at 01:22:01AM -0500, Adam Belay wrote:
> 5.) write a bridge driver for Cardbus hardware

We have this already - it's called "yenta".

What you need to be aware of is that cardbus hardware is special - it
may change its resource requirements at any time, both in terms of the
number of BUS IDs it wishes to consume, and the number and size of IO
and memory resources.

Note also that if a cardbus bridge isn't on the root bus (it happens on
some laptops) these resource changes may impact on upstream bridges and
devices.

--
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of:  2.6 PCMCIA      - http://pcmcia.arm.linux.org.uk/
                 2.6 Serial core
Re: [RFC] PCI bridge driver rewrite
On Thu, Feb 24, 2005 at 01:22:01AM -0500, Adam Belay wrote: 5.) write a bridge driver for Cardbus hardware We have this already - it's called yenta. What you need to be aware of is that cardbus hardware is special - it may change its resource requirements at any time, both in terms of the number of BUS IDs it wishes to consume, and the number and size of IO and memory resources. Note also that if a cardbus bridge isn't on the root bus (it happens on some laptops) these resource changes may impact on upstream bridges and devices. -- Russell King Linux kernel2.6 ARM Linux - http://www.arm.linux.org.uk/ maintainer of: 2.6 PCMCIA - http://pcmcia.arm.linux.org.uk/ 2.6 Serial core - To unsubscribe from this list: send the line unsubscribe linux-kernel in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [RFC] PCI bridge driver rewrite
On Wednesday, February 23, 2005 11:03 pm, Adam Belay wrote:
> Yeah, actually I've been thinking about this issue a lot.  I think it
> would make a lot of sense to export this sort of thing under the
> "pci_bus" class in sysfs.  The ISA enable bit should probably also be
> exported.  Furthermore, we should be verifying the BIOS's configuration
> of VGA and ISA.  I'll try to integrate this in my future releases.  I
> appreciate the code.  I also have a number of resource management plans
> for the VGA enable bit that I'll get into in my next set of patches.

Keep in mind that the interface above is probably specific to PCI-to-PCI
bridges, since there's a spec for that.  Host-to-PCI bridges may implement
their own methods for VGA routing and legacy port access.

> > Jesse can comment on the specific support needed for multiple legacy
> > IO spaces.
>
> That would be great.  Most of my experience has been with only a couple
> legacy IO port ranges passing through the bridge.

Well, I'll give you one, somewhat perverse, example.  On SGI sn2 machines,
each host-pci bridge (either xio-pci or numalink-pci) has two pci busses
and some additional host bus ports.  The bridges are capable of generating
low address bus cycles on both busses simultaneously, so we can do ISA
memory access and legacy port I/O on every bus in the system at the same
time.  The main host chipset has no notion of VGA or legacy routing
though, so doing a port access to say 0x3c8 is ambiguous--we need a bus
to target (though the platform code could provide a 'default' bus for
such accesses to go to; this may be what VGA or legacy routing means for
us under your scheme).  Likewise, accessing ISA memory space like 0xa
needs a bus to target.

It would be nice if this sort of thing was taken into account in your new
model, so that for example we could have the vgacon driver talking to
multiple different VGA cards at the same time.
Thanks,
Jesse
Re: [RFC] PCI bridge driver rewrite
When you start writing the PCI root bridge driver you'll run into the AGP
drivers that are already attached to the bridge.  I was surprised by this
since I expected AGP to be attached to the AGP bridge, but now I learned
that it is a root bridge function.

An ISA LPC bridge driver would be nice too.  It would let you turn off
serial ports, etc. and let other systems know how many ports there are.
No real need for this, just a nice toy.

Does this work to cause a probe based on PCI class?

static struct pci_device_id p2p_id_tbl[] = {
	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI << 8, 0x00) },
	{ 0 },
};

I would like to install a driver that gets called whenever new CLASS_VGA
hardware shows up via hotplug.  It won't attach to the device, it will
just add some sysfs attributes.  The framebuffer drivers need to attach
the device.  If I add attributes this way, how can I remove them?

--
Jon Smirl
[EMAIL PROTECTED]
Re: [RFC] PCI bridge driver rewrite
On Thu, 2005-02-24 at 01:45 -0500, Jon Smirl wrote:
> On Thu, 24 Feb 2005 01:22:01 -0500, Adam Belay <[EMAIL PROTECTED]> wrote:
> > For the past couple weeks I have been reorganizing the PCI subsystem to
> > better utilize the driver model.  Specifically, the bus detection code
> > is now using a standard PCI driver.  It turns out to be a major
>
> What about VGA routing?  Most PCI buses do it with the normal VGA bit
> but big hardware supports multiple legacy IO spaces via the bridge
> chips.
>
> Are you going to make sysfs entries for the bridges?  If so I'd like a
> VGA attribute that directly reads the VGA bit from the hardware and
> displays it instead of using the shadow copy.

Yeah, actually I've been thinking about this issue a lot.  I think it
would make a lot of sense to export this sort of thing under the
"pci_bus" class in sysfs.  The ISA enable bit should probably also be
exported.  Furthermore, we should be verifying the BIOS's configuration
of VGA and ISA.  I'll try to integrate this in my future releases.  I
appreciate the code.  I also have a number of resource management plans
for the VGA enable bit that I'll get into in my next set of patches.

> Jesse can comment on the specific support needed for multiple legacy IO
> spaces.

That would be great.  Most of my experience has been with only a couple
legacy IO port ranges passing through the bridge.

Thanks,
Adam
Re: [RFC] PCI bridge driver rewrite
On Thu, 24 Feb 2005 01:22:01 -0500, Adam Belay <[EMAIL PROTECTED]> wrote:
> For the past couple weeks I have been reorganizing the PCI subsystem to
> better utilize the driver model.  Specifically, the bus detection code
> is now using a standard PCI driver.  It turns out to be a major

What about VGA routing?  Most PCI buses do it with the normal VGA bit
but big hardware supports multiple legacy IO spaces via the bridge
chips.

Are you going to make sysfs entries for the bridges?  If so I'd like a
VGA attribute that directly reads the VGA bit from the hardware and
displays it instead of using the shadow copy.

/* sysfs show for VGA routing bridge */
static ssize_t vga_bridge_show(struct device *dev, char *buf)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	u16 l;

	/* don't trust the shadow PCI_BRIDGE_CTL_VGA in pdev;
	   user space (X) may change hardware without telling the kernel */
	pci_read_config_word(pdev, PCI_BRIDGE_CONTROL, &l);
	return sprintf(buf, "%d\n", (l & PCI_BRIDGE_CTL_VGA) != 0);
}

I also use these functions to control VGA routing, maybe they should be
part of bridge support.

static void bridge_yes(struct pci_dev *pdev)
{
	struct pci_dev *bridge;
	struct pci_bus *bus;

	/* Make sure the bridges route to us */
	bus = pdev->bus;
	while (bus) {
		bridge = bus->self;
		if (bridge) {
			bus->bridge_ctl |= PCI_BRIDGE_CTL_VGA;
			pci_write_config_word(bridge, PCI_BRIDGE_CONTROL,
					      bus->bridge_ctl);
		}
		bus = bus->parent;
	}
}

static void bridge_no(struct pci_dev *pdev)
{
	struct pci_dev *bridge;
	struct pci_bus *bus;

	/* Make sure the bridges don't route to us */
	bus = pdev->bus;
	while (bus) {
		bridge = bus->self;
		if (bridge) {
			bus->bridge_ctl &= ~PCI_BRIDGE_CTL_VGA;
			pci_write_config_word(bridge, PCI_BRIDGE_CONTROL,
					      bus->bridge_ctl);
		}
		bus = bus->parent;
	}
}

Jesse can comment on the specific support needed for multiple legacy IO
spaces.
--
Jon Smirl
[EMAIL PROTECTED]
[RFC] PCI bridge driver rewrite
Hi all,

For the past couple weeks I have been reorganizing the PCI subsystem to
better utilize the driver model.  Specifically, the bus detection code is
now using a standard PCI driver.  It turns out to be a major undertaking,
as the PCI probing code is closely tied into a lot of other PCI
components, and is spread throughout various architecture-specific areas.
I'm hoping that these changes will allow for a much cleaner and more
functional PCI implementation.

The basic flow of the new code is as follows:

1.) A standard "driver core" driver binds to a bridge device.
2.) When "*probe" is called it sets up the hardware and allocates a
    "struct pci_bus".
3.) The "struct pci_bus" is filled with information about the detected
    bridge.
4.) The driver then registers the "struct pci_bus" with the PCI Bus Class.
5.) The PCI Bus Class makes the bridge available to sysfs.
6.) It then detects hardware attached to the bridge.
7.) Each new PCI bridge device is registered with the driver model.
8.) All remaining PCI devices are registered with the driver model.

Steps 7 and 8 allow for better resource management.

I've attached an early version of my code.  It has most of the new PCI
bus class registration code in place, and an early implementation of the
PCI-to-PCI bridge driver.  The following remains to be done:

1.) refine and clean up the new PCI Bus API
2.) export the new API in "linux/pci.h", and clean up any users of the
    old code
3.) fix every PCI hotplug driver
4.) write a bridge driver for the PCI root bridge
5.) write a bridge driver for Cardbus hardware
6.) refine device registration order
7.) redesign PCI bus number assignment and support bus renumbering
8.) redesign PCI resource management to be compatible with the new code
9.) testing on various architectures
10.) write "*suspend" and "*resume" routines for PCI bridges (any ideas
     on what needs to be done?)
11.) fix "PCI_LEGACY" (I may have broken it, but it should be trivial)

I look forward to any comments or suggestions.
Thanks,
Adam

diffstat:
 Makefile      |    9
 bus-class.c   |  225 +++
 bus/Makefile  |    6
 bus/bus-p2p.c |  133 ++
 device.c      |  142 +++
 pci.h         |    4
 probe.c       |  546 --
 remove.c      |  126 -
 9 files changed, 598 insertions(+), 593 deletions(-)

Patch is against 2.6.11-RC3.

diff -urN linux/drivers/pci/bus/bus-p2p.c linux-pci/drivers/pci/bus/bus-p2p.c
--- linux/drivers/pci/bus/bus-p2p.c	1969-12-31 19:00:00.0 -0500
+++ linux-pci/drivers/pci/bus/bus-p2p.c	2005-02-24 00:19:05.0 -0500
@@ -0,0 +1,133 @@
+/*
+ * bus-p2p.c - a generic PCI bus driver for PCI<->PCI bridges
+ *
+ */
+
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/module.h>
+
+static struct pci_device_id p2p_id_tbl[] = {
+	{ PCI_DEVICE_CLASS(PCI_CLASS_BRIDGE_PCI << 8, 0x00) },
+	{ 0 },
+};
+MODULE_DEVICE_TABLE(pci, p2p_id_tbl);
+
+static void p2p_setup_bus_numbers(struct pci_dev *dev, struct pci_bus *bus)
+{
+	u32 buses;
+
+	pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
+
+	bus->primary = buses & 0xFF;
+	bus->secondary = (buses >> 8) & 0xFF;
+	bus->subordinate = (buses >> 16) & 0xFF;
+}
+
+static void pci_enable_crs(struct pci_dev *dev)
+{
+	u16 cap, rpctl;
+	int rpcap = pci_find_capability(dev, PCI_CAP_ID_EXP);
+	if (!rpcap)
+		return;
+
+	pci_read_config_word(dev, rpcap + PCI_CAP_FLAGS, &cap);
+	if (((cap & PCI_EXP_FLAGS_TYPE) >> 4) != PCI_EXP_TYPE_ROOT_PORT)
+		return;
+
+	pci_read_config_word(dev, rpcap + PCI_EXP_RTCTL, &rpctl);
+	rpctl |= PCI_EXP_RTCTL_CRSSVE;
+	pci_write_config_word(dev, rpcap + PCI_EXP_RTCTL, rpctl);
+}
+
+static void p2p_prepare_hardware(struct pci_dev *dev, struct pci_bus *bus)
+{
+	u16 bctl;
+
+	/* Disable MasterAbortMode during probing to avoid reporting
+	   of bus errors (in some architectures) */
+	pci_read_config_word(dev, PCI_BRIDGE_CONTROL, &bctl);
+	pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
+			      bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+	bus->bridge_ctl = bctl;
+
+	pci_enable_crs(dev);
+}
+
+/* FIXME: these need to be defined in linux/pci.h */
+extern struct pci_bus * pci_alloc_bus(void);
+extern int pci_add_bus(struct pci_bus *bus);
+extern struct pci_bus * pci_derive_parent(struct device *);
+
+static int p2p_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+	int err, i;
+	struct pci_bus *bus;
+
+	if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
+		return -ENODEV;
+
+	bus = pci_alloc_bus();
+
+	if (!bus)
+		return -ENOMEM;
+
+	bus->bridge = &dev->dev;
+	bus->parent = pci_derive_parent(&bus->self->dev);
+	if (!bus->parent) {
+		err = -ENODEV;
+		goto out;
+	}
+
+	bus->ops = bus->parent->ops;
+	bus->sysdata =