Re: Trouble with DMA on PPC linux question
Ben,

Benjamin Herrenschmidt wrote on 04/19/2016 01:45:40 AM:

> From: Benjamin Herrenschmidt
> To: bruce_leon...@selinc.com, linuxppc-dev@lists.ozlabs.org
> Date: 04/19/2016 01:46 AM
> Subject: Re: Trouble with DMA on PPC linux question
>
> On Mon, 2016-04-18 at 14:54 -0700, bruce_leon...@selinc.com wrote:
> >
> > On the DMA transactions that work, the virtual address I hand to
> > dma_map_single() is something like 0xe084 and the dma_addr_t result is
> > 0x1084 which is less than my 512Mb limit. On the transactions that
> > don't work, the virtual address is 0xd539 with the mapped result being
> > 0x2539, which is past my upper bound on my RAM. In fact it's not even
> > in my memory map, there's a hole there.
>
> Where does this virtual address come from?
>
> The kernel has two types of virtual addresses. Those coming from the
> linear mapping (the stuff you get from kmalloc() for example, or
> get_pages()) can be translated using that simple subtraction.
>
> The other is the vmalloc space, and that is a non-linear mapping of
> random pages.
>
> If your vaddr comes from the latter it can't be passed to
> dma_map_single() as-is; you need to get to the underlying pages first.
>
> Ben.

That's a good question. I'm not sure where the addresses come from right now (they're handed to me from the MTD layer), but I'll certainly dig into that and see.

Thanks for the help! I appreciate the pointer.

Bruce
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
Trouble with DMA on PPC linux question
Good afternoon everyone,

We're trying to get some performance gains in an older embedded design by adding DMA to our NAND driver. The HW is an MPC8349 talking across a PCI bus to a NAND controller, and we have 512Mb of RAM. We're using the 3.18 kernel and the Freescale "fsl,mpc8349-dma" driver. I've verified using a bus analyzer that DMA transactions are occurring on the PCI bus correctly (correct addresses, and the data I'm reading is coming across the bus to the processor correctly). What's not happening is that periodically the data being read doesn't make it to RAM.

I've narrowed this down to the dma_addr_t I get back from dma_map_single(). Now I'm not an expert on how memory management in the PPC Linux kernel works, but based on some experimentation and stepping through some of the code, translating a kernel virtual address is essentially subtracting 0xC000 from the virtual address. I know the equation is a bit more involved than that; I've dug into some of the macros, but many of the constants compile to zero on my setup, so the end result is just the subtraction.

On the DMA transactions that work, the virtual address I hand to dma_map_single() is something like 0xe084 and the dma_addr_t result is 0x1084, which is less than my 512Mb limit. On the transactions that don't work, the virtual address is 0xd539 with the mapped result being 0x2539, which is past the upper bound of my RAM. In fact it's not even in my memory map; there's a hole there. (Evidently the MPC8349 DMA engine bypasses the TLBs, since I'm not getting an exception of any kind...learned something new today!)

So the transactions that don't work fail because the physical address I give to the DMA engine doesn't exist. The only error indication I get is an ECC error, because whatever is pointed to by the virtual address (wherever that may be) still contains zeros and fails the ECC comparison check.
So my question is: where should I be looking, or what config option should I be checking, to figure out why the upper layers (MTD/UBI/UBIFS/user space) are giving the NAND layer or my driver a virtual address that can't be translated into a physical address? One thing I have noticed (though I don't know if it's relevant or not) is that when I get a "good" virtual address it's through a call to nand_subpage_read(), and when I get a "bad" virtual address it's through a call to nand_read_page_swecc().

I'm not asking anyone to solve my problem for me, but any suggestions of what rocks I can turn over to look for clues would be greatly appreciated.

Thanks for your time and suggestions!

Bruce
Re: Question about GPIO Lib
Bill,

Bill Gatliff wrote on 01/31/2012 08:39:05 AM:

> I misunderstood your message, then. I was thinking that you were

No worries, I frequently misunderstand myself :) Thanks for taking the time to respond, I appreciate it.

> I'm DEFINITELY not saying that gpiolib is generally a waste of time! :)
>
> I'm just saying that, sadly, in many ways gpiolib is still a
> work-in-progress. The userspace component has been somewhat

Okay, that's more or less the point I had gotten to myself. My first Linux project I just wrote everything myself, and on this latest one I've been making a concerted effort to utilize existing services within the kernel. This looked (and still does) like a good candidate; I guess I just need to wrap a little bit of extra around it.

Thanks!

Bruce
Re: Question about GPIO Lib
Bill,

Bill Gatliff wrote on 01/27/2012 10:42:57 AM:

> On Fri, Jan 27, 2012 at 5:31 AM, wrote:
> >
> > The problem is we've got a number of other things hooked up to the GPIO
> > pins that it would be very bad if someone from user space played with
> > them, like our FPGA configuration pin. Someone toggles that and our box
> > goes stupid. So what I'm wondering is if there's a way, preferably via
> > the device tree, to limit the GPIOs that GPIO Lib exposes to user space?
>
> Sounds like you DON'T want to merely export that GPIO pin to userspace.

Well, yes, I do want to just export to userspace; I just want to restrict the pins that get exported to only those that are defined in the device tree. I don't want or need to access any of the exported pins from kernel space, and I don't want user space to access any pin not explicitly called out in the device tree. I want it to behave like gpio-leds, only with input as well as output capabilities.

> If you have anything in kernel space doing a gpio_request() on that
> pin, it won't be exportable to userspace anyway. Regardless, you are
> probably better off implementing a DEVICE_ATTR that, in its store()
> method, treads lightly on said pin. And then do a gpio_request() in
> kernel space so that users can't ever see the pin directly.
>
> Just my $0.02.

If I understand this correctly, you're basically saying that gpiolib is a waste of time and I should just write my own driver?

Bruce
Another FSL SPI driver question (warning long post)
Hi Kumar,

I'm using the 3.0.3 kernel on an MPC8308 and utilizing the spi_fsl_spi driver to talk with a Cypress NvRAM device. I've gotten that working now, but I've come across something I don't understand in the driver, and I'm not sure if it's just me or if there's a bug.

My issue relates to chip selects, their active state, and their specification in the device tree (lots of moving parts here, so I hope I describe it well enough to be understood). The chip select for the NvRAM is active low, but setting everything up the way I _think_ it should be for an active low signal gets me an active high signal. The device tree entry is:

	spi@7000 {
		#address-cells = <1>;
		#size-cells = <0>;
		cell-index = <0>;
		compatible = "fsl,spi";
		reg = <0x7000 0x1000>;
		interrupts = <16 0x8>;
		interrupt-parent = <&ipic>;
		mode = "cpu";
		gpios = <&gpio1 16 1>;

		nvram@0 {
			compatible = "spidev";
			spi-max-frequency = <4000>;
			reg = <0>;
		};
	};

And the relevant driver code is the fsl_spi_chipselect and fsl_spi_cs_control functions shown below (line numbers are for reference only and don't match the line numbers in .../drivers/spi/spi_fsl_spi.c):

	 1  static void fsl_spi_chipselect(struct spi_device *spi, int value)
	 2  {
	 3      struct mpc8xxx_spi *mpc8xxx_spi = spi_master_get_devdata(spi->master);
	 4      struct fsl_spi_platform_data *pdata = spi->dev.parent->platform_data;
	 5      bool pol = spi->mode & SPI_CS_HIGH;
	 6      struct spi_mpc8xxx_cs *cs = spi->controller_state;
	 7
	 8      if (value == BITBANG_CS_INACTIVE) {
	 9          if (pdata->cs_control)
	10              pdata->cs_control(spi, !pol);
	11      }
	12
	13      if (value == BITBANG_CS_ACTIVE) {
	14          mpc8xxx_spi->rx_shift = cs->rx_shift;
	15          mpc8xxx_spi->tx_shift = cs->tx_shift;
	16          mpc8xxx_spi->get_rx = cs->get_rx;
	17          mpc8xxx_spi->get_tx = cs->get_tx;
	18
	19          fsl_spi_change_mode(spi);
	20
	21          if (pdata->cs_control)
	22              pdata->cs_control(spi, pol);
	23      }
	24  }
	25
	26  static void fsl_spi_cs_control(struct spi_device *spi, bool on)
	27  {
	28      struct device *dev = spi->dev.parent;
	29      struct mpc8xxx_spi_probe_info *pinfo = to_of_pinfo(dev->platform_data);
	30      u16 cs = spi->chip_select;
	31      int gpio = pinfo->gpios[cs];
	32      bool alow = pinfo->alow_flags[cs];
	33
	34      gpio_set_value(gpio, on ^ alow);
	35  }

The variable "pol" comes from spi->mode & SPI_CS_HIGH (line 5), and spi->mode gets its value based on the spi-cs-high attribute in the device tree via .../drivers/of/of_spi.c like this:

	if (of_find_property(nc, "spi-cs-high", NULL))
		spi->mode |= SPI_CS_HIGH;

In my case I want an active low CS, so I don't include this attribute and spi->mode doesn't get the bit set. "alow" (line 32) ultimately comes from the flags part of the gpios specifier in the SPI node of my device tree via the of_fsl_spi_probe function like this:

	gpio = of_get_gpio_flags(np, i, &flags);  /* flags gets a direct copy of the flags in the gpios specifier */
	pinfo->alow_flags[i] = flags & OF_GPIO_ACTIVE_LOW;

OF_GPIO_ACTIVE_LOW is an enum with a value of 0x1, implying that a value of "1" in the flags of my gpios specifier is saying the GPIO signal is active low. So if my understanding is right, I've got everything set up in my device tree correctly to have an active low CS.

Now to the crux of the problem: line 34, the gpio_set_value() call. When an SPI transaction is started and the CS needs to be driven to its active state (low in my case), fsl_spi_chipselect is called with value = BITBANG_CS_ACTIVE, which leads to line 22, calling fsl_spi_cs_control with pol = 0 because spi->mode doesn't have the SPI_CS_HIGH bit set (line 5); pol becomes "on" in fsl_spi_cs_control. alow = 1 (line 32) because flags is 1 in the gpios specifier. "on" = 0 XORed with "alow" = 1 equals 1 when gpio_set_value is called, setting my chip select HIGH, not low. Then when the transaction is done, fsl_spi_chipselect is called with value = BITBANG_CS_INACTIVE, which leads to line 10, calling fsl_spi_cs_control with !pol = 1; alow is still 1, and 1 XOR 1 = 0 when gpio_set_value is called, setting my chip select to LOW.
I did an experiment in which I added the spi-cs-high attribute to my device tree (which should have made the signal active high), and the result was that the signal operated in the opposite way from what the name of the attribute implies. So (if my understanding of the device tree entries is correct) the logic on line 34 appears to be flawed, but since I'm not 100% sure of my understanding, I wanted to ask people smarter than I am. Moreover, I don't think I understand why a device tree entry is used to indicate which state to change the chip select to. Wouldn't it make more sense if lines 10 and 22 pass in a "1" for
Question about GPIO Lib
(This time with a subject line, sorry)

Hi,

I'm using the 3.0.3 kernel on an MPC8308 and have turned on GPIO support (CONFIG_GPIOLIB = Y) because the SPI sub-system needed to use it for the GPIO pin I'm using as a CS to a NvRAM part. I also have some jumpers on the processor GPIO pins, and I thought it would be really slick to use the built-in GPIO support in the kernel rather than roll my own driver to read five pins. So I've got GPIO sysfs support turned on (CONFIG_GPIO_SYSFS = Y) and everything shows up in /sys/class/gpio and works as advertised. Really nice.

The problem is we've got a number of other things hooked up to the GPIO pins that it would be very bad if someone from user space played with, like our FPGA configuration pin. Someone toggles that and our box goes stupid. So what I'm wondering is if there's a way, preferably via the device tree, to limit the GPIOs that GPIO Lib exposes to user space? I tried the following in my device tree without success:

	gpio1: gpio@c00 {
		#gpio-cells = <2>;
		device_type = "gpio";
		compatible = "fsl,mpc8308-gpio", "fsl,mpc8349-gpio";
		reg = <0xc00 0x18>;
		interrupts = <74 0x8>;
		interrupt-parent = <&ipic>;
		gpio-controller;
		gpios = <&gpio1 0 0
			 &gpio1 1 0
			 &gpio1 2 0
			 &gpio1 3 0
			 &gpio1 4 0
			 &gpio1 5 0
			 &gpio1 6 0>;
	};

I also thought maybe a separate child node of the gpio1 node, but I think it would require a "compatible" attribute, which would mean a driver to bind it to, and the whole point is to avoid writing a driver if someone else has already got something that will work.

Thanks.

Bruce
Re: FSL SPI driver question
Norbert,

> ok, then I don't know.
> I doubt this is a spidev or FSL SPI driver problem though.
>
> Questions like:
>
> Could it be a HW problem?
> Is the correct SPI mode used?
> Does it work in u-boot?
>
> Come to mind in a situation like this.

Thanks for the suggestions. I finally found it in the wee hours this morning, and it was operator error. The CY14 by default powers up in the write-protect state and from the factory is erased to all zeros. So my writes and subsequent reads appeared to fail by the "fact" that I could never read what I wrote. Guess I need better reading glasses in my old age :-/

Anyway, I'm happily up and talking using the Freescale SPI driver and spidev. Thanks for the help and sorry for the noise on the list.

Bruce
Re: FSL SPI driver question
Hi Norbert,

> > So the question is, how do I use spidev (or any other means) to get the
> > 8308 SPI controller to keep SPICLK active so that the output data from the
> > NvRAM gets clocked out to the 8308?
>
> Did you see Documentation/spi/spidev_fdx.c:do_msg?
> It performs a full-duplex (actually half-duplex) 1 byte transfer.
>
> In your case you need a transfer that outputs 3 bytes (read cmd + address)
> and inputs 1? byte.
>
> If you do it this way I would expect the SPICLK to be active
> during the 2nd part of the transfer (whenever the CPU "reads" the
> data from the SPI client).

Thanks for the reply. Yes, I did find spidev_fdx.c and in fact copied it for my tests. I still see SPICLK active only during the time the 8308 is sending data (read cmd + address). Nothing happens with the clock after that when the NvRAM is ready to send data.

Bruce
FSL SPI driver question
Good afternoon,

I'm using the 3.0.3 kernel running on an MPC8308 and am trying to interface to a Cypress CY14B256Q2A non-volatile RAM via SPI. I've got the SPI infrastructure, the Freescale SPI driver (drivers/spi/spi_fsl_spi.c), and spidev built into the kernel, and everything on the user space/kernel side appears to be working correctly (at least when I try to read the NvRAM's config register, all the right places in the kernel get hit and I see the SPI signals active with an o-scope).

I think what I'm hitting is a lack of understanding/documentation on the SPI controller in the 8308. To read data from the NvRAM, the master (the 8308 in this case) needs to clock out a byte-long "read" command, two bytes of address, and then clock in the data from the NvRAM. However, I never get any data back. I think the problem is that (direct quote from the 8308 reference manual) "SPICLK is a gated clock, active only during data transfers". So once the read command and address are sent, the 8308 considers the data transfer complete and gates off SPICLK. Without SPICLK, the NvRAM has no way to clock out its data. I think it's ready to, it just can't.

So the question is, how do I use spidev (or any other means) to get the 8308 SPI controller to keep SPICLK active so that the output data from the NvRAM gets clocked out to the 8308?

Thanks.

Bruce
Re: MPC8308 bursting question
Hi David,

> Read my MPC8349EA UPM setup notes and see if you have used
> similar settings (I assume the local bus UPMs are similar):

I found your paper interesting. I didn't have a problem with my UPM settings (single beat reads and writes worked just fine), but it did tickle my memory and helped me find what was wrong. Turns out that the person who was responsible for the BAx/ORx registers for this chip select had set the BI bit a long time ago, before we were contemplating doing what we're currently doing. The code had been reviewed and signed off, so in my back brain I was thinking everything there was good. When I read your paper, it mentioned the BI bit, which got me thinking. Sure enough, when I looked, it was set. As soon as I cleared that bit, my burst transactions started working just fine.

Thanks for the help.

Bruce
MPC8308 bursting question
All,

This isn't really a Linux PPC question, but this is the smartest mailing list I know for asking PPC hardware questions, so here goes.

We're using an MPC8308 and want to use the DMA engine to move data in and out of an FPGA hanging on the local bus. Our bandwidth/local bus burden calculations were done assuming that we could use bursting. So I've set up a UPM to do single beat and burst reads/writes. I've also configured my DMA TCD to use a data transfer size of 32 bytes when accessing the FPGA.

The problem I'm having is I can't seem to get the UPM to ever trigger the burst write sequence using the DMA. Single beat reads and writes work okay, but no bursting. In fact, the only thing I've been able to find in the manual that causes a burst transaction is a cache line miss, which is wholly in the purview of the core and really does me no good.

Does anyone know of any way (short of issuing a run command to the UPM via MxMR) to force a burst transaction on the 8308? Am I just being dumb and missing something totally fundamental?

Thanks.

Bruce
Re: Question on supporting multiple HW versions with a single driver (warning: long post)
Ah, okay. Taking a look at that, it makes sense now. Thanks to both of you for the help.

Bruce

From: Bill Gatliff
To: bruce_leon...@selinc.com
Cc: David Gibson , linuxppc-dev@lists.ozlabs.org
Date: 02/07/2011 09:13 PM
Subject: Re: Question on supporting multiple HW versions with a single driver (warning: long post)

Guys:

I think the Silicon Motion SM501 driver might provide a useful example, since the chip comes in both memory-mapped and PCI versions. Unfortunately the chip is implemented as a multi-function driver (mfd), so the code is not un-complicated. Still fairly straightforward and well-written once you learn your way around it, though.

Basically, it implements a core set of functionality to talk to the actual chip registers, which is bus-agnostic. Then the bus-specific drivers use these functions when they actually want to touch the chip itself. In other words, exactly what David suggested.

b.g.

On Mon, Feb 7, 2011 at 8:37 PM, wrote:
>>
>> There are a number of drivers which already have this sort of dual bus
>> binding.
>
> Thanks for the feedback David, I appreciate it. Could you point me to one
> of those drivers that has "this sort of dual bus binding" so I can see an
> example of what I'm trying to do?
>
> Bruce

--
Bill Gatliff
b...@billgatliff.com
Re: Question on supporting multiple HW versions with a single driver (warning: long post)
> There are a number of drivers which already have this sort of dual bus
> binding.

Thanks for the feedback David, I appreciate it. Could you point me to one of those drivers that has "this sort of dual bus binding" so I can see an example of what I'm trying to do?

Bruce
Question on supporting multiple HW versions with a single driver (warning: long post)
So this is sort of a follow-on question to one I posted a month ago about trying to get a PCI driver to work with OF (which I think I more or less understood the answer to). I'm encountering a different sort of problem that I'd like to solve with OF, but I'm not sure I can. Let me lay out a little background first.

We build embedded systems, so we never really have hot plug events and our addresses (at least for HW interfaces) are pretty much static for any given product. In other words, for product "A" the NAND controller will always be at address "X", though on product "B" that same NAND controller may be at address "Y". Also, the devices in the product are static, i.e., we'll always talk to an LXT971 as the PHY.

Currently I'm working on building a driver for an ethernet MAC we're putting in an FPGA. The MAC is based on the MPC8347 TSEC and the driver is based on the gianfar driver. (My previous question was how to spoof the OF gianfar driver into thinking it was a PCI driver, because our MAC is going to be hanging off a PCI bus. Ultimately I decided to just steal...err...borrow... the guts of the gianfar driver and make it a PCI driver that only deals with our MAC.) Right in the middle of writing this driver, my HW guys came to me and said they wanted to use this same MAC in other products. Great, I said. Local bus, they said. Which opens up a whole can of worms and leads to my question.

We've got a MAC in an FPGA with a nice generic interface on the front of it that can talk to a whole range of different busses: PCI, PCIe, local bus (of any variety of any processor), etc. But the internals of the MAC (i.e., the register sets, the buffers, the whole buffer descriptor mechanism) all look the same. Seems to me that this is exactly the sort of situation OF and device trees were developed for. What I'd like to do, and I'm sure it's possible but I have no idea how, is to still have this as an OF driver and have the device tree tell the kernel about the HW interface to use.
So on one product (currently all products use an MPC83xx variant) I would have a child node under a PCI node to describe its interrupts, addressing (which could also come from a PCI probe, I expect), compatibility, any attached PHYs, etc., and on a second product do the same thing under a localbus node.

The first question that comes to mind is ordering. If I put a child node in the PCI node of the device tree, what happens when the device tree is processed? Is it immediately going to try to find and install a driver for that child node? Since the device tree is processed very early, the PCI bus isn't going to be set up and available yet. Will trying to install a PCI driver via OF even be possible at this point? Then I'd still need a PCI function to claim the device when the PCI bus gets probed. If the driver is already installed via OF, what does the PCI function do?

Or am I all backwards? Does having the child node under the PCI node actually do anything when the early OF code runs? If not, would the PCI probe function be the first indication to the system that the driver needs to be loaded? In which case I just walk the device tree looking for...what? How would I match up the PCI ID with something in the device tree?

Then there's the local bus side of the question. That should truly be an OF driver and use struct of_platform_driver along with that whole mechanism. How do I make that compatible with the version of the MAC that runs on PCI? Or am I making a whole lot of work for myself and I should just make them separate drivers? I'm trying to keep the code base as small and coherent as possible; I don't want to have to maintain multiple copies of a driver that are essentially identical.

Thanks.

Bruce
Re: Question about combining a PCI driver with an OF driver
> > Okay, I get that and it makes sense with what I know so far about how the
> > kernel device model works (which I'm still learning). So how would I
> > manually add a device? Say I create the PCI wrapper driver that claims
> > the clone-TSEC, is there a "register device" type call similar to
> > pci_register_driver() that I could put in the wrapper code that causes the
> > gfar_probe() to be called?
>
> Create an OF node (probably under the root node) programmatically with
> all the information the gianfar driver will want, based on what you
> detect on PCI, and call of_platform_device_create().

Ah, the light bulb clicks on! Thanks for the info. I appreciate it.

Bruce
Re: Question about combining a PCI driver with an OF driver
Scott,

Thanks for the feedback.

> Making a faithful clone of any reasonably complex device strikes me as
> more work than writing a new ethernet driver.
>
> The last thing you want to end up doing is...
>
> > And for speed sake it would go on the PCI bus.
> > (So much for letting HW make decisions regarding SW :) )
>
> ...hacking up the existing driver to deal with the quirks of the clone,
> and having to maintain those hacks. :-)

True, but we really didn't want to recreate all the infrastructure that the gianfar driver has in it; we wanted to just use it. Maybe what I should do is just take the guts of the gianfar driver and make a pure PCI driver out of it.

> It shouldn't matter -- the way buses work in Linux, you should be able
> to add a platform device at any time, and the driver will receive a
> probe() callback. The driver never actively searches for devices to
> claim.

Okay, I get that and it makes sense with what I know so far about how the kernel device model works (which I'm still learning). So how would I manually add a device? Say I create the PCI wrapper driver that claims the clone-TSEC; is there a "register device" type call similar to pci_register_driver() that I could put in the wrapper code that causes gfar_probe() to be called?

Bruce
Question about combining a PCI driver with an OF driver
Hi all,

I'm working on a project with an MPC8347 and three ethernet ports. Because of end-of-life issues we've had to replace the part we're using for the third ethernet port, and we decided that rather than rely on a vendor who would pull a part out from under us every two to three years, we would do our own MAC in an FPGA. In order to reduce driver work it was decided that we would use the same hardware interface as the TSEC in the 8347 so we could reuse the gianfar driver. And for speed's sake it would go on the PCI bus. (So much for letting HW make decisions regarding SW :) )

So now I'm stuck with hacking the gianfar driver to work on PCI. However, I think it would be a lot more elegant if I could wrap the gianfar driver with a PCI interface. After all, the idea is sound: with a HW interface that looks like the TSEC, I should be able to reuse the gianfar driver. But the gianfar driver is an open firmware driver registered with a call to of_register_platform_driver(), and depending on the order in which the busses are walked, the PCI bus may not be enumerated and available when the onboard TSECs are detected and the gianfar driver claims them.

So the question is, how can I wrap an OF driver with a PCI driver so that I can just do a thin layer of probing the PCI bus, registering with the PCI sub-system, and then calling the OF probe in the gianfar driver?

Thanks for any insight.

Bruce
Re: Question regarding the DTLB Miss exceptions
> > It's checking to see if the following bits of DSISR are set:
> > 0 - set by DTLB miss if no hash table entry group is found
>
> No. 0 means we hit a direct store segment. This is a historical feature
> of the architecture which we don't use, so should not happen.

Sorry, it would have helped if I had specified the processor; we're using an MPC8347, so it's an e300c1 core. Also, my fat-fingering of the keyboard got me on this one: bit 0 on the e300 should be cleared. I put in bit 1's definition by mistake in my last email.

> > 2 - an invalid bit
>
> Not sure what 2 is about.

Yeah, and that's the one I'm trying to figure out why the DTLB Miss on Store exception code is setting before calling the DSI exception code :).

> > are set and calling it a "weird error". What the heck does that mean, a
> > "weird error"?
>
> Nah. It's a bad name. It means it's an error that is either something we
> don't support (like direct store) or something that doesn't need to go
> through hash_page, such as a breakpoint match.

Thanks, that actually helps, knowing that what's being done is determining reasons to call hash_page or not. Also knowing (which I suppose I should have figured out earlier) that some of the bits in DSISR are not defined for the e300c1 core, but that the code is designed to support implementations that DO define those bits, helps make sense of what I'm looking at.

> I missed the earlier discussion, what is your problem ?

Well, the problem I'm having is really irrelevant to the question at hand. My original question to the list was "why is the DTLB Store Miss exception setting bit two of the DSISR (an invalid bit according to the architecture) before calling the DSI exception?".

Thanks for the explanations!

Bruce
Re: Question regarding the DTLB Miss exceptions
> > When a DTLB Miss exception can't find a PTE for the virtual address being
> > written/read, it dummies up the SPRs for a DSI exception and then calls
> > directly into the DSI exception code. Just before the DTLB miss code
> > stores a value into DSISR it sets bit 2, which for a DSI exception is a
> > reserved bit and should be cleared. There's no comment on the code
> > (.../arch/powerpc/kernel/head_32.S line 619 of the 2.6.33-rc1 kernel). Can
> > anyone tell me why this bit is getting set?
>
> You mean:
>
> 616 DataAddressInvalid:
> 617         mfspr   r3,SPRN_SRR1
> 618         rlwinm  r1,r3,9,6,6     /* Get load/store bit */
> 619         addis   r1,r1,0x2000
> 620         mtspr   SPRN_DSISR,r1
>
> Is it trying to set DSISR_ISSTORE?
>
> #define DSISR_ISSTORE 0x02000000 /* access was a store */

Michael,

Thanks for the reply. I mean line 619 above. It's setting 0x20000000 (bit 2) in the DSISR. 0x02000000 (bit 6, or DSISR_ISSTORE) is already set because it's a DTLB Data Store Miss Exception. But bit 2 is meaningless for the DSI Exception about to be called; according to the data sheet, it's supposed to be cleared. Someone wrote code to explicitly set a bit in the DSISR that has no meaning in the PPC architecture. So I assume it's being overloaded for some purpose, but I can't glean that purpose from the code.

Equally perplexing to me is the following line of code from the DSI Exception code:

        andis.  r0,r10,0xa470   /* weird error? */

It's checking to see if the following bits of DSISR are set:

0 - set by DTLB miss if no hash table entry group is found
2 - an invalid bit
5 - an invalid bit
9 - set if breakpoint match
10 - an invalid bit
11 - set if the instruction is an eciwx or ecowx and EAR[E] = 0

So it's checking to see if three bits not defined by the PPC architecture are set and calling it a "weird error". What the heck does that mean, a "weird error"?
Obviously these are overloaded bits used for some totally undocumented purpose that I can't figure out from the code, and I'm just trying to understand what they're used for to see if it's related to my problem.

Bruce
Question regarding the DTLB Miss exceptions
I'm tracking a problem that's leading me through DSI and DTLB miss exceptions on an MPC8347 (e300c1 core), and I've come across an oddity that I'm hoping someone can explain.

When a DTLB Miss exception can't find a PTE for the virtual address being written/read, it dummies up the SPRs for a DSI exception and then calls directly into the DSI exception code. Just before the DTLB miss code stores a value into DSISR, it sets bit 2, which for a DSI exception is a reserved bit and should be cleared. There's no comment on the code (.../arch/powerpc/kernel/head_32.S, line 619 of the 2.6.33-rc1 kernel). Can anyone tell me why this bit is getting set?

Thanks.

Bruce
Need help with using the BDI2K to debug exception handlers
Hi all,

Okay, I'm putting on my asbestos underwear and hoping I don't sound too stupid. Here's my sitch: we're seeing an illegal instruction exception, but the diagnostic code we put into the kernel's program_check_exception() function claims the instruction is perfectly good. So I want to use the BDI to set a BP in the program exception and poke around at a HW level, rather than at a SW level that has gone through an unknown number of context switches.

Now, I know that using the BDI in exceptions is hard to do for lots of reasons, first and foremost among them being the fact that the BDI uses SRR0/1 for its own purposes. I've been down this path before and know there are problems. But what I'm seeing is even stranger than usual. I've replaced the program exception code in arch/powerpc/kernel/head_32.S with the following:

        . = 0x700
ProgramCheck:
        mtspr   SPRN_SPRG1,r9
        mtspr   SPRN_SPRG2,r10
        mtspr   SPRN_SPRG7,r3
        mfspr   r9,SPRN_SRR0
        mfspr   r10,SPRN_SRR1
        andis.  r3,r10,0x0008   /* is it an illegal instruction? */
        beq     1f              /* no, so continue */
2:      xor     r3,r3,r3        /* dummy instruction */
        b       2b              /* loop forever */
1:      mfspr   r10,SPRN_SPRG2
        mfspr   r9,SPRN_SPRG1
        EXCEPTION_PROLOG;
        addi    r3,r1,STACK_FRAME_OVERHEAD;
        EXC_XFER_STD(0x700, program_check_exception);

(Before everyone flames me: yes, I know there's a bug - I didn't restore r3 before continuing to program_check_exception. It's immaterial to the problem at hand, because I don't really care if I ever successfully get into program_check_exception.)

The purpose of all this is to save SRR0/1 into GPRs so the BDI doesn't whack them, check whether the exception is being called because of an illegal instruction, continue on if not, and provide a place to hang a breakpoint if it is an illegal instruction. So I load up this code, connect to the BDI, set a HW BP on the branch instruction following the line labeled '2', tell it to go, and sit back to wait.
Eventually our problem occurs and the BDI says "TARGET: stopped" or some such, indicating it's hit the breakpoint. This is where things get strange, and I need help.

At this point the BDI output from the telnet session gives a debug entry cause, and the current PC is 0x6fc, one instruction before the program exception. When I dump the registers, r9 and r10 contain nothing that even remotely resembles SRR0/1. The link register contains a valid _physical_ address (though I would expect it to contain a virtual address from the last 'bl' instruction), but when I dump the memory pointed to by LR it contains all zeros, not PPC machine code. It looks like my code isn't even running, even though it seems I've hit the breakpoint. It's almost as if the BDI recognizes I'm entering an exception that I've set a BP in and halts just before executing the exception code. I'm not sure I believe it, but that's how it appears.

Has anyone seen this, or have any suggestion on how I can get the BDI to quit 'helping' me and just stop where I tell it to in an exception handler?

Thanks

Bruce