Thanks, Bill!

Yes, I am using the "Undi EDK Intel(R) PRO/1000" driver from Intel;
it is for the e1000_82575 NIC.

Yes, the driver seems to have support for 64-bit. The rest of my replies are
inlined below.

The vendor ID and device ID are:

Shell> pci
   Seg  Bus  Dev  Func
   ---  ---  ---  ----
    00   00   00    00 ==> Bridge Device - PCI/PCI bridge
             Vendor 1957 Device 8040 Prog Interface 0
    00   01   00    00 ==> Network Controller - Ethernet controller
             Vendor 8086 Device 10D3 Prog Interface 0

Regards,
Shaveta

-----Original Message-----
From: Bill Paul [mailto:wp...@windriver.com] 
Sent: Monday, March 28, 2016 10:04 PM
To: edk2-de...@ml01.01.org
Cc: Shaveta Leekha <shaveta.lee...@nxp.com>; edk2-devel@lists.01.org 
<edk2-de...@ml01.01.org>
Subject: Re: [edk2] PCIe memory transaction issue

Of all the gin joints in all the towns in all the world, Shaveta Leekha had to 
walk into mine at 00:29:39 on Monday 28 March 2016 and say:

> Hi,
> 
> In PCIe memory transactions, I am facing an issue.
> 
> The scenario is:

> Case 1:
> In our system, we have allocated 32 bit memory space to one of the PCI 
> device (E1000 NIC card)

You did not say which Intel PRO/1000 card (vendor/device ID). There are 
literally dozens of them. (It's actually not that critical, but I'm curious.)

[Shaveta] Pasted above

> during enumeration and BAR programming. When NIC card is getting used 
> to transmit a ping packet, a local buffer is getting allocated from 32 
> bit main memory space. In this case, the packet is getting sent out 
> successfully.
> 
> 
> Case 2:
> Now when NIC card is getting used to transmit a ping packet, if a 
> local buffer is allocated from 64 bit main memory space. The packet 
> failed to transmit out.
> 
> Doubt 1: Would it be possible for this PCI device/NIC card (in our 
> case) to access this 64 bit address space for sending this packet out of 
> system?

I don't know offhand how the UEFI PRO/1000 driver handles this, but I know that 
pretty much all Intel PRO/1000 cards support 64-bit DMA addressing.

Some older PCI cards, like, say, the Intel 82557/8/9 PRO/100 cards, only 
support 32-bit addressing. That means that they only accept DMA source/target 
addresses that are 32-bits wide. For those, if you have a 64-bit system, you 
must use "bounce buffering." That is, the device can only DMA from addresses 
within the first 4GB of physical memory. If you have a packet buffer outside 
that window, then you have to copy it to a temporary buffer inside the window 
first (i.e. "bounce" it) and then set up the DMA transfer from that location 
instead.


[Shaveta] I have the Intel 82575 NIC. Does it support 64-bit addressing?
Since I am assuming 64-bit addressing is supported, no separate mapping is
done before DMA in the "RootBridgeIoMap" function of the PCI Root Bridge IO
protocol. Instead, I am mapping the transfer to another 64-bit address space
in this function.



This requires you to be able to allocate some storage from specific physical 
address regions (i.e. you have to ensure the storage is inside the 4GB window).

However the PRO/1000 doesn't have this limitation: you can specify fully 
qualified 64-bit addresses for both the RX and TX DMA ring base addresses and 
the packet buffers in the DMA descriptors, so you never need bounce buffering. 
This was true even for the earliest PCI-X PRO/1000 NICs, and is still true for 
the PCIe ones.


[Shaveta] Does the E1000 driver always use DMA to fetch the buffer from
system memory, or does it access memory via the core?


For the base addresses, you have two 32-bit registers: one for the upper 32 
bits and one for the lower 32 bits. You have to initialize both. Drivers 
written for 32-bit systems will often hard code the upper 32 bits of the 
address fields to 0. If you use that same driver code on a 64-bit system, DMA 
transfers will still be initiated, but the source/target addresses will be 
wrong.


[Shaveta] It seems that the Intel E1000 driver writes both the upper and
lower dwords for the Tx and Rx ring base addresses:

  E1000_WRITE_REG (&GigAdapter->hw, E1000_TDBAL(0),
                   (UINT32) (UINTN) GigAdapter->tx_ring);
#if 0 //OM
  MemAddr = (UINT64) (UINTN) GigAdapter->tx_ring;
  MemPtr  = (UINT32 *) &MemAddr;
  MemPtr++;
  E1000_WRITE_REG (&GigAdapter->hw, E1000_TDBAH(0), *MemPtr);
#else
  //
  // Write the high dword of the ring address directly. The shifted value
  // must be written as a value; casting it to a pointer and dereferencing
  // it would be a bug.
  //
  E1000_WRITE_REG (&GigAdapter->hw, E1000_TDBAH(0),
                   (UINT32) (((UINT64) (UINTN) GigAdapter->tx_ring) >> 32));
#endif




Similarly for the RX ring:

  //
  // Setup the RDBA, RDLEN
  //
  E1000_WRITE_REG (&GigAdapter->hw, E1000_RDBAL(0),
                   (UINT32) (UINTN) GigAdapter->rx_ring);

#if 0 //OM
  //
  // Set the MemPtr to the high dword of the rx_ring so we can store it
  // in RDBAH0. Right shifts do not seem to work with the EFI compiler
  // so we do it like this for now.
  //
  MemAddr = (UINT64) (UINTN) GigAdapter->rx_ring;
  MemPtr  = (UINT32 *) &MemAddr;
  MemPtr++;
  E1000_WRITE_REG (&GigAdapter->hw, E1000_RDBAH(0), *MemPtr);
#else
  E1000_WRITE_REG (&GigAdapter->hw, E1000_RDBAH(0),
                   (UINT32) (((UINT64) (UINTN) GigAdapter->rx_ring) >> 32));
#endif

  E1000_WRITE_REG (&GigAdapter->hw, E1000_RDLEN(0),
                   sizeof (E1000_RECEIVE_DESCRIPTOR) * DEFAULT_RX_DESCRIPTORS);






> Doubt 2: If a device is allocated 32 bit Memory mapped space from 32 
> bit memory area, then for packet transactions, can we use 64 bit memory space?

Just to clarify: do not confuse the BAR mappings with DMA. They are two 
different concepts. I think a 64-bit BAR allows you to map the device's 
register bank anywhere within the 64-bit address space, whereas with a 32-bit 
BAR you have to map the registers within the first 4GB of address space 
(preferably somewhere that doesn't overlap RAM). However that has nothing to do 
with how DMA works: even with the PRO/1000's BARs mapped to a 32-bit region, 
you should still be able to perform DMA transfers to/from any 64-bit address.

The BARs use an outbound window, i.e. the host issues outbound read/write
requests and the device is the target of those requests.

DMA transfers use an inbound window, i.e. the device issues read/write
requests and the host is the target of those requests.


[Shaveta] I didn't program the inbound windows, so they are open, meaning
any inbound transaction passes through as-is.


The PRO/100 requires 32-bit addressing for both inbound and outbound requests.

The PRO/1000 can use 64-bit addressing.

-Bill
 
> Thanks and Regards,
> Shaveta
> 
> 
> Resource MAP for PCI bridge and one PCI device on bus 1 is:
> 
> PciBus: Resource Map for Root Bridge PciRoot(0x0)
> Type =   Io16; Base = 0x0;        Length = 0x1000;    Alignment = 0xFFF
>    Base = 0x0;        Length = 0x1000;    Alignment = 0xFFF;     Owner = PPB [00|00|00:**]
> Type =  Mem32; Base = 0x78000000; Length = 0x5100000; Alignment = 0x3FFFFFF
>    Base = 0x78000000; Length = 0x4000000; Alignment = 0x3FFFFFF; Owner = PPB [00|00|00:14]
>    Base = 0x7C000000; Length = 0x1000000; Alignment = 0xFFFFFF;  Owner = PPB [00|00|00:10]
>    Base = 0x7D000000; Length = 0x100000;  Alignment = 0xFFFFF;   Owner = PPB [00|00|00:**]
> 
> PciBus: Resource Map for Bridge [00|00|00]
> Type =   Io16; Base = 0x0;        Length = 0x1000;    Alignment = 0xFFF
>    Base = 0x0;        Length = 0x20;      Alignment = 0x1F;      Owner = PCI [01|00|00:18]
> Type =  Mem32; Base = 0x78000000; Length = 0x4000000; Alignment = 0x3FFFFFF
> 
> gArmPlatformTokenSpaceGuid.PcdPciMmio32Base|0x40000000
>   gArmPlatformTokenSpaceGuid.PcdPciMmio32Size|0x40000000      # 1G
>   gArmPlatformTokenSpaceGuid.PcdPciMemTranslation|0x1400000000
>   gArmPlatformTokenSpaceGuid.PcdPciMmio64Base|0x1440000000
>   gArmPlatformTokenSpaceGuid.PcdPciMmio64Size|0x40000000
> _______________________________________________
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel

--
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
                 wp...@windriver.com | Master of Unix-Fu - Wind River Systems 
=============================================================================
   "I put a dollar in a change machine. Nothing changed." - George Carlin 
=============================================================================