Re: [edk2] edk2 and gnu-efi calling schemes

2018-12-06 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Peter Wiehe had to 
walk into mine at 14:34 on Thursday 06 December 2018 and say:

> OK, another question:
> 
> when writing a UEFI application, edk2 and gnu-efi have different 64-bit
> calling schemes. Does that only apply to calling the
> runtime-library/object file (and inside of the UEFI-application, of
> course)? Or does the call from application to UEFI differ in both
> toolkits, too? (If it is the latter, it would mean that the UEFI
> standard is imprecise!)

Both the EDK and GNU EFI obey the same standards when calling UEFI APIs. Their 
exact implementations may differ depending on the circumstances. For example, 
GNU EFI may use the __attribute__((ms_abi)) tag to tell the compiler what ABI 
to use, or if the compiler doesn't support this it can fall back to using some 
compatibility wrapper macros (see lib/x86_64/efi_stub.S). Either way, you end 
up with the same behavior.

Within a given FOO.EFI application, the application code itself can get away 
with using whatever calling convention it wants, right up until it needs to 
call a UEFI firmware routine. At that point, it has to follow the conventions 
spelled out in the UEFI spec.
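To make the ABI point concrete, here is a minimal sketch of what the __attribute__((ms_abi)) approach looks like, assuming an x86-64 GCC or clang. The type names (UINTN, EFI_STATUS) and the FakeStall service are simplified stand-ins for illustration, not the real gnu-efi definitions:

```c
/*
 * Sketch (not the real gnu-efi headers): the ms_abi attribute makes
 * the compiler use the Microsoft x64 calling convention for one
 * function or call site, while the rest of the program keeps the
 * native System V ABI.
 */
#include <assert.h>

typedef unsigned long long UINTN;
typedef UINTN EFI_STATUS;

#if defined(__x86_64__)
#define EFIAPI __attribute__((ms_abi))  /* UEFI x64 mandates the MS ABI */
#else
#define EFIAPI                          /* elsewhere the native ABI suffices */
#endif

/* Firmware services are reached through EFIAPI function pointers,
 * e.g. gBS->Stall(). This fake service just echoes its argument so
 * the cross-ABI call is observable. */
static EFI_STATUS EFIAPI FakeStall (UINTN Microseconds)
{
  return Microseconds;
}

EFI_STATUS CallThroughUefiPointer (void)
{
  EFI_STATUS (EFIAPI *Stall)(UINTN) = FakeStall;
  /* The compiler emits the ABI switch at this call site. */
  return Stall (42);
}
```

With the attribute in place the application code on either side of the call can use whatever convention it likes, which is exactly the situation described above.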

-Bill

> Kind regards
> 
> Peter Wiehe
> 
> ___
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel
-- 
=====
-Bill Paul(510) 749-2329 | Senior Member of Technical Staff,
 wp...@windriver.com | Master of Unix-Fu - Wind River Systems
=====
   "I put a dollar in a change machine. Nothing changed." - George Carlin
=====


Re: [edk2] Stack issue after warm UEFI reset and MMU enabling on an Armv8 platform

2018-09-19 Thread Bill Paul
MMU is always tricky. I can't say for sure, but 
I wouldn't be surprised if there's some subtle bug that causes a flush 
operation to be missed and things may just work by coincidence in the cold 
start case.

-Bill
 
> >>Then jump to start of FV:
> >>
> >>typedef
> >>VOID
> >>(EFIAPI *START_FV)(
> >>  VOID
> >>);
> >>
> >>StartOfFv = (START_FV)(UINTN)PcdGet64(PcdFvBaseAddress);
> >>StartOfFv ();
> >>
> >>Now this is what happens on warm reset:
> >>reset -c warm
> >>1. Until ArmEnableMmu() gets called, everything works as expected.
> >>
> >>Here is the stack right before ArmEnableMmu() is called:
> >> ArmConfigureMmu+0x4f8
> >> InitMmu+0x24
> >> MemoryPeim+0x440
> >> PrePiMain+0x114
> >> PrimaryMain+0x68
> >> CEntryPoint+0xC4
> >> EL2:0x88BC
> >> -  End of stack info -
> >>
> >>2. Here is the stack as soon as Mmu is enabled with ArmEnableMmu() :
> >>ArmConfigureMmu+0x4fc <-- This one is correct, at line 745 in
> >> 
> >> ArmConfigureMmu() in ArmPkg/Library/ArmMmuLib/AArch64/ArmMmuLibCore.c
> >> (return EFI_SUCCESS)
> >> 
> >>   _ModuleEntryPoint+0x24 <-- Wrong. This points directly to
> >> 
> >> ASSERT(FALSE); and to CpuDeadLoop() in DxeCoreEntryPoint.c, lines 59-60.
> >> 
> >>   El2:0x8E5E8300 <-- Absolutely bogus
> >>   
> >>--- End of stack info ---
> >>
> >>So, as soon as ArmEnableMmu() exits, execution jumps directly to
> >>CpuDeadLoop() in DxeCoreEntryPoint of _ModuleEntryPoint().
> >>
> >>Would be grateful for any advice.
> >>
> >>Thank you,
> >>Vladimir
> 


Re: [edk2] PciSegmentInfoLib instances

2018-05-28 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Ni, Ruiyu had to walk 
into mine at 19:55 on Sunday 27 May 2018 and say:

> No. There is no such instance.
> 
> My understanding:
> Segment is just to separate the PCI devices to different groups.
> Each group of devices use the continuous BUS/IO/MMIO resource.
> Each group has a BASE PCIE address that can be used to access PCIE
> configuration in MMIO way.

This makes it sound like an either/or design choice that a hardware designer 
can make simply for some kind of convenience, and I don't think that's the 
case.

Segments typically indicate completely separate host/PCI interfaces. For 
example, I've seen older Intel boards with both 32-bit/33MHz slots and 64-
bit/66MHz slots. This was not done with a bridge: each set of slots was tied 
to a completely separate host PCI bridge and hence each was a separate 
segment. This was required in order to support legacy 32-bit/33MHz devices 
without forcing the 64-bit/66MHz devices down to 33MHz as well.

With PCIe, on platforms other than Intel, each root complex would also be a 
separate segment. Each root complex would have its own bus/dev/func namespace, 
its own configuration space access method, and its own portion of the physical 
address space into which to map BARs. This means that you could have two or 
more different devices with the same bus/dev/func identifier tuple, meaning 
they are not unique on a platform-wide basis. 

At the hardware level, PCIe may be implemented similarly on Intel too, but 
they hide some of the details from you. The major difference is that even in 
cases where you may have multiple PCIe channels, they all share the same 
bus/dev/func namespace so that you can pretend the bus/dev/func tuples are 
unique platform-wide. The case where you would need to advertise multiple 
segments arises where there's some technical roadblock that prevents 
implementing this illusion of a single namespace in a completely transparent 
way.
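The reason the segment number has to be part of the device's identity can be sketched in code. The following is an illustration only, with invented ECAM base addresses (a real platform advertises them via the ACPI MCFG table); the register offsets follow the standard ECAM layout:

```c
/*
 * Sketch: translating a (segment, bus, device, function, register)
 * tuple into a PCIe ECAM configuration-space address. Each segment
 * (root complex) has its own ECAM window, which is what keeps two
 * devices with identical bus/dev/func numbers in different segments
 * distinguishable.
 */
#include <assert.h>
#include <stdint.h>

static const uint64_t SegmentEcamBase[] = {
  0xE0000000ULL,  /* segment 0 (hypothetical base) */
  0xD0000000ULL,  /* segment 1 (hypothetical base) */
};

static uint64_t
EcamAddress (unsigned Seg, unsigned Bus, unsigned Dev,
             unsigned Func, unsigned Reg)
{
  /* Standard ECAM layout: 1MB per bus, 32KB per device,
   * 4KB per function. */
  return SegmentEcamBase[Seg]
       + ((uint64_t)Bus  << 20)
       + ((uint64_t)Dev  << 15)
       + ((uint64_t)Func << 12)
       + Reg;
}
```

Without the per-segment base, the same bus/dev/func tuple in two segments would collapse to one address, which is precisely the ambiguity the segment number exists to resolve.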

In the case of the 32-bit/64-bit hybrid design I mentioned above, scanning the 
bus starting from bus0/dev0/func0 would only allow you to automatically 
discover the 32-bit devices because there was no bridge between the 32-bit and 
64-bit spaces. The hardware allows you to issue configuration accesses to both 
spaces using the same 0xcf8/0xcfc registers, but in order to autodiscover the 
64-bit devices, you needed to know ahead of time to also scan starting at 
bus1/dev0/func0. But the only way to know to do that was to check the 
advertised segments in the ACPI device table and honor their starting bus 
numbers.
 
> So with the above understanding, even a platform which has single segment
> can be implemented as a multiple segments platform.

I would speculate this might only be true on Intel. :) Intel is the only 
platform that creates the illusion of a single bus/dev/func namespace for 
multiple PCI "hoses," and it only does that for backward compatibility 
purposes (i.e. to make Windows happy). Without that gimmick, each segment 
would be a separate tree rooted at bus0/dev0/func0, and there wouldn't be much 
point to doing that if you only had a single root complex.

-Bill
 
> Thanks/Ray
> 
> > -Original Message-
> > From: edk2-devel  On Behalf Of Laszlo
> > Ersek
> > Sent: Wednesday, May 23, 2018 3:38 PM
> > To: Ni, Ruiyu 
> > Cc: edk2-devel-01 
> > Subject: [edk2] PciSegmentInfoLib instances
> > 
> > Hi Ray,
> > 
> > do you know of any open source, non-Null, PciSegmentInfoLib instance?
> > (Possibly outside of edk2?)
> > 
> > More precisely, it's not the PciSegmentInfoLib instance itself that's of
> > particular interest, but the hardware and the platform support code that
> > offer multiple PCIe segments.
> > 
> > Thanks
> > Laszlo


Re: [edk2] Query regarding hole in EFI Memory Map

2018-05-14 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Prakhya, Sai Praneeth 
had to walk into mine at 16:30 on Monday 14 May 2018 and say:

> Hi All,
> 
> Recently, I have observed that there was a hole in EFI Memory Map passed by
> firmware to Linux kernel. So, wanted to check with you if this is expected
> or not.
> 
> My Test setup:
> I usually boot qemu with OVMF and Linux kernel. I use below command to boot
> kernel. "qemu-system-x86_64 -cpu host -hda  -serial stdio
> -bios  -m 2G -enable-kvm -smp 2"
> 
> I have noticed that the EFI Memory Map (printed by kernel) is almost
> contiguous but with only one hole ranging from 0xA0000 to 0x100000. As far
> as I know, kernel hasn't modified this EFI Memory Map, so I am assuming
> that firmware has passed memory map with a hole. I have looked at UEFI
> spec "GetMemoryMap()" definition, and it says "The map describes all of
> memory, no matter how it is being used". So, I am thinking that EFI Memory
> Map shouldn't have any holes, am I correct? If not, could someone please
> explain me the reason for this hole in EFI Memory Map.

The map may describe all of physical RAM; however, it is not necessarily the 
case that all available RAM is physically contiguous.

With older IBM PCs based on the Intel 8088 processor, you could only have a 
1MB address space. The first 640KB was available for RAM. The remaining space 
traditionally contained memory-mapped option ROMs, particularly for things 
like the video BIOS routines. The VGA text screen was also mapped to 0xB8000.

Obviously, later processors made it possible to have additional memory above 
1MB (sometimes called "high memory"), but for backward compatibility purposes, 
the gap from 0xA0000 to 0xFFFFF remained.

So basically, on Intel machines you will always see this gap in RAM due to 
"hysterical raisins." It's just an artifact of the platform design. (And for 
that reason you'll see it both with the UEFI memory map facility and the 
legacy E820 BIOS facility).
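The hole shows up naturally if you walk the map and compare the end of each descriptor with the start of the next. This is a sketch with fabricated descriptors (real GetMemoryMap() output has many more entries and richer fields); the two regions just model conventional memory on a PC, where nothing covers the legacy VGA/option-ROM window:

```c
/*
 * Sketch: finding the first gap between consecutive entries of a
 * (fabricated) memory map sorted by start address.
 */
#include <assert.h>
#include <stdint.h>

typedef struct {
  uint64_t Start;     /* physical start address */
  uint64_t NumPages;  /* 4KB pages, as in EFI_MEMORY_DESCRIPTOR */
} Region;

static const Region Map[] = {
  { 0x00000000ULL, 0xA0 },   /* 0 .. 640KB */
  { 0x00100000ULL, 0x100 },  /* 1MB .. 2MB (truncated example) */
};

/* Returns the start of the first hole, or 0 if the map is contiguous. */
static uint64_t
FirstGap (const Region *M, int Count)
{
  for (int i = 0; i + 1 < Count; i++) {
    uint64_t End = M[i].Start + M[i].NumPages * 4096ULL;
    if (End < M[i + 1].Start) {
      return End;
    }
  }
  return 0;
}
```

Running this over the example map reports a gap starting at 0xA0000, which is exactly the hole the kernel printed.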

-Bill


> 
> 
> Please let me know if you want me to post the EFI Memory Map or E820 map
> that I am looking at.
> 
> Note: I have also observed the same hole in E820 map.
> 
> 
> 
> Regards,
> 
> Sai


Re: [edk2] Set "db" variable in secure boot setup mode still requires generating PKCS#7?

2018-05-01 Thread Bill Paul
Of all the gin joints in all the towns in all the world, David F. had to walk 
into mine at 14:13 on Tuesday 01 May 2018 and say:

> Hi,
> 
> Had a fairly simple task of wanting to install the latest MS .crt
> files for KEK, and their two files for the "db" (the Windows CA and
> UEFI CA) in a system placed in setup/custom mode.  However, even
> though it seemed to take the KEK, it never took the "db", always had a
> problem on a DH77KC mobo (dumped data headers looked as expected).
> Now when I constructed it, I thought I could leave out any PKCS#7 data
> (set the expected CertType but in the Hdr dwLength only included
> CertType and not any CertData), but looking at the algo in UEFI Spec
> 2.6 page 245, it looks like we'd always have to generate the hash,
> sign it, create all the PKCS stuff even in setup mode?That would
> surely unnecessarily bloat any apps that really only need to update
> things in setup mode wouldn't it?   So to confirm, that is a
> requirement even in setup mode?If so, why?
>

If I understand correctly, I think the issue is that the PK, KEK, db and dbx 
are always considered to be secure environment variables, which means when you 
try to update them with SetVariable(), you always have to include one of the 
authentication flags and a properly formatted authentication header.

The difference between variable updates in secure mode vs. setup/custom mode 
is that in setup/custom mode, the signature is not validated. It still has to 
be there, but the firmware doesn't care what it says. So the db update could 
be signed with a completely different KEK than the one loaded into the KEK 
variable, and it would still be accepted.

-Bill



> TIA!!


Re: [edk2] [PATCH] SecurityPkg/DxePhysicalPresenceLib: Reject illegal PCR bank allocation

2018-01-25 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Zhang, Chao B had to 
walk into mine at 20:53 on Wednesday 24 January 2018 and say:

> According to the TCG PP 1.3 spec, erroneous PCR bank allocation input should be
> rejected by Physical Presence. Firmware has to ensure that at least one
> PCR bank is active.
> 
> Cc: Long Qin <qin.l...@intel.com>
> Cc: Yao Jiewen <jiewen@intel.com>
> Contributed-under: TianoCore Contribution Agreement 1.1
> Signed-off-by: Chao Zhang <chao.b.zh...@intel.com>
> ---
>  .../DxeTcg2PhysicalPresenceLib/DxeTcg2PhysicalPresenceLib.c  | 12
>  1 file changed, 12 insertions(+)
> 
> diff --git
> a/SecurityPkg/Library/DxeTcg2PhysicalPresenceLib/DxeTcg2PhysicalPresenceLi
> b.c
> b/SecurityPkg/Library/DxeTcg2PhysicalPresenceLib/DxeTcg2PhysicalPresenceLi
> b.c index 5bf95a1..830266b 100644
> ---
> a/SecurityPkg/Library/DxeTcg2PhysicalPresenceLib/DxeTcg2PhysicalPresenceLi
> b.c +++
> b/SecurityPkg/Library/DxeTcg2PhysicalPresenceLib/DxeTcg2PhysicalPresenceLi
> b.c @@ -186,6 +186,18 @@ Tcg2ExecutePhysicalPresence (
>  case TCG2_PHYSICAL_PRESENCE_SET_PCR_BANKS:
>       Status = Tpm2GetCapabilitySupportedAndActivePcrs (&TpmHashAlgorithmBitmap, &ActivePcrBanks);
>       ASSERT_EFI_ERROR (Status);
> +
> +      //
> +      // PP spec requirements:
> +      //   Firmware should check that all requested (set) hashing algorithms are supported with respective PCR banks.
> +      //   Firmware has to ensure that at least one PCR bank is active.
> +      //   If not, an error is returned and no action is taken.
> +      //
> +      if (CommandParameter == 0 || (CommandParameter & (~TpmHashAlgorithmBitmap)) != 0) {
> +        DEBUG ((DEBUG_ERROR, "PCR banks %x to allocate are not supported by TPM. Skip operation\n", CommandParameter));
> +        return TCG_PP_OPERATION_RESPONSE_BIOS_FAILURE;
> +      }
> +      DEBUG ((DEBUG_ERROR, "zhangchao TpmHashAlgorithmBitmap %x

Was it your intention to have the debug error message string identify you by 
name? :)

-Bill

> CommandParameter %x\n", TpmHashAlgorithmBitmap, CommandParameter));
>       Status = Tpm2PcrAllocateBanks (PlatformAuth, TpmHashAlgorithmBitmap, CommandParameter);
>       if (EFI_ERROR (Status)) {
>         return TCG_PP_OPERATION_RESPONSE_BIOS_FAILURE;


Re: [edk2] OVMF Secure Boot variable storage issue

2017-07-06 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jason Dickens had to 
walk into mine at 10:31:18 on Thursday 06 July 2017 and say:

> All,
> 
> I'm trying to understand why the secure boot variables (PK, KEK, db,
> etc) when using the OVMF build are not retained across reboot? It seems
> that this code uses roughly the same SetVariable, GetVariable2 approach
> as say the PlatformConfig uses to store screen resolution (which is
> retained). Additionally, the NvVars file is being at least touched by
> the secure boot configuration. So why are none of the keys retained on
> the next reboot?

If you're running OVMF in the QEMU simulator, and you're using the -bios 
option, try using the -pflash option instead.

I know that when using -bios, QEMU only pretends to allow writes to the 
firmware region, and if you stop QEMU all changes are discarded. The same 
might be true if you just trigger a hard reboot in the simulator too.

If you use -pflash instead, your changes will be saved. Note that this means 
your OVMF image will be modified, so keep a copy of the original elsewhere so 
that you can start over fresh again if you need to.
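A command-line sketch of that workflow (file names here are placeholders; adjust to your setup):

```shell
# Run the firmware image as flash so variable writes persist
# across reboots; keep the original image pristine.
cp OVMF.fd OVMF.work.fd
qemu-system-x86_64 \
  -pflash OVMF.work.fd \
  -hda disk.img -m 2G

# "Load factory defaults" by hand: re-copy the original image.
# cp OVMF.fd OVMF.work.fd
```

Re-copying the pristine image is the manual equivalent of the "load factory defaults" option the firmware itself lacks.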

(Unfortunately I don't think OVMF has a "load factor defaults" option in its 
internal menus.)

-Bill
 
> I know this was an issue in the past, but I haven't found the resolution?
> 
> Jason
> 
> 


Re: [edk2] Using a generic PciHostBridgeDxe driver for a multi-PCIe-domain platform

2017-05-30 Thread Bill Paul
I behavior, though I'm not sure to what extent.

So the comment that you found that says:

// Most systems in the world including complex servers have only one Host
Bridge.

Should probably be amended to say "Most Intel systems". And even those 
systems probably have more than one host bridge (root complex); it's just 
that it doesn't look like it.

-Bill
 
> Thank you,
> Vladimir


[edk2] UEFI Secure Technologies

2017-02-03 Thread Bill Paul
This is not strictly an EDK development question, but it may be the right 
audience to ask. The UEFI 2.5 specification introduced a section called Secure 
Technologies, which includes the definition for an EFI_PKCS7_VERIFY_PROTOCOL 
(among others).

My question is: what are the odds of this protocol being available in a given 
UEFI firmware build for a fielded system?

The context for this question has to do with how secure boot would be handled 
for OSes other than Windows. Obviously, once UEFI validates the BOOTxxx.EFI 
loader image, the next step would be for the boot loader to validate the OS 
image that comes after it, which requires the same kind of cryptographic 
signature validation that the UEFI firmware performs on loader. But the 
signature check is built into the BS->LoadImage() service and the firmware 
only knows how to check signatures on Microsoft PE/COFF images (signed 
according to the Microsoft Authenticode spec).

I'm assuming that somehow the Microsoft loader takes advantage of the fact 
that Windows executables (including the kernel and its DLLs) are also PE/COFF, 
and it somehow loads those with BS->LoadImage() too. That's great, if you're 
Microsoft.

But if you're not Microsoft, you can't use this strategy, which means your 
loader needs its own custom crypto code.

In theory the presence of EFI_PKCS7_VERIFY_PROTOCOL would mitigate this, but 
only on systems where the firmware includes it.

My concern is that since Windows doesn't depend on it, the odds of this 
protocol being included in a given build might be fairly slim. I'd like to 
hear some other (hopefully better-informed) opinions on this matter.

-Bill



Re: [edk2] build failure trying to build gcc cross-compiler

2016-12-12 Thread Bill Paul


Re: [edk2] PCI performance issue

2016-07-15 Thread Bill Paul
> > packet loss due to the lack of
> > interrupts in UEFI, I mean, due to a network polling rate that is too
> > slow (look at the MNP poll and UEFI tick periods)
> > 
> > You should be able to get far better performance than 3MB/min!
> > 
> > Eugene
> > 
> > > -Original Message-
> > > From: edk2-devel [mailto:edk2-devel-boun...@lists.01.org] On Behalf Of
> > > Shaveta Leekha
> > > Sent: Thursday, July 14, 2016 7:45 AM
> > > To: Ard Biesheuvel <ard.biesheu...@linaro.org>
> > > Cc: edk2-devel@lists.01.org; Linaro UEFI Mailman List  > > u...@lists.linaro.org>
> > > Subject: Re: [edk2] PCI performance issue
> > > 
> > > Ok, I can try that !!
> > > 
> > > Thanks and Regards,
> > > Shaveta
> > > 
> > > -Original Message-
> > > From: Ard Biesheuvel [mailto:ard.biesheu...@linaro.org]
> > > Sent: Thursday, July 14, 2016 7:11 PM
> > > To: Shaveta Leekha <shaveta.lee...@nxp.com>
> > > Cc: edk2-devel@lists.01.org; Linaro UEFI Mailman List  > > u...@lists.linaro.org>
> > > Subject: Re: PCI performance issue
> > > 
> > > On 14 July 2016 at 15:29, Shaveta Leekha <shaveta.lee...@nxp.com>
> > 
> > wrote:
> > > > But I have not tested the code (software) on any other
> > > > hardware/board. As I have not yet ported PCI code on any other board
> > > > yet.
> > > 
> > > I would recommend to base your expectations not on U-Boot but on UEFI
> > > running on a different architecture using similar network hardware.


Re: [edk2] PCIe hotplug into downstream port

2016-06-30 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Laszlo Ersek had to 
walk into mine at 11:58:29 on Thursday 30 June 2016 and say:

> On 06/30/16 20:14, Brian J. Johnson wrote:
> > On 06/30/2016 11:47 AM, Laszlo Ersek wrote:
> >> On 06/30/16 18:39, Marcel Apfelbaum wrote:
> >>> On 06/30/2016 07:21 PM, Marcel Apfelbaum wrote:
> >>>> On 06/30/2016 04:07 PM, Laszlo Ersek wrote:
> >>>>> Hi,
[snip] 
> > We can safely skip
> > allocating IO ports to those cards, saving significant space in the
> > overall IO port map.  We've worked with the card vendors on cleaning up
> > their PCIe advertisements and eliminating the use of IO ports, but it
> > takes time.
> 
> I surprisingly often hear about non-conformance or unjustified
> behavior... I guess with huge, elaborate specs, this is unavoidable. No
> single programmer might be able to internalize it all.

This is one of my pet peeves. Things that've thrown me for a loop are:

- Customers with their own special PCI devices that want _enormous_ amounts of 
MMIO space -- there was one case recently where the device wanted an 8GB 
window. (Even if this is legal according to the spec, it strikes me as a 
little unfriendly.)

- Oddball PCI/PCIe controller implementations in SoCs. For example, some 
devices implement support for MSI but don't support the ability to dynamically 
allocate interrupt vectors for multiple MSI sources. This means there's often 
just one vector for MSI events and it's shared by all devices on the bus. (The 
Freescale/NXP i.MX6 is one example of this.)

- FPGA-based PCIe controller logic blocks. To be fair, I've only worked with 
one of these, namely the Xilinx Zynq7k. Maybe that one is just particularly 
irritating. Out of the box, the Zynq doesn't support PCIe: you have to load a 
bitstream file into it to make the internal FPGA think it's a PCIe controller. 
I think it's the case that the customer can customize the logic to add or 
remove features, probably to optimize the use of the available logic gates in 
the FPGA. Unfortunately this can make driver development difficult because it 
makes the "hardware" a moving target. It also means the "hardware" design 
might be kept very simple and a lot of heavy lifting is deferred to the driver 
software.

The Zynq PCIe controller has only one interrupt vector, and it uses it for 
_all_ events (INTx interrupts, MSI interrupts and controller errors). For the 
INTx interrupts, you have to implement level-triggered interrupt semantics in 
software. If you do it wrong, you could miss an event and get totally stuck. 
(And last time I looked at it, I wasn't 100% convinced that the Linux driver 
was doing it exactly right either.) If you only ever want to connect one PCIe 
device to the controller, you can get away with some fairly simple logic, but 
hardly any of our customers would be satisfied with that.
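The level-triggered-in-software point can be sketched as follows. The status register here is a mock standing in for the controller; the key detail is that the handler must re-read the status until it is clear, because a level-triggered line stays asserted while any device still has work pending, and sampling it only once can miss an event that arrives mid-handler and leave the line stuck:

```c
/*
 * Sketch of emulating level-triggered INTx semantics in software,
 * with a mocked status register in place of real hardware.
 */
#include <assert.h>

static unsigned int FakeIntxStatus = 0x5;  /* INTA and INTC pending (mock) */

static unsigned int ReadIntxStatus (void)  { return FakeIntxStatus; }
static void ServiceIntx (int Pin)          { FakeIntxStatus &= ~(1u << Pin); }

/* Dispatch all pending INTx sources; returns how many were serviced. */
int HandleIntx (void)
{
  unsigned int Pending;
  int Serviced = 0;

  /* Loop until the status reads clear, not just once: new sources
   * can assert while earlier ones are being serviced. */
  while ((Pending = ReadIntxStatus ()) != 0) {
    for (int Pin = 0; Pin < 4; Pin++) {
      if (Pending & (1u << Pin)) {
        ServiceIntx (Pin);  /* a real handler would dispatch to the device */
        Serviced++;
      }
    }
  }
  return Serviced;
}
```

Get the re-read loop wrong and, exactly as described above, an edge is consumed without the underlying level being cleared and the system wedges.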

Also, the documentation for the PCIe controller block says that it supports 
both MMIO and I/O space BARs, but the particular bitstream file that I was 
given to work with only seemed to support MMIO. (The registers documented to 
configure the outbound window for I/O transactions didn't seem to do 
anything.)

While I realize that I/O space BARs and INTx interrupts are old and busted and 
MMIO and MSI are the new hotness, we sometimes have customers with highly 
specialized legacy hardware and I would prefer not to throw a spanner in their 
works if I can avoid it.

-Bill

> Thanks!
> Laszlo


Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-17 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Laszlo Ersek had to 
walk into mine at 15:03:16 on Friday 17 June 2016 and say:

> On 06/17/16 23:18, Laszlo Ersek wrote:
> > After eight hours, I've now reached my edk2-devel folder, with ~40
> > unread messages.
> 
> Haha, that was incorrect, after the next refresh, I saw ~50 more
> messages in there.
> 
> Among those, I've found new messages in this very thread, so I'm no
> longer suggesting that the patches be reposted with correct threading.
> But, for the future, it would be appreciated.

Fair enough.

However I'm still concerned about whether or not they'll actually be accepted. 
If there are issues with their technical validity I'd be happy to rework them. 
I'm mainly concerned about the method used to select the appropriate section 
alignment flags in the OVMF .dsc files. I believe it to be functionally 
correct, but maybe there is a better way.

-Bill

> Thanks
> Laszlo



Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-17 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 16:46:07 on Thursday 16 June 2016 and say:

> On 2016-06-16 16:11:34, Bill Paul wrote:
> > Of all the gin joints in all the towns in all the world, Jordan Justen
> > had to
> > 
> > walk into mine at 15:47:05 on Thursday 16 June 2016 and say:
> > > On 2016-06-16 14:11:01, Bill Paul wrote:
> > > > Really there are two paths here:
> > > > 
> > > > 1) Support the OS host compiler
> > > > 2) Use a cross-build compiler
> > > 
> > > You can also just build and install GCC 4.9 under your home dir, or
> > > another location that doesn't take over the system GCC.
> > 
> > So I either bootstrap a new host GCC or bootstrap a cross-build GCC. I
> > have to build GCC either way.
> 
> Yeah, so build the one that is better supported and tested for EDK II.

I've already made it clear that I'm willing to shoulder the burden of using 
the cross-build toolchain even if it's not considered officially supported. I 
know that if it breaks I get to keep both pieces.

> Relatedly ... If Steven gets clang working for EDK II, will you
> consider using that toolchain instead? That seems like a toolchain for
> EDK II that actually might have a future.

a) Getting _which_ clang working? If you still mean host-based versions of 
clang instead of a cross-build target that I can bootstrap anywhere, then I've 
already explained why I'm against that.

b) Why not have both since the only thing needed to make UNIXGCC work again is 
to FIX THE BITROT.

c) I've already got a patch to fix UNIXGCC now. Why should I have to wait until 
later? (I already waited almost a year for someone to fix UNIXGCC.) If you can 
add a clang cross-compiler option instead of the GNU option that's great, but 
until then, why not have a temporary stop-gap?

> > > > Today all I want to do is FIX THE BITROT, especially given that the
> > > > fix is pretty trivial.
> > > 
> > > UNIXGCC is MinGW GCC 4.3, so fixing the bitrot would mean updating the
> > > script to allow it to build MinGW GCC 4.3.
> > 
> > [I think you meant "to build something newer than MinGW GCC 4.3."]
> 
> No, I did not. UNIXGCC is an ancient GCC (4.3) MinGW toolchain. Which
> nobody uses.

But they can use a newer version. (Like I do.)

> And nobody tests. And probably doesn't build. But for
> some reason we can't deprecate it.

I think that's a misinterpretation. It occurred to me on the way home from work 
yesterday that what you were trying to say was that the UNIXGCC toolchain 
option is tied to GCC 4.3. It's more accurate to say that UNIXGCC is tied to the 
MinGW compiler generated by the mingw-build.py script. I don't think there's 
anything written down anywhere that says that script can't be updated to use a 
newer version of GCC.

You may be inferring that it's stuck at that version forever because nobody 
has taken the time to fix it, but I don't think anyone else is saying that's 
the case, and there's no documentation that I can find to support that 
position. The instructions for using the cross-build option are here:

https://github.com/tianocore/tianocore.github.io/wiki/Unix-like-systems

(And on a few descendant pages.)

It says it uses the MinGW compiler. It doesn't specify exactly which version 
nor say that it will always be pegged at the same version forever.

So please, I implore you: stop trying to make it sound that way.
 
> I assumed that the script no longer managed to build MinGW GCC 4.3,
> but maybe it still works...

Yes, it still does build a working GCC 4.3. But you can't use GCC 4.3 
anymore. Also, it could arguably be considered a bug to have it use that 
version. The 4.3 MinGW target assumes you have to use underscore decoration 
for both IA32 and X64 targets. That's actually wrong (and the wrongness was 
perpetuated in the tools_def file too -- that's something else that I fixed). 
Newer versions fix that (it only applies to IA32).

And as I said, there is nothing written in stone that says the script can't be 
updated to use a newer version of GCC. And doing that took barely more than 
just changing the download paths and MD5 sums. It was not rocket science. No 
animals were harmed.

> > > If we want to add support for MinGW GCC 4.9, then I'd rather see it
> > > called MINGWGCC49.
> > 
> > And when you do that, you're still going to have to apply the patch that I
> > just gave you to fix the .dsc files in OVMF because _any_ MinGW build
> > will be missing the support for the -z flag.
> 
> Or, you can try to build ELF GCC 4.9 and then try GCC49 to see if it
> can also work for you.

You're going in circles. I explained already:

1) I don't want to use a host compiler for what is fundamentally a cross-
build project.

Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-16 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 15:47:05 on Thursday 16 June 2016 and say:

> On 2016-06-16 14:11:01, Bill Paul wrote:
> > Of all the gin joints in all the towns in all the world, Jordan Justen
> > had to
> > 
> > walk into mine at 12:54:33 on Thursday 16 June 2016 and say:
> > > Rather than promoting
> > > usage of mingw based toolchains, I think we should deprecate them
> > > altogether. They are not recommended toolchains for EDK II, and I
> > > think they only cause confusion.
> > 
> > Because building EDK2/OVMF is already so simple otherwise?
> 
> Actually, it is, assuming your system GCC is new enough. This is why
> it is commonly used, and is the most tested (on Linux).
> 
> > > I'm still not sure what is preventing you from using GCC49. Last time
> > > this came up, I don't think you answered why. David had a half-way
> > > decent reason for using a mingw based toolchain, but I personally
> > > don't think it was good enough to keep a separate toolchain in
> > > tools_def.template.
> > 
> > I did explain why, though in fairness it was a while ago, so permit me to
> > reiterate: the expectation is to use the host GCC toolchain to do builds,
> > and the particular system I chose to use (FreeBSD 9.1) had a host GCC
> > and binutils that weren't quite up to the task. However, it was perfectly
> > capable of bootstrapping a cross compiler.
> > 
> > So I did that and it worked great. It was certainly less hassle than
> > upgrading the system compiler.
> 
> 
> 
> > Really there are two paths here:
> > 
> > 1) Support the OS host compiler
> > 2) Use a cross-build compiler
> 
> You can also just build and install GCC 4.9 under your home dir, or
> another location that doesn't take over the system GCC.

So I either bootstrap a new host GCC or bootstrap a cross-build GCC. I have to 
build GCC either way.
 
> If you make a dir, and then symlink ar, gcc and ld to your ELF GCC 4.9
> build, then you can set the GCC49_BIN environment variable to point at
> that directory which will then allow EDK II to build with the GCC49
> toolchain.

_Or_ I can just use the build script that already exists.
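For anyone who does want to go that route, the staging Jordan describes would 
look roughly like this (the cross-toolchain path is a placeholder, and 
tools_def pastes the tool name directly onto GCC49_BIN, so the trailing slash 
matters):

```shell
# Stage a private bin directory fronting an ELF GCC 4.9 cross toolchain.
# /opt/cross/x86_64-elf is a placeholder; point it at your own build.
mkdir -p "$HOME/edk2-gcc49/bin"
for tool in gcc ar ld nm objcopy; do
  ln -sf "/opt/cross/x86_64-elf/bin/x86_64-elf-$tool" "$HOME/edk2-gcc49/bin/$tool"
done

# tools_def uses this value as a raw prefix, so keep the trailing slash.
export GCC49_BIN="$HOME/edk2-gcc49/bin/"
```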
 
> (At least this worked well for me in the past on some of our build
> pool machines.)
> 
> > Today all I want to do is FIX THE BITROT, especially given that the fix
> > is pretty trivial.
> 
> UNIXGCC is MinGW GCC 4.3, so fixing the bitrot would mean updating the
> script to allow it to build MinGW GCC 4.3.

[I think you meant "to build something newer than MinGW GCC 4.3."]

I *did* update the mingw-gcc-build.py script. I also updated the 
tools_def.template to match it. It made it GCC 4.9.3 and binutils 2.25. That's 
why it says "make it work again" in the subject line.

(Now you will tell me it's my fault you didn't notice the other patch because 
I used Kmail.)

> I'd rather see us finally admit that UNIXGCC is dead, and remove it
> than see someone update the script to build it. I've tried to get both
> ELFGCC and UNIXGCC deprecated. I think the only thing that has
> happened is that we have stopped testing those toolchains, and instead
> GCC4* is tested.
> 
> > You don't have to "support" it. But since it's there and since I've
> > explained that in spite of your perspective it has merit, why don't we
> > just clean it up a little until someone comes up with a better idea.
> > Apparently everyone was content to ignore the UNIXGCC toolchain before;
> > you're certainly free to go back to doing that again after applying the
> > patches.
> 
> So 'everyone' is ignoring it, but we need it updated in our
> tools_def.template?

In this context:

everyone == all the EDK2 maintainers

I did not mean:

everyone != everybody outside Intel using EDK2

_I'm_ clearly not ignoring it.
 
> If we want to add support for MinGW GCC 4.9, then I'd rather see it
> called MINGWGCC49.

And when you do that, you're still going to have to apply the patch that I just 
gave you to fix the .dsc files in OVMF because _any_ MinGW build will be 
missing the support for the -z flag.

> The UNIXGCC name is too generic. But, I still think
> it is better (today) to just recommend/use GCC49.

It is not _better_. It is only _different_. You personally prefer it that way.

But today I don't care about that. Today I just want to FIX THE BITROT. If 
someone wants to rename UNIXGCC to MINGWGCC they can submit another patch to 
do that after the bitrot is fixed.

-Bill

> -Jordan
> 
> > > Maybe tools_def.template gets split into tools_def.supported,
> > > tools_def.deprecated, and tools_def.community. Or, maybe 

Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-16 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 12:54:33 on Thursday 16 June 2016 and say:

> On 2016-06-16 10:14:50, Bill Paul wrote:
> > Of all the gin joints in all the towns in all the world, Jordan Justen
> > had to
> > 
> > walk into mine at 09:37:31 on Thursday 16 June 2016 and say:
> > > On 2016-06-16 09:19:41, Bill Paul wrote:
> > > > Of all the gin joints in all the towns in all the world, Jordan
> > > > Justen had to
> > > > 
> > > > walk into mine at 18:11:27 on Wednesday 15 June 2016 and say:
> > > > > Can you use git send-email rather than KMail to send your patches,
> > > > > so they will be threaded?
> > > > 
> > > > You know, I spent about 10 minutes looking over my patches trying to
> > > > think if there was *anything* I'd forgotten to do that someone might
> > > > nitpick me over, and for once I thought I'd gotten everything right. I
> > > > guess I should have known better.
> > > > 
> > > > No, actually, I can't use git send-email. I only have one machine
> > > > that's setup to send e-mail and it's not the one I used for
> > > > development.
> > > 
> > > Yeah. I've had a similar situation with some temp dev machines. Two
> > > things that I've used in the past are:
> > > 
> > > 1. Push the branch to a personal git repo. Fetch it on the machine
> > > 
> > >that can send email. Generate and send the patches.
> > > 
> > > 2. Generate the patches, and copy them to a machine that can send the
> > > 
> > >patches. Use git send-email to send the patches.
> > 
> > That's fine, but are you saying I have to do one of these things right now
> > in order to get these patches accepted?
> 
> No. I wouldn't say that.
> 
> I don't think we should make these changes.

I knew you were going to say that.

> Rather than promoting
> usage of mingw based toolchains, I think we should deprecate them
> altogether. They are not recommended toolchains for EDK II, and I
> think they only cause confusion.

Because building EDK2/OVMF is already so simple otherwise?

I must disagree. The pieces are there, and aside from some minor bitrot, they 
still work. If someone wants to come along and do a full re-evaluation of this 
matter later, that's fine. All I want to do now is just fix the bitrot.

> There's really nothing preventing you from having your own personal
> tools_def config for a toolchain. The real question is whether the
> toolchain is useful for a lot of people, or something we want to
> officially support.
> 
> I'm still not sure what is preventing you from using GCC49. Last time
> this came up, I don't think you answered why. David had a half-way
> decent reason for using a mingw based toolchain, but I personally
> don't think it was good enough to keep a separate toolchain in
> tools_def.template.

I did explain why, though in fairness it was a while ago, so permit me to 
reiterate: the expectation is to use the host GCC toolchain to do builds, and 
the particular system I chose to use (FreeBSD 9.1) had a host GCC and 
binutils that weren't quite up to the task. However, it was perfectly capable 
of bootstrapping a cross compiler.

So I did that and it worked great. It was certainly less hassle than 
upgrading the system compiler. Actually the system compiler in FreeBSD is now 
clang, but that's precisely the point: no matter what happens to the host 
compiler, as long as it can bootstrap a cross-build toolchain, I'll always be 
able to create a working build environment for EDK2.

Also, from my point of view as an embedded software developer, it's not 
strictly correct to use a host toolchain for building the EDK2 firmware since 
it's fundamentally a cross-build project. The host compiler is mainly for 
native applications, which EDK firmware images, drivers and applications are 
not.

Can you sometimes use the host compiler for standalone images? Sure! FreeBSD 
does it to build its bootloader apps. However that's expected because FreeBSD 
is also self-hosting.

For VxWorks on the Intel platform, we (Wind River/Intel) use ELF images, but 
we don't expect people to build VxWorks with the ELF GCC that comes with 
Linux. You might actually be able to do that (because ELF is ELF and x86 is 
x86) but really you're supposed to use the cross-build tool chain that comes 
with the SDK. This is mainly so that we know that regardless of whether they 
compile on Linux or on Windows, everything behaves the same way. The same 
source is always compiled to the same object code.

Windows is the one murky case. The EFI firmware was originally developed with 
Microsoft comp

Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-16 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 09:37:31 on Thursday 16 June 2016 and say:

> On 2016-06-16 09:19:41, Bill Paul wrote:
> > Of all the gin joints in all the towns in all the world, Jordan Justen
> > had to
> > 
> > walk into mine at 18:11:27 on Wednesday 15 June 2016 and say:
> > > Can you use git send-email rather than KMail to send your patches, so
> > > they will be threaded?
> > 
> > You know, I spent about 10 minutes looking over my patches trying to
> > think if there was *anything* I'd forgotten to do that someone might
> > nitpick me over, and for once I thought I'd gotten everything right. I
> > guess I should have known better.
> > 
> > No, actually, I can't use git send-email. I only have one machine that's
> > setup to send e-mail and it's not the one I used for development.
> 
> Yeah. I've had a similar situation with some temp dev machines. Two
> things that I've used in the past are:
> 
> 1. Push the branch to a personal git repo. Fetch it on the machine
>that can send email. Generate and send the patches.
> 
> 2. Generate the patches, and copy them to a machine that can send the
>patches. Use git send-email to send the patches.

That's fine, but are you saying I have to do one of these things right now in 
order to get these patches accepted?

-Bill

> -Jordan
> 
> > > On 2016-06-15 16:36:12, Bill Paul wrote:
> > > > A while ago there was some talk of updating the UNIXGCC toolchain to
> > > > support a newer version of GCC and binutils. Unfortunately after
> > > > almost a year, nothing has happened. (I think Ard Biesheuvel said he
> > > > had plans to fix this, but apparently nothing came of this.) In fact
> > > > things have gotten slightly worse.
> > > > 
> > > > I've listened to all the various opinions about keeping the UNIXGCC
> > > > toolchain option around, but I still think it's useful, and the fixes
> > > > to update it and make it work again are small, so I'm hoping there
> > > > won't be tremendous resistance them.
> > > 
> > > I don't think we should 'upgrade' UNIXGCC. Instead, I think we should
> > > deprecate it. I think a better idea would be a MINGWGCC49 toolchain,
> > > but even then, I don't think it is worth-while to maintain a separate
> > > mingw gcc based toolchain.
> > > 
> > > Any reason that you can't use an elf based GCC 4.9 with the GCC49
> > > toolchain? This is the best supported toolchain for (non OS X)
> > > unix-like environments.
> > > 
> > > -Jordan
> > > 
> > > > This patch set updates the mingw-gcc-build.py script to use GCC 4.9.3
> > > > and binutils 2.25, and updates the rules for UNIXGCC in tools_def
> > > > accordingly. The only real issue is that the newer compiler version must
> > > > not use underscore decorations for X64 builds.
> > > > 
> > > > Aside from fixing the build script and rules, the only problem I ran
> > > > into is that the -z linker option used to force 4K section
> > > > alignment only works with ELF versions of GCC. With the MinGW linker
> > > > (which is targeted for PE/COFF), you need to use different flags. I
> > > > tried to adjust the rules to add an exception for the UNIXGCC case
> > > > without breaking the other cases. This should be thoroughly reviewed
> > > > to make sure I did it right.
> > > > 
> > > > With these fixes I was able to build working IA32 and X64 release
> > > > images of the OVMF firmware on my FreeBSD/amd64 host.
> > > > 
> > > > Bill Paul (2):
> > > >   This commit updates the support for MinGW/UNIXGCC cross-build
> > > >   
> > > > toolchain.
> > > >   
> > > >   This commit makes OvmfPkg builds work with UNIXGCC again.
> > > >  
> > > >  BaseTools/Conf/tools_def.template | 19 ---
> > > >  BaseTools/gcc/mingw-gcc-build.py  | 11 +--
> > > >  OvmfPkg/OvmfPkgIa32.dsc   |  3 ++-
> > > >  OvmfPkg/OvmfPkgIa32X64.dsc|  3 ++-
> > > >  OvmfPkg/OvmfPkgX64.dsc|  3 ++-
> > > >  5 files changed, 23 insertions(+), 16 deletions(-)



Re: [edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-16 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 18:11:27 on Wednesday 15 June 2016 and say:

> Can you use git send-email rather than KMail to send your patches, so
> they will be threaded?

You know, I spent about 10 minutes looking over my patches trying to think if 
there was *anything* I'd forgotten to do that someone might nitpick me over, 
and for once I thought I'd gotten everything right. I guess I should have 
known better.

No, actually, I can't use git send-email. I only have one machine that's setup 
to send e-mail and it's not the one I used for development.

-Bill
 
> On 2016-06-15 16:36:12, Bill Paul wrote:
> > A while ago there was some talk of updating the UNIXGCC toolchain to
> > support a newer version of GCC and binutils. Unfortunately after almost
> > a year, nothing has happened. (I think Ard Biesheuvel said he had plans
> > to fix this, but apparently nothing came of this.) In fact things have
> > gotten slightly worse.
> > 
> > I've listened to all the various opinions about keeping the UNIXGCC
> > toolchain option around, but I still think it's useful, and the fixes to
> > update it and make it work again are small, so I'm hoping there won't be
> > tremendous resistance to them.
> 
> I don't think we should 'upgrade' UNIXGCC. Instead, I think we should
> deprecate it. I think a better idea would be a MINGWGCC49 toolchain,
> but even then, I don't think it is worth-while to maintain a separate
> mingw gcc based toolchain.
> 
> Any reason that you can't use an elf based GCC 4.9 with the GCC49
> toolchain? This is the best supported toolchain for (non OS X)
> unix-like environments.
> 
> -Jordan
> 
> > This patch set updates the mingw-gcc-build.py script to use GCC 4.9.3
> > and binutils 2.25, and updates the rules for UNIXGCC in tools_def
> > accordingly. The only real issue is that the newer compiler version must
> > not use underscore decorations for X64 builds.
> > 
> > Aside from fixing the build script and rules, the only problem I ran into
> > is that the -z linker option used to force 4K section alignment only
> > works with ELF versions of GCC. With the MinGW linker (which is targeted for
> > PE/COFF), you need to use different flags. I tried to adjust the rules
> > to add an exception for the UNIXGCC case without breaking the other
> > cases. This should be thoroughly reviewed to make sure I did it right.
> > 
> > With these fixes I was able to build working IA32 and X64 release images
> > of the OVMF firmware on my FreeBSD/amd64 host.
> > 
> > Bill Paul (2):
> >   This commit updates the support for MinGW/UNIXGCC cross-build
> >   
> > toolchain.
> >   
> >   This commit makes OvmfPkg builds work with UNIXGCC again.
> >  
> >  BaseTools/Conf/tools_def.template | 19 ---
> >  BaseTools/gcc/mingw-gcc-build.py  | 11 +--
> >  OvmfPkg/OvmfPkgIa32.dsc   |  3 ++-
> >  OvmfPkg/OvmfPkgIa32X64.dsc|  3 ++-
> >  OvmfPkg/OvmfPkgX64.dsc|  3 ++-
> >  5 files changed, 23 insertions(+), 16 deletions(-)



[edk2] [PATCH 1/2] BaseTools: This commit updates the support for MinGW/UNIXGCC cross-build toolchain.

2016-06-15 Thread Bill Paul
The following changes have been made:

- The mingw-gcc-build.py script now uses GCC 4.9.3 and binutils 2.25.

- GCC 4.3.0 used underscore decoration for both IA32 and X64 builds, but
the official convention is that it's only used on IA32, and newer versions
of MinGW GCC now follow this convention. A new set of macros for the X64
case in tools_def have been added to remove the explicit exclusion of
underscores, and the UNIXGCC tool definition has been updated to use them.

- Explicit DEBUG and RELEASE versions of CC_FLAGS have been added for
the UNIXGCC tool definition so that -Wno-unused-but-set-variable can be
specified for RELEASE builds.

- Documentation has been updated in tools_def to indicate the new GCC
and binutils versions.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Bill Paul <wp...@windriver.com>
---
 BaseTools/Conf/tools_def.template | 19 ---
 BaseTools/gcc/mingw-gcc-build.py  | 11 +--
 2 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/BaseTools/Conf/tools_def.template 
b/BaseTools/Conf/tools_def.template
index 2065fa3..c75ee38 100644
--- a/BaseTools/Conf/tools_def.template
+++ b/BaseTools/Conf/tools_def.template
@@ -324,8 +324,8 @@ DEFINE SOURCERY_CYGWIN_TOOLS = /cygdrive/c/Program 
Files/CodeSourcery/Sourcery G
 #   Intel(r) ACPI Compiler (iasl.exe) from
 #   https://acpica.org/downloads
 #   UNIXGCC -UNIX-   Requires:
-# GCC 4.3.0
-# binutils 2.20.51.0.5
+# GCC 4.9.3
+# binutils 2.25
 #Optional:
 # Required to build platforms or ACPI tables:
 #   Intel(r) ACPI Compiler from
@@ -4333,9 +4333,11 @@ DEFINE GCC_ARM_AARCH64_DLINK_COMMON= --emit-relocs -
nostdlib --gc-sections -u $(
 DEFINE GCC_ARM_DLINK_FLAGS = DEF(GCC_ARM_AARCH64_DLINK_COMMON) -z 
common-page-size=0x20
 DEFINE GCC_AARCH64_DLINK_FLAGS = DEF(GCC_ARM_AARCH64_DLINK_COMMON) -z 
common-page-size=0x20
 DEFINE GCC_IA32_X64_ASLDLINK_FLAGS = DEF(GCC_IA32_X64_DLINK_COMMON) --entry 
_ReferenceAcpiTable -u $(IMAGE_ENTRY_POINT)
+DEFINE GCC_X64_ASLDLINK_FLAGS  = DEF(GCC_IA32_X64_DLINK_COMMON) --entry 
ReferenceAcpiTable -u $(IMAGE_ENTRY_POINT)
 DEFINE GCC_ARM_ASLDLINK_FLAGS  = DEF(GCC_ARM_DLINK_FLAGS) --entry 
ReferenceAcpiTable -u $(IMAGE_ENTRY_POINT)
 DEFINE GCC_AARCH64_ASLDLINK_FLAGS  = DEF(GCC_AARCH64_DLINK_FLAGS) --entry 
ReferenceAcpiTable -u $(IMAGE_ENTRY_POINT)
 DEFINE GCC_IA32_X64_DLINK_FLAGS= DEF(GCC_IA32_X64_DLINK_COMMON) --entry 
_$(IMAGE_ENTRY_POINT) --file-alignment 0x20 --section-alignment 0x20 -Map 
$(DEST_DIR_DEBUG)/$(BASE_NAME).map
+DEFINE GCC_X64_DLINK_FLAGS = DEF(GCC_IA32_X64_DLINK_COMMON) --entry 
$(IMAGE_ENTRY_POINT) --file-alignment 0x20 --section-alignment 0x20 -Map 
$(DEST_DIR_DEBUG)/$(BASE_NAME).map
 DEFINE GCC_IPF_DLINK_FLAGS = -nostdlib -O2 --gc-sections --dll -
static --entry $(IMAGE_ENTRY_POINT) --undefined $(IMAGE_ENTRY_POINT) -Map 
$(DEST_DIR_DEBUG)/$(BASE_NAME).map
 DEFINE GCC_IPF_OBJCOPY_FLAGS   = -I elf64-ia64-little -O efi-bsdrv-ia64
 DEFINE GCC_IPF_SYMRENAME_FLAGS = --redefine-sym memcpy=CopyMem
@@ -4463,9 +4465,9 @@ DEFINE GCC49_AARCH64_ASLDLINK_FLAGS  = 
DEF(GCC48_AARCH64_ASLDLINK_FLAGS)
 *_UNIXGCC_*_ASL_PATH = DEF(UNIX_IASL_BIN)
 
 *_UNIXGCC_IA32_DLINK_FLAGS   = DEF(GCC_IA32_X64_DLINK_FLAGS) --
image-base=0
-*_UNIXGCC_X64_DLINK_FLAGS= DEF(GCC_IA32_X64_DLINK_FLAGS) --
image-base=0
+*_UNIXGCC_X64_DLINK_FLAGS= DEF(GCC_X64_DLINK_FLAGS) --image-
base=0
 *_UNIXGCC_IA32_ASLDLINK_FLAGS= DEF(GCC_IA32_X64_ASLDLINK_FLAGS)
-*_UNIXGCC_X64_ASLDLINK_FLAGS = DEF(GCC_IA32_X64_ASLDLINK_FLAGS)
+*_UNIXGCC_X64_ASLDLINK_FLAGS = DEF(GCC_X64_ASLDLINK_FLAGS)
 *_UNIXGCC_*_ASM_FLAGS= DEF(GCC_ASM_FLAGS)
 *_UNIXGCC_*_PP_FLAGS = DEF(GCC_PP_FLAGS)
 *_UNIXGCC_*_ASLPP_FLAGS  = DEF(GCC_ASLPP_FLAGS)
@@ -4490,10 +4492,11 @@ DEFINE GCC49_AARCH64_ASLDLINK_FLAGS  = 
DEF(GCC48_AARCH64_ASLDLINK_FLAGS)
 *_UNIXGCC_IA32_VFRPP_PATH   = DEF(UNIXGCC_IA32_PETOOLS_PREFIX)gcc
 *_UNIXGCC_IA32_RC_PATH  = DEF(UNIXGCC_IA32_PETOOLS_PREFIX)objcopy
 
-*_UNIXGCC_IA32_CC_FLAGS = DEF(GCC_IA32_CC_FLAGS)
 *_UNIXGCC_IA32_RC_FLAGS = DEF(GCC_IA32_RC_FLAGS)
 *_UNIXGCC_IA32_OBJCOPY_FLAGS=
 *_UNIXGCC_IA32_NASM_FLAGS   = -f win32
+DEBUG_UNIXGCC_IA32_CC_FLAGS = DEF(GCC_IA32_CC_FLAGS)
+RELEASE_UNIXGCC_IA32_CC_FLAGS   = DEF(GCC_IA32_CC_FLAGS) -Wno-unused-but-
set-variable
 
 ##
 # X64 definitions
@@ -4510,10 +4513,11 @@ DEFINE GCC49_AARCH64_ASLDLINK_FLAGS  = 
DEF(GCC48_AARCH64_ASLDLINK_FLAGS)
 *_UNIXGCC_X64_RC_PATH   = DEF(UNIXGCC_X64_PETOOLS_PREFIX)o

[edk2] [PATCH 2/2] OvmfPkg: This commit makes OvmfPkg builds work with UNIXGCC again.

2016-06-15 Thread Bill Paul

Previously, a linker rule was added to the .dsc files to force section
alignment to 4096 bytes. This rule was wildcarded to apply to all GCC
builds. Unfortunately this only works for GCC/binutils that are targeted
for ELF. It doesn't work with UNIXGCC, because the MinGW version of the
GNU linker doesn't support the -z flag. The equivalent of
-z common-page-size=x for MinGW is to use --section-alignment=x and
--file-alignment=x.

The linker rules have been updated so that the existing -z option is
still applied for ELF GCC builds as before while an exception is used
for UNIXGCC so that it uses the equivalent PE flags instead.

Contributed-under: TianoCore Contribution Agreement 1.0
Signed-off-by: Bill Paul <wp...@windriver.com>
---
 OvmfPkg/OvmfPkgIa32.dsc| 3 ++-
 OvmfPkg/OvmfPkgIa32X64.dsc | 3 ++-
 OvmfPkg/OvmfPkgX64.dsc | 3 ++-
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/OvmfPkg/OvmfPkgIa32.dsc b/OvmfPkg/OvmfPkgIa32.dsc
index 737f300..765e0b6 100644
--- a/OvmfPkg/OvmfPkgIa32.dsc
+++ b/OvmfPkg/OvmfPkgIa32.dsc
@@ -46,7 +46,8 @@
   GCC:*_*_*_CC_FLAGS   = -mno-mmx -mno-sse
 
 [BuildOptions.common.EDKII.DXE_RUNTIME_DRIVER]
-  GCC:*_*_*_DLINK_FLAGS = -z common-page-size=0x1000
+  GCC:*_UNIXGCC_*_DLINK_FLAGS = --section-alignment=0x1000 --file-
alignment=0x1000
+  GCC:*_GCC*_*_DLINK_FLAGS = -z common-page-size=0x1000
 
 

 #
diff --git a/OvmfPkg/OvmfPkgIa32X64.dsc b/OvmfPkg/OvmfPkgIa32X64.dsc
index 854cf6d..55fbce0 100644
--- a/OvmfPkg/OvmfPkgIa32X64.dsc
+++ b/OvmfPkg/OvmfPkgIa32X64.dsc
@@ -51,7 +51,8 @@
 !endif
 
 [BuildOptions.common.EDKII.DXE_RUNTIME_DRIVER]
-  GCC:*_*_*_DLINK_FLAGS = -z common-page-size=0x1000
+  GCC:*_UNIXGCC_*_DLINK_FLAGS = --section-alignment=0x1000 --file-
alignment=0x1000
+  GCC:*_GCC*_*_DLINK_FLAGS = -z common-page-size=0x1000
 
 

 #
diff --git a/OvmfPkg/OvmfPkgX64.dsc b/OvmfPkg/OvmfPkgX64.dsc
index 0cb2f60..5466bee 100644
--- a/OvmfPkg/OvmfPkgX64.dsc
+++ b/OvmfPkg/OvmfPkgX64.dsc
@@ -51,7 +51,8 @@
 !endif
 
 [BuildOptions.common.EDKII.DXE_RUNTIME_DRIVER]
-  GCC:*_*_*_DLINK_FLAGS = -z common-page-size=0x1000
+  GCC:*_UNIXGCC_*_DLINK_FLAGS = --section-alignment=0x1000 --file-
alignment=0x1000
+  GCC:*_GCC*_*_DLINK_FLAGS = -z common-page-size=0x1000
 
 

 #
-- 
2.4.6



[edk2] [PATCH 0/2] Update UNIXGCC toolchain and make it work again

2016-06-15 Thread Bill Paul
A while ago there was some talk of updating the UNIXGCC toolchain to
support a newer version of GCC and binutils. Unfortunately after almost
a year, nothing has happened. (I think Ard Biesheuvel said he had plans
to fix this, but apparently nothing came of this.) In fact things have
gotten slightly worse.

I've listened to all the various opinions about keeping the UNIXGCC
toolchain option around, but I still think it's useful, and the fixes to
update it and make it work again are small, so I'm hoping there won't be
tremendous resistance to them.

This patch set updates the mingw-gcc-build.py script to use GCC 4.9.3
and binutils 2.25, and updates the rules for UNIXGCC in tools_def
accordingly. The only real issue is that the newer compiler version must
not use underscore decorations for X64 builds.

Aside from fixing the build script and rules, the only problem I ran into
is that the -z linker option used to force 4K section alignment only works
with ELF versions of GCC. With the MinGW linker (which is targeted for PE/COFF),
you need to use different flags. I tried to adjust the rules to add an
exception for the UNIXGCC case without breaking the other cases. This should
be thoroughly reviewed to make sure I did it right.

With these fixes I was able to build working IA32 and X64 release images
of the OVMF firmware on my FreeBSD/amd64 host.

Bill Paul (2):
  This commit updates the support for MinGW/UNIXGCC cross-build
toolchain.
  This commit makes OvmfPkg builds work with UNIXGCC again.

 BaseTools/Conf/tools_def.template | 19 ---
 BaseTools/gcc/mingw-gcc-build.py  | 11 +--
 OvmfPkg/OvmfPkgIa32.dsc   |  3 ++-
 OvmfPkg/OvmfPkgIa32X64.dsc|  3 ++-
 OvmfPkg/OvmfPkgX64.dsc|  3 ++-
 5 files changed, 23 insertions(+), 16 deletions(-)

-- 
2.4.6



Re: [edk2] PCIe memory transaction issue

2016-03-28 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Shaveta Leekha had to 
walk into mine at 10:47:05 on Monday 28 March 2016 and say:

> Thanks Bill !
> Yes I am using "Undi EDK Intel(R) PRO/1000" driver from Intel,
> it is for e1000_82575 NIC card.
> 
> Yes, the driver seems to have the support for 64-bit. Rest of the replies
> are in-lined
> 
> Vendor-id: device id is:
> 
> Shell> pci
>Seg  Bus  Dev  Func
>---  ---  ---  
> 00   00   0000 ==> Bridge Device - PCI/PCI bridge
>  Vendor 1957 Device 8040 Prog Interface 0
> 00   01   0000 ==> Network Controller - Ethernet controller
>  Vendor 8086 Device 10D3 Prog Interface 0

I'm sorry, but this is not an 82575 NIC. This is an 82574L. However that 
doesn't matter: it still supports 64-bit DMA.

To answer your questions:

> [Shaveta] I have intel 82575 NIC card. Does it support 64 bit addressing?

You have an Intel 82574 NIC, but yes it does support 64-bit addressing.

> [Shaveta] Does E1000 driver always does a DMA for getting the buffer from
> system memory Or access memory via core?

All PRO/1000 devices use DMA transfers to move packets between the host CPU 
and the NIC. There's no option for doing programmed I/O instead, as far as I'm 
aware.

> [Shaveta]   It seems that E1000 intel driver is writing both upper and
> lower bits for Tx and Rx:
[...]

Okay, that's good. Check that it also populates the RX and TX DMA descriptors 
with 64-bit addresses too. (The descriptors also have 64-bit address fields, 
with upper and lower portions.)

> [Shaveta]  I didn't program Inbound windows, so they are open. Means any
> inbound transaction would come as it is.

In your follow-up e-mail, you also say:

> [Shaveta]   I am using layerscape ARM V8 LS2080 paltform.

> Yes, I am programming outbound windows, but no programming for inbound
> windows. The driver was working perfectly fine when system was using 32-bit 
> memory(DDR) space, it only broke when system memory(DDR) area have been
> relocated to 64 bit address space.

> So I am not much doubting outbound windows programming, should I?

Again, the _outbound_ windows only affect the ability of the host to 
read/write the PRO/1000 device's register banks. It doesn't affect DMA.

It looks like the LS2080 is in pre-production and the reference manual isn't 
(publicly) available, so I can't be completely sure what you need to do for 
this SoC.

I'm familiar with the PCIe controller logic in the Freescale/NXP PPC QorIQ 
parts (P2020, P4080, T4240, T2080), but I haven't worked on one of the ARM 
DPAA parts yet. (I have worked on the i.MX6Q though, but that's only a 32-bit 
A9 core.)

However, if the PCIe controller in the LS2080 is anything like its PPC DPAA 
relatives, then it should have a set of registers for configuring each inbound 
and outbound window. The PPC parts have a few different outbound windows which 
can be configured with different sizes/bases and attributes so that you can 
map different kinds of BARs. (MEMIO, I/O space, etc...)

For inbound windows, there's usually at least two sets of registers (one for 
regular DMA transfers, and potentially one other for delivery of MSIs). I 
don't know what the reset default values are for the inbound window registers 
on your device. Usually you have to initialize the window base and translation 
addresses and the attributes, which includes the window size, snooping control 
bits, and so on. (You usually want to enable snooping for DMA transfers since 
then the hardware will enforce cache coherency for you.)

The window size field for the PPC parts is a value from 0 to some maximum that 
specifies the window size in powers of 2. There's usually a table in the 
attribute register description that lists the valid settings. The maximum may 
be something like 1TB, depending on the part.

If this was a T2080 CPU for example and the inbound window base address was 0 
and the size was constrained to only 4GB, then PCIe bus master devices would 
not be able to perform DMA transfers beyond the first 4GB of physical address 
space. You would need to change the size field in the attribute register to 
make the window bigger.

-Bill

> Regards,
> Shaveta
> 
> -Original Message-
> From: Bill Paul [mailto:wp...@windriver.com]
> Sent: Monday, March 28, 2016 10:04 PM
> To: edk2-de...@ml01.01.org
> Cc: Shaveta Leekha <shaveta.lee...@nxp.com>; edk2-devel@lists.01.org
> <edk2-de...@ml01.01.org> Subject: Re: [edk2] PCIe memory transaction issue
> 
> Of all the gin joints in all the towns in all the world, Shaveta Leekha had 
to walk into mine at 00:29:39 on Monday 28 March 2016 and say:
> > Hi,
> > 
> > In PCIe memory transactions, I am facing an issue.
> > 
> > The scenario is:
> > 
> > Case 1:
> > In our system, we h

Re: [edk2] PCIe memory transaction issue

2016-03-28 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Bill Paul had to walk 
into mine at 09:33:57 on Monday 28 March 2016 and say:

> Of all the gin joints in all the towns in all the world, Shaveta Leekha had
> to
> 
> walk into mine at 00:29:39 on Monday 28 March 2016 and say:
> > Hi,
> > 
> > In PCIe memory transactions, I am facing an issue.
> > 
> > The scenario is:
> > 
> > Case 1:
> > In our system, we have allocated 32 bit memory space to one of the PCI
> > device (E1000 NIC card)
> 
> You did not say which Intel PRO/1000 card (vendor/device ID). There are
> literally dozens of them. (It's actually not that critical, but I'm
> curious.)
> 
> > during enumeration and BAR programming. When NIC
> > card is getting used to transmit a ping packet, a local buffer is getting
> > allocated from 32 bit main memory space. In this case, the packet is
> > getting sent out successfully.
> > 
> > 
> > Case 2:
> > Now when NIC card is getting used to transmit a ping packet, if a local
> > buffer is allocated from 64 bit main memory space. The packet failed to
> > transmit out.
> > 
> > Doubt 1: Would it be possible for this PCI device/NIC card (in our case)
> > to access this 64 bit address space for sending this packet out of
> > system?
> 
> I don't know offhand how the UEFI PRO/1000 driver handles this, but I know
> that pretty much all Intel PRO/1000 cards support 64-bit DMA addressing.
> 
> Some older PCI cards, like, say, the Intel 82557/8/9 PRO/100 cards, only
> support 32-bit addressing. That means that they only accept DMA
> source/target addresses that are 32-bits wide. For those, if you have a
> 64-bit system, you must use "bounce buffering." That is, the device can
> only DMA from addresses within the first 4GB of physical memory. If you
> have a packet buffer outside that window, then you have to copy it to a
> temporary buffer inside the window first (i.e. "bounce" it) and then set
> up the DMA transfer from that location instead.
> 
> This requires you to be able to allocate some storage from specific
> physical address regions (i.e. you have to ensure the storage is inside
> the 4GB window).
> 
> However the PRO/1000 doesn't have this limitation: you can specify fully
> qualified 64-bit addresses for both the RX and TX DMA ring base addresses
> and the packet buffers in the DMA descriptors, so you never need bounce
> buffering. This was true even for the earliest PCI-X PRO/1000 NICs, and is
> still true for the PCIe ones.
> 
> For the base addresses, you have two 32-bit registers: one for the upper 32
> bits and one for the lower 32 bits. You have to initialize both. Drivers
> written for 32-bit systems will often hard code the upper 32 bits of the
> address fields to 0. If you use that same driver code on a 64-bit system,
> DMA transfers will still be initiated, but the source/target addresses
> will be wrong.
> 
> > Doubt 2: If a device is allocated 32 bit Memory mapped space from 32 bit
> > memory area, then for packet transactions, can we use 64 bit memory
> > space?
> 
> Just to clarify: do not confuse the BAR mappings with DMA. They are two
> different concepts. I think a 64-bit BAR allows you to map the device's
> register bank anywhere within the 64-bit address space, whereas with a
> 32-bit BAR you have to map the registers within the first 4GB of address
> space (preferably somewhere that doesn't overlap RAM). However that has
> nothing to do with how DMA works: even with the PRO/1000's BARs mapped to
> a 32-bit region, you should still be able to perform DMA transfers to/from
> any 64-bit address.
> 
> The BARs use an outbound, i.e. the host issues outbound read/write requests
> and the device is the target of those requests.
> 
> DMA transfers use an inbound window, i.e. the devices issues read/write
> requests and the host is the target of those requests.
> 
> The PRO/100 requires 32-bit addressing for both inbound and outbound
> requests.
> 
> The PRO/1000 can use 64-bit addressing.

Oh, sorry, there's something else I forgot to mention:

In addition to writing the PRO/1000 driver to correctly support 64-bit DMA 
addressing, it's sometimes necessary to program the PCIe controller itself 
correctly as well. I actually don't know how you'd do this on Intel IA32 or 
X64 platforms, because it involves low-level chipset initialization which is 
considered "secret sauce" by Intel.

But for ARM and PPC SoCs (like those made by Freescale/NXP), I know that you 
have to program the outbound and inbound window sizes and translation offsets 
in order for all transfers to work. (I had to do this for the VxWorks drivers 
for the Fr

Re: [edk2] PCIe memory transaction issue

2016-03-28 Thread Bill Paul
ciMmio32Size|0x4000  # 128M
>   gArmPlatformTokenSpaceGuid.PcdPciMemTranslation|0x14
>   gArmPlatformTokenSpaceGuid.PcdPciMmio64Base|0x144000
>   gArmPlatformTokenSpaceGuid.PcdPciMmio64Size|0x4000
> ___
> edk2-devel mailing list
> edk2-devel@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel

-- 
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
                 wp...@windriver.com | Master of Unix-Fu - Wind River Systems
=============================================================================
   "I put a dollar in a change machine. Nothing changed." - George Carlin
=============================================================================


Re: [edk2] [PATCH v2 00/16] unify GCC command line options

2015-10-08 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Bill Paul had to walk 
into mine at 10:30:26 on Monday 24 August 2015 and say:

> Of all the gin joints in all the towns in all the world, Ard Biesheuvel had
> to
> 
> walk into mine at 10:22:59 on Monday 24 August 2015 and say:
> > On 24 August 2015 at 19:20, Bill Paul <wp...@windriver.com> wrote:
> > > Of all the gin joints in all the towns in all the world, Ard Biesheuvel
> > > had to
> > > 
> > > walk into mine at 10:06:10 on Monday 24 August 2015 and say:
> > >> On 24 August 2015 at 19:02, Bill Paul <wp...@windriver.com> wrote:
> > >> > Of all the gin joints in all the towns in all the world, Ard
> > >> > Biesheuvel had to
> > 
> > >> > walk into mine at 09:54:08 on Monday 24 August 2015 and say:
> > [...]
> > 
> > >> >> Jordan suggested to drop UNIXGCC as well, and introduce MINGW
> > >> >> instead iff we want the MinGW PE/COFF GCC, and I think we do, if
> > >> >> only so that we have a LLP64 environment for X64 available to
> > >> >> those without the possibility or the desire to run a MS toolchains
> > >> >> under Windows.
> > >> > 
> > >> > People should be able to build a known-good crossbuild toolchain.
> > >> > This is the simplest way to provide that option.
> > >> 
> > >> Meh. The primary audience of this feature are people building UEFI for
> > >> X64 on X64, in which case the GCC4x options are arguably simpler. But
> > >> apparently we agree that we should keep it /and/ support it.
> > >> 
> > >> > By the way, do you think I can get you to update the
> > >> > mingw-gcc-build.py script while you're at it? :)
> > >> 
> > >> I proposed some updates here
> > >> http://thread.gmane.org/gmane.comp.bios.edk2.devel/1297
> > >> (with you on cc). Care to ack those?
> > > 
> > > Is there a particular reason why you chose to use binutils from
> > > www.kernel.org rather than from ftpmirror.gnu.org (other than "that's
> > > what it was doing before")?
> > 
> > Nope, that was it :-)
> > 
> > In fact, I vaguely remember noticing the kernel.org URL and thinking
> > "hmm that's odd" but for some reason, it did not provoke any action on
> > my part
> 
> My attention was drawn to it before because the specific version the script
> was looking for previously ceased to exist on www.kernel.org, which broke
> the script.
> 
> > > In my testing I used binutils 2.25 from gnu.org, and it worked ok. I
> > > thought it made more sense to get both packages from the same place.
> > > 
> > > source_files_common = {
> > > 
> > > 'binutils': {
> > > 
> > > 'url': 'http://ftpmirror.gnu.org/binutils/' + \
> > > 
> > >'binutils-$version.tar.bz2',
> > > 
> > > 'version': '2.25',
> > > 'md5': 'd9f3303f802a5b6b0bb73a335ab89d66',
> > > },
> > > 
> > > }
> > 
> > Yes, 2.25 would be even better. In fact, it might make sense to wait
> > for 2.26 to appear, since it adds support for --gc-sections (see the
> > other part of this thread) which brings performance of mingw in line
> > with ELF based GCC regarding code size.
> 
> Fair enough, as long as we don't have to wait too long. In any case, aside
> from this, the changes look ok to me.
> 
> -Bill

So... about that "as long as we don't have to wait too long" thing? I think 
it's been too long. :)

-Bill



Re: [edk2] PCI code is finding only Bus 0. I need Bus 1

2015-09-24 Thread Bill Paul


Re: [edk2] [Qemu-devel] Windows does not support DataTableRegion at all [was: docs: describe QEMU's VMGenID design]

2015-09-14 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Laszlo Ersek had to 
walk into mine at 11:20:28 on Monday 14 September 2015 and say:

> On 09/14/15 18:53, Bill Paul wrote:
> > Of all the gin joints in all the towns in all the world, Laszlo Ersek had
> > to
> > 
> > walk into mine at 03:24:42 on Monday 14 September 2015 and say:
> >> On 09/14/15 10:24, Igor Mammedov wrote:
> >>> On Sun, 13 Sep 2015 15:34:51 +0300
> >>> 
> >>> "Michael S. Tsirkin" <m...@redhat.com> wrote:
> >>>> On Sun, Sep 13, 2015 at 01:56:44PM +0200, Laszlo Ersek wrote:
> >>>>> As the subject suggests, I have terrible news.
> >>>>> 
> >>>>> I'll preserve the full context here, so that it's easy to scroll back
> >>>>> to the ASL for reference.
> >>>>> 
> >>>>> I'm also CC'ing edk2-devel, because a number of BIOS developers
> >>>>> should be congregating there.
> >>>> 
> >>>> Wow, bravo! It does look like we need to go back to
> >>>> the drawing board.
> > 
> > I read your original post on this with great interest, and I applaud your
> > determination in tracking this down. Nice job.
> 
> Thank you!
> 
> > Sadly, it seems you too have
> > fallen victim to the "If It Works With Windows, It Must Be Ok" syndrome.
> 
> Well, I'd put it like this: we've fallen victim to a publicly
> undocumented feature gap / divergence from the ACPI spec in Windows'
> ACPI.SYS.
> 
> > Now, I realize that as far as this particular situation is concerned,
> > even if Microsoft decided to add support for DataTableRegion() tomorrow,
> > it wouldn't really help because there are too many different versions of
> > Windows in the field and there's no way to retroactively patch them all.
> > (Gee, that sounds familiar...)
> 
> Correct.
> 
> > Nevertheless, am I correct in saying that this is in fact a bug in
> > Microsoft's ACPI implementation (both in their ASL compiler and in the
> > AML parser)?
> 
> Absolutely. You are absolutely right.
> 
> We implemented the VMGENID spec with some undeniable creativity, but it
> broke down because the AML interpreter in Windows does not support an
> ACPI 2.0 feature.
> 
> (That interpreter is supposed to be ACPI 4.0 compliant, minimally; at
> least if we can judge it after the "matching" AML.exe's stated
> compatibility level, which is ACPI 4.0 in the standalone download, and
> 5.0 if you get it as part of the WDK.)
> 
> > Unless
> > DataTableRegion() is specified to be optional in some way (I don't know
> > if it is or not, but I doubt it),
> 
> It's not, to the best of my knowledge.
> 
> > this sounds like an clear cut case of non-
> > compliance with the ACPI spec.
> 
> Yes, it's precisely that.
> 
> > And if that's true, isn't there any way to get
> > Microsoft to fix it?
> 
> I don't know. Is there?

You would think that someone at Intel would know someone at Microsoft that 
could put some wheels in motion. (All this technology and still we have 
trouble communicating. :P )
 
> Microsoft continue to release updates (KB*) for Windows 7, Windows 8,
> Windows 10, and I think rolling a fix out for those would cover our
> needs quite okay.
> 
> But:
> - This would force QEMU/KVM host users to upgrade their Windows guest.
>   Maybe not such a big hurdle, but I reckon Windows sysadmins are
>   really conservative about installing updates. Perhaps we could solve
>   this issue but documentation.

I agree with you that it's a hassle, but so is patching any other Windows bug. 
And while this particular use of DataTableRegion() affects VMs, it has bearing 
on bare metal installations too.
 
> - More importantly, how do I even *report* this bug? How do I convince
>   Microsoft to implement a whole missing feature in their ACPI compiler
>   and interpreter? Can I demonstrate business justification?
>
>   I'm doubtful especially because DataTableRegion's usefulness is quite
>   apparent to the ACPI developer in search for parametrization options.
>   DataTableRegion was published as part of ACPI 2.0, on July 27, 2000
>   (more than fifteen years ago). I simply cannot imagine that in all
>   that time *no* physical platform's firmware developer tried to use
>   DataTableRegion.
> 
>   Therefore I can't help but assume that some big BIOS developer
>   company has already reported this to Microsoft, and the feature
>   request has been rejected. So what chance would I have?

I understand what you're saying. But, there has to be some way to deal with 
these sorts of 

Re: [edk2] UEFI and NIST SP-147 compliance

2015-09-09 Thread Bill Paul
nature. The BIOS
> installation package should also be signed, and the digital signature
> should be verified before execution. Once the update has executed
> successfully, the configuration baseline should be validated to confirm
> that the computer system is still in compliance with the organization’s
> defined policy.
> 
> Disposition Phase:
> Before the computer system is disposed and leaves the organization, the
> organization should remove or destroy any sensitive data from the system
> BIOS. The configuration baseline should be reset to the manufacturer’s
> default profile; in particular, sensitive settings such as passwords
> should be deleted from the system and keys should also be removed from
> the key store. If the system BIOS includes any organization-specific
> customizations then a vendor-provided BIOS image should be installed.
> This phase of the platform life cycle reduces chances for accidental
> data leakage.
> 
> snip
> 


Re: [edk2] UEFI requirements for 32-bit Windows 8.1?

2015-09-03 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Brian J. Johnson had 
to walk into mine at 08:51:20 on Thursday 03 September 2015 and say:

> On 09/03/2015 05:08 AM, Laszlo Ersek wrote:
> > Hi,
> > 
> > 64-bit Windows 8.1 boots on QEMU + OVMF just fine. (The "pc" (i440fx)
> > machine type of QEMU has "always" worked, and we recently fixed "q35"
> > too.)
> > 
> > However, 32-bit Windows 8.1 (ie. the installer of it) crashes with a
> > BSoD on the 32-bit build of OVMF *immediately*. This happens regardless
> > of the QEMU machine type. The error message I'm getting is:
> > 
> > http://people.redhat.com/~lersek/windows-on-ovmf32/win8-ovmf32.png
> > 
> > According to <https://msdn.microsoft.com/en-us/library/cc704588.aspx>,
> > the error code 0xc0000185 means "STATUS_IO_DEVICE_ERROR".
> > 
> > I also tried with Windows 10:
> > 
> > http://people.redhat.com/~lersek/windows-on-ovmf32/win10-ovmf32.png
> > 
> > Here I get 0xc000000d, "STATUS_INVALID_PARAMETER".
> > 
> > The Windows ISOs I tried with were:
> > - en_windows_8.1_pro_n_vl_with_update_x86_dvd_6051127.iso
> > - en_windows_10_enterprise_2015_ltsb_n_x86_dvd_6848317.iso
> > 
> > Can someone please help me debug this? The difference between x64 and
> > x86 is "inexplicable".
> 
> I've worked through some firmware issues on older MS releases, but never
> Windows 8 or 10.  So this advice may be out of date.  Do you know if
> Windows got through the boot loader and is starting the kernel?  If so,
> you can turn on extra debug messages to show the drivers as they are
> loading.  That can give you some good clues.  If that's not enough, you
> can enable remote debugging and use MS's debuggers (eg. WinDbg) and
> symbol tables to get an idea of the call chain which is failing.  It's
> been a long time since I've done this, so I'm rusty on the specifics...
> searching on msdn.microsoft.com should get you going.
> 
> Historically, Windows has been extremely picky about ACPI tables, much
> more so than Linux.

No: historically hardware vendors have been insufficiently picky about 
creating their ACPI tables, leading to what I have not-so-affectionately named 
the "If It Works With Windows, It Must Be Okay" syndrome.

I think Microsoft uses their own ACPI implementation in Windows rather than 
the Intel reference ACPI CA code. (At the very least I know they have their 
own ASL compiler.) Also, the majority of x86 hardware vendors, aside from 
Apple, only validate their systems with Windows because they perceive their 
target market as being comprised mainly of Windows users. Yes, even today. (Go 
count how many machines have Windows logo stickers on them. Now go count how 
many have little Linux or FreeBSD logo stickers on them. Big difference, isn't 
there.)

As a result, ACPI tables or AML code will often have little quirks that don't 
seem to bother Windows but which break everything else. In more egregious 
cases, the ASL might even be effectively written to say "if (OS == Windows) 
{work right} else {swallow own tongue}."

Given that ACPI is supposed to be an industry standard, you would think this 
wouldn't be a problem. But it's a _big_ and complicated standard, and as with 
any big and complicated standard, getting everyone to interpret it 100% 
unambiguously is hard. The same is true of UEFI.

Some of these issues would be avoided if the hardware manufacturers went to 
the trouble of testing their ACPI blobs with the Intel reference code and 
tools instead of just the Microsoft ones. But Microsoft has no incentive to 
compel logo program participants to do this, and there isn't a "UEFI Forum" logo 
program (as far as I know). Even if there were, they might not feel obliged to 
comply with it anyway, because If It Works With Windows, It Must Be Okay.

I'm sorry I can't offer anything constructive to help you solve this 
particular problem though (other than be persistent and add lots of debug 
instrumentation), but I'm glad to see that at least someone is bothering to 
test the 32-bit build.

-Bill

> Boot issues often have to do with ACPI details.  It
> has also had some quirks re. what it expects in the EFI memory map,
> although those have mostly related to really large systems (eg. PCIe
> segment layout.)
> 
> I see you CC'd some folks at Microsoft.  Hopefully they will be able to
> give you more specific advice.



Re: [edk2] [PATCH v2 00/16] unify GCC command line options

2015-08-24 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Ard Biesheuvel had to 
walk into mine at 10:06:10 on Monday 24 August 2015 and say:

 On 24 August 2015 at 19:02, Bill Paul <wp...@windriver.com> wrote:
  Of all the gin joints in all the towns in all the world, Ard Biesheuvel
  had to
  
  walk into mine at 09:54:08 on Monday 24 August 2015 and say:
   On 19 August 2015 at 00:27, Laszlo Ersek <ler...@redhat.com> wrote:
   On 08/18/15 22:04, Paolo Bonzini wrote:
   On 18/08/2015 08:52, Ard Biesheuvel wrote:
   Personally, I would not mind deprecating GCC44, but the biggest
   question I would have is what toolchains do the latest UDK releases
   claim to support.
   
   We also have the issue that every time I ask about deprecating a
   toolchain, Larry looks at me like I'm crazy. :)
   
   Well, perhaps he can chime in and explain his motivation behind
   this? At some point, we need to start removing things, surely.
   Larry just has a higher tolerance for pain :-)
   
   RHEL 6 is shipping GCC 4.4.  True, there are software collections to
   overcome that, but I think supporting GCC 4.4 is a good idea for at
   least a couple more years.
   
   Laszlo, do you still use RHEL 6?  Are you building with GCC 4.4?
   
   My laptop dual-boots RHEL-6 and RHEL-7, but I only use RHEL-6 when I
   need to work on RHEL-6 qemu-kvm or the RHEL-6 kernel. Which is
   nowadays practically never, thankfully.
   
   In addition, I couldn't sensibly *test* OVMF on a RHEL-6 host, because
   the RHEL-6 components lack support for the pflash-backed varstore.
   Which, for me at least, makes *building* OVMF on RHEL-6 kinda moot
   too.
   
   I have a number of Fedora virtual machines just for build-testing with
   gcc-4.4..gcc-4.9, but they are the consequence of the edk2 compiler
   support, not the reason for it. :)
   
   So, I'm in favor of dropping gcc-4.4 support. (In Fedora release
   terms, gcc-4.4 corresponds to fc13.)
  
  OK, so [supposedly] Larry is the only one who objects to deprecating
  toolchains, but since he has not responded to the suggestion, I think
  we should proceed anyway.
  
  I will respin this series, and instead of bringing CYGGCC and ELFGCC
  etc in line, I will propose to move them to tools_def.attic or
  whichever name is preferred by the group.
  
  Jordan suggested to drop UNIXGCC as well, and introduce MINGW instead
  iff we want the MinGW PE/COFF GCC, and I think we do, if only so that
  we have a LLP64 environment for X64 available to those without the
  possibility or the desire to run a MS toolchains under Windows.
  
  People should be able to build a known-good crossbuild toolchain. This is
  the simplest way to provide that option.
 
 Meh. The primary audience of this feature are people building UEFI for
 X64 on X64, in which case the GCC4x options are arguably simpler. But
 apparently we agree that we should keep it /and/ support it.
 
  By the way, do you think I can get you to update the mingw-gcc-build.py
  script while you're at it? :)
 
 I proposed some updates here
 http://thread.gmane.org/gmane.comp.bios.edk2.devel/1297
 (with you on cc). Care to ack those?

Is there a particular reason why you chose to use binutils from www.kernel.org 
rather than from ftpmirror.gnu.org (other than "that's what it was doing 
before")? In my testing I used binutils 2.25 from gnu.org, and it worked ok. I 
thought it made more sense to get both packages from the same place.

source_files_common = {
'binutils': {
'url': 'http://ftpmirror.gnu.org/binutils/' + \
   'binutils-$version.tar.bz2',
'version': '2.25',
'md5': 'd9f3303f802a5b6b0bb73a335ab89d66',
},
}

-Bill

 Thanks,
 Ard.



Re: [edk2] [PATCH v2 00/16] unify GCC command line options

2015-08-24 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Ard Biesheuvel had to 
walk into mine at 09:54:08 on Monday 24 August 2015 and say:

 On 19 August 2015 at 00:27, Laszlo Ersek <ler...@redhat.com> wrote:
  On 08/18/15 22:04, Paolo Bonzini wrote:
  On 18/08/2015 08:52, Ard Biesheuvel wrote:
  Personally, I would not mind deprecating GCC44, but the biggest
  question I would have is what toolchains do the latest UDK releases
  claim to support.
  
  We also have the issue that every time I ask about deprecating a
  toolchain, Larry looks at me like I'm crazy. :)
  
  Well, perhaps he can chime in and explain his motivation behind this?
  At some point, we need to start removing things, surely. Larry just
  has a higher tolerance for pain :-)
  
  RHEL 6 is shipping GCC 4.4.  True, there are software collections to
  overcome that, but I think supporting GCC 4.4 is a good idea for at
  least a couple more years.
  
  Laszlo, do you still use RHEL 6?  Are you building with GCC 4.4?
  
  My laptop dual-boots RHEL-6 and RHEL-7, but I only use RHEL-6 when I
  need to work on RHEL-6 qemu-kvm or the RHEL-6 kernel. Which is nowadays
  practically never, thankfully.
  
  In addition, I couldn't sensibly *test* OVMF on a RHEL-6 host, because
  the RHEL-6 components lack support for the pflash-backed varstore.
  Which, for me at least, makes *building* OVMF on RHEL-6 kinda moot too.
  
  I have a number of Fedora virtual machines just for build-testing with
  gcc-4.4..gcc-4.9, but they are the consequence of the edk2 compiler
  support, not the reason for it. :)
  
  So, I'm in favor of dropping gcc-4.4 support. (In Fedora release terms,
  gcc-4.4 corresponds to fc13.)
 
 OK, so [supposedly] Larry is the only one who objects to deprecating
 toolchains, but since he has not responded to the suggestion, I think
 we should proceed anyway.
 
 I will respin this series, and instead of bringing CYGGCC and ELFGCC
 etc in line, I will propose to move them to tools_def.attic or
 whichever name is preferred by the group.
 
 Jordan suggested to drop UNIXGCC as well, and introduce MINGW instead
 iff we want the MinGW PE/COFF GCC, and I think we do, if only so that
 we have a LLP64 environment for X64 available to those without the
 possibility or the desire to run a MS toolchains under Windows.

People should be able to build a known-good crossbuild toolchain. This is the 
simplest way to provide that option.

By the way, do you think I can get you to update the mingw-gcc-build.py script 
while you're at it? :)

-Bill

 
 GCC44 could perhaps be moved to the attic as well, but it does not
 need to be in this series imo



Re: [edk2] [PATCH v2 00/16] unify GCC command line options

2015-08-17 Thread Bill Paul
Of all the gin joints in all the towns in all the world, David Woodhouse had 
to walk into mine at 11:00:23 on Monday 17 August 2015 and say:

 On Mon, 2015-08-17 at 10:53 -0700, Jordan Justen wrote:
  UNIXGCC and CYGGCC are GCC 4.3 & mingw based. Did this get tested?
  
  I think ELFGCC is unused at this point. (And has been since UnixPkg
  was deprecated.)
  
  I think we should deprecate all three of these toolchains. I would
  like to see us move them to BaseTools/Conf/tools_def.deprecated. I'll
  add Larry to this email, because I think he disagrees with
  deprecating
  toolchains...
  
  If you make these changes and it breaks those toolchains, I don't
  think we would be able to notice, because I don't think we test them
  in our build pool anymore. To me this is all the more reason to move
  them out of tools_def.template.
 
 I was building with UNIXGCC last week, to test LLP64 builds without the
 pain of actually having to deal with Windows.
 
 I'd rather see it updated to work with modern MinGW rather than
 deprecated.

I use UNIXGCC with the cross-compiler toolchain generated by mingw-gcc-
build.py. Yes, I know the existing version of the script uses GCC 4.3.0. 
That's why I made an updated version that uses 4.9.3:

http://people.freebsd.org/~wpaul/edk2/mingw-gcc-build.py

I know you don't want to support this script, that's why I did the work for 
you. :) Yes I've tested both IA32 and X64 builds. Yes they work fine.

There is value in being able to bootstrap your own cross-build toolchain on 
whatever platform. I don't think you should be so quick to remove it.

-Bill



Re: [edk2] [PATCH v2 00/16] unify GCC command line options

2015-08-17 Thread Bill Paul
Of all the gin joints in all the towns in all the world, Jordan Justen had to 
walk into mine at 11:22:15 on Monday 17 August 2015 and say:

 On 2015-08-17 11:10:57, Bill Paul wrote:
  Of all the gin joints in all the towns in all the world, David Woodhouse
  had
  
  to walk into mine at 11:00:23 on Monday 17 August 2015 and say:
   On Mon, 2015-08-17 at 10:53 -0700, Jordan Justen wrote:
UNIXGCC and CYGGCC are GCC 4.3 & mingw based. Did this get tested?

I think ELFGCC is unused at this point. (And has been since UnixPkg
was deprecated.)

I think we should deprecate all three of these toolchains. I would
like to see us move them to BaseTools/Conf/tools_def.deprecated. I'll
add Larry to this email, because I think he disagrees with
deprecating
toolchains...

If you make these changes and it breaks those toolchains, I don't
think we would be able to notice, because I don't think we test them
in our build pool anymore. To me this is all the more reason to move
them out of tools_def.template.
   
   I was building with UNIXGCC last week, to test LLP64 builds without the
   pain of actually having to deal with Windows.
   
   I'd rather see it updated to work with modern MinGW rather than
   deprecated.
  
  I use UNIXGCC with the cross-compiler toolchain generated by mingw-gcc-
  build.py. Yes, I know the existing version of the script uses GCC 4.3.0.
  That's why I made an updated version that uses 4.9.3:
  
  http://people.freebsd.org/~wpaul/edk2/mingw-gcc-build.py
  
  I know you don't want to support this script, that's why I did the work
  for you. :) Yes I've tested both IA32 and X64 builds. Yes they work
  fine.
  
  There is value in being able to bootstrap your own cross-build toolchain
  on whatever platform. I don't think you should be so quick to remove it.
 
 Can't you use an elf-based GCC4.9 with the GCC49 toolchain instead?

I could, but there's not really much point.

UEFI uses PE/COFF as its object format, right? Using an ELF-based GCC means 
that you have to add a extra conversion step during the build process in order 
to go from ELF to PE/COFF. The rationale for doing it that way is: A lot of 
*NIX systems already have ELF-based system compilers installed, we might as 
well use them. I understand the usefulness of this approach.

However, if I'm going to be bootstrapping my own cross-build tools from 
scratch, that rationale no longer applies: if I have the option of selecting a 
target that gets me PE/COFF objects directly, I might as well do that.

 I'm not sure it makes sense to 'upgrade' the UNIXGCC toolchain to be
 based on GCC 4.9 rather than 4.3. I think GCC 4.3 was implicitly part
 of the definition of the UNIXGCC toolchain. (Well, maybe explicitly if
 you count the comment in tools_def :) This is why I'd rather deprecate
 it as a toolchain, and use the GCC4X toolchains instead.

I don't think this reasoning is valid. It doesn't seem fair to say that just 
because the UNIXGCC target was originally set up to use GCC 4.3.0, you can 
never upgrade it to a newer version. Technically, GCC 4.3.0 is buggy if you 
consider that it gets the underscore decoration convention wrong for X64. I 
would argue that it makes sense to fix this, since if the intent was to 
produce a cross-build toolchain that emulates the Microsoft toolchain 
behavior, it's not actually doing that now. It hasn't been left like that 
because people wanted it that way: it was left like that because until now 
nobody cared enough to fix it.

-Bill

 -Jordan
 ___
 edk2-devel mailing list
 edk2-devel@lists.01.org
 https://lists.01.org/mailman/listinfo/edk2-devel

-- 
=============================================================================
-Bill Paul            (510) 749-2329 | Senior Member of Technical Staff,
                 wp...@windriver.com | Master of Unix-Fu - Wind River Systems
=============================================================================
   "I put a dollar in a change machine. Nothing changed." - George Carlin
=============================================================================


Re: [edk2] [RFC PATCH 0/4] unify GCC command line options

2015-08-13 Thread Bill Paul
I was waiting for the commotion around the tools_def.template 
file to die down a bit before submitting any patches, but that hasn't happened 
yet.

-Bill

 Btw I managed to complete the OVMF/X86 build with Mingw without any
 reported errors regarding the stack protector, which I did not disable
 afaik.
 What is the symptom you observed here?
 
  We already have different calling conventions, and different size
  'long', between GCC and MSVC builds. MinGW is consistent with the MSVC
  builds for this — why would that be a problem? That was the whole
  *point* in wanting to use MinGW, for me — to test a LLP64 build without
  the pain of actually having to use Windows+MSVC.
 
 Indeed. The primary motivation for my recent involvement with the
 toolchain configs is to help ensure that the representative sample of
 toolchains we may propose for pre-commit compile tests sufficiently
 covers the architectures and toolchains we care about. And MinGW would
 be useful for this purpose as well, especially since it is a free
 toolchain that implements LLP64.
 
  (I wonder if we could get the MSVC build running under wine... now
  *that* would be useful)
 
