Re: [coreboot] 16 GPUs on one board

2018-01-04 Thread Zoran Stojsavljevic
> Yep, I am another cryptocurrency miner.  But in all truth,
> I find the hardware challenge more fun than the bitcoin stuff.

Thank you for confirming. Nothing wrong with it, as far as I can tell. :-)

But for the sake of time, you should get this setup working ASAP.
That is the aim, isn't it?

> Power is not the issue (any more).  I have 2 kW worth of PSUs:
> 2x HP Common Slot 750W PSU + Thermaltake 500W PSU.
> Currently, with all 8 cards running full tilt across two motherboards,
> I am drawing 960~1000W.  Those numbers are according to a
> Kill-A-Watt meter.

Thank you for the update.

> Hardware wise, this is all x86_64.

Which CPU are you using there? i3? i5? i7? Which code name, or Core number?

And how much system memory are you using? I guess not less than
8GB (in two 4GB DIMMs).

> Arthur: Thanks for the details.  I have a board that will give me a
> "missing memory" beep code with more than 6 GPUs.  Now I
> understand why!

So, each of the newest GTX 1070s needs a lot of memory from the host?! I
always thought that all these graphics cards have their own memory,
dedicated to themselves?! And, yes, there is some system (buffer)
memory dedicated to graphics processing.

> How can I track down how much system DRAM a GPU is using?
> These are all the newest Nvidia Pascal based cards.  Mostly GTX 1070's.

I can tell you, I am also interested in knowing this!

> Is this just a BIOS level issue?  Or is there some hardware component I
> should be aware of?

A BIOS issue? It might be. You need to find the video memory buffer dedicated
to the graphics cards. To that end, here is a useful article for you:
http://smallbusiness.chron.com/change-memory-allocated-graphics-card-58676.html

Namely, the paragraph "Changing the Memory Allocation":

Typical values listed in the BIOS these days are 32MB, 64MB, 128MB,
256MB, 512MB and 1024MB. For your configuration, you obviously need to
set the maximum size in the BIOS: 1024MB (since you have 16 PCIe
graphics cards to support)!

I hope this helps (I am waiting for you to report back after you change
the GFX system (buffer) memory area)!

Zoran
___

On Fri, Jan 5, 2018 at 5:51 AM, Adam Talbot  wrote:
> Yep, I am another cryptocurrency miner.  But in all truth, I find the
> hardware challenge more fun than the bitcoin stuff.
>
> Power is not the issue (any more).  I have 2 kW worth of PSUs: 2x HP Common
> Slot 750W PSU + Thermaltake 500W PSU.  Currently, with all 8 cards running
> full tilt across two motherboards, I am drawing 960~1000W.  Those
> numbers are according to a Kill-A-Watt meter.
>
> Hardware wise, this is all x86_64.
>
> Arthur: Thanks for the details.  I have a board that will give me a "missing
> memory" beep code with more than 6 GPUs.  Now I understand why!
>
> How can I track down how much system DRAM a GPU is using?  These are all the
> newest Nvidia Pascal based cards.  Mostly GTX 1070's.
>
> On an interesting note, one of my oldest motherboards, a Gigabyte
> GA-970A-UD3, will boot with all 8 cards, but gives me the no-VGA beep code.
> Serial console for the win!
>
> Is this just a BIOS level issue?  Or is there some hardware component I
> should be aware of?
>
> Thanks for the help.
> -Adam
>
>
> On Thu, Jan 4, 2018 at 8:14 PM, Zoran Stojsavljevic
>  wrote:
>>
>> > I am totally off the deep end and don't know where else to turn
>> > for help/advice.  I am trying to get 16 GPU's on one motherboard.
>>
>> H. Yet another cryptocurrency miner. ;-)
>>
>> > Whenever I attach more than 3~5 GPUs to a single motherboard,
>> > it fails to post.  To make matters worse, my post code reader(s) don't
>> > seem to give me any good error codes.  Or at least nothing I can go on.
>>
>> You should have at minimum a 1 kW PSU for this job. At least... I guess
>> even more; for 16 discrete GPUs, 2 x 1 kW would be reasonable.
>>
>> Zoran
>> ___
>>
>> On Thu, Jan 4, 2018 at 8:38 PM, Adam Talbot  wrote:
>> > -Coreboot
>> > I am totally off the deep end and don't know where else to turn for
>> > help/advice.  I am trying to get 16 GPUs on one motherboard. Whenever I
>> > attach more than 3~5 GPUs to a single motherboard, it fails to post.
>> > To make matters worse, my post code reader(s) don't seem to give me
>> > any good error codes.  Or at least nothing I can go on.
>> >
>> > I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and
>> > pass them to 8 GPUs, 1 lane per GPU. Bandwidth is not an issue, as all
>> > my code runs native on the GPUs. Depending on the motherboard, I can
>> > get up to 5 GPUs to post.  After many hours of debugging, googling,
>> > and troubleshooting, I am out of ideas.
>> >
>> > At this point I have no clue. I think there is a hardware and a BIOS
>> > component? Can you help me understand the POST process and where the
>> > hang-up is occurring?  Do you think coreboot will get around this
>> > hangup and, if so, can you advise a motherboard for me to test with?
>> >
>> > It's been a long time since I last compiled LinuxBIOS. ;-)

Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread 杜睿哲_Pegatron
Hi David,

After trying SeaBIOS as the payload, I got more information about the reboot
issue, as in the attached file. While U-Boot just reboots directly and GRUB
hangs, the SeaBIOS dump complains "No bootable device" at the end. Do you think
that is the cause of the reboot? Can I say my U-Boot and GRUB versions do not
support BDX-DE?

-Hilbert
= PEIM FSP is Completed =

Returned from FspNotify(EnumInitPhaseReadyToBoot)
Jumping to boot code at 000ff06e(7eff6000)
CPU0: stack: 00129000 - 0012a000, lowest used address 00129b00, stack used: 
1280 bytes
entry= 0x000ff06e
lb_start = 0x0010
lb_size  = 0x001302f0
buffer   = 0x7ed6
SeaBIOS (version rel-1.10.2-0-g5f4c7b1)
BUILD: gcc: (coreboot toolchain v1.47 August 16th, 2017) 6.3.0 binutils: (GNU 
Binutils) 2.28
Found mainboard Intel Camelback Mountain CRB
Relocating init from 0x000e3940 to 0x7ef74da0 (size 49600)
Found CBFS header at 0xffe00138
multiboot: eax=0, ebx=0
Found 25 PCI devices (max PCI bus is 05)
Copying SMBIOS entry point from 0x7efc1000 to 0x000f7120
Copying ACPI RSDP from 0x7efd2000 to 0x000f70f0
Using pmtimer, ioport 0x408
Scan for VGA option rom
XHCI init on dev 00:14.0: regs @ 0xfea0, 21 ports, 32 slots, 32 byte 
contexts
XHCIprotocol USB  2.00, 8 ports (offset 1), def 3001
XHCIprotocol USB  3.00, 6 ports (offset 16), def 1000
XHCIextcap 0xc1 @ 0xfea08040
XHCIextcap 0xc0 @ 0xfea08070
XHCIextcap 0x1 @ 0xfea0846c
EHCI init on dev 00:1a.0 (regs=0xfea18020)
EHCI init on dev 00:1d.0 (regs=0xfea19020)
WARNING - Timeout at i8042_flush:71!
AHCI controller at 00:1f.2, iobase 0xfea17000, irq 0
Found 0 lpt ports
Found 2 serial ports
XHCI no devices found
ehci_wait_td error - status=80e42
Initialized USB HUB (0 ports used)
Initialized USB HUB (0 ports used)
All threads complete.
Scan for option roms
Running option rom at c000:0003
Running option rom at c100:0003
Searching bootorder for: /pci@i0cf8/pci-bridge@2,2/*@0
Searching bootorder for: /pci@i0cf8/pci-bridge@2,2/*@0,1

Press ESC for boot menu.

Searching bootorder for: HALT
Space available for UMB: c2000-ee800, f6940-f70d0
Returned 192512 bytes of ZoneHigh
e820 map has 9 items:
  0:  - 0009fc00 = 1 RAM
  1: 0009fc00 - 000a = 2 RESERVED
  2: 000f - 0010 = 2 RESERVED
  3: 0010 - 7efb = 1 RAM
  4: 7efb - 9000 = 2 RESERVED
  5: feb0 - feb1 = 2 RESERVED
  6: feb8 - fef0 = 2 RESERVED
  7: ff00 - 0001 = 2 RESERVED
  8: 0001 - 00028000 = 1 RAM
enter handle_19:
  NULL
Booting from ROM...
Booting from c000:0b91
enter handle_18:
  NULL
Booting from ROM...
Booting from c100:0b91
enter handle_18:
  NULL
Booting from Floppy...
Boot failed: could not read the boot disk

enter handle_18:
  NULL
Booting from Hard Disk...
Boot failed: could not read the boot disk

enter handle_18:
  NULL
No bootable device.  Retrying in 60 seconds.
Rebooting.
In resume (status=0)
In 32bit resume
Attempting a hard reboot
ACPI hard reset 1:cf9 (6)

Re: [coreboot] 16 GPUs on one board

2018-01-04 Thread Zoran Stojsavljevic
> I am totally off the deep end and don't know where else to turn
> for help/advice.  I am trying to get 16 GPU's on one motherboard.

H. Yet another cryptocurrency miner. ;-)

> Whenever I attach more than 3~5 GPUs to a single motherboard,
> it fails to post.  To make matters worse, my post code reader(s) don't
> seem to give me any good error codes.  Or at least nothing I can go on.

You should have at minimum a 1 kW PSU for this job. At least... I guess
even more; for 16 discrete GPUs, 2 x 1 kW would be reasonable.
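
A rough back-of-the-envelope check supports that estimate. The sketch below
assumes roughly 150 W board power per GTX 1070-class card plus some host
overhead; neither figure comes from this thread, they are only assumptions:

    # Back-of-the-envelope PSU sizing; the per-card wattage is an assumption,
    # not a measured value from this thread.
    gpus = 16
    watts_per_gpu = 150      # assumed board power for a GTX 1070-class card
    host_overhead = 200      # assumed CPU, board, fans, risers
    total = gpus * watts_per_gpu + host_overhead
    print(f"estimated draw: ~{total} W")  # ~2600 W, before PSU efficiency headroom

Any real budget should of course leave additional headroom for PSU efficiency
and transient spikes.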

Zoran
___

On Thu, Jan 4, 2018 at 8:38 PM, Adam Talbot  wrote:
> -Coreboot
> I am totally off the deep end and don't know where else to turn for
> help/advice.  I am trying to get 16 GPUs on one motherboard. Whenever I
> attach more than 3~5 GPUs to a single motherboard, it fails to post.  To
> make matters worse, my post code reader(s) don't seem to give me any good
> error codes.  Or at least nothing I can go on.
>
> I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and pass them
> to 8 GPUs, 1 lane per GPU. Bandwidth is not an issue, as all my code runs
> native on the GPUs. Depending on the motherboard, I can get up to 5 GPUs to
> post.  After many hours of debugging, googling, and troubleshooting, I am
> out of ideas.
>
> At this point I have no clue. I think there is a hardware and a BIOS
> component? Can you help me understand the POST process and where the hang-up
> is occurring?  Do you think coreboot will get around this hangup and, if so,
> can you advise a motherboard for me to test with?
>
> It's been a long time since I last compiled LinuxBIOS. ;-)
>
> Thanks
> -Adam
>



Re: [coreboot] usb3/xhci issues with coreboot on Thinkpad x230

2018-01-04 Thread Nico Huber

On 04.01.2018 15:31, mer...@aaathats3as.com wrote:
> On 2018-01-02 20:49, Nico Huber wrote:
>> As you mentioned that you didn't change other settings, may I assume
>> that you run the same ME firmware with coreboot and vendor during your
>> tests? Also, is your ME firmware in its original state?
>
> I'm sorry, I missed that. Intel ME was neutralized with me_cleaner.
> This is for both my tests with coreboot and with stock BIOS.

Please always retest with fully functional ME firmware.

>> Please send a coreboot log taken with `cbmem -c` (you can find cbmem in
>> util/cbmem/ in the coreboot source tree).
>
> Here is one with the same coreboot version/build as it was in the dmesg
> output in my bug report on the openbsd-bugs ml.

Thanks. I can't find anything suspicious. Though, coreboot doesn't have
to do much for USB controllers anyway.

Some more questions:
o Are the drives you tested all SuperSpeed devices?
o Do the ports work when using the EHCI controller?
  (The ports can be switched to either xHCI or EHCI. In Linux you'd
   just only load the EHCI driver and not the xHCI one, don't know
   about OpenBSD.)
o Did you ever test with another OS?
o Can you provide a dmesg from a run with vendor BIOS?
o Can OpenBSD boot/run without BIOS help? If so, could you test with
  a different payload?

Nico



Re: [coreboot] Doubt about SPD init in Skylake

2018-01-04 Thread Nico Huber

Hi Merle,

you should always keep the mailing list addressed. Otherwise you'll
get fewer responses, obviously.

Please tell us more about your project. Is it a hobby thing, or do you
work on a professional product? It really matters, because in the latter
case you should try to get a contact at Intel and ask them.

On 04.01.2018 09:08, 王翔 wrote:
> I found this array in section 4.1.2.1 of
> https://github.com/IntelFsp/FSP/blob/Skylake/SkylakeFspBinPkg/Docs/SkylakeFspIntegrationGuide.pdf
> But what does it mean?

It's not documented. Some lucky people have access to the FSP source
code and might be able to tell.

> How do I get this value?

From board schematics and Intel's Platform Design Guide, mostly.

Nico


Re: [coreboot] 16 GPUs on one board

2018-01-04 Thread Testerman, Brett
We use a PLX chip in our design, and I will say they are incredibly sensitive to
reset. Are you sure your voltages are holding up during startup? The fact that
it works with a limited number of GPUs would be indicative of voltage sag due
to the higher inrush current. A slow rise time on the reset signal can cause a
lot of devices to not come out of reset properly. The same goes for the reset
line de-asserting before all the voltages are stable.

Brett


From: coreboot [mailto:coreboot-boun...@coreboot.org] On Behalf Of Adam Talbot
Sent: Thursday, January 04, 2018 12:39 PM
To: coreboot@coreboot.org
Subject: [coreboot] 16 GPUs on one board

-Coreboot
I am totally off the deep end and don't know where else to turn for
help/advice.  I am trying to get 16 GPUs on one motherboard. Whenever I attach
more than 3~5 GPUs to a single motherboard, it fails to post.  To make matters
worse, my post code reader(s) don't seem to give me any good error codes.  Or
at least nothing I can go on.

I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and pass them to
8 GPUs, 1 lane per GPU. Bandwidth is not an issue, as all my code runs native
on the GPUs. Depending on the motherboard, I can get up to 5 GPUs to post.
After many hours of debugging, googling, and troubleshooting, I am out of
ideas.

At this point I have no clue. I think there is a hardware and a BIOS
component? Can you help me understand the POST process and where the hang-up is
occurring?  Do you think coreboot will get around this hangup and, if so, can
you advise a motherboard for me to test with?

It's been a long time since I last compiled LinuxBIOS. ;-)

Thanks
-Adam

Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread David Hendricks
Hi Hilbert,
For what it's worth, I was able to boot Linux as the payload without any
obvious problems. It might be good to try other payloads, or see if you can
enable the serial console earlier in U-Boot to find exactly where it reboots.

Here is my CPUID and microcode info printed by coreboot during ramstage:
microcode: sig=0x50664 pf=0x10 revision=0xf0c
CPUID: 00050664
Cores: 32
Stepping: Y0
Revision ID: 05

So it appears I am using the production CPU and microcode that Zoran
suggested. To obtain this, I downloaded SRV_P_203.exe from Intel's website
and converted M1050664_0F0C.TXT into a C-style header that can be
included by coreboot's build system.
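
For anyone scripting that conversion step, a minimal sketch might look like the
following. The input format and the file/symbol names are assumptions, not
details given above; check the actual .TXT layout before relying on it:

    #!/usr/bin/env python3
    # Hypothetical helper: turn a microcode update released as a text file of
    # 32-bit hex words into a C header with a uint32_t array. The input is
    # assumed to be comma/whitespace separated words like "0x00000001, ...".
    import re
    import sys

    def convert(txt_path, hdr_path, symbol="microcode_update"):
        words = []
        with open(txt_path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(("/", "*", ";")):  # skip comments
                    continue
                words += re.findall(r"0x[0-9a-fA-F]{1,8}", line)
        with open(hdr_path, "w") as out:
            out.write(f"/* Generated from {txt_path} */\n")
            out.write(f"static const uint32_t {symbol}[] = {{\n")
            for i in range(0, len(words), 4):
                out.write("\t" + ", ".join(words[i:i + 4]) + ",\n")
            out.write("};\n")

    if __name__ == "__main__":
        convert(sys.argv[1], sys.argv[2])

Whether a C header or a raw binary blob is the right input depends on how the
port hooks microcode into the build.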


On Thu, Jan 4, 2018 at 3:19 AM, Hilbert Tu(杜睿哲_Pegatron) <
hilbert...@pegatroncorp.com> wrote:

> Hi Zoran,
>
> About this issue, I decided to follow David's suggestion to comment out the
> SMBus clock gating, and then it can continue booting until it loads my U-Boot
> payload. But then it enters an infinite reboot, as in the previously attached
> log "smbus_init_fail_max_dump2.txt". I am not sure if it is a side effect or
> just a new issue. Do you have any recommendation about the reboot? By the
> way, we have our own BDX-DE board, not the Camelback CRB, but it uses a
> similar configuration. Thanks.
>
> -Hilbert

Re: [coreboot] 16 GPUs on one board

2018-01-04 Thread Arthur Heymans
Hi

What target are you on?

Coreboot tries to locate all PCI BARs below 4G, in the PCI MMIO region above
the lower DRAM limit (the rest of the DRAM is mapped above 4G). Typically a GPU
takes around 256M, but I guess that could be more nowadays. If that doesn't fit
in the PCI MMIO region, it will have trouble and probably not boot.

The real fix would be to have coreboot locate BARs above 4G too.

At least that is what I think is going on here...
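
One way to check whether this is what is biting here: on a Linux host that does
boot (even with fewer cards), sum the memory BAR sizes the GPUs expose. A rough
sysfs sketch, purely illustrative and not coreboot code; note it measures PCI
address space rather than system DRAM, and it does not separate 64-bit-capable
BARs that could in principle live above 4G:

    #!/usr/bin/env python3
    # Rough sketch (Linux sysfs): sum the memory BAR sizes of all display-class
    # PCI devices to estimate the MMIO window the GPUs demand.
    import glob
    import os

    IORESOURCE_MEM = 0x00000200

    total = 0
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        with open(os.path.join(dev, "class")) as f:
            if not f.read().startswith("0x0300"):  # VGA-compatible display controller
                continue
        with open(os.path.join(dev, "resource")) as f:
            for line in f:
                start, end, flags = (int(x, 16) for x in line.split())
                if end <= start or not (flags & IORESOURCE_MEM):
                    continue
                size = end - start + 1
                total += size
                print(f"{os.path.basename(dev)}: memory BAR {size >> 20} MiB at {start:#x}")

    print(f"total GPU memory BAR space: {total >> 20} MiB")

At the ~256M per card mentioned above, 16 cards alone want roughly 4G of BAR
space, which clearly cannot fit below the 32-bit boundary together with low
DRAM and the rest of MMIO.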

(Sorry for top posting, it felt like the answer was better in one block.)

Adam Talbot  writes:

> -Coreboot
> I am totally off the deep end and don't know where else to turn for
> help/advice. I am trying to get 16 GPUs on one motherboard. Whenever I
> attach more than 3~5 GPUs to a single motherboard, it fails to post. To
> make matters worse, my post code reader(s) don't seem to give me any good
> error codes. Or at least nothing I can go on.
>
> I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and pass them
> to 8 GPUs, 1 lane per GPU. Bandwidth is not an issue, as all my code runs
> native on the GPUs. Depending on the motherboard, I can get up to 5 GPUs to
> post. After many hours of debugging, googling, and troubleshooting, I am
> out of ideas.
>
> At this point I have no clue. I think there is a hardware and a BIOS
> component? Can you help me understand the POST process and where the hang-up
> is occurring? Do you think coreboot will get around this hangup and, if so,
> can you advise a motherboard for me to test with?
>
> It's been a long time since I last compiled LinuxBIOS. ;-)
>
> Thanks
> -Adam

Kind regards

-- 
Arthur Heymans



[coreboot] 16 GPUs on one board

2018-01-04 Thread Adam Talbot
-Coreboot
I am totally off the deep end and don't know where else to turn for
help/advice.  I am trying to get 16 GPUs on one motherboard. Whenever I
attach more than 3~5 GPUs to a single motherboard, it fails to post.  To
make matters worse, my post code reader(s) don't seem to give me any good
error codes.  Or at least nothing I can go on.

I am using PLX PEX8614 chips (PCIe 12X switch) to take 4 lanes and pass
them to 8 GPUs, 1 lane per GPU. Bandwidth is not an issue, as all my code
runs native on the GPUs. Depending on the motherboard, I can get up to 5
GPUs to post.  After many hours of debugging, googling, and troubleshooting,
I am out of ideas.

At this point I have no clue. I think there is a hardware and a BIOS
component? Can you help me understand the POST process and where the hang-up
is occurring?  Do you think coreboot will get around this hangup and, if so,
can you advise a motherboard for me to test with?

It's been a long time since I last compiled LinuxBIOS. ;-)

Thanks
-Adam

Re: [coreboot] usb3/xhci issues with coreboot on Thinkpad x230

2018-01-04 Thread merino

On 2018-01-02 20:49, Nico Huber wrote:
> As you mentioned that you didn't change other settings, may I assume
> that you run the same ME firmware with coreboot and vendor during your
> tests? Also, is your ME firmware in its original state?

I'm sorry, I missed that. Intel ME was neutralized with me_cleaner.
This is for both my tests with coreboot and with stock BIOS.

> Please send a coreboot log taken with `cbmem -c` (you can find cbmem in
> util/cbmem/ in the coreboot source tree).

Here is one with the same coreboot version/build as it was in the dmesg
output in my bug report on the openbsd-bugs ml.

*** Pre-CBMEM romstage console overflowed, log truncated! ***
FS: Found @ offset 1fec0 size 1
find_current_mrc_cache_local: picked entry 0 from cache block
Trying stored timings.
Starting Ivybridge RAM training (1).
100MHz reference clock support: yes
PLL busy... done in 40 us
MCU frequency is set at : 800 MHz
Done dimm mapping
Update PCI-E configuration space:
PCI(0, 0, 0)[a0] = 0
PCI(0, 0, 0)[a4] = 2
PCI(0, 0, 0)[bc] = c2a0
PCI(0, 0, 0)[a8] = 3b60
PCI(0, 0, 0)[ac] = 2
PCI(0, 0, 0)[b8] = c000
PCI(0, 0, 0)[b0] = c0a0
PCI(0, 0, 0)[b4] = c080
PCI(0, 0, 0)[7c] = 7f
PCI(0, 0, 0)[70] = fe00
PCI(0, 0, 0)[74] = 1
PCI(0, 0, 0)[78] = fe000c00
Done memory map
Done io registers
t123: 1767, 6000, 7620
ME: FW Partition Table  : OK
ME: Bringup Loader Failure  : NO
ME: Firmware Init Complete  : NO
ME: Manufacturing Mode  : NO
ME: Boot Options Present: NO
ME: Update In Progress  : NO
ME: Current Working State   : Recovery
ME: Current Operation State : Bring up
ME: Current Operation Mode  : Normal
ME: Error Code  : No Error
ME: Progress Phase  : BUP Phase
ME: Power Management Event  : Pseudo-global reset
ME: Progress Phase State: Waiting for DID BIOS message
ME: FWS2: 0x161f017a
ME:  Bist in progress: 0x0
ME:  ICC Status  : 0x1
ME:  Invoke MEBx : 0x1
ME:  CPU replaced: 0x1
ME:  MBP ready   : 0x1
ME:  MFS failure : 0x1
ME:  Warm reset req  : 0x0
ME:  CPU repl valid  : 0x1
ME:  (Reserved)  : 0x0
ME:  FW update req   : 0x0
ME:  (Reserved)  : 0x0
ME:  Current state   : 0x1f
ME:  Current PM event: 0x6
ME:  Progress code   : 0x1
Full training required
PASSED! Tell ME that DRAM is ready
ME: FWS2: 0x162c017a
ME:  Bist in progress: 0x0
ME:  ICC Status  : 0x1
ME:  Invoke MEBx : 0x1
ME:  CPU replaced: 0x1
ME:  MBP ready   : 0x1
ME:  MFS failure : 0x1
ME:  Warm reset req  : 0x0
ME:  CPU repl valid  : 0x1
ME:  (Reserved)  : 0x0
ME:  FW update req   : 0x0
ME:  (Reserved)  : 0x0
ME:  Current state   : 0x2c
ME:  Current PM event: 0x6
ME:  Progress code   : 0x1
ME: Requested BIOS Action: Continue to boot
ME: FW Partition Table  : OK
ME: Bringup Loader Failure  : NO
ME: Firmware Init Complete  : NO
ME: Manufacturing Mode  : NO
ME: Boot Options Present: NO
ME: Update In Progress  : NO
ME: Current Working State   : Recovery
ME: Current Operation State : Bring up
ME: Current Operation Mode  : Normal
ME: Error Code  : No Error
ME: Progress Phase  : BUP Phase
ME: Power Management Event  : Pseudo-global reset
ME: Progress Phase State: 0x2c
memcfg DDR3 clock 1600 MHz
memcfg channel assignment: A: 0, B  1, C  2
memcfg channel[0] config (00620010):
   ECC inactive
   enhanced interleave mode on
   rank interleave on
   DIMMA 4096 MB width x8 dual rank, selected
   DIMMB 0 MB width x8 single rank
memcfg channel[1] config (00620010):
   ECC inactive
   enhanced interleave mode on
   rank interleave on
   DIMMA 4096 MB width x8 dual rank, selected
   DIMMB 0 MB width x8 single rank
CBMEM:
IMD: root @ b000 254 entries.
IMD: root @ bfffec00 62 entries.
CBMEM entry for DIMM info: 0xbfffe960
MTRR Range: Start=ff00 End=0 (Size 100)
MTRR Range: Start=0 End=100 (Size 100)
MTRR Range: Start=bf80 End=c000 (Size 80)
MTRR Range: Start=c000 End=c080 (Size 80)
CBFS: 'Master Header Locator' located CBFS at [b00100:bfffc0)
CBFS: Locating 'fallback/ramstage'
CBFS: Found @ offset 2ff00 size 12663
Decompressing stage fallback/ramstage @ 0xbffa0fc0 (227184 bytes)
Loading module at bffa1000 with entry bffa1000. filesize: 0x26870 
memsize: 0x37730

Processing 2548 relocs. Offset value of 0xbfea1000


coreboot-4.6-196-g0fb6568 Mon May 22 22:53:27 UTC 2017 ramstage 
starting...

Normal boot.
BS: BS_PRE_DEVICE times (us): entry 0 run 2 exit 0
BS: BS_DEV_INIT_CHIPS times (us): entry 0 run 2 exit 0
Enumerating buses...
Show all devs... Before device enumeration.
Root Device: enabled 1
CPU_CLUSTER: 0: enabled 1
APIC: 00: enabled 1
APIC: acac: enabled 0
DOMAIN: : enabled 1
PCI: 00:00.0: enabled 1
PCI: 00:01.0: enabled 0
PCI: 00:02.0: enabled 1
PCI: 00:14.0: enabled 1
PCI: 00:16.0: enabled 1
PCI: 00:16.1: enabled 0
PCI: 00:16.2: enabled 0
PCI: 00:16.3: enabled 0
PCI: 00:19.0: enabled 1
PCI: 00:1a.0: enabled 1
PCI: 00:1b.0: enabled 1
PCI: 00:1c.0: enabled 1
PCI: 00:00.0: enabled

Re: [coreboot] Depthcharge License

2018-01-04 Thread tturne

On 2017-12-24 05:27, Ivan Ivanov wrote:

> This commit got merged, and Depthcharge should now have the requested
> license files at its root directory. Dear friend, please let us learn about
> your device once it is released; maybe some of the coreboot developers would
> want to get your device, to use and possibly to help.
>
> Best regards,
> Ivan Ivanov



Forgive my lack of list etiquette.
With this commit, my legal department is satisfied and has given
engineering the green light.

I will share details on the project as and when I can.
Thanks to all who responded and contributed to resolving the issue raised
by this thread.

Cheers,
T.mike

2017-12-21 21:34 GMT+03:00 Aaron Durbin via coreboot 
:

https://chromium-review.googlesource.com/c/chromiumos/platform/depthcharge/+/837589

On Thu, Dec 21, 2017 at 11:14 AM,  wrote:


On 2017-12-19 09:53, David Hendricks wrote:


To be fair, it appears that many source files refer to a non-existent
LICENSE file. Someone on the CrOS team should probably just add the
LICENSE file for depthcharge and/or contact mal@ to see how the
license info is being collected these days (e.g. for
chrome://os-credits [3]).



Thanks for the input on the DC licensing, and because lawyers are involved
I need to ask a direct question.
Can a Depthcharge maintainer (anybody who has +2 gerrit authority would
suffice) state on this thread that the Depthcharge project license is
GPLv2 (or later)?

I know this feels redundant and that the question has been asked and
answered; I'm just trying to satisfy our internal legal group.

Per David's point above, this direct question would not be required to
satisfy the lawyers if a top-level COPYING or LICENSE file is added to
Depthcharge.  Which I will be happy to add once I'm able to start
contributing to the project.
Cheers,
T.mike


On Tue, Dec 19, 2017 at 9:19 AM, ron minnich 
wrote:


Is there even a question? Looks like aaron just answered the
original question, which boils down to Read The Source?

On Tue, Dec 19, 2017 at 7:58 AM Aaron Durbin via coreboot
 wrote:

On Tue, Dec 19, 2017 at 8:03 AM,  wrote:
On 2017-12-15 13:39, ttu...@codeaurora.org wrote:
Preparing to mirror the coreboot.org [1] requires us to vet the
various
licenses, etc.

There doesn't appear to be a LICENSE or COPYING file in the
Depthcharge tree.

My understanding is that Depthcharge is licensed GPLv2 (or later).

How would I confirm this with an online source?

Cheers,
T.mike

Should this query be posted on Chromium list rather than Coreboot
list?



Probably. The files in the depthcharge repository  have licenses at
the top of each file. They are GPLv2.


Cheers,
T.mike




Links:
--
[1] http://coreboot.org
[2] https://mail.coreboot.org/mailman/listinfo/coreboot
[3] https://www.chromium.org/chromium-os/licenses






Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread Zoran Stojsavljevic
> ... I am not sure if is a side effect or just a new issue. Do you have any 
> recommendation about the reboot?

[1] The complete boot log would be useful, as far as you have
progressed... Wouldn't it? But you can also try SeaBIOS as the payload
(let's go with the simple ones) and see how far you get with it.
[2] It would be nice to try all of this on your own proprietary board (I
hope you have CPUID 0x50664 there) and see how it goes (with logs as
well)?!

Until this evening, or tomorrow,
Zoran
___

On Thu, Jan 4, 2018 at 12:19 PM, Hilbert Tu(杜睿哲_Pegatron)
 wrote:
> Hi Zoran,
>
> About this issue, I decided to follow David's suggestion to comment out the
> SMBus clock gating, and then it can continue booting until it loads my U-Boot
> payload. But then it enters an infinite reboot, as in the previously attached
> log "smbus_init_fail_max_dump2.txt". I am not sure if it is a side effect or
> just a new issue. Do you have any recommendation about the reboot? By the way,
> we have our own BDX-DE board, not the Camelback CRB, but it uses a similar
> configuration. Thanks.
>
> -Hilbert

Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread 杜睿哲_Pegatron
Hi Zoran,

About this issue, I decided to follow David's suggestion to comment out the
SMBus clock gating, and then it can continue booting until it loads my U-Boot
payload. But then it enters an infinite reboot, as in the previously attached
log "smbus_init_fail_max_dump2.txt". I am not sure if it is a side effect or
just a new issue. Do you have any recommendation about the reboot? By the way,
we have our own BDX-DE board, not the Camelback CRB, but it uses a similar
configuration. Thanks.

-Hilbert

Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread Zoran Stojsavljevic
Yes, you are correct. BDX-DE is supposed to be an SoC, as I now recall. It was
a long time back, about 2.5 to 3 years ago, that I played with this stuff.

You are supposed NOT to use anything beneath 0x50663 (forget 0x50661/2). I
have no idea why you are using PPR 0x50663; the production part (PR)
is actually 0x50664.

My best guess is that you are using some CRB (CamelBack Mountain CRB?!) as
given, with FSP v1.0. And I recall this should work... out of the box,
although I never tried it with FSP. I tried it with the internal
AMI/Intel UEFI and Fedora 23, IIRC?! I tried 0x50661 and 0x50662, but
also with the internal AMI/Intel UEFI.

Werner (if he recalls the stuff, since Werner most certainly played
with FSP on CamelBack Mountain CRB) can help... If?!

Zoran

On Thu, Jan 4, 2018 at 10:06 AM, Hilbert Tu(杜睿哲_Pegatron)
 wrote:
> Hi Zoran,
>
> I don't understand. We don't have an extra MCU and, from the following
> message, we also have the correct microcode. Why do you mean we should have
> "PPR 0x50663 PPR PCH"? My understanding is that they are in/just the same
> chip... Please help to clarify. Thanks.
>
>>> microcode: sig=0x50663 pf=0x10 revision=0x70e   <<===
>>> CPUID: 00050663
>
> -Hilbert

Re: [coreboot] BDX-DE PCI init fail

2018-01-04 Thread 杜睿哲_Pegatron
Hi Zoran,

I don't understand. We don't have an extra MCU and, from the following
message, we also have the correct microcode. Why do you mean we should have
"PPR 0x50663 PPR PCH"? My understanding is that they are in/just the same
chip... Please help to clarify. Thanks.

>> microcode: sig=0x50663 pf=0x10 revision=0x70e   <<===
>> CPUID: 00050663

-Hilbert