Re: qemu: fatal: lockup...

2021-12-14 Thread abhijeet inamdar
Hi,

Could you explain how the "values" at
https://github.com/qemu/qemu/blob/stable-6.0/hw/char/pl011.c#L33-L39 were
chosen? I tried to check the reference manual but could not work it out.
Please let me know.

BR.
Abhijeet.

On Fri, Dec 10, 2021 at 5:14 PM Peter Maydell 
wrote:

> On Fri, 10 Dec 2021 at 15:44, abhijeet inamdar
>  wrote:
> >
> > Hi,
> >
> > In the Qemu monitor:
> >
> > (qemu) info qom-tree
> > /machine (vcpu-machine)
> >   /peripheral (container)
> >   /peripheral-anon (container)
> >   /unattached (container)
> > /sysbus (System)
>
> > What does this "Unattached" mean here?
>
> "/unattached" is where all QOM objects which don't have an
> explicit parent get put. More modern QEMU code generally
> creates QOM objects and explicitly parents them; older code
> does not. Machine model code often doesn't parent the devices
> it creates, for instance.
>
> The shape of the QOM tree doesn't generally make any
> difference in practice to how things run.
>
> thanks
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-10 Thread Peter Maydell
On Fri, 10 Dec 2021 at 15:44, abhijeet inamdar
 wrote:
>
> Hi,
>
> In the Qemu monitor:
>
> (qemu) info qom-tree
> /machine (vcpu-machine)
>   /peripheral (container)
>   /peripheral-anon (container)
>   /unattached (container)
> /sysbus (System)

> What does this "Unattached" mean here?

"/unattached" is where all QOM objects which don't have an
explicit parent get put. More modern QEMU code generally
creates QOM objects and explicitly parents them; older code
does not. Machine model code often doesn't parent the devices
it creates, for instance.

The shape of the QOM tree doesn't generally make any
difference in practice to how things run.
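For illustration, explicit parenting in modern board/SoC code is usually done with object_initialize_child(), which both initializes the child and gives it a QOM parent. A minimal sketch, assuming a hypothetical MySoCState with a PL011 uart field:

```c
/* Sketch of an SoC instance_init; MySoCState/MY_SOC are hypothetical. */
static void my_soc_init(Object *obj)
{
    MySoCState *s = MY_SOC(obj);

    /* Initializes s->uart and parents it under this SoC object, so
     * "info qom-tree" shows it as /machine/soc/uart rather than putting
     * the device under /machine/unattached. */
    object_initialize_child(obj, "uart", &s->uart, TYPE_PL011);
}
```

Code that instead creates devices with qdev_new() and never parents the result is what ends up under /unattached.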

thanks
-- PMM



Re: qemu: fatal: lockup...

2021-12-10 Thread abhijeet inamdar
Hi,

In the Qemu monitor:

(qemu) info qom-tree
/machine (vcpu-machine)
  /peripheral (container)
  /peripheral-anon (container)
  /unattached (container)
/sysbus (System)

and

(qemu) info qtree
bus: main-system-bus
  type System
  dev: vcpu-control, id ""
chardev_out = "serial0"
chardev_in = ""
...
dev: armv7m, id ""
gpio-in "NMI" 1
gpio-out "SYSRESETREQ" 1
gpio-in "" 64
cpu-type = "cortex-m3-arm-cpu"
memory = "/machine/unattached/system[0]"
idau = ""
init-svtor = 0 (0x0)
enable-bitband = false
start-powered-off = false
vfp = true
dsp = true
  dev: ARM,bitband-memory, id ""
base = 0 (0x0)
source-memory = ""
mmio /0200
  dev: ARM,bitband-memory, id ""
base = 0 (0x0)
source-memory = ""
mmio /0200
  dev: armv7m_nvic, id ""
gpio-in "systick-trigger" 2
gpio-out "sysbus-irq" 1
num-irq = 80 (0x50)
mmio /1000
  dev: armv7m_systick, id ""
gpio-out "sysbus-irq" 1
mmio /00e0

What does this "Unattached" mean here?

BR.
Abhijeet.

On Wed, Dec 8, 2021 at 11:37 PM abhijeet inamdar <
abhijeetinamdar3...@gmail.com> wrote:

> How important are these lines:
> https://github.com/qemu/qemu/blob/stable-4.2/target/arm/cpu.c#L1928-L1929
>
> Anyway, I will use the newer QEMU, but I am curious: was this a problem
> in my build?
> Please let me know so that I can move forward. It's really important for
> me.
>
> BR.
> Abhijeet.
>
> On Wed, Dec 8, 2021 at 3:11 PM abhijeet inamdar <
> abhijeetinamdar3...@gmail.com> wrote:
>
>> I will look into the latest one. Can you elaborate on that, please?
>>
>> And I'm running a test.elf for a machine similar to the one I'm building.
>> The test runs only if the flash and RAM sizes I give are double the
>> actually required/used size. How can that be? Is there some offset that
>> needs to be changed, or am I doing something wrong?
>>
>> BR.
>> Abhijeet.
>>
>> On Wed, 8 Dec, 2021, 12:30 Peter Maydell, 
>> wrote:
>>
>>> On Wed, 8 Dec 2021 at 10:45, abhijeet inamdar
>>>  wrote:
>>> >
>>> > Hi,
>>> >
>>> > Do these lines need to be changed for my machine:
>>> https://github.com/qemu/qemu/blob/stable-4.2/target/arm/helper.c#L9842-L9852
>>> or are they fixed?
>>>
>>> Those lines aren't board-specific, they are an
>>> architectural requirement on the CPU. They set up the
>>> default memory permissions (executable or not) when the
>>> MPU is disabled.
>>>
>>> Also, you're looking at QEMU 4.2 there, which is now
>>> pretty old. If you're actively developing a new board
>>> model, use the most recent QEMU at least, and preferably
>>> head-of-git.
>>>
>>> thanks
>>> -- PMM
>>>
>>


Re: qemu: fatal: lockup...

2021-12-08 Thread abhijeet inamdar
How important are these lines:
https://github.com/qemu/qemu/blob/stable-4.2/target/arm/cpu.c#L1928-L1929

Anyway, I will use the newer QEMU, but I am curious: was this a problem in
my build?
Please let me know so that I can move forward. It's really important for me.

BR.
Abhijeet.

On Wed, Dec 8, 2021 at 3:11 PM abhijeet inamdar <
abhijeetinamdar3...@gmail.com> wrote:

> I will look into the latest one. Can you elaborate on that, please?
>
> And I'm running a test.elf for a machine similar to the one I'm building.
> The test runs only if the flash and RAM sizes I give are double the
> actually required/used size. How can that be? Is there some offset that
> needs to be changed, or am I doing something wrong?
>
> BR.
> Abhijeet.
>
> On Wed, 8 Dec, 2021, 12:30 Peter Maydell, 
> wrote:
>
>> On Wed, 8 Dec 2021 at 10:45, abhijeet inamdar
>>  wrote:
>> >
>> > Hi,
>> >
>> > Do these lines need to be changed for my machine:
>> https://github.com/qemu/qemu/blob/stable-4.2/target/arm/helper.c#L9842-L9852
>> or are they fixed?
>>
>> Those lines aren't board-specific, they are an
>> architectural requirement on the CPU. They set up the
>> default memory permissions (executable or not) when the
>> MPU is disabled.
>>
>> Also, you're looking at QEMU 4.2 there, which is now
>> pretty old. If you're actively developing a new board
>> model, use the most recent QEMU at least, and preferably
>> head-of-git.
>>
>> thanks
>> -- PMM
>>
>


Re: qemu: fatal: lockup...

2021-12-08 Thread abhijeet inamdar
I will look into the latest one. Can you elaborate on that, please?

And I'm running a test.elf for a machine similar to the one I'm building.
The test runs only if the flash and RAM sizes I give are double the
actually required/used size. How can that be? Is there some offset that
needs to be changed, or am I doing something wrong?

BR.
Abhijeet.

On Wed, 8 Dec, 2021, 12:30 Peter Maydell,  wrote:

> On Wed, 8 Dec 2021 at 10:45, abhijeet inamdar
>  wrote:
> >
> > Hi,
> >
> > Do these lines need to be changed for my machine:
> https://github.com/qemu/qemu/blob/stable-4.2/target/arm/helper.c#L9842-L9852
> or are they fixed?
>
> Those lines aren't board-specific, they are an
> architectural requirement on the CPU. They set up the
> default memory permissions (executable or not) when the
> MPU is disabled.
>
> Also, you're looking at QEMU 4.2 there, which is now
> pretty old. If you're actively developing a new board
> model, use the most recent QEMU at least, and preferably
> head-of-git.
>
> thanks
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-08 Thread Peter Maydell
On Wed, 8 Dec 2021 at 10:45, abhijeet inamdar
 wrote:
>
> Hi,
>
> Do these lines need to be changed for my machine:
> https://github.com/qemu/qemu/blob/stable-4.2/target/arm/helper.c#L9842-L9852
> or are they fixed?

Those lines aren't board-specific, they are an
architectural requirement on the CPU. They set up the
default memory permissions (executable or not) when the
MPU is disabled.

Also, you're looking at QEMU 4.2 there, which is now
pretty old. If you're actively developing a new board
model, use the most recent QEMU at least, and preferably
head-of-git.

thanks
-- PMM



Re: qemu: fatal: lockup...

2021-12-08 Thread abhijeet inamdar
Hi,

Do these lines need to be changed for my machine:
https://github.com/qemu/qemu/blob/stable-4.2/target/arm/helper.c#L9842-L9852
or are they fixed?

BR.
Abhijeet.

On Tue, 7 Dec, 2021, 13:29 abhijeet inamdar, 
wrote:

> The only difference I found is that at 0x0 we have ROM for the former
> and SRAM for the latter.
>
> As you said, I should model the hardware as it is, but it is hard to get
> the exact sizes/addresses (or ranges) of the flash and RAM, which I think
> is the main issue.
>
> BR.
> Abhijeet.
>
> On Tue, 7 Dec, 2021, 11:12 Peter Maydell, 
> wrote:
>
>> On Tue, 7 Dec 2021 at 09:23, abhijeet inamdar
>>  wrote:
>> >
>> > And we have two memory maps: one at boot time, and the normal memory
>> map. There is a slight change in the sizes/addresses between them. My
>> doubt is: which mapping do we usually follow?
>>
>> The rule of thumb is: model what the real hardware does.
>> If that means "we have one mapping at boot time and then
>> at runtime there's some register that changes the mapping",
>> then model that.
>>
>> That said, sometimes if the two maps are very similar, or if
>> in fact you know the guest you care about never does whatever
>> the thing is to change the memory map, then you can get away
>> with only modelling one of them.
>>
>> -- PMM
>>
>


Re: qemu: fatal: lockup...

2021-12-07 Thread abhijeet inamdar
The only difference I found is that at 0x0 we have ROM for the former
and SRAM for the latter.

As you said, I should model the hardware as it is, but it is hard to get
the exact sizes/addresses (or ranges) of the flash and RAM, which I think
is the main issue.

BR.
Abhijeet.

On Tue, 7 Dec, 2021, 11:12 Peter Maydell,  wrote:

> On Tue, 7 Dec 2021 at 09:23, abhijeet inamdar
>  wrote:
> >
> > And we have two memory maps: one at boot time, and the normal memory
> map. There is a slight change in the sizes/addresses between them. My
> doubt is: which mapping do we usually follow?
>
> The rule of thumb is: model what the real hardware does.
> If that means "we have one mapping at boot time and then
> at runtime there's some register that changes the mapping",
> then model that.
>
> That said, sometimes if the two maps are very similar, or if
> in fact you know the guest you care about never does whatever
> the thing is to change the memory map, then you can get away
> with only modelling one of them.
>
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-07 Thread Peter Maydell
On Tue, 7 Dec 2021 at 09:23, abhijeet inamdar
 wrote:
>
> And we have two memory maps: one at boot time, and the normal memory map.
> There is a slight change in the sizes/addresses between them. My doubt is:
> which mapping do we usually follow?

The rule of thumb is: model what the real hardware does.
If that means "we have one mapping at boot time and then
at runtime there's some register that changes the mapping",
then model that.

That said, sometimes if the two maps are very similar, or if
in fact you know the guest you care about never does whatever
the thing is to change the memory map, then you can get away
with only modelling one of them.
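One common way to model a boot-time remap in QEMU is an alias region that overlays address 0 and is disabled when the guest flips the remap register. A sketch under assumed names (s->sram, SRAM_SIZE, and the remap register are invented):

```c
/* Map an alias of SRAM at address 0 with higher priority, so it shadows
 * whatever is normally mapped there until the remap bit is written. */
memory_region_init_alias(&s->sram_alias, OBJECT(s), "sram-at-zero",
                         &s->sram, 0, SRAM_SIZE);
memory_region_add_subregion_overlap(get_system_memory(), 0,
                                    &s->sram_alias, 1 /* priority */);

/* Later, in the remap register's write handler: */
memory_region_set_enabled(&s->sram_alias, false);
```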

-- PMM



Re: qemu: fatal: lockup...

2021-12-07 Thread abhijeet inamdar
And we have two memory maps: one at boot time, and the normal memory map.
There is a slight change in the sizes/addresses between them. My doubt is:
which mapping do we usually follow?

BR.
Abhijeet.

On Mon, 6 Dec, 2021, 17:10 abhijeet inamdar, 
wrote:

> I have searched through the whole reference manual: there is no plain
> "Flash" memory as such, but there are NOR and NAND flash memory addresses.
>
> BR.
> Abhijeet.
>
> On Mon, 6 Dec, 2021, 15:16 Peter Maydell, 
> wrote:
>
>> On Mon, 6 Dec 2021 at 13:30, abhijeet inamdar
>>  wrote:
>> >
>> > How can I find out where the hardware actually expects the initial
>> VTOR?
>>
>> This should be documented in its technical reference
>> manual/specification.
>>
>> -- PMM
>>
>


Re: qemu: fatal: lockup...

2021-12-06 Thread abhijeet inamdar
I have searched through the whole reference manual: there is no plain
"Flash" memory as such, but there are NOR and NAND flash memory addresses.

BR.
Abhijeet.

On Mon, 6 Dec, 2021, 15:16 Peter Maydell,  wrote:

> On Mon, 6 Dec 2021 at 13:30, abhijeet inamdar
>  wrote:
> >
> > How can I find out where the hardware actually expects the initial
> VTOR?
>
> This should be documented in its technical reference
> manual/specification.
>
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-06 Thread Peter Maydell
On Mon, 6 Dec 2021 at 13:30, abhijeet inamdar
 wrote:
>
> How can I find out where the hardware actually expects the initial VTOR?

This should be documented in its technical reference
manual/specification.

-- PMM



Re: qemu: fatal: lockup...

2021-12-06 Thread abhijeet inamdar
How can I find out where the hardware actually expects the initial VTOR?

Mine is a Cortex-M3 machine. Well, not entirely: it is a combination of
two cores, an A7 and an M3.

Any tips?

BR.
Abhijeet.

On Mon, 6 Dec, 2021, 12:06 Peter Maydell,  wrote:

> On Sun, 5 Dec 2021 at 17:16, abhijeet inamdar
>  wrote:
> >
> > So the solution would be placing the vector table correctly and mapping
> the flash/RAM at the right addresses...?
>
> Yes, you need to make sure your model:
>  * puts RAM and flash where the hardware does
>  * configures the CPU object to set the initial VTOR to
>the same place the hardware does
> and you need to make sure your guest code is written and
> linked to execute from the correct addresses.
>
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-06 Thread Peter Maydell
On Sun, 5 Dec 2021 at 17:16, abhijeet inamdar
 wrote:
>
> So the solution would be placing the vector table correctly and mapping
> the flash/RAM at the right addresses...?

Yes, you need to make sure your model:
 * puts RAM and flash where the hardware does
 * configures the CPU object to set the initial VTOR to
   the same place the hardware does
and you need to make sure your guest code is written and
linked to execute from the correct addresses.
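In machine-init code those two points might look like the sketch below (all names, bases, and sizes are invented; take the real values from the board's reference manual):

```c
/* Flash modelled as ROM for simplicity; a real board might use pflash. */
memory_region_init_rom(&s->flash, NULL, "flash", FLASH_SIZE, &error_fatal);
memory_region_add_subregion(get_system_memory(), FLASH_BASE, &s->flash);

memory_region_init_ram(&s->sram, NULL, "sram", SRAM_SIZE, &error_fatal);
memory_region_add_subregion(get_system_memory(), SRAM_BASE, &s->sram);

/* Make the CPU fetch its initial SP/PC from the flash base on reset. */
qdev_prop_set_uint32(DEVICE(&s->armv7m), "init-svtor", FLASH_BASE);
```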

-- PMM



Re: qemu: fatal: lockup...

2021-12-05 Thread abhijeet inamdar
So the solution would be placing the vector table correctly and mapping
the flash/RAM at the right addresses...?


BR.
Abhijeet.

On Sun, 5 Dec, 2021, 17:59 Peter Maydell,  wrote:

> On Sat, 4 Dec 2021 at 23:06, abhijeet inamdar
>  wrote:
> > I'm getting this error. There are no hits for it on Google, as most
> results are about HardFault. The error is:
> >
> >  "Taking exception 18 [v7M INVSTATE UsageFault]
> > ...BusFault with BFSR.STKERR
> > ...taking pending nonsecure exception 3
> > qemu: fatal: Lockup: can't take terminal derived exception (original
> exception priority -1)".
>
> This means that you tried to take an exception (UsageFault), but
> trying to stack the registers for the exception failed (BusFault
> with BFSR.STKERR set), and then on top of that we had another
> exception trying to take the busfault that meant we were unable
> to take any exception at all (this is what "terminal derived exception"
> means). I think that last fault was a vector table fetch failure,
> as I don't think QEMU has any other cases of terminal derived
> exceptions. The lockup happens when the terminal derived
> exception is at the same effective priority (here -1) as the
> exception we were trying to take (busfault).
>
> If you look in the execution trace you'll probably find that
> the stack pointer is bogus. The SP is initially read from the
> vector table, so if the vector table isn't actually readable
> then your code will start with a bogus SP value, which then
> means that trying to take an exception will take this bus fault.
>
> > I have changed the address of the vectors (target/arm/cpu.c) for a
> > Cortex-M3 machine to place it where I want. Or is it restricted to
> > "0x0" only?
>
> The vector table can go wherever your machine/SoC puts it, but you
> shouldn't be editing cpu.c to change it. The CPU object has two
> QOM properties, init-svtor and init-nsvtor, which the board code
> sets to specify the Secure VTOR and the NonSecure VTOR reset values.
> (If a CPU doesn't support secure, like the M3, the init-nsvtor is
> the only vtor). The machine/SoC code should configure these CPU
> properties to match whatever the real hardware has them set to.
>
> > And is there any fixed size for the Flash size for the M3 machine
> > (I don't believe but doubt)
>
> This is entirely up to the machine model code -- you need to create
> the flash in the right place in the address map and with the right
> size corresponding to the hardware you're emulating.
>
> The symptoms described here suggest to me that your machine model
> isn't creating RAM/flash in the right places, or that you're
> setting the vector table address to the wrong place.
>
> thanks
> -- PMM
>


Re: qemu: fatal: lockup...

2021-12-05 Thread Peter Maydell
On Sat, 4 Dec 2021 at 23:06, abhijeet inamdar
 wrote:
> I'm getting this error. There are no hits for it on Google, as most
> results are about HardFault. The error is:
>
>  "Taking exception 18 [v7M INVSTATE UsageFault]
> ...BusFault with BFSR.STKERR
> ...taking pending nonsecure exception 3
> qemu: fatal: Lockup: can't take terminal derived exception (original 
> exception priority -1)".

This means that you tried to take an exception (UsageFault), but
trying to stack the registers for the exception failed (BusFault
with BFSR.STKERR set), and then on top of that we had another
exception trying to take the busfault that meant we were unable
to take any exception at all (this is what "terminal derived exception"
means). I think that last fault was a vector table fetch failure,
as I don't think QEMU has any other cases of terminal derived
exceptions. The lockup happens when the terminal derived
exception is at the same effective priority (here -1) as the
exception we were trying to take (busfault).

If you look in the execution trace you'll probably find that
the stack pointer is bogus. The SP is initially read from the
vector table, so if the vector table isn't actually readable
then your code will start with a bogus SP value, which then
means that trying to take an exception will take this bus fault.

> I have changed the address of the vectors (target/arm/cpu.c) for a
> Cortex-M3 machine to place it where I want. Or is it restricted to
> "0x0" only?

The vector table can go wherever your machine/SoC puts it, but you
shouldn't be editing cpu.c to change it. The CPU object has two
QOM properties, init-svtor and init-nsvtor, which the board code
sets to specify the Secure VTOR and the NonSecure VTOR reset values.
(If a CPU doesn't support secure, like the M3, the init-nsvtor is
the only vtor). The machine/SoC code should configure these CPU
properties to match whatever the real hardware has them set to.
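Setting that property from board/SoC code is a one-liner. A sketch, where VECTOR_TABLE_BASE is an invented name for wherever the real hardware puts the table:

```c
/* On a CPU without the Security Extension (e.g. Cortex-M3), only the
 * NonSecure reset VTOR applies. */
object_property_set_uint(OBJECT(cpu), "init-nsvtor",
                         VECTOR_TABLE_BASE, &error_abort);
```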

> And is there any fixed size for the Flash size for the M3 machine
> (I don't believe but doubt)

This is entirely up to the machine model code -- you need to create
the flash in the right place in the address map and with the right
size corresponding to the hardware you're emulating.

The symptoms described here suggest to me that your machine model
isn't creating RAM/flash in the right places, or that you're
setting the vector table address to the wrong place.

thanks
-- PMM



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-30 Thread Peter Maydell
On Thu, 30 Sept 2021 at 12:34, abhijeet inamdar
 wrote:
> Actually, the ELF is used to generate the .bin file, which is what runs on
> the target (hardware). Its addresses start from zero when I look at its
> first frames. As follows:
>
> IN:
> 0x00000002:  c0de       stm  r0!, {r1, r2, r3, r4, r6, r7}
> 0x00000004:  0003       movs r3, r0
> 0x00000006:  0000       movs r0, r0
> 0x00000008:  0001       movs r1, r0
> 0x0000000a:  0000       movs r0, r0
> 0x0000000c:  0002       movs r2, r0
> 0x0000000e:  0000       movs r0, r0
> 0x00000010:  0168       lsls r0, r5, #5
> 0x00000012:  0000       movs r0, r0
> 0x00000014:  5838       ldr  r0, [r7, r0]

This clearly isn't code; it's some kind of data. It's not
a vector table, because it starts
 0xc0de
 0x0003
 0x0001
 0x0002
 0x0168

and those aren't plausible looking addresses.

The guest CPU loads the reset SP and PC. The reset PC
is 0x0003, so we start at address 0x0002 in Thumb
mode. The data at that address is not a sensible instruction
(it's that "stm r0!..."), but we execute it. r0 is 0, so this
is going to store new random data all over the existing
data that we were incorrectly executing. The inevitable
result is that we take an exception, and this time the
vector table is full of zeros, so now we try to execute
from 0x0 in non-Thumb mode, which means we take another exception,
which is Lockup.

The solution remains the same: you need to load a guest
image which puts a valid vector table in guest memory
at the address where the CPU expects it (which looks like
0x0 in this case). Until you do this, your guest code
will crash in mysterious-looking ways because you are
not running what you think you are running.

-- PMM



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-30 Thread abhijeet inamdar
The above is from when I load the .bin instead of the ELF into the machine.

On Thu, Sep 30, 2021 at 1:33 PM abhijeet inamdar <
abhijeetinamdar3...@gmail.com> wrote:

> Actually, the ELF is used to generate the .bin file, which is what runs on
> the target (hardware). Its addresses start from zero when I look at its
> first frames. As follows:
>
>
> IN:
> 0x00000002:  c0de       stm  r0!, {r1, r2, r3, r4, r6, r7}
> 0x00000004:  0003       movs r3, r0
> 0x00000006:  0000       movs r0, r0
> 0x00000008:  0001       movs r1, r0
> 0x0000000a:  0000       movs r0, r0
> 0x0000000c:  0002       movs r2, r0
> 0x0000000e:  0000       movs r0, r0
> 0x00000010:  0168       lsls r0, r5, #5
> 0x00000012:  0000       movs r0, r0
> 0x00000014:  5838       ldr  r0, [r7, r0]
> 0x00000016:  0000       movs r0, r0
> 0x00000018:  0000       movs r0, r0
> 0x0000001a:  0000       movs r0, r0
> 0x0000001c:  ac8e       add  r4, sp, #0x238
> 0x0000001e:  48d4       ldr  r0, [pc, #0x350]
> 0x00000020:  39bb       subs r1, #0xbb
> 0x00000022:  421b       tst  r3, r3
> 0x00000024:  3db7       subs r5, #0xb7
> 0x00000026:  5d30       ldrb r0, [r6, r4]
> 0x00000028:  79df       ldrb r7, [r3, #7]
> 0x0000002a:  fcf6 6a34  ldc2l p10, c6, [r6], #0xd0
>
> OUT: [size=1040]
> 0x70849100:  8b 5d f0             movl -0x10(%rbp), %ebx
> 0x70849103:  85 db                testl %ebx, %ebx
> 0x70849105:  0f 8c cb 02 00 00    jl   0x708493d6
> 0x7084910b:  8b 5d 04             movl 4(%rbp), %ebx
> 0x7084910e:  44 8b 65 00          movl (%rbp), %r12d
> 0x70849112:  41 8b fc             movl %r12d, %edi
> 0x70849115:  c1 ef 05             shrl $5, %edi
> 0x70849118:  23 7d 80             andl -0x80(%rbp), %edi
> 0x7084911b:  48 03 7d 88          addq -0x78(%rbp), %rdi
> 0x7084911f:  41 8d 74 24 03       leal 3(%r12), %esi
> 0x70849124:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
> 0x7084912a:  3b 77 04             cmpl 4(%rdi), %esi
> 0x7084912d:  41 8b f4             movl %r12d, %esi
> 0x70849130:  0f 85 ac 02 00 00    jne  0x708493e2
> 0x70849136:  48 03 77 10          addq 0x10(%rdi), %rsi
> 0x7084913a:  89 1e                movl %ebx, (%rsi)
> 0x7084913c:  41 8d 5c 24 04       leal 4(%r12), %ebx
> 0x70849141:  44 8b e3             movl %ebx, %r12d
> 0x70849144:  44 8b 6d 08          movl 8(%rbp), %r13d
> 0x70849148:  41 8b fc             movl %r12d, %edi
> 0x7084914b:  c1 ef 05             shrl $5, %edi
> 0x7084914e:  23 7d 80             andl -0x80(%rbp), %edi
> 0x70849151:  48 03 7d 88          addq -0x78(%rbp), %rdi
> 0x70849155:  41 8d 74 24 03       leal 3(%r12), %esi
> 0x7084915a:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
> 0x70849160:  3b 77 04             cmpl 4(%rdi), %esi
> 0x70849163:  41 8b f4             movl %r12d, %esi
> 0x70849166:  0f 85 8f 02 00 00    jne  0x708493fb
> 0x7084916c:  48 03 77 10          addq 0x10(%rdi), %rsi
> 0x70849170:  44 89 2e             movl %r13d, (%rsi)
> 0x70849173:  83 c3 04             addl $4, %ebx
> 0x70849176:  44 8b e3             movl %ebx, %r12d
> 0x70849179:  44 8b 6d 0c          movl 0xc(%rbp), %r13d
> 0x7084917d:  41 8b fc             movl %r12d, %edi
> 0x70849180:  c1 ef 05             shrl $5, %edi
> 0x70849183:  23 7d 80             andl -0x80(%rbp), %edi
> 0x70849186:  48 03 7d 88          addq -0x78(%rbp), %rdi
> 0x7084918a:  41 8d 74 24 03       leal 3(%r12), %esi
> 0x7084918f:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
> 0x70849195:  3b 77 04             cmpl 4(%rdi), %esi
> 0x70849198:  41 8b f4             movl %r12d, %esi
> 0x7084919b:  0f 85 74 02 00 00    jne  0x70849415
> 0x708491a1:  48 03 77 10          addq 0x10(%rdi), %rsi
> 0x708491a5:  44 89 2e             movl %r13d, (%rsi)
> 0x708491a8:  83 c3 04             addl $4, %ebx
> 0x708491ab:  44 8b e3             movl %ebx, %r12d
> 0x708491ae:  44 8b 6d 10          movl 0x10(%rbp), %r13d
> 0x708491b2:  41 8b fc             movl %r12d, %edi
> 0x708491b5:  c1 ef 05             shrl $5, %edi
> 0x708491b8:  23 7d 80             andl -0x80(%rbp), %edi
> 0x708491bb:  48 03 7d 88          addq -0x78(%rbp), %rdi
> 0x708491bf:  41 8d 74 24 03       leal 3(%r12), %esi
> 0x708491c4:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
> 0x708491ca:  3b 77 04             cmpl 4(%rdi), %esi
> 0x708491cd:  41 8b f4

Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-30 Thread abhijeet inamdar
Actually, the ELF is used to generate the .bin file, which is what runs on
the target (hardware). Its addresses start from zero when I look at its
first frames. As follows:


IN:
0x00000002:  c0de       stm  r0!, {r1, r2, r3, r4, r6, r7}
0x00000004:  0003       movs r3, r0
0x00000006:  0000       movs r0, r0
0x00000008:  0001       movs r1, r0
0x0000000a:  0000       movs r0, r0
0x0000000c:  0002       movs r2, r0
0x0000000e:  0000       movs r0, r0
0x00000010:  0168       lsls r0, r5, #5
0x00000012:  0000       movs r0, r0
0x00000014:  5838       ldr  r0, [r7, r0]
0x00000016:  0000       movs r0, r0
0x00000018:  0000       movs r0, r0
0x0000001a:  0000       movs r0, r0
0x0000001c:  ac8e       add  r4, sp, #0x238
0x0000001e:  48d4       ldr  r0, [pc, #0x350]
0x00000020:  39bb       subs r1, #0xbb
0x00000022:  421b       tst  r3, r3
0x00000024:  3db7       subs r5, #0xb7
0x00000026:  5d30       ldrb r0, [r6, r4]
0x00000028:  79df       ldrb r7, [r3, #7]
0x0000002a:  fcf6 6a34  ldc2l p10, c6, [r6], #0xd0

OUT: [size=1040]
0x70849100:  8b 5d f0             movl -0x10(%rbp), %ebx
0x70849103:  85 db                testl %ebx, %ebx
0x70849105:  0f 8c cb 02 00 00    jl   0x708493d6
0x7084910b:  8b 5d 04             movl 4(%rbp), %ebx
0x7084910e:  44 8b 65 00          movl (%rbp), %r12d
0x70849112:  41 8b fc             movl %r12d, %edi
0x70849115:  c1 ef 05             shrl $5, %edi
0x70849118:  23 7d 80             andl -0x80(%rbp), %edi
0x7084911b:  48 03 7d 88          addq -0x78(%rbp), %rdi
0x7084911f:  41 8d 74 24 03       leal 3(%r12), %esi
0x70849124:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
0x7084912a:  3b 77 04             cmpl 4(%rdi), %esi
0x7084912d:  41 8b f4             movl %r12d, %esi
0x70849130:  0f 85 ac 02 00 00    jne  0x708493e2
0x70849136:  48 03 77 10          addq 0x10(%rdi), %rsi
0x7084913a:  89 1e                movl %ebx, (%rsi)
0x7084913c:  41 8d 5c 24 04       leal 4(%r12), %ebx
0x70849141:  44 8b e3             movl %ebx, %r12d
0x70849144:  44 8b 6d 08          movl 8(%rbp), %r13d
0x70849148:  41 8b fc             movl %r12d, %edi
0x7084914b:  c1 ef 05             shrl $5, %edi
0x7084914e:  23 7d 80             andl -0x80(%rbp), %edi
0x70849151:  48 03 7d 88          addq -0x78(%rbp), %rdi
0x70849155:  41 8d 74 24 03       leal 3(%r12), %esi
0x7084915a:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
0x70849160:  3b 77 04             cmpl 4(%rdi), %esi
0x70849163:  41 8b f4             movl %r12d, %esi
0x70849166:  0f 85 8f 02 00 00    jne  0x708493fb
0x7084916c:  48 03 77 10          addq 0x10(%rdi), %rsi
0x70849170:  44 89 2e             movl %r13d, (%rsi)
0x70849173:  83 c3 04             addl $4, %ebx
0x70849176:  44 8b e3             movl %ebx, %r12d
0x70849179:  44 8b 6d 0c          movl 0xc(%rbp), %r13d
0x7084917d:  41 8b fc             movl %r12d, %edi
0x70849180:  c1 ef 05             shrl $5, %edi
0x70849183:  23 7d 80             andl -0x80(%rbp), %edi
0x70849186:  48 03 7d 88          addq -0x78(%rbp), %rdi
0x7084918a:  41 8d 74 24 03       leal 3(%r12), %esi
0x7084918f:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
0x70849195:  3b 77 04             cmpl 4(%rdi), %esi
0x70849198:  41 8b f4             movl %r12d, %esi
0x7084919b:  0f 85 74 02 00 00    jne  0x70849415
0x708491a1:  48 03 77 10          addq 0x10(%rdi), %rsi
0x708491a5:  44 89 2e             movl %r13d, (%rsi)
0x708491a8:  83 c3 04             addl $4, %ebx
0x708491ab:  44 8b e3             movl %ebx, %r12d
0x708491ae:  44 8b 6d 10          movl 0x10(%rbp), %r13d
0x708491b2:  41 8b fc             movl %r12d, %edi
0x708491b5:  c1 ef 05             shrl $5, %edi
0x708491b8:  23 7d 80             andl -0x80(%rbp), %edi
0x708491bb:  48 03 7d 88          addq -0x78(%rbp), %rdi
0x708491bf:  41 8d 74 24 03       leal 3(%r12), %esi
0x708491c4:  81 e6 00 fc ff ff    andl $0xfffffc00, %esi
0x708491ca:  3b 77 04             cmpl 4(%rdi), %esi
0x708491cd:  41 8b f4             movl %r12d, %esi
0x708491d0:  0f 85 59 02 00 00    jne  0x7084942f
0x708491d6:  48 03 77 10          addq 0x10(%rdi), %rsi
0x708491da:  44 89 2e             movl %r13d, (%rsi)
0x708491dd:  83 c3 04             addl $4, %ebx
0x708491e0:  44 8b e3

Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-30 Thread Peter Maydell
On Thu, 30 Sept 2021 at 07:17, abhijeet inamdar
 wrote:
>
> But this very ELF file runs perfectly on the target (real hardware). So
> why should it behave differently under emulation?

Real hardware doesn't have a magic ELF file loader. The
details of what a debug environment or whatever mechanism
you're using to put the ELF file on the target or an
emulator expect from an ELF file vary. QEMU wants you to
provide a vector table. (I imagine that the mechanism you're
using with the real hardware starts execution at the ELF
entry point.)

-- PMM



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-30 Thread abhijeet inamdar
But this very ELF file runs perfectly on the target (real hardware). So why
should it behave differently under emulation?

Thank you,
Abhijeet.

On Wed, Sep 29, 2021 at 10:31 PM Peter Maydell 
wrote:

> On Wed, 29 Sept 2021 at 16:24, abhijeet inamdar
>  wrote:
> >
> > I tried to add -d in_asm,out_asm,guest_errors it gives out as follows:
>
> 'int,exec,cpu' are probably also helpful.
>
> > [New Thread 0x7fffe700 (LWP 44283)]
> > 
> > IN:
> > 0x00000000:  00000000  andeq r0, r0, r0
>
> We started at address 0 in not-thumb mode. Your ELF file is
> almost certainly not correct (ie it does not include a suitable
> vector table for the CPU to get its reset PC and SP from).
>
> -- PMM
>


Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-29 Thread Peter Maydell
On Wed, 29 Sept 2021 at 16:24, abhijeet inamdar
 wrote:
>
> I tried to add -d in_asm,out_asm,guest_errors it gives out as follows:

'int,exec,cpu' are probably also helpful.
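Put together, a full debug invocation might look like this sketch (the machine name and ELF are placeholders for your own):

```shell
# Log guest/host disassembly, interrupts, execution and CPU state
# to a file; -D avoids drowning stderr in the large exec trace.
qemu-system-arm -M my-board -kernel test.elf \
    -d in_asm,int,exec,cpu,guest_errors -D qemu.log
```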

> [New Thread 0x7fffe700 (LWP 44283)]
> 
> IN:
> 0x00000000:  00000000  andeq r0, r0, r0

We started at address 0 in not-thumb mode. Your ELF file is
almost certainly not correct (ie it does not include a suitable
vector table for the CPU to get its reset PC and SP from).

-- PMM



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-29 Thread abhijeet inamdar
I tried to add -d in_asm,out_asm,guest_errors it gives out as follows:

PROLOGUE: [size=45]
0x70849000:  55                   pushq %rbp
0x70849001:  53                   pushq %rbx
0x70849002:  41 54                pushq %r12
0x70849004:  41 55                pushq %r13
0x70849006:  41 56                pushq %r14
0x70849008:  41 57                pushq %r15
0x7084900a:  48 8b ef             movq %rdi, %rbp
0x7084900d:  48 81 c4 78 fb ff ff addq $-0x488, %rsp
0x70849014:  ff e6                jmpq *%rsi
0x70849016:  33 c0                xorl %eax, %eax
0x70849018:  48 81 c4 88 04 00 00 addq $0x488, %rsp
0x7084901f:  c5 f8 77             vzeroupper
0x70849022:  41 5f                popq %r15
0x70849024:  41 5e                popq %r14
0x70849026:  41 5d                popq %r13
0x70849028:  41 5c                popq %r12
0x7084902a:  5b                   popq %rbx
0x7084902b:  5d                   popq %rbp
0x7084902c:  c3                   retq

[New Thread 0x7fffe700 (LWP 44283)]

IN:
0x00000000:  00000000  andeq r0, r0, r0

OUT: [size=64]
0x70849100:  8b 5d f0             movl -0x10(%rbp), %ebx
0x70849103:  85 db                testl %ebx, %ebx
0x70849105:  0f 8c 1f 00 00 00    jl   0x7084912a
0x7084910b:  c7 45 3c 00 00 00 00 movl $0, 0x3c(%rbp)
0x70849112:  48 8b fd             movq %rbp, %rdi
0x70849115:  be 12 00 00 00       movl $0x12, %esi
0x7084911a:  ba 00 00 00 02       movl $0x2000000, %edx
0x7084911f:  b9 01 00 00 00       movl $1, %ecx
0x70849124:  ff 15 0e 00 00 00    callq *0xe(%rip)
0x7084912a:  48 8d 05 12 ff ff ff leaq -0xee(%rip), %rax
0x70849131:  e9 e2 fe ff ff       jmp  0x70849018
0x70849136:  90                   nop
0x70849137:  90                   nop
0x70849138:  .quad  0x55a70e01


IN:
0x00000000:  00000000  andeq r0, r0, r0

OUT: [size=64]
0x70849240:  8b 5d f0             movl -0x10(%rbp), %ebx
0x70849243:  85 db                testl %ebx, %ebx
0x70849245:  0f 8c 1f 00 00 00    jl   0x7084926a
0x7084924b:  c7 45 3c 00 00 00 00 movl $0, 0x3c(%rbp)
0x70849252:  48 8b fd             movq %rbp, %rdi
0x70849255:  be 12 00 00 00       movl $0x12, %esi
0x7084925a:  ba 00 00 00 02       movl $0x2000000, %edx
0x7084925f:  b9 01 00 00 00       movl $1, %ecx
0x70849264:  ff 15 0e 00 00 00    callq *0xe(%rip)
0x7084926a:  48 8d 05 12 ff ff ff leaq -0xee(%rip), %rax
0x70849271:  e9 a2 fd ff ff       jmp  0x70849018
0x70849276:  90                   nop
0x70849277:  90                   nop
0x70849278:  .quad  0x55a70e01

qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

R00=00000000 R01=00000000 R02=00000000 R03=00000000
R04=00000000 R05=00000000 R06=00000000 R07=00000000
R08=00000000 R09=00000000 R10=00000000 R11=00000000
R12=00000000 R13=ffffffe0 R14=fffffff9 R15=00000000
XPSR=40000003 -Z-- A handler
FPSCR: 00000000
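The XPSR and R14 fields in the dump can be decoded mechanically. A minimal sketch, assuming the full 32-bit values are XPSR=0x40000003 and R14=0xfffffff9 (the archive elided repeated hex digits; the bit layout follows the ARMv7-M register definitions):

```python
# Decode the XPSR and EXC_RETURN (R14) fields from the dump above,
# per the ARMv7-M bit layout. The full 32-bit input values are
# assumptions reconstructed from context, not taken verbatim.

def decode_xpsr(xpsr):
    """Pull out the fields behind QEMU's "-Z-- A handler" summary."""
    return {
        "exception_number": xpsr & 0x1FF,     # IPSR: 3 = HardFault active
        "thumb_bit": bool(xpsr & (1 << 24)),  # T bit: must be 1 on M-profile
        "flag_z": bool(xpsr & (1 << 30)),     # the Z in "-Z--"
    }

def decode_exc_return(lr):
    """Classify the magic EXC_RETURN value loaded into LR on
    exception entry (no-FPU encodings only)."""
    return {
        0xFFFFFFF1: "return to Handler mode, use MSP",
        0xFFFFFFF9: "return to Thread mode, use MSP",
        0xFFFFFFFD: "return to Thread mode, use PSP",
    }.get(lr, "not an EXC_RETURN value")

print(decode_xpsr(0x40000003))        # exception 3 (HardFault), T clear, Z set
print(decode_exc_return(0xFFFFFFF9))  # the value in R14 above
```

The Thumb bit being clear while exception 3 is active is exactly the illegal state that leads to Lockup on M-profile cores.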

Thread 3 "qemu-system-arm" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffe700 (LWP 44283)]
0x75f31438 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54      ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) n
[Thread 0x7fffe700 (LWP 44283) exited]
[Thread 0x73049700 (LWP 44282) exited]

Program terminated with signal SIGABRT, Aborted.
The program no longer exists.
(gdb)

It aborts at the very next step. How can I proceed?

Thank you,
Abhijeet.

On Fri, Sep 17, 2021 at 11:11 AM Peter Maydell wrote:

> On Thu, 16 Sept 2021 at 20:13, abhijeet inamdar wrote:
> >
> > Is there any way to check where exactly it is failing, or which file?
>
> Use the usual debugging facilities -- gdbstub or -d debug logging.
>
> -- PMM
>


Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-17 Thread Peter Maydell
On Thu, 16 Sept 2021 at 20:13, abhijeet inamdar wrote:
>
> Is there any way to check where exactly it is failing, or which file?

Use the usual debugging facilities -- gdbstub or -d debug logging.

-- PMM



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-16 Thread abhijeet inamdar
Is there any way to check where exactly it is failing, or which file?

Thank you,
Abhijeet.

On Thu, Sep 16, 2021 at 8:49 PM Peter Maydell wrote:

> On Thu, 16 Sept 2021 at 19:46, Peter Maydell wrote:
> >
> > On Thu, 16 Sept 2021 at 17:52, abhijeet inamdar wrote:
> > > How do I fix it? It's for Cortex-M3, and below is the gdb trace when I load the ELF.
> > >
> > > qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority
> -1)
> > >
> > > R00=00000000 R01=00000000 R02=00000000 R03=00000000
> > > R04=00000000 R05=00000000 R06=00000000 R07=00000000
> > > R08=00000000 R09=00000000 R10=00000000 R11=00000000
> > > R12=00000000 R13=ffffffe0 R14=fffffff9 R15=00000000
> > > XPSR=40000003 -Z-- A handler
> > > FPSCR: 00000000
>
> > This particular case is "we needed to take a HardFault exception,
> > but we were already in a HardFault exception". The most common
> > cause of this is that your code has crashed hard on startup
> > (eg it tries to read from unreadable memory or jumps off into nowhere:
> > if this happens before it has set up exception handling for HardFault
> > then you get this. This also happens if its attempt to handle
> > HardFaults is buggy and crashes.)
>
> Oh, and note that the PC is zero and the Thumb bit is not set:
> this means that your guest code did something that caused the
> CPU to try to take an exception, but your ELF file didn't
> provide an exception vector table, and so the vector table
> entry for the exception was 0. That means that the CPU will
> attempt to execute from address 0 with the Thumb bit clear,
> which provokes an immediate UsageFault exception, usually leading
> to the exception-in-an-exception Lockup case above.
>
> -- PMM
>


Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-16 Thread Peter Maydell
On Thu, 16 Sept 2021 at 19:46, Peter Maydell wrote:
>
> On Thu, 16 Sept 2021 at 17:52, abhijeet inamdar wrote:
> > How do I fix it? It's for Cortex-M3, and below is the gdb trace when I load the ELF.
> >
> > qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)
> >
> > R00=00000000 R01=00000000 R02=00000000 R03=00000000
> > R04=00000000 R05=00000000 R06=00000000 R07=00000000
> > R08=00000000 R09=00000000 R10=00000000 R11=00000000
> > R12=00000000 R13=ffffffe0 R14=fffffff9 R15=00000000
> > XPSR=40000003 -Z-- A handler
> > FPSCR: 00000000

> This particular case is "we needed to take a HardFault exception,
> but we were already in a HardFault exception". The most common
> cause of this is that your code has crashed hard on startup
> (eg it tries to read from unreadable memory or jumps off into nowhere:
> if this happens before it has set up exception handling for HardFault
> then you get this. This also happens if its attempt to handle
> HardFaults is buggy and crashes.)

Oh, and note that the PC is zero and the Thumb bit is not set:
this means that your guest code did something that caused the
CPU to try to take an exception, but your ELF file didn't
provide an exception vector table, and so the vector table
entry for the exception was 0. That means that the CPU will
attempt to execute from address 0 with the Thumb bit clear,
which provokes an immediate UsageFault exception, usually leading
to the exception-in-an-exception Lockup case above.

-- PMM
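The entry sequence described above can be sketched as a toy model (this mirrors the logic of the explanation, not QEMU's actual implementation; the vector table is assumed at address 0, as on a Cortex-M3 out of reset):

```python
def take_exception(memory, exc_num, table_base=0):
    """Toy model of ARMv7-M exception entry, following the explanation
    above: fetch the handler address from the vector table, then treat
    bit 0 of the fetched word as the new Thumb bit."""
    vector = memory.get(table_base + 4 * exc_num, 0)  # missing table reads as 0
    pc = vector & ~1
    thumb = bool(vector & 1)
    if not thumb:
        # M-profile only executes Thumb code, so a clear Thumb bit
        # raises an INVSTATE UsageFault, which escalates to HardFault.
        return pc, thumb, "UsageFault (Thumb bit clear)"
    return pc, thumb, "ok"

# An ELF with no vector table behaves like all-zero vectors:
print(take_exception({}, 3))            # (0, False, 'UsageFault (Thumb bit clear)')
# A proper Thumb handler address has bit 0 set:
print(take_exception({12: 0x201}, 3))   # (512, True, 'ok')
```

This is why vector table entries generated by real startup code always have bit 0 set: the linker stores the handler address with the Thumb bit ORed in.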



Re: qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)

2021-09-16 Thread Peter Maydell
On Thu, 16 Sept 2021 at 17:52, abhijeet inamdar wrote:
> How do I fix it? It's for Cortex-M3, and below is the gdb trace when I load the ELF.
>
> qemu: fatal: Lockup: can't escalate 3 to HardFault (current priority -1)
>
> R00=00000000 R01=00000000 R02=00000000 R03=00000000
> R04=00000000 R05=00000000 R06=00000000 R07=00000000
> R08=00000000 R09=00000000 R10=00000000 R11=00000000
> R12=00000000 R13=ffffffe0 R14=fffffff9 R15=00000000
> XPSR=40000003 -Z-- A handler
> FPSCR: 00000000

If the CPU goes into Lockup this indicates that something has gone
very badly wrong with your guest code, and the situation is not
recoverable. In real hardware the CPU sits there doing absolutely
nothing forever more[*]. QEMU doesn't actually emulate the CPU being
in Lockup state, so it just treats it as a fatal error. (Check the
M-profile architecture reference for more information on Lockup and
the various kinds of guest bug that can get you there.)

This particular case is "we needed to take a HardFault exception,
but we were already in a HardFault exception". The most common
cause of this is that your code has crashed hard on startup
(eg it tries to read from unreadable memory or jumps off into nowhere:
if this happens before it has set up exception handling for HardFault
then you get this. This also happens if its attempt to handle
HardFaults is buggy and crashes.)

You should approach this by debugging your guest and looking at
what it is doing before it gets to this point.

[*] Technically there are ways to get yourself out of Lockup
state on a real CPU, such as having an external watchdog that
resets the CPU, or some extremely esoteric tricks used only by
code that's trying to test how Lockup state behaves.

-- PMM
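The rule behind the fatal message can be sketched as follows (a toy model of ARMv7-M priority escalation, where lower numbers mean higher priority and HardFault is architecturally fixed at -1; again this mirrors the logic, not QEMU's code):

```python
HARDFAULT_PRIO = -1  # architecturally fixed; only Reset and NMI are higher

def pend_fault(exc_num, exc_prio, current_prio):
    """Toy model: a synchronous fault must preempt immediately. If its
    own priority is not high enough, it escalates to HardFault; if
    execution is already at HardFault priority, there is nowhere left
    to go and the core locks up."""
    if exc_prio < current_prio:
        return "take exception %d" % exc_num
    if HARDFAULT_PRIO < current_prio:
        return "escalate %d to HardFault" % exc_num
    return ("Lockup: can't escalate %d to HardFault (current priority %d)"
            % (exc_num, current_prio))

# Normal case: a fault arriving in Thread mode (base priority 256) is taken:
print(pend_fault(3, -1, 256))
# The case in this thread: execution is already at HardFault priority
# when another HardFault (exception number 3) becomes necessary:
print(pend_fault(3, -1, -1))   # matches QEMU's fatal message
```

The "current priority -1" in the error therefore says the CPU was already executing at HardFault priority when the second fault hit, which is the double-fault situation described above.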