Re: [PATCH v7 16/26] x86/insn-eval: Support both signed 32-bit and 64-bit effective addresses

2017-07-27 Thread Ricardo Neri
On Thu, 2017-07-27 at 15:26 +0200, Borislav Petkov wrote:
> On Tue, Jul 25, 2017 at 04:48:13PM -0700, Ricardo Neri wrote:
> > I meant to say the 4 most significant bytes. In this case, the
> > 64-bit address 0xffffffff00001234 would lie in the kernel memory while
> > 0x00001234 would correctly be in the user space memory.
> 
> That explanation is better.
> 
> > Yes, perhaps the check above is not needed. I included that check as
> > part of my argument validation. In a 64-bit kernel, this function could
> > be called with val with non-zero most significant bytes.
> 
> So say that in the comment so that it is obvious *why*.
> 
> > I have looked into this closely and as far as I can see, the 4 least
> > significant bytes will wrap around when using 64-bit signed numbers as
> > they would when using 32-bit signed numbers. For instance, for two
> > positive numbers we have:
> > 
> > 7fff:ffff + 7000:0000 = efff:ffff.
> > 
> > The addition above overflows.
> 
> Yes, MSB changes.
> 
> > When sign-extended to 64-bit numbers we would have:
> > 
> > 0000:0000:7fff:ffff + 0000:0000:7000:0000 = 0000:0000:efff:ffff.
> > 
> > The addition above does not overflow. However, the 4 least significant
> > bytes overflow as we expect.
> 
> No they don't - you are simply using 64-bit regs:
> 
>    0x46b8 <+8>:     movq   $0x7fffffff,-0x8(%rbp)
>    0x46c0 <+16>:    movq   $0x70000000,-0x10(%rbp)
>    0x46c8 <+24>:    mov    -0x8(%rbp),%rdx
>    0x46cc <+28>:    mov    -0x10(%rbp),%rax
> => 0x46d0 <+32>:    add    %rdx,%rax
> 
> rax            0xefffffff          4026531839
> rbx            0x0                 0
> rcx            0x0                 0
> rdx            0x7fffffff          2147483647
> 
> ...
> 
> eflags         0x206               [ PF IF ]
> 
> (OF flag is not set).

True, I don't have the OF set. However, the 4 least significant bytes
wrapped around, which is what I needed.
> 
> > We can clamp the 4 most significant bytes.
> > 
> > For two's complement negative numbers we can have:
> > 
> > ffff:ffff + 8000:0000 = 7fff:ffff with a carry flag.
> > 
> > The addition above overflows.
> 
> Yes.
> 
> > When sign-extending to 64-bit numbers we would have:
> > 
> > ffff:ffff:ffff:ffff + ffff:ffff:8000:0000 = ffff:ffff:7fff:ffff with a
> > carry flag.
> > 
> > The addition above does not overflow. However, the 4 least significant
> > bytes overflowed and wrapped around as they would when using 32-bit signed
> > numbers.
> 
> Right. Ok.
> 
> And come to think of it now, I'm wondering whether it would be
> better/easier/simpler/more straightforward to do the 32-bit operations
> with 32-bit types and separate 32-bit functions, and have the hardware do
> that for you.
> 
> This way you can save yourself all that ugly and possibly error-prone
> casting back and forth and have the code much more readable too.

That sounds fair. I have had to explain this code a lot, and it is
probably not worth it. I can definitely use 32-bit variable types for the
32-bit case and drop all these casts.

The 32-bit and 64-bit functions would look identical except for the
variables used to compute the effective address. Perhaps I could use a
union:

union eff_addr {
#ifdef CONFIG_X86_64
	long	addr64;
#endif
	int	addr32;
};

And use one member or the other based on the address size given by the
CS.L and CS.D bits of the segment descriptor, or by address-size overrides.

However using the union could be less readable than having two almost
identical functions.

Thanks and BR,
Ricardo

--
To unsubscribe from this list: send the line "unsubscribe linux-msdos" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html




Re: [PATCH v7 24/26] x86: Enable User-Mode Instruction Prevention

2017-07-27 Thread Borislav Petkov
On Tue, Jul 25, 2017 at 05:44:08PM -0700, Ricardo Neri wrote:
> On Fri, 2017-06-09 at 18:10 +0200, Borislav Petkov wrote:
> > On Fri, May 05, 2017 at 11:17:22AM -0700, Ricardo Neri wrote:
> > > User-Mode Instruction Prevention (UMIP) is enabled by setting/clearing a
> > > bit in %cr4.
> > > 
> > > It makes sense to enable UMIP at some point while booting, before user
> > > spaces come up. Like SMAP and SMEP, it is not critical to have it enabled
> > > very early during boot. This is because UMIP is relevant only when there
> > > is a userspace to be protected from. Given the similarities in relevance,
> > > it makes sense to enable UMIP along with SMAP and SMEP.
> > > 
> > > UMIP is enabled by default. It can be disabled by adding clearcpuid=514
> > > to the kernel parameters.

...

> So would this become a y when more machines have UMIP?

I guess. Stuff which proves reliable and widespread gets automatically
enabled with time, in most cases. IMHO, of course.

> Why would static_cpu_has() reply wrong if alternatives are not in place?
> Because it uses the boot CPU data? When it calls _static_cpu_has() it
> would do something equivalent to

Nevermind - I forgot that static_cpu_has() now drops to dynamic check
before alternatives application.

-- 
Regards/Gruss,
Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)


Re: [PATCH v7 16/26] x86/insn-eval: Support both signed 32-bit and 64-bit effective addresses

2017-07-27 Thread Borislav Petkov
On Tue, Jul 25, 2017 at 04:48:13PM -0700, Ricardo Neri wrote:
> I meant to say the 4 most significant bytes. In this case, the
> 64-bit address 0xffffffff00001234 would lie in the kernel memory while
> 0x00001234 would correctly be in the user space memory.

That explanation is better.

> Yes, perhaps the check above is not needed. I included that check as
> part of my argument validation. In a 64-bit kernel, this function could
> be called with val with non-zero most significant bytes.

So say that in the comment so that it is obvious *why*.

> I have looked into this closely and as far as I can see, the 4 least
> significant bytes will wrap around when using 64-bit signed numbers as
> they would when using 32-bit signed numbers. For instance, for two
> positive numbers we have:
> 
> 7fff:ffff + 7000:0000 = efff:ffff.
> 
> The addition above overflows.

Yes, MSB changes.

> When sign-extended to 64-bit numbers we would have:
> 
> 0000:0000:7fff:ffff + 0000:0000:7000:0000 = 0000:0000:efff:ffff.
> 
> The addition above does not overflow. However, the 4 least significant
> bytes overflow as we expect.

No they don't - you are simply using 64-bit regs:

   0x46b8 <+8>:     movq   $0x7fffffff,-0x8(%rbp)
   0x46c0 <+16>:    movq   $0x70000000,-0x10(%rbp)
   0x46c8 <+24>:    mov    -0x8(%rbp),%rdx
   0x46cc <+28>:    mov    -0x10(%rbp),%rax
=> 0x46d0 <+32>:    add    %rdx,%rax

rax            0xefffffff          4026531839
rbx            0x0                 0
rcx            0x0                 0
rdx            0x7fffffff          2147483647

...

eflags         0x206               [ PF IF ]

(OF flag is not set).

> We can clamp the 4 most significant bytes.
> 
> For two's complement negative numbers we can have:
> 
> ffff:ffff + 8000:0000 = 7fff:ffff with a carry flag.
> 
> The addition above overflows.

Yes.

> When sign-extending to 64-bit numbers we would have:
> 
> ffff:ffff:ffff:ffff + ffff:ffff:8000:0000 = ffff:ffff:7fff:ffff with a
> carry flag.
> 
> The addition above does not overflow. However, the 4 least significant
> bytes overflowed and wrapped around as they would when using 32-bit signed
> numbers.

Right. Ok.

And come to think of it now, I'm wondering whether it would be
better/easier/simpler/more straightforward to do the 32-bit operations
with 32-bit types and separate 32-bit functions, and have the hardware do
that for you.

This way you can save yourself all that ugly and possibly error-prone
casting back and forth and have the code much more readable too.

Hmmm.

-- 
Regards/Gruss,
Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)