On 11/17, Yonghong Song wrote:
>
> On 11/17/17 9:25 AM, Oleg Nesterov wrote:
> >On 11/15, Yonghong Song wrote:
> >>
> >>v3 -> v4:
> >> . Revert most of v3 change as 32bit emulation is not really working
> >> on x86_64 platform
Work is ongoing to address this issue.
Reviewed-by: Oleg Nesterov
Please test your patch with the fix below; in this particular case the
TIF_IA32 check should be fine, although it is not what we really want:
we should probably use user_64bit_mode(regs), which checks ->cs. But this
needs more c
On 11/15, Oleg Nesterov wrote:
>
> So please, check if uprobe_init_insn() fails or not in this case. After that
> we will know whether your patch needs the additional is_64bit_mm() check in
> push_setup_xol_ops() or not.
OK, I did the check for you.
uprobe_init_insn() doesn't fail
On 11/14, Yonghong Song wrote:
>
>
> On 11/14/17 8:03 AM, Oleg Nesterov wrote:
> >Ah, no, sizeof_long() is broken by the same reason, so you can't test it...
>
> Right. I hacked the emulate_push_stack (original name: push_ret_address)
> with sizeof_long = 4, and 32bi
On 11/14, Yonghong Song wrote:
>
> On 11/14/17 7:51 AM, Oleg Nesterov wrote:
> >
> >And test_thread_flag(TIF_ADDR32) is not right too. The caller is not
> >necessarily the probed task. See is_64bit_mm(mm) in
> >arch_uprobe_analyze_insn().
>
> I printed out some
On 11/14, Oleg Nesterov wrote:
>
> > +#ifdef CONFIG_X86_64
> > + if (test_thread_flag(TIF_ADDR32))
> > + return -ENOSYS;
> > +#endif
>
> No, this doesn't look right, see my previous email. You should do this
> check in the "if (insn-&g
On 11/13, Yonghong Song wrote:
>
> +static int push_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
> +{
> + u8 opc1 = OPCODE1(insn), reg_offset = 0;
> +
> + if (opc1 < 0x50 || opc1 > 0x57)
> + return -ENOSYS;
> +
> + if (insn->length > 2)
> + retur
On 11/13, Yonghong Song wrote:
>
> On 11/13/17 4:59 AM, Oleg Nesterov wrote:
> >>+ switch (opc1) {
> >>+ case 0x50:
> >>+ reg_offset = offsetof(struct pt_regs, r8);
> >>+ break;
> >>+
The patch looks good to me, but I have a question because I know nothing
about insn encoding,
On 11/10, Yonghong Song wrote:
>
> +static int push_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
> +{
> + u8 opc1 = OPCODE1(insn), reg_offset = 0;
> +
> + if (opc1 < 0x50 || opc1
Yonghong,
The patch looks good to me, but I'll try to read it carefully later.
Just a couple of cosmetic nits for now.
On 11/09, Yonghong Song wrote:
>
> --- a/arch/x86/include/asm/uprobes.h
> +++ b/arch/x86/include/asm/uprobes.h
> @@ -53,6 +53,10 @@ struct arch_uprobe {
> u
On 11/09, Yonghong Song wrote:
>
> + if (insn_class == UPROBE_PUSH_INSN) {
> + src_ptr = get_push_reg_ptr(auprobe, regs);
> + reg_width = sizeof_long();
> + sp = regs->sp;
> + if (copy_to_user((void __user *)(sp - reg_width), src_ptr,
> reg_width
On 11/09, Oleg Nesterov wrote:
>
> And. Do you really need ->post_xol() method to emulate "push"? Why we can't
> simply execute it out-of-line if copy_to_user() fails?
>
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
>
On 11/09, Yonghong Song wrote:
>
> This patch extends the emulation to "push <reg>"
> insns. These insns are typical at the beginning
> of a function. For example, bcc
> in https://github.com/iovisor/bcc repo provides
> tools to measure funclatency, detect memleak, etc.
> The tools will place uprobes
On 06/30, Paul E. McKenney wrote:
>
> On Fri, Jun 30, 2017 at 05:20:10PM +0200, Oleg Nesterov wrote:
> >
> > I do not think the overhead will be noticeable in this particular case.
> >
> > But I am not sure I understand why do we wa
On 06/30, Paul E. McKenney wrote:
>
> > > + raw_spin_lock_irq(&task->pi_lock);
> > > + raw_spin_unlock_irq(&task->pi_lock);
>
> I agree that the spin_unlock_wait() implementations would avoid the
> deadlock with an acquisition from an interrupt handler, while also
> avoiding the nee
On 06/29, Paul E. McKenney wrote:
>
> --- a/kernel/task_work.c
> +++ b/kernel/task_work.c
> @@ -109,7 +109,8 @@ void task_work_run(void)
>* the first entry == work, cmpxchg(task_works) should
>* fail, but it can play with *work and other entries.
>*/
On 01/23, Eric Dumazet wrote:
>
> On Mon, 2017-01-23 at 11:56 -0500, Jason Baron wrote:
> > On 01/23/2017 09:30 AM, Oleg Nesterov wrote:
> > > Hello,
> > >
> > > smp_mb__after_atomic() looks wrong and misleading, sock_reset_flag() does
> > > the
>
On 01/23, Jason Baron wrote:
>
> >It was added by 3c7151275c0c9a "tcp: add memory barriers to write space
> >paths"
> >and the patch looks correct in that we need the barriers in tcp_check_space()
> >and tcp_poll() in theory, so it seems tcp_check_space() needs smp_mb() ?
> >
>
> Yes, I think it s
Hello,
smp_mb__after_atomic() looks wrong and misleading: sock_reset_flag() does the
non-atomic __clear_bit() and thus cannot guarantee that test_bit(SOCK_NOSPACE)
(non-atomic too) won't be reordered.
It was added by 3c7151275c0c9a "tcp: add memory barriers to write space paths"
and the patch look
On 08/04, Marcelo Ricardo Leitner wrote:
>
> On Tue, Aug 04, 2015 at 06:33:34PM +0200, Oleg Nesterov wrote:
> > Commit 1fbe4b46caca "net: pktgen: kill the Wait for kthread_stop
> > code in pktgen_thread_worker()" removed (in particular) the final
> > __set_current_state() after return.
Afaics, we can simply remove both set_current_state()'s, and we
could do this a long ago right after ef87979c273a2 "pktgen: better
scheduler friendliness" which changed pktgen_thread_worker() to
use wait_event_interruptible_timeout().
Reported-by: Huang Ying
Signed-off-by: Oleg Nesterov
pktgen_thread_worker() doesn't need to wait for kthread_stop(), it
can simply exit. Just pktgen_create_thread() and pg_net_exit() should
do get_task_struct()/put_task_struct(). kthread_stop(dead_thread) is
fine.
Signed-off-by: Oleg Nesterov
---
net/core/pktgen.c | 11 ++-
1
Hello,
I am not familiar with this code and I have no idea how to test
these changes, so 2/2 comes as a separate change. 1/2 looks like
the obvious bugfix, and probably candidate for -stable.
Oleg.
net/core/pktgen.c |9 ++---
1 files changed, 2 insertions(+), 7 deletions(-)
pktgen_thread_worker() is obviously racy, kthread_stop() can come
between the kthread_should_stop() check and set_current_state().
Signed-off-by: Oleg Nesterov
Reported-by: Jan Stancek
Reported-by: Marcelo Leitner
---
net/core/pktgen.c |4 +++-
1 files changed, 3 insertions(+), 1
On 10/22, Jarek Poplawski wrote:
>
> On Fri, Oct 19, 2007 at 09:50:14AM +0200, Jarek Poplawski wrote:
> > On Thu, Oct 18, 2007 at 07:48:19PM +0400, Oleg Nesterov wrote:
> > > On 10/18, Jarek Poplawski wrote:
> > > >
> > > > +/**
> > > > +
On 10/18, Jarek Poplawski wrote:
>
> +/**
> + * flush_work_sync - block until a work_struct's callback has terminated
^^^
Hmm...
> + * Similar to cancel_work_sync() but will only busy wait (without cancel)
> + * if the work is
On 10/12, Andrew Morton wrote:
>
> On Fri, 12 Oct 2007 15:47:59 -0400
> Mathieu Desnoyers <[EMAIL PROTECTED]> wrote:
>
> > Hi Andrew,
> >
> > I noticed a regression between 2.6.23-rc8-mm2 and 2.6.23-mm1 (with your
> > hotfixes). User space threads seems to receive a ERESTART_RESTARTBLOCK
> > as s
On 10/13, Oleg Nesterov wrote:
>
> On 10/12, Andrew Morton wrote:
> >
> > Bisection shows that this problem is caused by these two patches:
> >
> > pid-namespaces-allow-cloning-of-new-namespace.patch
>
> This? http://marc.info/?l=linux-mm-commits&m=1187
On 07/02, Jarek Poplawski wrote:
>
> > > --- a/net/core/netpoll.c
> > > +++ b/net/core/netpoll.c
> > > @@ -72,7 +72,8 @@ static void queue_process(struct work_struct *work)
> > > netif_tx_unlock(dev);
> > > local_irq_restore(flags);
> > >
> > > -
On 07/02, Jarek Poplawski wrote:
>
> diff -Nurp 2.6.22-rc7-/net/core/netpoll.c 2.6.22-rc7/net/core/netpoll.c
> --- 2.6.22-rc7-/net/core/netpoll.c2007-07-02 09:03:27.0 +0200
> +++ 2.6.22-rc7/net/core/netpoll.c 2007-07-02 09:32:34.0 +0200
> @@ -72,8 +72,7 @@ static void queue
We should avoid signals when it
comes to kernel threads.
One can always use force_sig() or allow_signal() + send_sig() when
it is really needed, like cifs does.
> On 6/26/07, Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> >
> >Personally, I don't think we should do this.
>
On 06/26, Satyam Sharma wrote:
>
> Yes, why not embed a send_sig(SIGKILL) just before the wake_up_process()
> in kthread_stop() itself?
Personally, I don't think we should do this.
kthread_stop() doesn't always mean "kill this thread asap". Suppose that
CPU_DOWN does kthread_stop(workqueue->thre
On 04/25, David Howells wrote:
>
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > Yes sure. Note that this is documented:
> >
> > /*
> > * Kill off a pending schedule_delayed_work(). Note that the work
> > callback
> >
On 04/24, David Howells wrote:
>
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > Sure, I'll grep for cancel_delayed_work(). But unless I missed something,
> > this change should be completely transparent for all users. Otherwise, it
> > is buggy.
>
On 04/24, David Howells wrote:
>
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > Great. I'll send the s/del_timer_sync/del_timer/ patch.
>
> I didn't say I necessarily agreed that this was a good idea. I just meant
> that
> I agree that it will w
On 04/24, David Howells wrote:
>
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > The current code uses del_timer_sync(). It will also return 0. However, it
> > will spin waiting for timer->function() to complete. So we are just wasting
> > CP
On 04/24, David Howells wrote:
>
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > > > We only care when del_timer() returns true. In that case, if the timer
> > > > function still runs (possible for single-threaded wqs), it has already
> > > > pa
On 04/23, David Howells wrote:
>
> > We only care when del_timer() returns true. In that case, if the timer
> > function still runs (possible for single-threaded wqs), it has already
> > passed __queue_work().
>
> Why do you assume that?
If del_timer() returns true, the timer was pending. This m
On 04/20, Andrew Morton wrote:
>
> On Fri, 20 Apr 2007 11:41:46 +0100
> David Howells <[EMAIL PROTECTED]> wrote:
>
> > There are only two non-net patches that AF_RXRPC depends on:
> >
> > (1) The key facility changes. That's all my code anyway, and shouldn't be
> > a
> > problem to merge
On 02/21, Stephen Hemminger wrote:
>
> I would rather put it in a bugfix patchset for 2.6.21 and 2.6.20-stable
OK. Even better. Could you also remove br_private.h:BR_PORT_DEBOUNCE then?
Oleg.
On 02/21, Stephen Hemminger wrote:
>
> This is what I was suggesting by getting rid of the work queue completely.
Can't comment this patch, but if we can get rid of the work_struct - good!
> -static void port_carrier_check(struct work_struct *work)
> +void br_port_carrier_check(struct net_bridge_
On 02/21, Jarek Poplawski wrote:
>
> > On Wed, 21 Feb 2007 01:19:41 +0300
> > Oleg Nesterov <[EMAIL PROTECTED]> wrote:
> >
> > > + p = container_of(work, struct net_bridge_port, carrier_check.work);
> > >
> > > rtnl_lock();
> > >
On 02/20, Stephen Hemminger wrote:
>
> On Wed, 21 Feb 2007 01:19:41 +0300
> Oleg Nesterov <[EMAIL PROTECTED]> wrote:
>
> > static void release_nbp(struct kobject *kobj)
> > {
> > struct net_bridge_port *p
> > = conta
the container.
port_carrier_check() uses p->dev->br_port == NULL as indication that
net_bridge_port is under destruction. Otherwise it assumes that
p->dev->br_port == p.
Signed-off-by: Oleg Nesterov <[EMAIL PROTECTED]>
Acked-By: David Howells <[EMAIL PROTECTED]>
--- WQ/net/br