[datameet] Re: Connecting political boundaries to PIN codes/mapping PIN codes

2023-12-02 Thread Mantha Chandrasekhar
Hi Jatin Rajani,
Did you find any data sets that match assembly constituencies with PIN 
codes?


On Monday, May 18, 2020 at 6:31:12 PM UTC+5:30 jatin rajani wrote:

> Hi Jeff,
> Were you able to find the PIN codes matching the assembly constituencies?
>
>
> On Friday, 14 February 2014 22:39:02 UTC+5:30, Jeff Weaver wrote:
>>
>> Hi all, 
>>
>> Just came across this group, and I'm excited to read about all the cool data 
>> tasks that people are working on. I'm currently trying to link a dataset 
>> of schools (DISE) to the MP/MLA constituencies (and eventually maybe 
>> panchayats) in which they are located. Each school has its district, block, 
>> village and PIN code listed. I was really hoping to connect it to assembly 
>> constituencies using the PIN code data, and avoid doing any name-matching. 
>> My plan was to use the GPS coordinates of PIN codes and then link them 
>> using the GIS maps of constituencies. 
>>
>> However, I haven’t been able to find a large enough dataset on the GPS 
>> coordinates of the pincodes. Does anyone know of a good public (or private) 
>> source of this or have another suggestion for how to do this? I have the 
>> dataset that someone in the group made using Open Street Maps, and it’s 
>> great, but it doesn’t cover a lot of the pin codes (especially the rural 
>> ones). Or are there companies that are willing to sell it for a reasonable 
>> price?
>>
>> Thanks!
>>
>
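
A minimal sketch of the point-in-polygon step behind the plan quoted above:
once you have a PIN code's GPS centroid and a constituency boundary as a
polygon, the standard ray-casting test below decides containment. The
coordinates here are illustrative assumptions, not real data; in practice a
GIS library would run this spatial join over the full boundary files.

#include <stdio.h>

/* Ray-casting point-in-polygon test: count how many polygon edges a
   horizontal ray from (x, y) crosses; an odd count means "inside".
   vx/vy hold the polygon vertices in order, n is the vertex count. */
static int point_in_polygon(int n, const double *vx, const double *vy,
                            double x, double y)
{
    int i, j, inside = 0;
    for (i = 0, j = n - 1; i < n; j = i++) {
        if (((vy[i] > y) != (vy[j] > y)) &&
            (x < (vx[j] - vx[i]) * (y - vy[i]) / (vy[j] - vy[i]) + vx[i]))
            inside = !inside;
    }
    return inside;
}

int main(void)
{
    /* Hypothetical constituency boundary (lon/lat) and a PIN centroid. */
    double lon[] = { 77.10, 77.30, 77.30, 77.10 };
    double lat[] = { 28.50, 28.50, 28.70, 28.70 };

    if (point_in_polygon(4, lon, lat, 77.20, 28.60))
        printf("PIN centroid falls inside this constituency\n");
    return 0;
}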

-- 
Datameet is a community of Data Science enthusiasts in India. Know more about 
us by visiting http://datameet.org
--- 
You received this message because you are subscribed to the Google Groups 
"datameet" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to datameet+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/datameet/5b0fa49a-0c00-4694-87fd-c11f99f57f41n%40googlegroups.com.


Can we install Bigtop-3.2.0 on RHEL8

2023-08-23 Thread Chandrasekhar A
Hi Team,
I am trying to set up Apache Hadoop Bigtop 3.2.0 in an RHEL8 environment. On
the official Bigtop release website I do not see any repositories supporting
either CentOS 8 or RHEL 8.
The Bigtop website does not mention anything about the RHEL 8 and CentOS 8
distributions. Has anyone tried installing Bigtop 3.2.0 on RHEL 8 or CentOS 8?


Regards,
Chandu


Re: [go-nuts] cgo pam module signal handling

2023-08-14 Thread Chandrasekhar R
The scenario is:
1) sudo starts and sets up a signal handler for SIGCHLD.
2) The pam modules get loaded.
3) Go gets initialized and sets the SA_ONSTACK flag, specifically by calling 
rt_sigaction with a pointer to the existing signal handler in the *sa_handler* 
field.
4) sudo initializes a new signal handler for SIGCHLD.
5) After the command is run, the SIGCHLD signal is received by a Go-created 
thread instead of the parent sudo thread, and it goes to the signal handler 
created in step 1.

I believe this is the current sequence of events. I can share a strace 
dump if it would help.
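
For concreteness, the C shim I described looks roughly like this (a sketch,
not the exact code; the module path is made up). My assumption is that the Go
runtime adjusts the signal mask on the threads it creates, which might be why
blocking SIGCHLD before dlopen does not stick:

#include <dlfcn.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Load the cgo PAM module on a thread that has SIGCHLD blocked, hoping
   that threads the Go runtime spawns inherit the blocked mask.  (The Go
   runtime may reset the mask on its own threads, so this may not hold.) */
static void *load_module(void *arg)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGCHLD);
    pthread_sigmask(SIG_BLOCK, &set, NULL);   /* block before dlopen */

    void *h = dlopen("/usr/lib/security/pam_go_module.so", RTLD_NOW);
    if (!h)
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return h;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, load_module, NULL);
    pthread_join(t, NULL);
    return 0;
}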

On Monday, August 14, 2023 at 12:17:34 PM UTC-7 Ian Lance Taylor wrote:

> On Mon, Aug 14, 2023 at 12:02 PM Chandrasekhar R  
> wrote:
> >
> > My current understanding is that sudo sets up a signal handler in pre_exec 
> and another signal handler later on, which is tied into its main event loop.
> > Go gets initialized while the pre_exec signal handler is in place, and it adds 
> rt_sigaction(SIGCHLD..) with the SA_ONSTACK flag as mentioned here.
> > Then after the sudo command (echo) gets executed, the SIGCHLD is 
> received by one of the Go threads, which then runs the pre_exec signal 
> handler, i.e. the old and now-nonexistent signal handler.
> >
> > My approach is to block the Go threads from receiving the SIGCHLD signal 
> and thus not let the signal be handled by the old signal handler.
>
> I don't quite understand the scenario you are describing. What
> matters is the signal handler. When Go adds the SA_ONSTACK flag, it
> doesn't change the signal handler. Which thread a signal is delivered
> to does not affect which signal handler gets run.
>
> Ian
>
>
> > On Friday, August 11, 2023 at 10:05:48 PM UTC-7 Ian Lance Taylor wrote:
> >>
> >> On Fri, Aug 11, 2023 at 11:51 AM Chandrasekhar R  
> wrote:
> >> >
> >> > I am planning on using a pam module written in Go (specifically 
> https://github.com/uber/pam-ussh) . When I run a script which calls sudo 
> continuously with an echo command, I am noticing zombie/defunct processes 
> starting to pop up.
> >> >
> >> > On doing strace, I noticed that the SIGCHLD gets delivered to one of 
> the threads created when Go gets initialized (i.e. the shared object gets 
> loaded).
> >> >
> >> > I tried to add one level of indirection by having a separate piece of C 
> code which creates a new thread, sets the signal mask to block SIGCHLD, and 
> then uses dlopen to open the shared object created using cgo. I am still 
> facing the same issue; are there any pointers on how to fix it? I think this 
> would be a widespread issue across all PAM modules written using cgo.
> >>
> >> As far as I know the specific thread that receives a SIGCHLD signal is
> >> fairly random. What matters is not the thread that receives the
> >> signal, but the signal handler that is installed. Signal handlers are
> >> process-wide. What signal handler is running when you get a SIGCHLD?
> >> What signal handler do you expect to run?
> >>
> >> Ian
> >
> > --
> > You received this message because you are subscribed to the Google 
> Groups "golang-nuts" group.
> > To unsubscribe from this group and stop receiving emails from it, send 
> an email to golang-nuts...@googlegroups.com.
> > To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/7b395394-da12-4b19-9e07-5c8f7e91dcabn%40googlegroups.com
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/216fb89f-ba9c-41e6-ba90-485b9174a0a5n%40googlegroups.com.


Re: [go-nuts] cgo pam module signal handling

2023-08-14 Thread Chandrasekhar R
My current understanding is that sudo sets up a signal handler in pre_exec and 
another signal handler later on, which is tied into its main event loop.
Go gets initialized while the pre_exec signal handler is in place, and it adds 
rt_sigaction(SIGCHLD..) with the SA_ONSTACK flag as mentioned here 
<https://pkg.go.dev/os/signal#hdr-Non_Go_programs_that_call_Go_code>. 
Then after the sudo command (echo) gets executed, the SIGCHLD is received 
by one of the Go threads, which then runs the pre_exec signal handler, 
i.e. the old and now-nonexistent signal handler.

My approach is to block the Go threads from receiving the SIGCHLD signal 
and thus not let the signal be handled by the old signal handler.

On Friday, August 11, 2023 at 10:05:48 PM UTC-7 Ian Lance Taylor wrote:

> On Fri, Aug 11, 2023 at 11:51 AM Chandrasekhar R  
> wrote:
> >
> > I am planning on using a pam module written in Go (specifically 
> https://github.com/uber/pam-ussh) . When I run a script which calls sudo 
> continuously with an echo command, I am noticing zombie/defunct processes 
> starting to pop up.
> >
> > On doing strace, I noticed that the SIGCHLD gets delivered to one of the 
> threads created when Go gets initialized (i.e. the shared object gets 
> loaded).
> >
> > I tried to add one level of indirection by having a separate piece of C 
> code which creates a new thread, sets the signal mask to block SIGCHLD, and 
> then uses dlopen to open the shared object created using cgo. I am still 
> facing the same issue; are there any pointers on how to fix it? I think this 
> would be a widespread issue across all PAM modules written using cgo.
>
> As far as I know the specific thread that receives a SIGCHLD signal is
> fairly random. What matters is not the thread that receives the
> signal, but the signal handler that is installed. Signal handlers are
> process-wide. What signal handler is running when you get a SIGCHLD?
> What signal handler do you expect to run?
>
> Ian
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/7b395394-da12-4b19-9e07-5c8f7e91dcabn%40googlegroups.com.


[go-nuts] cgo pam module signal handling

2023-08-11 Thread Chandrasekhar R
Hey community,

I am planning on using a pam module written in Go (specifically 
https://github.com/uber/pam-ussh). When I run a script which calls sudo 
continuously with an echo command, I am noticing zombie/defunct processes 
starting to pop up.

On doing strace, I noticed that the SIGCHLD gets delivered to one of the 
threads created when Go gets initialized (i.e. the shared object gets 
loaded).

I tried to add one level of indirection by having a separate piece of C code 
which creates a new thread, sets the signal mask to block SIGCHLD, and then 
uses *dlopen* to open the shared object created using cgo. I am still facing 
the same issue; are there any pointers on how to fix it? I think this would 
be a widespread issue across all PAM modules written using cgo.

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/564ad353-885e-4662-8ac6-f15d41771a64n%40googlegroups.com.


Bug#1040400: Kernel continues to try for unreachable NFS mount indefinitely causing machine to be unresponsive

2023-07-05 Thread Chandrasekhar Ramasubramanyan
Package: Kernel
Version: 4.19.100generic

If a (remote) NFS server (mounted with the default options) goes down or 
becomes unreachable, it causes the (local) load average to get very high, 
making the machine very slow after a few hours. Any kind of request (e.g. an 
HTTP request to an application running on the machine) times out.

We see timeout errors in the kernel logs under /var/log/kern.log while trying 
to reach the NFS mount.
This issue was observed on a machine installed with Debian 9 (stretch).

We expect that if the kernel is not able to access the unreachable NFS mount, 
it should not make the system unresponsive.




[zeromq-dev] zmq_poll returns -1 ( Context has been shut down) when polling 2 sockets

2023-07-04 Thread Chandrasekhar Nunna
Hi all,
I am using zactor, and in the callback I receive a pipe socket.
I create a router socket and poll these sockets for messages.
It is not working: zmq_poll returns -1 as its result.
What could be the problem?
void
echo_actor (zsock_t *pipe, void *args)
{
    // need to implement...?
    printf ("thread id : %lu\n", GetCurrentThreadId ());
    server_t *self = server_new (pipe);

    //  zmq_pollitem_t wants the raw libzmq handle, so zsock_t * sockets
    //  must go through zsock_resolve () (a likely cause of the -1)
    zmq_pollitem_t items [] =
    {
        { zsock_resolve (self->pipe),   0, ZMQ_POLLIN, 0 },
        { zsock_resolve (self->router), 0, ZMQ_POLLIN, 0 }
    };
    self->monitor_at = zclock_time () + self->monitor;
    while (!self->stopped && !zctx_interrupted)
    {
        //  Calculate tickless timer, up to interval seconds
        uint64_t tickless = zclock_time () + self->monitor;
        uint64_t diff = (tickless - zclock_time ()) * ZMQ_POLL_MSEC;
        //  Poll until at most next timer event
        int rc = zmq_poll (items, 2, diff);
        if (rc == -1)
            break;              //  Context has been shut down

        //  Process incoming message from either socket
        if (items [0].revents & ZMQ_POLLIN)
            server_control_message (self);

        if (items [1].revents & ZMQ_POLLIN)
            server_client_message (self);

        //  If clock went past timeout, then monitor server
        if (zclock_time () >= self->monitor_at)
            printf ("clock went past timeout, need to monitor the server\n");
    }
    server_destroy (&self);     //  assuming server_destroy (server_t **)
}

int main ()
{
    zactor_t *actor = zactor_new (echo_actor, "Hello, World");
    assert (actor);
    zstr_sendx (actor, "ECHO", "This is a string", NULL);
    char *string = zstr_recv (actor);
    fprintf (stdout, "%s\n", string);
    assert (streq (string, "This is a string"));
    free (string);
    zactor_destroy (&actor);    //  zactor_destroy expects zactor_t **
}
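
Since zmq_pollitem_t needs raw handles (hence the zsock_resolve () calls
above, an assumption on my part from the CZMQ docs), an alternative worth
trying is CZMQ's own zpoller, which accepts zsock_t * directly. A minimal
sketch of the same loop under that assumption (the ROUTER endpoint is made up):

#include <czmq.h>

static void echo_actor (zsock_t *pipe, void *args)
{
    zsock_t *router = zsock_new_router ("tcp://*:5555");
    zsock_signal (pipe, 0);             //  tell zactor_new we are ready

    zpoller_t *poller = zpoller_new (pipe, router, NULL);
    while (!zsys_interrupted) {
        zsock_t *which = (zsock_t *) zpoller_wait (poller, 1000);
        if (zpoller_terminated (poller))
            break;                      //  context shut down or interrupted
        if (which == pipe) {
            char *cmd = zstr_recv (pipe);
            bool done = cmd && streq (cmd, "$TERM");
            zstr_free (&cmd);
            if (done)
                break;                  //  zactor_destroy sent $TERM
        }
        else
        if (which == router) {
            zmsg_t *msg = zmsg_recv (router);
            zmsg_destroy (&msg);        //  handle client message here
        }
    }
    zpoller_destroy (&poller);
    zsock_destroy (&router);
}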
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: 7.0.0 #15

2021-08-29 Thread chandrasekhar chodavarapu
Please unsubscribe me.

thanks and regards


On Sat, Aug 28, 2021 at 3:22 AM Ali Alhaidary 
wrote:

> The link Apache OpenMeetings Project – List of general configuration
> options is broken:
>
>
> https://ci-builds.apache.org/job/OpenMeetings/job/openmeetings/site/openmeetings-server/GeneralConfiguration.html
>
> Is there any new field added to the database?
>
> Ali
>


[kate] [Bug 437056] New: Automatic spellcheck in `kate` does not persist

2021-05-13 Thread R Chandrasekhar
https://bugs.kde.org/show_bug.cgi?id=437056

Bug ID: 437056
   Summary: Automatic spellcheck in `kate` does not persist
   Product: kate
   Version: 21.04.0
  Platform: Manjaro
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: kwrite-bugs-n...@kde.org
  Reporter: chyav...@gmail.com
  Target Milestone: ---

SUMMARY
Automatic spellcheck in kate is not persistent.

STEPS TO REPRODUCE
1. Automatic spell check enabled by default.
2. Type an erroneous word: it is highlighted.
3. Correct the error.
4. Type some more words with and without errors.
5. The erroneous words are not highlighted any more.
6. Toggle Ctrl-Shift-O and the errors show up again.

OBSERVED RESULT

Erroneous words added after a spellcheck correction are not highlighted.

EXPECTED RESULT

I would expect erroneous words to be highlighted regardless of previous errors
being corrected. Toggling Ctrl-Shift-O should not be necessary with automatic
spellcheck on.

SOFTWARE/OS VERSIONS
Windows: 
macOS: 
Linux/KDE Plasma: 
(available in About System)
KDE Plasma Version: 5.21.4
KDE Frameworks Version: 5.81.0
Qt Version: 5.15.2

ADDITIONAL INFORMATION

Toggling Ctrl-Shift-O is a workaround but that makes a mockery of automatic
spell checking.

-- 
You are receiving this mail because:
You are watching all bug changes.

[Nouveau] Enquiry about EVOC projects

2021-04-16 Thread Tarun Chandrasekhar
Respected Sir,

I'm Tarun, a B.Tech sophomore from IIT Delhi. I was looking for an
opportunity to work in the domain of parallel programming for my summer
internship. I'm interested in the projects "Helping out with Nouveau OpenCL
driver" and "Dynamic reclocking". Can I get more information about these
projects?
___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau


[tip: sched/core] sched/fair: Ignore percpu threads for imbalance pulls

2021-04-09 Thread tip-bot2 for Lingutla Chandrasekhar
The following commit has been merged into the sched/core branch of tip:

Commit-ID: 9bcb959d05eeb564dfc9cac13a59843a4fb2edf2
Gitweb:
https://git.kernel.org/tip/9bcb959d05eeb564dfc9cac13a59843a4fb2edf2
Author: Lingutla Chandrasekhar 
AuthorDate: Wed, 07 Apr 2021 23:06:26 +01:00
Committer: Peter Zijlstra 
CommitterDate: Fri, 09 Apr 2021 18:02:20 +02:00

sched/fair: Ignore percpu threads for imbalance pulls

During load balance, LBF_SOME_PINNED will be set if any candidate task
cannot be detached due to CPU affinity constraints. This can result in
setting env->sd->parent->sgc->group_imbalance, which can lead to a group
being classified as group_imbalanced (rather than any of the other, lower
group_type) when balancing at a higher level.

In workloads involving a single task per CPU, LBF_SOME_PINNED can often be
set due to per-CPU kthreads being the only other runnable tasks on any
given rq. This results in changing the group classification during
load-balance at higher levels when in reality there is nothing that can be
done for this affinity constraint: per-CPU kthreads, as the name implies,
don't get to move around (modulo hotplug shenanigans).

It's not as clear for userspace tasks - a task could be in an N-CPU cpuset
with N-1 offline CPUs, making it an "accidental" per-CPU task rather than
an intended one. KTHREAD_IS_PER_CPU gives us an indisputable signal which
we can leverage here to not set LBF_SOME_PINNED.

Note that the aforementioned classification to group_imbalance (when
nothing can be done) is especially problematic on big.LITTLE systems, which
have a topology the likes of:

  DIE [  ]
  MC  [][]
   0  1  2  3
   L  L  B  B

  arch_scale_cpu_capacity(L) < arch_scale_cpu_capacity(B)

Here, setting LBF_SOME_PINNED due to a per-CPU kthread when balancing at MC
level on CPUs [0-1] will subsequently prevent CPUs [2-3] from classifying
the [0-1] group as group_misfit_task when balancing at DIE level. Thus, if
CPUs [0-1] are running CPU-bound (misfit) tasks, ill-timed per-CPU kthreads
can significantly delay the up-migration of said misfit tasks. Systems 
relying on ASYM_PACKING are likely to face similar issues.

Signed-off-by: Lingutla Chandrasekhar 
[Use kthread_is_per_cpu() rather than p->nr_cpus_allowed]
[Reword changelog]
Signed-off-by: Valentin Schneider 
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Dietmar Eggemann 
Reviewed-by: Vincent Guittot 
Link: 
https://lkml.kernel.org/r/20210407220628.3798191-2-valentin.schnei...@arm.com
---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc34e35..1ad929b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7598,6 +7598,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
 
+   /* Disregard pcpu kthreads; they are where they need to be. */
+   if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+   return 0;
+
if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
int cpu;
 


[tip: sched/core] sched/fair: Ignore percpu threads for imbalance pulls

2021-04-09 Thread tip-bot2 for Lingutla Chandrasekhar
The following commit has been merged into the sched/core branch of tip:

Commit-ID: 8d25d10a4f5a5d87c062838358ab5b3ed7eaa131
Gitweb:
https://git.kernel.org/tip/8d25d10a4f5a5d87c062838358ab5b3ed7eaa131
Author: Lingutla Chandrasekhar 
AuthorDate: Wed, 07 Apr 2021 23:06:26 +01:00
Committer: Peter Zijlstra 
CommitterDate: Fri, 09 Apr 2021 13:52:10 +02:00

sched/fair: Ignore percpu threads for imbalance pulls

During load balance, LBF_SOME_PINNED will be set if any candidate task
cannot be detached due to CPU affinity constraints. This can result in
setting env->sd->parent->sgc->group_imbalance, which can lead to a group
being classified as group_imbalanced (rather than any of the other, lower
group_type) when balancing at a higher level.

In workloads involving a single task per CPU, LBF_SOME_PINNED can often be
set due to per-CPU kthreads being the only other runnable tasks on any
given rq. This results in changing the group classification during
load-balance at higher levels when in reality there is nothing that can be
done for this affinity constraint: per-CPU kthreads, as the name implies,
don't get to move around (modulo hotplug shenanigans).

It's not as clear for userspace tasks - a task could be in an N-CPU cpuset
with N-1 offline CPUs, making it an "accidental" per-CPU task rather than
an intended one. KTHREAD_IS_PER_CPU gives us an indisputable signal which
we can leverage here to not set LBF_SOME_PINNED.

Note that the aforementioned classification to group_imbalance (when
nothing can be done) is especially problematic on big.LITTLE systems, which
have a topology the likes of:

  DIE [  ]
  MC  [][]
   0  1  2  3
   L  L  B  B

  arch_scale_cpu_capacity(L) < arch_scale_cpu_capacity(B)

Here, setting LBF_SOME_PINNED due to a per-CPU kthread when balancing at MC
level on CPUs [0-1] will subsequently prevent CPUs [2-3] from classifying
the [0-1] group as group_misfit_task when balancing at DIE level. Thus, if
CPUs [0-1] are running CPU-bound (misfit) tasks, ill-timed per-CPU kthreads
can significantly delay the up-migration of said misfit tasks. Systems 
relying on ASYM_PACKING are likely to face similar issues.

Signed-off-by: Lingutla Chandrasekhar 
[Use kthread_is_per_cpu() rather than p->nr_cpus_allowed]
[Reword changelog]
Signed-off-by: Valentin Schneider 
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Dietmar Eggemann 
Reviewed-by: Vincent Guittot 
Link: 
https://lkml.kernel.org/r/20210407220628.3798191-2-valentin.schnei...@arm.com
---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d0bd861..d10e33d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7598,6 +7598,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
 
+   /* Disregard pcpu kthreads; they are where they need to be. */
+   if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+   return 0;
+
if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
int cpu;
 


[tip: sched/core] sched/fair: Ignore percpu threads for imbalance pulls

2021-04-09 Thread tip-bot2 for Lingutla Chandrasekhar
The following commit has been merged into the sched/core branch of tip:

Commit-ID: 29b628b521119c0dfe151da302e11018cb32db4f
Gitweb:
https://git.kernel.org/tip/29b628b521119c0dfe151da302e11018cb32db4f
Author: Lingutla Chandrasekhar 
AuthorDate: Wed, 07 Apr 2021 23:06:26 +01:00
Committer: Peter Zijlstra 
CommitterDate: Thu, 08 Apr 2021 23:09:44 +02:00

sched/fair: Ignore percpu threads for imbalance pulls

During load balance, LBF_SOME_PINNED will be set if any candidate task
cannot be detached due to CPU affinity constraints. This can result in
setting env->sd->parent->sgc->group_imbalance, which can lead to a group
being classified as group_imbalanced (rather than any of the other, lower
group_type) when balancing at a higher level.

In workloads involving a single task per CPU, LBF_SOME_PINNED can often be
set due to per-CPU kthreads being the only other runnable tasks on any
given rq. This results in changing the group classification during
load-balance at higher levels when in reality there is nothing that can be
done for this affinity constraint: per-CPU kthreads, as the name implies,
don't get to move around (modulo hotplug shenanigans).

It's not as clear for userspace tasks - a task could be in an N-CPU cpuset
with N-1 offline CPUs, making it an "accidental" per-CPU task rather than
an intended one. KTHREAD_IS_PER_CPU gives us an indisputable signal which
we can leverage here to not set LBF_SOME_PINNED.

Note that the aforementioned classification to group_imbalance (when
nothing can be done) is especially problematic on big.LITTLE systems, which
have a topology the likes of:

  DIE [  ]
  MC  [][]
   0  1  2  3
   L  L  B  B

  arch_scale_cpu_capacity(L) < arch_scale_cpu_capacity(B)

Here, setting LBF_SOME_PINNED due to a per-CPU kthread when balancing at MC
level on CPUs [0-1] will subsequently prevent CPUs [2-3] from classifying
the [0-1] group as group_misfit_task when balancing at DIE level. Thus, if
CPUs [0-1] are running CPU-bound (misfit) tasks, ill-timed per-CPU kthreads
can significantly delay the up-migration of said misfit tasks. Systems 
relying on ASYM_PACKING are likely to face similar issues.

[Use kthread_is_per_cpu() rather than p->nr_cpus_allowed]
[Reword changelog]
Signed-off-by: Valentin Schneider 
Signed-off-by: Lingutla Chandrasekhar 
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Dietmar Eggemann 
Reviewed-by: Vincent Guittot 
Link: 
https://lkml.kernel.org/r/20210407220628.3798191-2-valentin.schnei...@arm.com
---
 kernel/sched/fair.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d0bd861..d10e33d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7598,6 +7598,10 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
return 0;
 
+   /* Disregard pcpu kthreads; they are where they need to be. */
+   if ((p->flags & PF_KTHREAD) && kthread_is_per_cpu(p))
+   return 0;
+
if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
int cpu;
 


[PATCH] sched/fair: Ignore percpu threads for imbalance pulls

2021-02-17 Thread Lingutla Chandrasekhar
In load balancing, when the balancing group is unable to pull a task
from the busy group due to ->cpus_ptr constraints, it sets
LBF_SOME_PINNED in the lb env flags; as a consequence, sgc->imbalance
is set for its parent domain level, which makes the group be classified
as imbalanced so it can get help from another balancing CPU.

Consider a 4-CPU big.LITTLE system with CPUs 0-1 as LITTLEs and
CPUs 2-3 as Bigs with the below scenario:
- CPU0 doing newly_idle balancing
- CPU1 running percpu kworker and RT task (small tasks)
- CPU2 running 2 big tasks
- CPU3 running 1 medium task

While CPU0 is doing newly_idle load balance at the MC level, it fails to
pull the percpu kworker from CPU1, sets LBF_SOME_PINNED in the lb env
flags, and sets sgc->imbalance at the DIE level domain. As LBF_ALL_PINNED
is not cleared, it tries to redo the balancing by clearing CPU1 from the
env cpus, but it does not find another busiest_group, so CPU0 stops
balancing at the MC level without clearing 'sgc->imbalance' and restarts
load balancing at the DIE level.

CPU0 (the balancing CPU) then finds the LITTLEs' group as busiest_group
with group type imbalanced; the Bigs, classified at a level below the
imbalanced type, are ignored when picking the busiest group, and the
balancing is aborted without pulling any tasks (by that time, CPU1 might
not have any running tasks).

It is a suboptimal decision to classify the group as imbalanced due to
percpu threads, so don't set LBF_SOME_PINNED for per-CPU threads.

Signed-off-by: Lingutla Chandrasekhar 

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 04a3ce20da67..44a05ad8c96b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7560,7 +7560,9 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 
schedstat_inc(p->se.statistics.nr_failed_migrations_affine);
 
-   env->flags |= LBF_SOME_PINNED;
+   /* Ignore percpu threads for imbalance pulls. */
+   if (p->nr_cpus_allowed > 1)
+   env->flags |= LBF_SOME_PINNED;
 
/*
 * Remember if this task can be migrated to any other CPU in
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum, a Linux Foundation Collaborative Project.



[spyder] Re: Unable to Import module

2021-02-06 Thread Chandrasekhar Subramanyam
Instead of Spyder I started using another IDE, PyCharm. It is working.
I have stopped using Spyder.

On Thursday, February 4, 2021 at 9:12:26 PM UTC+5:30 Chandrasekhar 
Subramanyam wrote:

>
> I have installed tensorflow_hub and set up the path in the Windows 8 
> environment variables.
>
> But when I run the command: import tensorflow_hub as thub 
>
> I get the error message "Module not found".
>
> But when I run the same from the Python command line, I do not get the 
> error.
>
> Please help. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"spyder" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to spyderlib+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/spyderlib/fbd1b4df-b4f0-475f-b423-772809dd1fa5n%40googlegroups.com.


[spyder] Unable to Import module

2021-02-04 Thread Chandrasekhar Subramanyam

I have installed tensorflow_hub and set up the path in the Windows 8 
environment variables.

But when I run the command: import tensorflow_hub as thub 

I get the error message "Module not found".

But when I run the same from the Python command line, I do not get the error.

Please help. 

-- 
You received this message because you are subscribed to the Google Groups 
"spyder" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to spyderlib+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/spyderlib/2f65ad7b-1773-4105-ab88-155970ea00b3n%40googlegroups.com.


Re: [datameet] What is Covid RT-PCR test's Ct (Cycle Threshold) limit in India?

2021-01-15 Thread Chandrasekhar S.
Ct value is not a reliable indicator. Earlier, labs used to report the Ct value
by default, but now they do not; one can, however, ask for the Ct value.
https://www.icmr.gov.in/pdf/covid/techdoc/Advisory_on_correlation_of_COVID_severity_with_Ct_values.pdf

On Fri, Jan 15, 2021 at 11:18 AM Nikhil VJ  wrote:

> Hi All,
>
> I wanted to know if the Indian government or related agencies have set any
> standard limit in a crucial setting in the Covid tests called "Cycle
> Threshold".
>
> Are the folks collecting stats on covid cases having any data on cycle
> values for those cases?
>
> Have you or someone you know been diagnosed as a Covid-19 case? What was
> the cycle value in your test result? Does the lab that did the test share
> this data?
>
> Sharing an article and some excerpts from it:
>
>
> https://www.msn.com/en-us/health/medical/experts-us-covid-19-positivity-rate-high-due-to-too-sensitive-tests/ar-BB18wE8B
> Experts: US COVID-19 positivity rate high due to 'too sensitive' tests
>
> "With a cutoff of 35, about half of those tests would no longer qualify as
> positive. About 70 percent would no longer be judged positive if the cycles
> were limited to 30.
> In Massachusetts, from 85 to 90 percent of people who tested positive in
> July with a cycle threshold of 40 would have been considered negative if
> the threshold were 30 cycles, Mina said. "
> "The Food and Drug Administration said that it does not specify the cycle
> threshold ranges used to determine who is positive and 'commercial
> manufacturers and laboratories set their own.'"
> "The CDC said its own calculations suggest its extremely hard to detect a
> live virus in a sample above a threshold of 33 cycles. "
>
>
> This was one - there's many more if I search for "RT-PCR test cycle
> threshold value covid" on duckduckgo. (tip: google search is broken when it
> comes to anything controversial. Proverbial case of overprotective mother
> suffocating the child in the quest to protect it. Take the same query and
> run it in Bing, Duckduckgo etc also.)
>
> Looking into this I'm seeing an analogy with vectorizing raster satellite
> imagery: your software can easily fill the whole thing up with false
> positives, or can produce no result at all. A lot of fine-tuning is required
> to get the "perfect setting" that minimizes the false positives and false
> negatives, and you often never reach a perfect setting that didn't have any
> mistakes. It's not a hard yes/no thing. You invariably need manual
> intervention (and even with AI interventions we're seeing problems), and it
> frustrates the hell out of people who assumed this technology thing is a
> silver bullet.
>
> Inviting people with better knowledge on this topic to correct me: My
> understanding is that there is an exponential (maybe doubling, maybe some
> other factor) change from one Ct value to the next. To go from 33 to 40
> would be, well, non-trivial.
>
> So one set of data needed is: What are the Ct limits being used in
> current testing? Is there a single value standardized by the government? If
> there is variation, then who decides?
>
> Another set of data that can be just as useful: Have these Ct limits been
> changed since the pandemic began almost a year ago? How have they changed,
> and is there any correlation between that and the Covid+ case counts? Is it
> possible to "explode" / "rein in" a pandemic by merely altering this
> setting without any ground level realities changing? If yes, then why is
> talk about it missing from the mainstream discourse?
>
>
> --
> Cheers,
> Nikhil VJ
> https://nikhilvj.co.in
>
> --
> Datameet is a community of Data Science enthusiasts in India. Know more
> about us by visiting http://datameet.org
> ---
> You received this message because you are subscribed to the Google Groups
> "datameet" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to datameet+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/datameet/CAH7jeuM%2BFh7nsNGzaF5PkTxR-VUA%2B9vce7%3DwCRWcxiVnKYsKQQ%40mail.gmail.com
> 
> .
>

-- 
Datameet is a community of Data Science enthusiasts in India. Know more about 
us by visiting http://datameet.org
--- 
You received this message because you are subscribed to the Google Groups 
"datameet" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to datameet+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/datameet/CA%2BDCqDgqA-RfnxT0emvNF8U6eYfQGc8oPAewKO1asGBa92DN_w%40mail.gmail.com.


[PECL-DEV] VCS Account Request: kcomkar

2020-12-27 Thread K Chandrasekhar Omkar
Monitoring new extensions

Sponsor:
Not Applicable

-- 
PECL development discussion Mailing List (https://pecl.php.net/)
To unsubscribe, visit: https://www.php.net/unsub.php



Re: New installation tutorials OpenMeetings 5.1.0 on different OSes

2020-12-06 Thread chandrasekhar chodavarapu
Please unsubscribe me

On Sun, Dec 6, 2020 at 9:51 PM Alvaro  wrote:

>
> Hello,
>
> ==
>
> Allow me first to appreciate all the help I've received from all of you,
> without which I would not have been able to perform any of the published
> tutorials.
>
> Thank you Maxim, for your help and understanding during these years, in
> which, with my continuous questions and annoyances to you, I have been able
> to learn something and translate it into the OpenMeetings installation
> documents.
>
> Thanks to Sebastian Wagner, who was the first to put up with me, with his
> inexhaustible patience. Thank you also for your teaching.
>
> Thank you all sincerely.
>
> ==
>
> Maxim has launched the new OpenMeetings 5.1.0 release, increasingly
> complete and a wonderful piece of work.
>
> The installation tutorials for it on different OSes can be found at:
>
>
> https://cwiki.apache.org/confluence/display/OPENMEETINGS/Tutorials+for+installing+OpenMeetings+and+Tools
>
> ...called:
>
>
> Installation OpenMeetings 5.1.0 on Arch Linux
>
> Installation OpenMeetings 5.1.0 on Centos 7
>
> Installation OpenMeetings 5.1.0 on Centos 8
>
> Installation OpenMeetings 5.1.0 on Debian 10
>
> Installation OpenMeetings 5.1.0 on Fedora 32
>
> Installation OpenMeetings 5.1.0 on Fedora 33
>
> Installation OpenMeetings 5.1.0 on openSUSE Leap 15.2
>
> Installation OpenMeetings 5.1.0 on Ubuntu 18.04 lts
>
> Installation OpenMeetings 5.1.0 on Ubuntu 20.04 lts
>
>
> Best Regards
>
> Alvaro
>
>
>
>
> ...
>


[Bug 1905584] [NEW] package grub-pc 2.04-1ubuntu35.1 failed to install/upgrade: installed grub-pc package post-installation script subprocess returned error exit status 1

2020-11-25 Thread K Chandrasekhar Omkar
Public bug reported:

While installing as an OEM PC

ProblemType: Package
DistroRelease: Ubuntu 20.10
Package: grub-pc 2.04-1ubuntu35.1
ProcVersionSignature: Ubuntu 5.8.0-25.26-generic 5.8.14
Uname: Linux 5.8.0-25-generic x86_64
NonfreeKernelModules: zfs zunicode zavl icp zcommon znvpair
ApportVersion: 2.20.11-0ubuntu50.2
AptOrdering: NULL: ConfigurePending
Architecture: amd64
CasperMD5CheckResult: pass
CasperVersion: 1.455
Date: Wed Nov 25 21:13:36 2020
DuplicateSignature:
 package:grub-pc:2.04-1ubuntu35.1
 Setting up tzdata (2020d-1ubuntu1) ...
 /var/lib/dpkg/info/tzdata.postinst: 44: 3: Bad file descriptor
 dpkg: error processing package tzdata (--configure):
  installed tzdata package post-installation script subprocess returned error 
exit status 2
ErrorMessage: installed grub-pc package post-installation script subprocess 
returned error exit status 1
LiveMediaBuild: Ubuntu 20.10 "Groovy Gorilla" - Release amd64 (20201022)
ProcCmdLine: BOOT_IMAGE=/casper/vmlinuz file=/cdrom/preseed/hostname.seed 
only-ubiquity oem-config/enable=true quiet splash ---
Python3Details: /usr/bin/python3.8, Python 3.8.6, python3-minimal, 
3.8.6-0ubuntu1
PythonDetails: N/A
RelatedPackageVersions:
 dpkg 1.20.5ubuntu2
 apt  2.1.10
SourcePackage: grub2
Title: package grub-pc 2.04-1ubuntu35.1 failed to install/upgrade: installed 
grub-pc package post-installation script subprocess returned error exit status 1
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: grub2 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package groovy

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1905584

Title:
  package grub-pc 2.04-1ubuntu35.1 failed to install/upgrade: installed
  grub-pc package post-installation script subprocess returned error
  exit status 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1905584/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2020-03-09 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055108#comment-17055108
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

{quote}
I'm not sure if assuming long will be a good idea.
{quote}
I meant in the context of generics and about the performance.  

I'll make necessary changes, compare it again and post the results. 

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>    Reporter: Chandrasekhar Thumuluru
>    Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 90p_100k_sstables_with_1000_searches.png, 
> 90p_1million_sstables_with_1000_searches.png, 
> 90p_250k_sstables_with_1000_searches.png, 
> 90p_500k_sstables_with_1000_searches.png, 
> 90p_750k_sstables_with_1000_searches.png, 
> 95p_1_SSTable_with_5000_Searches.png, 
> 95p_100k_sstables_with_1000_searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_1million_sstables_with_1000_searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_250k_sstables_with_1000_searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 95p_500k_sstables_with_1000_searches.png, 
> 95p_750k_sstables_with_1000_searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_100k_sstables_with_1000_searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_1million_sstables_with_1000_searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_250k_sstables_with_1000_searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, 
> 99p_500k_sstables_with_1000_searches.png, 
> 99p_750k_sstables_with_1000_searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_100k_sstables_with_1000_searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_1million_sstables_with_1000_searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_250k_sstables_with_1000_searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, 
> Mean_500k_sstables_with_1000_searches.png, 
> Mean_750k_sstables_with_1000_searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with 
> the search interval. In Cassandra, IntervalTrees are not mutated. They are 
> recreated each time a mutation is required. This can be an issue during 
> repairs. In fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (idea is to eliminate intervals using start 
> and end points of search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to IntervalTree or outperforms IntervalTree-based 
> search. The cost of IntervalTree construction is also substantial and 
> produces a lot of garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage. 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purpose of the test, I simplified the IntervalTree by removing the data 
> portion of the interval.  Modified the template version (Java generics) to a 
> specialized version. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graph is in nanos. 
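
As a side note for readers of this ticket, one way to picture the "elimination
using Binary Search" idea quoted above is the following simplified sketch with
plain long endpoints (an illustration only, not the attached IntervalList.java,
which keeps Cassandra's generic Comparable bounds): keep the intervals sorted
by start point, binary-search away every interval whose start lies past the
end of the search interval, then walk the survivors filtering by end point.

#include <stdio.h>
#include <stdlib.h>

typedef struct { long start, end; } interval_t;

static int cmp_start(const void *a, const void *b)
{
    const interval_t *x = a, *y = b;
    return (x->start > y->start) - (x->start < y->start);
}

/* First index whose start is > key (upper bound) in a start-sorted array. */
static size_t upper_bound(const interval_t *v, size_t n, long key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (v[mid].start <= key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

static void search(const interval_t *v, size_t n, long qstart, long qend)
{
    size_t limit = upper_bound(v, n, qend); /* drop intervals starting past qend */
    for (size_t i = 0; i < limit; i++)
        if (v[i].end >= qstart)             /* walk survivors, filter by end */
            printf("overlap: [%ld, %ld]\n", v[i].start, v[i].end);
}

int main(void)
{
    interval_t v[] = { {1, 4}, {3, 9}, {6, 7}, {10, 12} };
    size_t n = sizeof v / sizeof v[0];
    qsort(v, n, sizeof v[0], cmp_start);
    search(v, n, 5, 8);   /* expect [3, 9] and [6, 7] */
    return 0;
}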



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2020-03-09 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17055108#comment-17055108
 ] 

Chandrasekhar Thumuluru edited comment on CASSANDRA-15397 at 3/9/20, 3:50 PM:
--

{quote}
I'm not sure if assuming long will be a good idea.
{quote}
I meant in the context of generics and not about the performance.  I'll make 
necessary changes, compare it again and post the results. 


was (Author: cthumuluru):
{quote}
I'm not sure if assuming long will be a good idea.
{quote}
I meant in the context of generics and about the performance.  

I'll make necessary changes, compare it again and post the results. 

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>    Reporter: Chandrasekhar Thumuluru
>    Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 90p_100k_sstables_with_1000_searches.png, 
> 90p_1million_sstables_with_1000_searches.png, 
> 90p_250k_sstables_with_1000_searches.png, 
> 90p_500k_sstables_with_1000_searches.png, 
> 90p_750k_sstables_with_1000_searches.png, 
> 95p_1_SSTable_with_5000_Searches.png, 
> 95p_100k_sstables_with_1000_searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_1million_sstables_with_1000_searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_250k_sstables_with_1000_searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 95p_500k_sstables_with_1000_searches.png, 
> 95p_750k_sstables_with_1000_searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_100k_sstables_with_1000_searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_1million_sstables_with_1000_searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_250k_sstables_with_1000_searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, 
> 99p_500k_sstables_with_1000_searches.png, 
> 99p_750k_sstables_with_1000_searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_100k_sstables_with_1000_searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_1million_sstables_with_1000_searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_250k_sstables_with_1000_searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, 
> Mean_500k_sstables_with_1000_searches.png, 
> Mean_750k_sstables_with_1000_searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with 
> the search interval. In Cassandra, IntervalTrees are not mutated. They are 
> recreated each time a mutation is required. This can be an issue during 
> repairs. In fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (idea is to eliminate intervals using start 
> and end points of search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to IntervalTree or outperforms IntervalTree-based 
> search. The cost of IntervalTree construction is also substantial and 
> produces a lot of garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage. 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purpose of the test, I simplified the IntervalTree by removing the data 
> portion of the interval.  Modified the template version (Java generics) to a 
> specialized version. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graph is in nanos.

[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2020-03-08 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17054593#comment-17054593
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] — Sorry for the delayed update on this ticket. 

I made the necessary changes to the IntervalList implementation based on your 
feedback. The previous submission was moved to IntervalList2. I also refactored 
the other code to use the IntervalList instead of the IntervalTree. You 
suggested using 4 long[] arrays, but I couldn't do so since the code uses a 
generic that's Comparable; I'm not sure if assuming long will be a good idea. 
Instead I changed the implementation to use two lists: one to store the 
interval points and the other to store the data. Based on my performance 
comparison, I see the previous submission performs better. It could be partly 
because the previous implementation packs all the relevant items together. I 
also added the performance comparison for 100k, 250k, 500k, 750k and 1 million 
SSTables based on 1000 searches. 

Based on my analysis, with a huge number of SSTables (250k and above) the 
IntervalList falls behind the IntervalTree by a small margin, but the 
trade-off is lower construction cost and less garbage created during 
construction. Please take a look and let me know what you think. 

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>    Reporter: Chandrasekhar Thumuluru
>    Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 90p_100k_sstables_with_1000_searches.png, 
> 90p_1million_sstables_with_1000_searches.png, 
> 90p_250k_sstables_with_1000_searches.png, 
> 90p_500k_sstables_with_1000_searches.png, 
> 90p_750k_sstables_with_1000_searches.png, 
> 95p_1_SSTable_with_5000_Searches.png, 
> 95p_100k_sstables_with_1000_searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_1million_sstables_with_1000_searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_250k_sstables_with_1000_searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 95p_500k_sstables_with_1000_searches.png, 
> 95p_750k_sstables_with_1000_searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_100k_sstables_with_1000_searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_1million_sstables_with_1000_searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_250k_sstables_with_1000_searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, 
> 99p_500k_sstables_with_1000_searches.png, 
> 99p_750k_sstables_with_1000_searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_100k_sstables_with_1000_searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_1million_sstables_with_1000_searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_250k_sstables_with_1000_searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, 
> Mean_500k_sstables_with_1000_searches.png, 
> Mean_750k_sstables_with_1000_searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with 
> the search interval. In Cassandra, IntervalTrees are not mutated. They are 
> recreated each time a mutation is required. This can be an issue during 
> repairs. In fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (idea is to eliminate intervals using start 
> and end points of search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to IntervalTree or outperforms IntervalTree-based 
> search. The cost of IntervalTree construction is also substantial and 
> produces a lot of garbage during repairs.

[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2020-03-08 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: 90p_1million_sstables_with_1000_searches.png
Mean_1million_sstables_with_1000_searches.png
95p_1million_sstables_with_1000_searches.png
99p_1million_sstables_with_1000_searches.png
90p_750k_sstables_with_1000_searches.png
95p_750k_sstables_with_1000_searches.png
99p_750k_sstables_with_1000_searches.png
Mean_750k_sstables_with_1000_searches.png
90p_500k_sstables_with_1000_searches.png
95p_500k_sstables_with_1000_searches.png
99p_500k_sstables_with_1000_searches.png
Mean_500k_sstables_with_1000_searches.png
90p_250k_sstables_with_1000_searches.png
95p_250k_sstables_with_1000_searches.png
99p_250k_sstables_with_1000_searches.png
Mean_250k_sstables_with_1000_searches.png
90p_100k_sstables_with_1000_searches.png
95p_100k_sstables_with_1000_searches.png
99p_100k_sstables_with_1000_searches.png
Mean_100k_sstables_with_1000_searches.png

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>    Reporter: Chandrasekhar Thumuluru
>    Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 90p_100k_sstables_with_1000_searches.png, 
> 90p_1million_sstables_with_1000_searches.png, 
> 90p_250k_sstables_with_1000_searches.png, 
> 90p_500k_sstables_with_1000_searches.png, 
> 90p_750k_sstables_with_1000_searches.png, 
> 95p_1_SSTable_with_5000_Searches.png, 
> 95p_100k_sstables_with_1000_searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_1million_sstables_with_1000_searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_250k_sstables_with_1000_searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 95p_500k_sstables_with_1000_searches.png, 
> 95p_750k_sstables_with_1000_searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_100k_sstables_with_1000_searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_1million_sstables_with_1000_searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_250k_sstables_with_1000_searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, 
> 99p_500k_sstables_with_1000_searches.png, 
> 99p_750k_sstables_with_1000_searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_100k_sstables_with_1000_searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_1million_sstables_with_1000_searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_250k_sstables_with_1000_searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, 
> Mean_500k_sstables_with_1000_searches.png, 
> Mean_750k_sstables_with_1000_searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with 
> the search interval. In Cassandra, IntervalTrees are not mutated. They are 
> recreated each time a mutation is required. This can be an issue during 
> repairs. In fact we noticed such issues during repair. 
> Since lists are cache friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (idea is to eliminate intervals using start 
> and end points of search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to IntervalTree or outperforms IntervalTree-based 
> search. The cost of IntervalTree construction is also substantial and 
> produces a lot of garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval with 5000 iterations.

[jira] [Comment Edited] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-20 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17001037#comment-17001037
 ] 

Chandrasekhar Thumuluru edited comment on CASSANDRA-15397 at 12/20/19 5:50 PM:
---

[~benedict] — Thanks for your inputs. 
 * I'll rename the class. I intentionally didn't do it in the first version of 
PR so it looks less distracting. 
 * I'll definitely do the performance comparison with million+ SSTables. Please 
note, my previous tests were not produced from real SSTables. The SSTable 
metadata was generated with random distributions. You can refer to the test 
files attached and let me know if you have any suggestions. I guess not using 
the real SSTables is fair to compare the performance of IntervalTree?
 *  I definitely share your concern on potential slowness due to linear scan, 
but I shared some code references in this 
[doc|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
  which makes me believe we are still good. Let me know your thought on that 
too. 
* I'm willing to try the improvement proposed to the algorithm. I'll talk to my 
team to gather context around what you are suggesting and get back to you if 
I've any questions. 
* I'm definitely willing to try the proposed changes and don't mind even if the 
assumption turns out to be wrong. 


was (Author: cthumuluru):
[~benedict] — Thanks for your inputs. 
 * I'll rename the class. I intentionally didn't do it in the first version of 
PR so it looks less distracting. 
 * I'll definitely do the performance comparison with million+ SSTables. Please 
note, my previous tests were not produced from real SSTables. The SSTable 
metadata was generated with random distributions. You can refer to the test 
files attached and let me know if you have any suggestions. I assume not using 
real SSTables is still fair for comparing the performance of IntervalTree?
 *  I definitely share your concern on potential slowness due to linear scan, 
but I shared some code reference in this 
[doc|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
  which makes me believe we are still good. Let me know your thoughts on that 
too. 
* I'm willing to try the improvement proposed to the algorithm. I'll talk to my 
team to gather context around what you are suggesting and get back to you if 
I've any questions. 
* I'm definitely willing to try the proposed changes and don't mind even if the 
assumption turns out to be wrong. 


[jira] [Comment Edited] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-20 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17001037#comment-17001037
 ] 

Chandrasekhar Thumuluru edited comment on CASSANDRA-15397 at 12/20/19 5:50 PM:
---

[~benedict] — Thanks for your inputs. 
 * I'll rename the class. I intentionally didn't do it in the first version of 
PR so it looks less distracting. 
 * I'll definitely do the performance comparison with million+ SSTables. Please 
note, my previous tests were not produced from real SSTables. The SSTable 
metadata was generated with random distributions. You can refer to the test 
files attached and let me know if you have any suggestions. I assume not using 
real SSTables is still fair for comparing the performance of IntervalTree?
 *  I definitely share your concern on potential slowness due to linear scan, 
but I shared some code references in this 
[doc|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
  which makes me believe we are still good. Let me know your thoughts on that 
too. 
* I'm willing to try the improvement proposed to the algorithm. I'll talk to my 
team to gather context around what you are suggesting and get back to you if 
I've any questions. 
* I'm definitely willing to try the proposed changes and don't mind even if the 
assumption turns out to be wrong. 


was (Author: cthumuluru):
[~benedict] — Thanks for your inputs. 
 * I'll rename the class. I intentionally didn't do it in the first version of 
PR so it looks less distracting. 
 * I'll definitely do the performance comparison with million+ SSTables. Please 
note, my previous tests were not produced from real SSTables. The SSTable 
metadata was generated with random distributions. You can refer to the test 
files attached and let me know if you have any suggestions. I assume not using 
real SSTables is still fair for comparing the performance of IntervalTree?
 *  I definitely share your concern on potential slowness due to linear scan, 
but I shared some code references in this 
[doc|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
  which makes me believe we are still good. Let me know your thoughts on that 
too. 
* I'm willing to try the improvement proposed to the algorithm. I'll talk to my 
team to gather context around what you are suggesting and get back to you if 
I've any questions. 
* I'm definitely willing to try the proposed changes and don't mind even it the 
assumption turns out to be wrong. 


[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-20 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17001037#comment-17001037
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] — Thanks for your inputs. 
 * I'll rename the class. I intentionally didn't do it in the first version of 
PR so it looks less distracting. 
 * I'll definitely do the performance comparison with million+ SSTables. Please 
note, my previous tests were not produced from real SSTables. The SSTable 
metadata was generated with random distributions. You can refer to the test 
files attached and let me know if you have any suggestions. I assume not using 
real SSTables is still fair for comparing the performance of IntervalTree?
 *  I definitely share your concern on potential slowness due to linear scan, 
but I shared some code references in this 
[doc|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
  which makes me believe we are still good. Let me know your thoughts on that 
too. 
* I'm willing to try the improvement proposed to the algorithm. I'll talk to my 
team to gather context around what you are suggesting and get back to you if 
I've any questions. 
* I'm definitely willing to try the proposed changes and don't mind even if the 
assumption turns out to be wrong. 

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>    Reporter: Chandrasekhar Thumuluru
>    Assignee: Chandrasekhar Thumuluru
>Priority: Low
>  Labels: pull-request-available
> Attachments: 95p_1_SSTable_with_5000_Searches.png, 
> 95p_15000_SSTable_with_5000_Searches.png, 
> 95p_2_SSTable_with_5000_Searches.png, 
> 95p_25000_SSTable_with_5000_Searches.png, 
> 95p_3_SSTable_with_5000_Searches.png, 
> 95p_5000_SSTable_with_5000_Searches.png, 
> 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java, 
> Mean_1_SSTable_with_5000_Searches.png, 
> Mean_15000_SSTable_with_5000_Searches.png, 
> Mean_2_SSTable_with_5000_Searches.png, 
> Mean_25000_SSTable_with_5000_Searches.png, 
> Mean_3_SSTable_with_5000_Searches.png, 
> Mean_5000_SSTable_with_5000_Searches.png, TESTS-TestSuites.xml.lz4, 
> replace_intervaltree_with_intervallist.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
> search interval. In Cassandra, IntervalTrees are not mutated. They are 
> recreated each time a mutation is required. This can be an issue during 
> repairs. In fact, we noticed such issues during repair. 
> Since lists are cache-friendly compared to linked lists and trees, I decided 
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using 
> the start and end points of the search interval). 
> Based on the tests I ran, I noticed Binary Search based elimination almost 
> always performs similarly to IntervalTree or outperforms IntervalTree-based 
> search. The cost of IntervalTree construction is also substantial and 
> produces a lot of garbage during repairs. 
> I ran the tests using random intervals to build the tree/lists and another 
> randomly generated search interval with 5000 iterations. I'm attaching all 
> the relevant graphs. The x-axis in the graphs is the search interval 
> coverage. 10p means the search interval covered 10% of the intervals. The 
> y-axis is the time the search took in nanos. 
> PS: 
> # For the purpose of the test, I simplified the IntervalTree by removing the 
> data portion of the interval, and modified the template version (Java 
> generics) to a specialized one. 
> # I used the code from Cassandra version _3.11_.
> # Time in the graph is in nanos. 
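As a rough illustration of the methodology quoted above (random intervals,
5000 random searches, timing in nanos), a driver along the following lines
reproduces the shape of the test. It reuses the Interval and
IntervalListSearch sketch from earlier in the thread; the sizes and value
ranges are assumptions, not the parameters behind the attached graphs:

import java.util.Arrays;
import java.util.Random;

final class SearchBench
{
    public static void main(String[] args)
    {
        Random rng = new Random(42);
        int n = 25_000;         // number of intervals ("SSTables")
        int iterations = 5_000; // searches, as in the graphs

        Interval[] intervals = new Interval[n];
        for (int i = 0; i < n; i++)
        {
            long start = rng.nextInt(1_000_000);
            intervals[i] = new Interval(start, start + rng.nextInt(10_000) + 1);
        }
        Arrays.sort(intervals, (a, b) -> Long.compare(a.min, b.min));
        IntervalListSearch list = new IntervalListSearch(intervals);

        long total = 0;
        for (int it = 0; it < iterations; it++)
        {
            long s = rng.nextInt(1_000_000);
            long e = s + rng.nextInt(100_000);
            long t0 = System.nanoTime();
            list.search(s, e);
            total += System.nanoTime() - t0;
        }
        System.out.println("mean nanos/search = " + total / iterations);
    }
}

A real comparison would add JIT warmup and report percentiles as well as the
mean (the graphs show mean, 95p and 99p), but the loop above is the gist.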






[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-19 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17000219#comment-17000219
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] — I posted the changes to my branch and created a 
[PR|https://github.com/apache/cassandra/pull/400]. Please provide your comments 
when you find free time. Thanks. 




[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-12-10 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16993222#comment-16993222
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] — When you get a chance, can you review the changes and let me know 
your feedback?




[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-14 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: TESTS-TestSuites.xml.lz4




[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-14 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16974574#comment-16974574
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

[~benedict] —
 * Attached the patch file with my changes 
[^replace_intervaltree_with_intervallist.patch]. I used trunk for the patch.
 * Attached the unit test run results [^TESTS-TestSuites.xml.lz4].
 * 
[Link|https://docs.google.com/document/d/1vwo9ArZbtgWUwJcvZGes_69YVh4yiP9c7NQFgI0iynQ/edit?usp=sharing]
 to a Google doc with some details about the proposed algorithm.

Please review the changes when you get a chance and let me know your feedback.




[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-14 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: replace_intervaltree_with_intervallist.patch




[jira] [Commented] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-05 Thread Chandrasekhar Thumuluru (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16967732#comment-16967732
 ] 

Chandrasekhar Thumuluru commented on CASSANDRA-15397:
-

Sure [~benedict]. I can make the changes and update the ticket with GitHub 
links. As you can see, I simplified the IntervalTree implementation for 
comparison purposes. I'll make the final changes with tests and push them to my 
fork by the weekend.

I completely agree with you that it's not a pressing change, but given the 
construction cost and the immutable nature of IntervalTree usage I felt it's 
worth a shot. 
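For readers without the attachments, the tree being compared against is, in
rough shape, a classic centered interval tree. A hedged sketch of its search
follows (modeled on the general technique, not on the exact code in
IntervalTreeSimplified.java, whose details may differ):

import java.util.List;

// Each node stores the intervals that cross its center point; intervals
// entirely left of the center go to the left subtree, entirely right ones
// to the right subtree.
final class TreeNode
{
    final long center;
    final Interval[] crossing; // intervals containing center, sorted by min
    final TreeNode left, right;

    TreeNode(long center, Interval[] crossing, TreeNode left, TreeNode right)
    {
        this.center = center;
        this.crossing = crossing;
        this.left = left;
        this.right = right;
    }

    void search(long start, long end, List<Interval> out)
    {
        for (Interval iv : crossing)
        {
            if (iv.min > end)
                break;            // sorted by min: nothing further overlaps
            if (iv.max >= start)
                out.add(iv);
        }
        if (left != null && start < center)
            left.search(start, end, out);  // left subtree: max < center
        if (right != null && end > center)
            right.search(start, end, out); // right subtree: min > center
    }
}

Because the tree is immutable, every SSTable add or remove rebuilds nodes like
these for the whole set, which is where the construction cost and garbage
mentioned above come from.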




[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-05 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: Mean_3_SSTable_with_5000_Searches.png
Mean_25000_SSTable_with_5000_Searches.png
Mean_2_SSTable_with_5000_Searches.png
Mean_15000_SSTable_with_5000_Searches.png
Mean_1_SSTable_with_5000_Searches.png
Mean_5000_SSTable_with_5000_Searches.png




[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-05 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: 95p_5000_SSTable_with_5000_Searches.png
95p_1_SSTable_with_5000_Searches.png
95p_15000_SSTable_with_5000_Searches.png
95p_2_SSTable_with_5000_Searches.png
95p_25000_SSTable_with_5000_Searches.png
95p_3_SSTable_with_5000_Searches.png




[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-05 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Description: 
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. The cost of IntervalTree construction is also substantial and produces 
a lot of garbage during repairs. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. The x-axis in the graphs is the search interval coverage. 10p 
means the search interval covered 10% of the intervals. The y-axis is the time 
the search took in nanos. 

PS: 
# For the purpose of the test, I simplified the IntervalTree by removing the 
data portion of the interval, and modified the template version (Java 
generics) to a specialized one. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 

  was:
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. The cost of IntervalTree construction is also substantial and produces 
a lot of garbage during repairs. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. The x-axis in the graphs is the search interval coverage. 10p 
means the search interval covered 10% of the intervals. The y-axis is the time 
the search took in nanos. 

PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-05 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Description: 
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. The cost of IntervalTree construction is also substantial and produces 
a lot of garbage during repairs. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. The x-axis in the graphs is the search interval coverage. 10p 
means the search interval covered 10% of the intervals. The y-axis is the time 
the search took in nanos. 

PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 

  was:
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. 


PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Description: 
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. 


PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 

  was:
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. 

PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 





[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Description: 
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. 

PS: 
# For the purpose of the test, I simplified the IntervalTree code by making it 
non-generic and removing the data portion of the interval. 
# I used the code from Cassandra version _3.11_.
# Time in the graph is in nanos. 

  was:
Cassandra uses IntervalTrees to identify the SSTables that overlap with the 
search interval. In Cassandra, IntervalTrees are not mutated. They are 
recreated each time a mutation is required. This can be an issue during 
repairs. In fact, we noticed such issues during repair. 

Since lists are cache-friendly compared to linked lists and trees, I decided to 
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the 
start and end points of the search interval). 

Based on the tests I ran, I noticed Binary Search based elimination almost 
always performs similarly to IntervalTree or outperforms IntervalTree-based 
search. 

I ran the tests using random intervals to build the tree/lists and another 
randomly generated search interval with 5000 iterations. I'm attaching all the 
relevant graphs. 

PS: For the purpose of the test, I simplified the IntervalTree code by making 
it non-generic and removing the data portion of the interval. I used the code 
from version _3.11_.


> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chandrasekhar Thumuluru
>Priority: Normal
> Attachments: 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with a
> search interval. In Cassandra, IntervalTrees are not mutated. They are
> recreated each time a mutation is required. This can be an issue during
> repairs. In fact, we noticed such issues during repairs.
> Since lists are cache-friendly compared to linked lists and trees, I decided
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using
> the start and end points of the search interval).
> Based on the tests I ran, I noticed that Binary Search based elimination
> almost always performs similarly to the IntervalTree or outperforms the
> IntervalTree based search.
> I ran the tests using random intervals to build the tree/lists and another
> randomly generated search interval, with 5000 iterations. I'm attaching all
> the relevant graphs.
> PS:
> # For the purpose of the test, I simplified the IntervalTree code by making
> it non-generic and removing the data portion of the interval.
> # I used the code from Cassandra version _3.11_.
> # Time in the graph is in nanos.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: IntervalTreeSimplified.java

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chandrasekhar Thumuluru
>Priority: Normal
> Attachments: 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with a
> search interval. In Cassandra, IntervalTrees are not mutated. They are
> recreated each time a mutation is required. This can be an issue during
> repairs. In fact, we noticed such issues during repairs.
> Since lists are cache-friendly compared to linked lists and trees, I decided
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using
> the start and end points of the search interval).
> Based on the tests I ran, I noticed that Binary Search based elimination
> almost always performs similarly to the IntervalTree or outperforms the
> IntervalTree based search.
> I ran the tests using random intervals to build the tree/lists and another
> randomly generated search interval, with 5000 iterations. I'm attaching all
> the relevant graphs.
> PS: For the purpose of the test, I simplified the IntervalTree code by making
> it non-generic and removing the data portion of the interval. I used the code
> from version _3.11_.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: IntervalList.java

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chandrasekhar Thumuluru
>Priority: Normal
> Attachments: 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with a
> search interval. In Cassandra, IntervalTrees are not mutated. They are
> recreated each time a mutation is required. This can be an issue during
> repairs. In fact, we noticed such issues during repairs.
> Since lists are cache-friendly compared to linked lists and trees, I decided
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using
> the start and end points of the search interval).
> Based on the tests I ran, I noticed that Binary Search based elimination
> almost always performs similarly to the IntervalTree or outperforms the
> IntervalTree based search.
> I ran the tests using random intervals to build the tree/lists and another
> randomly generated search interval, with 5000 iterations. I'm attaching all
> the relevant graphs.
> PS: For the purpose of the test, I simplified the IntervalTree code by making
> it non-generic and removing the data portion of the interval. I used the code
> from version _3.11_.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Attachment: IntervalListWithElimination.java

> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chandrasekhar Thumuluru
>Priority: Normal
> Attachments: 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png, IntervalList.java, 
> IntervalListWithElimination.java, IntervalTreeSimplified.java
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with a
> search interval. In Cassandra, IntervalTrees are not mutated. They are
> recreated each time a mutation is required. This can be an issue during
> repairs. In fact, we noticed such issues during repairs.
> Since lists are cache-friendly compared to linked lists and trees, I decided
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using
> the start and end points of the search interval).
> Based on the tests I ran, I noticed that Binary Search based elimination
> almost always performs similarly to the IntervalTree or outperforms the
> IntervalTree based search.
> I ran the tests using random intervals to build the tree/lists and another
> randomly generated search interval, with 5000 iterations. I'm attaching all
> the relevant graphs.
> PS: For the purpose of the test, I simplified the IntervalTree code by making
> it non-generic and removing the data portion of the interval. I used the code
> from version _3.11_.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar Thumuluru updated CASSANDRA-15397:

Description: 
Cassandra uses IntervalTrees to identify the SSTables that overlap with a search
interval. In Cassandra, IntervalTrees are not mutated. They are recreated each
time a mutation is required. This can be an issue during repairs. In fact, we
noticed such issues during repairs.

Since lists are cache-friendly compared to linked lists and trees, I decided to
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the
start and end points of the search interval).

Based on the tests I ran, I noticed that Binary Search based elimination almost
always performs similarly to the IntervalTree or outperforms the IntervalTree
based search.

I ran the tests using random intervals to build the tree/lists and another
randomly generated search interval, with 5000 iterations. I'm attaching all the
relevant graphs.

PS: For the purpose of the test, I simplified the IntervalTree code by making it
non-generic and removing the data portion of the interval. I used the code from
version _3.11_.

  was:
Cassandra uses IntervalTrees to identify the SSTables that overlap with a search
interval. In Cassandra, IntervalTrees are not mutated. They are recreated each
time a mutation is required. This can be an issue during repairs. In fact, we
noticed such issues during repairs.

Since lists are cache-friendly compared to linked lists and trees, I decided to
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the
start and end points of the search interval).

Based on the tests I ran, I noticed that Binary Search based elimination almost
always performs similarly to the IntervalTree or outperforms the IntervalTree
based search.

I ran the tests using random intervals to build the tree/lists and another
randomly generated search interval, with 5000 iterations. I'm attaching all the
relevant graphs.

PS: For the purpose of the test, I simplified the IntervalTree code by making it
non-generic and removing the data portion of the interval.


> IntervalTree performance comparison with Linear Walk and Binary Search based 
> Elimination. 
> --
>
> Key: CASSANDRA-15397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chandrasekhar Thumuluru
>Priority: Normal
> Attachments: 99p_1_SSTable_with_5000_Searches.png, 
> 99p_15000_SSTable_with_5000_Searches.png, 
> 99p_2_SSTable_with_5000_Searches.png, 
> 99p_25000_SSTable_with_5000_Searches.png, 
> 99p_3_SSTable_with_5000_Searches.png, 
> 99p_5000_SSTable_with_5000_Searches.png
>
>
> Cassandra uses IntervalTrees to identify the SSTables that overlap with a
> search interval. In Cassandra, IntervalTrees are not mutated. They are
> recreated each time a mutation is required. This can be an issue during
> repairs. In fact, we noticed such issues during repairs.
> Since lists are cache-friendly compared to linked lists and trees, I decided
> to compare the search performance with:
> * Linear Walk.
> * Elimination using Binary Search (the idea is to eliminate intervals using
> the start and end points of the search interval).
> Based on the tests I ran, I noticed that Binary Search based elimination
> almost always performs similarly to the IntervalTree or outperforms the
> IntervalTree based search.
> I ran the tests using random intervals to build the tree/lists and another
> randomly generated search interval, with 5000 iterations. I'm attaching all
> the relevant graphs.
> PS: For the purpose of the test, I simplified the IntervalTree code by making
> it non-generic and removing the data portion of the interval. I used the code
> from version _3.11_.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-15397) IntervalTree performance comparison with Linear Walk and Binary Search based Elimination.

2019-11-04 Thread Chandrasekhar Thumuluru (Jira)
Chandrasekhar Thumuluru created CASSANDRA-15397:
---

 Summary: IntervalTree performance comparison with Linear Walk and 
Binary Search based Elimination. 
 Key: CASSANDRA-15397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15397
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chandrasekhar Thumuluru
 Attachments: 99p_1_SSTable_with_5000_Searches.png, 
99p_15000_SSTable_with_5000_Searches.png, 
99p_2_SSTable_with_5000_Searches.png, 
99p_25000_SSTable_with_5000_Searches.png, 
99p_3_SSTable_with_5000_Searches.png, 
99p_5000_SSTable_with_5000_Searches.png

Cassandra uses IntervalTrees to identify the SSTables that overlap with a search
interval. In Cassandra, IntervalTrees are not mutated. They are recreated each
time a mutation is required. This can be an issue during repairs. In fact, we
noticed such issues during repairs.

Since lists are cache-friendly compared to linked lists and trees, I decided to
compare the search performance with:
* Linear Walk.
* Elimination using Binary Search (the idea is to eliminate intervals using the
start and end points of the search interval).

Based on the tests I ran, I noticed that Binary Search based elimination almost
always performs similarly to the IntervalTree or outperforms the IntervalTree
based search.

I ran the tests using random intervals to build the tree/lists and another
randomly generated search interval, with 5000 iterations. I'm attaching all the
relevant graphs.

PS: For the purpose of the test, I simplified the IntervalTree code by making it
non-generic and removing the data portion of the interval.
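
As a sketch of the methodology, a hypothetical skeleton of the measurement loop
(sizes, names, and the percentile bookkeeping are assumptions; the real harness
is in the attached files): build the structure from random intervals, time 5000
random searches with System.nanoTime(), and report the 99th percentile in nanos.

    import java.util.Arrays;
    import java.util.Random;

    public class SearchBench {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            int n = 25_000, iterations = 5_000;
            long[][] intervals = new long[n][2];   // random [start, end] pairs
            for (long[] iv : intervals) {
                iv[0] = rnd.nextInt(1_000_000);
                iv[1] = iv[0] + rnd.nextInt(10_000);
            }
            // ... build the IntervalTree / IntervalList under test here ...
            long[] nanos = new long[iterations];
            for (int i = 0; i < iterations; i++) {
                long qs = rnd.nextInt(1_000_000), qe = qs + rnd.nextInt(10_000);
                long t0 = System.nanoTime();
                // ... run search(qs, qe) on the structure under test ...
                nanos[i] = System.nanoTime() - t0;
            }
            Arrays.sort(nanos);
            System.out.println("99p nanos: " + nanos[(int) (iterations * 0.99)]);
        }
    }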



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (KAFKA-8711) Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. testControlPlaneRequest

2019-07-30 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896180#comment-16896180
 ] 

Chandrasekhar commented on KAFKA-8711:
--

Any feedback?

> Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. 
> testControlPlaneRequest
> --
>
> Key: KAFKA-8711
> URL: https://issues.apache.org/jira/browse/KAFKA-8711
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Critical
> Attachments: KafkaAUTFailures07242019_PASS2.txt, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Cloned the Kafka 2.3.0 source to our git repo and compiled it using 'gradle
> build'; we see the following error consistently:
> Gradle Version 4.7
>  
> testControlPlaneRequest
> java.net.BindException: Address already in use (Bind failed)
>     at java.net.PlainSocketImpl.socketBind(Native Method)
>     at 
> java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
>     at java.net.Socket.bind(Socket.java:644)
>     at java.net.Socket.<init>(Socket.java:433)
>     at java.net.Socket.<init>(Socket.java:286)
>     at kafka.network.SocketServerTest.connect(SocketServerTest.scala:140)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1(SocketServerTest.scala:200)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1$adapted(SocketServerTest.scala:199)
>     at 
> kafka.network.SocketServerTest.withTestableServer(SocketServerTest.scala:1141)
>     at 
> kafka.network.SocketServerTest.testControlPlaneRequest(SocketServerTest.scala:199)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>     at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>     at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>     at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.gradle.internal.dispatch.ReflectionDisp
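
The bind failure above is the classic symptom of two sockets racing for the
same fixed port. As a hypothetical illustration of the usual workaround (not a
patch to SocketServerTest): bind to port 0 and let the OS hand out a free
ephemeral port.

    import java.net.ServerSocket;

    public class EphemeralPortExample {
        public static void main(String[] args) throws Exception {
            // Port 0 asks the OS for any free port, so concurrent test runs
            // can never collide on a hard-coded port number.
            try (ServerSocket server = new ServerSocket(0)) {
                System.out.println("bound to free port " + server.getLocalPort());
            }
        }
    }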

[jira] [Commented] (KAFKA-8711) Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. testControlPlaneRequest

2019-07-24 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892118#comment-16892118
 ] 

Chandrasekhar commented on KAFKA-8711:
--

Attached are the logs... Please let us know if we can comment out this test case
for this build and move forward until there is a release available with this
fix. If so, where is this code in the repo, and how should we run a subset of
the test cases rather than waiting ~2hrs to find the failures...

 

Thanks Much

 

Chandra

[^KafkaAUTFailures07242019_PASS2.txt]!KafkaUTFailures07242019_PASS2.GIF!

> Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. 
> testControlPlaneRequest
> --
>
> Key: KAFKA-8711
> URL: https://issues.apache.org/jira/browse/KAFKA-8711
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Critical
> Attachments: KafkaAUTFailures07242019_PASS2.txt, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Cloned the Kafka 2.3.0 source to our git repo and compiled it using 'gradle
> build'; we see the following error consistently:
> Gradle Version 4.7
>  
> testControlPlaneRequest
> java.net.BindException: Address already in use (Bind failed)
>     at java.net.PlainSocketImpl.socketBind(Native Method)
>     at 
> java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
>     at java.net.Socket.bind(Socket.java:644)
>     at java.net.Socket.<init>(Socket.java:433)
>     at java.net.Socket.<init>(Socket.java:286)
>     at kafka.network.SocketServerTest.connect(SocketServerTest.scala:140)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1(SocketServerTest.scala:200)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1$adapted(SocketServerTest.scala:199)
>     at 
> kafka.network.SocketServerTest.withTestableServer(SocketServerTest.scala:1141)
>     at 
> kafka.network.SocketServerTest.testControlPlaneRequest(SocketServerTest.scala:199)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>     at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>     at 
> org.gradle.api.internal.tasks.testing.SuiteTes

[jira] [Updated] (KAFKA-8711) Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. testControlPlaneRequest

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8711:
-
Attachment: KafkaUTFailures07242019_PASS2.GIF

> Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. 
> testControlPlaneRequest
> --
>
> Key: KAFKA-8711
> URL: https://issues.apache.org/jira/browse/KAFKA-8711
> Project: Kafka
>  Issue Type: Test
>  Components: core, unit tests
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Critical
> Attachments: KafkaAUTFailures07242019_PASS2.txt, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Cloned the Kafka 2.3.0 source to our git repo and compiled it using 'gradle
> build'; we see the following error consistently:
> Gradle Version 4.7
>  
> testControlPlaneRequest
> java.net.BindException: Address already in use (Bind failed)
>     at java.net.PlainSocketImpl.socketBind(Native Method)
>     at 
> java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
>     at java.net.Socket.bind(Socket.java:644)
>     at java.net.Socket.<init>(Socket.java:433)
>     at java.net.Socket.<init>(Socket.java:286)
>     at kafka.network.SocketServerTest.connect(SocketServerTest.scala:140)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1(SocketServerTest.scala:200)
>     at 
> kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1$adapted(SocketServerTest.scala:199)
>     at 
> kafka.network.SocketServerTest.withTestableServer(SocketServerTest.scala:1141)
>     at 
> kafka.network.SocketServerTest.testControlPlaneRequest(SocketServerTest.scala:199)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>     at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>     at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>     at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>     at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.gradle.internal.dispatch.ReflectionDisp

[jira] [Created] (KAFKA-8711) Kafka 2.3.0 Transient Unit Test Failures SocketServerTest. testControlPlaneRequest

2019-07-24 Thread Chandrasekhar (JIRA)
Chandrasekhar created KAFKA-8711:


 Summary: Kafka 2.3.0 Transient Unit Test Failures 
SocketServerTest. testControlPlaneRequest
 Key: KAFKA-8711
 URL: https://issues.apache.org/jira/browse/KAFKA-8711
 Project: Kafka
  Issue Type: Test
  Components: core, unit tests
Affects Versions: 2.3.0
Reporter: Chandrasekhar


Cloned the Kafka 2.3.0 source to our git repo and compiled it using 'gradle
build'; we see the following error consistently:

Gradle Version 4.7

 

testControlPlaneRequest
java.net.BindException: Address already in use (Bind failed)
    at java.net.PlainSocketImpl.socketBind(Native Method)
    at 
java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
    at java.net.Socket.bind(Socket.java:644)
    at java.net.Socket.<init>(Socket.java:433)
    at java.net.Socket.<init>(Socket.java:286)
    at kafka.network.SocketServerTest.connect(SocketServerTest.scala:140)
    at 
kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1(SocketServerTest.scala:200)
    at 
kafka.network.SocketServerTest.$anonfun$testControlPlaneRequest$1$adapted(SocketServerTest.scala:199)
    at 
kafka.network.SocketServerTest.withTestableServer(SocketServerTest.scala:1141)
    at 
kafka.network.SocketServerTest.testControlPlaneRequest(SocketServerTest.scala:199)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
    at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
    at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
    at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
    at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
    at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
    at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
    at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
    at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
    at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498

[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Transient Unit Test Failures on Oracle Linux - See attached for details

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Priority: Major  (was: Minor)
 Summary: Kafka 2.3.0 Transient Unit Test Failures on Oracle Linux - See 
attached for details  (was: Kafka 2.3.0 Unit Test Failures on Oracle Linux  - 
Need help debugging framework or issue.)

> Kafka 2.3.0 Transient Unit Test Failures on Oracle Linux - See attached for 
> details
> ---
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Major
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaAUTFailures07242019_PASS2.txt, KafkaUTFailures07242019.GIF, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Component/s: unit tests

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaAUTFailures07242019_PASS2.txt, KafkaUTFailures07242019.GIF, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892055#comment-16892055
 ] 

Chandrasekhar commented on KAFKA-8706:
--

Second Pass: Failed Test Cases: 3, per your list above:

UserQuotaTest. testQuotaOverrideDelete  - 
https://issues.apache.org/jira/browse/KAFKA-8032
UserQuotaTest. testThrottledProducerConsumer - 
https://issues.apache.org/jira/browse/KAFKA-8073
SocketServerTest. testControlPlaneRequest : Need to write a bug - will do in a 
few mins...

 

Please let me know if you know the reason for these intermittent failures (3, 4,
and 6 above), and let us know if this release is stable enough.

 

!KafkaUTFailures07242019_PASS2.GIF![^KafkaAUTFailures07242019_PASS2.txt]

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaAUTFailures07242019_PASS2.txt, KafkaUTFailures07242019.GIF, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Attachment: KafkaUTFailures07242019_PASS2.GIF

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaAUTFailures07242019_PASS2.txt, KafkaUTFailures07242019.GIF, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Attachment: KafkaAUTFailures07242019_PASS2.txt

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaAUTFailures07242019_PASS2.txt, KafkaUTFailures07242019.GIF, 
> KafkaUTFailures07242019_PASS2.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891890#comment-16891890
 ] 

Chandrasekhar commented on KAFKA-8706:
--

Re-running just one more time to confirm; if we hit these failures 3 times, I
will have to change the priority to critical to make sure there is no impending
danger to actually running services. Kindly advise on the severity and let me
know if you need any more information from this effort; we are just trying to
get to the latest release so we are not left behind.

 

Appreciate swift response again!!

 

Best regards

 

Chandra Rao

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaUTFailures07242019.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891872#comment-16891872
 ] 

Chandrasekhar edited comment on KAFKA-8706 at 7/24/19 2:08 PM:
---

[~ckamal]: Thanks for the swift response and the references to known issues!

 

Reran the AUT again and found four failures. The following four are
consistently failing. NOTE: Initially the build ran out of file descriptors,
so I bumped up the open file descriptor limit to get around that issue before
the first run.

Do these failures mean anything for actually running software? I.e., the failures:

testProduceConsumeViaAssign

testThrottledProducerConsumer

testMetricsDuringTopicCreateDelete

testControlPlaneRequest

 

[^KafkaAUTFailures07242019.txt]!KafkaUTFailures07242019.GIF!


was (Author: chandranc@oracle.com):
Reran the AUT again and found four failures. The following four are
consistently failing. NOTE: Initially the build ran out of file descriptors,
so I bumped up the open file descriptor limit to get around that issue before
the first run.

Do these failures mean anything for actually running software? I.e., the failures:

testProduceConsumeViaAssign

testThrottledProducerConsumer

testMetricsDuringTopicCreateDelete

testControlPlaneRequest

 

[^KafkaAUTFailures07242019.txt]!KafkaUTFailures07242019.GIF!

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaUTFailures07242019.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16891872#comment-16891872
 ] 

Chandrasekhar commented on KAFKA-8706:
--

Reran the AUT again and found four failures. The following four are
consistently failing. NOTE: Initially the build ran out of file descriptors,
so I bumped up the open file descriptor limit to get around that issue before
the first run.

Do these failures mean anything for actually running software? I.e., the failures:

testProduceConsumeViaAssign

testThrottledProducerConsumer

testMetricsDuringTopicCreateDelete

testControlPlaneRequest

 

[^KafkaAUTFailures07242019.txt]!KafkaUTFailures07242019.GIF!
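
On the file descriptor point above, a small diagnostic sketch (an assumption
for illustration, not part of the Kafka build; com.sun.management is
HotSpot-on-Unix specific) that reports how close the test JVM is to the
open-descriptor limit:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    public class FdCheck {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                com.sun.management.UnixOperatingSystemMXBean unix =
                        (com.sun.management.UnixOperatingSystemMXBean) os;
                // Compare the live fd count with the ulimit the JVM sees.
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " / max: " + unix.getMaxFileDescriptorCount());
            }
        }
    }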

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaUTFailures07242019.GIF
>
>
> Hi
> We have just imported the KAFKA 2.3.0 source code from the git repo and are
> compiling it using Gradle 4.7 on an Oracle VM with the following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build), there are 6 test failures at the end. The
> failed tests are reported as follows:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached are the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or
> if these are known test failures in Kafka 2.3.0.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Attachment: KafkaUTFailures07242019.GIF

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt, 
> KafkaUTFailures07242019.GIF
>
>
> Hi
> We have just imported KAFKA 2.3.0 source code from git repo and compiling 
> using Gradle 4.7 on Oracle VM with following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build) , there are 6 test failures at the end. Failed 
> Tests are reported as following:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached find the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or 
> if this is a known test failures in Kafka 2.3.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-24 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Attachment: KafkaAUTFailures07242019.txt

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt, KafkaAUTFailures07242019.txt
>
>
> Hi
> We have just imported KAFKA 2.3.0 source code from git repo and compiling 
> using Gradle 4.7 on Oracle VM with following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build) , there are 6 test failures at the end. Failed 
> Tests are reported as following:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached find the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or 
> if this is a known test failures in Kafka 2.3.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-23 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Description: 
Hi

We have just imported the KAFKA 2.3.0 source code from the git repo and are 
compiling it using Gradle 4.7 on an Oracle VM with the following info:

[vagrant@localhost kafka-2.3.0]$ uname -a
 Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
2017 x86_64 x86_64 x86_64 GNU/Linux
 [vagrant@localhost kafka-2.3.0]$

 

Upon compiling (#gradle build), there are 6 test failures at the end. The 
failed tests are reported as follows:

DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
 SaslSslAdminClientIntegrationTest. 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
 UserQuotaTest. testQuotaOverrideDelete 
 UserQuotaTest. testThrottledProducerConsumer 
 MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
 SocketServerTest. testControlPlaneRequest

Please find the failures attached.

 

[^KafkaAUTFailures.txt]

 

 

 We would like to know if we are missing anything in our build environment or 
if these are known test failures in Kafka 2.3.0.

 

 

  was:
Hi

We have just imported KAFKA 2.3.0 source code from git repo and compiling using 
Gradle 4.7 on Oracle VM with following info:

[vagrant@localhost kafka-2.3.0]$ uname -a
 Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
2017 x86_64 x86_64 x86_64 GNU/Linux
 [vagrant@localhost kafka-2.3.0]$

 

Upon compiling , there are 6 test failures at the end. Failed Tests are 
reported as following:

DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
 SaslSslAdminClientIntegrationTest. 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
 UserQuotaTest. testQuotaOverrideDelete 
 UserQuotaTest. testThrottledProducerConsumer 
 MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
 SocketServerTest. testControlPlaneRequest

Attached find the failures.

 

[^KafkaAUTFailures.txt]

 

 

 We would like to know if we are missing anything in our build environment or 
if this is a known test failures in Kafka 2.3.0

 

 


> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt
>
>
> Hi
> We have just imported KAFKA 2.3.0 source code from git repo and compiling 
> using Gradle 4.7 on Oracle VM with following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling (#gradle build) , there are 6 test failures at the end. Failed 
> Tests are reported as following:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached find the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or 
> if this is a known test failures in Kafka 2.3.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-23 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Issue Type: Bug  (was: Test)

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt
>
>
> Hi
> We have just imported KAFKA 2.3.0 source code from git repo and compiling 
> using Gradle 4.7 on Oracle VM with following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling , there are 6 test failures at the end. Failed Tests are 
> reported as following:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached find the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or 
> if this is a known test failures in Kafka 2.3.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux - Need help debugging framework or issue.

2019-07-23 Thread Chandrasekhar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandrasekhar updated KAFKA-8706:
-
Summary: Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help 
debugging framework or issue.  (was: Kafka 2.3.0 Unit Test Failures on Oracle 
Linux 7.2 - Need help debugging framework or issue.)

> Kafka 2.3.0 Unit Test Failures on Oracle Linux  - Need help debugging 
> framework or issue.
> -
>
> Key: KAFKA-8706
> URL: https://issues.apache.org/jira/browse/KAFKA-8706
> Project: Kafka
>  Issue Type: Test
>  Components: core
>Affects Versions: 2.3.0
>    Reporter: Chandrasekhar
>Priority: Minor
> Attachments: KafkaAUTFailures.txt
>
>
> Hi
> We have just imported KAFKA 2.3.0 source code from git repo and compiling 
> using Gradle 4.7 on Oracle VM with following info:
> [vagrant@localhost kafka-2.3.0]$ uname -a
>  Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
>  [vagrant@localhost kafka-2.3.0]$
>  
> Upon compiling , there are 6 test failures at the end. Failed Tests are 
> reported as following:
> DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
>  SaslSslAdminClientIntegrationTest. 
> testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
>  UserQuotaTest. testQuotaOverrideDelete 
>  UserQuotaTest. testThrottledProducerConsumer 
>  MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
>  SocketServerTest. testControlPlaneRequest
> Attached find the failures.
>  
> [^KafkaAUTFailures.txt]
>  
>  
>  We would like to know if we are missing anything in our build environment or 
> if this is a known test failures in Kafka 2.3.0
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8706) Kafka 2.3.0 Unit Test Failures on Oracle Linux 7.2 - Need help debugging framework or issue.

2019-07-23 Thread Chandrasekhar (JIRA)
Chandrasekhar created KAFKA-8706:


 Summary: Kafka 2.3.0 Unit Test Failures on Oracle Linux 7.2 - Need 
help debugging framework or issue.
 Key: KAFKA-8706
 URL: https://issues.apache.org/jira/browse/KAFKA-8706
 Project: Kafka
  Issue Type: Test
  Components: core
Affects Versions: 2.3.0
Reporter: Chandrasekhar
 Attachments: KafkaAUTFailures.txt

Hi

We have just imported the KAFKA 2.3.0 source code from the git repo and are 
compiling it using Gradle 4.7 on an Oracle VM with the following info:

[vagrant@localhost kafka-2.3.0]$ uname -a
 Linux localhost 4.1.12-112.14.1.el7uek.x86_64 #2 SMP Fri Dec 8 18:37:23 PST 
2017 x86_64 x86_64 x86_64 GNU/Linux
 [vagrant@localhost kafka-2.3.0]$

 

Upon compiling, there are 6 test failures at the end. The failed tests are 
reported as follows:

DescribeConsumerGroupTest. testDescribeOffsetsOfExistingGroupWithNoMembers 
 SaslSslAdminClientIntegrationTest. 
testReplicaCanFetchFromLogStartOffsetAfterDeleteRecords 
 UserQuotaTest. testQuotaOverrideDelete 
 UserQuotaTest. testThrottledProducerConsumer 
 MetricsDuringTopicCreationDeletionTest. testMetricsDuringTopicCreateDelete 
 SocketServerTest. testControlPlaneRequest

Please find the failures attached.

 

[^KafkaAUTFailures.txt]

 

 

 We would like to know if we are missing anything in our build environment or 
if these are known test failures in Kafka 2.3.0.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (ZOOKEEPER-3460) Zookeeper 3.4.13: keeps crashing after a repave in cloudnative environment.

2019-07-10 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882425#comment-16882425
 ] 

Chandrasekhar commented on ZOOKEEPER-3460:
--

Hi

 

We are cleaning our workspaces and redeploying... we planned to repave and 
recreate.

 

Can you provide some information on how to enable extra logging in ZooKeeper? 
We have the following in zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
#minSessionTimeout=4000
maxSessionTimeout=4
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
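
For extra logging, one option (a sketch, assuming the stock 3.4.x 
distribution layout) is to raise the log level in conf/log4j.properties and 
restart:

zookeeper.root.logger=DEBUG, CONSOLE
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=DEBUG
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n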

> Zookeeper 3.4.13: keeps crashing after a repave in cloudnative environment.
> ---
>
> Key: ZOOKEEPER-3460
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3460
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: other
>Affects Versions: 3.4.13
> Environment: Kubernetes Cloud Native Environment.
>Reporter: Chandrasekhar
>Priority: Major
> Attachments: ZooKeeperDeploymentDescription.txt, ZookeeperCrashLog.txt
>
>
> We have used the minimal binary installation for Zookeeper and every time 
> after repave the zookeeper keeps crashing with following logs...
> I have attached the zookeeper crash logs and deployment information. Is this 
> related to one of the NULL Pointer Issues mentioned in 
> https://issues.apache.org/jira/browse/ZOOKEEPER-3009 ?
> We are trying to find the exact issue here so our cloud native platform guys 
> can help us further. Kindly let us know how to turn on debugging further.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ZOOKEEPER-3460) Zookeeper 3.4.13: keeps crashing after a repave in cloudnative environment.

2019-07-10 Thread Chandrasekhar (JIRA)


[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16882269#comment-16882269
 ] 

Chandrasekhar commented on ZOOKEEPER-3460:
--

Hmm... We need help with RCA to identify whether this is an issue fixed in a 
later release. That way I can justify that 3.5.5 will fix it, and we can get 
around the issue.

> Zookeeper 3.4.13: keeps crashing after a repave in cloudnative environment.
> ---
>
> Key: ZOOKEEPER-3460
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3460
> Project: ZooKeeper
>  Issue Type: Bug
>  Components: other
>Affects Versions: 3.4.13
> Environment: Kubernetes Cloud Native Environment.
>    Reporter: Chandrasekhar
>Priority: Major
> Attachments: ZooKeeperDeploymentDescription.txt, ZookeeperCrashLog.txt
>
>
> We have used the minimal binary installation for Zookeeper and every time 
> after repave the zookeeper keeps crashing with following logs...
> I have attached the zookeeper crash logs and deployment information. Is this 
> related to one of the NULL Pointer Issues mentioned in 
> https://issues.apache.org/jira/browse/ZOOKEEPER-3009 ?
> We are trying to find the exact issue here so our cloud native platform guys 
> can help us further. Kindly let us know how to turn on debugging further.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ZOOKEEPER-3460) Zookeeper 3.4.13: keeps crashing after a repave in cloudnative environment.

2019-07-10 Thread Chandrasekhar (JIRA)
Chandrasekhar created ZOOKEEPER-3460:


 Summary: Zookeeper 3.4.13: keeps crashing after a repave in 
cloudnative environment.
 Key: ZOOKEEPER-3460
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3460
 Project: ZooKeeper
  Issue Type: Bug
  Components: other
Affects Versions: 3.4.13
 Environment: Kubernetes Cloud Native Environment.
Reporter: Chandrasekhar
 Attachments: ZooKeeperDeploymentDescription.txt, ZookeeperCrashLog.txt

We have used the minimal binary installation for ZooKeeper, and every time 
after a repave ZooKeeper keeps crashing with the following logs...

I have attached the ZooKeeper crash logs and deployment information. Is this 
related to one of the NullPointerException issues mentioned in 
https://issues.apache.org/jira/browse/ZOOKEEPER-3009 ?

We are trying to find the exact issue here so our cloud-native platform team 
can help us further. Kindly let us know how to turn on further debugging.
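
For a first look at server state, the 3.4.x four-letter words are handy (a 
sketch; these are enabled by default in 3.4.13):

echo ruok | nc localhost 2181   # expect: imok
echo stat | nc localhost 2181   # mode, connections, latency
echo envi | nc localhost 2181   # runtime environment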



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ansible-project] Ansible Replacing the existing Jar files with new Jar files with same file name is not performing any action

2019-05-01 Thread chandrasekhar Mallishetty
Thanks for info...

On Thu, May 2, 2019 at 7:34 AM James Cassell 
wrote:

> Please stop the spam. Your message came thru the first time.  Also, please
> don't spam individual members of the list with the same question you sent
> to the list.
>
> Thanks.
>
>
> R,
> James Cassell
>
>
> On Wed, May 1, 2019, at 9:33 PM, chandrasekhar Mallishetty wrote:
> >
> >
> > On Thursday, May 2, 2019 at 1:52:26 AM UTC+5:30, chandrasekhar
> Mallishetty
> > wrote:
> > >
> > > Dear All.
> > >
> > > Need your help we are in the process of Automating the cognos
> analytics
> > > servers from Linux env with ansible code .
> > >
> > > Requirement : we have two Jar files named  local_policy.jar
> > > and US_export_policy.jar in our current installation directory
> > > ../ibm/jre/lib/security/ and we need to replace the new
> local_policy.jar
> > > and US_export_policy.jar and we are writing the code as below .
> > >
> > > Ansible code:
> > >
> > >
> > >   - name: Put Oracle drivers in cognos directory
> > > get_url: >
> > >   url={{ artifactory_url
> > >
> }}/libs-ibm-lic-local/com/oracle/ojdbc7/12.1.0.2.0/ojdbc7-12.1.0.2.0.jar
> > >   dest={{ cognos_dir }}/drivers/ojdbc7-12.1.0.2.0.jar
> > >   owner='{{ cognos_user }}'
> > >   group='{{ cognos_group }}'
> > >   mode=0755
> > > become: yes
> > >
> > >   - name: Put US export policy file in cognos directory
> > > get_url: >
> > >  url={{ artifactory_url
> > >
> }}/libs-ibm-local/com/ibm/jce/US_export_policy/2013-02-27/US_export_policy-2013-02-27.jar
> > >  dest={{ cognos_dir }}/jre/lib/security/US_export_policy.jar
> > >  owner='{{ cognos_user }}'
> > >  group='{{ cognos_group }}'
> > >  mode=0755
> > > become: yes
> > >
> > >
> > > The above code is not taking any code or copying the new policy files
> > > since the  { cognos_dir }}/jre/lib/security/ has that files and if we
> > > change the name to US_export_policy_1.jar we can see the file new file
> > > US_export_policy_1.jar
> > >
> > > we modified the above code by adding force:Yes not sure if this works
> or
> > > not
> > >
> > > name: Put local policy file in cognos directory
> > > get_url: >
> > >  url={{ artifactory_url
> > >
> }}/libs-ibm-local/com/ibm/jce/local_policy/2013-02-27/local_policy-2013-02-27.jar
> > >  dest={{ cognos_dir }}/jre/lib/security/local_policy.jar
> > >  owner='{{ cognos_user }}'
> > >  group='{{ cognos_group }}'
> > >  mode=0755
> > >  force: yes
> > > become: yes
> > >
> > > Can any one suggest an exact approch to move furture
> > >
> > > Thanks
> > > M.chandra
> > >
> > >
> > >
> > >
> > >
> >
> > --
> > You received this message because you are subscribed to the Google
> > Groups "Ansible Project" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> > an email to ansible-project+unsubscr...@googlegroups.com.
> > To post to this group, send email to ansible-project@googlegroups.com.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/ansible-project/1278b89a-c60f-4fcd-9917-87a843d420a2%40googlegroups.com
> .
> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Ansible Project" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to ansible-project+unsubscr...@googlegroups.com.
> To post to this group, send email to ansible-project@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/ansible-project/1556762687.1833972.1669113152.575BBA83%40webmail.messagingengine.com
> .
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To post to this group, send email to ansible-project@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/CA%2BBDj747F%3DTOUA_ntmeN_rJjqy1NieViF9kq%2B_MS_d-x4pKjiw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[ansible-project] Ansible Replacing the existing Jar files with new Jar files with same file name is not performing any action

2019-05-01 Thread chandrasekhar Mallishetty


On Thursday, May 2, 2019 at 1:52:26 AM UTC+5:30, chandrasekhar Mallishetty 
wrote:
>
> Dear All.
>
> Need your help: we are in the process of automating the Cognos Analytics 
> servers on Linux with Ansible code.
>
> Requirement: we have two JAR files named local_policy.jar 
> and US_export_policy.jar in our current installation directory 
> ../ibm/jre/lib/security/, and we need to replace them with the new 
> local_policy.jar and US_export_policy.jar; we are writing the code as below.
>
> Ansible code: 
>  
>
>   - name: Put Oracle drivers in cognos directory
> get_url: >
>   url={{ artifactory_url 
> }}/libs-ibm-lic-local/com/oracle/ojdbc7/12.1.0.2.0/ojdbc7-12.1.0.2.0.jar
>   dest={{ cognos_dir }}/drivers/ojdbc7-12.1.0.2.0.jar
>   owner='{{ cognos_user }}'
>   group='{{ cognos_group }}'
>   mode=0755
> become: yes
>
>   - name: Put US export policy file in cognos directory
> get_url: >
>  url={{ artifactory_url 
> }}/libs-ibm-local/com/ibm/jce/US_export_policy/2013-02-27/US_export_policy-2013-02-27.jar
>  dest={{ cognos_dir }}/jre/lib/security/US_export_policy.jar
>  owner='{{ cognos_user }}'
>  group='{{ cognos_group }}'
>  mode=0755
> become: yes
>
>
> The above code is not taking any action or copying the new policy files, 
> since {{ cognos_dir }}/jre/lib/security/ already has those files; if we 
> change the name to US_export_policy_1.jar, we can see the new file 
> US_export_policy_1.jar.
>
> We modified the above code by adding force: yes, but we are not sure if 
> this works or not.
>
> name: Put local policy file in cognos directory
> get_url: >
>  url={{ artifactory_url 
> }}/libs-ibm-local/com/ibm/jce/local_policy/2013-02-27/local_policy-2013-02-27.jar
>  dest={{ cognos_dir }}/jre/lib/security/local_policy.jar
>  owner='{{ cognos_user }}'
>  group='{{ cognos_group }}'
>  mode=0755
>  force: yes
> become: yes
>
> Can anyone suggest an exact approach to move further?
>
> Thanks
> M.chandra
>
>
>
>
>
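
A hedged rewrite of the task above in structured YAML: in the 'get_url: >' 
free-form style, 'force: yes' gets folded into the key=value string and 
likely never reaches the module, whereas as a real module argument it makes 
get_url re-download even when dest already exists:

- name: Put US export policy file in cognos directory
  get_url:
    url: "{{ artifactory_url }}/libs-ibm-local/com/ibm/jce/US_export_policy/2013-02-27/US_export_policy-2013-02-27.jar"
    dest: "{{ cognos_dir }}/jre/lib/security/US_export_policy.jar"
    owner: "{{ cognos_user }}"
    group: "{{ cognos_group }}"
    mode: "0755"
    force: yes
  become: yes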

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To post to this group, send email to ansible-project@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/1278b89a-c60f-4fcd-9917-87a843d420a2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[PATCH v3] arch_topology: Make cpu_capacity sysfs node as ready-only

2019-03-31 Thread Lingutla Chandrasekhar
If user updates any cpu's cpu_capacity, then the new value is going to
be applied to all its online sibling cpus. But this need not to be correct
always, as sibling cpus (in ARM, same micro architecture cpus) would have
different cpu_capacity with different performance characteristics.
So, updating the user supplied cpu_capacity to all cpu siblings
is not correct.

And another problem is, current code assumes that 'all cpus in a cluster
or with same package_id (core_siblings), would have same cpu_capacity'.
But with commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
cpu topology sibling masks")', when a cpu hotplugged out, the cpu
information gets cleared in its sibling cpus. So, user supplied
cpu_capacity would be applied to only online sibling cpus at the time.
After that, if any cpu hotplugged in, it would have different cpu_capacity
than its siblings, which breaks the above assumption.

So, instead of mucking around the core sibling mask for user supplied
value, use device-tree to set cpu capacity. And make the cpu_capacity
node as read-only to know the asymmetry between cpus in the system.
While at it, remove cpu_scale_mutex usage, which used for sysfs write
protection.

Tested-by: Dietmar Eggemann 
Tested-by: Quentin Perret 
Reviewed-by: Quentin Perret 
Acked-by: Sudeep Holla 
Signed-off-by: Lingutla Chandrasekhar 

---

Changes from v2:
   - Corrected spelling mistakes in commit text.
Changes from v1:
   - Removed cpu_scale_mutex usage, suggested by Dietmar Eggemann.
Changes from v0:
   - Instead of iterating all possible cpus to update cpu capacity,
 removed write capability for the sysfs node.
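
For reference, the device-tree route mentioned above sets per-cpu capacity 
via the capacity-dmips-mhz property; a minimal sketch (values are 
illustrative, typical of a big.LITTLE system):

	cpus {
		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a53";
			reg = <0x0>;
			capacity-dmips-mhz = <578>;
		};
		cpu@100 {
			device_type = "cpu";
			compatible = "arm,cortex-a72";
			reg = <0x100>;
			capacity-dmips-mhz = <1024>;
		};
	};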

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index edfcf8d982e4..1739d7e1952a 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -7,7 +7,6 @@
  */
 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long 
cur_freq,
per_cpu(freq_scale, i) = scale;
 }
 
-static DEFINE_MUTEX(cpu_scale_mutex);
 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
 
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
@@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
 static void update_topology_flags_workfn(struct work_struct *work);
 static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
 
-static ssize_t cpu_capacity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf,
- size_t count)
-{
-   struct cpu *cpu = container_of(dev, struct cpu, dev);
-   int this_cpu = cpu->dev.id;
-   int i;
-   unsigned long new_capacity;
-   ssize_t ret;
-
-   if (!count)
-   return 0;
-
-   ret = kstrtoul(buf, 0, &new_capacity);
-   if (ret)
-   return ret;
-   if (new_capacity > SCHED_CAPACITY_SCALE)
-   return -EINVAL;
-
-   mutex_lock(&cpu_scale_mutex);
-   for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
-   topology_set_cpu_scale(i, new_capacity);
-   mutex_unlock(&cpu_scale_mutex);
-
-   schedule_work(_topology_flags_work);
-
-   return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 
 static int register_cpu_capacity_sysctl(void)
 {
@@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
return;
 
pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
-   mutex_lock(&cpu_scale_mutex);
for_each_possible_cpu(cpu) {
pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
 cpu, raw_capacity[cpu]);
@@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
cpu, topology_get_cpu_scale(NULL, cpu));
}
-   mutex_unlock(&cpu_scale_mutex);
 }
 
 bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.



[PATCH v2] arch_topology: Make cpu_capacity sysfs node as ready-only

2019-03-27 Thread Lingutla Chandrasekhar
If user updates any cpu's cpu_capacity, then the new value is going to
be applied to all its online sibling cpus. But this need not to be correct
always, as sibling cpus (in ARM, same micro architecture cpus) would have
different cpu_capacity with different performance characteristics.
So, updating the user supplied cpu_capacity to all cpu siblings
is not correct.

And another problem is, current code assumes that 'all cpus in a cluster
or with same package_id (core_siblings), would have same cpu_capacity'.
But with commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
cpu topology sibling masks")', when a cpu hotplugged out, the cpu
information gets cleared in its sibling cpus. So, user supplied
cpu_capacity would be applied to only online sibling cpus at the time.
After that, if any cpu hotplugged in, it would have different cpu_capacity
than its siblings, which breaks the above assumption.

So, instead of mucking around the core sibling mask for user supplied
value, use device-tree to set cpu capacity. And make the cpu_capacity
node as read-only to know the asymmetry between cpus in the system.
While at it, remove cpu_scale_mutex usage, which used for sysfs write
protection.

Tested-by: Dietmar Eggemann 
Tested-by: Quentin Perret 
Reviewed-by: Quentin Perret 
Acked-by: Sudeep Holla 
Signed-off-by: Lingutla Chandrasekhar 

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index edfcf8d982e4..1739d7e1952a 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -7,7 +7,6 @@
  */
 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long 
cur_freq,
per_cpu(freq_scale, i) = scale;
 }
 
-static DEFINE_MUTEX(cpu_scale_mutex);
 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
 
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
@@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
 static void update_topology_flags_workfn(struct work_struct *work);
 static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
 
-static ssize_t cpu_capacity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf,
- size_t count)
-{
-   struct cpu *cpu = container_of(dev, struct cpu, dev);
-   int this_cpu = cpu->dev.id;
-   int i;
-   unsigned long new_capacity;
-   ssize_t ret;
-
-   if (!count)
-   return 0;
-
-   ret = kstrtoul(buf, 0, &new_capacity);
-   if (ret)
-   return ret;
-   if (new_capacity > SCHED_CAPACITY_SCALE)
-   return -EINVAL;
-
-   mutex_lock(&cpu_scale_mutex);
-   for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
-   topology_set_cpu_scale(i, new_capacity);
-   mutex_unlock(&cpu_scale_mutex);
-
-   schedule_work(_topology_flags_work);
-
-   return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 
 static int register_cpu_capacity_sysctl(void)
 {
@@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
return;
 
pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
-   mutex_lock(&cpu_scale_mutex);
for_each_possible_cpu(cpu) {
pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
 cpu, raw_capacity[cpu]);
@@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
cpu, topology_get_cpu_scale(NULL, cpu));
}
-   mutex_unlock(&cpu_scale_mutex);
 }
 
 bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.



[PATCH v2] arch_topology: Make cpu_capacity sysfs node as ready-only

2019-03-27 Thread Lingutla Chandrasekhar
If user updates any cpu's cpu_capacity, then the new value is going to
be applied to all its online sibling cpus. But this need not to be correct
always, as sibling cpus (in ARM, same micro architecture cpus) would have
different cpu_capacity with different performance characteristics.
So updating the user supplied cpu_capacity to all cpu siblings
is not correct.

And another problem is, current code assumes that 'all cpus in a cluster
or with same package_id (core_siblings), would have same cpu_capacity'.
But with commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
cpu topology sibling masks")', when a cpu hotplugged out, the cpu
information gets cleared in its sibling cpus. So user supplied
cpu_capacity would be applied to only online sibling cpus at the time.
After that, if any cpu hot plugged in, it would have different cpu_capacity
than its siblings, which breaks the above assumption.

So instead of mucking around the core sibling mask for user supplied
value, use device-tree to set cpu capacity. And make the cpu_capacity
node as read-only to know the assymetry between cpus in the system.
While at it, remove cpu_scale_mutex usage, which used for sysfs write
protection.

Tested-by: Dietmar Eggemann 
Tested-by: Quentin Perret 
Acked-by: Sudeep Holla 
Reviewed-by: Quentin Perret 
Signed-off-by: Lingutla Chandrasekhar 

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index edfcf8d982e4..1739d7e1952a 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -7,7 +7,6 @@
  */
 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long 
cur_freq,
per_cpu(freq_scale, i) = scale;
 }
 
-static DEFINE_MUTEX(cpu_scale_mutex);
 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
 
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
@@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
 static void update_topology_flags_workfn(struct work_struct *work);
 static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
 
-static ssize_t cpu_capacity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf,
- size_t count)
-{
-   struct cpu *cpu = container_of(dev, struct cpu, dev);
-   int this_cpu = cpu->dev.id;
-   int i;
-   unsigned long new_capacity;
-   ssize_t ret;
-
-   if (!count)
-   return 0;
-
-   ret = kstrtoul(buf, 0, &new_capacity);
-   if (ret)
-   return ret;
-   if (new_capacity > SCHED_CAPACITY_SCALE)
-   return -EINVAL;
-
-   mutex_lock(&cpu_scale_mutex);
-   for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
-   topology_set_cpu_scale(i, new_capacity);
-   mutex_unlock(&cpu_scale_mutex);
-
-   schedule_work(_topology_flags_work);
-
-   return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 
 static int register_cpu_capacity_sysctl(void)
 {
@@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
return;
 
pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
-   mutex_lock(&cpu_scale_mutex);
for_each_possible_cpu(cpu) {
pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
 cpu, raw_capacity[cpu]);
@@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
cpu, topology_get_cpu_scale(NULL, cpu));
}
-   mutex_unlock(&cpu_scale_mutex);
 }
 
 bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.



[PATCH v2] arch_topology: Make cpu_capacity sysfs node as ready-only

2019-03-08 Thread Lingutla Chandrasekhar
If user updates any cpu's cpu_capacity, then the new value is going to
be applied to all its online sibling cpus. But this need not to be correct
always, as sibling cpus (in ARM, same micro architecture cpus) would have
different cpu_capacity with different performance characteristics.
So updating the user supplied cpu_capacity to all cpu siblings
is not correct.

And another problem is, current code assumes that 'all cpus in a cluster
or with same package_id (core_siblings), would have same cpu_capacity'.
But with commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
cpu topology sibling masks")', when a cpu hotplugged out, the cpu
information gets cleared in its sibling cpus. So user supplied
cpu_capacity would be applied to only online sibling cpus at the time.
After that, if any cpu hot plugged in, it would have different cpu_capacity
than its siblings, which breaks the above assumption.

So instead of mucking around the core sibling mask for user supplied
value, use device-tree to set cpu capacity. And make the cpu_capacity
node as read-only to know the assymetry between cpus in the system.
While at it, remove cpu_scale_mutex usage, which used for sysfs write
protection.

Tested-by: Dietmar Eggemann 
Acked-by: Sudeep Holla 
Signed-off-by: Lingutla Chandrasekhar 
---

Changes from v1:
   - Removed cpu_scale_mutex usage, suggested by Dietmar Eggemann.
---
 drivers/base/arch_topology.c | 36 +---
 1 file changed, 1 insertion(+), 35 deletions(-)

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index edfcf8d..1739d7e 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -7,7 +7,6 @@
  */
 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -31,7 +30,6 @@ void arch_set_freq_scale(struct cpumask *cpus, unsigned long 
cur_freq,
per_cpu(freq_scale, i) = scale;
 }
 
-static DEFINE_MUTEX(cpu_scale_mutex);
 DEFINE_PER_CPU(unsigned long, cpu_scale) = SCHED_CAPACITY_SCALE;
 
 void topology_set_cpu_scale(unsigned int cpu, unsigned long capacity)
@@ -51,37 +49,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
 static void update_topology_flags_workfn(struct work_struct *work);
 static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
 
-static ssize_t cpu_capacity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf,
- size_t count)
-{
-   struct cpu *cpu = container_of(dev, struct cpu, dev);
-   int this_cpu = cpu->dev.id;
-   int i;
-   unsigned long new_capacity;
-   ssize_t ret;
-
-   if (!count)
-   return 0;
-
-   ret = kstrtoul(buf, 0, &new_capacity);
-   if (ret)
-   return ret;
-   if (new_capacity > SCHED_CAPACITY_SCALE)
-   return -EINVAL;
-
-   mutex_lock(&cpu_scale_mutex);
-   for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
-   topology_set_cpu_scale(i, new_capacity);
-   mutex_unlock(&cpu_scale_mutex);
-
-   schedule_work(_topology_flags_work);
-
-   return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 
 static int register_cpu_capacity_sysctl(void)
 {
@@ -141,7 +109,6 @@ void topology_normalize_cpu_scale(void)
return;
 
pr_debug("cpu_capacity: capacity_scale=%u\n", capacity_scale);
-   mutex_lock(&cpu_scale_mutex);
for_each_possible_cpu(cpu) {
pr_debug("cpu_capacity: cpu=%d raw_capacity=%u\n",
 cpu, raw_capacity[cpu]);
@@ -151,7 +118,6 @@ void topology_normalize_cpu_scale(void)
pr_debug("cpu_capacity: CPU%d cpu_capacity=%lu\n",
cpu, topology_get_cpu_scale(NULL, cpu));
}
-   mutex_unlock(&cpu_scale_mutex);
 }
 
 bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.



[PATCH v1] arch_topology: Make cpu_capacity sysfs node as ready-only

2019-03-06 Thread Lingutla Chandrasekhar
If user updates any cpu's cpu_capacity, then the new value is going to
be applied to all its online sibling cpus. But this need not to be correct
always, as sibling cpus (in ARM, same micro architecture cpus) would have
different cpu_capacity with different performance characteristics.
So updating the user supplied cpu_capacity to all cpu siblings
is not correct.

And another problem is, current code assumes that 'all cpus in a cluster
or with same package_id (core_siblings), would have same cpu_capacity'.
But with commit '5bdd2b3f0f8 ("arm64: topology: add support to remove
cpu topology sibling masks")', when a cpu hotplugged out, the cpu
information gets cleared in its sibling cpus. So user supplied
cpu_capacity would be applied to only online sibling cpus at the time.
After that, if any cpu hot plugged in, it would have different cpu_capacity
than its siblings, which breaks the above assumption.

So instead of mucking around the core sibling mask for user supplied
value, use device-tree to set cpu capacity. And make the cpu_capacity
node as read-only to know the assymetry between cpus in the system.

Signed-off-by: Lingutla Chandrasekhar 
---
 drivers/base/arch_topology.c | 33 +
 1 file changed, 1 insertion(+), 32 deletions(-)

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index edfcf8d..d455897 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -7,7 +7,6 @@
  */
 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -51,37 +50,7 @@ static ssize_t cpu_capacity_show(struct device *dev,
 static void update_topology_flags_workfn(struct work_struct *work);
 static DECLARE_WORK(update_topology_flags_work, update_topology_flags_workfn);
 
-static ssize_t cpu_capacity_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf,
- size_t count)
-{
-   struct cpu *cpu = container_of(dev, struct cpu, dev);
-   int this_cpu = cpu->dev.id;
-   int i;
-   unsigned long new_capacity;
-   ssize_t ret;
-
-   if (!count)
-   return 0;
-
-   ret = kstrtoul(buf, 0, &new_capacity);
-   if (ret)
-   return ret;
-   if (new_capacity > SCHED_CAPACITY_SCALE)
-   return -EINVAL;
-
-   mutex_lock(&cpu_scale_mutex);
-   for_each_cpu(i, &cpu_topology[this_cpu].core_sibling)
-   topology_set_cpu_scale(i, new_capacity);
-   mutex_unlock(&cpu_scale_mutex);
-
-   schedule_work(_topology_flags_work);
-
-   return count;
-}
-
-static DEVICE_ATTR_RW(cpu_capacity);
+static DEVICE_ATTR_RO(cpu_capacity);
 
 static int register_cpu_capacity_sysctl(void)
 {
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.



Re: Feedback wanted for Knowledge base for all things cassandra (cassandra.link)

2019-02-25 Thread Chandrasekhar Thumuluru
Wow, this is great! Thanks!

On Mon, Feb 25, 2019 at 7:05 AM Rahul Singh 
wrote:

> Folks,
>
> I've been scrounging time to work on a knowledge resource for all things
> Cassandra ( Cassandra, DSE, Scylla, YugaByte, Elassandra)
>
> I feel like the Cassandra core community still has the most knowledge even
> though people are fragmenting into their brands.
>
> Would love to get your feedback on what you guys would want as a go to
> resource for Cassandra development, administration, architecture, etc.
> resources.
>
> *MVP  1*
> https://anant.github.io/awesome-cassandra
>
>
> *MVP  2*
> https://cassandra.netlify.com/
>
> *MVP  3*
>
> https://leaves-search.netlify.com/documents.html#/q=*:*=tags:(cassandra)=*=20&=
>   -
>
> Each of these were iterated with feedback from the community, so would
> love to get your feedback to make it better.
>
> Up next is to add the RSS feeds from the major Cassandra folks like on
> https://cassandra.alteroot.org
>
> Thanks for your feedback in advance.
>
>


Re: [Rails-core] Re: [ActiveStorage] Feature Request: attachment validations

2019-02-21 Thread Abhishek Chandrasekhar
Hey everyone!

It seems this has been open for a while, and as per George's last statement 
I'm assuming a pull request is still welcome? If so, I've been working on 
this for the past few days.
(As a small side note, there does already seem to be a PR open, but I 
provided my thoughts on that there, and I believe Igor responded in support 
as well.)

I had a question about the `:presence` validator.


*The Issue*

I assume we want validates :avatar, presence: true to validate that a ::Blob 
is actually attached. However activestorage attachments don't work like 
standard model fields: all changes to attachments and blobs are processed 
*after* the record itself is saved (for good reason).

This unfortunately means any validations for :presence run at a time when 
the attachment changes (stored in @attachment_changes) have not yet been 
applied. 


*Potential Workaround + Another Issue*

One workaround would be to have the validator look at what changes are 
queued up to be applied and decide if an attachment *is expected* to be 
present or blank after the record is saved. 
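
In rough code, that might look like the following (names are hypothetical, 
and it leans on the internal attachment_changes API, so treat it as a sketch 
rather than supported Rails API):

```ruby
# Hypothetical validator, not part of Active Storage: derives presence
# from the queued attachment changes instead of the persisted state.
class AttachedValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    change = record.attachment_changes[attribute.to_s]

    attached =
      if change.nil?
        value.attached?  # no pending change: fall back to persisted state
      else
        !change.is_a?(ActiveStorage::Attached::Changes::DeleteOne) &&
          !change.is_a?(ActiveStorage::Attached::Changes::DeleteMany)
      end

    record.errors.add(attribute, :blank) unless attached
  end
end

class User < ApplicationRecord
  has_one_attached :avatar
  validates :avatar, attached: true # resolved to AttachedValidator by convention
end
```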

This would work well, but the :detach, :purge, and :purge_later throw a 
wrench in this logic. They *immediately* delete the attachments (or at 
least queue them up for deletion) before setting the attachment to nil. So 
when the record is finally validated later it might fail :presence 
validations but it's too late because the association ::Attachment/::Blob 
records are already destroyed.



All this makes what should be a "simple" presence check much more complex 
because of the nature of how attachments are stored. Unless I'm missing an 
obvious solution, which I might be!

My personal thought: If we want attachments to feel like true model fields 
then the validation should be performed first and the deletion should be 
blocked/avoided if the validation fails. (And of course, users can always 
bypass validations as needed).

Would love to hear your thoughts. In the meantime, I'm happy to implement 
the size and content-type validations.

As always, thanks for everyone's time here. As an avid Rails user, 
everyone's contribution is much appreciated.


On Tuesday, November 27, 2018 at 3:51:01 PM UTC-5, ifom...@gmail.com wrote:
>
> Hi George,
>
> thanks for your positive feedback! I'll see how far I can get :)
>
> Best regards,
> Ivan
>
> On Tuesday, November 27, 2018 at 4:21:28 PM UTC+1, George Claghorn wrote:
>>
>> Validations are planned for Rails 6. Here’s a rough sketch of the API I 
>> have in mind:
>>
>> validates_attached :logo, presence: true, byte_size: { less_than: 
>> 10.megabytes, message: "must be smaller than 10 MB" }, content_type: 
>> /\Aimage\//
>>
>> I intended to implement this myself, and laid the groundwork for it in 
>> the commit Rob mentioned, but Igor Kasyanchuk asked if he could fold 
>> active_storage_validations into Active Storage proper: 
>> https://github.com/rails/rails/issues/33741. Since September, I’ve been 
>> giving him time to open a PR.
>>
>> Please feel free to investigate yourself. Rails 6 is slated for early 
>> next year, so if nobody else opens a PR before then, I’ll come back to 
>> validations after the holidays.
>>
>> On Mon, Nov 26, 2018 at 4:13 PM  wrote:
>>
>>> Hmmm...
>>>
>>> Hi Rob,
>>>
>>> thanks for your reply!
>>>
>>> I see how one can validate presence of a blob from this change, but I'm 
>>> not sure what could be the syntax for validating content type, filename, or 
>>> file size after this change. Could you please elaborate on that? The commit 
>>> you referred to provides neither documentation nor tests for these cases.
>>>
>>> Best regards,
>>> Ivan
>>>
>>> On Monday, November 26, 2018 at 9:53:12 PM UTC+1, Rob Zolkos wrote:

 Rails 6 will have validations for AS  
 https://github.com/rails/rails/commit/e8682c5bf051517b0b265e446aa1a7eccfd47bf7#diff-c76fb6202b7f95a08fe12f40c4999ac9R11

 On Mon, Nov 26, 2018 at 3:36 PM  wrote:

> Hi all,
>
> I think this is one of the essential features that are missing in 
> Active Storage. Thus I'm pretty sure it's gonna be implemented pretty 
> soon 
> one way or another, and I wonder what is the maintainers' plan for it, if 
> there is any.
>
> I know about active_storage_validations gem, but its functionality is 
> quite limited and the gem itself is pretty self-inconsistent and raw 
> (though it's the best publicly available gem I could find, kudos to the 
> maintainers!)
>
> One approach I'm thinking of would be to adapt paperclip's validators 
> for Active Storage (thanks to MIT license), and I think I could do it, 
> but 
> I'm not sure if it's gonna be accepted. One doesn't have to invent the 
> wheel, but I'd like to hear an expert opinion.
>
> Thank you,
>
> Ivan
>
> -- 
> You received this message because you are subscribed to the Google 

[jira] [Created] (CARBONDATA-3204) Create BloomFilter DataMap Command Fails.

2018-12-27 Thread ChandraSekhar Saripaka (JIRA)
ChandraSekhar Saripaka created CARBONDATA-3204:
--

 Summary: Create BloomFilter DataMap Command Fails.
 Key: CARBONDATA-3204
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3204
 Project: CarbonData
  Issue Type: Bug
Reporter: ChandraSekhar Saripaka


{code:java}
18/12/26 18:19:02 WARN datamap.DataMapStoreManager: failed to get carbon table from table Path
File does not exist: alluxio:///user/hive/carbondata-query-poc/carbondata/bloom_filter_no_sort_columns_tbl/Metadata/schema
18/12/26 18:19:02 INFO datamap.CarbonDropDataMapCommand: Table MetaData Unlocked Successfully
18/12/26 18:19:02 WARN datamap.DataMapStoreManager: failed to get carbon table from table Path
File does not exist: alluxio:///user/hive/carbondata-query-poc/carbondata/bloom_filter_no_sort_columns_tbl/Metadata/schema
18/12/26 18:19:02 AUDIT carbon.audit: {"time":"December 26, 2018 6:19:02 PM SGT","username":"hive","opName":"DROP DATAMAP","opId":"3195704513056831","opStatus":"SUCCESS","opTime":"1805 ms","table":"carbondata_hive.bloom_filter_no_sort_columns_tbl","extraInfo":{"dmName":"bloom_filter_no_sort_columns_tbl_map"}}
18/12/26 18:19:02 AUDIT carbon.audit: {"time":"December 26, 2018 6:19:02 PM SGT","username":"hive","opName":"CREATE DATAMAP","opId":"3195703381271121","opStatus":"FAILED","opTime":"2935 ms","table":"carbondata_hive.bloom_filter_no_sort_columns_tbl","extraInfo":{"Exception":"java.lang.NullPointerException"}}
18/12/26 18:19:02 ERROR carbondata.Main$: BloomFilter case was failed
java.lang.NullPointerException
	at org.apache.carbondata.core.metadata.schema.table.DiskBasedDMSchemaStorageProvider.retrieveSchemas(DiskBasedDMSchemaStorageProvider.java:114)
	at org.apache.carbondata.core.datamap.DataMapStoreManager.getDataMapSchemasOfTable(DataMapStoreManager.java:152)
	at org.apache.carbondata.core.datamap.DataMapStoreManager.getAllDataMap(DataMapStoreManager.java:133)
	at org.apache.spark.sql.events.MergeBloomIndexEventListener.onEvent(MergeBloomIndexEventListener.scala:42)
	at org.apache.carbondata.events.OperationListenerBus.fireEvent(OperationListenerBus.java:83)
	at org.apache.carbondata.datamap.IndexDataMapRebuildRDD$.rebuildDataMap(IndexDataMapRebuildRDD.scala:137)
	at org.apache.carbondata.datamap.IndexDataMapRebuildRDD.rebuildDataMap(IndexDataMapRebuildRDD.scala)
	at org.apache.carbondata.datamap.IndexDataMapProvider.rebuild(IndexDataMapProvider.java:102)
	at org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processData(CarbonCreateDataMapCommand.scala:166)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:147)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand$$anonfun$run$3.apply(package.scala:144)
	at org.apache.spark.sql.execution.command.Auditable$class.runWithAudit(package.scala:104)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand.runWithAudit(package.scala:140)
	at org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:144)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
	at org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:91)
	at org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:90)
	at org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:136)
	at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:88)
	at com.dbs.carbondata.cases.carbondata.datamaps.DataMap.createDataMap(DataMap.scala:36)
	at com.dbs.carbondata.cases.carbondata.datamaps.DataMap.createTable(DataMap.scala:42)
	at com.dbs.carbondata.Main$.com$dbs$carbondata$Main$$getTimeReport(Main.scala:35)
	at com.dbs.carbondata.Main$$anonfun$1.apply(Main.scala:80)
	at com.dbs.carbondata.Main$$anonfun$1.apply(Main.scala:80)
	at scala.collection.immutable.List.map(List.scala:277)
	at com.dbs.carbondata.Main$.delayedEndpoint$com$dbs$carbondata$Main$1(Main.scala:80)
	at com.dbs.carbondata.Main$delayedInit$body.apply(Main.scala:29)
	at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
	at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
	at scala.App$$anonfun$main$1.apply(App.scala:76)
	at scala.App$$anonfun$main$1.apply(App.scala:

[jira] [Updated] (CARBONDATA-3193) Support for CDH 5.14.2 Spark 2.2.0

2018-12-23 Thread ChandraSekhar Saripaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChandraSekhar Saripaka updated CARBONDATA-3193:
---
Summary: Support for CDH 5.14.2 Spark 2.2.0  (was: Support for CDH 5.14 
Spark 2.2.0)

> Support for CDH 5.14.2 Spark 2.2.0
> --
>
> Key: CARBONDATA-3193
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3193
> Project: CarbonData
>  Issue Type: Improvement
>  Components: spark-integration
>Affects Versions: NONE
>    Reporter: ChandraSekhar Saripaka
>Priority: Major
>  Labels: CDH5
>
> Carbondata build support for CDH5.14.2 spark2.2.0





[jira] [Created] (CARBONDATA-3193) Support for CDH 5.14 Spark 2.2.0

2018-12-21 Thread ChandraSekhar Saripaka (JIRA)
ChandraSekhar Saripaka created CARBONDATA-3193:
--

 Summary: Support for CDH 5.14 Spark 2.2.0
 Key: CARBONDATA-3193
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3193
 Project: CarbonData
  Issue Type: Improvement
  Components: spark-integration
Affects Versions: NONE
Reporter: ChandraSekhar Saripaka


Carbondata build support for CDH5.14.2 spark2.2.0





Re: [Rails-core] [Feature][ActiveStorage] Pre-defined Variants

2018-11-27 Thread Abhishek Chandrasekhar
@kasper - Allowing the one-off options on a given variant is an even better 
approach and just as easily implementable. Thanks for the suggestion.

> I’m not looking to see this through though, so you’d have to get George 
Claghorn or someone else on board for this ride.

Totally understood :)
@George - would love to hear your thoughts on this. If you agree, I don't 
mind picking up the development and submitting a patch. 

Thanks.

On Tuesday, November 27, 2018 at 10:02:39 AM UTC-5, Kasper Timm Hansen 
wrote:
>
> I do remember proposing this at one point internally:
>
> has_one_attached :avatar do |attachable|
> attachable.variant :small, resize: '100x100>'
>   …
> end
>
> I’m also fine with exposing it as `variant(:small)` or for one offs 
> `variant(:small, caption: 'foo')`.
>
> I’m not looking to see this through though, so you’d have to get George 
> Claghorn or someone else on board for this ride.
>
> Appreciate the extensive write up with reasoning!
>
> On 27 Nov 2018, at 14:28, Abhishek Chandrasekhar <
> abhishek.ch...@gmail.com> wrote:
>
> Hello all -
>
> Firstly, huge thanks to those who have worked on ActiveStorage so far. The 
> library seems to be coming along nicely. 
>
>
> ActiveStorage currently allows you to define variants of an attachment as 
> follows:
>
> ```ruby
> class User < ActiveRecord::Base
>   has_one_attached :avatar
>
>   # ...
> end
>
> user.avatar.variant(resize: "100x100>")
> user.avatar.variant(resize: "100x100>", caption: "foo")
> user.avatar.variant(resize: "200x200", rotate: "-90")
> ```
>
>
> I'd like to propose the following functionality that lets users configure 
> and pre-define variants. 
>
> ```ruby
> class User < ActiveRecord::Base
>   has_one_attached(
> :avatar, 
> variants: { 
>   small: { resize: "100x100>" },
>   small_captioned: { resize: "100x100>", caption: "foo" },
>   medium_rotated: { resize: "200x200", rotate: "-90" }
> }
>   )
>
>   # ...
> end
>
> user.avatar.variant(:small)
> user.avatar.variant(:small_captioned)
> user.avatar.variant(:medium_rotated)
>
> # Something not pre-definied
> user.avatar.variant(rotate: "120")
> ```
>
> This is similar in concept to how existing attachment libraries 
> (paperclip, carrierwave, etc...) have allowed definition and configuration 
> of variants.
>
> It is true that this functionality can be mimicked outside of 
> activestorage by having the developer maintain a manual mapping of key 
> names to variant configurations. However, I believe this should be part of 
> ActiveStorage directly because -
>
>
> 1. It leads to cleaner/more readable code (e.g. `
> user.avatar.variant(:small)` is easy to understand)
> 2. It keeps configuration consolidated inline with `has_one_attached`, 
> which is similar to how options are already defined inline with `has_one`, 
> `has_many`, etc...
> 3. It's fully backward compatible with how variants are invoked right now 
> and doesn't force you to use a particular approach.
>
> Would such a feature be accepted if I were to submit a pull request for 
> it? 
>
> Thank you!
>
> --
> Kasper
>
>



[E1000-devel] skb corruption

2018-11-27 Thread Chandrasekhar Nagaraj
Hi,

I have an Intel(R) Xeon(R) CPU E5-2680 based system with an X710-based 10GbE card.
The i40e driver version used is 1.5.16 and firmware version is 5.02.
I have observed one crash where the RIP points to secpath_put() when skb->sp
is accessed, although no IPsec connections were established.

[844890.055882] task: 88085439b000 ti: 8808543a8000 task.ti: 
8808543a8000
[844890.063509] RIP: 0010:[]  [] 
skb_release_head_state+0x31/0xf0
[844890.072559] RSP: 0018:88085fc83dd8  EFLAGS: 00010206
[844890.077991] RAX:  RBX: 881011287900 RCX: 
0001
[844890.085270] RDX: 0500 RSI:  RDI: 

[844890.092548] RBP: 88085fc83de0 R08:  R09: 
0001802a0027
[844890.099828] R10: 81378240 R11: ea004044a100 R12: 
881011287900
[844890.107108] R13: ff55 R14: 8807ca20f550 R15: 
880803e84a00
[844890.114387] FS:  () GS:88085fc8() 
knlGS:
[844890.122620] CS:  0010 DS:  ES:  CR0: 80050033
[844890.128486] CR2:  CR3: 01e0d000 CR4: 
001407e0
[844890.135765] Stack:
[844890.137890]  881011287900 88085fc83df8 816d3632 
8810138e6548
[844890.145517]  88085fc83e18 816d3791 8810138e6548 
8807ca20f550
[844890.153141]  88085fc83e28 816e4f85 88085fc83eb0 
a0075790
[844890.160769] Call Trace:
[844890.163328]  
[844890.165352]
[844890.166986]  [] skb_release_all+0x12/0x30
[844890.171289]  [] consume_skb+0x31/0x80
[844890.176641]  [] __dev_kfree_skb_any+0x35/0x40
[844890.182697]  [] i40e_napi_poll+0x100/0x1000 [i40e]
[844890.189180]  [] net_rx_action+0x129/0x220
[844890.194880]  [] __do_softirq+0xb7/0x2a0
[844890.200406]  [] irq_exit+0x95/0xa0
[844890.205489]  [] do_IRQ+0x56/0xe0
[844890.210401]  [] common_interrupt+0x6a/0x6a
[844890.216185]  
[844890.218205]
[844890.219840]  [] ? cpuidle_enter_state+0x46/0xb0
[844890.224663]  [] ? cpuidle_enter_state+0x42/0xb0
[844890.230883]  [] cpuidle_idle_call+0xbe/0x200
[844890.236839]  [] arch_cpu_idle+0xe/0x20
[844890.242277]  [] cpu_startup_entry+0xc5/0x260
[844890.248236]  [] start_secondary+0x190/0x1e0
[844890.254099] Code: 48 89 e5 53 48 89 fb 48 8b 7f 58 48 85 ff 74 12 40 f6 c7 
01 0f 84 a0 00 00 00 48 c7 43 58 00 00 00 00 48 8b 7b 60 48 85 ff 74 05  ff 
0f 74 7a 48 8b 83 80 00 00 00 48 85 c0 74 15 65 8b 14 25
[844890.274478] RIP  [] skb_release_head_state+0x31/0xf0
[844890.288994]  RSP 

This is probably an indication of a double free of an skb leading to skb
corruption. Any pointers would be helpful.
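
For reference, the crash site corresponds roughly to this fragment of
skb_release_head_state() in kernels of that era (a simplified sketch, not the
exact upstream source):

/* net/core/skbuff.c, simplified (3.x/4.x era) */
static void skb_release_head_state(struct sk_buff *skb)
{
	skb_dst_drop(skb);
#ifdef CONFIG_XFRM
	/*
	 * skb->sp is read unconditionally here. After a double free the
	 * skb memory may already have been reused, so sp can hold a stale
	 * non-NULL pointer even when no IPsec state was ever attached,
	 * which matches a crash in secpath_put() with no IPsec in use.
	 */
	secpath_put(skb->sp);
#endif
	if (skb->destructor)
		skb->destructor(skb);
	/* ... conntrack and nf_bridge puts follow ... */
}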

Thanks,
Chandrasekhar



[ansible-project] Using junos_config or netconf_config for configuration management on Juniper devices

2018-09-20 Thread chandrasekhar vajpayee madduri
Hi,
I am trying to use Ansible for configuration management on JUNOS devices. I am
getting the below error and need help in resolving it.

ansible-playbook 2.5.4
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/attcloud/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]

Using /etc/ansible/ansible.cfg as config file
Parsed /home/attcloud/netconf_PoC/netconfNtp/inventory/dbehosts inventory source with ini plugin
ERROR! Unexpected Exception, this is probably a bug: 'module' object has no attribute 'SSL_ST_INIT'

The full traceback was:

Traceback (most recent call last):
  File "/usr/bin/ansible-playbook", line 118, in <module>
    exit_code = cli.run()
  File "/usr/lib/python2.7/dist-packages/ansible/cli/playbook.py", line 122, in run
    results = pbex.run()
  File "/usr/lib/python2.7/dist-packages/ansible/executor/playbook_executor.py", line 89, in run
    self._tqm.load_callbacks()
  File "/usr/lib/python2.7/dist-packages/ansible/executor/task_queue_manager.py", line 190, in load_callbacks
    for callback_plugin in callback_loader.all(class_only=True):
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 435, in all
    module = self._load_module_source(name, path)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/loader.py", line 345, in _load_module_source
    module = imp.load_source(full_name, path, module_file)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/callback/foreman.py", line 68, in <module>
    import requests
  File "/usr/lib/python2.7/dist-packages/requests/__init__.py", line 53, in <module>
    from .packages.urllib3.contrib import pyopenssl
  File "/usr/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 54, in <module>
    import OpenSSL.SSL
  File "/usr/lib/python2.7/dist-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "/usr/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 118, in <module>
    SSL_ST_INIT = _lib.SSL_ST_INIT
AttributeError: 'module' object has no attribute 'SSL_ST_INIT'

I am using Ubuntu 16.04.3 LTS, and installing the pyOpenSSL package using
pip/yum/git clone is not working. Please help me out.

Regards,
Chandra Madduri




Re: [datameet] Extracting NSSO data

2018-08-13 Thread Chandrasekhar S.
Greetings!

If you purchased the data from NSSO, it comes with a program (Nesstar) that
extracts the data for you. Use this program and it will extract to
whichever format you would like, including STATA.

Hope this helps.

Chandrasekhar

On Wed, Aug 8, 2018 at 10:59 PM, Tarun Kateja 
wrote:

> Hi Sachin,
>
> I also want to extract 68th round Household and Consumer expenditure data.
> I am a little confused and have never worked with Stata. Can you explain what
> the multiplier is and how to use it? And can you share your code to extract
> data from the .txt file?
>
> This will be a great help!
>
> Thanks
>
> On Monday, September 5, 2016 at 1:03:02 PM UTC+5:30, sachin wrote:
>>
>> Hi,
>> I have used 68th round data for agri consumption and poverty estimation
>> using STATA.
>> I am assuming that the raw data you are referring to is also available in
>> .txt format. As I know, the NSSO data has a highly structured format -
>> Schedule.Level>Block>Item No. The variables are not declared in the raw
>> data. These variables are to be understood from the "layout" file for that
>> specific round (released along with the NSSO round data) and this is
>> available along with raw data.
>>
>> The data is a long string of characters. These are read in a specific
>> manner. The layout file will specify how many characters must be read
>> together to form each variable. So it could look like -
>> v11 1-3 v12 4-8 v13 9-10 v14 11-13 v15 14-14 v16 15-15 v17 16-18 v18
>> 19-20 v19 21-22 v110 23-24 v111 25-25 v112 26-26 and so on.
>>
>> Now, this is the data that is then called from your software, to be read
>> from a raw data file (.txt) and then a table of required variables is
>> obtained for analysis. In a sense, the raw data is always excerpted for
>> analysis. And for this one begins with the layout file to check the
>> variables of interest and how they are encoded in the data.
>>
>> I am not sure this helps. With STATA it works fairly easily. With R, I do
>> not know how to assemble the same dataframe, although the analysis using
>> the variables will be a breeze.
>>
>> Best
>> Sachin
>>
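
As an aside, a minimal sketch of slicing one fixed-width record using the
column ranges quoted above (v11 = columns 1-3, v12 = 4-8, v13 = 9-10); the
record string and buffer size are made up for illustration:

#include <stdio.h>
#include <string.h>

/* One field of the layout: 1-based, inclusive column range. */
struct field { const char *name; int start, end; };

int main(void)
{
	const char *record = "0420100107031101";  /* one raw data line (made up) */
	const struct field layout[] = {
		{ "v11", 1, 3 }, { "v12", 4, 8 }, { "v13", 9, 10 },
	};
	char buf[16];

	for (size_t i = 0; i < sizeof(layout) / sizeof(layout[0]); i++) {
		int len = layout[i].end - layout[i].start + 1;
		memcpy(buf, record + layout[i].start - 1, (size_t)len);
		buf[len] = '\0';
		printf("%s = %s\n", layout[i].name, buf);
	}
	return 0;
}
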
>>
>>
>> On Sunday, 4 September 2016 15:27:55 UTC+5:30, Devdatta Tengshe wrote:
>>>
>>> Can you share the link where this data is available? That way we can
>>> have a look at it.
>>>
>>> Regards,
>>> Devdatta Tengshe
>>> Ph: 735-358-0782
>>>
>>> On 04-Sep-2016 3:01 pm, "Jagriti Arora"  wrote:
>>>
>>>> Hi,
>>>> Can anyone tell me how I can make sense of the raw data NSSO provides
>>>> on its website?
>>>> I tried converting the XML to dataframe in R, to no avail. I, now, have
>>>> an excel sheet with references and variables that have not been previously
>>>> declared.
>>>> Can anyone help? I'm looking for data from 38th and 66th round.
>>>>
>>>> Thanks and regards!
>>>>



RE: Kernel API's for ADC

2018-07-26 Thread chandrasekhar
I am using the Freescale i.MX6UL processor. I have to do it in kernel space: I
have a thermal printer attached to the processor, and I have to continuously
monitor the temperature of the printer head in order to adjust the printing
time values.
So, based on the temperature values, I have to decide dynamically how long the
printing time should be, or throw an error if the temperature reaches a
threshold. We already wrote a driver which handles the SPI communication, motor
rotation and strobe (heating the paper); the temperature/ADC reading has to be
done within the same driver.
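
In case it helps, here is a minimal sketch of reading a channel through the
in-kernel IIO consumer interface. This assumes the ADC is exposed as an IIO
device and that the driver's device tree node has an io-channels mapping; the
channel name "head-temp" and the helper itself are made up for illustration:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/iio/consumer.h>

/* Hypothetical helper: read the printer-head thermistor once.
 * Assumes a DT mapping such as:
 *   io-channels = <&adc 3>;
 *   io-channel-names = "head-temp";
 */
static int read_head_temp(struct device *dev, int *val)
{
	struct iio_channel *chan;
	int ret;

	chan = iio_channel_get(dev, "head-temp");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	/* Prefer the scaled value; fall back to the raw register value. */
	ret = iio_read_channel_processed(chan, val);
	if (ret < 0)
		ret = iio_read_channel_raw(chan, val);

	iio_channel_release(chan);
	return ret;
}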

Thanks and Regards,
Chandrasekhar
-Original Message-
From: Greg KH [mailto:g...@kroah.com] 
Sent: Wednesday, July 25, 2018 7:17 PM
To: chandrasekhar
Cc: 'Daniel Baluta'; 'Kernelnewbies'
Subject: Re: Kernel API's for ADC

On Wed, Jul 25, 2018 at 04:47:29PM +0530, chandrasekhar wrote:
> Not via the sysfs interface. I want to read the ADC using kernel APIs in
> kernel space.

For your specific driver and device, or for "any" driver/device?

And why do you need this, what are you going to do with that
information?

thanks,

greg k-h



RE: Kernel API's for ADC

2018-07-25 Thread chandrasekhar
Not via the sysfs interface. I want to read the ADC using kernel APIs in
kernel space.

Thanks and Regards,
Chandrasekhar

-Original Message-
From: Daniel Baluta [mailto:daniel.bal...@gmail.com] 
Sent: Wednesday, July 25, 2018 4:37 PM
To: chandrasekhar
Cc: Kernelnewbies
Subject: Re: Kernel API's for ADC

On Wed, Jul 25, 2018 at 1:25 PM, chandrasekhar 
wrote:
> Hi,
>
>
>
> Are there any kernel APIs for ADCs? I am using the NXP i.MX6UL processor. I
> have to read ADC values in kernel space instead of sysfs/userspace.

It depends on the type of driver used. Either input or IIO.

The sysfs interface for IIO is described here:

https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-iio


thanks,
Daniel.



Kernel API's for ADC

2018-07-25 Thread chandrasekhar
Hi,

 

Are there any kernel APIs for ADCs? I am using the NXP i.MX6UL processor. I
have to read ADC values in kernel space instead of sysfs/userspace.

 

Thanks and Regards,

Chandrasekhar

 



random seeds file hung on AIX 7.2

2018-07-22 Thread Chandrasekhar Velpula
Hi Team,

Could someone please help me: while running the encrypt process, the
process is getting hung and the random seed file is not updating.

AIX version: 7.2
GPG version: gpg (GnuPG) 1.4.7

Regards,
Chandra Sekhar Velpula
SME - Unix
Email: chandra.velp...@in.ibm.com

Unix DL: Cemex_unix_india


Re: [yocto] Enabling the recipe from menuconfig

2018-05-18 Thread chandrasekhar
I think you can use Toaster for that: it has a web interface where you can
select packages/recipes.

 

Regards,

Chandrasekhar

 

From: Ugesh Reddy [mailto:kumar.ugesh...@yahoo.com] 
Sent: Friday, May 18, 2018 11:15 AM
To: chandrasek...@evolute.in; 'Yocto-mailing-list'
Subject: RE: [yocto] Enabling the recipe from menuconfig

 

Something similar is required for my custom layer, so that I could enable the
recipe from the menuconfig.

Sent from Yahoo Mail on Android <https://overview.mail.yahoo.com/mobile/?.src=Android>

 

On Fri, 18 May 2018 at 9:44 a.m., chandrasekhar

<chandrasek...@evolute.in> wrote:

Do you want to see it in the kernel menuconfig?

 

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Ugesh Reddy
Sent: Friday, May 18, 2018 7:52 AM
To: 'Yocto-mailing-list'; chandrasekhar
Subject: Re: [yocto] Enabling the recipe from menuconfig

 

Hi,

Thanks for the response,

 

 This will add the recipe as part of the image, but I want to build and add the 
recipe only when it has been selected from the menuconfig.

How do I make the recipe visible in menuconfig?

On Thursday, 17 May, 2018, 9:32:15 AM IST, chandrasekhar 
<chandrasek...@evolute.in> wrote: 

 

 

Hi 

You can use 

IMAGE_INSTALL += "Package Name/Recipes name"

 

Regards,

Chandrasekhar

 

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Ugesh Reddy
Sent: Wednesday, May 16, 2018 9:57 PM
To: Yocto-mailing-list
Subject: [yocto] Enabling the recipe from menuconfig

 

Hello Team,

 

 I have a list of recipes in my custom layer. The recipes in this layer shall 
be enabled/selected through the menuconfig; once enabled, they shall be part of 
the image. Is it possible to achieve this? Do we have any references? 

 

Regards,

Ugesh



Re: [yocto] Enabling the recipe from menuconfig

2018-05-17 Thread chandrasekhar
Do you want to see it in the kernel menuconfig?

 

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Ugesh Reddy
Sent: Friday, May 18, 2018 7:52 AM
To: 'Yocto-mailing-list'; chandrasekhar
Subject: Re: [yocto] Enabling the recipe from menuconfig

 

Hi,

Thanks for the response,

 

 This will add the recipe as part of the image, but I want to build and add the 
recipe only when it has been selected from the menuconfig.

How do I make the recipe visible in menuconfig?

On Thursday, 17 May, 2018, 9:32:15 AM IST, chandrasekhar 
<chandrasek...@evolute.in> wrote: 

 

 

Hi 

You can use 

IMAGE_INSTALL += "Package Name/Recipes name"

 

Regards,

Chandrasekhar

 

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Ugesh Reddy
Sent: Wednesday, May 16, 2018 9:57 PM
To: Yocto-mailing-list
Subject: [yocto] Enabling the recipe from menuconfig

 

Hello Team,

 

 I have a list of recipes in my custom layer. The recipes in this layer shall 
be enabled/selected through the menuconfig; once enabled, they shall be part of 
the image. Is it possible to achieve this? Do we have any references? 

 

Regards,

Ugesh



Re: [yocto] Enabling the recipe from menuconfig

2018-05-16 Thread chandrasekhar
Hi 

You can use 

IMAGE_INSTALL += "Package Name/Recipes name"

 

Regards,

Chandrasekhar

 

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Ugesh Reddy
Sent: Wednesday, May 16, 2018 9:57 PM
To: Yocto-mailing-list
Subject: [yocto] Enabling the recipe from menuconfig

 

Hello Team,

 

 I have a list of recipes in my custom layer. The recipes in this layer shall 
be enabled/selected through the menuconfig; once enabled, they shall be part of 
the image. Is it possible to achieve this? Do we have any references? 

 

Regards,

Ugesh



[jira] [Created] (CARBONDATA-2218) AlluxioCarbonFile while trying to force rename causes a FileSystem error and is not a DistributedFileSystem.

2018-03-01 Thread ChandraSekhar Saripaka (JIRA)
ChandraSekhar Saripaka created CARBONDATA-2218:
--

 Summary: AlluxioCarbonFile while trying to force rename causes a 
FileSystem error and is not a DistributedFileSystem.
 Key: CARBONDATA-2218
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2218
 Project: CarbonData
  Issue Type: Bug
  Components: core
Affects Versions: 1.4.0
Reporter: ChandraSekhar Saripaka
 Attachments: spark2-shell_carbondata_2.11-1.4.0-SNAPSHOT.log

AlluxioCarbonFile appears to rely on a DistributedFileSystem, but this always 
returns an o.a.hadoop.fs.FileSystem. So, kindly provide a way to make it work 
with Alluxio, as the file system they have is an AbstractFileSystem. Also, 
please add more tests for Alluxio.





[tip:timers/urgent] timers: Forward timer base before migrating timers

2018-02-28 Thread tip-bot for Lingutla Chandrasekhar
Commit-ID:  c52232a49e203a65a6e1a670cd5262f59e9364a0
Gitweb: https://git.kernel.org/tip/c52232a49e203a65a6e1a670cd5262f59e9364a0
Author: Lingutla Chandrasekhar <clingu...@codeaurora.org>
AuthorDate: Thu, 18 Jan 2018 17:20:22 +0530
Committer:  Thomas Gleixner <t...@linutronix.de>
CommitDate: Wed, 28 Feb 2018 23:34:33 +0100

timers: Forward timer base before migrating timers

On CPU hotunplug the enqueued timers of the unplugged CPU are migrated to a
live CPU. This happens from the control thread which initiated the unplug.

If the CPU on which the control thread runs came out from a longer idle
period then the base clock of that CPU might be stale because the control
thread runs prior to any event which forwards the clock.

In such a case the timers from the unplugged CPU are queued on the live CPU
based on the stale clock which can cause large delays due to increased
granularity of the outer timer wheels which are far away from base->clk.

But there is a worse problem than that. The following sequence of events
illustrates it:

 - CPU0 timer1 is queued expires = 59969 and base->clk = 59131.

   The timer is queued at wheel level 2, with resulting expiry time = 60032
   (due to level granularity).

 - CPU1 enters idle @60007, with next timer expiry @60020.

 - CPU0 is hotplugged at @60009

 - CPU1 exits idle and runs the control thread which migrates the
   timers from CPU0

   timer1 is now queued in level 0 for immediate handling in the next
   softirq because the requested expiry time 59969 is before CPU1 base->clk
   60007

 - CPU1 runs code which forwards the base clock, which succeeds because the
   next expiring timer, which was collected at idle entry time, is still set
   to 60020.

   So it forwards beyond 60007 and therefore misses to expire the migrated
   timer1. That timer gets expired when the wheel wraps around again, which
   takes between 63 and 630ms depending on the HZ setting.

Address both problems by invoking forward_timer_base() for the control CPUs
timer base. All other places, which might run into a similar problem
(mod_timer()/add_timer_on()) already invoke forward_timer_base() to avoid
that.

[ tglx: Massaged comment and changelog ]

Fixes: a683f390b93f ("timers: Forward the wheel clock whenever possible")
Co-developed-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingu...@codeaurora.org>
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Cc: Anna-Maria Gleixner <anna-ma...@linutronix.de>
Cc: linux-arm-...@vger.kernel.org
Cc: sta...@vger.kernel.org
Link: https://lkml.kernel.org/r/20180118115022.6368-1-clingu...@codeaurora.org
---
 kernel/time/timer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 48150ab42de9..4a4fd567fb26 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1894,6 +1894,12 @@ int timers_dead_cpu(unsigned int cpu)
raw_spin_lock_irq(&new_base->lock);
raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
+   /*
+* The current CPUs base clock might be stale. Update it
+* before moving the timers over.
+*/
+   forward_timer_base(new_base);
+
BUG_ON(old_base->running_timer);
 
for (i = 0; i < WHEEL_SIZE; i++)
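
To make the granularity effect in the changelog concrete, below is a small
standalone model of the wheel-index math from kernel/time/timer.c
(LVL_CLK_SHIFT = 3, LVL_BITS = 6). It is a sketch for illustration, not kernel
code:

#include <stdio.h>

#define LVL_CLK_SHIFT	3
#define LVL_BITS	6
#define LVL_SIZE	(1UL << LVL_BITS)
#define LVL_SHIFT(n)	((n) * LVL_CLK_SHIFT)
#define LVL_GRAN(n)	(1UL << LVL_SHIFT(n))
/* First expiry delta handled by level n (n >= 1). */
#define LVL_START(n)	((LVL_SIZE - 1) << (((n) - 1) * LVL_CLK_SHIFT))

int main(void)
{
	unsigned long clk = 59131, expires = 59969;
	unsigned long delta = expires - clk;
	int lvl = 0;

	/* Pick the shallowest level whose range covers the delta. */
	while (lvl < 8 && delta >= LVL_START(lvl + 1))
		lvl++;

	/* The wheel rounds the expiry up to the level's granularity,
	 * so a timer never fires early. */
	unsigned long effective =
		((expires + LVL_GRAN(lvl)) >> LVL_SHIFT(lvl)) << LVL_SHIFT(lvl);

	printf("delta=%lu level=%d gran=%lu effective expiry=%lu\n",
	       delta, lvl, LVL_GRAN(lvl), effective);
	/* Prints: delta=838 level=2 gran=64 effective expiry=60032,
	 * matching the 59969 -> 60032 rounding in the changelog. */
	return 0;
}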



[PATCH v2] timer: Forward timer base before migrating timers

2018-01-18 Thread Lingutla Chandrasekhar
In case when timers are migrated to a CPU, after it exits
idle, but before timer base is forwarded, either from
run_timer_softirq()/mod_timer()/add_timer_on(), it's
possible that migrated timers are queued, based on older
clock value. This can cause delays in handling those timers.

For example, consider below sequence of events:

- CPU0 timer1 expires = 59969 and base->clk = 59131. So,
  timer is queued at level 2, with next expiry for this timer
  = 60032 (due to granularity addition).
- CPU1 enters idle @60007, with next timer expiry @60020.
- CPU1 exits idle.
- CPU0 is hotplugged at 60009, and timers are migrated to
  CPU1, with new base->clk = 60007. timer1 is queued,
  based on 60007 at level 0, for immediate handling (in
  next timer softirq handling).
- CPU1's base->clk is forwarded to 60009, so, in next sched
  timer interrupt, timer1 is not handled.

The issue happens as timer wheel collects expired timers
starting from the current clk's index onwards, but migrated
timers, if enqueued, based on older clk value can result
in their index being less than clk's current index.
This can only happen if new base->clk is ahead of
timer->expires, resulting in timer being queued at
new base->clk's current index.

Co-developed-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingu...@codeaurora.org>

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 89a9e1b4264a..f66c7ad55d7a 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1886,6 +1886,12 @@ int timers_dead_cpu(unsigned int cpu)
raw_spin_lock_irq(&new_base->lock);
raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
+   /*
+* Before migrating timers, update new base clk to avoid
+* queueing timers based on older clock value.
+*/
+   forward_timer_base(new_base);
+
BUG_ON(old_base->running_timer);
 
for (i = 0; i < WHEEL_SIZE; i++)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.




[PATCH v1] timer: Forward timer base before migrating timers

2018-01-17 Thread Lingutla Chandrasekhar
In case when timers are migrated to a CPU, after it exits
idle, but before timer base is forwarded, either from
run_timer_softirq()/mod_timer()/add_timer_on(), it's
possible that migrated timers are queued, based on older
clock value. This can cause delays in handling those timers.

For example, consider below sequence of events:

- CPU0 timer1 expires = 59969 and base->clk = 59131. So,
  timer is queued at level 2, with next expiry for this timer
  = 60032 (due to granularity addition).
- CPU1 enters idle @60007, with next timer expiry @60020.
- CPU1 exits idle.
- CPU0 is hotplugged at 60009, and timers are migrated to
  CPU1, with new base->clk = 60007. timer1 is queued,
  based on 60007 at level 0, for immediate handling (in
  next timer softirq handling).
- CPU1's base->clk is forwarded to 60009, so, in next sched
  timer interrupt, timer1 is not handled.

The issue happens as timer wheel collects expired timers
starting from the current clk's index onwards, but migrated
timers, if enqueued, based on older clk value can result
in their index being less than clk's current index.
This can only happen if new base->clk is ahead of
timer->expires, resulting in timer being queued at
new base->clk's current index.

Signed-off-by: Lingutla Chandrasekhar <clingu...@codeaurora.org>
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 89a9e1b4264a..f66c7ad55d7a 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1886,6 +1886,12 @@ int timers_dead_cpu(unsigned int cpu)
raw_spin_lock_irq(&new_base->lock);
raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
+   /*
+* Before migrating timers, update new base clk to avoid
+* queueing timers based on older clock value.
+*/
+   forward_timer_base(new_base);
+
BUG_ON(old_base->running_timer);
 
for (i = 0; i < WHEEL_SIZE; i++)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.





[PATCH] kernel: time: forward timer base before migrating timers

2018-01-17 Thread Lingutla Chandrasekhar
In case when timers are migrated to a CPU, after it exits
idle, but before timer base is forwarded, either from
run_timer_softirq()/mod_timer()/add_timer_on(), it's
possible that migrated timers are queued, based on older
clock value. This can cause delays in handling those timers.

For example, consider below sequence of events:

- CPU0 timer1 expires = 59969 and base->clk = 59131. So,
  timer is queued at level 2, with next expiry for this timer
  = 60032 (due to granularity addition).
- CPU1 enters idle @60007, with next timer expiry @60020.
- CPU1 exits idle.
- CPU0 is hotplugged at 60009, and timers are migrated to
  CPU1, with new base->clk = 60007. timer1 is queued,
  based on 60007 at level 0, for immediate handling (in
  next timer softirq handling).
- CPU1's base->clk is forwarded to 60009, so, in next sched
  timer interrupt, timer1 is not handled.

The issue happens as timer wheel collects expired timers
starting from the current clk's index onwards, but migrated
timers, if enqueued, based on older clk value can result
in their index being less than clk's current index.
This can only happen if new base->clk is ahead of
timer->expires, resulting in timer being queued at
new base->clk's current index.

Change-Id: Idbe737b346f00e6e7241b93181bbbd80871f1400
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingu...@codeaurora.org>

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 89a9e1b4264a..ae94aa97b5a9 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1886,6 +1886,11 @@ int timers_dead_cpu(unsigned int cpu)
raw_spin_lock_irq(&new_base->lock);
raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
+   /* Before migrating timers, update new base clk to avoid
+* queueing timers based on older clock value.
+*/
+   forward_timer_base(new_base);
+
BUG_ON(old_base->running_timer);
 
for (i = 0; i < WHEEL_SIZE; i++)
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of 
Code Aurora Forum,
 a Linux Foundation Collaborative Project.




