Re: Read timeouts on primary key queries

2016-08-31 Thread Joseph Tech
Patrick,

The desc table output is below (only column names changed):

CREATE TABLE db.tbl (
id1 text,
id2 text,
id3 text,
id4 text,
f1 text,
f2 map,
f3 map,
created timestamp,
updated timestamp,
PRIMARY KEY (id1, id2, id3, id4)
) WITH CLUSTERING ORDER BY (id2 ASC, id3 ASC, id4 ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'sstable_size_in_mb': '50', 'class':
'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
AND compression = {'sstable_compression':
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';

and the query is select * from tbl where id1=? and id2=? and id3=? and id4=?

The timeouts happen within ~2s to ~5s, while the successful calls have an avg
of 8ms and a p99 of 15ms. These times are seen from the app side; the actual
query times would be slightly lower.

Is there a way to capture traces only when queries take longer than a
specified duration? We can't enable tracing in production given the
volume of traffic. We see that the same query which timed out works fine
later, so I'm not sure whether the trace of a successful run would help.

Thanks,
Joseph


On Wed, Aug 31, 2016 at 8:05 PM, Patrick McFadin  wrote:

> If you are getting a timeout on one table, then a mismatch of RF and node
> count doesn't seem as likely.
>
> Time to look at your query. You said it was a 'select * from table where
> key=?' type query. I would next use the trace facility in cqlsh to
> investigate further. That's a good way to find hard-to-find issues. You
> should be looking for a clear ledge where you go from single-digit ms to
> 4- or 5-digit ms times.
>
> The other place to look is your data model for that table if you want to
> post the output from a desc table.
>
> Patrick
>
>
>
> On Tue, Aug 30, 2016 at 11:07 AM, Joseph Tech 
> wrote:
>
>> On further analysis, this issue happens only on 1 table in the KS which
>> has the max reads.
>>
>> @Atul, I will look at system health, but didn't see anything standing out
>> in the GC logs (using JDK 1.8_92 with G1GC).
>>
>> @Patrick, could you please elaborate on the "mismatch on node count + RF"
>> part.
>>
>> On Tue, Aug 30, 2016 at 5:35 PM, Atul Saroha 
>> wrote:
>>
>>> There could be many reasons for this if it is intermittent: CPU usage and
>>> I/O wait status, for example. Since reads are I/O intensive, your IOPS
>>> requirement should be met under that load. It could be a heap issue if the
>>> CPU is busy only with GC. Network health could also be the reason. So it's
>>> better to look at system health at the time it happens.
>>>
>>> 
>>> -
>>> Atul Saroha
>>> *Lead Software Engineer*
>>> *M*: +91 8447784271 *T*: +91 124-415-6069 *EXT*: 12369
>>> Plot # 362, ASF Centre - Tower A, Udyog Vihar,
>>>  Phase -4, Sector 18, Gurgaon, Haryana 122016, INDIA
>>>
>>> On Tue, Aug 30, 2016 at 5:10 PM, Joseph Tech 
>>> wrote:
>>>
 Hi Patrick,

 The nodetool status shows all nodes up and normal now. From the OpsCenter
 "Event Log", there are some nodes reported as being down/up etc. during
 the timeframe of the timeouts, but these are Search workload nodes from the
 remote (non-local) DC. The RF is 3 and there are 9 nodes per DC.

 Thanks,
 Joseph

 On Mon, Aug 29, 2016 at 11:07 PM, Patrick McFadin 
 wrote:

> You aren't achieving quorum on your reads, as the error explains.
> That means you either have some nodes down or your topology is not matching
> up. The fact you are using LOCAL_QUORUM might point to a datacenter
> mismatch on node count + RF.
>
> What does your nodetool status look like?
>
> Patrick
>
> On Mon, Aug 29, 2016 at 10:14 AM, Joseph Tech 
> wrote:
>
>> Hi,
>>
>> We recently started getting intermittent timeouts on primary key
>> queries (select * from table where key=)
>>
>> The error is : com.datastax.driver.core.exceptions.ReadTimeoutException:
>> Cassandra timeout during read query at consistency LOCAL_QUORUM (2
>> responses were required but only 1 replica responded)
>>
>> The same query would work fine when tried directly from cqlsh. There
>> are no indications in system.log for the table in question, though there
>> were compactions in progress for tables in another keyspace which is more
>> frequently accessed.
>>
>> My understanding is that the 

Re: cassandra database design

2016-08-31 Thread Stone Fang
access pattern is

select * from datacenter where datacentername = '' and publish > $time and
publish < $time

On Wed, Aug 31, 2016 at 8:37 PM, Carlos Alonso  wrote:

> Maybe a good question could be:
>
> Which is your access pattern to this data?
>
> Carlos Alonso | Software Engineer | @calonso 
>
> On 31 August 2016 at 11:47, Stone Fang  wrote:
>
>> Hi all,
>> I have some questions on how to define a clustering key.
>>
>> I have a table like this:
>>
>> CREATE TABLE datacenter (
>>     datacentername varchar,
>>     publish timestamp,
>>     value varchar,
>>     PRIMARY KEY (datacentername, publish)
>> );
>>
>> *Issues:*
>> There are only two datacenters, so the data would only have two partitions
>> and be stored on two nodes. I want to spread the data evenly around the
>> cluster.
>>
>> Taking this post as a reference:
>> http://www.datastax.com/dev/blog/basic-rules-of-cassandra-data-modeling
>>
>> CREATE TABLE datacenter (
>>     datacentername varchar,
>>     publish_pre text,
>>     publish timestamp,
>>     value varchar,
>>     PRIMARY KEY ((datacentername, publish_pre), publish)
>> );
>>
>> publish_pre is a bucket from 1 to 12 hours. *But the workload is high; I
>> don't want all the writes within an hour going to one node.*
>>
>> I have no idea how to define the partition key so that data is spread evenly
>> around the cluster and the partitions are not split by time, which means
>> that data should not all be inserted into one node during a certain time
>> window.
>>
>> thanks
>> stone
>>
>
>


Cassandra distinct partitionkey read fails with com.datastax.driver.core.exceptions.ReadFailureException

2016-08-31 Thread Penukonda, Pushpa
Hi Users,
In one of our client deployments with a single node, we are experiencing Cassandra
read failures through the Java client for the query below. I can see a similar error
through the cqlsh client as well. This was working fine for a few months; the failures
started happening a couple of days ago. We have close to 10 million records in
this table. We tried increasing "read_request_timeout_in_ms" to 20 seconds, but
it did not help. Please share a solution or advice if anyone has experienced similar
issues. Thanks a lot.

CQLSh client: select distinct [partition key column] from table;
Error: code=1300 [Replica(s) failed to execute read] message="Operation failed 
- received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

From the Java client:
Select distinct [partitionKey column] from table

Complete stacktrace:

com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure 
during read query at consistency LOCAL_ONE (1 responses were required but only 
0 replica responded, 1 failed)
java.util.concurrent.ExecutionException: 
com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure 
during read query at consistency LOCAL_ONE (1 responses were required but only 
0 replica responded, 1 failed)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at 
rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:74)
at 
rx.internal.operators.OnSubscribeToObservableFuture$ToObservableFuture.call(OnSubscribeToObservableFuture.java:43)
at rx.Observable.unsafeSubscribe(Observable.java:8314)
at 
rx.internal.operators.OperatorSubscribeOn$1.call(OperatorSubscribeOn.java:94)
at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra 
failure during read query at consistency LOCAL_ONE (1 responses were required 
but only 0 replica responded, 1 failed)
at 
com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:95)
at 
com.datastax.driver.core.Responses$Error.asException(Responses.java:128)
at 
com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at 
com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
at 
com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at 
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:354)
at 
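One way to make this kind of full-scan query more tractable is to page it by
token ranges instead of issuing a single unrestricted SELECT DISTINCT. A rough
sketch, assuming Murmur3Partitioner and using placeholder names (tbl, pk) for
the real table and partition key column:

-- first slice of the ring; later slices use
-- token(pk) > <previous upper bound> AND token(pk) <= <next upper bound>
SELECT DISTINCT pk
FROM tbl
WHERE token(pk) >= -9223372036854775808
  AND token(pk) <= -4611686018427387904;

Each slice is a bounded amount of work for the replica, so a single slow or
failing range is easier to isolate than one scan across ~10 million partitions.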

Question about end of life support for Apache Cassandra 2.1 and 2.2

2016-08-31 Thread Anmol Sharma
According to the download page,
Apache Cassandra 2.1 is supported with critical fixes only till Nov 2016,
and Apache Cassandra 2.2 is supported till Nov 2016.

I wanted to know what the policy is for such "unsupported" versions,
especially related to kernel vulnerabilities / security threats from
dependent libraries that are discovered after a project has reached the
"unsupported" stage?

Will the upstream versions of Apache Cassandra 2.1 and 2.2 still receive
security updates / patches or is it entirely up to the end users to fix
these?

Thanks,
Anmol


mutation checksum failure during commit log replay

2016-08-31 Thread John Sanda
What could cause an error like:

ERROR 07:11:56 Exiting due to error while processing commit log during
initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException:
Mutation checksum failure at 818339 in CommitLog-5-1470234746867.log


This is with Cassandra 2.2.4. It's clear that the commit log segment
is corrupted in some way. Cassandra is running in Docker. This happens
after destroying the container and starting Cassandra in a new
container on a different host machine that uses the same commit
log and data directories, which are attached via network storage.

-- 

- John


testing retry policy

2016-08-31 Thread Jimmy Lin
hi all,
I have some customized retry policies that I want to test.
In my single-node local cluster, is there any way to simulate read/write
timeouts and/or an unavailable exception?
I tried killing the Cassandra process, but that results in a no-host-available
exception rather than an unavailable exception, and so it does not go through
the retry policy logic.

thanks


Replication time across regions in AWS

2016-08-31 Thread cass savy
Has anybody tested the time taken to replicate data across regions (between
us-east and us-west) or across AZs in AWS? A few docs/blogs say it's in the
tens of milliseconds.

Can you please provide your insights? Also, is there a way to measure the
time lag?


Re: Read timeouts on primary key queries

2016-08-31 Thread Patrick McFadin
If you are getting a timeout on one table, then a mismatch of RF and node
count doesn't seem as likely.

Time to look at your query. You said it was a 'select * from table where
key=?' type query. I would next use the trace facility in cqlsh to
investigate further. That's a good way to find hard-to-find issues. You
should be looking for a clear ledge where you go from single-digit ms to
4- or 5-digit ms times.
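For reference, the cqlsh side of that looks roughly like the following; the
keyspace, table, and key values are placeholders rather than the poster's
actual schema:

-- traces every statement in this cqlsh session until turned off
TRACING ON;
SELECT * FROM ks.tbl WHERE id1 = 'a' AND id2 = 'b' AND id3 = 'c' AND id4 = 'd';
TRACING OFF;

cqlsh prints the trace events with per-step elapsed times after the result,
which is where a ledge from microsecond-level steps to multi-second steps
would show up.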

The other place to look is your data model for that table if you want to
post the output from a desc table.

Patrick



On Tue, Aug 30, 2016 at 11:07 AM, Joseph Tech  wrote:

> On further analysis, this issue happens only on 1 table in the KS which
> has the max reads.
>
> @Atul, I will look at system health, but didn't see anything standing out
> in the GC logs (using JDK 1.8_92 with G1GC).
>
> @Patrick, could you please elaborate on the "mismatch on node count + RF"
> part.
>
> On Tue, Aug 30, 2016 at 5:35 PM, Atul Saroha 
> wrote:
>
>> There could be many reasons for this if it is intermittent: CPU usage and
>> I/O wait status, for example. Since reads are I/O intensive, your IOPS
>> requirement should be met under that load. It could be a heap issue if the
>> CPU is busy only with GC. Network health could also be the reason. So it's
>> better to look at system health at the time it happens.
>>
>> 
>> -
>> Atul Saroha
>> *Lead Software Engineer*
>> *M*: +91 8447784271 *T*: +91 124-415-6069 *EXT*: 12369
>> Plot # 362, ASF Centre - Tower A, Udyog Vihar,
>>  Phase -4, Sector 18, Gurgaon, Haryana 122016, INDIA
>>
>> On Tue, Aug 30, 2016 at 5:10 PM, Joseph Tech 
>> wrote:
>>
>>> Hi Patrick,
>>>
>>> The nodetool status shows all nodes up and normal now. From the OpsCenter
>>> "Event Log", there are some nodes reported as being down/up etc. during
>>> the timeframe of the timeouts, but these are Search workload nodes from the
>>> remote (non-local) DC. The RF is 3 and there are 9 nodes per DC.
>>>
>>> Thanks,
>>> Joseph
>>>
>>> On Mon, Aug 29, 2016 at 11:07 PM, Patrick McFadin 
>>> wrote:
>>>
 You aren't achieving quorum on your reads, as the error explains.
 That means you either have some nodes down or your topology is not matching
 up. The fact you are using LOCAL_QUORUM might point to a datacenter
 mismatch on node count + RF.

 What does your nodetool status look like?

 Patrick

 On Mon, Aug 29, 2016 at 10:14 AM, Joseph Tech 
 wrote:

> Hi,
>
> We recently started getting intermittent timeouts on primary key
> queries (select * from table where key=)
>
> The error is : com.datastax.driver.core.exceptions.ReadTimeoutException:
> Cassandra timeout during read query at consistency LOCAL_QUORUM (2
> responses were required but only 1 replica responded)
>
> The same query would work fine when tried directly from cqlsh. There
> are no indications in system.log for the table in question, though there
> were compactions in progress for tables in another keyspace which is more
> frequently accessed.
>
> My understanding is that the chances of primary key queries timing out
> are very minimal. Please share the possible reasons / ways to debug this
> issue.
>
> We are using Cassandra 2.1 (DSE 4.8.7).
>
> Thanks,
> Joseph
>
>
>
>

>>>
>>
>


unsubscribe

2016-08-31 Thread Mike Yeap



Re: LCS Increasing the sstable size

2016-08-31 Thread Jérôme Mainaud
Hello DuyHai,

I have no problem with performance even though I'm using 3 HDDs in RAID 0.
The last 4 years of data were imported in two weeks, which is acceptable for
the client.
Daily data will be much less intensive, and my client is more concerned with
storage price than with pure latency.
To be more precise, Cassandra was not chosen for its latency but because it
is a distributed, multi-datacenter, no-downtime database.

Tests show that write amplification is not a problem in our case. So, while
LCS may not be the best technical choice, it is a reasonably correct one as
long as the number of sstables doesn't explode.

The first thing I looked at was compaction stats, and there are no compactions
pending.
The size of most sstables is 160 MB, the expected size.
If you do the math, 4 TB divided by 160 MB equals 26,214 sstables. With 8
files per sstable, you get 209,715 files.
With 6 TB, we get 39,321 sstables and 314,572 files.

If I change sstable_size_in_mb to 512, it would end up with 12,288 sstables
and 98,304 files for 6 TB.
That seems to be a good compromise if there is no trap.

The only problem I can see would be a latency drop due to the size of the
index and summary.
But the average row size is 70 KB, so there should not be that many entries
per file.

Am I missing something ?
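The change itself, for reference, is a one-line table alteration; a sketch,
using ks.tbl as a placeholder for the real keyspace and table:

-- raise the target sstable size for LCS; existing smaller sstables are only
-- merged up to the new size as future compactions rewrite them
ALTER TABLE ks.tbl
WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'sstable_size_in_mb': '512'
};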


-- 
Jérôme Mainaud
jer...@mainaud.com

2016-08-31 13:28 GMT+02:00 DuyHai Doan :

> Some random thoughts
>
> 1) Are they using SSD ?
>
> 2) If using SSD, I remember that one recommendation is not to exceed
> ~3Tb/node, unless they're using DateTiered or better TimeWindow compaction
> strategy
>
> 3) LCS is very disk intensive and usually exacerbates write amp the more
> you have data
>
> 4) The huge number of SSTable let me suspect some issue with compaction
> not keeping up. Can you post here a "nodetool tablestats"  and
> "compactionstats" ? Are there many pending compactions ?
>
> 5) Last but not least, what does "dstat" shows ? Is there any frequent CPU
> wait ?
>
> On Wed, Aug 31, 2016 at 12:34 PM, Jérôme Mainaud 
> wrote:
>
>> Hello,
>>
>> My cluster uses LeveledCompactionStrategy on rather big nodes (9 TB of disk
>> per node with a target of 6 TB of data; the 3 remaining TB are reserved for
>> compaction and snapshots). There is only one table for this application.
>>
>> With the default sstable_size_in_mb of 160 MB, we have a huge number of
>> sstables (25,000+ for the 4 TB already loaded), which leads to IO errors due
>> to the open files limit (set at 100,000).
>>
>> Increasing the open files limit can be a solution, but at this level I would
>> rather increase sstable_size to 500 MB, which would keep the file count
>> around 100,000.
>>
>> Could increasing the sstable size lead to any problems I don't see?
>> Do you have any advice about this?
>>
>> Thank you.
>>
>> --
>> Jérôme Mainaud
>> jer...@mainaud.com
>>
>
>


Re: cassandra database design

2016-08-31 Thread Carlos Alonso
Maybe a good question could be:

Which is your access pattern to this data?

Carlos Alonso | Software Engineer | @calonso 

On 31 August 2016 at 11:47, Stone Fang  wrote:

> Hi all,
> I have some questions on how to define a clustering key.
>
> I have a table like this:
>
> CREATE TABLE datacenter (
>     datacentername varchar,
>     publish timestamp,
>     value varchar,
>     PRIMARY KEY (datacentername, publish)
> );
>
> *Issues:*
> There are only two datacenters, so the data would only have two partitions
> and be stored on two nodes. I want to spread the data evenly around the
> cluster.
>
> Taking this post as a reference:
> http://www.datastax.com/dev/blog/basic-rules-of-cassandra-data-modeling
>
> CREATE TABLE datacenter (
>     datacentername varchar,
>     publish_pre text,
>     publish timestamp,
>     value varchar,
>     PRIMARY KEY ((datacentername, publish_pre), publish)
> );
>
> publish_pre is a bucket from 1 to 12 hours. *But the workload is high; I
> don't want all the writes within an hour going to one node.*
>
> I have no idea how to define the partition key so that data is spread evenly
> around the cluster and the partitions are not split by time, which means
> that data should not all be inserted into one node during a certain time
> window.
>
> thanks
> stone
>


Re: LCS Increasing the sstable size

2016-08-31 Thread DuyHai Doan
Some random thoughts

1) Are they using SSD ?

2) If using SSD, I remember that one recommendation is not to exceed
~3 TB/node, unless they're using DateTiered or, better, TimeWindow compaction
strategy.

3) LCS is very disk intensive and usually exacerbates write amplification the
more data you have.

4) The huge number of SSTables leads me to suspect some issue with compaction
not keeping up. Can you post here a "nodetool tablestats" and
"compactionstats"? Are there many pending compactions?

5) Last but not least, what does "dstat" show? Is there any frequent CPU
wait?

On Wed, Aug 31, 2016 at 12:34 PM, Jérôme Mainaud  wrote:

> Hello,
>
> My cluster uses LeveledCompactionStrategy on rather big nodes (9 TB of disk
> per node with a target of 6 TB of data; the 3 remaining TB are reserved for
> compaction and snapshots). There is only one table for this application.
>
> With the default sstable_size_in_mb of 160 MB, we have a huge number of
> sstables (25,000+ for the 4 TB already loaded), which leads to IO errors due
> to the open files limit (set at 100,000).
>
> Increasing the open files limit can be a solution, but at this level I would
> rather increase sstable_size to 500 MB, which would keep the file count
> around 100,000.
>
> Could increasing the sstable size lead to any problems I don't see?
> Do you have any advice about this?
>
> Thank you.
>
> --
> Jérôme Mainaud
> jer...@mainaud.com
>


LCS Increasing the sstable size

2016-08-31 Thread Jérôme Mainaud
Hello,

My cluster uses LeveledCompactionStrategy on rather big nodes (9 TB of disk per
node with a target of 6 TB of data; the 3 remaining TB are reserved for
compaction and snapshots). There is only one table for this application.

With the default sstable_size_in_mb of 160 MB, we have a huge number of
sstables (25,000+ for the 4 TB already loaded), which leads to IO errors due to
the open files limit (set at 100,000).

Increasing the open files limit can be a solution, but at this level I would
rather increase sstable_size to 500 MB, which would keep the file count
around 100,000.

Could increasing the sstable size lead to any problems I don't see?
Do you have any advice about this?

Thank you.

-- 
Jérôme Mainaud
jer...@mainaud.com


cassandra database design

2016-08-31 Thread Stone Fang
Hi all,
I have some questions on how to define a clustering key.

I have a table like this:

CREATE TABLE datacenter (
    datacentername varchar,
    publish timestamp,
    value varchar,
    PRIMARY KEY (datacentername, publish)
);

*Issues:*
There are only two datacenters, so the data would only have two partitions and
be stored on two nodes. I want to spread the data evenly around the cluster.

Taking this post as a reference:
http://www.datastax.com/dev/blog/basic-rules-of-cassandra-data-modeling

CREATE TABLE datacenter (
    datacentername varchar,
    publish_pre text,
    publish timestamp,
    value varchar,
    PRIMARY KEY ((datacentername, publish_pre), publish)
);

publish_pre is a bucket from 1 to 12 hours. *But the workload is high; I don't
want all the writes within an hour going to one node.*

I have no idea how to define the partition key so that data is spread evenly
around the cluster and the partitions are not split by time, which means that
data should not all be inserted into one node during a certain time window.

thanks
stone
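Given the access pattern quoted in the follow-up (datacentername equality plus
a publish time range), one common approach is a synthetic bucket in the
partition key that is not derived from time, so that writes fan out across the
cluster at every instant. A minimal sketch, assuming a fixed bucket count of 8
chosen by the writer; the bucket column and count are illustrative, not part of
the original design:

CREATE TABLE datacenter_by_bucket (
    datacentername varchar,
    bucket int,            -- e.g. hash of a stable attribute % 8, or round-robin per write
    publish timestamp,
    value varchar,
    PRIMARY KEY ((datacentername, bucket), publish)
);

SELECT * FROM datacenter_by_bucket
WHERE datacentername = 'dc1'
  AND bucket IN (0, 1, 2, 3, 4, 5, 6, 7)
  AND publish > '2016-08-01' AND publish < '2016-08-31';

The trade-off is that every time-range read has to touch all buckets, either
with IN as above or as one query per bucket issued in parallel from the client.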


Re: Output of "select token from system.local where key = 'local' "

2016-08-31 Thread Moshe Levy
 .
P
On Wednesday, 31 August 2016, Alexander DEJANOVSKI 
wrote:

> Hi Siddharth,
>
> yes, we are sure token ranges will never overlap (I think the start token
> in describering output is excluded and the end token included).
>
> You can get per-host information in the DataStax Java driver using:
>
> Set<TokenRange> rangesForKeyspace = cluster.getMetadata().getTokenRanges(
> keyspaceName, host);
> Bye,
>
> Alex
>
> On Tue, Aug 30, 2016 at 10:04, Siddharth Verma <
> verma.siddha...@snapdeal.com
> > wrote:
>
>> Hi ,
>> Can we be sure that token ranges in nodetool describering will be
>> non-overlapping?
>>
>> Thanks
>> Siddharth Verma
>>
>
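To make the (start, end] semantics concrete, a sketch of how a client would
scan one describering range without overlapping its neighbour; ks.tbl and pk
are placeholders for the real table and partition key column:

-- the lower bound is exclusive and the upper bound inclusive, so adjacent
-- ranges share no rows
SELECT * FROM ks.tbl
WHERE token(pk) > :range_start
  AND token(pk) <= :range_end;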