etl.DataLoadingException: The input file does not exist

2016-12-22 Thread 251469031
Hi,

when I run the following script:


scala> val dataFilePath = new File("/carbondata/pt/sample.csv").getCanonicalPath
scala> cc.sql(s"load data inpath '$dataFilePath' into table test_table")


I get the following error:


org.apache.carbondata.processing.etl.DataLoadingException: The input file does not exist: hdfs://master:9000hdfs://master/opt/data/carbondata/pt/sample.csv
  at org.apache.spark.util.FileUtils$$anonfun$getPaths$1.apply$mcVI$sp(FileUtils.scala:66)
  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)


What confuses me is why the string "hdfs://master:9000" is prepended to 
"hdfs://master/opt/data/carbondata/pt/sample.csv". I can't find any 
configuration that contains "hdfs://master:9000". Could anyone help me?

Re: etl.DataLoadingException: The input file does not exist

2016-12-22 Thread Liang Chen
Hi

This is because you are running in cluster mode, but the input file is a local file.
1. If you use cluster mode, please load files from HDFS.
2. If you just want to load local files, please use local mode.
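
For example, a minimal sketch of both options (assuming the table test_table 
already exists and cc is your CarbonContext):

scala> // cluster mode: pass an HDFS URI directly, rather than a java.io.File path
scala> cc.sql("load data inpath 'hdfs://master:9000/carbondata/pt/sample.csv' into table test_table")

scala> // local mode: a plain local path works
scala> cc.sql("load data inpath '/carbondata/pt/sample.csv' into table test_table")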




[jira] [Created] (CARBONDATA-552) Unthrown FilterUnsupportedException in catch block

2016-12-22 Thread Jaechang Nam (JIRA)
Jaechang Nam created CARBONDATA-552:
---

 Summary: Unthrown FilterUnsupportedException in catch block
 Key: CARBONDATA-552
 URL: https://issues.apache.org/jira/browse/CARBONDATA-552
 Project: CarbonData
  Issue Type: Bug
  Components: core
Reporter: Jaechang Nam
Priority: Trivial


A new FilterUnsupportedException(e) is constructed but never thrown in 
core/src/main/java/org/apache/carbondata/scan/filter/resolver/RowLevelRangeFilterResolverImpl.java:

{code}
230   }
231 } catch (FilterIllegalMemberException e) {
232   new FilterUnsupportedException(e);
233 }
234 return filterValuesList;
{code}
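
Presumably the intent was to rethrow the wrapped exception. A sketch of the 
likely fix (assuming the enclosing method declares FilterUnsupportedException 
in its throws clause):

{code}
} catch (FilterIllegalMemberException e) {
  // propagate the wrapped exception instead of silently discarding it
  throw new FilterUnsupportedException(e);
}
{code}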





回复: etl.DataLoadingException: The input file does not exist

2016-12-22 Thread 251469031
Well, in the source code of CarbonData, the file type is determined as:


if (property.startsWith(CarbonUtil.HDFS_PREFIX)) {
  storeDefaultFileType = FileType.HDFS;
}


and CarbonUtil.HDFS_PREFIX = "hdfs://",


but when I run the following script, the dataFilePath is still local:


scala> val dataFilePath = new File("hdfs://master:9000/carbondata/sample.csv").getCanonicalPath
dataFilePath: String = /home/hadoop/carbondata/hdfs:/master:9000/carbondata/sample.csv
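
Note that java.io.File is purely local: it treats "hdfs://master:9000/..." as 
a relative path, collapses the double slash, and resolves it against the 
current working directory, which is why the result above starts with 
/home/hadoop/carbondata. A sketch showing that the scheme survives when the 
Hadoop Path API is used instead (illustrative only):

scala> import org.apache.hadoop.fs.Path
scala> val dataFilePath = new Path("hdfs://master:9000/carbondata/sample.csv").toString
dataFilePath: String = hdfs://master:9000/carbondata/sample.csv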






Re: carbondata-0.2 load data failed in yarn mode

2016-12-22 Thread Lu Cao
Hi team,
It looks like I've hit the same problem with the dictionary file being locked.
Could you share what configuration changes you made?

ERROR 23-12 09:55:26,222 - Executor task launch worker-0
java.lang.RuntimeException: Dictionary file vehsyspwrmod is locked for
updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:353)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR 23-12 09:55:26,223 - Executor task launch worker-7
java.lang.RuntimeException: Dictionary file vehindlightleft is locked for
updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:353)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR 23-12 09:55:26,223 - Executor task launch worker-4
java.lang.RuntimeException: Dictionary file vehwindowrearleft is locked for
updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:353)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR 23-12 09:55:26,226 - Exception in task 5.0 in stage 5.0 (TID 9096)
java.lang.RuntimeException: Dictionary file vehsyspwrmod is locked for
updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:353)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
ERROR 23-12 09:55:26,226 - Exception in task 13.0 in stage 5.0 (TID 9104)
java.lang.RuntimeException: Dictionary file vehwindowrearleft is locked for
updation. Please try after some time
at scala.sys.package$.error(package.scala:27)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:353)
at
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Re: carbondata-0.2 load data failed in yarn mode

2016-12-22 Thread QiangCai
I think the root cause is the metadata lock type.
Please add the "carbon.lock.type" configuration to carbon.properties as
follows:
#Local mode
carbon.lock.type=LOCALLOCK

#Cluster mode
carbon.lock.type=HDFSLOCK





Re: 回复: etl.DataLoadingException: The input file does not exist

2016-12-22 Thread QiangCai
Please find the following item in the carbon.properties file and give it a
proper path (e.g. hdfs://master:9000/):
carbon.ddl.base.hdfs.url

During loading, CarbonData will combine this URL with the data file path.

BTW, it's better to provide the version number.
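
For example, a sketch of how the two appear to combine, using the values from
this thread (illustrative only):

# carbon.properties
carbon.ddl.base.hdfs.url=hdfs://master:9000

scala> cc.sql(s"load data inpath '/carbondata/pt/sample.csv' into table test_table")
// effective input path: hdfs://master:9000/carbondata/pt/sample.csv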





[jira] [Created] (CARBONDATA-553) Create integration test-case for dataframe API

2016-12-22 Thread Rahul Kumar (JIRA)
Rahul Kumar created CARBONDATA-553:
--

 Summary: Create integration test-case for dataframe API
 Key: CARBONDATA-553
 URL: https://issues.apache.org/jira/browse/CARBONDATA-553
 Project: CarbonData
  Issue Type: Test
Reporter: Rahul Kumar








Re: 回复: etl.DataLoadingException: The input file does not exist

2016-12-22 Thread manish gupta
Hi 251469031,

Thanks for showing interest in CarbonData. For your question, please refer to
the explanation below.

scala> val dataFilePath = new File("hdfs://master:9000/carbondata/sample.csv").getCanonicalPath
dataFilePath: String = /home/hadoop/carbondata/hdfs:/master:9000/carbondata/sample.csv

If you use new File, it will always resolve the path against the local file
system. So in case you are not prepending the HDFS URL to the file/folder
path in the LOAD DATA DDL command, you can configure
carbon.ddl.base.hdfs.url in the carbon.properties file as suggested by
QiangCai.

carbon.ddl.base.hdfs.url=hdfs://<namenode-host>:<port>

example:
carbon.ddl.base.hdfs.url=hdfs://9.82.101.42:54310

Regards
Manish Gupta



Re: same query, but changing the value throws an error

2016-12-22 Thread QiangCai
Please provide the executor-side log.





[jira] [Created] (CARBONDATA-554) Maven build failed when running "mvn clean install -DskipTests"

2016-12-22 Thread Gin-zhj (JIRA)
Gin-zhj created CARBONDATA-554:
--

 Summary: Maven build failed when running "mvn clean install -DskipTests"
 Key: CARBONDATA-554
 URL: https://issues.apache.org/jira/browse/CARBONDATA-554
 Project: CarbonData
  Issue Type: Bug
Reporter: Gin-zhj
Assignee: Gin-zhj
Priority: Minor


Maven build failed because of checkstyle errors.





Re: carbondata-0.2 load data failed in yarn mode

2016-12-22 Thread manish gupta
Hi Lu Cao,

The problem you are facing, "Dictionary file is locked for updation", can
also occur when the path formed for the dictionary files is incorrect.

You have to set the carbon.properties file path on both the driver and the
executor side. In the Spark application master and executor logs you will
find the path printed for the dictionary files. Just validate that path
against the one you have configured in the carbon.properties file.
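
One common way to do this (a sketch; the conf directory is illustrative, and
the carbon.properties.filepath system property should be checked against your
version's installation guide):

spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dcarbon.properties.filepath=/opt/carbon/conf/carbon.properties" \
  --conf "spark.executor.extraJavaOptions=-Dcarbon.properties.filepath=/opt/carbon/conf/carbon.properties" \
  ...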

Regards
Manish Gupta


[jira] [Created] (CARBONDATA-555) Configure Integration testcases to be run on hadoop cluster

2016-12-22 Thread Rahul Kumar (JIRA)
Rahul Kumar created CARBONDATA-555:
--

 Summary: Configure Integration testcases to be run on hadoop 
cluster
 Key: CARBONDATA-555
 URL: https://issues.apache.org/jira/browse/CARBONDATA-555
 Project: CarbonData
  Issue Type: Improvement
Reporter: Rahul Kumar








Re: Greater than/less-than/Like filters optimization

2016-12-22 Thread Venkata Gollamudi
+1, please raise an issue for this improvement.

On Thu, Dec 22, 2016 at 7:24 AM, Kumar Vishal 
wrote:

> Hi Sujith,
> +1 I think this will be a good optimization for dictionary column.
>
> -Regards
> Kumar Vishal
>
> On Mon, Dec 12, 2016 at 3:26 AM, sujith chacko <
> sujithchacko.2...@gmail.com>
> wrote:
>
> > Hi All,
> >
> >   I have a suggestion for improving filter queries that require expression
> > evaluation to identify their dictionary values.
> >
> > Current design
> > In greater-than/less-than/LIKE filters, the system first iterates over each
> > row in the dictionary cache, applying the filter expression to identify the
> > valid actual filter members. Once evaluation is done, it holds the list of
> > identified member values (Strings); in the next step it looks up the
> > dictionary cache again to find the dictionary surrogate values of those
> > members. This lookup is an additional cost to the system, even though it is
> > implemented as a binary search over the dictionary cache.
> >
> > Proposed design/solution:
> > Identify the dictionary surrogate values during the filter expression
> > evaluation step itself, while the actual dictionary values are being
> > scanned to identify valid filter members.
> >
> > Keep a dictionary counter variable that is incremented as the system
> > iterates through the dictionary cache to retrieve each actual member. The
> > system then evaluates each member against the filter expression to decide
> > whether it is a valid filter member; during this same pass, the current
> > counter value can be taken as the selected dictionary surrogate, since the
> > actual member values and their surrogate values are kept in the same order
> > in the dictionary cache as the iteration order.
> >
> > This eliminates the further dictionary lookup step that was needed to
> > retrieve the surrogate value for each identified valid filter member, and
> > it can significantly improve the performance of filter queries that require
> > expression evaluation against the dictionary cache, such as
> > greater-than/less-than/LIKE filters.
> >
> > Note: this optimization is applicable only to dictionary columns.
> >
> > Please let me know your inputs/suggestions.
> >
> > Thanks,
> > Sujith
> >
>
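
A minimal sketch of the proposed single-pass selection (illustrative only;
the member iterator and predicate are hypothetical stand-ins, not CarbonData
APIs):

import scala.collection.mutable.ListBuffer

// One pass over the dictionary cache: the iteration counter doubles as the
// surrogate key, so the second binary-search lookup is no longer needed.
def selectSurrogates(members: Iterator[String], matches: String => Boolean): List[Int] = {
  var surrogate = 0  // surrogates assumed to start at 1 and follow iteration order
  val selected = ListBuffer[Int]()
  for (member <- members) {
    surrogate += 1
    if (matches(member)) selected += surrogate  // record the counter, not the member string
  }
  selected.toList
}

// Example: selectSurrogates(Seq("apple", "mango", "zebra").iterator, _ > "m")
// returns List(2, 3)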