Re: [DISCUSSION] Support Database Location Configuration while Creating Database

2017-10-05 Thread Sea
Hi Khan,
I have some questions about your design:
1. It looks like the following command is supported by spark (hive):

CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
  [COMMENT 'database_comment']
  [LOCATION hdfs_path];
2. For spark and hive, the default tablePath is `tablePath =
databaseLocation + "/" + tableName`, right? I think keeping the default
behavior is better.
For backward compatibility, we also support
tablePath = carbon.storeLocation + "/" + database_name + "/" + tableName
(both schemes are sketched after question 3 below).
   

3. What does `Carbon.update.sync.folder` mean?
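
For points 1 and 2, a minimal sketch of how the two path schemes work out
(the values of databaseLocation and carbon.storeLocation below are
hypothetical, chosen only for illustration):

    // Default (hive/spark style): table data sits under the database location.
    val databaseLocation = "hdfs://ns1/user/warehouse/db1.db" // hypothetical
    val tableName = "t1"
    val defaultTablePath = databaseLocation + "/" + tableName
    // => hdfs://ns1/user/warehouse/db1.db/t1

    // Backward-compatible (current carbon behaviour): table data sits under
    // carbon.storeLocation, keyed by database and table name.
    val carbonStoreLocation = "hdfs://ns1/user/carbon/store"  // hypothetical
    val databaseName = "db1"
    val compatTablePath = carbonStoreLocation + "/" + databaseName + "/" + tableName
    // => hdfs://ns1/user/carbon/store/db1/t1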
   

-- Original --
From:  "Mohammad Shahid Khan";;
Date:  Tue, Oct 3, 2017 09:06 PM
To:  "dev";

Subject:  [DISCUSSION] Support Database Location Configuration while Creating 
Database



Hi Dev,
Please find the design document for Support Database Location
Configuration while Creating Database.


Regards,
Shahid

Re: [DISCUSSION] Support Database Location Configuration while Creating Database

2017-10-05 Thread Mohammad Shahid Khan
Thank you for the clarification.

On 6 Oct 2017 05:08, "Jacky Li"  wrote:

> I mean, whether or not spark.sql.warehouse.dir is set by the user,
> the carbon core should not be aware of databases, and the related
> path construction logic should be kept in spark or the
> spark-integration module only. We should achieve that: inside
> carbon, it should only know that the upper layer has specified
> a table location to write table data.
>
> All database concepts and commands should be managed by the
> upper layer. This does not conflict with your requirement.
>
> Regards,
> Jacky
>
> Sent from Smartisan Nut Pro
> On 5 Oct 2017 at 11:56 PM, Mohammad Shahid Khan wrote:
>
> In the case of spark, where to write the table content is decided in two
> ways.
>
> 1. If a table is created under a database for which the location attribute is
> not configured by the end user,
>    then the content of the table will be written under
> "spark.sql.warehouse.dir".
>    For example: if spark.sql.warehouse.dir = /opt/hive/warehouse/
>      then the table content will be written at /opt/hive/warehouse/
>
> 2. But if the location attribute is set by the user while creating the
> database,
>    then the content of any table created under that database will be
> written to the configured location.
>    For example, if for database x the user sets location '/user/custom/warehouse',
>
>    then a table created under database x will be written at
> '/user/custom/warehouse'.
>
>
>    Currently for carbon we have customized the store writing and always
>    write to a fixed path, i.e. 'spark.sql.warehouse.dir'.
>
>    This proposal is to address the same issue.
>
>
>
>   Q 1. The compute engine can manage the carbon table location the same way
>    as ORC and parquet tables, and the user uses the same API or SQL syntax
>    to create a carbondata table, like `df.format("carbondata").save("path")`
>    using the spark dataframe API. There should be no carbon storePath
>    involved.
>
>    A. I think this is a different requirement; it does not even consider the
>    database and table. Here the table content will simply be written at the
>    desired location in the specified format.
>
>   Q 2. The user should be able to save a table in an HDFS location or an S3
>    location in the same context. Since there are several carbon properties
>    involved when determining the FS type, such as the LOCK file, etc., it is
>    not possible to create tables on HDFS and on S3 in the same context,
>    which also breaks the table level abstraction.
>
>    A. This requirement is for the viewfs file system, where different
>    databases can lie in different nameservices.
>
>
> On Wed, Oct 4, 2017 at 7:09 PM, Jacky Li  wrote:
>
> > Hi,
> >
> > What carbon provides are two levels of concepts:
> > 1. File format, which can be used by a compute engine to write and read
> > data. CarbonData is a self-describing and type-aware columnar file format
> > for the Hadoop environment, which is just what orc and parquet provide.
> >
> > 2. Table level storage, which includes not just the file format but also
> > the aggregated index file (datamap), global dictionary, and segment
> > metadata. It provides more functionality regarding segment management and
> > SQL optimization (like lazy decode) through deep integration with the
> > compute engine (currently only spark deep integration is supported).
> >
> > In my opinion, these two levels of abstraction are the core of the
> > carbondata project. But the database concept should belong to the compute
> > engine, which manages the store level metadata, since spark, hive, and
> > presto all have this part in their layer.
> >
> > I think what carbondata is currently missing is that, for table level
> > storage, the user should be able to specify the table location to save the
> > table data. This is to achieve:
> > 1. The compute engine can manage the carbon table location the same way as
> > ORC and parquet tables. And the user uses the same API or SQL syntax to
> > create a carbondata table, like `df.format("carbondata").save("path")`
> > using the spark dataframe API. There should be no carbon storePath involved.
> >
> > 2. The user should be able to save a table in an HDFS location or an S3
> > location in the same context. Since there are several carbon properties
> > involved when determining the FS type, such as the LOCK file, etc., it is
> > not possible to create tables on HDFS and on S3 in the same context, which
> > also breaks the table level abstraction.
> >
> > Regards,
> > Jacky
> >
> > > On 3 Oct 2017, at 10:36 PM, Mohammad Shahid Khan <
> > > mohdshahidkhan1...@gmail.com> wrote:
> > >
> > > Hi Dev,
> > > Please find the design document for Support Database Location
> > Configuration while Creating Database.
> > >
> > > Regards,
> > > Shahid
> > > 
> >
> >
> >
> >
>
>


Re: [DISCUSSION] Support Database Location Configuration while Creating Database

2017-10-05 Thread Mohammad Shahid Khan
In the case of spark, where to write the table content is decided in two ways.

1. If a table is created under a database for which the location attribute is
not configured by the end user,
   then the content of the table will be written under
"spark.sql.warehouse.dir".
   For example: if spark.sql.warehouse.dir = /opt/hive/warehouse/
     then the table content will be written at /opt/hive/warehouse/

2. But if the location attribute is set by the user while creating the
database,
   then the content of any table created under that database will be
written to the configured location.
   For example, if for database x the user sets location '/user/custom/warehouse',

   then a table created under database x will be written at
'/user/custom/warehouse'.


   Currently for carbon we have customized the store writing and always
   write to a fixed path, i.e. 'spark.sql.warehouse.dir'.

   This proposal is to address the same issue.
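
A minimal sketch of the two cases described above, using the example paths
from this mail (the session setup, database and table names, and the
STORED BY clause are illustrative; the second case is the proposed
behaviour rather than what carbon does today):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("database-location-example")
      .config("spark.sql.warehouse.dir", "/opt/hive/warehouse/")
      .getOrCreate()

    // Case 1: database created without LOCATION -> table content is written
    // under spark.sql.warehouse.dir (/opt/hive/warehouse/ here).
    spark.sql("CREATE DATABASE IF NOT EXISTS db_default")
    spark.sql("CREATE TABLE db_default.t1 (id INT) STORED BY 'carbondata'")

    // Case 2: database created with LOCATION -> table content is written
    // under the configured location (/user/custom/warehouse here).
    spark.sql("CREATE DATABASE IF NOT EXISTS x LOCATION '/user/custom/warehouse'")
    spark.sql("CREATE TABLE x.t2 (id INT) STORED BY 'carbondata'")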



  Q 1. The compute engine can manage the carbon table location the same way
   as ORC and parquet tables, and the user uses the same API or SQL syntax
   to create a carbondata table, like `df.format("carbondata").save("path")`
   using the spark dataframe API. There should be no carbon storePath
   involved.

   A. I think this is a different requirement; it does not even consider the
   database and table. Here the table content will simply be written at the
   desired location in the specified format.
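
   For reference, the dataframe-API form referred to in Q 1 would look
   roughly like this (a sketch assuming an existing SparkSession named
   `spark`; the path is hypothetical, and whether the carbon data source
   accepts a bare save(path) is exactly what this requirement is about):

       // Write a dataframe as a carbondata table at an explicit path,
       // with no carbon storePath involved.
       val df = spark.range(10).toDF("id")
       df.write.format("carbondata").save("/user/custom/warehouse/t1")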

  Q 2. The user should be able to save a table in an HDFS location or an S3
   location in the same context. Since there are several carbon properties
   involved when determining the FS type, such as the LOCK file, etc., it is
   not possible to create tables on HDFS and on S3 in the same context,
   which also breaks the table level abstraction.

   A. This requirement is for the viewfs file system, where different
   databases can lie in different nameservices.
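
   A sketch of the viewfs scenario mentioned in the answer above (mount
   point, nameservice names, and paths are hypothetical):

       // Two databases whose locations resolve to different nameservices.
       spark.sql("CREATE DATABASE sales_db LOCATION 'viewfs://cluster/ns1/warehouse/sales_db'")
       spark.sql("CREATE DATABASE logs_db LOCATION 'viewfs://cluster/ns2/warehouse/logs_db'")
       // Tables created under sales_db and logs_db are then written to
       // different nameservices, which is what this requirement enables.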


On Wed, Oct 4, 2017 at 7:09 PM, Jacky Li  wrote:

> Hi,
>
> What carbon provides are two levels of concepts:
> 1. File format, which can be used by a compute engine to write and read
> data. CarbonData is a self-describing and type-aware columnar file format
> for the Hadoop environment, which is just what orc and parquet provide.
>
> 2. Table level storage, which includes not just the file format but also
> the aggregated index file (datamap), global dictionary, and segment metadata.
> It provides more functionality regarding segment management and SQL
> optimization (like lazy decode) through deep integration with the compute
> engine (currently only spark deep integration is supported).
>
> In my opinion, these two levels of abstraction are the core of the carbondata
> project. But the database concept should belong to the compute engine, which
> manages the store level metadata, since spark, hive, and presto all have
> this part in their layer.
>
> I think what carbondata is currently missing is that, for table level storage,
> the user should be able to specify the table location to save the table data.
> This is to achieve:
> 1. The compute engine can manage the carbon table location the same way as ORC
> and parquet tables. And the user uses the same API or SQL syntax to create
> a carbondata table, like `df.format("carbondata").save("path")` using the
> spark dataframe API. There should be no carbon storePath involved.
>
> 2. The user should be able to save a table in an HDFS location or an S3
> location in the same context. Since there are several carbon properties
> involved when determining the FS type, such as the LOCK file, etc., it is not
> possible to create tables on HDFS and on S3 in the same context, which also
> breaks the table level abstraction.
>
> Regards,
> Jacky
>
> > On 3 Oct 2017, at 10:36 PM, Mohammad Shahid Khan wrote:
> >
> > Hi Dev,
> > Please find the design document for Support Database Location
> Configuration while Creating Database.
> >
> > Regards,
> > Shahid
> > 
>
>
>
>


Re: [DISCUSSION] support user specified segment reading for query

2017-10-05 Thread Rahul Kumar
@Ravindra Thanks. Using a query hint may be a better approach.

I suggest the following syntax:

select * from t1 [in SEGMENTS(1,3,5)];
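
For reference, a consolidated sketch of the approaches discussed in this
thread, assuming an existing session named `spark`; the property name, the
threadSet API, and both the hint and bracket syntaxes are proposals from
this discussion, not finalised features:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global
    import org.apache.spark.sql.CarbonSession // assumed home of the proposed threadSet API

    // 1. Session-level SET command (original proposal; can clash when two
    //    beeline sessions or threads share the same property):
    spark.sql("SET carbon.input.segments.default.carbontable=1,3,5")
    spark.sql("select * from carbontable").show()
    spark.sql("SET carbon.input.segments.default.carbontable=*")

    // 2. Thread-level override for multithreaded use (see the quoted reply below):
    Future {
      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "1,3,5")
      spark.sql("select * from carbontable").show()
      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "*")
    }

    // 3. Query hint (Ravindra's suggestion) or the bracket syntax suggested above:
    spark.sql("SELECT /*+ SEGMENTS(1,3,5) */ * FROM t1").show()
    // select * from t1 [in SEGMENTS(1,3,5)];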

  Thanks and Regards

   Rahul Kumar



On Thu, Oct 5, 2017 at 1:04 PM, Ravindra Pesala 
wrote:

> Hi,
>
> Instead of using the SET command for segments, why don't you use a QUERY
> HINT? Using a query hint we can mention the segments inside the query
> itself as a hint.
>
> For example: SELECT /*+ SEGMENTS(1,3,5) */ * FROM t1
>
> By using the above custom hint we can query the selected segments only.
> This concept is also supported in Spark, and it will be helpful in any
> future optimizations.
>
> Regards,
> Ravindra.
>
> On 5 October 2017 at 12:22, Rahul Kumar  wrote:
>
> > @Jacky please find the replies to your doubts below:
> >
> >
> > 1. If the user uses the following commands in two different beeline
> >    sessions, will there be a problem due to multithreading?
> >    SET carbon.input.segments.default.carbontable=1,3,5;
> >    select * from carbontable;
> >    SET carbon.input.segments.default.carbontable=*;
> >
> > Ans: In the case of multithreading, yes, there will be a problem.
> >
> >    So threadSet() can be used to set the same property in multithreaded
> >    mode. The following syntax can be used to set segment ids in
> >    multithreaded mode:
> >
> >    Syntax: CarbonSession.threadSet("carbon.input.segments.<database_name>.<table_name>",
> >            "<list of segment ids>")
> >
> >    e.g. => future {
> >      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "1,3,5")
> >      sparkSession.sql("select * from carbontable").show
> >      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "*")
> >    }
> >
> > The above will override the property at the thread level, so the property
> > will be set per thread.
> >
> >
> > 2. The RESET command is not clear, why is this needed? It seems SET
> >    carbon.input.segments.default.carbontable=* is enough, right? And what
> >    parameter does it take?
> >
> > Ans: The RESET command doesn't take any parameter. RESET is already
> > implemented behavior which resets all the properties to their default
> > values. So, similarly, a RESET query will also set the above property to
> > its default value.
> >
> >   Thanks and Regards
> >
> >    Rahul Kumar
> >
> >
> >
> > On Wed, Oct 4, 2017 at 7:21 PM, Jacky Li  wrote:
> >
> > > I have 2 doubts:
> > > 1. If the user uses the following commands in two different beeline
> > > sessions, will there be a problem due to multithreading?
> > > SET carbon.input.segments.default.carbontable=1,3,5;
> > > select * from carbontable;
> > > SET carbon.input.segments.default.carbontable=*;
> > >
> > >
> > > 2. The RESET command is not clear, why is this needed? It seems SET
> > > carbon.input.segments.default.carbontable=* is enough, right? And what
> > > parameter does it take?
> > >
> > > Regards,
> > > Jacky
> > >
> > > > On 4 Oct 2017, at 12:42 AM, Rahul Kumar wrote:
> > > >
> > > > 
> > >
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Ravi
>


Re: [DISCUSSION] support user specified segment reading for query

2017-10-05 Thread Ravindra Pesala
Hi,

Instead of using the SET command for segments, why don't you use a QUERY
HINT? Using a query hint we can mention the segments inside the query
itself as a hint.

For example: SELECT /*+ SEGMENTS(1,3,5) */ * FROM t1

By using the above custom hint we can query the selected segments only.
This concept is also supported in Spark, and it will be helpful in any
future optimizations.

Regards,
Ravindra.

On 5 October 2017 at 12:22, Rahul Kumar  wrote:

> @Jacky please find the replies to your doubts below:
>
>
> 1. If the user uses the following commands in two different beeline
>    sessions, will there be a problem due to multithreading?
>    SET carbon.input.segments.default.carbontable=1,3,5;
>    select * from carbontable;
>    SET carbon.input.segments.default.carbontable=*;
>
> Ans: In the case of multithreading, yes, there will be a problem.
>
>    So threadSet() can be used to set the same property in multithreaded
>    mode. The following syntax can be used to set segment ids in
>    multithreaded mode:
>
>    Syntax: CarbonSession.threadSet("carbon.input.segments.<database_name>.<table_name>",
>            "<list of segment ids>")
>
>    e.g. => future {
>      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "1,3,5")
>      sparkSession.sql("select * from carbontable").show
>      CarbonSession.threadSet("carbon.input.segments.default.carbontable", "*")
>    }
>
> The above will override the property at the thread level, so the property
> will be set per thread.
>
>
> 2. The RESET command is not clear, why is this needed? It seems SET
>    carbon.input.segments.default.carbontable=* is enough, right? And what
>    parameter does it take?
>
> Ans: The RESET command doesn't take any parameter. RESET is already
> implemented behavior which resets all the properties to their default
> values. So, similarly, a RESET query will also set the above property to
> its default value.
>
>   Thanks and Regards
>
>    Rahul Kumar
>
>
>
> On Wed, Oct 4, 2017 at 7:21 PM, Jacky Li  wrote:
>
> > I have 2 doubts:
> > 1. If the user uses the following commands in two different beeline
> > sessions, will there be a problem due to multithreading?
> > SET carbon.input.segments.default.carbontable=1,3,5;
> > select * from carbontable;
> > SET carbon.input.segments.default.carbontable=*;
> >
> >
> > 2. The RESET command is not clear, why is this needed? It seems SET
> > carbon.input.segments.default.carbontable=* is enough, right? And what
> > parameter does it take?
> >
> > Regards,
> > Jacky
> >
> > > On 4 Oct 2017, at 12:42 AM, Rahul Kumar wrote:
> > >
> > > 
> >
> >
>



-- 
Thanks & Regards,
Ravi