Re: Embedding Calcite, adjusting convertlets

2016-11-17 Thread Julian Hyde
I was wrong earlier… FrameworkConfig already has a getConvertletTable method. 
But regarding using FrameworkConfig from within the JDBC driver, it’s 
complicated. FrameworkConfig only works if you are “outside” Calcite, whereas 
CalcitePrepare is for when you are customizing from the inside, and sadly 
CalcitePrepare does not use a FrameworkConfig.

Compare and contrast:
 * CalcitePrepareImpl.getSqlToRelConverter [ https://github.com/apache/calcite/blob/3f92157d5742dd10f3b828d22d7a753e0a2899cc/core/src/main/java/org/apache/calcite/prepare/CalcitePrepareImpl.java#L1114 ]
 * PlannerImpl.rel [ https://github.com/apache/calcite/blob/105bba1f83cd9631e8e1211d262e4886a4a863b7/core/src/main/java/org/apache/calcite/prepare/PlannerImpl.java#L225 ]

The latter uses a convertletTable sourced from a FrameworkConfig. 

The ideal thing would be to get CalcitePrepareImpl to use a PlannerImpl to do 
its dirty work. Then “inside” and “outside” would work the same. Would 
definitely appreciate that as a patch.
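[Editor's note: for the “outside” path, a minimal sketch of wiring a custom convertlet table through FrameworkConfig might look like the following. The schema setup and the custom table are assumptions for illustration, not code from this thread.]

```java
import org.apache.calcite.rel.RelRoot;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql2rel.SqlRexConvertletTable;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

public class OutsideExample {
  /** Parses, validates and converts a query using an explicit
   * convertlet table (e.g. one without convertExtract). */
  public static RelRoot convert(SchemaPlus rootSchema,
      SqlRexConvertletTable convertletTable, String sql) throws Exception {
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .convertletTable(convertletTable)
        .build();
    Planner planner = Frameworks.getPlanner(config);
    SqlNode parsed = planner.parse(sql);
    SqlNode validated = planner.validate(parsed);
    return planner.rel(validated);  // PlannerImpl.rel uses the configured table
  }
}
```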

If you choose to go the JDBC driver route, you could override 
Driver.createPrepareFactory to produce a sub-class of CalcitePrepare that works 
for your environment, one with an explicit convertletTable rather than just 
using the default.
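[Editor's note: a sketch of that route, using the Driver.createPrepareFactory hook described above. The override point inside the prepare class is hypothetical; exactly which CalcitePrepareImpl method consults the convertlet table would need to be checked against the Calcite version in use.]

```java
import org.apache.calcite.jdbc.CalcitePrepare;
import org.apache.calcite.linq4j.function.Function0;
import org.apache.calcite.prepare.CalcitePrepareImpl;

public class MyDriver extends org.apache.calcite.jdbc.Driver {
  @Override protected Function0<CalcitePrepare> createPrepareFactory() {
    // Replaces CalcitePrepare.DEFAULT_FACTORY with our own prepare class.
    return MyPrepare::new;
  }

  /** A prepare implementation that would supply an explicit convertlet
   * table (e.g. one without convertExtract) rather than the default. */
  public static class MyPrepare extends CalcitePrepareImpl {
    // Hypothetical override point: the method that yields the
    // SqlRexConvertletTable used when building the SqlToRelConverter.
  }
}
```

This driver would need to be registered in place of the standard Calcite driver before opening "jdbc:calcite:" connections.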

Julian


> On Nov 17, 2016, at 5:01 PM, Gian Merlino  wrote:
> 
> Hey Julian,
> 
> If the convertlets were customizable with a FrameworkConfig, how would I
> use that to configure the JDBC driver (given that I'm doing it with the code
> upthread)? Or would that suggest using a different approach to embedding
> Calcite?
> 
> Gian
> 
> On Thu, Nov 17, 2016 at 4:02 PM, Julian Hyde  wrote:
> 
>> Convertlets have a similar effect to planner rules (albeit they act on
>> scalar expressions, not relational expressions) so people should be able to
>> change the set of active convertlets.
>> 
>> Would you like to propose a change that makes the convertlet table
>> pluggable? Maybe as part of FrameworkConfig? Regardless, please log a JIRA
>> to track this.
>> 
>> And by the way, RexImpTable, which defines how operators are implemented
>> by generating java code, should also be pluggable. It’s been on my mind for
>> a long time to allow the “engine” — related to the data format, and how
>> code is generated to access fields and evaluate expressions and operators —
>> to be pluggable.
>> 
>> Regarding whether the JDBC driver is the right way to embed Calcite.
>> There’s no easy answer. You might want to embed Calcite as a library in
>> your own server (as Drill and Hive do). Or you might want to make yourself
>> just an adapter that runs inside a Calcite JDBC server (as the CSV adapter
>> does). Or something in the middle, like what Phoenix does: using Calcite
>> for JDBC, SQL, planning, but with your own metadata and runtime engine.
>> 
>> As long as you build the valuable stuff into planner rules, new relational
>> operators (if necessary) and use the schema SPI, you should be able to
>> change packaging in the future.
>> 
>> Julian
>> 
>> 
>> 
>> 
>>> On Nov 17, 2016, at 1:59 PM, Gian Merlino  wrote:
>>> 
>>> Hey Calcites,
>>> 
>>> I'm working on embedding Calcite into Druid (http://druid.io/,
>>> https://github.com/druid-io/druid/pull/3682) and am running into a
>> problem
>>> that is making me wonder if the approach I'm using makes sense.
>>> 
>>> Consider the expression EXTRACT(YEAR FROM __time). Calcite has a standard
>>> convertlet rule "convertExtract" that changes this into some arithmetic
>> on
>>> __time casted to an int type. But Druid has some builtin functions to do
>>> this, and I'd rather use those than arithmetic (for a bunch of reasons).
>>> Ideally, in my RelOptRules that convert Calcite rels to Druid queries,
>> I'd
>>> see the EXTRACT as a normal RexCall with the time flag and an expression
>> to
>>> apply it to. That's a lot easier to translate than the arithmetic stuff,
>>> which I'd have to pattern match and undo first before translating.
>>> 
>>> So the problem I have is that I want to disable convertExtract, but I
>> don't
>>> see a way to do that or to swap out the convertlet table.
>>> 
>>> The code I'm using to set up a connection is:
>>> 
>>> public CalciteConnection createCalciteConnection(
>>> final DruidSchema druidSchema
>>> ) throws SQLException
>>> {
>>>   final Properties props = new Properties();
>>>   props.setProperty("caseSensitive", "true");
>>>   props.setProperty("unquotedCasing", "UNCHANGED");
>>>   final Connection connection =
>>> DriverManager.getConnection("jdbc:calcite:", props);
>>>   final CalciteConnection calciteConnection =
>>> connection.unwrap(CalciteConnection.class);
>>>   calciteConnection.getRootSchema().setCacheEnabled(false);
>>>   calciteConnection.getRootSchema().add(DRUID_SCHEMA_NAME, druidSchema);
>>>   return calciteConnection;
>>> }

Re: Spark with Calcite JDBC and Druid adapter

2016-11-17 Thread Julian Hyde
I see that in one place hashTagId is quoted, and in another it is not. So, 
whoever is generating the SQL (Spark?) is not being consistent, which is a 
worry.

In the default lexical convention, unquoted columns are converted to upper 
case. But your column is mixed case. So you need to fix Spark to generate 
appropriate quoting, or use lex=JAVA in Calcite so that unquoted columns stay 
the same case.
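[Editor's note: as a sketch of the second option, lex can be passed as a connection property; the model path below is a placeholder.]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class LexJavaExample {
  public static Connection connect() throws Exception {
    Properties props = new Properties();
    // lex=JAVA: unquoted identifiers keep their case and are matched
    // case-sensitively, so an unquoted hashTagId resolves as written.
    props.setProperty("lex", "JAVA");
    return DriverManager.getConnection(
        "jdbc:calcite:model=/path/to/model.json", props);  // placeholder path
  }
}
```

Equivalently, the setting can be appended to the URL itself, e.g. "jdbc:calcite:model=/path/to/model.json;lex=JAVA".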

I’m glad to see that “value” and “count” are quoted. They are SQL reserved 
words, so they have to be. But if you could change the column names to 
something else it will make your life easier.

Julian

> On Nov 15, 2016, at 10:33 AM, herman...@teeupdata.com wrote:
> 
> Played around with Calcite JDBC settings, especially lexical ones. Some settings 
> return an empty set (the default, with caseSensitive=false) when there is a 
> filter/join; some just fail at the parsing phase (e.g. lex=JAVA):
> 
> java.sql.SQLException: Error while preparing statement [SELECT 
> "timestamp","commentorId","hashTagId","value","count" FROM yyy WHERE 
> (hashTagId IS NOT NULL) AND (hashTagId = 'hashTag_01')]
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement_(CalciteConnectionImpl.java:204)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement(CalciteConnectionImpl.java:186)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.prepareStatement(CalciteConnectionImpl.java:87)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:264)
> 
> With the default settings, after removing "(hashTagId IS NOT NULL) AND" from the 
> where clause, the correct result set is returned. So it does seem to me that this 
> is a Calcite configuration issue. Does anybody have any experience using Calcite 
> JDBC with Spark?
> 
> thanks
> Herman.
> 
> 
>> On Nov 15, 2016, at 10:45, herman...@teeupdata.com wrote:
>> 
>> Hi everyone,
>> 
>> When accessing Druid through Calcite JDBC and Druid adapter from Spark, I 
>> have been experiencing strange results. 
>> 
>> Druid data schema is defined as:
>> 
>>   {
>> "type": "custom",
>> "name": "xxx",
>> "factory": "org.apache.calcite.adapter.druid.DruidSchemaFactory",
>> "operand": {
>>   "url": "http://:8082",
>>   "coordinatorUrl": "http://:8081"
>> },
>> "tables": [
>>   {
>> "name": "yyy",
>> "factory": "org.apache.calcite.adapter.druid.DruidTableFactory",
>> "operand": {
>> "dataSource": "t",
>>   "interval": "2016-11-08T00:00:00.000Z/2016-12-31T00:00:00.000Z",
>>   "timestampColumn": "timestamp",
>>   "dimensions": [
>> "commentorId", 
>> "hashTagId",
>> "value"
>>   ],
>> "metrics": [
>>  {
>>"type" : "count",
>>"name" : "count"
>>  }
>> ]
>> }
>>   }
>> ]
>> 
>> with a JDBC client, queries like these work fine:
>> 1. select * from yyy where hashTagId='hashTag_01'
>> 2. select badcount.hashTagId, badcount.bad, totalcount.total,
>>        badcount.bad/totalcount.total*100 as bad_pct
>>    from (select hashTagId, cast(count(*) as double) as bad
>>          from yyy where value='bad' group by hashTagId) as badcount
>>    join (select hashTagId, cast(count(*) as double) as total
>>          from yyy group by hashTagId) as totalcount
>>    on (badcount.hashTagId = totalcount.hashTagId)
>> 
>> However, in Spark 2.0, the behavior is strange:
>> 1. df_yyy = spark.read.format("jdbc").option("url", "jdbc:calcite:model=<path 
>> to schema json>;caseSensitive=false")…
>> 2. df_yyy.show() —— works fine, returns all records
>> 3. df_yyy.filter($"hashTagId" === "hashTag_01").count() — returns the correct 
>> number of records
>> 4. df_yyy.filter($"hashTagId" === "hashTag_01").show() — returns an empty result set
>> 5. df_yyy.join(, ).show() —— 
>> returns an empty result set (any join returns an empty result set)
>> 
>> I suspect there are conflicts between how Spark parses SQL and how 
>> Calcite JDBC does. Are there special properties to set in the JDBC string to 
>> make it work with Spark? Is there a Calcite JDBC log file that I can dig 
>> through? I did some googling and don’t see similar usage with 
>> spark/calcite/druid. Is this the right way to access Druid from Spark? (Maybe 
>> this is a question better for the Spark/Druid community…)
>> 
>> Thanks.
>> Herman.
>> 
> 



Re: Embedding Calcite, adjusting convertlets

2016-11-17 Thread Gian Merlino
Hey Julian,

If the convertlets were customizable with a FrameworkConfig, how would I
use that to configure the JDBC driver (given that I'm doing it with the code
upthread)? Or would that suggest using a different approach to embedding
Calcite?

Gian

On Thu, Nov 17, 2016 at 4:02 PM, Julian Hyde  wrote:

> Convertlets have a similar effect to planner rules (albeit they act on
> scalar expressions, not relational expressions) so people should be able to
> change the set of active convertlets.
>
> Would you like to propose a change that makes the convertlet table
> pluggable? Maybe as part of FrameworkConfig? Regardless, please log a JIRA
> to track this.
>
> And by the way, RexImpTable, which defines how operators are implemented
> by generating java code, should also be pluggable. It’s been on my mind for
> a long time to allow the “engine” — related to the data format, and how
> code is generated to access fields and evaluate expressions and operators —
> to be pluggable.
>
> Regarding whether the JDBC driver is the right way to embed Calcite.
> There’s no easy answer. You might want to embed Calcite as a library in
> your own server (as Drill and Hive do). Or you might want to make yourself
> just an adapter that runs inside a Calcite JDBC server (as the CSV adapter
> does). Or something in the middle, like what Phoenix does: using Calcite
> for JDBC, SQL, planning, but with your own metadata and runtime engine.
>
> As long as you build the valuable stuff into planner rules, new relational
> operators (if necessary) and use the schema SPI, you should be able to
> change packaging in the future.
>
> Julian
>
>
>
>
> > On Nov 17, 2016, at 1:59 PM, Gian Merlino  wrote:
> >
> > Hey Calcites,
> >
> > I'm working on embedding Calcite into Druid (http://druid.io/,
> > https://github.com/druid-io/druid/pull/3682) and am running into a
> problem
> > that is making me wonder if the approach I'm using makes sense.
> >
> > Consider the expression EXTRACT(YEAR FROM __time). Calcite has a standard
> > convertlet rule "convertExtract" that changes this into some arithmetic
> on
> > __time casted to an int type. But Druid has some builtin functions to do
> > this, and I'd rather use those than arithmetic (for a bunch of reasons).
> > Ideally, in my RelOptRules that convert Calcite rels to Druid queries,
> I'd
> > see the EXTRACT as a normal RexCall with the time flag and an expression
> to
> > apply it to. That's a lot easier to translate than the arithmetic stuff,
> > which I'd have to pattern match and undo first before translating.
> >
> > So the problem I have is that I want to disable convertExtract, but I
> don't
> > see a way to do that or to swap out the convertlet table.
> >
> > The code I'm using to set up a connection is:
> >
> >  public CalciteConnection createCalciteConnection(
> >  final DruidSchema druidSchema
> >  ) throws SQLException
> >  {
> >final Properties props = new Properties();
> >props.setProperty("caseSensitive", "true");
> >props.setProperty("unquotedCasing", "UNCHANGED");
> >final Connection connection =
> > DriverManager.getConnection("jdbc:calcite:", props);
> >final CalciteConnection calciteConnection =
> > connection.unwrap(CalciteConnection.class);
> >calciteConnection.getRootSchema().setCacheEnabled(false);
> >calciteConnection.getRootSchema().add(DRUID_SCHEMA_NAME,
> druidSchema);
> >return calciteConnection;
> >  }
> >
> > This CalciteConnection is then used by the Druid HTTP server to offer a
> SQL
> > API.
> >
> > Is there some way to swap out the convertlet table that I'm missing?
> >
> > Also, just in general, am I going about this the right way? Is using the
> > JDBC driver the right way to embed Calcite? Or should I be calling into
> it
> > at some lower level?
> >
> > Thanks!
> >
> > Gian
>
>


Re: Embedding Calcite, adjusting convertlets

2016-11-17 Thread herman...@teeupdata.com

I have been trying to access Druid from Spark via Calcite JDBC, but somehow the SQL 
statements generated by Spark cause exceptions. I am not sure if it is Calcite JDBC 
or Druid related. Have you seen this, or do you think Spark -> Calcite JDBC -> Druid 
is the right way to connect Spark and Druid?

Thanks
Herman.


> On Nov 17, 2016, at 19:02, Julian Hyde  wrote:
> 
> Convertlets have a similar effect to planner rules (albeit they act on scalar 
> expressions, not relational expressions) so people should be able to change 
> the set of active convertlets.
> 
> Would you like to propose a change that makes the convertlet table pluggable? 
> Maybe as part of FrameworkConfig? Regardless, please log a JIRA to track this.
> 
> And by the way, RexImpTable, which defines how operators are implemented by 
> generating java code, should also be pluggable. It’s been on my mind for a 
> long time to allow the “engine” — related to the data format, and how code is 
> generated to access fields and evaluate expressions and operators — to be 
> pluggable.
> 
> Regarding whether the JDBC driver is the right way to embed Calcite. There’s 
> no easy answer. You might want to embed Calcite as a library in your own 
> server (as Drill and Hive do). Or you might want to make yourself just an 
> adapter that runs inside a Calcite JDBC server (as the CSV adapter does). Or 
> something in the middle, like what Phoenix does: using Calcite for JDBC, SQL, 
> planning, but with your own metadata and runtime engine.
> 
> As long as you build the valuable stuff into planner rules, new relational 
> operators (if necessary) and use the schema SPI, you should be able to change 
> packaging in the future. 
> 
> Julian
> 
> 
> 
> 
>> On Nov 17, 2016, at 1:59 PM, Gian Merlino  wrote:
>> 
>> Hey Calcites,
>> 
>> I'm working on embedding Calcite into Druid (http://druid.io/,
>> https://github.com/druid-io/druid/pull/3682) and am running into a problem
>> that is making me wonder if the approach I'm using makes sense.
>> 
>> Consider the expression EXTRACT(YEAR FROM __time). Calcite has a standard
>> convertlet rule "convertExtract" that changes this into some arithmetic on
>> __time casted to an int type. But Druid has some builtin functions to do
>> this, and I'd rather use those than arithmetic (for a bunch of reasons).
>> Ideally, in my RelOptRules that convert Calcite rels to Druid queries, I'd
>> see the EXTRACT as a normal RexCall with the time flag and an expression to
>> apply it to. That's a lot easier to translate than the arithmetic stuff,
>> which I'd have to pattern match and undo first before translating.
>> 
>> So the problem I have is that I want to disable convertExtract, but I don't
>> see a way to do that or to swap out the convertlet table.
>> 
>> The code I'm using to set up a connection is:
>> 
>> public CalciteConnection createCalciteConnection(
>> final DruidSchema druidSchema
>> ) throws SQLException
>> {
>>   final Properties props = new Properties();
>>   props.setProperty("caseSensitive", "true");
>>   props.setProperty("unquotedCasing", "UNCHANGED");
>>   final Connection connection =
>> DriverManager.getConnection("jdbc:calcite:", props);
>>   final CalciteConnection calciteConnection =
>> connection.unwrap(CalciteConnection.class);
>>   calciteConnection.getRootSchema().setCacheEnabled(false);
>>   calciteConnection.getRootSchema().add(DRUID_SCHEMA_NAME, druidSchema);
>>   return calciteConnection;
>> }
>> 
>> This CalciteConnection is then used by the Druid HTTP server to offer a SQL
>> API.
>> 
>> Is there some way to swap out the convertlet table that I'm missing?
>> 
>> Also, just in general, am I going about this the right way? Is using the
>> JDBC driver the right way to embed Calcite? Or should I be calling into it
>> at some lower level?
>> 
>> Thanks!
>> 
>> Gian
> 



Re: Unable to Instantiate Java Compiler

2016-11-17 Thread Julian Hyde
By the way, here is the original thread: 
http://mail-archives.apache.org/mod_mbox/calcite-dev/201611.mbox/%3CD440C726.2432E%25Eunsil.Recksiek%40ca.com%3E

The original error message was slightly different:

  java.lang.ClassCastException: 
org.codehaus.commons.compiler.jdk.CompilerFactory
cannot be cast to org.codehaus.commons.compiler.ICompilerFactory

This suggests a version mismatch of the commons-compiler library.
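[Editor's note: one way to confirm such a mismatch is to ask the JVM where each class was actually loaded from. The helper below is a generic diagnostic; in practice you would pass the two classes named in the ClassCastException.]

```java
public class WhichJar {
  /** Returns the classpath location a class was loaded from, or a
   * marker for bootstrap classes (which have no code source). */
  public static String locationOf(Class<?> c) {
    java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
    return src == null ? "<bootstrap>" : src.getLocation().toString();
  }

  public static void main(String[] args) throws Exception {
    // Replace these with the classes from the error:
    //   org.codehaus.commons.compiler.jdk.CompilerFactory
    //   org.codehaus.commons.compiler.ICompilerFactory
    // If they print different jars, two commons-compiler versions are
    // on the class path.
    System.out.println(locationOf(String.class));
    System.out.println(locationOf(WhichJar.class));
  }
}
```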

Julian

> On Nov 17, 2016, at 2:37 PM, Julian Hyde  wrote:
> 
> The most likely explanation, by far, is that you have multiple versions of 
> Janino on your class path.
> 
> In janino-2.7.6, which is what Calcite uses, 
> org.codehaus.janino.CompilerFactory extends 
> org.codehaus.commons.compiler.AbstractCompilerFactory, which implements 
> org.codehaus.commons.compiler.ICompilerFactory. And by the way, the 
> org.codehaus parts live in org.codehaus.janino:commons-compiler:2.7.6, so 
> it’s maintained as part of janino. So, statically, your ClassCastException is 
> impossible. 
> 
> It’s only circumstantial evidence, but based on stack-traces I see around the 
> web, Jaspersoft does use Janino.
> 
> Please log a JIRA case for this. I don’t want to re-cap the same discussion 
> in a few months.
> 
> Julian
> 
> 
>> On Nov 16, 2016, at 9:06 PM, Meehan, Kevin M  wrote:
>> 
>> Hello,
>> I wanted to return to a question that was asked a few weeks back regarding 
>> failure of the java compiler to instantiate when executing a SQL query.  It 
>> seems to be having issues with janino compiler, but the .jar is within the 
>> classpath and there doesn’t appear to be any conflicts.  The driver is being 
>> loaded into a BI Tool (JasperReports Server), a tomcat application, and 
>> while we can successfully test a connection using the driver, pull and 
>> present the schema, when the SQL execution occurs, the following error 
>> happens.
>> Any help is greatly appreciated as we would really like to use calcite as 
>> the driver for this REST API data adapter.
>>  
>> java.sql.SQLException: Error while executing SQL "select * from emps": 
>> Unable to instantiate java compiler
>>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>>   at 
>> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>>   at 
>> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>>   at 
>> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>>   at 
>> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>>   at 
>> com.jaspersoft.commons.semantic.metaapi.impl.jdbc.BaseJdbcMetaDataFactoryImpl.getColumnsFromJDBCQuery(BaseJdbcMetaDataFactoryImpl.java:192)
>>   at 
>> com.jaspersoft.ji.semantic.action.DomainDesignerAction.runJDBCQuery(DomainDesignerAction.java:2110)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>>   at java.lang.reflect.Method.invoke(Unknown Source)
>>   at 
>> org.springframework.webflow.action.DispatchMethodInvoker.invoke(DispatchMethodInvoker.java:98)
>>   at 
>> org.springframework.webflow.action.MultiAction.doExecute(MultiAction.java:123)
>>   at 
>> org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
>>   at 
>> org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
>>   at 
>> org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
>>   at 
>> org.springframework.webflow.action.EvaluateAction.doExecute(EvaluateAction.java:77)
>>   at 
>> org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
>>   at 
>> org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
>>   at 
>> org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
>>   at 
>> org.springframework.webflow.engine.ActionState.doEnter(ActionState.java:101)
>>   at org.springframework.webflow.engine.State.enter(State.java:194)
>>   at 
>> org.springframework.webflow.engine.Transition.execute(Transition.java:227)
>>   at 
>> org.springframework.webflow.engine.impl.FlowExecutionImpl.execute(FlowExecutionImpl.java:393)
>>   at 
>> org.springframework.webflow.engine.impl.RequestControlContextImpl.execute(RequestControlContextImpl.java:214)
>>   at 
>> org.springframework.webflow.engine.TransitionableState.handleEvent(TransitionableState.java:119)
>>   at 

Re: Unable to Instantiate Java Compiler

2016-11-17 Thread Julian Hyde
The most likely explanation, by far, is that you have multiple versions of 
Janino on your class path.

In janino-2.7.6, which is what Calcite uses, 
org.codehaus.janino.CompilerFactory extends 
org.codehaus.commons.compiler.AbstractCompilerFactory, which implements 
org.codehaus.commons.compiler.ICompilerFactory. And by the way, the 
org.codehaus parts live in org.codehaus.janino:commons-compiler:2.7.6, so it’s 
maintained as part of janino. So, statically, your ClassCastException is 
impossible. 

It’s only circumstantial evidence, but based on stack-traces I see around the 
web, Jaspersoft does use Janino.

Please log a JIRA case for this. I don’t want to re-cap the same discussion in 
a few months.

Julian


> On Nov 16, 2016, at 9:06 PM, Meehan, Kevin M  wrote:
> 
> Hello,
> I wanted to return to a question that was asked a few weeks back regarding 
> failure of the java compiler to instantiate when executing a SQL query.  It 
> seems to be having issues with janino compiler, but the .jar is within the 
> classpath and there doesn’t appear to be any conflicts.  The driver is being 
> loaded into a BI Tool (JasperReports Server), a tomcat application, and while 
> we can successfully test a connection using the driver, pull and present the 
> schema, when the SQL execution occurs, the following error happens.
> Any help is greatly appreciated as we would really like to use calcite as the 
> driver for this REST API data adapter.
>  
> java.sql.SQLException: Error while executing SQL "select * from emps": Unable 
> to instantiate java compiler
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
>   at 
> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>   at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>   at 
> org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
>   at 
> com.jaspersoft.commons.semantic.metaapi.impl.jdbc.BaseJdbcMetaDataFactoryImpl.getColumnsFromJDBCQuery(BaseJdbcMetaDataFactoryImpl.java:192)
>   at 
> com.jaspersoft.ji.semantic.action.DomainDesignerAction.runJDBCQuery(DomainDesignerAction.java:2110)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at 
> org.springframework.webflow.action.DispatchMethodInvoker.invoke(DispatchMethodInvoker.java:98)
>   at 
> org.springframework.webflow.action.MultiAction.doExecute(MultiAction.java:123)
>   at 
> org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
>   at 
> org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
>   at 
> org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
>   at 
> org.springframework.webflow.action.EvaluateAction.doExecute(EvaluateAction.java:77)
>   at 
> org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
>   at 
> org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
>   at 
> org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
>   at 
> org.springframework.webflow.engine.ActionState.doEnter(ActionState.java:101)
>   at org.springframework.webflow.engine.State.enter(State.java:194)
>   at 
> org.springframework.webflow.engine.Transition.execute(Transition.java:227)
>   at 
> org.springframework.webflow.engine.impl.FlowExecutionImpl.execute(FlowExecutionImpl.java:393)
>   at 
> org.springframework.webflow.engine.impl.RequestControlContextImpl.execute(RequestControlContextImpl.java:214)
>   at 
> org.springframework.webflow.engine.TransitionableState.handleEvent(TransitionableState.java:119)
>   at org.springframework.webflow.engine.Flow.handleEvent(Flow.java:555)
>   at 
> org.springframework.webflow.engine.impl.FlowExecutionImpl.handleEvent(FlowExecutionImpl.java:388)
>   at 
> org.springframework.webflow.engine.impl.RequestControlContextImpl.handleEvent(RequestControlContextImpl.java:210)
>   at 
> org.springframework.webflow.engine.ViewState.handleEvent(ViewState.java:232)
>   at 
> org.springframework.webflow.engine.ViewState.resume(ViewState.java:196)
>   at org.springframework.webflow.engine.Flow.resume(Flow.java:545)
>   at 
> org.springframework.webflow.engine.impl.FlowExecutionImpl.resume(FlowExecutionImpl.java:261)
>   at 
> org.springframework.webflow.executor.FlowExecutorImpl.resumeExecution(FlowExecutorImpl.java:169)
>   at 

Re: Moderators

2016-11-17 Thread Julian Hyde
Josh, if you still want to be a moderator, can you please enroll by emailing 
apm...@apache.org, per https://www.apache.org/dev/infra-contact.

Julian


> On Nov 8, 2016, at 4:28 PM, Julian Hyde  wrote:
> 
> Thanks for offering, Ashutosh. I suspect the most efficient configuration is 
> me + Josh as moderators. It helps that moderators are reasonably active on 
> the lists, so they know whether a message has been moderated through or not. 
> And Josh is more active on the Calcite lists than you.
> 
> Julian
> 
>> On Nov 8, 2016, at 1:32 PM, Ashutosh Chauhan  wrote:
>> 
>> I am one of the moderators, as I was champion during incubation. I don't
>> pay much attention to list moderation messages, as I was under the
>> impression that Julian is taking care of it. If you want more active
>> involvement, let me know. Happy to help.
>> 
>> Thanks,
>> Ashutosh
>> 
>> On Tue, Nov 8, 2016 at 11:51 AM, Julian Hyde  wrote:
>> 
>>> How many moderators do we have for the dev and private lists? I am a
>>> moderator, but I don’t know if there are any others. I don’t think we need
>>> a huge number of moderators, but for latency & redundancy we ought to have
>>> at least 2, maybe 3.
>>> 
>>> Any volunteers? (A moderator must be a PMC member, IIRC.)
>>> 
>>> Julian
>>> 
>>> 
> 



Unable to Instantiate Java Compiler

2016-11-17 Thread Meehan, Kevin M
Hello,
I wanted to return to a question that was asked a few weeks back regarding 
failure of the java compiler to instantiate when executing a SQL query.  It 
seems to be having issues with janino compiler, but the .jar is within the 
classpath and there doesn't appear to be any conflicts.  The driver is being 
loaded into a BI Tool (JasperReports Server), a tomcat application, and while 
we can successfully test a connection using the driver, pull and present the 
schema, when the SQL execution occurs, the following error happens.
Any help is greatly appreciated as we would really like to use calcite as the 
driver for this REST API data adapter.

java.sql.SQLException: Error while executing SQL "select * from emps": Unable 
to instantiate java compiler
  at org.apache.calcite.avatica.Helper.createException(Helper.java:56)
  at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
  at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
  at 
org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
  at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
  at 
org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
  at 
com.jaspersoft.commons.semantic.metaapi.impl.jdbc.BaseJdbcMetaDataFactoryImpl.getColumnsFromJDBCQuery(BaseJdbcMetaDataFactoryImpl.java:192)
  at 
com.jaspersoft.ji.semantic.action.DomainDesignerAction.runJDBCQuery(DomainDesignerAction.java:2110)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
  at java.lang.reflect.Method.invoke(Unknown Source)
  at 
org.springframework.webflow.action.DispatchMethodInvoker.invoke(DispatchMethodInvoker.java:98)
  at 
org.springframework.webflow.action.MultiAction.doExecute(MultiAction.java:123)
  at 
org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
  at 
org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
  at 
org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
  at 
org.springframework.webflow.action.EvaluateAction.doExecute(EvaluateAction.java:77)
  at 
org.springframework.webflow.action.AbstractAction.execute(AbstractAction.java:188)
  at 
org.springframework.webflow.execution.AnnotatedAction.execute(AnnotatedAction.java:145)
  at 
org.springframework.webflow.execution.ActionExecutor.execute(ActionExecutor.java:51)
  at 
org.springframework.webflow.engine.ActionState.doEnter(ActionState.java:101)
  at org.springframework.webflow.engine.State.enter(State.java:194)
  at 
org.springframework.webflow.engine.Transition.execute(Transition.java:227)
  at 
org.springframework.webflow.engine.impl.FlowExecutionImpl.execute(FlowExecutionImpl.java:393)
  at 
org.springframework.webflow.engine.impl.RequestControlContextImpl.execute(RequestControlContextImpl.java:214)
  at 
org.springframework.webflow.engine.TransitionableState.handleEvent(TransitionableState.java:119)
  at org.springframework.webflow.engine.Flow.handleEvent(Flow.java:555)
  at 
org.springframework.webflow.engine.impl.FlowExecutionImpl.handleEvent(FlowExecutionImpl.java:388)
  at 
org.springframework.webflow.engine.impl.RequestControlContextImpl.handleEvent(RequestControlContextImpl.java:210)
  at 
org.springframework.webflow.engine.ViewState.handleEvent(ViewState.java:232)
  at org.springframework.webflow.engine.ViewState.resume(ViewState.java:196)
  at org.springframework.webflow.engine.Flow.resume(Flow.java:545)
  at 
org.springframework.webflow.engine.impl.FlowExecutionImpl.resume(FlowExecutionImpl.java:261)
  at 
org.springframework.webflow.executor.FlowExecutorImpl.resumeExecution(FlowExecutorImpl.java:169)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
  at java.lang.reflect.Method.invoke(Unknown Source)
  at 
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317)
  at 
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183)
  at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
  at 
org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:64)
  at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
  at 
org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
  at