I think it works in tests because we create all of the tables (including 
HIVE_LOCKS) with a script. I am not sure the lock tables are always created in 
embedded mode.
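
For illustration, here is a rough sketch of what "creating the tables with a 
script" amounts to: running a Derby schema script against the embedded 
metastore database over plain JDBC. The JDBC URL and script path below are 
placeholders, not what the Iceberg tests actually hard-code.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateMetastoreTxnTables {
  public static void main(String[] args) throws Exception {
    // Placeholder locations: the embedded Derby database used by the metastore
    // and a schema script that creates the transaction tables (HIVE_LOCKS, TXNS, ...).
    String jdbcUrl = "jdbc:derby:;databaseName=metastore_db;create=true";
    String scriptPath = "scripts/metastore/upgrade/derby/hive-schema-3.1.0.derby.sql";

    // Strip SQL comment lines, then run the remaining statements one by one
    // (Derby's embedded JDBC driver has no built-in script runner).
    StringBuilder sql = new StringBuilder();
    for (String line : Files.readAllLines(Paths.get(scriptPath))) {
      if (!line.trim().startsWith("--")) {
        sql.append(line).append('\n');
      }
    }

    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         Statement stmt = conn.createStatement()) {
      for (String statement : sql.toString().split(";")) {
        if (!statement.trim().isEmpty()) {
          stmt.execute(statement);
        }
      }
    }
  }
}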

> On 7 Aug 2019, at 16:49, Ryan Blue <rb...@netflix.com> wrote:
> 
> This is the right list. Iceberg is fairly low in the stack, so most questions 
> are probably dev questions.
> 
> I'm surprised that this doesn't work with an embedded metastore because we 
> use an embedded metastore in tests: 
> https://github.com/apache/incubator-iceberg/blob/master/hive/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
> 
> But we are also using Hive 1.2.1 and a metastore schema for 3.1.0. I wonder 
> if a newer version of Hive would avoid this problem? What version are you 
> linking with?
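
To make the embedded-vs-remote distinction concrete, here is a minimal 
configuration sketch. The Thrift URI and Derby path are placeholders; 
hive.metastore.uris and javax.jdo.option.ConnectionURL are the standard Hive 
properties involved.

import org.apache.hadoop.conf.Configuration;

public class MetastoreConfigSketch {
  // Remote HMS: clients talk Thrift to a standalone metastore service.
  static Configuration remote() {
    Configuration conf = new Configuration();
    conf.set("hive.metastore.uris", "thrift://metastore-host:9083"); // placeholder host
    return conf;
  }

  // Embedded HMS: no Thrift URI, so the metastore runs in-process against Derby.
  // In this mode the transaction tables (e.g. HIVE_LOCKS) may never get created,
  // which appears to be what the stack trace below is complaining about.
  static Configuration embedded() {
    Configuration conf = new Configuration();
    conf.set("hive.metastore.uris", "");
    conf.set("javax.jdo.option.ConnectionURL",
        "jdbc:derby:;databaseName=metastore_db;create=true"); // placeholder path
    return conf;
  }
}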
> 
> On Tue, Aug 6, 2019 at 8:59 PM Saisai Shao <sai.sai.s...@gmail.com> wrote:
> Hi team, 
> 
> I just ran into some issues while trying out Iceberg with the quick start 
> guide. I'm not sure whether it is appropriate to send this to the dev mailing 
> list (there seems to be no user mailing list).
> 
> One issue is that the current Iceberg seemingly cannot run with an embedded 
> metastore; it throws the exception below. Is this intentional behavior 
> (forcing the use of a remote HMS), or just a bug?
> 
> Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Unable to update transaction database java.sql.SQLSyntaxErrorException: Table/View 'HIVE_LOCKS' does not exist.
>     at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
>     at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
>     at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
> 
> Following on from this issue, it seems that the current Iceberg only binds to 
> HMS as its catalog, which is fine for production use. But I'm wondering if we 
> could have a simple catalog, like Spark's in-memory catalog, so that it is 
> easy for users to test and play with Iceberg. Are there any concerns or plans 
> here?
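
One existing option for local experiments that does not need an HMS at all is 
HadoopTables, which keeps table metadata directly in the file system. A minimal 
sketch, with the table location as a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
import org.apache.iceberg.hadoop.HadoopTables;
import org.apache.iceberg.types.Types;

public class HadoopTablesExample {
  public static void main(String[] args) {
    // Table metadata lives directly under the table location; no metastore involved.
    HadoopTables tables = new HadoopTables(new Configuration());

    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.optional(2, "data", Types.StringType.get()));

    // Placeholder location: any local or HDFS path works.
    String location = "file:///tmp/iceberg/test_table";
    Table table = tables.create(schema, PartitionSpec.unpartitioned(), location);

    // Later, load the table back by path.
    Table loaded = tables.load(location);
    System.out.println(loaded.schema());
  }
}

It is not a full catalog (there is no table listing or namespace support), so 
it may not cover the "test and play" case completely.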
> 
> Best regards,
> Saisai
> 
> -- 
> Ryan Blue
> Software Engineer
> Netflix
