Hey Yu,

I tried to reproduce on a CDH5.13 cluster, but your exact commands
work as expected for me. Are you using Impala 2.10 on a CDH5.13
cluster, or something else? Can you share your catalog and Hive
metastore logs?

Thanks.

On 12 October 2017 at 19:39, yu feng <olaptes...@gmail.com> wrote:
> I tried 'invalidate metadata' for the whole catalog, but the modified
> table is still empty. I suspect the only way to fix it is to restart catalogd.
>
> BTW, I am testing with the newest version (2.10.0).
>
> 2017-10-13 0:17 GMT+08:00 Jeszy <jes...@gmail.com>:
>
>> This does sound like a bug. What version are you using? Do you see any
>> errors in the catalog logs?
>> I think a global invalidate metadata should work, and it's a bit less
>> intrusive than a catalog restart. In general, it is a good idea to do
>> all metadata operations from Impala if you are using Impala at all; it
>> helps a lot in making metadata operations seamless.
>>
>> On 12 October 2017 at 02:53, yu feng <olaptes...@gmail.com> wrote:
>> > In our scenario, users always make metadata modifications in Hive and
>> > run queries in Impala.
>> >
>> > 2017-10-12 16:31 GMT+08:00 sky <x_h...@163.com>:
>> >
>> >> Why is the second step performed in hive, not impala?
>> >>
>> >> At 2017-10-12 15:12:38, "yu feng" <olaptes...@gmail.com> wrote:
>> >> >I open impala-shell and hive-cli.
>> >> >1. Execute 'show create table impala_test.sales_fact_1997' in
>> >> >impala-shell; it returns:
>> >> >
>> >> >+---------------------------------------------------------------------
>> >> >| result
>> >> >+---------------------------------------------------------------------
>> >> >| CREATE TABLE impala_test.sales_fact_1997 (
>> >> >|   product_id INT,
>> >> >|   time_id INT,
>> >> >|   customer_id INT,
>> >> >|   promotion_id INT,
>> >> >|   store_id INT,
>> >> >|   store_sales DOUBLE,
>> >> >|   store_cost DOUBLE,
>> >> >|   unit_sales DOUBLE
>> >> >| )
>> >> >|  COMMENT 'Imported by sqoop on 2017/06/09 20:25:40'
>> >> >| ROW FORMAT DELIMITED FIELDS TERMINATED BY '\u0001' LINES TERMINATED BY '\n'
>> >> >| WITH SERDEPROPERTIES ('field.delim'='\u0001', 'line.delim'='\n', 'serialization.format'='\u0001')
>> >> >| STORED AS PARQUET
>> >> >| LOCATION 'hdfs://hz-cluster1/user/nrpt/hive-server/impala_test.db/sales_fact_1997'
>> >> >| TBLPROPERTIES ('COLUMN_STATS_ACCURATE'='true', 'numFiles'='3', 'numRows'='10', 'rawDataSize'='80', 'totalSize'='1619937')
>> >> >+---------------------------------------------------------------------
>> >> >
>> >> >2. Execute 'alter table impala_test.sales_fact_1997 change column
>> >> >product_id pproduct_id int;' in hive-cli; it returns OK.
>> >> >3. Execute 'invalidate metadata impala_test.sales_fact_1997'.
>> >> >4. Execute 'show create table impala_test.sales_fact_1997' again in
>> >> >impala-shell; it returns:
>> >> >
>> >> >+---------------------------------------------------------------------
>> >> >| result
>> >> >+---------------------------------------------------------------------
>> >> >| CREATE TABLE impala_test.sales_fact_1997
>> >> >|  COMMENT 'Imported by sqoop on 2017/06/09 20:25:40'
>> >> >| ROW FORMAT DELIMITED FIELDS TERMINATED BY '\u0001' LINES TERMINATED BY '\n'
>> >> >| WITH SERDEPROPERTIES ('field.delim'='\u0001', 'line.delim'='\n', 'serialization.format'='\u0001')
>> >> >| STORED AS PARQUET
>> >> >| LOCATION 'hdfs://hz-cluster1/user/nrpt/hive-server/impala_test.db/sales_fact_1997'
>> >> >| TBLPROPERTIES ('COLUMN_STATS_ACCURATE'='true', 'numFiles'='3', 'numRows'='10', 'rawDataSize'='80', 'totalSize'='1619937')
>> >> >+---------------------------------------------------------------------
>> >> >
>> >> >All columns disappear. The column change shows up correctly if I
>> >> >restart catalogd, so I think this is a bug caused by the Hive
>> >> >metastore client. Is there any good way to overcome the problem other
>> >> >than restarting catalogd?
>> >> >
>> >> >I think we could check the columns after getTable from the
>> >> >HiveMetastoreClient and, if the list is empty, recreate the
>> >> >HiveMetastoreClient (Hive does not support 0-column tables). Would it
>> >> >be a good way to overcome the problem if we modify the code like
>> >> >this?
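>> >> >
>> >> >Something like the following rough illustration (written against the
>> >> >plain Hive metastore API rather than Impala's real client pooling
>> >> >code, so the wrapper class and method names are just placeholders):
>> >> >
>> >> >import org.apache.hadoop.hive.conf.HiveConf;
>> >> >import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
>> >> >import org.apache.hadoop.hive.metastore.IMetaStoreClient;
>> >> >import org.apache.hadoop.hive.metastore.api.Table;
>> >> >import org.apache.thrift.TException;
>> >> >
>> >> >// Hypothetical wrapper: retries getTable on a fresh client when a
>> >> >// table unexpectedly comes back with no columns.
>> >> >public class StaleColumnWorkaround {
>> >> >  private final HiveConf conf;
>> >> >  private IMetaStoreClient client;
>> >> >
>> >> >  public StaleColumnWorkaround(HiveConf conf) throws TException {
>> >> >    this.conf = conf;
>> >> >    this.client = new HiveMetaStoreClient(conf);
>> >> >  }
>> >> >
>> >> >  // Fetch a table; if it has no columns (Hive never creates
>> >> >  // 0-column tables, so this suggests a stale client), recreate
>> >> >  // the metastore client and retry once.
>> >> >  public synchronized Table getTableChecked(String db, String tbl)
>> >> >      throws TException {
>> >> >    Table t = client.getTable(db, tbl);
>> >> >    if (t.getSd() == null || t.getSd().getCols() == null
>> >> >        || t.getSd().getCols().isEmpty()) {
>> >> >      client.close();
>> >> >      client = new HiveMetaStoreClient(conf);
>> >> >      t = client.getTable(db, tbl);
>> >> >    }
>> >> >    return t;
>> >> >  }
>> >> >}
>> >> >
>> >> >(This would only mask the stale-client symptom, of course, not fix
>> >> >whatever leaves the cached metadata without columns.)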
>> >>
>>
