In the meantime, since my tables are in S3, I've written a utility that does
an 'aws s3 ls' on the bucket and folder in question, converts the folder
syntax to partition syntax, and then issues its own 'alter table ... add
partition' for each partition.


So essentially it does what 'msck repair table' does, but in a non-portable
way.  Oh well.  Gotta do what ya gotta do.
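For what it's worth, the folder-to-partition conversion step is just string
munging. A minimal sketch in Python, assuming Hive-style 'key=value/' folder
names under the table's S3 prefix (the table name 'foo' and the prefixes here
are made-up examples, not my actual bucket layout):

```python
def folder_to_partition_spec(prefix: str) -> str:
    """Turn a folder like 'date_key=20160713/' into "date_key='20160713'".

    Handles nested partition folders too, e.g.
    'date_key=20160713/hour=05/' -> "date_key='20160713', hour='05'".
    """
    parts = []
    for segment in prefix.strip("/").split("/"):
        key, _, value = segment.partition("=")
        parts.append(f"{key}='{value}'")
    return ", ".join(parts)


def add_partition_ddl(table: str, prefix: str) -> str:
    """Build the 'alter table ... add partition' statement for one folder."""
    spec = folder_to_partition_spec(prefix)
    return f"alter table {table} add if not exists partition ({spec});"


# Feed it the prefixes you'd get back from 'aws s3 ls s3://bucket/foo/':
for p in ["date_key=20160713/", "date_key=20160714/"]:
    print(add_partition_ddl("foo", p))
```

In practice you'd pipe the generated statements into 'hive -f' or the
beeline '-f' flag; 'if not exists' makes the whole thing re-runnable.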

On Wed, Jul 13, 2016 at 9:29 PM, Stephen Sprague <sprag...@gmail.com> wrote:

> hey guys,
> I'm using hive version 2.1.0 and I can't seem to get 'msck repair table' to
> work.  No matter what I try I get the ol' NPE.  I've set the log level to
> 'DEBUG' but I'm still not seeing any smoking gun.
>
> Would anyone here have any pointers or suggestions to figure out what's
> going wrong?
>
> thanks,
> Stephen.
>
>
>
> hive> create external table foo (a int) partitioned by (date_key bigint)
> location 'hdfs:/tmp/foo';
> OK
> Time taken: 3.359 seconds
>
> hive> msck repair table foo;
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
>
>
> from the log...
>
> 2016-07-14T04:08:02,431 DEBUG [MSCK-GetPaths-1]:
> httpclient.RestStorageService (:()) - Found 13 objects in one batch
> 2016-07-14T04:08:02,431 DEBUG [MSCK-GetPaths-1]:
> httpclient.RestStorageService (:()) - Found 0 common prefixes in one batch
> 2016-07-14T04:08:02,433 ERROR [main]: metadata.HiveMetaStoreChecker (:())
> - java.lang.NullPointerException
> 2016-07-14T04:08:02,434 WARN  [main]: exec.DDLTask (:()) - Failed to run
> metacheck:
> org.apache.hadoop.hive.ql.metadata.HiveException:
> java.lang.NullPointerException
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.getAllLeafDirs(HiveMetaStoreChecker.java:444)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.getAllLeafDirs(HiveMetaStoreChecker.java:448)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.getAllLeafDirs(HiveMetaStoreChecker.java:388)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.findUnknownPartitions(HiveMetaStoreChecker.java:309)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:285)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:230)
>         at
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkMetastore(HiveMetaStoreChecker.java:109)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1814)
>         at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:403)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
>         at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1858)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1562)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1313)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
>         at
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
>         at
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
>         at
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
>         at
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776)
>         at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
>         at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
