Yes, that always makes sense.

On Wed, Aug 30, 2023 at 8:30 AM Ryan Blue <b...@tabular.io> wrote:

> This is one of the many problems with not using a real catalog. If there
> are any metadata files under the table location, then the table exists, so
> we have to delete it. The solution is to not use the HadoopCatalog, which
> has been the community's recommendation for several years.
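>
> For example, here is a minimal sketch of loading a JDBC-backed catalog
> through CatalogUtil instead (the JDBC URI and warehouse path below are
> placeholders, and the JDBC driver has to be on the classpath):
>
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.CatalogProperties;
import org.apache.iceberg.CatalogUtil;
import org.apache.iceberg.catalog.Catalog;

public class LoadJdbcCatalog {
  public static void main(String[] args) {
    // Placeholder connection and warehouse settings; substitute your own.
    Map<String, String> props = new HashMap<>();
    props.put(CatalogProperties.URI, "jdbc:postgresql://localhost:5432/iceberg");
    props.put(CatalogProperties.WAREHOUSE_LOCATION, "s3://my-bucket/warehouse");

    // loadCatalog instantiates and initializes the named Catalog implementation,
    // so table existence and drops are tracked by the catalog rather than
    // inferred from files under the table location.
    Catalog catalog = CatalogUtil.loadCatalog(
        "org.apache.iceberg.jdbc.JdbcCatalog", "demo", props, new Configuration());

    System.out.println("Loaded catalog: " + catalog.name());
  }
}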
>
> On Wed, Aug 30, 2023 at 4:06 AM <russell.spit...@gmail.com> wrote:
>
>> There is no way to drop a Hadoop catalog table without removing the
>> directory, so I’m not sure what the alternative would be.
>>
>> Sent from my iPhone
>>
>> On Aug 29, 2023, at 10:10 PM, Manu Zhang <owenzhang1...@gmail.com> wrote:
>>
>>
>> Hi all,
>>
>> The current behavior of dropping a table with HadoopCatalog looks
>> inconsistent to me. When metadata and data are stored at the default
>> location under the table path, they are deleted regardless of the "purge"
>> flag. When stored elsewhere, they are left in place unless "purge" is set.
>> Is this by design? The dropTable implementation is below, followed by a
>> small repro sketch.
>>
@Override
public boolean dropTable(TableIdentifier identifier, boolean purge) {
  if (!isValidIdentifier(identifier)) {
    throw new NoSuchTableException("Invalid identifier: %s", identifier);
  }

  Path tablePath = new Path(defaultWarehouseLocation(identifier));
  TableOperations ops = newTableOps(identifier);
  TableMetadata lastMetadata = ops.current();
  try {
    if (lastMetadata == null) {
      LOG.debug("Not an iceberg table: {}", identifier);
      return false;
    } else {
      if (purge) {
        // Since the data files and the metadata files may be stored in
        // different locations, dropTableData must be called to force-delete
        // the data files.
        CatalogUtil.dropTableData(ops.io(), lastMetadata);
      }
      // fs is the catalog's FileSystem field; the table path is always
      // deleted recursively, whether or not purge was requested.
      return fs.delete(tablePath, true /* recursive */);
    }
  } catch (IOException e) {
    throw new RuntimeIOException(e, "Failed to delete file: %s", tablePath);
  }
}
>>
>>
>> Thanks,
>> Manu
>>
>>
>
> --
> Ryan Blue
> Tabular
>
