This is an automated email from the ASF dual-hosted git repository.
yiguolei pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/doris.git
The following commit(s) were added to refs/heads/branch-2.1 by this push:
new 3604d63184f [Branch 2.1] backport systable PR (#34384,#40153,#40456,#40455,#40568) (#40687)
3604d63184f is described below
commit 3604d63184f431371bfc610a012a9e65d355f3ea
Author: Vallish Pai <[email protected]>
AuthorDate: Thu Sep 12 09:20:09 2024 +0530
[Branch 2.1] backport systable PR (#34384,#40153,#40456,#40455,#40568) (#40687)
backport
https://github.com/apache/doris/pull/40568
https://github.com/apache/doris/pull/40455
https://github.com/apache/doris/pull/40456
https://github.com/apache/doris/pull/40153
https://github.com/apache/doris/pull/34384
Test result:
2024-09-11 11:00:45.618 INFO [suite-thread-1] (SuiteContext.groovy:309) - Recover original connection
2024-09-11 11:00:45.619 INFO [suite-thread-1] (Suite.groovy:359) - Execute sql: REVOKE SELECT_PRIV ON test_partitions_schema_db.duplicate_table FROM partitions_user
2024-09-11 11:00:45.625 INFO [suite-thread-1] (SuiteContext.groovy:299) - Create new connection for user 'partitions_user'
2024-09-11 11:00:45.632 INFO [suite-thread-1] (Suite.groovy:1162) - Execute tag: select_check_5, sql: select TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,PARTITION_ORDINAL_POSITION,SUBPARTITION_ORDINAL_POSITION,PARTITION_METHOD,SUBPARTITION_METHOD,PARTITION_EXPRESSION,SUBPARTITION_EXPRESSION,PARTITION_DESCRIPTION,TABLE_ROWS,AVG_ROW_LENGTH,DATA_LENGTH,MAX_DATA_LENGTH,INDEX_LENGTH,DATA_FREE,CHECKSUM,PARTITION_COMMENT,NODEGROUP,TABLESPACE_NAME from information_schema.partitions where table_schema="test_partitions_schema_db" order by TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,PARTITION_ORDINAL_POSITION,SUBPARTITION_ORDINAL_POSITION,PARTITION_METHOD,SUBPARTITION_METHOD,PARTITION_EXPRESSION,SUBPARTITION_EXPRESSION,PARTITION_DESCRIPTION,TABLE_ROWS,AVG_ROW_LENGTH,DATA_LENGTH,MAX_DATA_LENGTH,INDEX_LENGTH,DATA_FREE,CHECKSUM,PARTITION_COMMENT,NODEGROUP,TABLESPACE_NAME
2024-09-11 11:00:45.644 INFO [suite-thread-1] (SuiteContext.groovy:309) - Recover original connection
2024-09-11 11:00:45.645 INFO [suite-thread-1] (ScriptContext.groovy:120) - Run test_partitions_schema in /root/doris/workspace/doris/regression-test/suites/query_p0/system/test_partitions_schema.groovy succeed
2024-09-11 11:00:45.652 INFO [main] (RegressionTest.groovy:259) - Start to run single scripts
2024-09-11 11:01:10.321 INFO [main] (RegressionTest.groovy:380) - Success suites: /root/doris/workspace/doris/regression-test/suites/query_p0/system/test_partitions_schema.groovy: group=default,p0, name=test_partitions_schema
2024-09-11 11:01:10.322 INFO [main] (RegressionTest.groovy:459) - All suites success.
____ _ ____ ____ _____ ____
| _ \ / \ / ___/ ___|| ____| _ \
| |_) / _ \ \___ \___ \| _| | | | |
| __/ ___ \ ___) |__) | |___| |_| |
|_| /_/ \_\____/____/|_____|____/
2024-09-11 11:01:10.322 INFO [main] (RegressionTest.groovy:410) - Test 1 suites, failed 0 suites, fatal 0 scripts, skipped 0 scripts
2024-09-11 11:01:10.322 INFO [main] (RegressionTest.groovy:119) - Test finished
2024-09-11 11:03:00.712 INFO [suite-thread-1] (Suite.groovy:1162) - Execute tag: select_check_5, sql: select * from information_schema.table_options ORDER BY TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
2024-09-11 11:03:00.729 INFO [suite-thread-1] (SuiteContext.groovy:309) - Recover original connection
2024-09-11 11:03:00.731 INFO [suite-thread-1] (ScriptContext.groovy:120) - Run test_table_options in /root/doris/workspace/doris/regression-test/suites/query_p0/system/test_table_options.groovy succeed
2024-09-11 11:03:04.817 INFO [main] (RegressionTest.groovy:259) - Start to run single scripts
2024-09-11 11:03:28.741 INFO [main] (RegressionTest.groovy:380) - Success suites: /root/doris/workspace/doris/regression-test/suites/query_p0/system/test_table_options.groovy: group=default,p0, name=test_table_options
2024-09-11 11:03:28.742 INFO [main] (RegressionTest.groovy:459) - All suites success.
____ _ ____ ____ _____ ____
| _ \ / \ / ___/ ___|| ____| _ \
| |_) / _ \ \___ \___ \| _| | | | |
| __/ ___ \ ___) |__) | |___| |_| |
|_| /_/ \_\____/____/|_____|____/
2024-09-11 11:03:28.742 INFO [main] (RegressionTest.groovy:410) - Test 1 suites, failed 0 suites, fatal 0 scripts, skipped 0 scripts
2024-09-11 11:03:28.742 INFO [main] (RegressionTest.groovy:119) - Test finished
*************************** 7. row ***************************
PartitionId: 18035
PartitionName: p100
VisibleVersion: 2
VisibleVersionTime: 2024-09-11 10:59:28
State: NORMAL
PartitionKey: col_1
Range: [types: [INT]; keys: [83647]; ..types: [INT]; keys: [2147483647]; )
DistributionKey: pk
Buckets: 10
ReplicationNum: 1
StorageMedium: HDD
CooldownTime: 9999-12-31 15:59:59
RemoteStoragePolicy:
LastConsistencyCheckTime: NULL
DataSize: 2.872 KB
IsInMemory: false
ReplicaAllocation: tag.location.default: 1
IsMutable: true
SyncWithBaseTables: true
UnsyncTables: NULL
CommittedVersion: 2
RowCount: 4
7 rows in set (0.01 sec)
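For quick manual verification, the two backported system tables can be queried directly from any MySQL-compatible client connected to Doris. The queries below are illustrative sketches distilled from the regression-test logs above (the database and suite names come from those tests, not from production defaults):

```sql
-- Partition metadata, as exercised by test_partitions_schema.groovy
select TABLE_NAME, PARTITION_NAME, PARTITION_METHOD, PARTITION_EXPRESSION, TABLE_ROWS
from information_schema.partitions
where table_schema = "test_partitions_schema_db"
order by TABLE_NAME, PARTITION_NAME;

-- Table model / distribution / bucket info, as exercised by test_table_options.groovy
select * from information_schema.table_options
order by TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME;
```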
---------
Co-authored-by: Mingyu Chen <[email protected]>
---
be/src/exec/schema_scanner.cpp | 14 ++
.../schema_active_queries_scanner.cpp | 37 +--
.../schema_scanner/schema_partitions_scanner.cpp | 120 +++++++--
.../schema_scanner/schema_partitions_scanner.h | 15 +-
.../exec/schema_scanner/schema_routine_scanner.cpp | 35 +--
.../exec/schema_scanner/schema_scanner_helper.cpp | 73 ++++++
...artitions_scanner.h => schema_scanner_helper.h} | 33 +--
.../schema_table_options_scanner.cpp | 173 +++++++++++++
...ns_scanner.h => schema_table_options_scanner.h} | 27 +-
be/src/runtime/runtime_query_statistics_mgr.cpp | 47 +---
be/src/runtime/workload_group/workload_group.cpp | 1 +
.../workload_group/workload_group_manager.cpp | 35 +--
.../org/apache/doris/analysis/SchemaTableType.java | 5 +-
.../apache/doris/catalog/ListPartitionItem.java | 8 +
.../java/org/apache/doris/catalog/OlapTable.java | 14 ++
.../java/org/apache/doris/catalog/Partition.java | 19 ++
.../org/apache/doris/catalog/PartitionInfo.java | 13 +
.../org/apache/doris/catalog/PartitionItem.java | 4 +
.../apache/doris/catalog/RangePartitionItem.java | 8 +
.../java/org/apache/doris/catalog/SchemaTable.java | 12 +
.../org/apache/doris/catalog/TableProperty.java | 11 +
.../doris/common/proc/PartitionsProcDir.java | 4 +
.../doris/tablefunction/MetadataGenerator.java | 275 ++++++++++++++++++++-
gensrc/thrift/FrontendService.thrift | 1 +
.../query_p0/system/test_partitions_schema.out | 48 ++++
.../data/query_p0/system/test_query_sys_tables.out | 2 -
.../data/query_p0/system/test_table_options.out | 27 ++
.../query_p0/system/test_partitions_schema.groovy | 195 +++++++++++++++
.../query_p0/system/test_query_sys_tables.groovy | 6 +-
.../query_p0/system/test_table_options.groovy | 217 ++++++++++++++++
30 files changed, 1291 insertions(+), 188 deletions(-)
diff --git a/be/src/exec/schema_scanner.cpp b/be/src/exec/schema_scanner.cpp
index 933de634350..ce75c6d0cd1 100644
--- a/be/src/exec/schema_scanner.cpp
+++ b/be/src/exec/schema_scanner.cpp
@@ -43,6 +43,7 @@
#include "exec/schema_scanner/schema_rowsets_scanner.h"
#include "exec/schema_scanner/schema_schema_privileges_scanner.h"
#include "exec/schema_scanner/schema_schemata_scanner.h"
+#include "exec/schema_scanner/schema_table_options_scanner.h"
#include "exec/schema_scanner/schema_table_privileges_scanner.h"
#include "exec/schema_scanner/schema_table_properties_scanner.h"
#include "exec/schema_scanner/schema_tables_scanner.h"
@@ -239,6 +240,8 @@ std::unique_ptr<SchemaScanner> SchemaScanner::create(TSchemaTableType::type type
return SchemaTablePropertiesScanner::create_unique();
case TSchemaTableType::SCH_CATALOG_META_CACHE_STATISTICS:
return SchemaCatalogMetaCacheStatsScanner::create_unique();
+ case TSchemaTableType::SCH_TABLE_OPTIONS:
+ return SchemaTableOptionsScanner::create_unique();
default:
return SchemaDummyScanner::create_unique();
break;
@@ -449,6 +452,17 @@ Status SchemaScanner::insert_block_column(TCell cell, int col_index, vectorized:
break;
}
+ case TYPE_DATETIME: {
+ std::vector<void*> datas(1);
+ VecDateTimeValue src[1];
+ src[0].from_date_str(cell.stringVal.data(), cell.stringVal.size());
+ datas[0] = src;
+ auto data = datas[0];
+        reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_data(
+                reinterpret_cast<char*>(data), 0);
+ nullable_column->get_null_map_data().emplace_back(0);
+ break;
+ }
default: {
std::stringstream ss;
ss << "unsupported column type:" << type;
diff --git a/be/src/exec/schema_scanner/schema_active_queries_scanner.cpp b/be/src/exec/schema_scanner/schema_active_queries_scanner.cpp
index 46522a36242..6aa6e758999 100644
--- a/be/src/exec/schema_scanner/schema_active_queries_scanner.cpp
+++ b/be/src/exec/schema_scanner/schema_active_queries_scanner.cpp
@@ -98,41 +98,12 @@ Status SchemaActiveQueriesScanner::_get_active_queries_block_from_fe() {
         }
     }
-    // todo(wb) reuse this callback function
-    auto insert_string_value = [&](int col_index, std::string str_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(str_val.data(),
-                                                                          str_val.size());
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-    auto insert_int_value = [&](int col_index, int64_t int_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_value(
-                int_val);
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-
     for (int i = 0; i < result_data.size(); i++) {
         TRow row = result_data[i];
-
-        insert_string_value(0, row.column_value[0].stringVal, _active_query_block.get());
-        insert_string_value(1, row.column_value[1].stringVal, _active_query_block.get());
-        insert_int_value(2, row.column_value[2].longVal, _active_query_block.get());
-        insert_int_value(3, row.column_value[3].longVal, _active_query_block.get());
-        insert_string_value(4, row.column_value[4].stringVal, _active_query_block.get());
-        insert_string_value(5, row.column_value[5].stringVal, _active_query_block.get());
-        insert_string_value(6, row.column_value[6].stringVal, _active_query_block.get());
-        insert_string_value(7, row.column_value[7].stringVal, _active_query_block.get());
-        insert_string_value(8, row.column_value[8].stringVal, _active_query_block.get());
-        insert_string_value(9, row.column_value[9].stringVal, _active_query_block.get());
+        for (int j = 0; j < _s_tbls_columns.size(); j++) {
+            RETURN_IF_ERROR(insert_block_column(row.column_value[j], j, _active_query_block.get(),
+                                                _s_tbls_columns[j].type));
+        }
     }
return Status::OK();
}
diff --git a/be/src/exec/schema_scanner/schema_partitions_scanner.cpp b/be/src/exec/schema_scanner/schema_partitions_scanner.cpp
index ea7394e15e1..ebe2bd3b70e 100644
--- a/be/src/exec/schema_scanner/schema_partitions_scanner.cpp
+++ b/be/src/exec/schema_scanner/schema_partitions_scanner.cpp
@@ -22,10 +22,13 @@
#include <stdint.h>
#include "exec/schema_scanner/schema_helper.h"
-#include "runtime/decimalv2_value.h"
-#include "runtime/define_primitive_type.h"
-#include "util/runtime_profile.h"
+#include "runtime/client_cache.h"
+#include "runtime/exec_env.h"
+#include "runtime/runtime_state.h"
+#include "util/thrift_rpc_helper.h"
#include "vec/common/string_ref.h"
+#include "vec/core/block.h"
+#include "vec/data_types/data_type_factory.hpp"
namespace doris {
class RuntimeState;
@@ -63,9 +66,7 @@ std::vector<SchemaScanner::ColumnDesc> SchemaPartitionsScanner::_s_tbls_columns
};
SchemaPartitionsScanner::SchemaPartitionsScanner()
- : SchemaScanner(_s_tbls_columns, TSchemaTableType::SCH_PARTITIONS),
- _db_index(0),
- _table_index(0) {}
+ : SchemaScanner(_s_tbls_columns, TSchemaTableType::SCH_PARTITIONS) {}
SchemaPartitionsScanner::~SchemaPartitionsScanner() {}
@@ -75,21 +76,14 @@ Status SchemaPartitionsScanner::start(RuntimeState* state) {
}
SCOPED_TIMER(_get_db_timer);
TGetDbsParams db_params;
- if (nullptr != _param->common_param->db) {
+ if (_param->common_param->db) {
db_params.__set_pattern(*(_param->common_param->db));
}
- if (nullptr != _param->common_param->catalog) {
+ if (_param->common_param->catalog) {
db_params.__set_catalog(*(_param->common_param->catalog));
}
- if (nullptr != _param->common_param->current_user_ident) {
+ if (_param->common_param->current_user_ident) {
db_params.__set_current_user_ident(*(_param->common_param->current_user_ident));
- } else {
- if (nullptr != _param->common_param->user) {
- db_params.__set_user(*(_param->common_param->user));
- }
- if (nullptr != _param->common_param->user_ip) {
- db_params.__set_user_ip(*(_param->common_param->user_ip));
- }
}
     if (nullptr != _param->common_param->ip && 0 != _param->common_param->port) {
@@ -98,17 +92,109 @@ Status SchemaPartitionsScanner::start(RuntimeState* state) {
} else {
return Status::InternalError("IP or port doesn't exists");
}
+ _block_rows_limit = state->batch_size();
+ _rpc_timeout_ms = state->execution_timeout() * 1000;
return Status::OK();
}
+Status SchemaPartitionsScanner::get_onedb_info_from_fe(int64_t dbId) {
+    TNetworkAddress master_addr = ExecEnv::GetInstance()->master_info()->network_address;
+
+    TSchemaTableRequestParams schema_table_request_params;
+    for (int i = 0; i < _s_tbls_columns.size(); i++) {
+        schema_table_request_params.__isset.columns_name = true;
+        schema_table_request_params.columns_name.emplace_back(_s_tbls_columns[i].name);
+    }
+    schema_table_request_params.__set_current_user_ident(*_param->common_param->current_user_ident);
+    schema_table_request_params.__set_catalog(*_param->common_param->catalog);
+    schema_table_request_params.__set_dbId(dbId);
+
+ TFetchSchemaTableDataRequest request;
+ request.__set_schema_table_name(TSchemaTableName::PARTITIONS);
+ request.__set_schema_table_params(schema_table_request_params);
+
+ TFetchSchemaTableDataResult result;
+
+ RETURN_IF_ERROR(ThriftRpcHelper::rpc<FrontendServiceClient>(
+ master_addr.hostname, master_addr.port,
+ [&request, &result](FrontendServiceConnection& client) {
+ client->fetchSchemaTableData(result, request);
+ },
+ _rpc_timeout_ms));
+
+ Status status(Status::create(result.status));
+ if (!status.ok()) {
+        LOG(WARNING) << "fetch table options from FE failed, errmsg=" << status;
+ return status;
+ }
+ std::vector<TRow> result_data = result.data_batch;
+
+ _partitions_block = vectorized::Block::create_unique();
+ for (int i = 0; i < _s_tbls_columns.size(); ++i) {
+ TypeDescriptor descriptor(_s_tbls_columns[i].type);
+        auto data_type = vectorized::DataTypeFactory::instance().create_data_type(descriptor, true);
+        _partitions_block->insert(vectorized::ColumnWithTypeAndName(
+                data_type->create_column(), data_type, _s_tbls_columns[i].name));
+ }
+ _partitions_block->reserve(_block_rows_limit);
+ if (result_data.size() > 0) {
+ int col_size = result_data[0].column_value.size();
+ if (col_size != _s_tbls_columns.size()) {
+            return Status::InternalError<false>("table options schema is not match for FE and BE");
+ }
+ }
+
+ for (int i = 0; i < result_data.size(); i++) {
+ TRow row = result_data[i];
+ for (int j = 0; j < _s_tbls_columns.size(); j++) {
+            RETURN_IF_ERROR(insert_block_column(row.column_value[j], j, _partitions_block.get(),
+                                                _s_tbls_columns[j].type));
+ }
+ }
+ return Status::OK();
+}
+
+bool SchemaPartitionsScanner::check_and_mark_eos(bool* eos) const {
+ if (_row_idx == _total_rows) {
+ *eos = true;
+ if (_db_index < _db_result.db_ids.size()) {
+ *eos = false;
+ }
+ return true;
+ }
+ return false;
+}
+
 Status SchemaPartitionsScanner::get_next_block_internal(vectorized::Block* block, bool* eos) {
if (!_is_init) {
return Status::InternalError("Used before initialized.");
}
+
if (nullptr == block || nullptr == eos) {
return Status::InternalError("input pointer is nullptr.");
}
- *eos = true;
+
+ if ((_partitions_block == nullptr) || (_row_idx == _total_rows)) {
+ if (_db_index < _db_result.db_ids.size()) {
+            RETURN_IF_ERROR(get_onedb_info_from_fe(_db_result.db_ids[_db_index]));
+            _row_idx = 0; // reset row index so that it start filling for next block.
+ _total_rows = _partitions_block->rows();
+ _db_index++;
+ }
+ }
+
+ if (check_and_mark_eos(eos)) {
+ return Status::OK();
+ }
+
+    int current_batch_rows = std::min(_block_rows_limit, _total_rows - _row_idx);
+    vectorized::MutableBlock mblock = vectorized::MutableBlock::build_mutable_block(block);
+    RETURN_IF_ERROR(mblock.add_rows(_partitions_block.get(), _row_idx, current_batch_rows));
+ _row_idx += current_batch_rows;
+
+ if (!check_and_mark_eos(eos)) {
+ *eos = false;
+ }
return Status::OK();
}
diff --git a/be/src/exec/schema_scanner/schema_partitions_scanner.h b/be/src/exec/schema_scanner/schema_partitions_scanner.h
index 87e55db984a..3c246f36eec 100644
--- a/be/src/exec/schema_scanner/schema_partitions_scanner.h
+++ b/be/src/exec/schema_scanner/schema_partitions_scanner.h
@@ -40,11 +40,18 @@ public:
Status start(RuntimeState* state) override;
     Status get_next_block_internal(vectorized::Block* block, bool* eos) override;
- int _db_index;
- int _table_index;
- TGetDbsResult _db_result;
- TListTableStatusResult _table_result;
static std::vector<SchemaScanner::ColumnDesc> _s_tbls_columns;
+
+private:
+ Status get_onedb_info_from_fe(int64_t dbId);
+ bool check_and_mark_eos(bool* eos) const;
+ int _block_rows_limit = 4096;
+ int _db_index = 0;
+ TGetDbsResult _db_result;
+ int _row_idx = 0;
+ int _total_rows = 0;
+ std::unique_ptr<vectorized::Block> _partitions_block = nullptr;
+ int _rpc_timeout_ms = 3000;
};
} // namespace doris
diff --git a/be/src/exec/schema_scanner/schema_routine_scanner.cpp b/be/src/exec/schema_scanner/schema_routine_scanner.cpp
index 8c263c99d2d..e8d95f0abd6 100644
--- a/be/src/exec/schema_scanner/schema_routine_scanner.cpp
+++ b/be/src/exec/schema_scanner/schema_routine_scanner.cpp
@@ -99,43 +99,12 @@ Status SchemaRoutinesScanner::get_block_from_fe() {
             return Status::InternalError<false>("routine table schema is not match for FE and BE");
         }
     }
-    auto insert_string_value = [&](int col_index, std::string str_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(str_val.data(),
-                                                                          str_val.size());
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-    auto insert_datetime_value = [&](int col_index, const std::vector<void*>& datas,
-                                     vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        auto data = datas[0];
-        reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_data(
-                reinterpret_cast<char*>(data), 0);
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
     for (int i = 0; i < result_data.size(); i++) {
         TRow row = result_data[i];
-
         for (int j = 0; j < _s_tbls_columns.size(); j++) {
-            if (_s_tbls_columns[j].type == TYPE_DATETIME) {
-                std::vector<void*> datas(1);
-                VecDateTimeValue src[1];
-                src[0].from_date_str(row.column_value[j].stringVal.data(),
-                                     row.column_value[j].stringVal.size());
-                datas[0] = src;
-                insert_datetime_value(j, datas, _routines_block.get());
-            } else {
-                insert_string_value(j, row.column_value[j].stringVal, _routines_block.get());
-            }
+            RETURN_IF_ERROR(insert_block_column(row.column_value[j], j, _routines_block.get(),
+                                                _s_tbls_columns[j].type));
         }
     }
return Status::OK();
diff --git a/be/src/exec/schema_scanner/schema_scanner_helper.cpp b/be/src/exec/schema_scanner/schema_scanner_helper.cpp
new file mode 100644
index 00000000000..fc42044a29c
--- /dev/null
+++ b/be/src/exec/schema_scanner/schema_scanner_helper.cpp
@@ -0,0 +1,73 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "exec/schema_scanner/schema_scanner_helper.h"
+
+#include "runtime/client_cache.h"
+#include "runtime/exec_env.h"
+#include "runtime/runtime_state.h"
+#include "util/thrift_rpc_helper.h"
+#include "vec/common/string_ref.h"
+#include "vec/core/block.h"
+#include "vec/data_types/data_type_factory.hpp"
+
+namespace doris {
+
+void SchemaScannerHelper::insert_string_value(int col_index, std::string str_val,
+                                              vectorized::Block* block) {
+    vectorized::MutableColumnPtr mutable_col_ptr;
+    mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
+    auto* nullable_column = reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
+    vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
+    reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(str_val.data(),
+                                                                      str_val.size());
+    nullable_column->get_null_map_data().emplace_back(0);
+}
+
+void SchemaScannerHelper::insert_datetime_value(int col_index, const std::vector<void*>& datas,
+                                                vectorized::Block* block) {
+    vectorized::MutableColumnPtr mutable_col_ptr;
+    mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
+    auto* nullable_column = reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
+    vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
+    auto data = datas[0];
+    reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_data(
+            reinterpret_cast<char*>(data), 0);
+    nullable_column->get_null_map_data().emplace_back(0);
+}
+
+void SchemaScannerHelper::insert_int_value(int col_index, int64_t int_val,
+                                           vectorized::Block* block) {
+    vectorized::MutableColumnPtr mutable_col_ptr;
+    mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
+    auto* nullable_column = reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
+    vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
+    reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_value(int_val);
+    nullable_column->get_null_map_data().emplace_back(0);
+}
+
+void SchemaScannerHelper::insert_double_value(int col_index, double double_val,
+                                              vectorized::Block* block) {
+    vectorized::MutableColumnPtr mutable_col_ptr;
+    mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
+    auto* nullable_column = reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
+    vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
+    reinterpret_cast<vectorized::ColumnVector<vectorized::Float64>*>(col_ptr)->insert_value(
+            double_val);
+    nullable_column->get_null_map_data().emplace_back(0);
+}
+} // namespace doris
diff --git a/be/src/exec/schema_scanner/schema_partitions_scanner.h b/be/src/exec/schema_scanner/schema_scanner_helper.h
similarity index 58%
copy from be/src/exec/schema_scanner/schema_partitions_scanner.h
copy to be/src/exec/schema_scanner/schema_scanner_helper.h
index 87e55db984a..c9fe8881ddb 100644
--- a/be/src/exec/schema_scanner/schema_partitions_scanner.h
+++ b/be/src/exec/schema_scanner/schema_scanner_helper.h
@@ -15,36 +15,29 @@
// specific language governing permissions and limitations
// under the License.
-#pragma once
+#ifndef _SCHEMA_SCANNER_HELPER_H_
-#include <gen_cpp/FrontendService_types.h>
+#include <stdint.h>
+#include <string>
#include <vector>
-#include "common/status.h"
-#include "exec/schema_scanner.h"
-
+// this is a util class which can be used by all shema scanner
+// all common functions are added in this class.
namespace doris {
-class RuntimeState;
+
namespace vectorized {
class Block;
} // namespace vectorized
-
-class SchemaPartitionsScanner : public SchemaScanner {
- ENABLE_FACTORY_CREATOR(SchemaPartitionsScanner);
-
+class SchemaScannerHelper {
public:
- SchemaPartitionsScanner();
- ~SchemaPartitionsScanner() override;
-
- Status start(RuntimeState* state) override;
-    Status get_next_block_internal(vectorized::Block* block, bool* eos) override;
+    static void insert_string_value(int col_index, std::string str_val, vectorized::Block* block);
+    static void insert_datetime_value(int col_index, const std::vector<void*>& datas,
+                                      vectorized::Block* block);
- int _db_index;
- int _table_index;
- TGetDbsResult _db_result;
- TListTableStatusResult _table_result;
- static std::vector<SchemaScanner::ColumnDesc> _s_tbls_columns;
+    static void insert_int_value(int col_index, int64_t int_val, vectorized::Block* block);
+    static void insert_double_value(int col_index, double double_val, vectorized::Block* block);
};
} // namespace doris
+#endif
diff --git a/be/src/exec/schema_scanner/schema_table_options_scanner.cpp b/be/src/exec/schema_scanner/schema_table_options_scanner.cpp
new file mode 100644
index 00000000000..7465d9fe7ae
--- /dev/null
+++ b/be/src/exec/schema_scanner/schema_table_options_scanner.cpp
@@ -0,0 +1,173 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#include "exec/schema_scanner/schema_table_options_scanner.h"
+
+#include "exec/schema_scanner/schema_helper.h"
+#include "runtime/client_cache.h"
+#include "runtime/exec_env.h"
+#include "runtime/runtime_state.h"
+#include "util/thrift_rpc_helper.h"
+#include "vec/common/string_ref.h"
+#include "vec/core/block.h"
+#include "vec/data_types/data_type_factory.hpp"
+
+namespace doris {
+std::vector<SchemaScanner::ColumnDesc> SchemaTableOptionsScanner::_s_tbls_columns = {
+ {"TABLE_CATALOG", TYPE_VARCHAR, sizeof(StringRef), true},
+ {"TABLE_SCHEMA", TYPE_VARCHAR, sizeof(StringRef), true},
+ {"TABLE_NAME", TYPE_VARCHAR, sizeof(StringRef), true},
+ {"TABLE_MODEL", TYPE_STRING, sizeof(StringRef), true},
+ {"TABLE_MODEL_KEY", TYPE_STRING, sizeof(StringRef), true},
+ {"DISTRIBUTE_KEY", TYPE_STRING, sizeof(StringRef), true},
+ {"DISTRIBUTE_TYPE", TYPE_STRING, sizeof(StringRef), true},
+ {"BUCKETS_NUM", TYPE_INT, sizeof(int32_t), true},
+ {"PARTITION_NUM", TYPE_INT, sizeof(int32_t), true},
+};
+
+SchemaTableOptionsScanner::SchemaTableOptionsScanner()
+        : SchemaScanner(_s_tbls_columns, TSchemaTableType::SCH_TABLE_OPTIONS) {}
+
+Status SchemaTableOptionsScanner::start(RuntimeState* state) {
+ if (!_is_init) {
+ return Status::InternalError("used before initialized.");
+ }
+
+ // first get the all the database specific to current catalog
+ SCOPED_TIMER(_get_db_timer);
+ TGetDbsParams db_params;
+
+ if (_param->common_param->catalog) {
+ db_params.__set_catalog(*(_param->common_param->catalog));
+ }
+ if (_param->common_param->current_user_ident) {
+        db_params.__set_current_user_ident(*(_param->common_param->current_user_ident));
+ }
+
+ if (_param->common_param->ip && 0 != _param->common_param->port) {
+        RETURN_IF_ERROR(SchemaHelper::get_db_names(
+                *(_param->common_param->ip), _param->common_param->port, db_params, &_db_result));
+ } else {
+ return Status::InternalError("IP or port doesn't exists");
+ }
+ _block_rows_limit = state->batch_size();
+ _rpc_timeout_ms = state->execution_timeout() * 1000;
+ return Status::OK();
+}
+
+Status SchemaTableOptionsScanner::get_onedb_info_from_fe(int64_t dbId) {
+    TNetworkAddress master_addr = ExecEnv::GetInstance()->master_info()->network_address;
+
+    TSchemaTableRequestParams schema_table_request_params;
+    for (int i = 0; i < _s_tbls_columns.size(); i++) {
+        schema_table_request_params.__isset.columns_name = true;
+        schema_table_request_params.columns_name.emplace_back(_s_tbls_columns[i].name);
+    }
+    schema_table_request_params.__set_current_user_ident(*_param->common_param->current_user_ident);
+    schema_table_request_params.__set_catalog(*_param->common_param->catalog);
+    schema_table_request_params.__set_dbId(dbId);
+
+ TFetchSchemaTableDataRequest request;
+ request.__set_schema_table_name(TSchemaTableName::TABLE_OPTIONS);
+ request.__set_schema_table_params(schema_table_request_params);
+
+ TFetchSchemaTableDataResult result;
+
+ RETURN_IF_ERROR(ThriftRpcHelper::rpc<FrontendServiceClient>(
+ master_addr.hostname, master_addr.port,
+ [&request, &result](FrontendServiceConnection& client) {
+ client->fetchSchemaTableData(result, request);
+ },
+ _rpc_timeout_ms));
+
+ Status status(Status::create(result.status));
+ if (!status.ok()) {
+        LOG(WARNING) << "fetch table options from FE failed, errmsg=" << status;
+ return status;
+ }
+ std::vector<TRow> result_data = result.data_batch;
+
+ _tableoptions_block = vectorized::Block::create_unique();
+ for (int i = 0; i < _s_tbls_columns.size(); ++i) {
+ TypeDescriptor descriptor(_s_tbls_columns[i].type);
+        auto data_type = vectorized::DataTypeFactory::instance().create_data_type(descriptor, true);
+        _tableoptions_block->insert(vectorized::ColumnWithTypeAndName(
+                data_type->create_column(), data_type, _s_tbls_columns[i].name));
+ }
+ _tableoptions_block->reserve(_block_rows_limit);
+ if (result_data.size() > 0) {
+ int col_size = result_data[0].column_value.size();
+ if (col_size != _s_tbls_columns.size()) {
+            return Status::InternalError<false>("table options schema is not match for FE and BE");
+ }
+ }
+
+ for (int i = 0; i < result_data.size(); i++) {
+ TRow row = result_data[i];
+ for (int j = 0; j < _s_tbls_columns.size(); j++) {
+            RETURN_IF_ERROR(insert_block_column(row.column_value[j], j, _tableoptions_block.get(),
+                                                _s_tbls_columns[j].type));
+ }
+ }
+ return Status::OK();
+}
+
+bool SchemaTableOptionsScanner::check_and_mark_eos(bool* eos) const {
+ if (_row_idx == _total_rows) {
+ *eos = true;
+ if (_db_index < _db_result.db_ids.size()) {
+ *eos = false;
+ }
+ return true;
+ }
+ return false;
+}
+
+Status SchemaTableOptionsScanner::get_next_block_internal(vectorized::Block* block, bool* eos) {
+ if (!_is_init) {
+ return Status::InternalError("Used before initialized.");
+ }
+
+ if (nullptr == block || nullptr == eos) {
+ return Status::InternalError("input pointer is nullptr.");
+ }
+
+ if ((_tableoptions_block == nullptr) || (_row_idx == _total_rows)) {
+ if (_db_index < _db_result.db_ids.size()) {
+            RETURN_IF_ERROR(get_onedb_info_from_fe(_db_result.db_ids[_db_index]));
+            _row_idx = 0; // reset row index so that it starts filling the next block.
+ _total_rows = _tableoptions_block->rows();
+ _db_index++;
+ }
+ }
+
+ if (check_and_mark_eos(eos)) {
+ return Status::OK();
+ }
+
+    int current_batch_rows = std::min(_block_rows_limit, _total_rows - _row_idx);
+    vectorized::MutableBlock mblock = vectorized::MutableBlock::build_mutable_block(block);
+    RETURN_IF_ERROR(mblock.add_rows(_tableoptions_block.get(), _row_idx, current_batch_rows));
+ _row_idx += current_batch_rows;
+
+ if (!check_and_mark_eos(eos)) {
+ *eos = false;
+ }
+ return Status::OK();
+}
+
+} // namespace doris
\ No newline at end of file
diff --git a/be/src/exec/schema_scanner/schema_partitions_scanner.h b/be/src/exec/schema_scanner/schema_table_options_scanner.h
similarity index 70%
copy from be/src/exec/schema_scanner/schema_partitions_scanner.h
copy to be/src/exec/schema_scanner/schema_table_options_scanner.h
index 87e55db984a..631bd3b07ee 100644
--- a/be/src/exec/schema_scanner/schema_partitions_scanner.h
+++ b/be/src/exec/schema_scanner/schema_table_options_scanner.h
@@ -16,7 +16,6 @@
// under the License.
#pragma once
-
#include <gen_cpp/FrontendService_types.h>
#include <vector>
@@ -30,21 +29,27 @@ namespace vectorized {
class Block;
} // namespace vectorized
-class SchemaPartitionsScanner : public SchemaScanner {
- ENABLE_FACTORY_CREATOR(SchemaPartitionsScanner);
+class SchemaTableOptionsScanner : public SchemaScanner {
+ ENABLE_FACTORY_CREATOR(SchemaTableOptionsScanner);
public:
- SchemaPartitionsScanner();
- ~SchemaPartitionsScanner() override;
+ SchemaTableOptionsScanner();
+ ~SchemaTableOptionsScanner() override = default;
Status start(RuntimeState* state) override;
     Status get_next_block_internal(vectorized::Block* block, bool* eos) override;
- int _db_index;
- int _table_index;
- TGetDbsResult _db_result;
- TListTableStatusResult _table_result;
static std::vector<SchemaScanner::ColumnDesc> _s_tbls_columns;
-};
-} // namespace doris
+private:
+ Status get_onedb_info_from_fe(int64_t dbId);
+ bool check_and_mark_eos(bool* eos) const;
+ int _block_rows_limit = 4096;
+ int _db_index = 0;
+ TGetDbsResult _db_result;
+ int _row_idx = 0;
+ int _total_rows = 0;
+ std::unique_ptr<vectorized::Block> _tableoptions_block = nullptr;
+ int _rpc_timeout_ms = 3000;
+};
+} // namespace doris
\ No newline at end of file
diff --git a/be/src/runtime/runtime_query_statistics_mgr.cpp b/be/src/runtime/runtime_query_statistics_mgr.cpp
index d55bbed9761..7e6a34ad1b5 100644
--- a/be/src/runtime/runtime_query_statistics_mgr.cpp
+++ b/be/src/runtime/runtime_query_statistics_mgr.cpp
@@ -17,6 +17,7 @@
#include "runtime/runtime_query_statistics_mgr.h"
+#include "exec/schema_scanner/schema_scanner_helper.h"
#include "runtime/client_cache.h"
#include "runtime/exec_env.h"
#include "util/debug_util.h"
@@ -223,51 +224,29 @@ void RuntimeQueryStatiticsMgr::get_active_be_tasks_block(vectorized::Block* bloc
std::shared_lock<std::shared_mutex> read_lock(_qs_ctx_map_lock);
int64_t be_id = ExecEnv::GetInstance()->master_info()->backend_id;
-    auto insert_int_value = [&](int col_index, int64_t int_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_value(
-                int_val);
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-
-    auto insert_string_value = [&](int col_index, std::string str_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(str_val.data(),
-                                                                          str_val.size());
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-
     // block's schema come from SchemaBackendActiveTasksScanner::_s_tbls_columns
for (auto& [query_id, qs_ctx_ptr] : _query_statistics_ctx_map) {
TQueryStatistics tqs;
qs_ctx_ptr->collect_query_statistics(&tqs);
- insert_int_value(0, be_id, block);
- insert_string_value(1, qs_ctx_ptr->_fe_addr.hostname, block);
- insert_string_value(2, query_id, block);
+ SchemaScannerHelper::insert_int_value(0, be_id, block);
+        SchemaScannerHelper::insert_string_value(1, qs_ctx_ptr->_fe_addr.hostname, block);
+ SchemaScannerHelper::insert_string_value(2, query_id, block);
         int64_t task_time = qs_ctx_ptr->_is_query_finished
                                     ? qs_ctx_ptr->_query_finish_time - qs_ctx_ptr->_query_start_time
                                     : MonotonicMillis() - qs_ctx_ptr->_query_start_time;
- insert_int_value(3, task_time, block);
- insert_int_value(4, tqs.cpu_ms, block);
- insert_int_value(5, tqs.scan_rows, block);
- insert_int_value(6, tqs.scan_bytes, block);
- insert_int_value(7, tqs.max_peak_memory_bytes, block);
- insert_int_value(8, tqs.current_used_memory_bytes, block);
- insert_int_value(9, tqs.shuffle_send_bytes, block);
- insert_int_value(10, tqs.shuffle_send_rows, block);
+ SchemaScannerHelper::insert_int_value(3, task_time, block);
+ SchemaScannerHelper::insert_int_value(4, tqs.cpu_ms, block);
+ SchemaScannerHelper::insert_int_value(5, tqs.scan_rows, block);
+ SchemaScannerHelper::insert_int_value(6, tqs.scan_bytes, block);
+        SchemaScannerHelper::insert_int_value(7, tqs.max_peak_memory_bytes, block);
+        SchemaScannerHelper::insert_int_value(8, tqs.current_used_memory_bytes, block);
+        SchemaScannerHelper::insert_int_value(9, tqs.shuffle_send_bytes, block);
+        SchemaScannerHelper::insert_int_value(10, tqs.shuffle_send_rows, block);
std::stringstream ss;
ss << qs_ctx_ptr->_query_type;
- insert_string_value(11, ss.str(), block);
+ SchemaScannerHelper::insert_string_value(11, ss.str(), block);
}
}
diff --git a/be/src/runtime/workload_group/workload_group.cpp b/be/src/runtime/workload_group/workload_group.cpp
index 63295563699..8b0985d4ecf 100644
--- a/be/src/runtime/workload_group/workload_group.cpp
+++ b/be/src/runtime/workload_group/workload_group.cpp
@@ -27,6 +27,7 @@
#include <utility>
#include "common/logging.h"
+#include "exec/schema_scanner/schema_scanner_helper.h"
#include "io/fs/local_file_reader.h"
#include "olap/storage_engine.h"
#include "pipeline/task_queue.h"
diff --git a/be/src/runtime/workload_group/workload_group_manager.cpp b/be/src/runtime/workload_group/workload_group_manager.cpp
index 1df0dcc3a46..62fcf0aad23 100644
--- a/be/src/runtime/workload_group/workload_group_manager.cpp
+++ b/be/src/runtime/workload_group/workload_group_manager.cpp
@@ -21,6 +21,7 @@
#include <mutex>
#include <unordered_map>
+#include "exec/schema_scanner/schema_scanner_helper.h"
#include "pipeline/task_scheduler.h"
#include "runtime/memory/mem_tracker_limiter.h"
#include "runtime/workload_group/workload_group.h"
@@ -258,28 +259,6 @@ void WorkloadGroupMgr::refresh_wg_weighted_memory_limit() {
}
void WorkloadGroupMgr::get_wg_resource_usage(vectorized::Block* block) {
-    auto insert_int_value = [&](int col_index, int64_t int_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)->insert_value(
-                int_val);
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-
-    auto insert_double_value = [&](int col_index, double double_val, vectorized::Block* block) {
-        vectorized::MutableColumnPtr mutable_col_ptr;
-        mutable_col_ptr = std::move(*block->get_by_position(col_index).column).assume_mutable();
-        auto* nullable_column =
-                reinterpret_cast<vectorized::ColumnNullable*>(mutable_col_ptr.get());
-        vectorized::IColumn* col_ptr = &nullable_column->get_nested_column();
-        reinterpret_cast<vectorized::ColumnVector<vectorized::Float64>*>(col_ptr)->insert_value(
-                double_val);
-        nullable_column->get_null_map_data().emplace_back(0);
-    };
-
int64_t be_id = ExecEnv::GetInstance()->master_info()->backend_id;
int cpu_num = CpuInfo::num_cores();
cpu_num = cpu_num <= 0 ? 1 : cpu_num;
@@ -288,18 +267,18 @@ void WorkloadGroupMgr::get_wg_resource_usage(vectorized::Block* block) {
std::shared_lock<std::shared_mutex> r_lock(_group_mutex);
block->reserve(_workload_groups.size());
for (const auto& [id, wg] : _workload_groups) {
- insert_int_value(0, be_id, block);
- insert_int_value(1, wg->id(), block);
- insert_int_value(2, wg->get_mem_used(), block);
+ SchemaScannerHelper::insert_int_value(0, be_id, block);
+ SchemaScannerHelper::insert_int_value(1, wg->id(), block);
+ SchemaScannerHelper::insert_int_value(2, wg->get_mem_used(), block);
             double cpu_usage_p =
                     (double)wg->get_cpu_usage() / (double)total_cpu_time_ns_per_second * 100;
cpu_usage_p = std::round(cpu_usage_p * 100.0) / 100.0;
- insert_double_value(3, cpu_usage_p, block);
+ SchemaScannerHelper::insert_double_value(3, cpu_usage_p, block);
- insert_int_value(4, wg->get_local_scan_bytes_per_second(), block);
- insert_int_value(5, wg->get_remote_scan_bytes_per_second(), block);
+        SchemaScannerHelper::insert_int_value(4, wg->get_local_scan_bytes_per_second(), block);
+        SchemaScannerHelper::insert_int_value(5, wg->get_remote_scan_bytes_per_second(), block);
}
}
diff --git a/fe/fe-core/src/main/java/org/apache/doris/analysis/SchemaTableType.java b/fe/fe-core/src/main/java/org/apache/doris/analysis/SchemaTableType.java
index 2b6bd0f089e..69787463bc7 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/SchemaTableType.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/SchemaTableType.java
@@ -75,7 +75,6 @@ public enum SchemaTableType {
     SCH_WORKLOAD_GROUPS("WORKLOAD_GROUPS", "WORKLOAD_GROUPS", TSchemaTableType.SCH_WORKLOAD_GROUPS),
SCHE_USER("user", "user", TSchemaTableType.SCH_USER),
     SCH_PROCS_PRIV("procs_priv", "procs_priv", TSchemaTableType.SCH_PROCS_PRIV),
-
     SCH_WORKLOAD_POLICY("WORKLOAD_POLICY", "WORKLOAD_POLICY", TSchemaTableType.SCH_WORKLOAD_POLICY),
SCH_FILE_CACHE_STATISTICS("FILE_CACHE_STATISTICS", "FILE_CACHE_STATISTICS",
@@ -87,7 +86,9 @@ public enum SchemaTableType {
     SCH_TABLE_PROPERTIES("TABLE_PROPERTIES", "TABLE_PROPERTIES", TSchemaTableType.SCH_TABLE_PROPERTIES),
     SCH_CATALOG_META_CACHE_STATISTICS("CATALOG_META_CACHE_STATISTICS", "CATALOG_META_CACHE_STATISTICS",
- TSchemaTableType.SCH_CATALOG_META_CACHE_STATISTICS);
+ TSchemaTableType.SCH_CATALOG_META_CACHE_STATISTICS),
+ SCH_TABLE_OPTIONS("TABLE_OPTIONS", "TABLE_OPTIONS",
+ TSchemaTableType.SCH_TABLE_OPTIONS);
private static final String dbName = "INFORMATION_SCHEMA";
private static SelectList fullSelectLists;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/ListPartitionItem.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/ListPartitionItem.java
index dafdcdc49f5..2db1531de9b 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/ListPartitionItem.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/ListPartitionItem.java
@@ -58,6 +58,14 @@ public class ListPartitionItem extends PartitionItem {
return partitionKeys;
}
+ public String getItemsString() {
+ return toString();
+ }
+
+ public String getItemsSql() {
+ return toSql();
+ }
+
@Override
public boolean isDefaultPartition() {
return isDefaultPartition;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/OlapTable.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/OlapTable.java
index e048ae8ede8..0d4e12cfdd6 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/OlapTable.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/OlapTable.java
@@ -2025,6 +2025,20 @@ public class OlapTable extends Table implements MTMVRelatedTableIf {
return keysNum;
}
+ public String getKeyColAsString() {
+        StringBuilder str = new StringBuilder();
+ for (Column column : getBaseSchema()) {
+ if (column.isKey()) {
+ if (str.length() != 0) {
+ str.append(",");
+ }
+ str.append(column.getName());
+ }
+ }
+ return str.toString();
+ }
+
public boolean convertHashDistributionToRandomDistribution() {
boolean hasChanged = false;
if (defaultDistributionInfo.getType() == DistributionInfoType.HASH) {
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/Partition.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/Partition.java
index a21ba68f0dd..aad35071952 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/Partition.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/Partition.java
@@ -478,4 +478,23 @@ public class Partition extends MetaObject implements Writable {
public boolean isRollupIndex(long id) {
return idToVisibleRollupIndex.containsKey(id);
}
+
+
+ public long getRowCount() {
+ return getBaseIndex().getRowCount();
+ }
+
+ public long getAvgRowLength() {
+ long rowCount = getBaseIndex().getRowCount();
+ long dataSize = getBaseIndex().getDataSize(false);
+ if (rowCount > 0) {
+ return dataSize / rowCount;
+ } else {
+ return 0;
+ }
+ }
+
+ public long getDataLength() {
+ return getBaseIndex().getDataSize(false);
+ }
}
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
index 434812b07d3..8f148188a5a 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionInfo.java
@@ -122,6 +122,19 @@ public class PartitionInfo implements Writable {
return partitionColumns;
}
+ public String getDisplayPartitionColumns() {
+ StringBuilder sb = new StringBuilder();
+ int index = 0;
+ for (Column c : partitionColumns) {
+ if (index != 0) {
+ sb.append(", ");
+ }
+ sb.append(c.getDisplayName());
+ index++;
+ }
+ return sb.toString();
+ }
+
public Map<Long, PartitionItem> getIdToItem(boolean isTemp) {
if (isTemp) {
return idToTempItem;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionItem.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionItem.java
index af1bbc9d0e2..386f7537b03 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionItem.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/PartitionItem.java
@@ -60,4 +60,8 @@ public abstract class PartitionItem implements Comparable<PartitionItem>, Writab
     public abstract boolean isGreaterThanSpecifiedTime(int pos, Optional<String> dateFormatOptional,
long nowTruncSubSec)
throws AnalysisException;
+
+
+    // get the unique string of the partition item in SQL format
+ public abstract String getItemsSql();
}
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/RangePartitionItem.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/RangePartitionItem.java
index bb7ddabbaa4..e7f2a9cab5d 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/RangePartitionItem.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/RangePartitionItem.java
@@ -46,6 +46,14 @@ public class RangePartitionItem extends PartitionItem {
return partitionKeyRange;
}
+ public String getItemsString() {
+ return toString();
+ }
+
+ public String getItemsSql() {
+ return toPartitionKeyDesc().toSql();
+ }
+
@Override
public boolean isDefaultPartition() {
return false;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/SchemaTable.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/SchemaTable.java
index eeff956658f..5d6c01ea2a0 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/SchemaTable.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/SchemaTable.java
@@ -558,6 +558,18 @@ public class SchemaTable extends Table {
                         .column("METRIC_VALUE", ScalarType.createStringType())
.build())
)
+            .put("table_options",
+                    new SchemaTable(SystemIdGenerator.getNextId(), "table_options", TableType.SCHEMA,
+                            builder().column("TABLE_CATALOG", ScalarType.createVarchar(NAME_CHAR_LEN))
+                            .column("TABLE_SCHEMA", ScalarType.createVarchar(NAME_CHAR_LEN))
+                            .column("TABLE_NAME", ScalarType.createVarchar(NAME_CHAR_LEN))
+                            .column("TABLE_MODEL", ScalarType.createStringType())
+                            .column("TABLE_MODEL_KEY", ScalarType.createStringType())
+                            .column("DISTRIBUTE_KEY", ScalarType.createStringType())
+                            .column("DISTRIBUTE_TYPE", ScalarType.createStringType())
+                            .column("BUCKETS_NUM", ScalarType.createType(PrimitiveType.INT))
+                            .column("PARTITION_NUM", ScalarType.createType(PrimitiveType.INT))
+                            .build()))
.build();
private boolean fetchAllFe = false;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/TableProperty.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/TableProperty.java
index 2ede77a6180..0d58aabea08 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/TableProperty.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/TableProperty.java
@@ -657,4 +657,15 @@ public class TableProperty implements Writable {
properties.remove(DynamicPartitionProperty.REPLICATION_NUM);
}
}
+
+ public String getPropertiesString() {
+        StringBuilder str = new StringBuilder();
+ for (Map.Entry<String, String> entry : properties.entrySet()) {
+ if (str.length() != 0) {
+ str.append(", ");
+ }
+ str.append(entry.getKey() + " = " + entry.getValue());
+ }
+ return str.toString();
+ }
}
diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/proc/PartitionsProcDir.java b/fe/fe-core/src/main/java/org/apache/doris/common/proc/PartitionsProcDir.java
index 7a2acf64154..d0771f3be1c 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/common/proc/PartitionsProcDir.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/common/proc/PartitionsProcDir.java
@@ -76,6 +76,7 @@ public class PartitionsProcDir implements ProcDirInterface {
.add("Buckets").add("ReplicationNum").add("StorageMedium").add("CooldownTime").add("RemoteStoragePolicy")
.add("LastConsistencyCheckTime").add("DataSize").add("IsInMemory").add("ReplicaAllocation")
.add("IsMutable").add("SyncWithBaseTables").add("UnsyncTables").add("CommittedVersion")
+ .add("RowCount")
.build();
private Database db;
@@ -383,6 +384,9 @@ public class PartitionsProcDir implements ProcDirInterface {
partitionInfo.add(partition.getCommittedVersion());
             trow.addToColumnValue(new TCell().setLongVal(partition.getCommittedVersion()));
+            partitionInfo.add(partition.getRowCount());
+            trow.addToColumnValue(new TCell().setLongVal(partition.getRowCount()));
+
partitionInfos.add(Pair.of(partitionInfo, trow));
}
} finally {
diff --git a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
index 929497cbaab..a08b49ccada 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
@@ -21,9 +21,15 @@ import org.apache.doris.analysis.UserIdentity;
import org.apache.doris.catalog.Column;
import org.apache.doris.catalog.Database;
import org.apache.doris.catalog.DatabaseIf;
+import org.apache.doris.catalog.DistributionInfo;
+import org.apache.doris.catalog.DistributionInfo.DistributionInfoType;
import org.apache.doris.catalog.Env;
+import org.apache.doris.catalog.HashDistributionInfo;
import org.apache.doris.catalog.MTMV;
import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.PartitionItem;
+import org.apache.doris.catalog.PartitionType;
import org.apache.doris.catalog.SchemaTable;
import org.apache.doris.catalog.Table;
import org.apache.doris.catalog.TableIf;
@@ -96,6 +102,7 @@ import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.LocalDateTime;
import java.util.ArrayList;
+import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Map;
@@ -112,12 +119,16 @@ public class MetadataGenerator {
     private static final ImmutableMap<String, Integer> WORKLOAD_SCHED_POLICY_COLUMN_TO_INDEX;
+    private static final ImmutableMap<String, Integer> TABLE_OPTIONS_COLUMN_TO_INDEX;
+
     private static final ImmutableMap<String, Integer> WORKLOAD_GROUP_PRIVILEGES_COLUMN_TO_INDEX;
     private static final ImmutableMap<String, Integer> TABLE_PROPERTIES_COLUMN_TO_INDEX;
     private static final ImmutableMap<String, Integer> META_CACHE_STATS_COLUMN_TO_INDEX;
+    private static final ImmutableMap<String, Integer> PARTITIONS_COLUMN_TO_INDEX;
+
static {
         ImmutableMap.Builder<String, Integer> activeQueriesbuilder = new ImmutableMap.Builder();
         List<Column> activeQueriesColList = SchemaTable.TABLE_MAP.get("active_queries").getFullSchema();
@@ -145,6 +156,13 @@ public class MetadataGenerator {
}
WORKLOAD_SCHED_POLICY_COLUMN_TO_INDEX = policyBuilder.build();
+        ImmutableMap.Builder<String, Integer> optionBuilder = new ImmutableMap.Builder();
+        List<Column> optionColList = SchemaTable.TABLE_MAP.get("table_options").getFullSchema();
+ for (int i = 0; i < optionColList.size(); i++) {
+ optionBuilder.put(optionColList.get(i).getName().toLowerCase(), i);
+ }
+ TABLE_OPTIONS_COLUMN_TO_INDEX = optionBuilder.build();
+
         ImmutableMap.Builder<String, Integer> wgPrivsBuilder = new ImmutableMap.Builder();
         List<Column> wgPrivsColList = SchemaTable.TABLE_MAP.get("workload_group_privileges").getFullSchema();
for (int i = 0; i < wgPrivsColList.size(); i++) {
@@ -165,6 +183,13 @@ public class MetadataGenerator {
metaCacheBuilder.put(metaCacheColList.get(i).getName().toLowerCase(), i);
}
META_CACHE_STATS_COLUMN_TO_INDEX = metaCacheBuilder.build();
+
+        ImmutableMap.Builder<String, Integer> partitionsBuilder = new ImmutableMap.Builder();
+        List<Column> partitionsColList = SchemaTable.TABLE_MAP.get("partitions").getFullSchema();
+        for (int i = 0; i < partitionsColList.size(); i++) {
+            partitionsBuilder.put(partitionsColList.get(i).getName().toLowerCase(), i);
+        }
+        PARTITIONS_COLUMN_TO_INDEX = partitionsBuilder.build();
}
     public static TFetchSchemaTableDataResult getMetadataTable(TFetchSchemaTableDataRequest request) throws TException {
@@ -244,6 +269,10 @@ public class MetadataGenerator {
result = workloadSchedPolicyMetadataResult(schemaTableParams);
columnIndex = WORKLOAD_SCHED_POLICY_COLUMN_TO_INDEX;
break;
+ case TABLE_OPTIONS:
+ result = tableOptionsMetadataResult(schemaTableParams);
+ columnIndex = TABLE_OPTIONS_COLUMN_TO_INDEX;
+ break;
case WORKLOAD_GROUP_PRIVILEGES:
result = workloadGroupPrivsMetadataResult(schemaTableParams);
columnIndex = WORKLOAD_GROUP_PRIVILEGES_COLUMN_TO_INDEX;
@@ -256,6 +285,10 @@ public class MetadataGenerator {
result = metaCacheStatsMetadataResult(schemaTableParams);
columnIndex = META_CACHE_STATS_COLUMN_TO_INDEX;
break;
+ case PARTITIONS:
+ result = partitionsMetadataResult(schemaTableParams);
+ columnIndex = PARTITIONS_COLUMN_TO_INDEX;
+ break;
default:
return errorResult("invalid schema table name.");
}
@@ -1046,6 +1079,123 @@ public class MetadataGenerator {
return result;
}
+    private static void tableOptionsForInternalCatalog(UserIdentity currentUserIdentity,
+            CatalogIf catalog, DatabaseIf database, List<TableIf> tables, List<TRow> dataBatch) {
+ for (TableIf table : tables) {
+ if (!(table instanceof OlapTable)) {
+ continue;
+ }
+            if (!Env.getCurrentEnv().getAccessManager().checkTblPriv(currentUserIdentity, catalog.getName(),
+                    database.getFullName(), table.getName(), PrivPredicate.SHOW)) {
+                continue;
+            }
+ OlapTable olapTable = (OlapTable) table;
+ TRow trow = new TRow();
+            trow.addToColumnValue(new TCell().setStringVal(catalog.getName())); // TABLE_CATALOG
+            trow.addToColumnValue(new TCell().setStringVal(database.getFullName())); // TABLE_SCHEMA
+            trow.addToColumnValue(new TCell().setStringVal(table.getName())); // TABLE_NAME
+            trow.addToColumnValue(
+                    new TCell().setStringVal(olapTable.getKeysType().toMetadata())); // TABLE_MODEL
+            trow.addToColumnValue(
+                    new TCell().setStringVal(olapTable.getKeyColAsString())); // TABLE_MODEL_KEY (key columns)
+
+            DistributionInfo distributionInfo = olapTable.getDefaultDistributionInfo();
+            if (distributionInfo.getType() == DistributionInfoType.HASH) {
+                HashDistributionInfo hashDistributionInfo = (HashDistributionInfo) distributionInfo;
+                List<Column> distributionColumns = hashDistributionInfo.getDistributionColumns();
+ StringBuilder distributeKey = new StringBuilder();
+ for (Column c : distributionColumns) {
+ if (distributeKey.length() != 0) {
+ distributeKey.append(",");
+ }
+ distributeKey.append(c.getName());
+ }
+ if (distributeKey.length() == 0) {
+ trow.addToColumnValue(new TCell().setStringVal(""));
+ } else {
+                    trow.addToColumnValue(new TCell().setStringVal(distributeKey.toString()));
+                }
+                trow.addToColumnValue(new TCell().setStringVal("HASH")); // DISTRIBUTE_TYPE
+            } else {
+                trow.addToColumnValue(new TCell().setStringVal("RANDOM")); // DISTRIBUTE_KEY
+                trow.addToColumnValue(new TCell().setStringVal("RANDOM")); // DISTRIBUTE_TYPE
+ }
+            trow.addToColumnValue(new TCell().setIntVal(distributionInfo.getBucketNum())); // BUCKETS_NUM
+            trow.addToColumnValue(new TCell().setIntVal(olapTable.getPartitionNum())); // PARTITION_NUM
+ dataBatch.add(trow);
+ }
+ }
+
+    private static void tableOptionsForExternalCatalog(UserIdentity currentUserIdentity,
+            CatalogIf catalog, DatabaseIf database, List<TableIf> tables, List<TRow> dataBatch) {
+ for (TableIf table : tables) {
+            if (!Env.getCurrentEnv().getAccessManager().checkTblPriv(currentUserIdentity, catalog.getName(),
+                    database.getFullName(), table.getName(), PrivPredicate.SHOW)) {
+                continue;
+            }
+            TRow trow = new TRow();
+            trow.addToColumnValue(new TCell().setStringVal(catalog.getName())); // TABLE_CATALOG
+            trow.addToColumnValue(new TCell().setStringVal(database.getFullName())); // TABLE_SCHEMA
+            trow.addToColumnValue(new TCell().setStringVal(table.getName())); // TABLE_NAME
+            trow.addToColumnValue(new TCell().setStringVal("")); // TABLE_MODEL
+            trow.addToColumnValue(new TCell().setStringVal("")); // TABLE_MODEL_KEY (key columns)
+            trow.addToColumnValue(new TCell().setStringVal("")); // DISTRIBUTE_KEY
+            trow.addToColumnValue(new TCell().setStringVal("")); // DISTRIBUTE_TYPE
+ trow.addToColumnValue(new TCell().setIntVal(0)); // BUCKETS_NUM
+ trow.addToColumnValue(new TCell().setIntVal(0)); // PARTITION_NUM
+ dataBatch.add(trow);
+ }
+ }
+
+    private static TFetchSchemaTableDataResult tableOptionsMetadataResult(TSchemaTableRequestParams params) {
+ if (!params.isSetCurrentUserIdent()) {
+ return errorResult("current user ident is not set.");
+ }
+ if (!params.isSetDbId()) {
+ return errorResult("current db id is not set.");
+ }
+
+ if (!params.isSetCatalog()) {
+ return errorResult("current catalog is not set.");
+ }
+
+ TUserIdentity tcurrentUserIdentity = params.getCurrentUserIdent();
+        UserIdentity currentUserIdentity = UserIdentity.fromThrift(tcurrentUserIdentity);
+ TFetchSchemaTableDataResult result = new TFetchSchemaTableDataResult();
+ List<TRow> dataBatch = Lists.newArrayList();
+ Long dbId = params.getDbId();
+ String clg = params.getCatalog();
+        CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalog(clg);
+        if (catalog == null) {
+            // catalog is null, return an empty result to BE
+ result.setDataBatch(dataBatch);
+ result.setStatus(new TStatus(TStatusCode.OK));
+ return result;
+ }
+ DatabaseIf database = catalog.getDbNullable(dbId);
+ if (database == null) {
+            // BE gets the database id list from FE and then invokes this interface
+            // per database. There is a chance that the database is dropped in between,
+            // so handle the database-not-exist case and return OK so that BE continues
+            // the loop with the next database.
+ result.setDataBatch(dataBatch);
+ result.setStatus(new TStatus(TStatusCode.OK));
+ return result;
+ }
+ List<TableIf> tables = database.getTables();
+ if (catalog instanceof InternalCatalog) {
+            tableOptionsForInternalCatalog(currentUserIdentity, catalog, database, tables, dataBatch);
+        } else if (catalog instanceof ExternalCatalog) {
+            tableOptionsForExternalCatalog(currentUserIdentity, catalog, database, tables, dataBatch);
+ }
+ result.setDataBatch(dataBatch);
+ result.setStatus(new TStatus(TStatusCode.OK));
+ return result;
+ }
+
     private static void tablePropertiesForInternalCatalog(UserIdentity currentUserIdentity,
             CatalogIf catalog, DatabaseIf database, List<TableIf> tables, List<TRow> dataBatch) {
for (TableIf table : tables) {
@@ -1119,8 +1269,14 @@ public class MetadataGenerator {
TFetchSchemaTableDataResult result = new TFetchSchemaTableDataResult();
Long dbId = params.getDbId();
String clg = params.getCatalog();
-        CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalog(clg);
         List<TRow> dataBatch = Lists.newArrayList();
+        CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalog(clg);
+        if (catalog == null) {
+            // catalog is null, return an empty result to BE
+ result.setDataBatch(dataBatch);
+ result.setStatus(new TStatus(TStatusCode.OK));
+ return result;
+ }
DatabaseIf database = catalog.getDbNullable(dbId);
if (database == null) {
         // BE gets the database id list from FE and then invokes this interface
@@ -1164,7 +1320,124 @@ public class MetadataGenerator {
             fillBatch(dataBatch, icebergCache.getCacheStats(), catalogIf.getName());
}
}
+ result.setDataBatch(dataBatch);
+ result.setStatus(new TStatus(TStatusCode.OK));
+ return result;
+ }
+
+    private static void partitionsForInternalCatalog(UserIdentity currentUserIdentity,
+            CatalogIf catalog, DatabaseIf database, List<TableIf> tables, List<TRow> dataBatch) {
+ for (TableIf table : tables) {
+ if (!(table instanceof OlapTable)) {
+ continue;
+ }
+            if (!Env.getCurrentEnv().getAccessManager().checkTblPriv(currentUserIdentity, catalog.getName(),
+                    database.getFullName(), table.getName(), PrivPredicate.SHOW)) {
+                continue;
+            }
+
+ OlapTable olapTable = (OlapTable) table;
+ Collection<Partition> allPartitions = olapTable.getAllPartitions();
+
+ for (Partition partition : allPartitions) {
+ TRow trow = new TRow();
+                trow.addToColumnValue(new TCell().setStringVal(catalog.getName())); // TABLE_CATALOG
+                trow.addToColumnValue(new TCell().setStringVal(database.getFullName())); // TABLE_SCHEMA
+                trow.addToColumnValue(new TCell().setStringVal(table.getName())); // TABLE_NAME
+                trow.addToColumnValue(new TCell().setStringVal(partition.getName())); // PARTITION_NAME
+                trow.addToColumnValue(new TCell().setStringVal("NULL")); // SUBPARTITION_NAME (always null)
+
+                trow.addToColumnValue(new TCell().setIntVal(0)); // PARTITION_ORDINAL_POSITION (not available)
+                trow.addToColumnValue(new TCell().setIntVal(0)); // SUBPARTITION_ORDINAL_POSITION (not available)
+                trow.addToColumnValue(new TCell().setStringVal(
+                        olapTable.getPartitionInfo().getType().toString())); // PARTITION_METHOD
+                trow.addToColumnValue(new TCell().setStringVal("NULL")); // SUBPARTITION_METHOD (always null)
+                PartitionItem item = olapTable.getPartitionInfo().getItem(partition.getId());
+                if ((olapTable.getPartitionInfo().getType() == PartitionType.UNPARTITIONED) || (item == null)) {
+                    trow.addToColumnValue(new TCell().setStringVal("NULL")); // PARTITION_EXPRESSION (null if unpartitioned)
+                    trow.addToColumnValue(new TCell().setStringVal("NULL")); // SUBPARTITION_EXPRESSION (always null)
+                    trow.addToColumnValue(new TCell().setStringVal("NULL")); // PARTITION_DESCRIPTION (null if unpartitioned)
+                } else {
+                    trow.addToColumnValue(new TCell().setStringVal(
+                            olapTable.getPartitionInfo()
+                                    .getDisplayPartitionColumns().toString())); // PARTITION_EXPRESSION
+                    trow.addToColumnValue(new TCell().setStringVal("NULL")); // SUBPARTITION_EXPRESSION (always null)
+                    trow.addToColumnValue(new TCell().setStringVal(
+                            item.getItemsSql())); // PARTITION_DESCRIPTION
+                }
+                trow.addToColumnValue(new TCell().setLongVal(partition.getRowCount())); // TABLE_ROWS (partition rows)
+                trow.addToColumnValue(new TCell().setLongVal(partition.getAvgRowLength())); // AVG_ROW_LENGTH
+                trow.addToColumnValue(new TCell().setLongVal(partition.getDataLength())); // DATA_LENGTH
+                trow.addToColumnValue(new TCell().setIntVal(0)); // MAX_DATA_LENGTH (not available)
+                trow.addToColumnValue(new TCell().setIntVal(0)); // INDEX_LENGTH (not available)
+                trow.addToColumnValue(new TCell().setIntVal(0)); // DATA_FREE (not available)
+                trow.addToColumnValue(new TCell().setStringVal("NULL")); // CREATE_TIME (not available)
+                trow.addToColumnValue(new TCell().setStringVal(
+                        TimeUtils.longToTimeString(partition.getVisibleVersionTime()))); // UPDATE_TIME
+                trow.addToColumnValue(new TCell().setStringVal("NULL")); // CHECK_TIME (not available)
+                trow.addToColumnValue(new TCell().setIntVal(0)); // CHECKSUM (not available)
+                trow.addToColumnValue(new TCell().setStringVal("")); // PARTITION_COMMENT (not available)
+                trow.addToColumnValue(new TCell().setStringVal("")); // NODEGROUP (not available)
+                trow.addToColumnValue(new TCell().setStringVal("")); // TABLESPACE_NAME (not available)
+ dataBatch.add(trow);
+ }
+ } // for table
+ }
+
+    private static void partitionsForExternalCatalog(UserIdentity currentUserIdentity,
+            CatalogIf catalog, DatabaseIf database, List<TableIf> tables, List<TRow> dataBatch) {
+        for (TableIf table : tables) {
+            if (!Env.getCurrentEnv().getAccessManager().checkTblPriv(currentUserIdentity, catalog.getName(),
+                    database.getFullName(), table.getName(), PrivPredicate.SHOW)) {
+                continue;
+            }
+            // TODO
+        } // for table
+    }
+
+    private static TFetchSchemaTableDataResult partitionsMetadataResult(TSchemaTableRequestParams params) {
+        if (!params.isSetCurrentUserIdent()) {
+            return errorResult("current user ident is not set.");
+        }
+        if (!params.isSetDbId()) {
+            return errorResult("current db id is not set.");
+        }
+
+        if (!params.isSetCatalog()) {
+            return errorResult("current catalog is not set.");
+        }
+
+        TUserIdentity tcurrentUserIdentity = params.getCurrentUserIdent();
+        UserIdentity currentUserIdentity = UserIdentity.fromThrift(tcurrentUserIdentity);
+        TFetchSchemaTableDataResult result = new TFetchSchemaTableDataResult();
+        Long dbId = params.getDbId();
+        String clg = params.getCatalog();
+        List<TRow> dataBatch = Lists.newArrayList();
+        CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalog(clg);
+        if (catalog == null) {
+            // catalog is null; return an empty result to BE
+            result.setDataBatch(dataBatch);
+            result.setStatus(new TStatus(TStatusCode.OK));
+            return result;
+        }
+        DatabaseIf database = catalog.getDbNullable(dbId);
+        if (database == null) {
+            // BE gets the database id list from FE and then invokes this interface
+            // per database. The database may be dropped in between, so handle the
+            // database-not-exist case and return OK so that BE continues the loop
+            // with the next database.
+            result.setDataBatch(dataBatch);
+            result.setStatus(new TStatus(TStatusCode.OK));
+            return result;
+        }
+        List<TableIf> tables = database.getTables();
+        if (catalog instanceof InternalCatalog) {
+            // only OLAP tables
+            partitionsForInternalCatalog(currentUserIdentity, catalog, database, tables, dataBatch);
+        } else if (catalog instanceof ExternalCatalog) {
+            partitionsForExternalCatalog(currentUserIdentity, catalog, database, tables, dataBatch);
+        }
result.setDataBatch(dataBatch);
result.setStatus(new TStatus(TStatusCode.OK));
return result;
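For readers skimming the patch, the per-partition row assembly above follows one pattern: one cell per information_schema.partitions column, in schema order, with literal "NULL" or 0 placeholders for fields Doris does not track per partition. A minimal, self-contained sketch of that pattern — `Cell` and `buildRow` here are simplified, hypothetical stand-ins for the Thrift-generated TCell/TRow and are not part of the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionRowSketch {
    // Simplified stand-in for the Thrift-generated TCell: one typed value per cell.
    static class Cell {
        final String stringVal;
        final Long longVal;
        Cell(String s) { this.stringVal = s; this.longVal = null; }
        Cell(long v) { this.stringVal = null; this.longVal = v; }
        String render() { return stringVal != null ? stringVal : String.valueOf(longVal); }
    }

    // Mirrors the column discipline of the patch: identity columns first, then
    // "NULL" placeholders for subpartition fields Doris never fills.
    static List<Cell> buildRow(String catalog, String db, String table,
                               String partition, String method, long rowCount) {
        List<Cell> row = new ArrayList<>();
        row.add(new Cell(catalog));    // TABLE_CATALOG
        row.add(new Cell(db));         // TABLE_SCHEMA
        row.add(new Cell(table));      // TABLE_NAME
        row.add(new Cell(partition));  // PARTITION_NAME
        row.add(new Cell("NULL"));     // SUBPARTITION_NAME (always null)
        row.add(new Cell(method));     // PARTITION_METHOD
        row.add(new Cell(rowCount));   // TABLE_ROWS
        return row;
    }

    public static void main(String[] args) {
        // Matches the shape of the regression output rows, e.g. test_range_table p0 with 9 rows.
        List<Cell> row = buildRow("internal", "test_db", "t", "p0", "RANGE", 9L);
        System.out.println(row.get(0).render() + " " + row.get(3).render() + " " + row.get(6).render());
    }
}
```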
diff --git a/gensrc/thrift/FrontendService.thrift b/gensrc/thrift/FrontendService.thrift
index cda2392be07..61dce73400b 100644
--- a/gensrc/thrift/FrontendService.thrift
+++ b/gensrc/thrift/FrontendService.thrift
@@ -958,6 +958,7 @@ enum TSchemaTableName {
WORKLOAD_GROUP_PRIVILEGES = 7,
TABLE_PROPERTIES = 8,
CATALOG_META_CACHE_STATS = 9,
+ PARTITIONS = 10,
}
struct TMetadataTableRequestParams {
diff --git a/regression-test/data/query_p0/system/test_partitions_schema.out b/regression-test/data/query_p0/system/test_partitions_schema.out
new file mode 100644
index 00000000000..ffe2a9cb667
--- /dev/null
+++ b/regression-test/data/query_p0/system/test_partitions_schema.out
@@ -0,0 +1,48 @@
+-- This file is automatically generated. You should know what you did if you want to edit this
+-- !select_check_0 --
+test_range_table p0 9
+test_range_table p1 2
+test_range_table p100 4
+test_range_table p2 1
+test_range_table p3 1
+test_range_table p4 3
+test_range_table p5 0
+
+-- !select_check_1 --
+internal test_partitions_schema_db duplicate_table duplicate_table
NULL 0 0 UNPARTITIONED NULL NULL NULL NULL 0
0 0 0 0 0 0
+internal test_partitions_schema_db listtable p1_city NULL
0 0 LIST NULL user_id, city NULL (("1", "Beijing"),("1",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db listtable p2_city NULL
0 0 LIST NULL user_id, city NULL (("2", "Beijing"),("2",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db listtable p3_city NULL
0 0 LIST NULL user_id, city NULL (("3", "Beijing"),("3",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db randomtable randomtable
NULL 0 0 UNPARTITIONED NULL NULL NULL NULL 0
0 0 0 0 0 0
+internal test_partitions_schema_db test_range_table p0
NULL 0 0 RANGE NULL col_1 NULL [('-2147483648'),
('4')) 9 636 5728 0 0 0 0
+internal test_partitions_schema_db test_range_table p1
NULL 0 0 RANGE NULL col_1 NULL [('4'), ('6')) 2
959 1919 0 0 0 0
+internal test_partitions_schema_db test_range_table p100
NULL 0 0 RANGE NULL col_1 NULL [('83647'),
('2147483647')) 4 735 2941 0 0 0 0
+internal test_partitions_schema_db test_range_table p2
NULL 0 0 RANGE NULL col_1 NULL [('6'), ('7')) 1
975 975 0 0 0 0
+internal test_partitions_schema_db test_range_table p3
NULL 0 0 RANGE NULL col_1 NULL [('7'), ('8')) 1
959 959 0 0 0 0
+internal test_partitions_schema_db test_range_table p4
NULL 0 0 RANGE NULL col_1 NULL [('8'), ('10')) 3
948 2846 0 0 0 0
+internal test_partitions_schema_db test_range_table p5
NULL 0 0 RANGE NULL col_1 NULL [('10'), ('83647'))
0 0 0 0 0 0 0
+internal test_partitions_schema_db test_row_column_page_size1
test_row_column_page_size1 NULL 0 0 UNPARTITIONED NULL
NULL NULL NULL 0 0 0 0 0 0 0
+internal test_partitions_schema_db test_row_column_page_size2
test_row_column_page_size2 NULL 0 0 UNPARTITIONED NULL
NULL NULL NULL 0 0 0 0 0 0 0
+
+-- !select_check_2 --
+internal test_partitions_schema_db duplicate_table duplicate_table
NULL 0 0 UNPARTITIONED NULL NULL NULL NULL 0
0 0 0 0 0 0
+internal test_partitions_schema_db listtable p1_city NULL
0 0 LIST NULL user_id, city NULL (("1", "Beijing"),("1",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db listtable p2_city NULL
0 0 LIST NULL user_id, city NULL (("2", "Beijing"),("2",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db listtable p3_city NULL
0 0 LIST NULL user_id, city NULL (("3", "Beijing"),("3",
"Shanghai")) 0 0 0 0 0 0 0
+internal test_partitions_schema_db randomtable randomtable
NULL 0 0 UNPARTITIONED NULL NULL NULL NULL 0
0 0 0 0 0 0
+internal test_partitions_schema_db test_range_table p0
NULL 0 0 RANGE NULL col_1 NULL [('-2147483648'),
('4')) 9 636 5728 0 0 0 0
+internal test_partitions_schema_db test_range_table p1
NULL 0 0 RANGE NULL col_1 NULL [('4'), ('6')) 2
959 1919 0 0 0 0
+internal test_partitions_schema_db test_range_table p100
NULL 0 0 RANGE NULL col_1 NULL [('83647'),
('2147483647')) 4 735 2941 0 0 0 0
+internal test_partitions_schema_db test_range_table p2
NULL 0 0 RANGE NULL col_1 NULL [('6'), ('7')) 1
975 975 0 0 0 0
+internal test_partitions_schema_db test_range_table p3
NULL 0 0 RANGE NULL col_1 NULL [('7'), ('8')) 1
959 959 0 0 0 0
+internal test_partitions_schema_db test_range_table p4
NULL 0 0 RANGE NULL col_1 NULL [('8'), ('10')) 3
948 2846 0 0 0 0
+internal test_partitions_schema_db test_range_table p5
NULL 0 0 RANGE NULL col_1 NULL [('10'), ('83647'))
0 0 0 0 0 0 0
+internal test_partitions_schema_db test_row_column_page_size1
test_row_column_page_size1 NULL 0 0 UNPARTITIONED NULL
NULL NULL NULL 0 0 0 0 0 0 0
+
+-- !select_check_3 --
+
+-- !select_check_4 --
+internal test_partitions_schema_db duplicate_table duplicate_table
NULL 0 0 UNPARTITIONED NULL NULL NULL NULL 0
0 0 0 0 0 0
+
+-- !select_check_5 --
+
diff --git a/regression-test/data/query_p0/system/test_query_sys_tables.out b/regression-test/data/query_p0/system/test_query_sys_tables.out
index d3a4ef5a57c..215a3d5f1c6 100644
--- a/regression-test/data/query_p0/system/test_query_sys_tables.out
+++ b/regression-test/data/query_p0/system/test_query_sys_tables.out
@@ -152,8 +152,6 @@ PARTITION_COMMENT text Yes false \N
NODEGROUP varchar(256) Yes false \N
TABLESPACE_NAME varchar(268) Yes false \N
--- !select_partitions --
-
-- !desc_rowsets --
BACKEND_ID bigint Yes false \N
ROWSET_ID varchar(64) Yes false \N
diff --git a/regression-test/data/query_p0/system/test_table_options.out b/regression-test/data/query_p0/system/test_table_options.out
new file mode 100644
index 00000000000..0e94265c23f
--- /dev/null
+++ b/regression-test/data/query_p0/system/test_table_options.out
@@ -0,0 +1,27 @@
+-- This file is automatically generated. You should know what you did if you want to edit this
+-- !select_check_1 --
+internal test_table_options_db aggregate_table AGG
user_id,date,city,age,sex user_id HASH 1 1
+internal test_table_options_db duplicate_table DUP
timestamp,type,error_code type HASH 1 1
+internal test_table_options_db listtable AGG
user_id,date,timestamp,city,age,sex user_id HASH 16 3
+internal test_table_options_db randomtable DUP
user_id,date,timestamp RANDOM RANDOM 16 1
+internal test_table_options_db rangetable AGG
user_id,date,timestamp,city,age,sex user_id HASH 8 3
+internal test_table_options_db test_row_column_page_size1 DUP
aaa aaa HASH 1 1
+internal test_table_options_db test_row_column_page_size2 DUP
aaa aaa HASH 1 1
+internal test_table_options_db unique_table UNI
user_id,username user_id HASH 1 1
+
+-- !select_check_2 --
+internal test_table_options_db aggregate_table AGG
user_id,date,city,age,sex user_id HASH 1 1
+internal test_table_options_db duplicate_table DUP
timestamp,type,error_code type HASH 1 1
+internal test_table_options_db listtable AGG
user_id,date,timestamp,city,age,sex user_id HASH 16 3
+internal test_table_options_db randomtable DUP
user_id,date,timestamp RANDOM RANDOM 16 1
+internal test_table_options_db rangetable AGG
user_id,date,timestamp,city,age,sex user_id HASH 8 3
+internal test_table_options_db test_row_column_page_size1 DUP
aaa aaa HASH 1 1
+internal test_table_options_db unique_table UNI
user_id,username user_id HASH 1 1
+
+-- !select_check_3 --
+
+-- !select_check_4 --
+internal test_table_options_db duplicate_table DUP
timestamp,type,error_code type HASH 1 1
+
+-- !select_check_5 --
+
diff --git a/regression-test/suites/query_p0/system/test_partitions_schema.groovy b/regression-test/suites/query_p0/system/test_partitions_schema.groovy
new file mode 100644
index 00000000000..ac73d3315d0
--- /dev/null
+++ b/regression-test/suites/query_p0/system/test_partitions_schema.groovy
@@ -0,0 +1,195 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+import org.awaitility.Awaitility
+import static java.util.concurrent.TimeUnit.SECONDS
+
+suite("test_partitions_schema") {
+ def dbName = "test_partitions_schema_db"
+    def listOfColum = "TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,PARTITION_NAME,SUBPARTITION_NAME,PARTITION_ORDINAL_POSITION,SUBPARTITION_ORDINAL_POSITION,PARTITION_METHOD,SUBPARTITION_METHOD,PARTITION_EXPRESSION,SUBPARTITION_EXPRESSION,PARTITION_DESCRIPTION,TABLE_ROWS,AVG_ROW_LENGTH,DATA_LENGTH,MAX_DATA_LENGTH,INDEX_LENGTH,DATA_FREE,CHECKSUM,PARTITION_COMMENT,NODEGROUP,TABLESPACE_NAME";
+ sql "drop database if exists ${dbName}"
+ sql "CREATE DATABASE IF NOT EXISTS ${dbName}"
+ sql "use ${dbName}"
+
+ def checkRowCount = { expectedRowCount ->
+ Awaitility.await().atMost(180, SECONDS).pollInterval(1, SECONDS).until(
+ {
+                def result = sql "select table_rows from information_schema.partitions where table_name='test_range_table' and partition_name='p0'"
+ logger.info("table: table_name, rowCount: ${result}")
+ return result[0][0] == expectedRowCount
+ }
+ )
+ }
+
+ sql """
+ create table test_range_table (
+ col_1 int,
+ col_2 int,
+ col_3 int,
+ col_4 int,
+ pk int
+ ) engine=olap
+ DUPLICATE KEY(col_1, col_2)
+ PARTITION BY RANGE(col_1) (
+ PARTITION p0 VALUES LESS THAN ('4'),
+ PARTITION p1 VALUES LESS THAN ('6'),
+ PARTITION p2 VALUES LESS THAN ('7'),
+ PARTITION p3 VALUES LESS THAN ('8'),
+ PARTITION p4 VALUES LESS THAN ('10'),
+ PARTITION p5 VALUES LESS THAN ('83647'),
+ PARTITION p100 VALUES LESS THAN ('2147483647')
+ )
+
+ distributed by hash(pk) buckets 10
+ properties("replication_num" = "1");
+ """
+ sql """
+        insert into test_range_table(pk,col_1,col_2,col_3,col_4) values (0,6,-179064,5213411,5),(1,3,5,2,6),(2,4226261,7,null,3),(3,9,null,4,4),(4,-1003770,2,1,1),(5,8,7,null,8176864),(6,3388266,5,8,8),(7,5,1,2,null),(8,9,2064412,0,null),(9,1489553,8,-446412,6),(10,1,3,0,1),(11,null,3,4621304,null),(12,null,-3058026,-262645,9),(13,null,null,9,3),(14,null,null,5037128,7),(15,299896,-1444893,8,1480339),(16,7,7,0,1470826),(17,-7378014,5,null,5),(18,0,3,6,5),(19,5,3,-4403612,-3103249);
+ """
+ sql """
+ sync
+ """
+ checkRowCount(9);
+
+    qt_select_check_0 """select table_name,partition_name,table_rows from information_schema.partitions where table_schema=\"${dbName}\" order by $listOfColum"""
+ sql """
+ CREATE TABLE IF NOT EXISTS listtable
+ (
+        `user_id` LARGEINT NOT NULL COMMENT "User id",
+        `date` DATE NOT NULL COMMENT "Data fill in date time",
+        `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+        `city` VARCHAR(20) COMMENT "The city where the user is located",
+        `age` SMALLINT COMMENT "User Age",
+        `sex` TINYINT COMMENT "User gender",
+        `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "User last visit time",
+        `cost` BIGINT SUM DEFAULT "0" COMMENT "Total user consumption",
+        `max_dwell_time` INT MAX DEFAULT "0" COMMENT "User maximum dwell time",
+        `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "User minimum dwell time"
+ )
+ ENGINE=olap
+ AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
+ PARTITION BY LIST(user_id, city)
+ (
+ PARTITION p1_city VALUES IN (("1", "Beijing"), ("1", "Shanghai")),
+ PARTITION p2_city VALUES IN (("2", "Beijing"), ("2", "Shanghai")),
+ PARTITION p3_city VALUES IN (("3", "Beijing"), ("3", "Shanghai"))
+ )
+ DISTRIBUTED BY HASH(`user_id`) BUCKETS 16
+ PROPERTIES
+ (
+ "replication_num" = "1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS randomtable
+ (
+ `user_id` LARGEINT NOT NULL COMMENT "User id",
+ `date` DATE NOT NULL COMMENT "Data fill in date time",
+        `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+ `city` VARCHAR(20) COMMENT "The city where the user is located",
+ `age` SMALLINT COMMENT "User Age",
+ `sex` TINYINT COMMENT "User gender"
+ )
+ ENGINE=olap
+ DISTRIBUTED BY RANDOM BUCKETS 16
+ PROPERTIES
+ (
+ "replication_num" = "1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS duplicate_table
+ (
+ `timestamp` DATETIME NOT NULL COMMENT "Log time",
+ `type` INT NOT NULL COMMENT "Log type",
+ `error_code` INT COMMENT "Error code",
+ `error_msg` VARCHAR(1024) COMMENT "Error detail message",
+ `op_id` BIGINT COMMENT "Operator ID",
+ `op_time` DATETIME COMMENT "Operation time"
+ )
+ DISTRIBUTED BY HASH(`type`) BUCKETS 1
+ PROPERTIES (
+ "replication_allocation" = "tag.location.default: 1"
+ );
+ """
+
+ // test row column page size
+ sql """
+ CREATE TABLE IF NOT EXISTS test_row_column_page_size1 (
+ `aaa` varchar(170) NOT NULL COMMENT "",
+ `bbb` varchar(20) NOT NULL COMMENT "",
+ `ccc` INT NULL COMMENT "",
+ `ddd` SMALLINT NULL COMMENT ""
+ )
+ DISTRIBUTED BY HASH(`aaa`) BUCKETS 1
+ PROPERTIES (
+ "replication_num" = "1",
+ "store_row_column" = "true"
+ );
+ """
+
+ sql """
+ CREATE TABLE IF NOT EXISTS test_row_column_page_size2 (
+ `aaa` varchar(170) NOT NULL COMMENT "",
+ `bbb` varchar(20) NOT NULL COMMENT "",
+ `ccc` INT NULL COMMENT "",
+ `ddd` SMALLINT NULL COMMENT ""
+ )
+ DISTRIBUTED BY HASH(`aaa`) BUCKETS 1
+ PROPERTIES (
+ "replication_num" = "1",
+ "store_row_column" = "true",
+ "row_store_page_size" = "8190"
+ );
+ """
+ qt_select_check_1 """select $listOfColum from
information_schema.partitions where table_schema=\"${dbName}\" order by
$listOfColum"""
+ sql """
+ drop table test_row_column_page_size2;
+ """
+ qt_select_check_2 """select $listOfColum from
information_schema.partitions where table_schema=\"${dbName}\" order by
$listOfColum"""
+
+ def user = "partitions_user"
+ sql "DROP USER IF EXISTS ${user}"
+ sql "CREATE USER ${user} IDENTIFIED BY '123abc!@#'"
+ //cloud-mode
+ if (isCloudMode()) {
+ def clusters = sql " SHOW CLUSTERS; "
+ assertTrue(!clusters.isEmpty())
+ def validCluster = clusters[0][0]
+ sql """GRANT USAGE_PRIV ON CLUSTER ${validCluster} TO ${user}""";
+ }
+
+ sql "GRANT SELECT_PRIV ON information_schema.partitions TO ${user}"
+
+ def tokens = context.config.jdbcUrl.split('/')
+ def url=tokens[0] + "//" + tokens[2] + "/" + "information_schema" + "?"
+
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_3 """select $listOfColum from
information_schema.partitions where table_schema=\"${dbName}\" order by
$listOfColum"""
+ }
+
+ sql "GRANT SELECT_PRIV ON ${dbName}.duplicate_table TO ${user}"
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_4 """select $listOfColum from
information_schema.partitions where table_schema=\"${dbName}\" order by
$listOfColum"""
+ }
+
+ sql "REVOKE SELECT_PRIV ON ${dbName}.duplicate_table FROM ${user}"
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_5 """select $listOfColum from
information_schema.partitions where table_schema=\"${dbName}\" order by
$listOfColum"""
+ }
+
+}
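The `checkRowCount` helper in the suite above exists because partition row counts surface asynchronously after an insert, so the test polls until the expected count appears. The real suite uses Awaitility with `atMost(180, SECONDS)` and a 1 s poll interval; the sketch below shows the same poll-until-expected pattern in plain Java, where `awaitRowCount` and the simulated supplier are illustrative names, not part of the patch:

```java
import java.util.function.LongSupplier;

public class PollUntil {
    // Poll a row-count source until it reports the expected value or the
    // deadline passes; returns whether the expectation was met in time.
    static boolean awaitRowCount(LongSupplier rowCount, long expected,
                                 long timeoutMillis, long pollMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (rowCount.getAsLong() == expected) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated source that converges to the expected count (9, as in the
        // suite's test_range_table partition p0) after a few polls.
        long[] counts = {0, 3, 9};
        int[] idx = {0};
        LongSupplier supplier = () -> counts[Math.min(idx[0]++, counts.length - 1)];
        System.out.println(awaitRowCount(supplier, 9, 2000, 10));
    }
}
```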
diff --git a/regression-test/suites/query_p0/system/test_query_sys_tables.groovy b/regression-test/suites/query_p0/system/test_query_sys_tables.groovy
index 7d943894168..dd069816fd3 100644
--- a/regression-test/suites/query_p0/system/test_query_sys_tables.groovy
+++ b/regression-test/suites/query_p0/system/test_query_sys_tables.groovy
@@ -135,9 +135,9 @@ suite("test_query_sys_tables", "query,p0") {
// test partitions
- // have no impl
+    // implemented now. Partition rows contain time and date values, so data
+    // validation is skipped here; it is covered in another regression test.
qt_desc_partitions """ desc `information_schema`.`partitions` """
-    order_qt_select_partitions """ select * from `information_schema`.`partitions`; """
// test rowsets
@@ -280,4 +280,4 @@ suite("test_query_sys_tables", "query,p0") {
qt_sql "select * from triggers"
qt_sql "select * from parameters"
qt_sql "select * from profiling"
-}
\ No newline at end of file
+}
diff --git a/regression-test/suites/query_p0/system/test_table_options.groovy b/regression-test/suites/query_p0/system/test_table_options.groovy
new file mode 100644
index 00000000000..9d2e99ab974
--- /dev/null
+++ b/regression-test/suites/query_p0/system/test_table_options.groovy
@@ -0,0 +1,217 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership. The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License. You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+suite("test_table_options") {
+ def dbName = "test_table_options_db"
+ sql "drop database if exists ${dbName}"
+ sql "CREATE DATABASE IF NOT EXISTS ${dbName}"
+ sql "use ${dbName}"
+
+ sql """
+ CREATE TABLE IF NOT EXISTS rangetable
+ (
+        `user_id` LARGEINT NOT NULL COMMENT "User id",
+        `date` DATE NOT NULL COMMENT "Data fill in date time",
+        `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+        `city` VARCHAR(20) COMMENT "The city where the user is located",
+        `age` SMALLINT COMMENT "User age",
+        `sex` TINYINT COMMENT "User gender",
+        `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "User last visit time",
+        `cost` BIGINT SUM DEFAULT "0" COMMENT "Total user consumption",
+        `max_dwell_time` INT MAX DEFAULT "0" COMMENT "User maximum dwell time",
+        `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "User minimum dwell time"
+ )
+ ENGINE=olap
+ AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
+ PARTITION BY RANGE(`date`)
+ (
+ PARTITION `p201701` VALUES LESS THAN ("2017-02-01"),
+ PARTITION `p201702` VALUES LESS THAN ("2017-03-01"),
+ PARTITION `p201703` VALUES LESS THAN ("2017-04-01")
+ )
+ DISTRIBUTED BY HASH(`user_id`) BUCKETS 8
+ PROPERTIES
+ (
+ "replication_num" = "1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS listtable
+ (
+        `user_id` LARGEINT NOT NULL COMMENT "User id",
+        `date` DATE NOT NULL COMMENT "Data fill in date time",
+        `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+        `city` VARCHAR(20) COMMENT "The city where the user is located",
+        `age` SMALLINT COMMENT "User Age",
+        `sex` TINYINT COMMENT "User gender",
+        `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "User last visit time",
+        `cost` BIGINT SUM DEFAULT "0" COMMENT "Total user consumption",
+        `max_dwell_time` INT MAX DEFAULT "0" COMMENT "User maximum dwell time",
+        `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "User minimum dwell time"
+ )
+ ENGINE=olap
+ AGGREGATE KEY(`user_id`, `date`, `timestamp`, `city`, `age`, `sex`)
+ PARTITION BY LIST(`city`)
+ (
+ PARTITION `p_cn` VALUES IN ("Beijing", "Shanghai", "Hong Kong"),
+ PARTITION `p_usa` VALUES IN ("New York", "San Francisco"),
+ PARTITION `p_jp` VALUES IN ("Tokyo")
+ )
+ DISTRIBUTED BY HASH(`user_id`) BUCKETS 16
+ PROPERTIES
+ (
+ "replication_num" = "1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS randomtable
+ (
+ `user_id` LARGEINT NOT NULL COMMENT "User id",
+ `date` DATE NOT NULL COMMENT "Data fill in date time",
+        `timestamp` DATETIME NOT NULL COMMENT "Timestamp of data being poured",
+ `city` VARCHAR(20) COMMENT "The city where the user is located",
+ `age` SMALLINT COMMENT "User Age",
+ `sex` TINYINT COMMENT "User gender"
+ )
+ ENGINE=olap
+ DISTRIBUTED BY RANDOM BUCKETS 16
+ PROPERTIES
+ (
+ "replication_num" = "1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS aggregate_table
+ (
+ `user_id` LARGEINT NOT NULL COMMENT "user id",
+ `date` DATE NOT NULL COMMENT "data import time",
+ `city` VARCHAR(20) COMMENT "city",
+ `age` SMALLINT COMMENT "age",
+ `sex` TINYINT COMMENT "gender",
+        `last_visit_date` DATETIME REPLACE DEFAULT "1970-01-01 00:00:00" COMMENT "last visit date time",
+ `cost` BIGINT SUM DEFAULT "0" COMMENT "user total cost",
+ `max_dwell_time` INT MAX DEFAULT "0" COMMENT "user max dwell time",
+        `min_dwell_time` INT MIN DEFAULT "99999" COMMENT "user min dwell time"
+ )
+ AGGREGATE KEY(`user_id`, `date`, `city`, `age`, `sex`)
+ DISTRIBUTED BY HASH(`user_id`) BUCKETS 1
+ PROPERTIES (
+ "replication_allocation" = "tag.location.default: 1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS unique_table
+ (
+ `user_id` LARGEINT NOT NULL COMMENT "User ID",
+ `username` VARCHAR(50) NOT NULL COMMENT "Username",
+ `city` VARCHAR(20) COMMENT "User location city",
+ `age` SMALLINT COMMENT "User age",
+ `sex` TINYINT COMMENT "User gender",
+ `phone` LARGEINT COMMENT "User phone number",
+ `address` VARCHAR(500) COMMENT "User address",
+ `register_time` DATETIME COMMENT "User registration time"
+ )
+ UNIQUE KEY(`user_id`, `username`)
+ DISTRIBUTED BY HASH(`user_id`) BUCKETS 1
+ PROPERTIES (
+ "replication_allocation" = "tag.location.default: 1"
+ );
+ """
+ sql """
+ CREATE TABLE IF NOT EXISTS duplicate_table
+ (
+ `timestamp` DATETIME NOT NULL COMMENT "Log time",
+ `type` INT NOT NULL COMMENT "Log type",
+ `error_code` INT COMMENT "Error code",
+ `error_msg` VARCHAR(1024) COMMENT "Error detail message",
+ `op_id` BIGINT COMMENT "Operator ID",
+ `op_time` DATETIME COMMENT "Operation time"
+ )
+ DISTRIBUTED BY HASH(`type`) BUCKETS 1
+ PROPERTIES (
+ "replication_allocation" = "tag.location.default: 1"
+ );
+ """
+
+ // test row column page size
+ sql """
+ CREATE TABLE IF NOT EXISTS test_row_column_page_size1 (
+ `aaa` varchar(170) NOT NULL COMMENT "",
+ `bbb` varchar(20) NOT NULL COMMENT "",
+ `ccc` INT NULL COMMENT "",
+ `ddd` SMALLINT NULL COMMENT ""
+ )
+ DISTRIBUTED BY HASH(`aaa`) BUCKETS 1
+ PROPERTIES (
+ "replication_num" = "1",
+ "store_row_column" = "true"
+ );
+ """
+
+ sql """
+ CREATE TABLE IF NOT EXISTS test_row_column_page_size2 (
+ `aaa` varchar(170) NOT NULL COMMENT "",
+ `bbb` varchar(20) NOT NULL COMMENT "",
+ `ccc` INT NULL COMMENT "",
+ `ddd` SMALLINT NULL COMMENT ""
+ )
+ DISTRIBUTED BY HASH(`aaa`) BUCKETS 1
+ PROPERTIES (
+ "replication_num" = "1",
+ "store_row_column" = "true",
+ "row_store_page_size" = "8190"
+ );
+ """
+
+ qt_select_check_1 """select * from information_schema.table_options where
table_schema=\"${dbName}\" order by
TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
"""
+ sql """
+ drop table test_row_column_page_size2;
+ """
+ qt_select_check_2 """select * from information_schema.table_options where
table_schema=\"${dbName}\" order by
TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
"""
+
+ def user = "table_options_user"
+ sql "DROP USER IF EXISTS ${user}"
+ sql "CREATE USER ${user} IDENTIFIED BY '123abc!@#'"
+ //cloud-mode
+ if (isCloudMode()) {
+ def clusters = sql " SHOW CLUSTERS; "
+ assertTrue(!clusters.isEmpty())
+ def validCluster = clusters[0][0]
+ sql """GRANT USAGE_PRIV ON CLUSTER ${validCluster} TO ${user}""";
+ }
+
+ sql "GRANT SELECT_PRIV ON information_schema.table_properties TO ${user}"
+
+ def tokens = context.config.jdbcUrl.split('/')
+ def url=tokens[0] + "//" + tokens[2] + "/" + "information_schema" + "?"
+
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_3 """select * from information_schema.table_options
ORDER BY
TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
"""
+ }
+
+ sql "GRANT SELECT_PRIV ON ${dbName}.duplicate_table TO ${user}"
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_4 """select * from information_schema.table_options
ORDER BY
TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
"""
+ }
+
+ sql "REVOKE SELECT_PRIV ON ${dbName}.duplicate_table FROM ${user}"
+ connect(user=user, password='123abc!@#', url=url) {
+ qt_select_check_5 """select * from information_schema.table_options
ORDER BY
TABLE_CATALOG,TABLE_SCHEMA,TABLE_NAME,TABLE_MODEL,TABLE_MODEL_KEY,DISTRIBUTE_KEY,DISTRIBUTE_TYPE,BUCKETS_NUM,PARTITION_NUM;
"""
+ }
+
+
+}
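The grant/revoke checks in both suites exercise one visibility rule: a partitions or table_options row is emitted only when the user holds the required privilege on the underlying table (the `checkTblPriv` calls in the FE code), so `select_check_4` sees only `duplicate_table` and `select_check_5` sees nothing. A small, hypothetical Java sketch of that filtering, with a plain set standing in for AccessManager:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class PrivFilterSketch {
    // Emit a row only for tables the user can see; "granted" stands in for
    // AccessManager.checkTblPriv(user, catalog, db, table, SHOW).
    static List<String> visibleTables(List<String> tables, Set<String> granted) {
        return tables.stream().filter(granted::contains).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> tables = Arrays.asList("duplicate_table", "listtable", "rangetable");
        // After GRANT SELECT_PRIV ON db.duplicate_table: only that table's rows appear.
        System.out.println(visibleTables(tables, Set.of("duplicate_table")));
        // After REVOKE: the result is empty, matching select_check_5.
        System.out.println(visibleTables(tables, Set.of()));
    }
}
```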