We store the database/LDAP user relationship in the AUTHS metadata table.
This relationship is retrieved during the authentication process.
Later, we cache the credentials by making the SetAuthID request.

     Roberta

-----Original Message-----
From: Peng, Huai-Tao (Alex) [mailto:[email protected]] 
Sent: Thursday, November 9, 2017 10:46 PM
To: [email protected]
Subject: RE: anybody knows the logic for user authentication on mxosrvr side

Hi Roberta,

Thanks for your immediate response. I noticed that while we verify the 
user/password, we not only do the LDAP authentication, we also get the DB user 
from a cache keyed by the LDAP user. I found the function cacheUserInfo, which 
creates an array, but I could not find where it is invoked; I only found that 
we fetch from it via fetchFromCacheByUsername. Please check.

My final goal is to investigate how we get the DB user from the LDAP user.

Thanks and regards
Alex

-----Original Message-----
From: Roberta Marton [mailto:[email protected]] 
Sent: Friday, November 10, 2017 12:25 AM
To: [email protected]
Subject: RE: anybody knows the logic for user authentication on mxosrvr side

Alex, 

There is a project called dbsecurity off of the core directory. This contains 
the verify code and other APIs used to authenticate the user.

incubator-trafodion/core/dbsecurity/auth/src

Interesting files:

dbUserAuth -> contains the verify method and other APIs used in srvrothers.cpp 
ldapconfigfile -> manages the LDAP configuration file that describes the 
parameters used to talk to LDAP 
ldapconfignode -> manages the conversation with LDAP 
authEvents -> writes events to a log4cxx file located in the logs directory

Two utilities:

ldapcheck -> a utility that talks directly to LDAP, used to verify connections 
ldapconfigcheck -> a utility that verifies the LDAP configuration file is set 
up correctly

     Roberta

-----Original Message-----
From: Peng, Huai-Tao (Alex) [mailto:[email protected]]
Sent: Thursday, November 9, 2017 4:56 AM
To: [email protected]
Subject: anybody knows the logic for user authentication on mxosrvr side

Hi All,

I noticed we use the following lines to do verification on the mxosrvr side, 
but I could not find the implementation of the verify function. Does anybody 
know the logic of user authentication?

srvrothers.cpp, line 5272:

DBUserAuth *userSession = DBUserAuth::GetInstance();

retcode = userSession->verify(userDesc->userName
                             ,pPWD
                             ,authErrorDetail
                             ,authenticationInfo
                             ,client_info
                             ,performanceInfo
                             ,&ldapSearchResult
                             );

thanks and regards
Alex

-----Original Message-----
From: Zhu, Wen-Jun [mailto:[email protected]]
Sent: Thursday, November 09, 2017 9:41 AM
To: [email protected]
Subject: RE: timestamp type in Parquet format

It works.

Thanks.

-----Original Message-----
From: Liu, Yuan (Yuan) [mailto:[email protected]]
Sent: Wednesday, November 08, 2017 6:19 PM
To: [email protected]
Subject: RE: timestamp type in Parquet format

I reproduced your issue here; I suspect it is a bug.

As a workaround, we can first cast the column to timestamp.

SQL>showddl hive.hive.t_p;


/* Hive DDL */
CREATE TABLE DEFAULT.T_P
  (
    A                                decimal(15,2)
  , B                                timestamp
  )
  stored as parquet
;

/* Trafodion DDL */

--- SQL operation complete.


SQL>select * from hive.hive.t_p where b >= timestamp '1998-12-01 00:00:00';

*** ERROR[8442] Unable to access EXT scanFetch interface. Call to Java 
exception in fetchNextRow() returned error java.lang.IllegalArgumentException: 
FilterPredicate column: b's declared type (java.lang.Long) does not match the 
schema found in file metadata. Column b is of type: INT96 Valid types for this 
column are: [class parquet.io.api.Binary]
parquet.filter2.predicate.ValidTypeMap.assertTypeValid(ValidTypeMap.java:138)
parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumn(SchemaCompatibilityValidator.java:176)
parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumnFilterPredicate(SchemaCompatibilityValidator.java:151)
parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:115)
parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:58)
parquet.filter2.predicate.Operators$GtEq.accept(Operators.java:248)
parquet.filter2.predicate.SchemaCompatibilityValidator.validate(SchemaCompatibilityValidator.java:63)
parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:59)
parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:40)
parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:126)
parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:46)
parquet.hadoop.ParquetReader.initReader(ParquetReader.java:148)
parquet.hadoop.ParquetReader.read(ParquetReader.java:128)
org.trafodion.sql.TrafParquetFileReader.fetchNextBlock(TrafParquetFileReader.java:591)(18).
 Error detail 0. [2017-11-08 18:13:39]


SQL>select * from hive.hive.t_p where cast(b as timestamp) >= timestamp '1998-12-01 00:00:00';

A                 B
----------------- --------------------------
             1.20 2016-01-01 04:30:30.000000

--- 1 row(s) selected.

Best regards,
Yuan

-----Original Message-----
From: Liu, Yuan (Yuan) [mailto:[email protected]]
Sent: Wednesday, November 08, 2017 6:01 PM
To: [email protected]
Subject: RE: timestamp type in Parquet format

Are you sure you are using Trafodion 2.3? Trafodion 2.2 has not been released yet.


Best regards,
Yuan

-----Original Message-----
From: Zhu, Wen-Jun [mailto:[email protected]]
Sent: Wednesday, November 08, 2017 5:47 PM
To: [email protected]
Subject: timestamp type in Parquet format

Hi all,
       I am running the TPC-H benchmark on Trafodion 2.3, with Hive 1.1.

       The column l_shipdate in table lineitem is of type date, which is not 
supported in Hive until version 1.2, so I changed the type to timestamp.

       Then I got the following error message in trafci:

SQL>showddl hive.hive.parquet_lineitem;
/* Hive DDL */
CREATE TABLE DEFAULT.PARQUET_LINEITEM
  (
    L_ORDERKEY                       int
  , L_PARTKEY                        int
  , L_SUPPKEY                        int
  , L_LINENUMBER                     int
  , L_QUANTITY                       decimal(15,2)
  , L_EXTENDEDPRICE                  decimal(15,2)
  , L_DISCOUNT                       decimal(15,2)
  , L_TAX                            decimal(15,2)
  , L_RETURNFLAG                     char(1)
  , L_LINESTATUS                     char(1)
  , L_SHIPDATE                       timestamp
  , L_COMMITDATE                     timestamp
  , L_RECEIPTDATE                    timestamp
  , L_SHIPINSTRUCT                   char(25)
  , L_SHIPMODE                       char(10)
  , L_COMMENT                        varchar(44)
  )
  stored as parquet
;

SQL> select
        l_shipdate
from
        hive.hive.parquet_lineitem
where
        l_shipdate <= timestamp '1998-12-01 00:00:00'
limit 10;

*** ERROR[8442] Unable to access EXT scanFetch interface. Call to Java 
exception in fetchNextRow() returned error java.lang.IllegalArgumentException: 
FilterPredicate column: l_shipdate's declared type (java.lang.Long) does not 
match the schema found in file metadata. Column l_shipdate is of type: 
FullTypeDescriptor(PrimitiveType: INT96, OriginalType: null) Valid types for 
this column are: null
parquet.filter2.predicate.ValidTypeMap.assertTypeValid(ValidTypeMap.java:132)
parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumn(SchemaCompatibilityValidator.java:185)
parquet.filter2.predicate.SchemaCompatibilityValidator.validateColumnFilterPredicate(SchemaCompatibilityValidator.java:160)
parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:112)
parquet.filter2.predicate.SchemaCompatibilityValidator.visit(SchemaCompatibilityValidator.java:59)
parquet.filter2.predicate.Operators$LtEq.accept(Operators.java:221)
parquet.filter2.predicate.SchemaCompatibilityValidator.validate(SchemaCompatibilityValidator.java:64)
parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:59)
parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:40)
parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:126)
parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:46)
parquet.hadoop.ParquetReader.initReader(ParquetReader.java:152)
parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
org.trafodion.sql.TrafParquetFileReader.fetchNextBlock(TrafParquetFileReader.java:591)(18).
 Error detail 0. [2017-11-08 17:35:08]

Can anyone help me find out what happened? If needed, I can provide the data.


Thank you in advance!
