[jira] [Commented] (SPARK-34134) LDAP authentication of spark thrift server support user id mapping

2021-02-05 Thread Timothy Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-34134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279847#comment-17279847
 ] 

Timothy Zhang commented on SPARK-34134:
---

Almost all of the applications I have used support it, such as Cognos, 
Jenkins, Graylog, etc. 

> LDAP authentication of spark thrift server support user id mapping
> --
>
> Key: SPARK-34134
> URL: https://issues.apache.org/jira/browse/SPARK-34134
> Project: Spark
>  Issue Type: Improvement
>  Components: Security
>Affects Versions: 3.0.1
>Reporter: Timothy Zhang
>Priority: Major
>
> I'm trying to configure LDAP authentication for the Spark Thrift Server and 
> would like to map user ids to mail addresses.
> In our scenario, "uid" is the key of our LDAP directory, and "mail" (email 
> address) is one of its attributes. We want users to enter their email 
> address, i.e. "mail", when they log in with the Thrift client; that is, the 
> "username" input should be mapped to a query on the "mail" attribute, e.g.
> {code:none}
> hive.server2.authentication.ldap.customLDAPQuery="(&(objectClass=person)(mail=${uid}))"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-34134) LDAP authentication of spark thrift server support user id mapping

2021-01-15 Thread Timothy Zhang (Jira)
Timothy Zhang created SPARK-34134:
-

 Summary: LDAP authentication of spark thrift server support user 
id mapping
 Key: SPARK-34134
 URL: https://issues.apache.org/jira/browse/SPARK-34134
 Project: Spark
  Issue Type: Improvement
  Components: Security
Affects Versions: 3.0.1
Reporter: Timothy Zhang


I'm trying to configure LDAP authentication for the Spark Thrift Server and 
would like to map user ids to mail addresses.

In our scenario, "uid" is the key of our LDAP directory, and "mail" (email 
address) is one of its attributes. We want users to enter their email address, 
i.e. "mail", when they log in with the Thrift client; that is, the "username" 
input should be mapped to a query on the "mail" attribute, e.g.

{code:none}
hive.server2.authentication.ldap.customLDAPQuery="(&(objectClass=person)(mail=${uid}))"
{code}
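To make the intended mapping concrete, here is a minimal sketch of the placeholder substitution such a custom query implies. The class and method names are hypothetical, and this is not Spark's or Hive's actual implementation; it only illustrates how the name the user types at login could replace the ${uid} placeholder, turning the filter into a search on the mail attribute:

```java
// Hypothetical illustration only: resolving a custom LDAP query template at
// login time. The substitution mechanism and names are assumptions for the
// sketch, not code from Spark or HiveServer2.
public class LdapFilterDemo {
    // Substitute the login name the user typed into the filter template.
    static String resolveFilter(String template, String login) {
        return template.replace("${uid}", login);
    }

    public static void main(String[] args) {
        String template = "(&(objectClass=person)(mail=${uid}))";
        // The user logs in with a mail address rather than a uid.
        System.out.println(resolveFilter(template, "jane.doe@example.com"));
        // prints (&(objectClass=person)(mail=jane.doe@example.com))
    }
}
```

With a filter like this, the directory is searched by the mail attribute, so the uid-keyed entry can still be located from the email address the user entered.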






[jira] [Commented] (SPARK-31209) Not compatible with new version of scalatest (3.1.0 and above)

2020-03-24 Thread Timothy Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066090#comment-17066090
 ] 

Timothy Zhang commented on SPARK-31209:
---

Sure, I'll try to work on it soon. 

> Not compatible with new version of scalatest (3.1.0 and above)
> --
>
> Key: SPARK-31209
> URL: https://issues.apache.org/jira/browse/SPARK-31209
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Tests
>Affects Versions: 3.1.0
>Reporter: Timothy Zhang
>Priority: Major
>
> Since ScalaTest's style traits and classes were moved and renamed 
> ([http://www.scalatest.org/release_notes/3.1.0]), there are errors such as 
> FunSpec not being found when I add the new version of scalatest as a library 
> dependency. 






[jira] [Commented] (SPARK-31209) Not compatible with new version of scalatest (3.1.0 and above)

2020-03-23 Thread Timothy Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17064561#comment-17064561
 ] 

Timothy Zhang commented on SPARK-31209:
---

Yes, I think so. It would be better to upgrade all classes that extend 
scalatest's FunSuite, FlatSpec, etc. (renamed in 3.1.0 to AnyFunSuite, 
AnyFlatSpec, and so on). 







[jira] [Created] (SPARK-31209) Not compatible with new version of scalatest (3.1.0 and above)

2020-03-20 Thread Timothy Zhang (Jira)
Timothy Zhang created SPARK-31209:
-

 Summary: Not compatible with new version of scalatest (3.1.0 and 
above)
 Key: SPARK-31209
 URL: https://issues.apache.org/jira/browse/SPARK-31209
 Project: Spark
  Issue Type: Dependency upgrade
  Components: Tests
Affects Versions: 2.4.5
Reporter: Timothy Zhang


Since ScalaTest's style traits and classes were moved and renamed 
([http://www.scalatest.org/release_notes/3.1.0]), there are errors such as 
FunSpec not being found when I add the new version of scalatest as a library 
dependency. 






[jira] [Commented] (SPARK-28303) Support DELETE/UPDATE/MERGE Operations in DataSource V2

2019-10-01 Thread Timothy Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16942037#comment-16942037
 ] 

Timothy Zhang commented on SPARK-28303:
---

I just found this issue. Here is the issue I reported: 
https://issues.apache.org/jira/browse/SPARK-29185. Based on my analysis of the 
current Spark JDBC code, I found it is easy to extend it to support 
DELETE/UPDATE/MERGE: just enhance SaveMode and add statement builders for 
DELETE/UPDATE/MERGE. Would you please take a look at SPARK-29185? 

> Support DELETE/UPDATE/MERGE Operations in DataSource V2
> ---
>
> Key: SPARK-28303
> URL: https://issues.apache.org/jira/browse/SPARK-28303
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 3.0.0
>Reporter: Xianyin Xin
>Priority: Major
>
> Now many datasources (delta, jdbc, hive with transaction support, kudu, etc.) 
> support deleting/updating data. It's necessary to add related APIs to the 
> DataSource V2 API set.
> For example, we suggest adding the interface below to the V2 API:
> {code:java|title=SupportsDelete.java|borderStyle=solid}
> public interface SupportsDelete {
>   WriteBuilder delete(Filter[] filters); 
> }
> {code}
> which can delete data by simple predicates (complicated cases like correlated 
> subqueries are not currently considered).
>  
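As a toy illustration of the contract such an interface implies, the sketch below deletes in-memory rows that match simple equality predicates. The Filter and row types are simplified stand-ins invented for this sketch, not Spark's actual DataSource V2 classes:

```java
// Toy, self-contained sketch of "delete by simple predicates". EqualTo and
// the Map-based rows are simplified stand-ins, not Spark's DSv2 types.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class DeleteSketch {
    // Simplified stand-in for Spark's EqualTo filter.
    static final class EqualTo {
        final String column;
        final Object value;
        EqualTo(String column, Object value) { this.column = column; this.value = value; }
    }

    // Delete semantics: drop every row that satisfies ALL of the predicates,
    // returning the rows that remain.
    static List<Map<String, Object>> deleteWhere(List<Map<String, Object>> rows,
                                                 List<EqualTo> filters) {
        List<Map<String, Object>> kept = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            boolean matchesAll = filters.stream()
                .allMatch(f -> Objects.equals(row.get(f.column), f.value));
            if (!matchesAll) kept.add(row);  // keep rows the predicates do not select
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = List.of(
            Map.of("id", 1, "region", "EU"),
            Map.of("id", 2, "region", "US"));
        List<Map<String, Object>> left =
            deleteWhere(rows, List.of(new EqualTo("region", "EU")));
        System.out.println(left.size());  // prints 1
    }
}
```

A real source would translate the pushed-down filters into its own deletion mechanism (a WHERE clause, tombstones, file rewrites, etc.); the point here is only the shape of the predicate-driven contract.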






[jira] [Created] (SPARK-29185) Add new SaveMode types for Spark SQL jdbc datasource

2019-09-19 Thread Timothy Zhang (Jira)
Timothy Zhang created SPARK-29185:
-

 Summary: Add new SaveMode types for Spark SQL jdbc datasource
 Key: SPARK-29185
 URL: https://issues.apache.org/jira/browse/SPARK-29185
 Project: Spark
  Issue Type: New Feature
  Components: Input/Output
Affects Versions: 2.4.4
Reporter: Timothy Zhang


 It is necessary to add new SaveMode for Delete, Update, and Upsert, such as:
 * SaveMode.Delete
 * SaveMode.Update
 * SaveMode.Upsert

So that Spark SQL can better support legacy RDBMSs, e.g. Oracle, DB2, MySQL, 
etc. Actually, the code implementation of the current SaveMode.Append type is 
very flexible: all types could share the same savePartition function, only 
adding new getStatement functions for Delete, Update, and Upsert with DELETE 
FROM, UPDATE, and MERGE INTO SQL statements respectively. We have initial 
implementations for them:
{code:java}
def getDeleteStatement(table: String, rddSchema: StructType, dialect: JdbcDialect): String = {
  val columns = rddSchema.fields.map(x => dialect.quoteIdentifier(x.name) + "=?").mkString(" AND ")

  s"DELETE FROM ${table.toUpperCase} WHERE $columns"
}

def getUpdateStatement(table: String, rddSchema: StructType, priKeys: Seq[String], dialect: JdbcDialect): String = {
  val fullCols = rddSchema.fields.map(x => dialect.quoteIdentifier(x.name))
  val priCols = priKeys.map(dialect.quoteIdentifier(_))
  val columns = (fullCols diff priCols).map(_ + "=?").mkString(",")
  val cnditns = priCols.map(_ + "=?").mkString(" AND ")

  s"UPDATE ${table.toUpperCase} SET $columns WHERE $cnditns"
}

def getMergeStatement(table: String, rddSchema: StructType, priKeys: Seq[String], dialect: JdbcDialect): String = {
  val fullCols = rddSchema.fields.map(x => dialect.quoteIdentifier(x.name))
  val priCols = priKeys.map(dialect.quoteIdentifier(_))
  val nrmCols = fullCols diff priCols

  val fullPart = fullCols.map(c => s"${dialect.quoteIdentifier("SRC")}.$c").mkString(",")
  val priPart = priCols.map(c => s"${dialect.quoteIdentifier("TGT")}.$c=${dialect.quoteIdentifier("SRC")}.$c").mkString(" AND ")
  val nrmPart = nrmCols.map(c => s"$c=${dialect.quoteIdentifier("SRC")}.$c").mkString(",")

  val columns = fullCols.mkString(",")
  val placeholders = fullCols.map(_ => "?").mkString(",")

  s"MERGE INTO ${table.toUpperCase} AS ${dialect.quoteIdentifier("TGT")} " +
    s"USING TABLE(VALUES($placeholders)) " +
    s"AS ${dialect.quoteIdentifier("SRC")}($columns) " +
    s"ON $priPart " +
    s"WHEN NOT MATCHED THEN INSERT ($columns) VALUES ($fullPart) " +
    s"WHEN MATCHED THEN UPDATE SET $nrmPart"
}
{code}
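For a concrete sense of the output these helpers would produce, here is a hedged Java rendering of the getDeleteStatement logic for a sample two-column schema. Double-quote quoting stands in for JdbcDialect.quoteIdentifier, and the names are illustrative, not Spark's actual code:

```java
// Illustrative re-rendering of the DELETE-statement builder sketched above:
// one quoted "col"=? predicate per column, joined with AND. Quoting with
// plain double quotes is an assumption standing in for the JDBC dialect.
import java.util.List;
import java.util.stream.Collectors;

public class JdbcDeleteShape {
    static String quote(String id) { return "\"" + id + "\""; }

    // Mirrors getDeleteStatement: table name upper-cased, ANDed placeholders.
    static String deleteStatement(String table, List<String> columns) {
        String where = columns.stream()
            .map(c -> quote(c) + "=?")
            .collect(Collectors.joining(" AND "));
        return "DELETE FROM " + table.toUpperCase() + " WHERE " + where;
    }

    public static void main(String[] args) {
        System.out.println(deleteStatement("users", List.of("id", "region")));
        // prints DELETE FROM USERS WHERE "id"=? AND "region"=?
    }
}
```

The resulting statement is then usable with a prepared-statement batch, one parameter set per row, just as the Append path already does for INSERT.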


