Hi,

I'm trying to find the right approach for reading from a multi-tenant Phoenix
table via the phoenix-spark plugin in Java.

For Scala, I see the following example:

https://github.com/apache/phoenix/blob/master/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala

test("Can read from tenant-specific table as DataFrame") {
    val sqlContext = new SQLContext(sc)
    val df = sqlContext.phoenixTableAsDataFrame(
      TenantTable,
      Seq(OrgIdCol, TenantOnlyCol),
      zkUrl = Some(quorumAddress),
      tenantId = Some(TenantId),
      conf = hbaseConfiguration)

    // There should only be 1 row upserted in tenantSetup.sql
    val count = df.count()
    count shouldEqual 1L
  }

For Java, I see the following snippet. But is there an option for passing in
the tenantId, similar to how it is done via the JDBC URL/connection properties?

spark.read().format("org.apache.phoenix.spark")
    .option("table", "tableName")
    .option("zkUrl", "")
    .load();

I'd appreciate any inputs or suggestions!

Thanks
