[
https://issues.apache.org/jira/browse/PHOENIX-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15752339#comment-15752339
]
Josh Mahonin commented on PHOENIX-3532:
---------------------------------------
Thanks for the patch, [~nico.pappagianis].
It looks good. I think it's totally unrelated to this patch, but do you have
any issues running {{mvn verify}} in the phoenix-spark folder? I'm getting some
new stack traces; it could just be my environment (MBP, OS X Sierra):
{noformat}
Exception encountered when invoking run on a nested suite - java.io.IOException: Cannot create directory /Users/jmahonin/devel/phoenix/phoenix-spark/target/test-data/a4190a9c-2a90-4376-a2c7-423af433801e/dfscluster_06f4205f-969f-4394-a1c1-2c3c36f2d932/dfs/name1/current
*** ABORTED ***
java.lang.RuntimeException: java.io.IOException: Cannot create directory /Users/jmahonin/devel/phoenix/phoenix-spark/target/test-data/a4190a9c-2a90-4376-a2c7-423af433801e/dfscluster_06f4205f-969f-4394-a1c1-2c3c36f2d932/dfs/name1/current
at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:591)
at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
...
Cause: java.io.IOException: Cannot create directory /Users/jmahonin/devel/phoenix/phoenix-spark/target/test-data/a4190a9c-2a90-4376-a2c7-423af433801e/dfscluster_06f4205f-969f-4394-a1c1-2c3c36f2d932/dfs/name1/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
{noformat}
> Enable DataFrames and RDDs to read from a tenant-specific table
> ---------------------------------------------------------------
>
> Key: PHOENIX-3532
> URL: https://issues.apache.org/jira/browse/PHOENIX-3532
> Project: Phoenix
> Issue Type: Bug
> Reporter: Nico Pappagianis
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> Currently the method phoenixTableAsDataFrame in SparkSqlContextFunctions
> and phoenixTableAsRDD in SparkContextFunctions do not pass the tenantId
> parameter along to the PhoenixRDD constructor. The tenantId parameter was
> added as part of PHOENIX-3427 but was not properly implemented (by me). This
> JIRA will fix this issue and add tests around reading tenant-specific tables
> as both DataFrames and RDDs.
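For illustration, a tenant-scoped read through these entry points would look roughly like the sketch below. This is a minimal example, assuming the {{tenantId}} parameter added in PHOENIX-3427 is an {{Option[String]}} on both {{phoenixTableAsDataFrame}} and {{phoenixTableAsRDD}}; the view name, ZooKeeper URL, and tenant id are placeholders, and the exact parameter names may differ from the patch.
{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._  // brings in the implicit SparkContext / SQLContext functions

val sc = new SparkContext(new SparkConf().setAppName("tenant-specific-read"))
val sqlContext = new SQLContext(sc)

// Read a tenant-specific view as a DataFrame, scoped to tenant "tenant1".
// With this fix, tenantId should be forwarded to the underlying PhoenixRDD.
val df = sqlContext.phoenixTableAsDataFrame(
  "TENANT_VIEW", Seq("ID", "COL1"),
  zkUrl = Some("localhost:2181"),
  tenantId = Some("tenant1"),
  conf = new Configuration)

// Read the same view as an RDD of column-name -> value maps.
val rdd = sc.phoenixTableAsRDD(
  "TENANT_VIEW", Seq("ID", "COL1"),
  zkUrl = Some("localhost:2181"),
  tenantId = Some("tenant1"),
  conf = new Configuration)
{code}
The tests added here would presumably drive calls like these and assert that only rows belonging to the given tenant come back.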