In case anyone needs this: with Tableau Desktop 8.3 everything works fine.
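One way to narrow this down (a hedged sketch, not from the original thread) is to issue the same query the quoted log shows Tableau sending, but directly over the Thrift interface, bypassing Tableau and the Simba ODBC driver entirely. The host, port, and username below are assumptions taken from the log; the `pyhive` package (`pip install "pyhive[hive]"`) is assumed to be available.

```python
# Hedged sketch: query the Spark Thrift server directly with pyhive,
# bypassing Tableau and the Simba ODBC driver. Host, port, username,
# and table name are assumptions taken from the quoted log below.

def fetch_sample(host="192.168.245.128", port=10000, username="hduser"):
    """Issue the same query the Tableau log shows and return the rows."""
    from pyhive import hive  # third-party package, assumed installed

    conn = hive.connect(host=host, port=port, username=username)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT 1 AS `number_of_records` "
            "FROM `default`.`event_96720107` `event_96720107` LIMIT 10"
        )
        return cur.fetchall()
    finally:
        conn.close()

# Usage (against a live Thrift server):
#   rows = fetch_sample()
```

If this returns rows while Tableau still renders an empty table, the problem sits in the ODBC layer (driver version or DSN settings) rather than in Spark SQL itself.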

>Tuesday, 26 January 2016, 18:04 +03:00 from Я <ma...@bk.ru>:
>
>
>Hi, I'm trying to connect Tableau Desktop 9.2 to Spark SQL.
>I'm following this guide: http://samx18.io/blog/2015/09/05/tableau-spark-hive.html
>but at the last step, when I try to get the contents of the table, I get
>only the row count and an empty table.
>
>Hadoop 2.6, Hive 1.21, Spark 1.4.1, Tableau Desktop 9.2, Simba Spark ODBC Driver
>1.0 (64-bit)
>
>Thrift server log:
>
>16/01/26 17:57:24 INFO SparkExecuteStatementOperation: Running query 'SELECT 1 
>AS `number_of_records`
>FROM `default`.`event_96720107` `event_96720107`
>LIMIT 10'
>16/01/26 17:57:24 INFO ParseDriver: Parsing command: SELECT 1 AS 
>`number_of_records`
>FROM `default`.`event_96720107` `event_96720107`
>LIMIT 10
>16/01/26 17:57:24 INFO ParseDriver: Parse Completed
>16/01/26 17:57:24 INFO HiveMetaStore: 2: get_table : db=default 
>tbl=event_96720107
>16/01/26 17:57:24 INFO audit: ugi=hduser    ip=unknown-ip-addr    
>cmd=get_table : db=default tbl=event_96720107    
>16/01/26 17:57:24 INFO MemoryStore: ensureFreeSpace(400328) called with 
>curMem=1356967, maxMem=278019440
>16/01/26 17:57:24 INFO MemoryStore: Block broadcast_26 stored as values in 
>memory (estimated size 390.9 KB, free 263.5 MB)
>16/01/26 17:57:24 INFO MemoryStore: ensureFreeSpace(36040) called with 
>curMem=1757295, maxMem=278019440
>16/01/26 17:57:24 INFO MemoryStore: Block broadcast_26_piece0 stored as bytes 
>in memory (estimated size 35.2 KB, free 263.4 MB)
>16/01/26 17:57:24 INFO BlockManagerInfo: Added broadcast_26_piece0 in memory 
>on 192.168.245.128:43526 (size: 35.2 KB, free: 265.0 MB)
>16/01/26 17:57:24 INFO SparkContext: Created broadcast 26 from Spark JDBC 
>Server Query
>16/01/26 17:57:24 INFO FileInputFormat: Total input paths to process : 1
>16/01/26 17:57:24 INFO SparkContext: Starting job: Spark JDBC Server Query
>16/01/26 17:57:24 INFO DAGScheduler: Got job 14 (Spark JDBC Server Query) with 
>1 output partitions (allowLocal=false)
>16/01/26 17:57:24 INFO DAGScheduler: Final stage: ResultStage 14(Spark JDBC 
>Server Query)
>16/01/26 17:57:24 INFO DAGScheduler: Parents of final stage: List()
>16/01/26 17:57:24 INFO DAGScheduler: Missing parents: List()
>16/01/26 17:57:24 INFO DAGScheduler: Submitting ResultStage 14 
>(MapPartitionsRDD[84] at Spark JDBC Server Query), which has no missing parents
>16/01/26 17:57:24 INFO MemoryStore: ensureFreeSpace(16768) called with 
>curMem=1793335, maxMem=278019440
>16/01/26 17:57:24 INFO MemoryStore: Block broadcast_27 stored as values in 
>memory (estimated size 16.4 KB, free 263.4 MB)
>16/01/26 17:57:24 INFO MemoryStore: ensureFreeSpace(7945) called with 
>curMem=1810103, maxMem=278019440
>16/01/26 17:57:24 INFO MemoryStore: Block broadcast_27_piece0 stored as bytes 
>in memory (estimated size 7.8 KB, free 263.4 MB)
>16/01/26 17:57:24 INFO BlockManagerInfo: Added broadcast_27_piece0 in memory 
>on 192.168.245.128:43526 (size: 7.8 KB, free: 265.0 MB)
>16/01/26 17:57:24 INFO SparkContext: Created broadcast 27 from broadcast at 
>DAGScheduler.scala:874
>16/01/26 17:57:24 INFO DAGScheduler: Submitting 1 missing tasks from 
>ResultStage 14 (MapPartitionsRDD[84] at Spark JDBC Server Query)
>16/01/26 17:57:24 INFO TaskSchedulerImpl: Adding task set 14.0 with 1 tasks
>16/01/26 17:57:24 INFO TaskSetManager: Starting task 0.0 in stage 14.0 (TID 
>14, 192.168.245.128, ANY, 1448 bytes)
>16/01/26 17:57:24 INFO BlockManagerInfo: Added broadcast_27_piece0 in memory 
>on 192.168.245.128:49810 (size: 7.8 KB, free: 265.0 MB)
>16/01/26 17:57:24 INFO BlockManagerInfo: Added broadcast_26_piece0 in memory 
>on 192.168.245.128:49810 (size: 35.2 KB, free: 265.0 MB)
>16/01/26 17:57:24 INFO TaskSetManager: Finished task 0.0 in stage 14.0 (TID 
>14) in 89 ms on 192.168.245.128 (1/1)
>16/01/26 17:57:24 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose tasks 
>have all completed, from pool 
>16/01/26 17:57:24 INFO DAGScheduler: ResultStage 14 (Spark JDBC Server Query) 
>finished in 0.088 s
>16/01/26 17:57:24 INFO DAGScheduler: Job 14 finished: Spark JDBC Server Query, 
>took 0.105733 s
>16/01/26 17:57:24 INFO StatsReportListener: Finished stage: 
>org.apache.spark.scheduler.StageInfo@108a7
>16/01/26 17:57:24 INFO SparkExecuteStatementOperation: Result Schema: 
>ArrayBuffer(number_of_records#745)
>16/01/26 17:57:24 INFO SparkExecuteStatementOperation: Result Schema: 
>ArrayBuffer(number_of_records#745)
>16/01/26 17:57:24 INFO StatsReportListener: task runtime:(count: 1, mean: 
>89.000000, stdev: 0.000000, max: 89.000000, min: 89.000000)
>16/01/26 17:57:24 INFO StatsReportListener:     0%    5%    10%    25%    50%  
>  75%    90%    95%    100%
>16/01/26 17:57:24 INFO StatsReportListener:     89.0 ms    89.0 ms    89.0 ms  
>  89.0 ms    89.0 ms    89.0 ms    89.0 ms    89.0 ms    89.0 ms
>16/01/26 17:57:24 INFO StatsReportListener: task result size:(count: 1, mean: 
>1925.000000, stdev: 0.000000, max: 1925.000000, min: 1925.000000)
>16/01/26 17:57:24 INFO StatsReportListener:     0%    5%    10%    25%    50%  
>  75%    90%    95%    100%
>16/01/26 17:57:24 INFO StatsReportListener:     1925.0 B    1925.0 B    1925.0 
>B    1925.0 B    1925.0 B    1925.0 B    1925.0 B    1925.0 B    1925.0 B
>16/01/26 17:57:24 INFO StatsReportListener: executor (non-fetch) time pct: 
>(count: 1, mean: 64.044944, stdev: 0.000000, max: 64.044944, min: 64.044944)
>16/01/26 17:57:24 INFO StatsReportListener:     0%    5%    10%    25%    50%  
>  75%    90%    95%    100%
>16/01/26 17:57:24 INFO StatsReportListener:     64 %    64 %    64 %    64 %   
> 64 %    64 %    64 %    64 %    64 %
>16/01/26 17:57:24 INFO StatsReportListener: other time pct: (count: 1, mean: 
>35.955056, stdev: 0.000000, max: 35.955056, min: 35.955056)
>16/01/26 17:57:24 INFO StatsReportListener:     0%    5%    10%    25%    50%  
>  75%    90%    95%    100%
>16/01/26 17:57:24 INFO StatsReportListener:     36 %    36 %    36 %    36 %   
> 36 %    36 %    36 %    36 %    36 %
>16/01/26 17:57:24 INFO SparkExecuteStatementOperation: Result Schema: 
>ArrayBuffer(number_of_records#745)



