Re: CarbonData (1.5.2) TPCH Reports

2019-04-10 Thread aaron
Hi Gururaj, I did not see any points where Carbon is better than Parquet in your
report, so I'm wondering why we should use Carbon instead of Parquet. We all know
Parquet is more popular.




--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Long string columns config for big strings not work

2018-10-11 Thread aaron
Thanks, I will have a try



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Long string columns config for big strings not work

2018-10-10 Thread aaron
Hi Community, I found that if I match the table column order and the dataframe
column order as shown below, then it works.

_df
.select(
  "market_code", "product_id", "country_code", "category_id",
"company_id", "name", "company", "release_date", "price", "version",
  "description", "ss_urls", "size", "web_urls", "created",
"content_rating", "privacy_policy_url", "last_updated", "has_iap", "status",
  "current_release_date", "original_price", "sensitive_status",
"artwork_url", "slug", "scrape_reviews", "date_scraped", "scrape_failed",
  "dead", "sku", "req_version", "req_device", "has_game_center",
"is_mac", "languages", "support_url", "license_url", "link_apps",
  "scrape_review_delay", "requirements", "app_store_notes",
"bundle_id", "product_type", "bundle_product_count", "family_sharing",
  "purchased_separately_price", "seller", "required_devices",
"has_imsg", "is_hidden_from_springboard", "subtitle", "promotional_text",
  "editorial_badge_type", "editorial_badge_name", "only_32_bit",
"class", "installs", "require_os", "downloads_chart_url", "video_url",
  "icon_url", "banner_image_url", "permissions", "whats_new",
"related_apps", "also_installed_apps", "more_from_developer_apps",
  "is_publisher_top", "publisher_email", "scrape_review_status",
"company_code", "source"
)
.write
.format("carbondata")
.option("tableName", s"${tableName}")
.option("compress", "true")
.mode(SaveMode.Append)
.save()
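
For reference, a minimal sketch (not from the original mail) of deriving the column
order from the Carbon table itself instead of listing every column by hand; it
assumes the same carbon session, _df dataframe and tableName variable used above:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// Read the target table's column order and reorder the dataframe to match it,
// so the write no longer depends on the dataframe's original column order.
val orderedColumns = carbon.table(tableName).columns.map(col)

_df.select(orderedColumns: _*)
  .write
  .format("carbondata")
  .option("tableName", tableName)
  .option("compress", "true")
  .mode(SaveMode.Append)
  .save()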





--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


[Issue] Long string columns config for big strings not work

2018-10-10 Thread aaron
Hi Community,

I encountered an issue: the LONG_STRING_COLUMNS config for big strings does not
work. My env is Spark 2.3.2 + Carbon 1.5.0.

1. DDL Sql

carbon.sql(
  s"""
 |CREATE TABLE IF NOT EXISTS product(
 |market_code STRING,
 |product_id LONG,
 |country_code STRING,
 |category_id LONG,
 |company_id LONG,
 |name STRING,
 |company STRING,
 |release_date STRING,
 |price DOUBLE,
 |version STRING,
 |description STRING,
 |ss_urls STRING,
 |size DOUBLE,
 |web_urls STRING,
 |created STRING,
 |content_rating STRING,
 |privacy_policy_url STRING,
 |last_updated STRING,
 |has_iap BOOLEAN,
 |status LONG,
 |current_release_date STRING,
 |original_price DOUBLE,
 |sensitive_status LONG,
 |artwork_url STRING,
 |slug STRING,
 |scrape_reviews BOOLEAN,
 |date_scraped STRING,
 |scrape_failed LONG,
 |dead BOOLEAN,
 |sku STRING,
 |req_version STRING,
 |req_device LONG,
 |has_game_center BOOLEAN,
 |is_mac BOOLEAN,
 |languages STRING,
 |support_url STRING,
 |license_url STRING,
 |link_apps STRING,
 |scrape_review_delay LONG,
 |requirements STRING,
 |app_store_notes STRING,
 |bundle_id STRING,
 |product_type LONG,
 |bundle_product_count LONG,
 |family_sharing BOOLEAN,
 |purchased_separately_price DOUBLE,
 |seller STRING,
 |required_devices STRING,
 |has_imsg BOOLEAN,
 |is_hidden_from_springboard BOOLEAN,
 |subtitle STRING,
 |promotional_text STRING,
 |editorial_badge_type STRING,
 |editorial_badge_name STRING,
 |only_32_bit BOOLEAN,
 |class STRING,
 |installs STRING,
 |require_os STRING,
 |downloads_chart_url STRING,
 |video_url STRING,
 |icon_url STRING,
 |banner_image_url STRING,
 |permissions STRING,
 |whats_new STRING,
 |related_apps STRING,
 |also_installed_apps STRING,
 |more_from_developer_apps STRING,
 |is_publisher_top BOOLEAN,
 |publisher_email STRING,
 |scrape_review_status LONG,
 |company_code STRING,
 |source STRING)
 |STORED BY 'carbondata'
 |TBLPROPERTIES(
 |'SORT_COLUMNS'='market_code, status, country_code, category_id,
product_id, company_id',
 |'NO_INVERTED_INDEX'='name, company, release_date, artwork_url,
slug, scrape_reviews, price, version, date_scraped, scrape_failed, sku,
size, req_version, languages, created, support_url, license_url,
scrape_review_delay, last_updated, bundle_id, bundle_product_count,
family_sharing, purchased_separately_price, seller, required_devices,
current_release_date, original_price, subtitle, promotional_text,
editorial_badge_type, editorial_badge_name, installs, video_url, icon_url,
banner_image_url, company_code, source',
 |'DICTIONARY_INCLUDE'='market_code,country_code',
 |'LONG_STRING_COLUMNS'='description, downloads_chart_url,
permissions, whats_new, web_urls, related_apps, also_installed_apps,
more_from_developer_apps, privacy_policy_url, publisher_email, ss_urls,
link_apps, content_rating, requirements, app_store_notes',
 |'SORT_SCOPE'='LOCAL_SORT',
 |'CACHE_LEVEL'='BLOCKLET',
 |'TABLE_BLOCKSIZE'='256')
   """.stripMargin)

2. Table

scala> carbon.sql("describe formatted product").show(200, truncate=false)
2018-10-10 21:24:34 STATISTIC QueryStatisticsRecorderImpl:212 - Time taken
for Carbon Optimizer to optimize: 29
2018-10-10 21:24:35 ERROR CarbonUtil:141 - main Unable to unlock Table lock
for table during table status updation
+------------+---------+--------+
|col_name    |data_type|comment |
+------------+---------+--------+
|market_code |string   |        |

Re: [ISSUE] carbondata1.5.0 and spark 2.3.2 query plan issue

2018-10-05 Thread aaron
Data should be right.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [ISSUE] carbondata1.5.0 and spark 2.3.2 query plan issue

2018-10-01 Thread aaron
I think the query plan info is not right:

1. The total blocklet count from the CarbonData CLI is 233 + 86 = 319.
2. But the query plan tells me that I have 560 blocklets.

I hope the info below can help you locate the issue.

***
I used the CarbonData CLI to print the blocklet summary as below:

java -cp "/home/hadoop/carbontool/*:/opt/spark/jars/*"
org.apache.carbondata.tool.CarbonCli -cmd summary -a -p
hdfs://ec2-dca-aa-p-sdn-16.appannie.org:9000/usr/carbon/data/default/storev3/

## Summary
total: 80 blocks, 9 shards, 233 blocklets, 62,698 pages, 2,006,205,228 rows,
12.40GB
avg: 158.72MB/block, 54.50MB/blocklet, 25,077,565 rows/block, 8,610,322
rows/blocklet

java -cp "/home/hadoop/carbontool/*:/opt/spark/jars/*"
org.apache.carbondata.tool.CarbonCli -cmd summary -a -p
hdfs://ec2-dca-aa-p-sdn-16.appannie.org:9000/usr/carbon/data/default/usage_basickpi/

## Summary
total: 30 blocks, 14 shards, 86 blocklets, 3,498 pages, 111,719,467 rows,
4.24GB
avg: 144.57MB/block, 50.43MB/blocklet, 3,723,982 rows/block, 1,299,063
rows/blocklet



But at the same time, when I ran a SQL query, Carbon gave me the info below:

|== CarbonData Profiler ==
Table Scan on storev3
 - total: 194 blocks, 560 blocklets
 - filter: (granularity <> null and date <> null) and date >=
14726880 between date <= 14752800) and true) and granularity
= monthly) and country_code in
(LiteralExpression(US);LiteralExpression(CN);LiteralExpression(JP);)) and
device_code in (LiteralExpression(ios-phone);)) and product_id <> null) and
country_code <> null) and device_code <> null)
 - pruned by Main DataMap
- skipped: 192 blocks, 537 blocklets



The SELECT SQL is:

SELECT f.country_code, f.date, f.product_id, f.category_id, f.arpu FROM (
SELECT a.country_code, a.date, a.product_id, a.category_id,
a.revenue/a.average_active_users as arpu
FROM(
SELECT r.device_code, r.category_id, r.country_code, r.date,
r.product_id, r.revenue, u.average_active_users
FROM
(
SELECT b.device_code, b.country_code, b.product_id,  b.date,
b.category_id, sum(b.revenue) as revenue
FROM (
SELECT v.device_code, v.country_code, v.product_id,
v.revenue, v.date, p.category_id FROM
(
SELECT device_code, country_code, product_id,
est_revenue as revenue, timeseries(date, 'month') as date
FROM storev3
WHERE market_code='apple-store' AND date BETWEEN
'2016-09-01' AND '2016-10-01' and device_code in ('ios-phone') and
country_code in ('US', 'CN', 'JP')
) as v
JOIN(
SELECT DISTINCT product_id, category_id
FROM storev3
WHERE market_code='apple-store' AND date BETWEEN
'2016-09-01' AND '2016-10-01' and device_code in ('ios-phone') and
category_id in (10, 11, 100021) and country_code in ('US', 'CN',
'JP')
) as p
ON p.product_id = v.product_id
) as b
GROUP BY b.device_code, b.country_code, b.product_id, b.date,
b.category_id
) AS r
JOIN
(
SELECT country_code, date, product_id, (CASE WHEN
est_average_active_users is not NULL THEN est_average_active_users ELSE 0
END) as average_active_users, device_code
FROM usage_basickpi
WHERE date BETWEEN '2016-09-01' AND '2016-10-01'and granularity
='monthly' and country_code in ('US', 'CN', 'JP') AND device_code in
('ios-phone')
) AS u
ON r.country_code=u.country_code AND r.date=u.date AND
r.product_id=u.product_id AND r.device_code=u.device_code
) AS a
)AS f
ORDER BY f.arpu DESC
LIMIT 10

Thanks
Aaron




--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [ISSUE] carbondata1.5.0 and spark 2.3.2 query plan issue

2018-09-30 Thread aaron


Hi xm_zzc,

Thanks for your response. I tested on 2.3.2; I did not test 2.2.2. And I have
merged the fixes from the issues
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Issue-Dictionary-and-S3-td63106.html
and
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Serious-Issue-Query-get-inconsistent-result-on-carbon1-5-0-td63691.html

Create query:
val createStoreTableSql = s"""
  | CREATE TABLE IF NOT EXISTS storev2(
  |   market_code STRING,
  |   device_code STRING,
  |   country_code STRING,
  |   category_id INTEGER,
  |   product_id LONG,
  |   date TIMESTAMP,
  |   est_free_app_download LONG,
  |   est_paid_app_download LONG,
  |   est_revenue LONG
  | )
  | STORED BY 'carbondata'
  | TBLPROPERTIES(
  |   'SORT_COLUMNS'='market_code, device_code, country_code, category_id, date, product_id',
  |   'NO_INVERTED_INDEX'='est_free_app_download, est_paid_app_download, est_revenue',
  |   'DICTIONARY_INCLUDE'='market_code, device_code, country_code, category_id',
  |   'SORT_SCOPE'='GLOBAL_SORT',
  |   'CACHE_LEVEL'='BLOCKLET',
  |   'TABLE_BLOCKSIZE'='256',
  |   'GLOBAL_SORT_PARTITIONS'='3'
  | )
""".stripMargin
 

val createTimeSeriesDayNoProductTableSql = s"""
  | CREATE DATAMAP IF NOT EXISTS agg_by_day ON TABLE storev2
  | USING 'timeSeries'
  | DMPROPERTIES (
  |   'EVENT_TIME'='date',
  |   'DAY_GRANULARITY'='1')
  | AS SELECT date, market_code, device_code, country_code, category_id,
  |   COUNT(product_id), COUNT(est_free_app_download), COUNT(est_free_app_download), COUNT(est_revenue),
  |   SUM(est_free_app_download), MIN(est_free_app_download), MAX(est_free_app_download),
  |   SUM(est_paid_app_download), MIN(est_paid_app_download), MAX(est_paid_app_download),
  |   SUM(est_revenue), MIN(est_revenue), MAX(est_revenue)
  | FROM storev2
  | GROUP BY date, market_code, device_code, country_code, category_id
""".stripMargin
carbon.sql(createTimeSeriesDayNoProductTableSql)



One of the queries:

SELECT timeseries(date, 'DAY') as day, market_code, device_code,
country_code, category_id,
  COUNT(product_id), COUNT(est_free_app_download),
COUNT(est_free_app_download), COUNT(est_revenue),
  sum(est_free_app_download), min(est_free_app_download),
max(est_free_app_download),
  sum(est_paid_app_download), min(est_paid_app_download),
max(est_paid_app_download),
  sum(est_revenue), min(est_revenue), max(est_revenue)
  FROM storev2 WHERE market_code='apple-store' AND
device_code='ios-phone' AND country_code IN ('US', 'CN')
  GROUP BY timeseries(date, 'DAY'), market_code, device_code,
country_code, category_id LIMIT 10;





--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [ISSUE] carbondata1.5.0 and spark 2.3.2 query plan issue

2018-09-30 Thread aaron
Screen_Shot_2018-09-30_at_5.png

  



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


[ISSUE] carbondata1.5.0 and spark 2.3.2 query plan issue

2018-09-30 Thread aaron
Hi community,

I'm afraid the query plan is broken in Spark 2.3.2. Please see the image in the
current post and the posts below. Screen_Shot_2018-09-30_at_5.png

  

 



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-29 Thread aaron
It works, thanks a lot!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-29 Thread aaron
Wow, cool!  I will have a try!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Minor Issue] BETWEEN AND does not work as expected

2018-09-29 Thread aaron
Cool! Thanks a lot for your effort!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-29 Thread aaron
Cool! It works now.  Thanks a lot!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-28 Thread aaron
Great and I will have a try later



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-27 Thread aaron
@Ajantha, Great! looking forward to your fix:)



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-27 Thread aaron
This is the method I use to construct the Carbon session instance; hope this helps.

def carbonSession(appName: String, masterUrl: String, parallelism: String,
    logLevel: String,
    hdfsUrl: String = "hdfs://ec2-dca-aa-p-sdn-16.appannie.org:9000"): SparkSession = {
  val storeLocation = s"${hdfsUrl}/usr/carbon/data"

  CarbonProperties.getInstance()
    .addProperty(CarbonCommonConstants.STORE_LOCATION, storeLocation)
    .addProperty(CarbonCommonConstants.ENABLE_UNSAFE_SORT, "true")
    .addProperty(CarbonCommonConstants.ENABLE_OFFHEAP_SORT, "true")
    .addProperty(CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
      CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_BLOCKLET)
    .addProperty(CarbonCommonConstants.CARBON_CUSTOM_BLOCK_DISTRIBUTION, "false")
    .addProperty(CarbonCommonConstants.ENABLE_VECTOR_READER, "true")
    //.addProperty(CarbonCommonConstants.ENABLE_AUTO_HANDOFF, "true")
    .addProperty(CarbonCommonConstants.ENABLE_AUTO_LOAD_MERGE, "true")
    .addProperty(CarbonCommonConstants.COMPACTION_SEGMENT_LEVEL_THRESHOLD, "4,3")
    .addProperty(CarbonCommonConstants.DAYS_ALLOWED_TO_COMPACT, "0")
    .addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC, s"${hdfsUrl}/usr/carbon/badrecords")
    .addProperty(CarbonCommonConstants.CARBON_QUERY_MIN_MAX_ENABLED, "true")
    .addProperty(CarbonCommonConstants.ENABLE_QUERY_STATISTICS, "false")
    .addProperty(CarbonCommonConstants.ENABLE_DATA_LOADING_STATISTICS, "false")
    .addProperty(CarbonCommonConstants.MAX_QUERY_EXECUTION_TIME, "2")  // 2 minutes
    .addProperty(CarbonCommonConstants.LOCK_TYPE, "HDFSLOCK")
    .addProperty(CarbonCommonConstants.LOCK_PATH, s"${hdfsUrl}/usr/carbon/lock")
    .addProperty(CarbonCommonConstants.CARBON_MERGE_SORT_READER_THREAD, s"${parallelism}")
    .addProperty(CarbonCommonConstants.CARBON_INVISIBLE_SEGMENTS_PRESERVE_COUNT, "100")
    .addProperty(CarbonCommonConstants.LOAD_GLOBAL_SORT_PARTITIONS, s"${parallelism}")
    .addProperty(CarbonCommonConstants.LOAD_SORT_SCOPE, "LOCAL_SORT")
    .addProperty(CarbonCommonConstants.NUM_CORES_COMPACTING, s"${parallelism}")
    .addProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB, "4096")
    .addProperty(CarbonCommonConstants.NUM_CORES_LOADING, s"${parallelism}")
    .addProperty(CarbonCommonConstants.CARBON_MAJOR_COMPACTION_SIZE, "1024")
    .addProperty(CarbonCommonConstants.BLOCKLET_SIZE, "64")
    //.addProperty(CarbonCommonConstants.TABLE_BLOCKLET_SIZE, "64")

  import org.apache.spark.sql.CarbonSession._

  val carbon = SparkSession
    .builder()
    .master(masterUrl)
    .appName(appName)
    .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    .config("spark.hadoop.dfs.replication", 1)
    .config("spark.cores.max", s"${parallelism}")
    .getOrCreateCarbonSession(storeLocation)

  carbon.sparkContext.hadoopConfiguration.setInt("dfs.replication", 1)

  carbon.sql(s"SET spark.default.parallelism=${parallelism}")
  carbon.sql(s"SET spark.sql.shuffle.partitions=${parallelism}")
  carbon.sql(s"SET spark.sql.cbo.enabled=true")
  carbon.sql(s"SET carbon.options.bad.records.logger.enable=true")

  carbon.sparkContext.setLogLevel(logLevel)
  carbon
}
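
For completeness, a minimal usage sketch of the method above; the application name,
master URL and parallelism value are placeholders, not from the original mail:

// Build the CarbonSession with the helper above and run a quick sanity check.
val carbon = carbonSession(
  appName = "carbon-poc",                  // placeholder application name
  masterUrl = "spark://master-host:7077",  // placeholder master URL
  parallelism = "8",
  logLevel = "WARN")

carbon.sql("SHOW TABLES").show(truncate = false)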



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-27 Thread aaron
Yes, you're right.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-27 Thread aaron
Another comment, this issue can be reproduces on spark2.3.1 +
carbondata1.5.0, spark2.2.2 + carbondata1.5.0, I can send you the jar I
compiled to you, hope this could help you.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Serious Issue] Rows disappeared

2018-09-27 Thread aaron
**
a) First can you disable local dictionary and try the same scenario?

I will test that at another time. Good idea though, and I think this works: when
I use the global dictionary, the query returns the right result (a minimal sketch
of disabling the local dictionary follows after these answers). But the problem is
that the global dictionary also introduces a bug in Spark 2.3, which I described
in another issue:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Issue-Dictionary-and-S3-td63106.html

**
b) Can you drop the datamap and try the same scenario? -- If data is coming
from the datamap (you can see this in the explain command)

I have confirmed this: the datamap is not the reason, because the issue can be
reproduced without the datamap.

**
c) Avoid compaction and try the same scenario.

I've confirmed that without compaction, the query works well.

**
d) If you can share, give me the test data and complete steps. (Because
compaction and other steps are not there in your previous mail)

The data is quite huge; the table holds about 7 TB of raw CSV data, so I don't
have a good way to give you the test data :)
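
As a reference for (a), a minimal sketch of disabling the local dictionary at table
level; LOCAL_DICTIONARY_ENABLE is assumed to be the CarbonData 1.5.0 table property
for this, and the table and columns here are only illustrative:

// Sketch: create the table with the local dictionary turned off via TBLPROPERTIES
// ('LOCAL_DICTIONARY_ENABLE' is an assumed 1.5.0 property; names are illustrative).
carbon.sql(
  s"""
     |CREATE TABLE IF NOT EXISTS store_no_local_dict(
     |  market_code STRING,
     |  product_id LONG,
     |  est_revenue LONG)
     |STORED BY 'carbondata'
     |TBLPROPERTIES(
     |  'LOCAL_DICTIONARY_ENABLE'='false',
     |  'SORT_COLUMNS'='market_code, product_id')
   """.stripMargin)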



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


[Minor Issue] BETWEEN AND does not work as expected

2018-09-27 Thread aaron
Hi Community,

BETWEEN ... AND works as >= AND <; I believe it should be >= AND <=. My env is
Spark 2.2.2 + CarbonData 1.4.1.

%Carbondata

scala> carbon.time(carbon.sql(
 |   s"""SELECT timeseries(date, 'DAY') as day, market_code,
device_code, country_code, category_id,
 |  |sum(est_free_app_download), sum(est_paid_app_download),
sum(est_revenue)
 |  |FROM store WHERE date BETWEEN '2016-09-01' AND '2016-09-06'
AND device_code='ios-phone' AND country_code='EE' AND category_id=100021
 |  |GROUP BY timeseries(date, 'DAY'), market_code, device_code,
country_code, category_id"""
 | .stripMargin).show(truncate=false)
 | )
+-------------------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+
|day                |market_code|device_code|country_code|category_id|sum(est_free_app_download)|sum(est_paid_app_download)|sum(est_revenue)|
+-------------------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+
|2016-09-02 00:00:00|apple-store|ios-phone  |EE          |100021     |30807                     |14092                     |648             |
|2016-09-04 00:00:00|apple-store|ios-phone  |EE          |100021     |32137                     |14088                     |875             |
|2016-09-05 00:00:00|apple-store|ios-phone  |EE          |100021     |30774                     |14083                     |930             |
|2016-09-01 00:00:00|apple-store|ios-phone  |EE          |100021     |30408                     |14096                     |932             |
|2016-09-03 00:00:00|apple-store|ios-phone  |EE          |100021     |32476                     |14101                     |818             |
+-------------------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+




%pyspark

(
  spark.read
 
.parquet("s3a://b2b-prod-int-data-pipeline-unified/unified/app-ss.storeint.v1/metric")
  .where("date BETWEEN '2016-09-01' AND '2016-09-06' AND
device_code='ios-phone' AND country_code='EE' AND category_id=100021")
  .groupBy("date", "market_code", "device_code", "country_code",
"category_id")
  .agg({"est_free_app_download": "sum", "est_paid_app_download": "sum",
"est_revenue": "sum"})
  .show()
)

+----------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+
|      date|market_code|device_code|country_code|category_id|sum(est_free_app_download)|sum(est_paid_app_download)|sum(est_revenue)|
+----------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+
|2016-09-04|apple-store|  ios-phone|          EE|     100021|                     32137|                     14088|             875|
|2016-09-06|apple-store|  ios-phone|          EE|     100021|                     31425|                     14103|             893|
|2016-09-01|apple-store|  ios-phone|          EE|     100021|                     30408|                     14096|             932|
|2016-09-05|apple-store|  ios-phone|          EE|     100021|                     30774|                     14083|             930|
|2016-09-03|apple-store|  ios-phone|          EE|     100021|                     32476|                     14101|             818|
|2016-09-02|apple-store|  ios-phone|          EE|     100021|                     30807|                     14092|             648|
+----------+-----------+-----------+------------+-----------+--------------------------+--------------------------+----------------+
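
Until the BETWEEN translation is fixed, a possible workaround is to spell out both
bounds instead of using BETWEEN. This is only a sketch against the same store table
as above; whether it avoids the issue depends on how the explicit predicates are
translated, so treat it as something to try rather than a confirmed fix:

// Workaround sketch: state the inclusive upper bound explicitly instead of relying on BETWEEN.
carbon.sql(
  s"""SELECT timeseries(date, 'DAY') as day, market_code, device_code, country_code, category_id,
     |  sum(est_free_app_download), sum(est_paid_app_download), sum(est_revenue)
     |FROM store
     |WHERE date >= '2016-09-01' AND date <= '2016-09-06'
     |  AND device_code='ios-phone' AND country_code='EE' AND category_id=100021
     |GROUP BY timeseries(date, 'DAY'), market_code, device_code, country_code, category_id
   """.stripMargin).show(truncate = false)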




--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Load auto compaction failed

2018-09-27 Thread aaron
Good explanation, it works now! thanks



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Load auto compaction failed

2018-09-27 Thread aaron
Good suggestion, it works now!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


[Issue] Load auto compaction failed

2018-09-26 Thread aaron
Hi community,

Based on 1.5.0: when loading with the local dictionary and local sort, the load
failed once the row count reached 0.5 billion, but I have already loaded 50 billion
rows before with the global dictionary and sort. Do you have any ideas?


18/09/26 08:39:45 AUDIT CarbonTableCompactor:
[ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]Compaction request
completed for table default.store
18/09/26 08:46:39 WARN TaskSetManager: Lost task 1.0 in stage 216.0 (TID
1513, 10.2.3.249, executor 2):
org.apache.spark.util.TaskCompletionListenerException:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)

org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)

org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)

org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.(CarbonMergerRDD.scala:224)

org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
org.apache.spark.scheduler.Task.run(Task.scala:109)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at
org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
at
org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
at org.apache.spark.scheduler.Task.run(Task.scala:119)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

18/09/26 08:53:31 WARN TaskSetManager: Lost task 1.1 in stage 216.0 (TID
1515, 10.2.3.11, executor 1):
org.apache.spark.util.TaskCompletionListenerException:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)

org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.closeHandler(CarbonFactDataHandlerColumnar.java:377)

org.apache.carbondata.processing.merger.RowResultMergerProcessor.execute(RowResultMergerProcessor.java:177)

org.apache.carbondata.spark.rdd.CarbonMergerRDD$$anon$1.(CarbonMergerRDD.scala:224)

org.apache.carbondata.spark.rdd.CarbonMergerRDD.internalCompute(CarbonMergerRDD.scala:87)
org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
org.apache.spark.scheduler.Task.run(Task.scala:109)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
at
org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:139)
at
org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:117)
at org.apache.spark.scheduler.Task.run(Task.scala:119)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

18/09/26 09:00:22 WARN TaskSetManager: Lost task 1.2 in stage 216.0 (TID
1516, 10.2.3.11, executor 1):
org.apache.spark.util.TaskCompletionListenerException:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

Previous exception in task:
org.apache.carbondata.core.datastore.exception.CarbonDataWriterException

org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processWriteTaskSubmitList(CarbonFactDataHandlerColumnar.java:353)

org.apache.carbondata.processing

[Serious Issue] Rows disappeared

2018-09-26 Thread aaron
Hi Community,

It seems that rows disappeared: the same query gets different results.

carbon.time(carbon.sql(
  s"""
 |EXPLAIN SELECT date, market_code, device_code, country_code,
category_id, product_id, est_free_app_download, est_paid_app_download,
est_revenue
 |FROM store
 |WHERE date = '2016-09-01' AND device_code='ios-phone' AND
country_code='EE' AND product_id IN (590416158, 590437560)"""
.stripMargin).show(truncate=false)
)


Screen_Shot_2018-09-26_at_11.png

  
Screen_Shot_2018-09-26_at_11.png

  



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-26 Thread aaron
Thanks, I've checked already and it works well! Very impressive quick response!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Bloomfilter datamap

2018-09-25 Thread aaron
Based on that fix, dropping the existing table and data and re-creating the table
and datamap works exactly as you said, no problem.
But yesterday I did not delete the data and table, I just created a new datamap,
and that caused some problems.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Bloomfilter datamap

2018-09-25 Thread aaron
But one more comment: it seems that the BloomFilter datamap disappears from the
query plan of a detailed query? So what is the use case intended for the
BloomFilter?



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-25 Thread aaron
Thanks a lot! Looking forward to your good news.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Bloomfilter datamap

2018-09-25 Thread aaron
Yes, you're right. The fix makes master work now.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-25 Thread aaron
Hi kunalkapoor, 

Thanks very much for your quick response!

1. For the global dictionary issue, do you have a rough plan for the fix?
2. How is the local dictionary bug on Spark 2.3.1 coming along?

Looking forward to the fix!
 
Thanks
Aaron



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Bloomfilter datamap

2018-09-25 Thread aaron
Great! Thanks for your quick response! I will have a try. Do you mean
that I should merge https://github.com/apache/carbondata/pull/2665?

Thanks, aaron



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: [Issue] Bloomfilter datamap

2018-09-24 Thread aaron
I use 1.5.0-SNAPSHOT, but I'm not sure about 1.4.1 (I forget whether I have
tested it or not).



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


[Issue] Enable/disable datamap not work on 1.5.0-SNAPSHOT

2018-09-24 Thread aaron
Hi community,

Disable/Enable datamap does not work on 1.5.0-SNAPSHOT.


Stacktrace,

scala> carbon.sql("show datamap on table test1").show(truncate=false)
+-----------+----------+---------------------+------------------------------------------+
|DataMapName|ClassName |Associated Table     |DataMap Properties                        |
+-----------+----------+---------------------+------------------------------------------+
|agg_day    |timeSeries|default.test1_agg_day|'day_granularity'='1', 'event_time'='date'|
+-----------+----------+---------------------+------------------------------------------+

scala> carbon.sql("SET carbon.datamap.visible.default.test1.agg_day =
false")
org.apache.carbondata.core.exception.InvalidConfigurationException: Invalid
configuration of carbon.datamap.visible.default.test1.agg_day, datamap does
not exist
  at
org.apache.carbondata.core.util.SessionParams.validateKeyValue(SessionParams.java:246)
  at
org.apache.carbondata.core.util.SessionParams.addProperty(SessionParams.java:121)
  at
org.apache.carbondata.core.util.SessionParams.addProperty(SessionParams.java:110)
  at
org.apache.spark.sql.hive.execution.command.CarbonSetCommand$.validateAndSetValue(CarbonHiveCommands.scala:112)
  at
org.apache.spark.sql.hive.execution.command.CarbonSetCommand.run(CarbonHiveCommands.scala:74)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
  at org.apache.spark.sql.Dataset.(Dataset.scala:183)
  at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:106)
  at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:95)
  at
org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:153)
  at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:93)
  ... 50 elided

Thanks
Aaron



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Bloomfilter datamap with pre agg datamap will break normal group by query

2018-09-24 Thread aaron
Hi Community,

I found that the BloomFilter datamap together with a pre-aggregate datamap breaks
a normal GROUP BY query. When I drop the BloomFilter datamap, the query works.
*
Demo SQL:

CREATE TABLE IF NOT EXISTS store(
 market_code STRING,
 device_code STRING,
 country_code STRING,
 category_id INTEGER,
 product_id LONG,
 date TIMESTAMP,
 est_free_app_download LONG,
 est_paid_app_download LONG,
 est_revenue LONG
 )
 STORED BY 'carbondata'
 TBLPROPERTIES(
 'SORT_COLUMNS'='market_code, device_code, country_code, category_id, date,
product_id',
 'NO_INVERTED_INDEX'='est_free_app_download, est_paid_app_download,
est_revenue',
 'DICTIONARY_INCLUDE' = 'market_code, device_code, country_code,
category_id, product_id',
 'SORT_SCOPE'='GLOBAL_SORT',
 'CACHE_LEVEL'='BLOCKLET',
 'TABLE_BLOCKSIZE'='256',
 'GLOBAL_SORT_PARTITIONS'='2'
 )


CREATE DATAMAP IF NOT EXISTS agg_by_day ON TABLE store
 USING 'timeSeries'
 DMPROPERTIES (
 'EVENT_TIME'='date',
 'DAY_GRANULARITY'='1')
 AS SELECT date, market_code, device_code, country_code, category_id,
 COUNT(date), COUNT(est_free_app_download), COUNT(est_free_app_download),
COUNT(est_revenue),
 SUM(est_free_app_download), MIN(est_free_app_download),
MAX(est_free_app_download),
 SUM(est_paid_app_download), MIN(est_paid_app_download),
MAX(est_paid_app_download),
 SUM(est_revenue), MIN(est_revenue), MAX(est_revenue)
 FROM store
 GROUP BY date, market_code, device_code, country_code, category_id

CREATE DATAMAP IF NOT EXISTS bloomfilter_all_dimensions ON TABLE store
 USING 'bloomfilter'
 DMPROPERTIES (
 'INDEX_COLUMNS'='market_code, device_code, country_code, category_id, date,
product_id',
 'BLOOM_SIZE'='64',
 'BLOOM_FPP'='0.01',
 'BLOOM_COMPRESS'='true'
 )
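
For reference, the workaround mentioned above (dropping the BloomFilter datamap so
the GROUP BY query works again) is a single statement; this sketch uses the datamap
name defined above:

// Sketch: drop only the BloomFilter datamap; the timeseries pre-aggregation datamap stays in place.
carbon.sql("DROP DATAMAP IF EXISTS bloomfilter_all_dimensions ON TABLE store")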


*
This is the stack trace,


carbon.time(carbon.sql(
 |   s"""
 |  |SELECT date, market_code, device_code, country_code,
category_id, sum(est_free_app_download)
 |  |FROM store
 |  |WHERE date BETWEEN '2016-09-01' AND '2016-09-03' AND
device_code='ios-phone' AND country_code='EE' AND category_id=100021 AND
product_id IN (590416158, 590437560)
 |  |GROUP BY date, market_code, device_code, country_code,
category_id"""
 | .stripMargin).show(truncate=false)
 | )
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute,
tree:
Exchange hashpartitioning(date#21, market_code#16, device_code#17,
country_code#18, category_id#19, 2)
+- *(1) HashAggregate(keys=[date#21, market_code#16, device_code#17,
country_code#18, category_id#19],
functions=[partial_sum(est_free_app_download#22L)], output=[date#21,
market_code#16, device_code#17, country_code#18, category_id#19, sum#74L])
   +- *(1) CarbonDictionaryDecoder [default_store],
IncludeProfile(ArrayBuffer(category_id#19)), CarbonAliasDecoderRelation(),
org.apache.spark.sql.CarbonSession@213d5189
  +- *(1) Project [market_code#16, device_code#17, country_code#18,
category_id#19, date#21, est_free_app_download#22L]
 +- *(1) FileScan carbondata
default.store[category_id#19,market_code#16,country_code#18,device_code#17,est_free_app_download#22L,date#21]
PushedFilters: [IsNotNull(date), IsNotNull(device_code),
IsNotNull(country_code), IsNotNull(category_id), Greate...

  at
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
  at
org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at
org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
  at
org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:150)
  at
org.apache.spark.sql.CarbonDictionaryDecoder.inputRDDs(CarbonDictionaryDecoder.scala:244)
  at
org.apache.spark.sql.execution.BaseLimitExec$class.inputRDDs(limit.scala:62)
  at org.apache.spark.sql.execution.LocalLimitExec.inputRDDs(limit.scala:97)
  at
org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at
org.apache.spark.rdd.RDDOperationScope$.withScope(

Re: Issues about dictionary and S3

2018-09-24 Thread aaron
Hi kunalkapoor,

More info for you.

1. One comment about how to reproduce this: the query was distributed to
Spark workers on different nodes for execution.

2. Detailed stacktrace:

scala> carbon.time(carbon.sql(
 |   s"""SELECT sum(est_free_app_download), timeseries(date,
'MONTH'), country_code
 |  |FROM store WHERE market_code='apple-store' and
device_code='ios-phone' and country_code IN ('US', 'CN')
 |  |GROUP BY timeseries(date, 'MONTH'), market_code,
device_code, country_code, category_id""".stripMargin).show(truncate=false))
18/09/23 23:42:42 AUDIT CacheProvider:
[ec2-dca-aa-p-sdn-16.appannie.org][hadoop][Thread-1]The key
carbon.query.directQueryOnDataMap.enabled with value true added in the
session param
[Stage 0:>  (0 + 2)
/ 2]18/09/23 23:42:46 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID
1, 10.2.3.19, executor 1): java.lang.RuntimeException: Error while resolving
filter expression
at
org.apache.carbondata.core.metadata.schema.table.CarbonTable.resolveFilter(CarbonTable.java:1043)
at
org.apache.carbondata.core.scan.model.QueryModelBuilder.build(QueryModelBuilder.java:322)
at
org.apache.carbondata.hadoop.api.CarbonInputFormat.createQueryModel(CarbonInputFormat.java:632)
at
org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:419)
at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:78)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at
org.apache.carbondata.core.scan.executor.util.QueryUtil.getTableIdentifierForColumn(QueryUtil.java:401)
at
org.apache.carbondata.core.scan.filter.FilterUtil.getForwardDictionaryCache(FilterUtil.java:1416)
at
org.apache.carbondata.core.scan.filter.FilterUtil.getFilterValues(FilterUtil.java:712)
at
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.visitor.DictionaryColumnVisitor.populateFilterResolvedInfo(DictionaryColumnVisitor.java:60)
at
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.DimColumnResolvedFilterInfo.populateFilterInfoBasedOnColumnType(DimColumnResolvedFilterInfo.java:119)
at
org.apache.carbondata.core.scan.filter.resolver.ConditionalFilterResolverImpl.resolve(ConditionalFilterResolverImpl.java:107)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.traverseAndResolveTree(FilterExpressionProcessor.java:255)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.traverseAndResolveTree(FilterExpressionProcessor.java:254)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.traverseAndResolveTree(FilterExpressionProcessor.java:254)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.traverseAndResolveTree(FilterExpressionProcessor.java:254)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.traverseAndResolveTree(FilterExpressionProcessor.java:254)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.getFilterResolvertree(FilterExpressionProcessor.java:235)
at
org.apache.carbondata.core.scan.filter.FilterExpressionProcessor.getFilterResolver(FilterExpressionProcessor.java:84)
at
org.apache.carbondata.core.metadata.schema.table.CarbonTable.resolveFilter(CarbonTable.java:1041)
... 19 more

18/09/23 23:42:48 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times;
aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0
(TID 7, 10.2.3.19, executor 1): java.lang.RuntimeException: Error while
resolving filter expression
at
org.apache.carbondata.core.metadata.schema.table.CarbonTable.resolveFilter(CarbonTable.java:1043)
at
org.apache.ca

Re: Issues about dictionary and S3

2018-09-24 Thread aaron
Hi kunalkapoor,

Thanks very much for your quick response. I care most about the issue below,
because it would impact our implementation a lot.

For the issue "Dictionary decoding does not work when the dictionary column is
used for filter/join on a preaggregate (timeseries) table", I have tested the
combinations below with workers distributed across different machines, and all
of them behave the same - they raise an exception like "Caused by:
java.lang.RuntimeException: Error while resolving filter expression".

1. carbondata1.4.1 & spark2.2.1
2. carbondata1.5.0-SNAPSHOT & spark2.2.1
3. carbondata1.5.0-SNAPSHOT & spark2.2.2
4. carbondata1.5.0-SNAPSHOT & spark2.3.1

We would use many preaggregate tables in our business, and filter & join
would be very common cases for us.

Looking forward to your good news.

Thanks
Aaron

 




--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: Issues about dictionary and S3

2018-09-23 Thread aaron
One typo fix: the Spark version for No. 2 should be 2.2.1, pre-built with Hadoop
2.7.2.



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Issues about dictionary and S3

2018-09-23 Thread aaron
Hi Community,

I found some possible issues with dictionary and S3 compatibility during a POC;
I attached them in a CSV. Could you please have a look?


Thanks
Aaron Possible_Issues.csv
<http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/file/t357/Possible_Issues.csv>
  



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: error occur when I load data to s3

2018-09-05 Thread aaron
Hi kunalkapoor, Thanks very much for your guidance,  you are totally right! 
It works now.





--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: master timeSeries DATAMAP does not work well as 1.4.1

2018-09-05 Thread aaron
My mistake. MONTH and YEAR should roll up from DAY
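
For reference, a minimal sketch of what that roll-up means in practice: define the
timeseries datamap at DAY granularity (the original post below used
MONTH_GRANULARITY), and MONTH and YEAR aggregations are then rolled up from it.
Table and column names follow the test_store_int example; this is only a sketch:

// Sketch: a DAY-granularity timeseries datamap; MONTH and YEAR queries roll up from it.
carbon.sql(
  s"""
     |CREATE DATAMAP IF NOT EXISTS test_store_int_agg_by_day ON TABLE test_store_int
     |USING 'timeSeries'
     |DMPROPERTIES (
     |  'EVENT_TIME'='date',
     |  'DAY_GRANULARITY'='1')
     |AS SELECT date, market_code, device_code, country_code, category_id, product_id,
     |  sum(revenue), count(revenue), min(revenue), max(revenue)
     |FROM test_store_int
     |GROUP BY date, market_code, device_code, country_code, category_id, product_id
   """.stripMargin)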



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


master timeSeries DATAMAP does not work well as 1.4.1

2018-09-04 Thread aaron
Hi All, 

It seems that the master timeSeries DATAMAP does not work as well as in 1.4.1;
could you please have a look?


Demo data:

+-----------+-----------+-------------------+------------+-----------+----------+-------+
|market_code|device_code|date               |country_code|category_id|product_id|revenue|
+-----------+-----------+-------------------+------------+-----------+----------+-------+
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |10        |  73481|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |11        | 713316|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |12        | 657503|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |13        | 764930|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |14        | 835665|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |15        | 599234|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |16        |  22451|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |17        |  17284|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |18        | 118846|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |19        | 735783|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100010    | 698596|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100011    | 788919|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100012    | 817443|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100013    | 839801|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100014    | 880020|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100015    | 808019|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100016    | 740226|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100017    | 473469|
|apple-store|ios-phone  |2018-02-01 00:00:00|CA          |1          |100018    | 322765|

SQL:

carbon.sql("DROP TABLE IF EXISTS test_store_int")
val createMainTableSql = s"""
  | CREATE TABLE test_store_int(
  | market_code VARCHAR(50),
  | device_code VARCHAR(50),
  | date TIMESTAMP,
  | country_code CHAR(2),
  | category_id INTEGER,
  | product_id LONG,
  | revenue INTEGER
  | )
  | STORED BY 'carbondata'
  | TBLPROPERTIES(
  | 'SORT_COLUMNS'='market_code, device_code, country_code, category_id,
date',
  | 'DICTIONARY_INCLUDE'='market_code, device_code, country_code,
category_id, date, product_id',
  | 'NO_INVERTED_INDEX'='revenue',
  | 'SORT_SCOPE'='GLOBAL_SORT'
  | )
""".stripMargin
print(createMainTableSql)
carbon.sql(createMainTableSql)

carbon.sql("DROP DATAMAP test_store_int_agg_by_month ON TABLE
test_store_int")
val createTimeSeriesTableSql = s"""
  | CREATE DATAMAP test_store_int_agg_by_month ON TABLE test_store_int
  | USING 'timeSeries'
  | DMPROPERTIES (
  | 'EVENT_TIME'='date',
  | 'MONTH_GRANULARITY'='1')
  | AS SELECT date, market_code, device_code, country_code, category_id,
product_id, sum(revenue), count(revenue), min(revenue), max(revenue) FROM
test_store_int
  | GROUP BY date, market_code, device_code, country_code, category_id,
product_id
""".stripMargin
print(createTimeSeriesTableSql)
carbon.sql(createTimeSeriesTableSql)

Query plan:

1. By month, works
carbon.sql(s"""explain select market_code, device_code, country_code,
category_id, product_id, sum(revenue), timeseries(date, 'month') from
test_store_int group by timeseries(date, 'month'), market_code, device_code,
country_code, category_id, product_id""".stripMargin).show(200,
truncate=false)

|== CarbonData Profiler ==
Query rewrite based on DataMap:
 - test_store_int_agg_by_month (timeseries)
Table Scan on test_store_int_test_store_int_agg_by_month
 - total blocklets: 4
 - filter: none
 - pruned by Main DataMap
- skipped blocklets: 0

2. By year, does not work
carbon.sql(s"""explain select market_code, device_code, country_code,
category_id, product_id, sum(revenue), timeseries(date, 'year') from
test_store_int group by t

Re: error occur when I load data to s3

2018-09-03 Thread aaron
9/04 14:45:10 DEBUG headers: << Accept-Ranges: bytes
18/09/04 14:45:10 DEBUG headers: << Content-Type: application/octet-stream
18/09/04 14:45:10 DEBUG headers: << Content-Length: 0
18/09/04 14:45:10 DEBUG headers: << Server: AmazonS3
18/09/04 14:45:10 DEBUG SdkHttpClient: Connection can be kept alive
indefinitely
18/09/04 14:45:10 DEBUG request: Received successful response: 200, AWS
Request ID: A1AD0240EBDD2234
18/09/04 14:45:10 DEBUG PoolingClientConnectionManager: Connection [id:
1][route: {s}->https://aa-sdk-test2.s3.us-east-1.amazonaws.com:443] can be
kept alive indefinitely
18/09/04 14:45:10 DEBUG PoolingClientConnectionManager: Connection released:
[id: 1][route:
{s}->https://aa-sdk-test2.s3.us-east-1.amazonaws.com:443][total kept alive:
1; route allocated: 1 of 15; total allocated: 1 of 15]
18/09/04 14:45:10 DEBUG S3AFileSystem: OutputStream for key
'carbon-data/example/LockFiles/concurrentload.lock' writing to tempfile:
/tmp/hadoop-aaron/s3a/output-8508205130207286174.tmp
18/09/04 14:45:10 ERROR CarbonLoadDataCommand: main 
java.lang.ArrayIndexOutOfBoundsException
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:128)
at 
org.apache.hadoop.fs.s3a.S3AOutputStream.write(S3AOutputStream.java:164)
at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStream(S3CarbonFile.java:111)
at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStreamUsingAppend(S3CarbonFile.java:93)
at
org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStreamUsingAppend(FileFactory.java:289)
at org.apache.carbondata.core.locks.S3FileLock.lock(S3FileLock.java:96)
at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:41)
at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:59)
at
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.acquireConcurrentLoadLock(CarbonLoadDataCommand.scala:399)
at
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:259)
at
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
at org.apache.spark.sql.Dataset.(Dataset.scala:183)
at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:106)
at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:95)
at 
org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:153)
at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:93)
at org.apache.carbondata.examples.S3Example$.main(S3Example.scala:91)
at org.apache.carbondata.examples.S3Example.main(S3Example.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/09/04 14:45:10 AUDIT CarbonLoadDataCommand:
[aaron.lan.appannie.com][aaron][Thread-1]Dataload failure for
default.carbon_table. Please check the logs
18/09/04 14:45:10 DEBUG Client: The ping interval is 6 ms.
18/09/04 14:45:10 DEBUG Client: Connecting to localhost/127.0.0.1:9000
18/09/04 14:45:10 DEBUG Client: IPC Client (777046609) connection to
localhost/127.0.0.1:9000 from aaron: starting, having connections 1
18/09/04 14:45:10 DEBUG Client: IPC Client (777046609) connection to
localhost/127.0.0.1:9000 from aaron sending #3
18/09/04 14:45:10 DEBUG Client: IPC Client (777046609) connection to
localhost/127.0.0.1:9000 from aaron got value #3
18/09/04 14:45:10 DEBUG ProtobufRpcEngine: Call: getFileInfo took 6ms
18/09/04 14:45:10 DEBUG AbstractDFSCarbonFile: main Exception occurred:File
does not exist:
hdfs://localhost:9000/usr/carbon-meta/partition/default/carbon_table
18/09/04 14:45:10 DEBUG Client: I

Re: error occur when I load data to s3

2018-09-03 Thread aaron
Hi kunalkapoor,
   It seems that the error is not fixed yet. Do you have any idea?

thanks
aaron

aaron:2.2.1 aaron$ spark-shell --executor-memory 4g --driver-memory 2g
Ivy Default Cache set to: /Users/aaron/.ivy2/cache
The jars for the packages stored in: /Users/aaron/.ivy2/jars
:: loading settings :: url =
jar:file:/usr/local/Cellar/apache-spark/2.2.1/lib/apache-carbondata-1.5.0-SNAPSHOT-bin-spark2.2.1-hadoop2.7.2.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.amazonaws#aws-java-sdk added as a dependency
org.apache.hadoop#hadoop-aws added as a dependency
com.databricks#spark-avro_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found com.amazonaws#aws-java-sdk;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-support;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-core;1.10.75.1 in central
found commons-logging#commons-logging;1.1.3 in central
found org.apache.httpcomponents#httpclient;4.3.6 in local-m2-cache
found org.apache.httpcomponents#httpcore;4.3.3 in local-m2-cache
found commons-codec#commons-codec;1.6 in local-m2-cache
found com.fasterxml.jackson.core#jackson-databind;2.5.3 in central
found com.fasterxml.jackson.core#jackson-annotations;2.5.0 in central
found com.fasterxml.jackson.core#jackson-core;2.5.3 in central
found com.fasterxml.jackson.dataformat#jackson-dataformat-cbor;2.5.3 in
central
found joda-time#joda-time;2.8.1 in central
found com.amazonaws#aws-java-sdk-simpledb;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-simpleworkflow;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-storagegateway;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-route53;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-s3;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-kms;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-importexport;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-sts;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-sqs;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-rds;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-redshift;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-elasticbeanstalk;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-glacier;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-sns;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-iam;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-datapipeline;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-elasticloadbalancing;1.10.75.1 in 
central
found com.amazonaws#aws-java-sdk-emr;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-elasticache;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-elastictranscoder;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-ec2;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-dynamodb;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudtrail;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudwatch;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-logs;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-events;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cognitoidentity;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cognitosync;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-directconnect;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudformation;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudfront;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-kinesis;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-opsworks;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-ses;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-autoscaling;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudsearch;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudwatchmetrics;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-swf-libraries;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-codedeploy;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-codepipeline;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-config;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-lambda;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-ecs;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-ecr;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-cloudhsm;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-ssm;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-workspaces;1.10.75.1 in central
found com.amazonaws#aws-java-sdk-machinelearning;1.10.75.1 in central

Re: error occur when I load data to s3

2018-09-03 Thread aaron
Thanks, you're right. Succeed already!



--
Sent from: 
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/


Re: error occur when I load data to s3

2018-09-03 Thread aaron
Compile failed.

My env is,

aaron:carbondata aaron$ java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
aaron:carbondata aaron$ mvn -v
Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d;
2017-10-18T15:58:13+08:00)
Maven home: /usr/local/Cellar/maven/3.5.2/libexec
Java version: 1.8.0_144, vendor: Oracle Corporation
Java home:
/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.13.6", arch: "x86_64", family: "mac"
aaron:carbondata aaron$ scala -version
Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL

Error info is,

[ERROR] COMPILATION ERROR : 
[INFO] -----
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java:[2230,12]
an enum switch case label must be the unqualified name of an enumeration
constant
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java:[160,51]
cannot find symbol
  symbol:   variable MAP
  location: class org.apache.carbondata.format.DataType
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java:[501,12]
an enum switch case label must be the unqualified name of an enumeration
constant
[INFO] 3 errors 
[INFO] -
[INFO]

[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent  SUCCESS [  3.251 s]
[INFO] Apache CarbonData :: Common  SUCCESS [  9.868 s]
[INFO] Apache CarbonData :: Core .. FAILURE [  5.734 s]
[INFO] Apache CarbonData :: Processing  SKIPPED
[INFO] Apache CarbonData :: Hadoop  SKIPPED
[INFO] Apache CarbonData :: Streaming . SKIPPED
[INFO] Apache CarbonData :: Store SDK . SKIPPED
[INFO] Apache CarbonData :: Spark Datasource .. SKIPPED
[INFO] Apache CarbonData :: Spark Common .. SKIPPED
[INFO] Apache CarbonData :: Search  SKIPPED
[INFO] Apache CarbonData :: Lucene Index DataMap .. SKIPPED
[INFO] Apache CarbonData :: Bloom Index DataMap ... SKIPPED
[INFO] Apache CarbonData :: Spark2  SKIPPED
[INFO] Apache CarbonData :: Spark Common Test . SKIPPED
[INFO] Apache CarbonData :: DataMap Examples .. SKIPPED
[INFO] Apache CarbonData :: Assembly .. SKIPPED
[INFO] Apache CarbonData :: Hive .. SKIPPED
[INFO] Apache CarbonData :: presto  SKIPPED
[INFO] Apache CarbonData :: Spark2 Examples ... SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 19.595 s
[INFO] Finished at: 2018-09-04T09:06:59+08:00
[INFO] Final Memory: 56M/583M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile)
on project carbondata-core: Compilation failure: Compilation failure: 
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java:[2230,12]
an enum switch case label must be the unqualified name of an enumeration
constant
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java:[160,51]
cannot find symbol
[ERROR]   symbol:   variable MAP
[ERROR]   location: class org.apache.carbondata.format.DataType
[ERROR]
/Users/aaron/workspace/carbondata/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java:[501,12]
an enum switch case label must be the unqualified name of an enumeration
constant
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :carbondata-core





Re: error occur when I load data to s3

2018-09-03 Thread aaron
Thanks, I will have a try!





Re: error occur when I load data to s3

2018-09-03 Thread aaron
Thanks, I will have a try.





Re: error occur when I load data to s3

2018-09-02 Thread aaron
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:569)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:565)
at 
scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:807)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:681)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:395)
at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:415)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:923)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at
scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at org.apache.spark.repl.Main$.doMain(Main.scala:74)
at org.apache.spark.repl.Main$.main(Main.scala:54)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/09/02 21:49:47 AUDIT CarbonLoadDataCommand:
[aaron.local][aaron][Thread-1]Dataload failure for default.test_s3_table.
Please check the logs
18/09/02 21:49:47 ERROR CarbonLoadDataCommand: main Got exception
java.lang.ArrayIndexOutOfBoundsException when processing data. But this
command does not support undo yet, skipping the undo part.
java.lang.ArrayIndexOutOfBoundsException
  at java.lang.System.arraycopy(Native Method)
  at java.io.BufferedOutputStream.write(BufferedOutputStream.java:128)
  at
org.apache.hadoop.fs.s3a.S3AOutputStream.write(S3AOutputStream.java:164)
  at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
  at java.io.DataOutputStream.write(DataOutputStream.java:107)
  at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStream(S3CarbonFile.java:111)
  at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStreamUsingAppend(S3CarbonFile.java:93)
  at
org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStreamUsingAppend(FileFactory.java:276)
  at org.apache.carbondata.core.locks.S3FileLock.lock(S3FileLock.java:96)
  at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:41)
  at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:59)
  at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:247)
  at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:204)
  at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:437)
  at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:446)
  at
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:263)
  at
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:92)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
  at org.apache.spark.sql.Dataset.&lt;init&gt;(Dataset.scala:183)
  at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:107)
  at
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:96)
  at
org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:154)
  at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:94)
  ... 52 elided

aaron wrote
> Hi dear community, could anybody please kindly tell me what happened?  
> 
> *Env*:
> 
> 1.spark 2.2.1 + carbon1.4.1
> 2.spark.jars.packages 
> com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2
> 3.spark.driver.extraClassPath
> file:///usr/local/Cellar/apache-spark/2.2.1/lib/*
> spark.executor.extraClassPath
> file:///usr/local/Cellar/apache-spark/2.2.1/lib/* 
> lib folder include below jars
> -rw-r--r--@ 1 aaron  staff52M Aug 29 20:50
> apache-carbond

Re: [DISCUSSION] Support Standard Spark's FileFormat interface in Carbondata

2018-08-31 Thread aaron
Does this mean that we could call carbon in pyspark?
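
(For illustration — a rough Scala sketch of the kind of standard DataFrame-API access such an interface would enable. PySpark only forwards the format name and options to the JVM, so once carbon is reachable through the generic datasource path, the same calls become expressible from Python. The session name carbon and the table test_s3_table are assumptions borrowed from the S3 thread below.)

// Hedged sketch: read an existing carbon table through the generic DataFrame
// reader instead of CarbonSession-specific SQL. Names here are placeholders.
val df = carbon.read
  .format("carbondata")
  .option("tableName", "test_s3_table")
  .load()

df.filter("age > 30").show()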






error occur when I load data to s3

2018-08-29 Thread aaron
Hi dear community, could anybody please kindly tell me what happened?  

*Env*:

1.spark 2.2.1 + carbon1.4.1
2.spark.jars.packages 
com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.2
3.spark.driver.extraClassPath
file:///usr/local/Cellar/apache-spark/2.2.1/lib/*
spark.executor.extraClassPath
file:///usr/local/Cellar/apache-spark/2.2.1/lib/* 
lib folder include below jars
-rw-r--r--@ 1 aaron  staff52M Aug 29 20:50
apache-carbondata-1.4.1-bin-spark2.2.1-hadoop2.7.2.jar
-rw-r--r--  1 aaron  staff   764K Aug 29 21:33 httpclient-4.5.4.jar
-rw-r--r--  1 aaron  staff   314K Aug 29 21:40 httpcore-4.4.jar


*Code*:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._
import org.apache.spark.sql.catalyst.util._
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.constants.CarbonCommonConstants

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.LOCK_TYPE, "HDFSLOCK")

val carbon = SparkSession.builder()
  .config(sc.getConf)
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "xxx")
  .config("spark.hadoop.fs.s3a.secret.key", "xxx")
  .getOrCreateCarbonSession("hdfs://localhost:9000/usr/carbon-meta")

carbon.sql("CREATE TABLE IF NOT EXISTS test_s3_table(id string, name string, city string, age Int) STORED BY 'carbondata' LOCATION 's3a://key:password@aaron-s3-poc/'")
carbon.sql("LOAD DATA INPATH 'hdfs://localhost:9000/usr/carbon-s3/sample.csv' INTO TABLE test_s3_table")

*s3 files,*

aws s3 ls s3://aaron-s3-poc/ --human --recursive
2018-08-29 22:13:320 Bytes LockFiles/tablestatus.lock
2018-08-29 21:41:36  616 Bytes Metadata/schema


*Issue 1,* when I create the table, carbondata raises the exception
"com.amazonaws.AmazonClientException: Unable to load AWS credentials from
any provider in the chain" even though:
a. I set related properties in spark-default.conf like
spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem  
spark.hadoop.fs.s3a.awsAccessKeyId=xxx
spark.hadoop.fs.s3a.awsSecretAccessKey=xxx
spark.hadoop.fs.s3a.access.key=xxx
spark.hadoop.fs.s3a.secret.key=xxx
b. config in code
val carbon = SparkSession.builder()
  .config(sc.getConf)
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "xxx")
  .config("spark.hadoop.fs.s3a.secret.key", "xxx")
  .getOrCreateCarbonSession("hdfs://localhost:9000/usr/carbon-meta")
c. spark-submit conf
Finally I succeeded when I put the credentials directly in LOCATION
's3a://key:password@aaron-s3-poc/', but it's very strange. Could anyone tell
me why?
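
(For illustration — a minimal Scala sketch, assuming spark-shell so that sc is the live SparkContext and plain access-key/secret-key authentication is intended: setting the S3A properties on the SparkContext's Hadoop configuration before the CarbonSession is created is another place S3A looks for credentials, and it keeps them out of the LOCATION clause. The "xxx" values and the bucket path are placeholders, not a confirmed fix.)

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Hedged sketch: configure S3A through the Hadoop configuration instead of
// embedding key:secret in the table LOCATION.
sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.access.key", "xxx")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "xxx")

val carbon = SparkSession.builder()
  .config(sc.getConf)
  .getOrCreateCarbonSession("hdfs://localhost:9000/usr/carbon-meta")

// With the credentials in the Hadoop conf, the LOCATION can stay free of secrets.
carbon.sql("CREATE TABLE IF NOT EXISTS test_s3_table(id string, name string, city string, age Int) STORED BY 'carbondata' LOCATION 's3a://aaron-s3-poc/'")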


*Issue 2,* Load data failed

scala> carbon.sql("LOAD DATA INPATH
'hdfs://localhost:9000/usr/carbon-s3/sample.csv' INTO TABLE test_s3_table")
18/08/29 22:13:35 ERROR CarbonLoaderUtil: main Unable to unlock Table lock
for tabledefault.test_s3_table during table status updation
18/08/29 22:13:35 ERROR CarbonLoadDataCommand: main 
java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:128)
at 
org.apache.hadoop.fs.s3a.S3AOutputStream.write(S3AOutputStream.java:164)
at
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStream(S3CarbonFile.java:111)
at
org.apache.carbondata.core.datastore.filesystem.S3CarbonFile.getDataOutputStreamUsingAppend(S3CarbonFile.java:93)
at
org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStreamUsingAppend(FileFactory.java:276)
at org.apache.carbondata.core.locks.S3FileLock.lock(S3FileLock.java:96)
at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:41)
at
org.apache.carbondata.core.locks.AbstractCarbonLock.lockWithRetries(AbstractCarbonLock.java:59)
at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:247)
at
org.apache.carbondata.processing.util.CarbonLoaderUtil.recordNewLoadMetadata(CarbonLoaderUtil.java:204)
at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:437)
at
org.apache.carbondata.processing.util.CarbonLoaderUtil.readAndUpdateLoadProgressInTableMeta(CarbonLoaderUtil.java:446)
at
org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand.processData(CarbonLoadDataCommand.scala:263)
at
org.apache.spark.sql.execution.command.AtomicRunnableCommand.ru