Hi
Good idea, thank you for starting this discussion.
I agree with Ravi's comments; we need to double-check some limitations after
introducing the feature.
Flink and Kafka integration can be discussed later.
For using the SDK to write new data to an existing CarbonData table, some
questions:
1. How to ensu
Congratulations Xubo!!!
On Sat, 8 Dec 2018 at 8:37 AM, Liang Chen wrote:
> Hi all
>
> We are pleased to announce that the PMC has invited Bo Xu as new
> Apache CarbonData
> committer, and the invite has been accepted!
>
> Congrats to Bo Xu and welcome aboard.
>
> Regards
> Apache CarbonData PMC
Thanks all. I am very glad that the Apache CarbonData PMC invited me to be a
committer.
I will continue to work hard to contribute to the Apache CarbonData
community.
Thank you!
Best wishes!
Xubo
Congratulations xubo
On Sat, Dec 8, 2018, 9:53 AM kanaka kumar avvaru <
kanakakumaravv...@gmail.com> wrote:
> Congrats Xubo.
>
> -Regards
> Kanaka
>
> On Sat 8 Dec, 2018, 09:41 Raghunandan S wrote:
>
> > Congrats xubo. Welcome on board
> >
> > On Sat, 8 Dec 2018, 8:37 am Liang Chen, wrote:
> >
>
Congrats Xubo.
-Regards
Kanaka
On Sat 8 Dec, 2018, 09:41 Raghunandan S wrote:
> Congrats xubo. Welcome on board
>
> On Sat, 8 Dec 2018, 8:37 am Liang Chen, wrote:
>
> > Hi all
> >
> > We are pleased to announce that the PMC has invited Bo Xu as new
> > Apache CarbonData
> > committer, and the invite has
Congrats xubo. Welcome on board
On Sat, 8 Dec 2018, 8:37 am Liang Chen, wrote:
> Hi all
>
> We are pleased to announce that the PMC has invited Bo Xu as new
> Apache CarbonData
> committer, and the invite has been accepted!
>
> Congrats to Bo Xu and welcome aboard.
>
> Regards
> Apache CarbonDat
Hi all
We are pleased to announce that the PMC has invited Bo Xu as new
Apache CarbonData
committer, and the invite has been accepted!
Congrats to Bo Xu and welcome aboard.
Regards
Apache CarbonData PMC
Hi Jacky,
It's a good idea to support writing a transactional table from the SDK, but we
need to add the following limitations as well:
1. It can work only on file systems that can take an append lock, like HDFS.
2. Compaction and delete segment cannot be done on online segments till it is
converted to the transactio
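For context, below is a minimal sketch of what an SDK write looks like today
(the path, schema, and builder options are illustrative and may vary by SDK
version); the transactional-table support discussed here would build on this:

import org.apache.carbondata.sdk.file.{CarbonWriter, Field, Schema}
import org.apache.carbondata.core.metadata.datatype.DataTypes

// Hypothetical output path; with the proposal this would point at a segment
// of an existing transactional table instead of a standalone folder.
val path = "hdfs://localhost:9000/carbon/store/default/t1/segment_x"
val fields = Array(new Field("name", DataTypes.STRING), new Field("age", DataTypes.INT))

val writer = CarbonWriter.builder()
  .outputPath(path)
  .withCsvInput(new Schema(fields))
  .writtenBy("SDK")
  .build()

writer.write(Array("bob", "25"))   // one CSV-style row
writer.close()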
Hi,
I am trying to run the example from the CarbonData quick-start guide:
https://carbondata.apache.org/quick-start-guide.html
I run it through spark-shell in local mode.
Start command:
/opt/spark2.3.2/bin/spark-shell --jars
apache-carbondata-1.5.1-bin-spark2.3.2-hadoop2.7.2.jar --master local
Code:
val store = "hd
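For reference, the example from the guide roughly continues like this (the
store path and CSV path below are only placeholders, and older releases use
STORED BY 'carbondata' instead of STORED AS carbondata):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Placeholder store path; use a local or HDFS directory you can write to.
val store = "hdfs://localhost:9000/carbon/store"
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(store)

carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id STRING, name STRING, city STRING, age INT) STORED AS carbondata")
// Placeholder CSV path, as in the guide's sample.csv step.
carbon.sql("LOAD DATA INPATH '/tmp/sample.csv' INTO TABLE test_table")
carbon.sql("SELECT * FROM test_table").show()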
Hi all,
I am working on the Complex Datatype Map and want to propose a change in the
delimiters which are currently supported. We are currently using '$' and ':' as
delimiters, but this does not support the TimeStamp datatype, as it also has
':' in its format.
So, like Hive, we can change the delimiters to '\001'
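To illustrate the clash (the table and data here are hypothetical), consider a
map column loaded from CSV with the current delimiters:

// Hypothetical map<string, timestamp> column.
carbon.sql("CREATE TABLE IF NOT EXISTS events(id INT, logins MAP<STRING, TIMESTAMP>) STORED AS carbondata")

// With ':' as the key-value delimiter, an input field such as
//   user1:2018-12-08 09:41:00
// splits at every ':', so the timestamp value is corrupted.
// Hive-style non-printable delimiters ('\001', '\002', '\003') do not collide
// with any character that normally appears in the data.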
@Raghunandan subramanya
We have tested with *80 string columns with 10 high cardinality columns
(fallback happened for these columns)*, please find the stats:
*Test result is with 1 billion records, 385 GB size*
*1. Load time without local dictionary:* 66 minutes
*2. Load time without fallback loc
GitHub user sraghunandan opened a pull request:
https://github.com/apache/carbondata-site/pull/66
1.5.1 Release update
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sraghunandan/carbondata-site 1.5.1
Alternatively you can rev
+1
We should modify the delimiters as per Hive. Also update the documentation as
per the change.
Regards
Manish Gupta
+1
We already have a DDL for data type change, and the same can be used for
renaming a column. The DDL is the same as that of Hive.
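For illustration, a rough sketch of what reusing the Hive-style CHANGE syntax
could look like (table and column names are hypothetical):

// Existing data type change DDL (already supported):
carbon.sql("ALTER TABLE t1 CHANGE salary salary BIGINT")

// Proposed reuse of the same syntax for rename:
carbon.sql("ALTER TABLE t1 CHANGE salary income BIGINT")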
Regards
Manish Gupta