I cannot upload the files to this JIRA, so here they are.
Jie
On 22.06.2016 at 08:59, Jie Tao (JIRA) wrote:
Jie Tao created KYLIN-1813:
--
Summary: intermediate table in Hive not cleaned up
Key: KYLIN-1813
URL: https://issues.apache.org/jira/browse/KYLIN-1813
Project: Kylin
Issue Type: Bug
e choice
"diagonose". My Kylin is 1.5.2.1.
Cheers,
Jie
On 17.06.2016 at 11:05, ShaoFeng Shi wrote:
by default the web UI only shows the jobs from the LAST ONE WEEK; please
check.
2016-06-17 16:58 GMT+08:00 Jie Tao <jie@gameforge.com>:
actually I discarded all jobs and I do
"Error" is not a final state; a user can resume an "Error" job at any
time, so Kylin skipped the cleanup for it.
If you discard these error jobs and re-run the cleanup, the intermediate
Hive tables will be dropped.
The message here is not clear; we will change the wording...
2016-06-1
scenarios:
https://kylin.apache.org/docs15/howto/howto_cleanup_storage.html
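The linked page describes a standalone cleanup tool. For Kylin 1.5.x the invocation looks roughly like the following (class name taken from the 1.5 docs; verify it against your version before running):

```shell
# Dry run first: only list unused HBase tables, HDFS files and
# Hive intermediate tables that the cleanup would remove
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false

# Then actually delete them
${KYLIN_HOME}/bin/kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
```

Note that, as discussed above, jobs still in "Error" state must be discarded first, or their intermediate tables are skipped.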
2016-06-17 15:00 GMT+08:00 Li Yang <liy...@apache.org>:
Woo... something new to me. Anybody knows?
On Tue, Jun 14, 2016 at 6:57 PM, Jie Tao <jie@gameforge.com> wrote:
Kylin actually drops useless intermediate
. Maybe the remote disk writing takes too long?
Jie
On 16.06.2016 at 10:19, Jie Tao wrote:
Both mappers hang while spilling map output. Might some Hadoop configuration be
wrong on our cluster?
org.apache.kylin.engine.mr.steps.BaseCuboidMapperBase: Handled 120 records!
2016-06-16 10:12:01,895 INFO [main]
org.apache.kylin.engine.mr.steps.BaseCuboidMapperBase: Handled 130 records!
2016-06-16 10:12:02,571 INFO [main]
Kylin defines some config properties for Hadoop. For example,
mapred.task.timeout is defined in kylin_job_conf.xml. I have the same
property in my mapred-site.xml for Hadoop. Which value is taken when
running Kylin MR jobs for cube building?
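As I understand it (an assumption worth verifying), Kylin loads kylin_job_conf.xml into the job configuration it submits, so a property set there overrides the cluster-wide mapred-site.xml value for Kylin's MR jobs, unless the cluster marks that property as final. For example:

```xml
<!-- kylin_job_conf.xml: applies only to MR jobs submitted by Kylin,
     taking precedence over the mapred-site.xml default.
     The value below (1 hour, in ms) is just an illustration. -->
<property>
  <name>mapred.task.timeout</name>
  <value>3600000</value>
</property>
```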
The problem is: as I build a cube with a table of
: Failed to find metadata
store by url: kylin_metadata@hbase
Cheers,
Jie
On 10.06.2016 at 15:46, Jie Tao wrote:
sorry! There are indeed problems in catalina.log:
INFORMATION: Starting service Catalina
.
INFORMATION: Deploying web application archive
/home/bi-operator/apache-kylin-1.5.2.1
leak.
So: Kylin is not correctly deployed. How can I solve this problem?
Cheers,
Jie
On 10.06.2016 at 15:11, Jie Tao wrote:
I tried on a cluster with an Apache Bigtop installation: Hadoop 2.4.1,
Hive 0.13.1 and HBase 0.98.5-hadoop2. Kylin starts and there is no
exception in either
/tomcat/webapps/kylin.war
By the way, Kylin 1.3 runs well in my environment. Maybe something
else with Kylin 1.5 is related to Tomcat?
Cheers,
Jie
On 07.06.2016 at 08:40, Jie Tao wrote:
Thanks for the reply. The same error is in kylin.log. I think it is a
version conflict. I will try on another
Would you mind sharing your
scenario?
2016-06-07 17:57 GMT+08:00 Jie Tao <jie@gameforge.com>:
Is it possible to use this feature to show the last_N records, i.e. let
Kylin sort in ascending order rather than descending order?
Cheers,
Jie
It is a nice feature to build cubes directly from Kafka. From the example
in your docs I see that the table schema is extracted from the input
JSON. The question is: do you support recursive JSON structures, i.e., a
JSON attribute that is itself an object containing other attributes? Like:
{
"foo": {
after starting Kylin, this URL (http://localhost:7070/kylin/) keeps
connecting to localhost but shows nothing. This happened with 1.5.0,
1.5.1 and 1.5.2. In catalina.log there is a warning:
java.io.FileNotFoundException:
/home/tao/hadoop-2.7.1/contrib/capacity-scheduler/*.jar not found. After
-31 14:54 GMT+08:00 Jie Tao<jie@gameforge.com>:
Thanks for the message. You are right, it may be an HBase problem. I have
hbase-0.98.17-hadoop2. It may not be a Hadoop problem, because I tested
with Hadoop 2.6 and got the same error:
java.lang.IllegalArgumentException: No enum co
Just check a couple of things:
1. whether the hive table has data;
2. whether the cube has built the date range which covers 2012 to 2013;
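Both checks can be done from the Hive CLI; for the sample cube something like the following (table and column names taken from the sample data set):

```sql
-- 1. does the hive table have data?
SELECT COUNT(*) FROM kylin_sales;

-- 2. does its date range cover 2012 to 2013?
SELECT MIN(part_dt), MAX(part_dt) FROM kylin_sales;
```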
2016-03-29 17:06 GMT+08:00 Jie Tao <jie@gameforge.com>:
Hi,
I installed v1.5 and built the sample cube. All was fine. But when I query
with
select part_dt, sum(price) as total_selled, count(distinct seller_id) as
sellers from kylin_sales group by part_dt order by part_dt
I got No Result / Results (0). Kylin.log shows:
User: ADMIN
Success: true
Hello,
my fact table is in a database other than "default". I use Hive
1.2 and it cannot handle commands like hive -e 'select * from
another_database.table'. My cube build failed at the first step with the error:
FAILED: SemanticException 1:567 AS clause has an invalid number of aliases.
7, 2016, at 7:33 PM, Jie Tao <jie@gameforge.com> wrote:
Dear Kylin developers,
while building a cube I can only select the date; there is no hour selection.
Can Kylin do hourly builds, and how do I specify this?
When I get new data I manually build the cube (action->build). Is there
any process for automatic builds (say hourly/daily) so that Kylin does the
Thanks. I tried to put the jars in HDFS and set kylin.job.mr.lib.dir, but
this was not my case. I figured out that my problem was actually caused
by HBASE_CLASSPATH, which I had changed in my hbase_env.sh with export
HBASE_CLASSPATH=some directory. Actually this should be HBASE_CLASSPATH=
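The end of that sentence is cut off in the archive. A common pattern (my assumption, not necessarily the poster's exact fix) is to append the extra directory to HBASE_CLASSPATH instead of overwriting it, so any entries HBase itself relies on survive:

```shell
# hbase-env.sh sketch: append a directory (path is a placeholder),
# keeping whatever HBASE_CLASSPATH already contained
export HBASE_CLASSPATH="${HBASE_CLASSPATH:+$HBASE_CLASSPATH:}/opt/extra-jars"
echo "$HBASE_CLASSPATH"
```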
Dear Kylin developers,
I have a cluster environment (3 ZooKeeper machines, one YARN RM, one
backup RM and 3 node managers) and installed hive-hcatalog on all these nodes
under the same directory. While building the sample cube I got this error
at step 2:
Caused by: