[jira] [Commented] (IOTDB-1274) ./sbin/start-cli.sh -e 'select last * from root.*' Parser error : Msg: 401

2021-06-03 Thread Haonan Hou (Jira)


[ https://issues.apache.org/jira/browse/IOTDB-1274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17357044#comment-17357044 ]

Haonan Hou commented on IOTDB-1274:
---

[https://github.com/apache/iotdb/pull/3336]

 

> ./sbin/start-cli.sh -e 'select last * from root.*'  Parser error  : Msg: 401
> 
>
> Key: IOTDB-1274
> URL: https://issues.apache.org/jira/browse/IOTDB-1274
> Project: Apache IoTDB
>  Issue Type: Bug
>  Components: Client/CLI
>Reporter: 刘珍
>Priority: Minor
>
> 0.11.3  9e371454b05f9a0e24fabb2c84f7ce690f14a882
> start-cli.sh -e with SQL containing * hits a parser error:
> Msg: 401: line 1:18 mismatched input 'conf' expecting {FROM, ',', '.'}
>  
> set storage group root.db1;
> CREATE TIMESERIES root.db1.tab1.id WITH DATATYPE=INT32, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.name WITH DATATYPE=text, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.age WITH DATATYPE=INT32, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.country WITH DATATYPE=text, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.salary WITH DATATYPE=float, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.other WITH DATATYPE=double, ENCODING=PLAIN
> CREATE TIMESERIES root.db1.tab1.student WITH DATATYPE=boolean, ENCODING=PLAIN
>  
> insert into root.db1.tab1(time,id,name,age,country,salary,other ,student ) 
> values(now(),1,'lily',25,'usa',5678.34,7.77,false);
> insert into root.db1.tab1(time,id,name,age,country,salary,other ,student ) 
> values(now(),2,'lily2',25,'usa',5678.34,7.77,false);
>  
> ./sbin/start-cli.sh -e 'select last * from root.*'
> Msg: 401: line 1:18 mismatched input 'conf' expecting {FROM, ',', '.'}
>  
>  
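The error text ("mismatched input 'conf'") suggests the quoted SQL is being re-expanded by the shell inside the launcher script, so the bare * picks up directory names such as conf. This is an assumption about the cause, but it reproduces with any unquoted variable expansion:

```shell
# Sketch of the likely cause (an assumption): if a wrapper script expands
# its arguments unquoted, the shell glob-expands '*' against files in the
# current directory before the SQL ever reaches the CLI parser.
demo_dir=$(mktemp -d)
mkdir "$demo_dir/conf" "$demo_dir/lib"
cd "$demo_dir"
sql='select last * from root.*'
unquoted=$(printf '%s ' $sql)    # unquoted: '*' expands to 'conf lib'
quoted=$(printf '%s' "$sql")     # quoted: the SQL text is preserved verbatim
echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```

With the two directories above, the unquoted form turns the query into `select last conf lib from root.*`, which matches the reported `mismatched input 'conf'` error.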



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IOTDB-1401) Tablet's rowSize = 0: insertTablets throws an exception and gives an unfriendly error message

2021-06-03 Thread Haonan Hou (Jira)


[ https://issues.apache.org/jira/browse/IOTDB-1401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17357041#comment-17357041 ]

Haonan Hou commented on IOTDB-1401:
---

Solved by https://github.com/apache/iotdb/pull/3296

> Tablet's rowSize = 0: insertTablets throws an exception and gives an 
> unfriendly error message
> --
>
> Key: IOTDB-1401
> URL: https://issues.apache.org/jira/browse/IOTDB-1401
> Project: Apache IoTDB
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 0.12.0
>Reporter: 刘珍
>Priority: Minor
> Attachments: image-2021-05-27-16-05-45-882.png
>
>
> [rel/0.12]  05/18  99822c7feb7797f33b0e740b0f3ea38803d11d17
> iotdb log :
>  !image-2021-05-27-16-05-45-882.png|thumbnail! 
> org.apache.iotdb.db.exception.BatchProcessException: Batch process failed: []
> at 
> org.apache.iotdb.db.engine.storagegroup.StorageGroupProcessor.insertTablet(StorageGroupProcessor.java:858)
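Until the server reports something clearer than an empty failure list, a client-side guard can drop zero-row tablets before calling insertTablets. The Tablet class below is a minimal stand-in for org.apache.iotdb.tsfile.write.record.Tablet (which exposes a public rowSize field); the actual session call is omitted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EmptyTabletGuard {
  // Minimal stand-in for org.apache.iotdb.tsfile.write.record.Tablet,
  // which exposes a public rowSize field.
  static class Tablet {
    final String deviceId;
    int rowSize;
    Tablet(String deviceId, int rowSize) {
      this.deviceId = deviceId;
      this.rowSize = rowSize;
    }
  }

  // Keep only tablets that actually contain rows; passing a tablet with
  // rowSize == 0 to insertTablets triggers the server-side
  // BatchProcessException reported in this issue.
  static Map<String, Tablet> dropEmpty(Map<String, Tablet> tablets) {
    Map<String, Tablet> filtered = new LinkedHashMap<>();
    for (Map.Entry<String, Tablet> e : tablets.entrySet()) {
      if (e.getValue().rowSize > 0) {
        filtered.put(e.getKey(), e.getValue());
      }
    }
    return filtered;
  }

  public static void main(String[] args) {
    Map<String, Tablet> tablets = new LinkedHashMap<>();
    tablets.put("root.sg1.d1", new Tablet("root.sg1.d1", 0));
    tablets.put("root.sg1.d2", new Tablet("root.sg1.d2", 5));
    Map<String, Tablet> safe = dropEmpty(tablets);
    System.out.println(safe.size());                      // prints 1
    System.out.println(safe.containsKey("root.sg1.d2"));  // prints true
  }
}
```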
> test case:
> import org.apache.iotdb.rpc.IoTDBConnectionException;
> import org.apache.iotdb.rpc.StatementExecutionException;
> import org.apache.iotdb.rpc.TSStatusCode;
> import org.apache.iotdb.session.Session;
> import org.apache.iotdb.session.SessionDataSet;
> import org.apache.iotdb.session.SessionDataSet.DataIterator;
> import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
> import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
> import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
> import org.apache.iotdb.tsfile.write.record.Tablet;
> import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
> import java.util.ArrayList;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.Random;
> @SuppressWarnings("squid:S106")
> public class SessionExample {
>   private static Session session;
>   private static Session sessionEnableRedirect;
>   private static final String ROOT_SG1_D1_S1 = "root.sg1.d1.s1";
>   private static final String ROOT_SG1_D1_S2 = "root.sg1.d1.s2";
>   private static final String ROOT_SG1_D1_S3 = "root.sg1.d1.s3";
>   private static final String ROOT_SG1_D1_S4 = "root.sg1.d1.s4";
>   private static final String ROOT_SG1_D1_S5 = "root.sg1.d1.s5";
>   private static final String ROOT_SG1_D1 = "root.sg1.d1";
>   private static final String LOCAL_HOST = "127.0.0.1";
>   public static void main(String[] args)
>   throws IoTDBConnectionException, StatementExecutionException {
> session = new Session(LOCAL_HOST, 6667, "root", "root");
> session.open(false);
> // set session fetchSize
> session.setFetchSize(1);
> try {
>   session.setStorageGroup("root.sg1");
> } catch (StatementExecutionException e) {
>   if (e.getStatusCode() != 
> TSStatusCode.PATH_ALREADY_EXIST_ERROR.getStatusCode()) {
> throw e;
>   }
> }
> createTimeseries();
> createMultiTimeseries();
> insertTablets();
> session.close();
>   }
>   private static void createTimeseries()
>   throws IoTDBConnectionException, StatementExecutionException {
> if (!session.checkTimeseriesExists(ROOT_SG1_D1_S1)) {
>   session.createTimeseries(
>   ROOT_SG1_D1_S1, TSDataType.INT64, TSEncoding.RLE, 
> CompressionType.SNAPPY);
> }
> if (!session.checkTimeseriesExists(ROOT_SG1_D1_S2)) {
>   session.createTimeseries(
>   ROOT_SG1_D1_S2, TSDataType.INT64, TSEncoding.RLE, 
> CompressionType.SNAPPY);
> }
> if (!session.checkTimeseriesExists(ROOT_SG1_D1_S3)) {
>   session.createTimeseries(
>   ROOT_SG1_D1_S3, TSDataType.INT64, TSEncoding.RLE, 
> CompressionType.SNAPPY);
> }
> // create timeseries with tags and attributes
> if (!session.checkTimeseriesExists(ROOT_SG1_D1_S4)) {
>   Map<String, String> tags = new HashMap<>();
>   tags.put("tag1", "v1");
>   Map<String, String> attributes = new HashMap<>();
>   attributes.put("description", "v1");
>   session.createTimeseries(
>   ROOT_SG1_D1_S4,
>   TSDataType.INT64,
>   TSEncoding.RLE,
>   CompressionType.SNAPPY,
>   null,
>   tags,
>   attributes,
>   "temperature");
> }
> // create timeseries with SDT property, SDT will take place when flushing
> if (!session.checkTimeseriesExists(ROOT_SG1_D1_S5)) {
>   // COMPDEV is required
>   // COMPMAXTIME and COMPMINTIME are optional and their unit is ms
>   Map<String, String> props = new HashMap<>();
>   props.put("LOSS", "sdt");
>   props.put("COMPDEV", "0.01");
>   props.put("COMPMINTIME", "2");
>   props.put("COMPMAXTIME", "10");
>   session.createTimeseries(
>   ROOT_SG1_D1_S5,
>   TSDataType.INT64,
>   TSEncoding.RLE,
>   CompressionType.SNAPPY,
>   props,
>   

[jira] [Created] (IOTDB-1420) Compaction process conflicts with TTL process

2021-06-03 Thread Jira
张凌哲 created IOTDB-1420:
--

 Summary: Compaction process conflicts with TTL process
 Key: IOTDB-1420
 URL: https://issues.apache.org/jira/browse/IOTDB-1420
 Project: Apache IoTDB
  Issue Type: Bug
  Components: Compaction
Reporter: 张凌哲
 Fix For: 0.11.4, 0.12.1


The compaction process needs to obtain readers for all of the files to be 
compacted and close all of them after the compaction finishes. However, if the 
reader for one of the files cannot be obtained (the file may have been deleted 
by the TTL process), the whole compaction throws, and the readers that are 
already open are never closed.

If this keeps happening, more and more file readers are left open, eventually 
causing a `too many opened files` error.
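The fix pattern this implies: open the readers one at a time, and if any open fails (for example because TTL already deleted the file), close everything opened so far before propagating the exception. The names below are illustrative, not the actual IoTDB compaction code:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CompactionReaderGuard {
  interface ReaderFactory {
    Closeable open(String file) throws IOException;
  }

  // Open a reader per file; if one open fails (e.g. the file was already
  // deleted by the TTL process), close the readers opened so far instead of
  // leaking them, then rethrow.
  static List<Closeable> openAll(List<String> files, ReaderFactory factory)
      throws IOException {
    List<Closeable> opened = new ArrayList<>();
    try {
      for (String f : files) {
        opened.add(factory.open(f));
      }
      return opened;
    } catch (IOException e) {
      for (Closeable c : opened) {
        try {
          c.close();
        } catch (IOException ignored) {
          // best effort: keep closing the remaining readers
        }
      }
      throw e;
    }
  }
}
```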



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IOTDB-1419) Continuous compaction is not working in 0.11 when enablePartition is true

2021-06-03 Thread Jialin Qiao (Jira)
Jialin Qiao created IOTDB-1419:
--

 Summary: Continuous compaction is not working in 0.11 when 
enablePartition is true
 Key: IOTDB-1419
 URL: https://issues.apache.org/jira/browse/IOTDB-1419
 Project: Apache IoTDB
  Issue Type: Bug
  Components: Compaction
Affects Versions: 0.11.3, 0.12.0
Reporter: Jialin Qiao


In each storage group, we maintain an isCompactionWorking field.

 

When "merge" is executed, then for each partition: if isCompactionWorking is 
true, we skip that partition; otherwise we set isCompactionWorking=true and 
submit a compaction thread (so the threads for all partitions are submitted 
almost simultaneously). When a compaction thread finishes, it sets 
isCompactionWorking=false.

 

So after isCompactionWorking is set to true for the first partition, even if 
the first partition does not need to be merged, the field is still true when 
the other partitions are processed, and they are all skipped.
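One way to avoid this is one flag per time partition instead of one per storage group, claimed with a compare-and-set. The sketch below uses illustrative names, not the actual StorageGroupProcessor code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class PartitionCompactionFlags {
  // One flag per time partition instead of one per storage group, so a
  // running compaction in partition A no longer blocks partition B.
  private final ConcurrentMap<Long, AtomicBoolean> working =
      new ConcurrentHashMap<>();

  // Try to claim the partition; returns false only if a compaction is
  // already running in that same partition.
  public boolean tryStart(long partitionId) {
    return working
        .computeIfAbsent(partitionId, id -> new AtomicBoolean(false))
        .compareAndSet(false, true);
  }

  // Release the partition when its compaction thread finishes.
  public void finish(long partitionId) {
    AtomicBoolean flag = working.get(partitionId);
    if (flag != null) {
      flag.set(false);
    }
  }
}
```

With this scheme, skipping or finishing one partition never changes the state another partition observes.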

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)