Recover/Reset Root Password?

2024-06-18 Thread Trevor Hart
Hello



Is there a process to recover or reset the root user password? For example, if 
a customer forgets their root user password.



Is it possible to replace the "root.profile" file under 
data/confignode/system/users with the default? Would this work?


Thanks 

Trevor Hart

Re: ALIGN BY DEVICE: the data types of the same measurement column should be the same across devices.

2024-06-11 Thread Trevor Hart
Ah that is great to hear! Thank you!




Thanks 

Trevor Hart




 On Wed, 12 Jun 2024 13:50:43 +1200 Jialin Qiao  
wrote ---



Hi, 
 
We just changed the default infer type of integer and floating to 
DOUBLE [1], it will be released in 1.3.2. 
 
For <=1.3.1, you could explicitly set this to DOUBLE. 
 
[1] https://github.com/apache/iotdb/pull/12223 
 
Jialin Qiao 
 
Trevor Hart <mailto:tre...@ope.nz> wrote on Wed, 12 Jun 2024 at 06:41: 
> 
> Hello Team 
> 
> 
> 
> I have a customer using my application with IoTDB 1.3.0 in the backend. 
> 
> 
> 
> Auto create schema is enabled as there is no fixed template. Multiple devices 
> are pushing data to IotDB. 
> 
> 
> 
> I have seen some ALIGN BY DEVICE queries fail with the error in the subject 
> ie 
> 
> 
> 
> ALIGN BY DEVICE: the data types of the same measurement column should be the 
> same across devices. 
> 
> 
> 
> The "show timeseries" shows that from one device the value has been inferred 
> as FLOAT and another as DOUBLE. 
> 
> 
> Per the documentation it is suggested that the default 
> "floating_string_infer_type" is DOUBLE but that is not what I am seeing. Why 
> am I getting FLOAT? Is the default "floating_string_infer_type" not being 
> honoured? 
> 
> 
> 
> Currently in the config file "floating_string_infer_type" is commented out. 
> Should I explicitly set this to DOUBLE? 
> 
> 
> Thanks 
> 
> Trevor Hart

ALIGN BY DEVICE: the data types of the same measurement column should be the same across devices.

2024-06-11 Thread Trevor Hart
Hello Team



I have a customer using my application with IoTDB 1.3.0 in the backend.



Auto create schema is enabled as there is no fixed template. Multiple devices 
are pushing data to IoTDB.



I have seen some ALIGN BY DEVICE queries fail with the error in the subject, i.e.



ALIGN BY DEVICE: the data types of the same measurement column should be the 
same across devices.



The "show timeseries" shows that from one device the value has been inferred as 
FLOAT and another as DOUBLE.


Per the documentation it is suggested that the default 
"floating_string_infer_type" is DOUBLE but that is not what I am seeing. Why am 
I getting FLOAT? Is the default "floating_string_infer_type" not being honoured?



Currently in the config file "floating_string_infer_type" is commented out. 
Should I explicitly set this to DOUBLE?


Thanks 

Trevor Hart

Re: Upgrading 0.12 -> 1.3

2024-05-27 Thread Trevor Hart
Thank you Jialin. I didn't realise the data was not migrated as part of the 
upgrade.



It is good to see there is a resource to help with migration. I will use that approach.



Thanks 

Trevor Hart




 On Tue, 28 May 2024 13:35:41 +1200 Jialin Qiao  
wrote ---



Hi, 
 
The architecture of 1.x is different from 0.13. The upgrade cannot 
be done in place. You need to deploy a 1.3 instance, then transfer the 
data. 
There are two ways to transfer data: 
1. Using Session to query from the 0.13, then write into 1.3. 
2. Using TsFile API to rewrite the TsFile, then load TsFiles to 1.3. 
 
1. is easier. You could refer to this: 
https://github.com/apache/iotdb/blob/master/example/session/src/main/java/org/apache/iotdb/DataMigrationExample.java
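
A very rough JDBC sketch of the same read-then-rewrite idea, for illustration only; 
hosts, credentials and the path are placeholders, and the linked example (which 
uses the Session API) is the better reference:

// Rough sketch only: copy one numeric measurement between two IoTDB instances over JDBC.
// Driver class name, URLs and paths are assumptions to adapt; the official example above
// uses the Session API and handles data types and batching properly.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcCopySketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver");
    try (Connection oldDb = DriverManager.getConnection("jdbc:iotdb://old-host:6667/", "root", "root");
         Connection newDb = DriverManager.getConnection("jdbc:iotdb://new-host:6667/", "root", "root");
         Statement read = oldDb.createStatement();
         Statement write = newDb.createStatement();
         ResultSet rs = read.executeQuery("select s1 from root.sg1.d1")) {
      while (rs.next()) {
        long time = rs.getLong(1);      // first column of an IoTDB result set is Time
        String value = rs.getString(2); // numeric value as text; TEXT series would need quoting
        write.execute("insert into root.sg1.d1(timestamp, s1) values(" + time + ", " + value + ")");
      }
    }
  }
}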
 
 
Jialin Qiao

Upgrading 0.12 -> 1.3

2024-05-27 Thread Trevor Hart
Hello



I have an application that is running against IoTDB version 0.12 which I would 
like to upgrade to 1.3



However I have some timeseries paths that contain numbers (e.g. below)


root.ABC.health.url.17.code



From the documentation it seems this is not supported beyond 0.13



To migrate this data I am proposing this process;



1. Use "select into" to migrate to a new path ie root.ABC.health.url.17.code -> 
root.ABC.health.url.U17.code

2. Delete the old timeseries ie root.ABC.health.url.17.code

3. Upgrade IoTDB


Is this the correct/best approach?
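
For steps 1 and 2, something like the sketch below over JDBC is roughly what I have 
in mind. The SELECT INTO syntax and the backtick quoting of the numeric node differ 
between releases, so both statements are illustrative only and need to be checked 
against the docs for the version actually used:

// Illustrative only: rename the numeric path (step 1) then drop the old series (step 2).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RenamePathSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver"); // assumed driver class name
    try (Connection conn = DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
         Statement stmt = conn.createStatement()) {
      // Step 1: copy the data to the new, non-numeric path.
      stmt.execute("select code into root.ABC.health.url.U17(code) from root.ABC.health.url.`17`");
      // Step 2: only after verifying the copy, remove the old timeseries.
      stmt.execute("delete timeseries root.ABC.health.url.`17`.code");
    }
  }
}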


Thanks 

Trevor Hart

Windows Installer

2024-05-19 Thread Trevor Hart
Hello



I don't know if there is any interest in this, but I have built a Windows 
installer for installing IoTDB on Windows servers.



The installer includes Open JDK 11 and also does the following;



1. Creates a Windows service for the Config Node

2. Creates a Windows service for the Data Node

3. Opens port 6667 on the Windows firewall



You can see the installer in action here; 
https://www.youtube.com/watch?v=6BzCd-vAiGc&t=1s&ab_channel=OpeLtd



If there is any interest in this I can make it available.


Thanks

Trevor Hart

Re: Handling Duplicate Timestamps

2024-05-19 Thread Trevor Hart
Hi Jialin



Yes the values would be different.



As an example, these are from a web server log. The device is openzweb01, which 
is an IIS web server that may handle multiple requests at the same time. The 
rows are unique in their own right but the timestamp is the same in the 
logging. 



2024-05-20 00:00:14 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Meriadoc 200 0 0 3339 503 7


2024-05-20 00:00:14 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Peregrin 200 0 0 3327 503 6


2024-05-20 00:00:14 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Samwise 200 0 0 3325 502 6

2024-05-20 00:00:14 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/siteadmin 200 0 0 15279 504 5


2024-05-20 00:00:15 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/testuser 200 0 0 1794 503 6

2024-05-20 00:00:15 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/testuser2 200 0 0 1794 506 6



This particular log file only records in seconds. So what I am doing with these 
rows at the moment is to add an artificial millisecond to enforce uniqueness.




2024-05-20 00:00:14.000 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Meriadoc 200 0 0 3339 503 7 

2024-05-20 00:00:14.001 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Peregrin 200 0 0 3327 503 6 

2024-05-20 00:00:14.002 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/Samwise 200 0 0 3325 502 6

2024-05-20 00:00:14.003 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/siteadmin 200 0 0 15279 504 5 

2024-05-20 00:00:15.000 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/testuser 200 0 0 1794 503 6

2024-05-20 00:00:15.001 W3SVC1 openzweb01 192.168.3.69 POST 
/portal/sharing/rest/community/users/testuser2 200 0 0 1794 506 6
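
A minimal sketch of that bump-by-one-millisecond step as applied on the client 
(device name and timestamps below are just placeholders):

// Keeps a per-device high-water mark so second-resolution log timestamps never collide.
import java.util.HashMap;
import java.util.Map;

public class TimestampUniquifier {
  private final Map<String, Long> lastTimePerDevice = new HashMap<>();

  // Returns the timestamp unchanged if it is already unique for the device,
  // otherwise bumps it to one millisecond past the last value handed out.
  public long uniquify(String device, long epochMillis) {
    long last = lastTimePerDevice.getOrDefault(device, Long.MIN_VALUE);
    long unique = Math.max(epochMillis, last + 1);
    lastTimePerDevice.put(device, unique);
    return unique;
  }

  public static void main(String[] args) {
    TimestampUniquifier u = new TimestampUniquifier();
    long t = 1716163214000L; // 2024-05-20 00:00:14 as epoch ms (placeholder)
    System.out.println(u.uniquify("openzweb01", t)); // ...14.000
    System.out.println(u.uniquify("openzweb01", t)); // ...14.001
    System.out.println(u.uniquify("openzweb01", t)); // ...14.002
  }
}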



Some other log files that I am processing are in milliseconds already, 
but there is a (small) chance of data loss if multiple requests happen to be 
processed at the exact same time.



I have been thinking about this some more and I think that rather than break 
the IoTDB CRUD model I should handle this on the client side. In my use case 
the log data is actually staged in an H2 database before it is sent to IoTDB so 
I can enforce PK validation there. That way it is less expensive than checking 
the timestamp in IoTDB for each record.
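
A sketch of what that H2-side check could look like (table and column names are 
made up for illustration); the composite primary key rejects a second row with the 
same device and timestamp before anything is forwarded to IoTDB:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class H2StagingSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:staging", "sa", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE log_rows (device VARCHAR(64), ts BIGINT, uri VARCHAR(512), "
          + "status INT, PRIMARY KEY (device, ts))");
      stmt.execute("INSERT INTO log_rows VALUES ('openzweb01', 1716163214000, '/portal/a', 200)");
      try {
        // Same (device, ts): rejected by the primary key instead of silently overwriting.
        stmt.execute("INSERT INTO log_rows VALUES ('openzweb01', 1716163214000, '/portal/b', 200)");
      } catch (SQLException e) {
        System.out.println("Duplicate timestamp rejected: " + e.getMessage());
      }
    }
  }
}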



Thanks 

Trevor Hart








 On Fri, 17 May 2024 19:11:13 +1200 Jialin Qiao  
wrote ---



Hi Trevor, 
 
Will different values of the same timestamp be the same? 
 
1. Same 
Time, Value 
1, 1 
1, 1 
1, 1 
 
2. Different 
Time, Value 
1, 1 
1, 2 
1, 1 
 
 
Jialin Qiao 
 
Trevor Hart <mailto:tre...@ope.nz> wrote on Tue, 14 May 2024 at 11:20: 
> 
> Thank you! I will implement some workaround for now. 
> 
> 
> I would appreciate some consideration for this option in the future. 
> 
> 
> Thanks 
> 
> Trevor Hart 
> 
> Ope Limited 
> 
> w: http://www.ope.nz/ 
> 
> m: +64212728039 
> 
> 
> 
> 
> 
> 
> 
> 
>  On Tue, 14 May 2024 15:17:47 +1200 Xiangdong Huang 
> <mailto:saint...@gmail.com> wrote --- 
> 
> 
> 
> > 1. Checking before insert if the timestamp already exists and remedy on the 
> > client before resend 
> > 2. Moving to Nanosecond and introducing some insignificant time value to 
> > keep timestamp values unique. 
> Yes these maybe the best solutions for a specific application. 
> 
> 
> Analysis for IoTDB: 
> - Rejecting the write when receiving an existing timestamp in IoTDB is 
> time-costly (IoTDB needs to check historical data). I think we will do 
> not check it until we find a low-latency method. 
> - Allowing multiple value versions for a timestamp may introduce a 
> chain reaction and there may be a lot of codes that should be 
> modified, which is a huge work. 
> 
> There is a new idea (but I have no time to implement it...) 
> - Add a parameter in IoTDB: replace_strategy: first, last, avg etc... 
> - when an existing timestamp arrives, IoTDB accepts it 
> - when IoTDB runs LSM to merge data and meets multiple values for a 
> timestamp, then handles it according to the replace_startegy. 
> 
> The solution may also introduce some work to do... and we need to 
> think carefully the impact to the query process. 
> Need to survey whether this is a common requirement. 
> 
> Best, 
> --- 
> Xiangdong Huang 
> 
> Trevor Hart <mailto:tre...@ope.nz> wrote on Tue, 14 May 2024 at 09:55: 
> > 
> > Hello Yuan 
> > 
> > 
> > 
> > Correct, the first timestamp and values should be retained. 
> > 
> > 
> > 
> > I realise this is does not alig

Re: [DISCUSS] Drop Java 8?

2024-05-16 Thread Trevor Hart
I think a lot of organisations are sticking to Java 8 because of the change to 
the Oracle license that was introduced with Java 11.



If you use the Oracle Java 11 JRE you need to pay Oracle for a license.



This is a big reason OpenJDK builds became popular.



Personally I use IoTDB with OpenJDK 11 (Eclipse Temurin), which does not 
require a license.



Thanks 

Trevor Hart








 On Fri, 17 May 2024 13:30:15 +1200 Yuan Tian  
wrote ---



Hi Chris, 
 
It seems that a lot of people still use JDK 1.8 in their production environments. 
 
Best regards, 
- 
Yuan Tian 
 
On Thu, May 16, 2024 at 8:10 PM Christofer Dutz 
<mailto:christofer.d...@c-ware.de> 
wrote: 
 
> Hi all, 
> 
> starting this new thread as I am not sure if others are reading the 
> Jakarta migration thread. 
> 
> I would like to propose planning on dropping Java 8 support. 
> 
> I wouldn’t immediately do that, and I would also propose to do a major 
> version update (Switching to 2.0.0) 
> 
> We could still maintain a 1.x branch for those people not able to update. 
> 
> The main reason is that we are currently blocking ourselves from updating 
> many major plugins and dependencies. 
> I noticed that when updating to the Jakarta namespace. Here there is no 
> Netty version available that supports Jakarta and supports Java 8. 
> 
> Other libraries where we are not able to update without giving up on Java 
> 8: 
> 
>   *   Airlift-Units (Stuck at 1.7 current 1.10) 
>   *   Airlift 
>   *   Antlr (Stuck at 4.9.3 current 4.13.1) 
>   *   Caffeine (Stuck at 2.9.3 current 3.1.8) 
>   *   Logback (Stuck at 1.3.14 current 1.5.6) 
>   *   Mockito (Stuck at 2.23.4 current 5.12.0) 
>   *   Thrift (Stuck at 0.17.0 current 0.20.0) 
> 
> 
> 
>   *   Spotless Plugin (We’ve got a workaround for Java 8) 
> 
> 
> In my branch where I refactored the javax namespace to Jakarta after 
> updating dependencies I was able to remove all exclusions of the 
> BanVulnerableDependencies check. 
> 
> Dropping Java 8 and the Jakarta migration would also allow embedding IoTDB 
> in recent Spring versions. 
> 
> 
> So … what do you think? 
> 
> 
> Chris 
>

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart
Thank you! I will implement some workaround for now.


I would appreciate some consideration for this option in the future.


Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039








 On Tue, 14 May 2024 15:17:47 +1200 Xiangdong Huang  
wrote ---



> 1. Checking before insert if the timestamp already exists and remedy on the 
> client before resend 
> 2. Moving to Nanosecond and introducing some insignificant time value to keep 
> timestamp values unique. 
Yes, these may be the best solutions for a specific application. 
 
 
Analysis for IoTDB: 
- Rejecting the write when receiving an existing timestamp in IoTDB is 
time-costly (IoTDB needs to check historical data). I think we will not 
check it until we find a low-latency method. 
- Allowing multiple value versions for a timestamp may introduce a 
chain reaction and there may be a lot of code that would need to be 
modified, which is a huge amount of work. 
 
There is a new idea (but I have no time to implement it...) 
- Add a parameter in IoTDB: replace_strategy: first, last, avg etc... 
- when an existing timestamp arrives, IoTDB accepts it 
- when IoTDB runs LSM to merge data and meets multiple values for a 
timestamp, then handle it according to the replace_strategy. 
 
The solution may also introduce some work to do... and we need to 
think carefully about the impact on the query process. 
Need to survey whether this is a common requirement. 
 
Best, 
--- 
Xiangdong Huang 
 
Trevor Hart <mailto:tre...@ope.nz> wrote on Tue, 14 May 2024 at 09:55: 
> 
> Hello Yuan 
> 
> 
> 
> Correct, the first timestamp and values should be retained. 
> 
> 
> 
> I realise this is does not align with the current design. I was just asking 
> whether there was an existing option to operate to block duplicates. 
> 
> 
> 
> In a normal RDBMS if you try to insert with a duplicate the insert will fail 
> with a PK violation. It would be great in some circumstances if IotDB at 
> least had the option to fail this way. 
> 
> 
> 
> I am considering some options such as; 
> 
> 
> 
> 1. Checking before insert if the timestamp already exists and remedy on the 
> client before resend 
> 
> 2. Moving to Nanosecond and introducing some insignificant time value to keep 
> timestamp values unique. 
> 
> 
> 
> I have already done something similar to #2 with storing IIS web log files as 
> they are recorded in seconds and not milliseconds. 
> 
> 
> 
> Thanks 
> 
> Trevor Hart 
> 
> 
> 
> 
>  On Tue, 14 May 2024 13:29:02 +1200 Yuan Tian 
> <mailto:jackietie...@gmail.com> wrote --- 
> 
> 
> 
> Hi Trevor, 
> 
> By "rejects duplicates", you mean you want to keep the first duplicate 
> timestamp and its corresponding values?(because the following duplicated 
> ones will be rejected) 
> 
> Best regards, 
>  
> Yuan Tian 
> 
> On Mon, May 13, 2024 at 6:24 PM Trevor Hart <mailto:mailto:tre...@ope.nz> 
> wrote: 
> 
> > 
> > 
> > 
> > 
> > Correct. I’m not disputing that. What I’m asking is that it 
> > would be good to have a configuration that either allows overwrites or 
> > rejects duplicates.My scenario is request log data from a server (the 
> > device). As it may be processing multiple requests at once there is a 
> > chance that there could be colliding time stamps.As it stands now I would 
> > need to check if the timestamp exists before inserting the data. Which 
> > obviously affects throughput. Thanks Trevor Hart On Fri, 10 May 
> > 2024 00:33:40 +1200  Jialin Qiao<mailto:mailto:qiaojia...@apache.org> wrote 
> >  Hi, 
> > In IoT or IIoT scenarios, we thought each data point represent a metric of 
> > a timestamp.In which case you need to store duplicated values?  Take this 
> > for an example: Time, root.sg1.car1.speed 1, 1 1, 2  Could a car has 
> > different speed at time 1?   Jialin Qiao  Yuan Tian < 
> > mailto:mailto:jackietie...@gmail.com> 于2024年5月9日周四 18:51写道: > > Hi Trevor, 
> > > > Now we 
> > will override the duplicate timestamp with a newer one. There is > nothing 
> > we can do about it now. > > Best regards, > --- > Yuan Tian 
> > > > On Wed, May 8, 2024 at 5:31 PM Trevor Hart 
> > > > <mailto:mailto:tre...@ope.nz> wrote: > > 
> > > Hello > > > > > > > > I’m aware that when inserting a duplicate timestamp 
> > the values will be > > overwritten. This will obviously result in data 
> > loss. > > > > > > > > Is there a config/setting to reject or throw an error 
> > on duplicate > > inserts? Although highly unlikely I would prefer to be 
> > alerted to the > > situation rather than lose data. > > > > > > > > I read 
> > through the documentation but couldn’t find anything. > > > > > > > > 
> > Thanks > > > > Trevor Hart 
> > 
> > 
> > 
> > 
> > 
> > 
> >

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart
Hello Yuan



Correct, the first timestamp and values should be retained.



I realise this does not align with the current design. I was just asking 
whether there was an existing option to block duplicates.



In a normal RDBMS, if you try to insert a duplicate the insert will fail 
with a PK violation. It would be great in some circumstances if IoTDB at least 
had the option to fail this way.



I am considering some options such as;



1. Checking before insert if the timestamp already exists and remedy on the 
client before resend

2. Moving to nanosecond precision and introducing some insignificant time value to keep 
timestamp values unique.



I have already done something similar to #2 with storing IIS web log files as 
they are recorded in seconds and not milliseconds.
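
For option 1, the check would be something like the sketch below over JDBC (path 
and measurement names are placeholders); the extra round trip per row is exactly 
the throughput cost mentioned above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckBeforeInsertSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver"); // assumed driver class name
    long time = 1715560800000L;
    try (Connection conn = DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
         Statement stmt = conn.createStatement()) {
      boolean exists;
      try (ResultSet rs = stmt.executeQuery(
          "select status from root.org.logs.device01 where time = " + time)) {
        exists = rs.next(); // a row back means this timestamp is already stored
      }
      long insertTime = exists ? time + 1 : time; // remedy on the client: bump by 1 ms
      stmt.execute("insert into root.org.logs.device01(timestamp, status) values(" + insertTime + ", 200)");
    }
  }
}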



Thanks 

Trevor Hart




 On Tue, 14 May 2024 13:29:02 +1200 Yuan Tian  
wrote ---



Hi Trevor, 
 
By "rejects duplicates", you mean you want to keep the first duplicate 
timestamp and its corresponding values?(because the following duplicated 
ones will be rejected) 
 
Best regards, 
 
Yuan Tian 
 
On Mon, May 13, 2024 at 6:24 PM Trevor Hart <mailto:tre...@ope.nz> wrote: 
 
> 
> 
> 
> 
> Correct. I’m not disputing that. What I’m asking is that it 
> would be good to have a configuration that either allows overwrites or 
> rejects duplicates.My scenario is request log data from a server (the 
> device). As it may be processing multiple requests at once there is a 
> chance that there could be colliding time stamps.As it stands now I would 
> need to check if the timestamp exists before inserting the data. Which 
> obviously affects throughput. Thanks Trevor Hart On Fri, 10 May 
> 2024 00:33:40 +1200  Jialin Qiao<mailto:qiaojia...@apache.org> wrote  Hi, 
> In IoT or IIoT scenarios, we thought each data point represent a metric of 
> a timestamp.In which case you need to store duplicated values?  Take this 
> for an example: Time, root.sg1.car1.speed 1, 1 1, 2  Could a car has 
> different speed at time 1?   Jialin Qiao  Yuan Tian < 
> mailto:jackietie...@gmail.com> 于2024年5月9日周四 18:51写道: > > Hi Trevor, > > Now 
> we 
> will override the duplicate timestamp with a newer one. There is > nothing 
> we can do about it now. > > Best regards, > --- > Yuan Tian 
> > > On Wed, May 8, 2024 at 5:31 PM Trevor Hart <mailto:tre...@ope.nz> wrote: 
> > > > > 
> > Hello > > > > > > > > I’m aware that when inserting a duplicate timestamp 
> the values will be > > overwritten. This will obviously result in data 
> loss. > > > > > > > > Is there a config/setting to reject or throw an error 
> on duplicate > > inserts? Although highly unlikely I would prefer to be 
> alerted to the > > situation rather than lose data. > > > > > > > > I read 
> through the documentation but couldn’t find anything. > > > > > > > > 
> Thanks > > > > Trevor Hart 
> 
> 
> 
> 
> 
> 
>

Re: Handling Duplicate Timestamps

2024-05-13 Thread Trevor Hart




Correct. I’m not disputing that. What I’m asking is that it would be good to have
a configuration that either allows overwrites or rejects duplicates.

My scenario is request log data from a server (the device). As it may be
processing multiple requests at once, there is a chance that there could be
colliding timestamps. As it stands now I would need to check if the timestamp
exists before inserting the data, which obviously affects throughput.

Thanks
Trevor Hart

 On Fri, 10 May 2024 00:33:40 +1200 Jialin Qiao wrote ---

Hi,

In IoT or IIoT scenarios, we think each data point represents a metric at a
timestamp. In which case do you need to store duplicated values?

Take this for an example:
Time, root.sg1.car1.speed
1, 1
1, 2

Could a car have different speeds at time 1?

Jialin Qiao

Yuan Tian wrote on Thu, 9 May 2024 at 18:51:
>
> Hi Trevor,
>
> Now we will override the duplicate timestamp with a newer one. There is
> nothing we can do about it now.
>
> Best regards,
> ---
> Yuan Tian
>
> On Wed, May 8, 2024 at 5:31 PM Trevor Hart wrote:
> >
> > Hello
> >
> > I’m aware that when inserting a duplicate timestamp the values will be
> > overwritten. This will obviously result in data loss.
> >
> > Is there a config/setting to reject or throw an error on duplicate
> > inserts? Although highly unlikely I would prefer to be alerted to the
> > situation rather than lose data.
> >
> > I read through the documentation but couldn’t find anything.
> >
> > Thanks
> >
> > Trevor Hart








Handling Duplicate Timestamps

2024-05-08 Thread Trevor Hart
Hello



I’m aware that when inserting a duplicate timestamp the values will be 
overwritten. This will obviously result in data loss. 



Is there a config/setting to reject or throw an error on duplicate inserts? 
Although highly unlikely I would prefer to be alerted to the situation rather 
than lose data.



I read through the documentation but couldn’t find anything. 



Thanks 

Trevor Hart

Test

2024-05-07 Thread Trevor Hart
Just testing, my last two messages have not been published.




Thanks 

Trevor Hart

Handling Duplicate Timestamps

2024-05-02 Thread Trevor Hart
Hello



I’m aware that when inserting a duplicate timestamp the values will be 
overwritten. This can obviously result in data loss. 



Is there a config/setting to reject or throw an error on duplicate inserts? 
Although highly unlikely I would prefer to be alerted to the situation rather 
than lose data.



I read through the documentation but couldn’t find anything. 



Thanks 

Trevor Hart

Re: Fw:dbeaver操作异常

2024-05-02 Thread Trevor Hart




Hello

I am also using DBeaver. While it is very handy, I have found, like you, that
there are certain statements that it does not process. I have posted before
about a Java based UI that I developed. If you are interested I can give you
the details.

Thanks
Trevor Hart

 On Sun, 28 Apr 2024 03:08:42 +1200 object...@163.com wrote ---

[Forwarded message]
From: "objectboy"
Date: 2024-04-27 22:50:17
To: dev-subscr...@iotdb.apache.org
Subject: DBeaver operation problems

1. Viewing data in DBeaver behaves abnormally.
2. Creating new tables and new fields in DBeaver also fails; is DBeaver simply unable to do this?
Are there any other tools you would recommend, or can this only be done through cmd?
Versions: iotdb-jdbc-1.3.1-jar-with-dependencies.jar, DBeaver 24.0.3








Duplicate timestamps

2024-05-02 Thread Trevor Hart




Hello

I’m aware that when inserting a duplicate timestamp the values will be
overwritten. This can obviously result in data loss.

Is there a config/setting to reject or throw an error on duplicate inserts?
Although highly unlikely, I would prefer to be alerted to the situation rather
than lose data.

I read through the documentation but couldn’t find anything.

Thanks
Trevor Hart










Re: New Table Model for IoTDB

2024-04-10 Thread Trevor Hart




Can you clarify whether the old tree model exists in parallel? I can see the
table being useful when querying one device, but I hope we can still use the
old model as well.

Thanks
Trevor Hart

 On Sun, 07 Apr 2024 23:39:02 +1200 Yuan Tian wrote ---

Hi all,

As introduced in
the official documentation( 
https://iotdb.apache.org/UserGuide/latest/Basic-Concept/Data-Model-and-Terminology.html),
 the previous modeling method of IoTDB was a tree model, which formed a full 
path of a sequence from the root node to the leaf node, and a device path from 
the root node to the second-to-last layer node. The previous query syntax of 
IoTDB was very similar to the standard relational SQL, centered on the 
sequence, with the sequence prefix name in `from clause` and the sequence 
suffix name in `select clause`. This query syntax is not very friendly to users 
who are accustomed to relational SQL, and they cannot apply their previous 
query experience to IoTDB.  Therefore, we are designing a new schema model 
called table model for IoTDB. We will provide data to users in a table view in 
the same way as relational databases. Each kind of device belongs to a table, 
and users can use standard SQL to query this table, which greatly reduces the 
learning curve of IoTDB.  The functional specs for table model can be found in 
https://timechor.feishu.cn/docx/C2eodP84VoJ0kuxgbhlc1fejnsh, and our dev branch 
is ty/TableModelGrammar. The syntax definition file for the table model can be found in 
iotdb-core/relational-grammar/src/main/antlr4/org/apache/iotdb/db/relational/grammar/sql/RelationalSql.g4
  To support the table model, we also need to change the current tsfile format, 
so we need to upgrade tsfile version from V3 to V4, the new file format for 
tsfile V4 can be seen in 
https://apache-iotdb.feishu.cn/docx/QNeVd7mpVoWaFxxobopcCsw6ne5  Our 
development is currently at a very early stage, and we would like to invite you 
to discuss the functionality of the table model. Your feedback is valuable to 
us and will help us shape the development of this feature.   Best regards, 
-- Yuan Tian  








Re: Get Full Row Count?

2024-04-03 Thread Trevor Hart
Thank you Yuan, that works!


Thanks 

Trevor Hart








 On Thu, 04 Apr 2024 14:10:43 +1300 Yuan Tian  
wrote ---



Hi Trevor, 
 
Maybe you can try count_time agg function, 
https://iotdb.apache.org/UserGuide/latest/Reference/Function-and-Expression.html#count-time
 
. 
 
select count_time(*) from root.logs.device; 
 
 
 
Best regards, 
- 
Yuan Tian 
 
 
On Thu, Apr 4, 2024 at 2:24 AM Trevor Hart <mailto:tre...@ope.nz> wrote: 
 
> Hello All 
> 
> 
> 
> Question; whats the correct way to get a full count of all rows in a 
> timeseries using SQL? 
> 
> 
> 
> I can do this; 
> 
> 
> 
> select count(status) from root.logs.device 
> 
> 
> 
> But this only gives me the count of status that are not null. 
> 
> 
> 
> The queries below both return zero results. 
> 
> 
> select count(time) from root.logs.device 
> 
> and 
> 
> select count(timestamp) from root.logs.device 
> 
> 
> 
> Thanks 
> 
> Trevor Hart

Re: [DISCUSS] Make enabling "fat-jars" the default for the JDBC module?

2024-04-03 Thread Trevor Hart
I would love this.



I currently use a batch file to build a fat jar (just for JDBC).



Thanks 

Trevor Hart








 On Wed, 03 Apr 2024 00:42:40 +1300 Christofer Dutz 
 wrote ---



Hi all, 
 
so right now, I keep on having the issue, that I keep on needing to re-build 
IoTDB, as I forgot to enable the “jar-with-dependencies” profile when building 
the JDBC driver. 
I would assume, that when building a native Java client, someone would possibly 
prefer using the session API and that JDBC is more needed when using IoTDB from 
a JDBC enabled client. 
 
So, I think it would be better to have the fat-jar enabled per default. 
 
What do you others think? 
 
Chris

Get Full Row Count?

2024-04-03 Thread Trevor Hart
Hello All



Question; what's the correct way to get a full count of all rows in a timeseries 
using SQL?



I can do this;



select count(status) from root.logs.device



But this only gives me the count of status that are not null.



The queries below both return zero results.


select count(time) from root.logs.device

and

select count(timestamp) from root.logs.device



Thanks 

Trevor Hart

Re: Inquiry Regarding Key Column Creation for IoTDB

2024-02-15 Thread Trevor Hart
Would it not be better to utilise the hierarchy in IoTDB? This saves space and 
means there is no need for a key.



So



root.vehicles.CAR1

root.vehicles.CAR2

root.vehicles.CAR3



And then under each CAR store lat, long, speed etc



You can still query all of the data 



select * from root.vehicles.*
limit 1000
align by device



That way the key is part of the schema and not a measurement.
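
A rough sketch of that layout over JDBC, with made-up host, values and measurement names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class VehicleModelSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver"); // assumed driver class name
    try (Connection conn = DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
         Statement stmt = conn.createStatement()) {
      long now = System.currentTimeMillis();
      // Each car is a device; the "key" lives in the path, not in a measurement.
      stmt.execute("insert into root.vehicles.CAR1(timestamp, lat, lon, speed) values(" + now + ", -36.85, 174.76, 52.3)");
      stmt.execute("insert into root.vehicles.CAR2(timestamp, lat, lon, speed) values(" + now + ", -41.29, 174.78, 48.0)");
      // One query still covers every car.
      stmt.executeQuery("select * from root.vehicles.* align by device");
    }
  }
}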


I use a similar model for storing server metrics (CPU, RAM, DISK etc)



Thanks 




 On Tue, 16 Jan 2024 22:09:19 +1300 Xiangdong Huang  
wrote ---



Hi Cheongu Kim, 
 
Please send email to mailto:dev@iotdb.apache.org for discussion (remember to 
subscribe the mailing list first). 
 
As for your question, maybe the easiest method is to add an offset on the 
timestamp to make them different. 
e.g., 
origin data: (t, v1), (t, v2), (t, v3) 
to: 
(t, v1), (t+1, v2), (t+2, v3) 
 
Best, 
--- 
Xiangdong Huang 
School of Software, Tsinghua University 
 
 
Cheongu Kim wrote on Tue, 16 Jan 2024 at 02:33: 
 
> 
> Dear Apache IoTDB Development Team 
> 
> Greetings, I hope this email finds you well. My name is Cheongu Kim and I am 
> reaching out to you today to request some assistance. I currently plan to 
> test IoTDB with some spatio-temporal(so-called ST) dataset which consists of 
> time, longitude, latitude, and serial data generated from these 3 elements. 
> 
> Since the data is about trajectories of vehicles, there are hundreds of 
> duplicates at the same time even in one vehicle because of its measurement 
> method. And the dataset can be identified by the serial data(Since space and 
> time are unique) 
> 
> I need to make a database which can be identified by the serial data, Which 
> means the serial data must be a key column or at least make sure not to lose 
> the data because it has the same time value or timestamp. 
> 
> I read about IoTDB papers, and explored the website but was unable to find 
> the information about creating the key column or unique column. I know it's a 
> time series database so time is the key column but how can I make another key 
> or unique column in iotdb? 
> 
> Thank you in advance for your time and assistance. I look forward to any 
> insights or information you can provide. 
> 
> Sincerely, 
> 
> Cheongu Kim 
> mailto:ckim191...@gmail.com

Re: What are all the empty "ext" directories for?

2023-07-12 Thread Trevor Hart
Not sure what all of them are but UDF is where you put custom data functions 
(jar files).



I'm guessing the same for triggers?



Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039








 On Thu, 13 Jul 2023 00:45:40 +1200 Christofer Dutz 
 wrote ---



Hi, 
 
so playing around with IoTDB running in embedded mode, I noticed that it 
creates a directory “data” which totally makes sense to me, as it contains all 
the data. 
However it also creates a whole tree under “ext” which just seems to contain 
loads of empty directories. 
What’s the purpose of these and could we prevent them from being created, if 
they are not needed? 
 
ext 
 /pipe 
 /install 
 /tmp 
 /trigger 
 /install 
 /tmp 
 /udf 
 /install 
 /tmp 
 
 
Chris

Re: Building the jdbc-diver as a fat-jar?

2023-07-04 Thread Trevor Hart




100% agree. I have been building my own fat jar version for the past few years.

Thanks
Trevor Hart
Ope Limited
w: www.ope.nz
m: +64212728039

 On Wed, 05 Jul 2023 06:07:39 +1200 Christofer Dutz wrote ---

Hi all,

today I wanted to play around with the 1.3.0-SNAPSHOT version and wanted to use
IntelliJ as the SQL client… But when adding the IoTDB JDBC driver I noticed we’re
currently only building a default jar.

This is quite a pain if you want to manually deploy the JDBC driver anywhere…
Locally I added a config that makes the build also build a fat jar
“iotdb-jdbc-1.3.0-SNAPSHOT-jar-with-dependencies.jar”… would it be ok for me to
commit that change? I think it’s quite useful.

Chris








Re: Remove UserGuide before 0.13 from website

2023-02-23 Thread Trevor Hart
+1 for having an archive available. I still use these docs and I am using 0.12.



Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039








 On Fri, 24 Feb 2023 06:10:08 +1300 Xiangdong Huang  
wrote ---



I think it is ok to remove old docs out of the source repo. 
But is there a possible way to archive them? 
 
Best, 
--- 
Xiangdong Huang 
School of Software, Tsinghua University 
 
 
Xinyu Tan <mailto:1025599...@qq.com.invalid> wrote on Thu, 23 Feb 2023 at 20:34: 
> 
> Hi, 
> 
> +1 for removing old docs 
> 
> 
> Thanks 
> -- 
> Xinyu Tan 
> 
> 
> > On 23 Feb 2023, at 18:30, Jialin Qiao <mailto:qiaojia...@apache.org> wrote: 
> > 
> > Hi, 
> > 
> > There are 8 versions UserGuide on our website[1], the old versions are 
> > hardly used anymore. 
> > 
> > I prefer to remove the history versions before 0.13 (not include). 
> > 
> > [1] https://iotdb.apache.org/ 
> > 
> > Thanks, 
> > — 
> > Jialin Qiao 
> > Apache IoTDB PMC 
>

Re: [ANNOUNCE] Apache IoTDB 1.0.0 released

2022-12-05 Thread Trevor Hart
Amazing work team! Very excited!


Thanks 

Trevor Hart




 On Tue, 06 Dec 2022 03:28:49 +1300 Jialin Qiao  
wrote ---



Congratulations! 
— 
Jialin Qiao 
Apache IoTDB PMC

Re: TsFile golden data

2022-08-14 Thread Trevor Hart
Ok I will email you directly.




Thanks 

Trevor

​






 On Mon, 15 Aug 2022 08:43:36 +1200 Giorgio Zoppi  
wrote ---



Yes, the latest. My goal is to dedicate more time to my modern C++ native
library; it has been on hold for 2 years and needs to be finished, so I would
like to dedicate at least 3 hours per week to completing it. The current status
is not good. The ambitious goal is to write as little code as possible.

To be honest I don't much like the Java verbosity (yeah, it's a necessary evil
when you're coding in Java), so that is where this effort came from. If someone
in the community wants to motivate me and act as product owner, that would be
nice. I think that with feature parity we can achieve equal or superior
performance to Parquet/Arrow, and much better performance than the current
implementation. I am striving for *beautiful simplicity*, hiding complexity
behind a deep interface wherever possible.

The use case might be a car: in a vehicle we usually store GPS latitude, GPS
longitude, GPS height, speed, speed variations, and other parameters every K
milliseconds.

This gives us, for a 1 km path for example, a huge number of points. Parquet
works fine in this case, but if we can prove that this format is better in
terms of space, memory consumption, query performance, etc., we have won the
lottery :)



Best Regards,

Giorgio

On Sun, 14 Aug 2022 at 21:37, Trevor Hart <mailto:tre...@ope.nz> wrote:







-- 
Life is a chess game - Anonymous.



What version are you looking for? Will V12 be okay?
 
 
 
 Thanks 
 
 Trevor
 
 
 
 
 
 
 
 
  On Sun, 14 Aug 2022 19:45:24 +1200 Giorgio Zoppi 
<mailto:giorgio.zo...@gmail.com> wrote ---
 
 
 
 Hello IOTDBers, 
 i am looking for a last version Tsfile with real or somewhat real data 
 for testing purposes. Is there anyone has to share such a file, with at 
 least 1k 
 records. Or do we have any in the test repo? 
 Best Regards, 
 Giorgio

Re: TsFile golden data

2022-08-14 Thread Trevor Hart
What version are you looking for? Will V12 be okay?



Thanks 

Trevor








 On Sun, 14 Aug 2022 19:45:24 +1200 Giorgio Zoppi  
wrote ---



Hello IOTDBers, 
I am looking for a latest-version TsFile with real, or somewhat real, data 
for testing purposes. Does anyone have such a file to share, with at least 1k 
records? Or do we have any in the test repo? 
Best Regards, 
Giorgio

Re: [DISCUSSION] The name of a UDF

2022-08-02 Thread Trevor Hart
In Windows OS that kind of action is referred to as "deduplication" or dedup.





Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039








 On Wed, 03 Aug 2022 01:47:08 +1200 Jialin Qiao  
wrote ---



Hi, 
 
change_points may be a good option. 
 
Refer to https://help.aliyun.com/document_detail/93197.html 
 
Thanks, 
Jialin Qiao 
 
On 2022/08/02 07:32:19 "mailto:18110526...@163.com"; wrote: 
> Hi everyone, 
> 
> We are developing a UDF to remove a sequence of consecutive identical values 
> (keeping only the first one), but have not come up with a proper name for it. 
> We can't name it ‘distinct' because it has a different meaning. Here are some 
> of the alternatives we came up with, and if you have a better one, we are 
> happy to discuss! 
> 
> 1.removeContinuous 
> 2.removeConsecutiveIdentity 
> 
> The issue related: 
> > https://github.com/apache/iotdb/issues/6751 
> > <https://github.com/apache/iotdb/issues/6751> 
> 
> Best, 
>  
> Weihao Li 
> 
>

JDBC vs Java API

2022-06-05 Thread Trevor Hart
Hello Team



Does anyone have any published benchmark results of JDBC vs the Java API?



Firstly, I'm aware of https://github.com/thulab/iotdb-benchmark but I don't see 
any published results for the various API methods.



I currently use JDBC for my non-realtime ingestion of data, and while I've never 
encountered any bottlenecks I am aware that the documentation says that JDBC 
is not recommended for high-velocity data.



I've done some very basic ingestion benchmarking tests of inserting 1 million 
rows and the Java API is around 2x faster. Is this the typical improvement 
between JDBC and the Java API?



For my simplistic test I am inserting 1 million rows of timestamp & 
incrementing row id, e.g. insert into root.sg1.d1(timestamp,s1) 
values(${DateTime.Now}, ${n})
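
The JDBC side of the test is essentially the loop below (host, path and credentials 
are placeholders); no batching, tablets or prepared statements, just single-row 
inserts timed end to end:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class JdbcIngestTimingSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.iotdb.jdbc.IoTDBDriver"); // assumed driver class name
    int rows = 1_000_000;
    try (Connection conn = DriverManager.getConnection("jdbc:iotdb://127.0.0.1:6667/", "root", "root");
         Statement stmt = conn.createStatement()) {
      long start = System.nanoTime();
      for (int n = 0; n < rows; n++) {
        // Same-millisecond timestamps overwrite each other, which is fine for a throughput test.
        stmt.execute("insert into root.sg1.d1(timestamp, s1) values(" + System.currentTimeMillis() + ", " + n + ")");
      }
      double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
      System.out.printf("%d rows in %.1f s (%.0f rows/s)%n", rows, seconds, rows / seconds);
    }
  }
}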



With JDBC I get around 6000 rows per second.



With the Java native API I get around 12000 rows per second using 
session.executeNonQueryStatement.



I assume insertTablets and insertRecord(s) would be even faster?



Thanks

Trevor Hart

Re: [DISCUSS] Release IoTDB v0.12.5

2022-02-17 Thread Trevor Hart




+1

Thanks
Trevor Hart
Ope Limited
w: www.ope.nz
m: +64212728039

 On Fri, 18 Feb 2022 16:01:48 +1300 HW-Chao Wang <576749...@qq.com.INVALID> wrote ---

Good, we can release a new version and resolve some bug fixes.
---Original--- From: "Steve Su"

Re: New feature - The ability of nesting expressions in an aggregation query

2021-12-13 Thread Trevor Hart
Thank you so much! I've been after this for a while now!



Very excited.



Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039






 On Mon, 13 Dec 2021 22:19:29 +1300 Eric Pai  wrote 




Dear all, 
 
A new feature, the ability of nesting expressions outside aggregation 
functions, has been finished in master branch. Now we can query like this: 
 
 *   Nested arbitrary expressions outside aggregation queries: select sum(s1) + 
sum(s2), -sum(s3), sum(s4) , sin(sum(s4) + cos(avg(s4)) + 1, sin(cos(sum(s5))) 
from root.sg.d1; 
 *   Nested arbitrary expressions in a GROUP BY query: select sum(s1) + 
sum(s2), -sum(s3), sum(s4) , sin(sum(s4) + cos(avg(s4)) + 1, sin(cos(sum(s5))) 
from root.sg.d1 GROUP BY([0, 9000), 1s); 
Next, I will move forward to implement the new feature under GROUP BY LEVEL and 
GROUP BY FILL query. 
For a brief introduction and example, please reference the user documents in 
https://iotdb.apache.org/UserGuide/Master/IoTDB-SQL-Language/DML-Data-Manipulation-Language.html#aggregate-query
 
JIRA: https://issues.apache.org/jira/browse/IOTDB-2091 
 
Thanks

Re: iotdbUI - GUI for executing queries

2021-11-24 Thread Trevor Hart
Hi



Here is a mac OS zip with the correct Open JDK 11 + JFX runtime.

 
https://ope.nz/public/iotdbUI_macos.zip



Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039






 On Thu, 25 Nov 2021 16:23:07 +1300 Xiangdong Huang  
wrote 



Hi Trevor, 
 
Thanks, I download the file and use JDK11 (MacOS). 
 
java version "11.0.5" 2019-10-15 LTS 
Java(TM) SE Runtime Environment 18.9 (build 11.0.5+10-LTS) 
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.5+10-LTS, mixed mode) 
 
When using ` java --module-path ./runtime/java/javafx/lib 
--add-modules ALL-MODULE-PATH -jar iotdbUI.jar`, 
there are errors: 
 
Graphics Device initialization failed for :  es2, sw 
Error initializing QuantumRenderer: no suitable pipeline found 
java.lang.RuntimeException: java.lang.RuntimeException: Error 
initializing QuantumRenderer: no suitable pipeline found 
at 
javafx.graphics/com.sun.javafx.tk.quantum.QuantumRenderer.getInstance(QuantumRenderer.java:280)
 
 
Is it caused by JDK/JavaFX version? 
 
Best, 
--- 
Xiangdong Huang 
School of Software, Tsinghua University 
 
 黄向东 
清华大学 软件学院 
 
Trevor Hart <mailto:tre...@ope.nz> wrote on Thu, 25 Nov 2021 at 10:49 am: 
> 
> Hello All, 
> 
> 
> 
> I know iotdb-web-workbench is nearing release but I thought I would share my 
> iotdbUI that I showed during my ApacheCon presentation. 
> 
> 
> 
> This is a basic portable Java based GUI for querying iotDB (using JDBC). I 
> find this easier to use compared to the CLI as it contains a graphical tree 
> of the hierarchy. 
> 
> 
> 
> https://ope.nz/public/iotdbUI.zip 
> 
> 
> 
> You can see a screenshot of the application here; 
> https://ope.nz/public/iotdbUI_screenshot.png 
> 
> 
> 
> Notes; 
> 
> 
> 
> 1. Open JDK runtime is included (or use your own, requires JFX lib) 
> 
> 2. To launch manually execute; javaw -module-path .\runtime\java\javafx\lib 
> --add-modules ALL-MODULE-PATH -jar iotdbUI.jar 
> 
> 3. if you are on Windows you can use the included executable that runs the 
> above command (it expects included bundled runtime). 
> 
> 
> 
> Any feedback is welcome. 
> 
> 
> Thanks 
> 
> Trevor Hart

iotdbUI - GUI for executing queries

2021-11-24 Thread Trevor Hart
Hello All,



I know iotdb-web-workbench is nearing release but I thought I would share my 
iotdbUI that I showed during my ApacheCon presentation.



This is a basic portable Java based GUI for querying iotDB (using JDBC). I find 
this easier to use compared to the CLI as it contains a graphical tree of the 
hierarchy.



https://ope.nz/public/iotdbUI.zip



You can see a screenshot of the application here; 
https://ope.nz/public/iotdbUI_screenshot.png



Notes;



1. Open JDK runtime is included (or use your own, requires JFX lib)

2. To launch manually execute; javaw --module-path .\runtime\java\javafx\lib 
--add-modules ALL-MODULE-PATH -jar iotdbUI.jar

3. if you are on Windows you can use the included executable that runs the 
above command (it expects included bundled runtime).



Any feedback is welcome.


Thanks 

Trevor Hart

Re: IoTDB-quality applying for subproject

2021-11-24 Thread Trevor Hart
Great news as I would love to contribute to it!



Thanks 

Trevor






 On Thu, 25 Nov 2021 15:15:03 +1300 陈 鹏宇  wrote 



Hi everyone, 
I'm on behalf of IoTDB-quality developers. We have recently decided to make our 
project open source, and formally apply it for a subproject of Apache IoTDB. 
IoTDB-quality is a collection of IoTDB UDFs designed for data quality 
diagnosis, including common statistics, anomaly detection, frequency analysis, 
regular expression processing, data quality analysis and data repairing. 
For more documentation, please visit the homepage of IoTDB-quality. 
https://thulab.github.io/iotdb-quality/ 
We are also hoping for contributions from community. 
 
Pengyu Chen 
Nov 25 2021

Re: New UDTF iotDBExtras

2021-11-02 Thread Trevor Hart
I've updated UDTFDistinctCount to allow sorting using the Time column (the 
pseudo "count" column)



This is useful where you only want to get the top 10 values (based on their 
occurrence count)



Example;



select distinct_count(temperature,'sort'='asc') from root.ln.wf01.wt01 limit 10;






 On Wed, 03 Nov 2021 05:53:02 +1300 Steve Su  
wrote 


Hi Trevor, 
 
Thanks for sharing the usage of UDTF. I have starred your GitHub repo :D 
 
> Besides, could we rename the column from time to count in UDF? 
 
Sadly, no. This can be a new feature. 
 
> Unless I am mistaken a time column must be returned via UDTF - if you look at 
> IoTDB-Quality UDTFDistinct it returns a incrementing Long value as the time 
> column. 
 
In our design, we do expect a UDTF to return an increasing long value column as 
a time column. But as you see, a non-increasing long value column is also okay. 
To be honest, I was very surprised to see this can work. 
 
> Initially I wanted to return Col1 (Distinct Values) and Col2 (Count as Int) 
> but at this release it looks like you have to return Col1 (Time) and Col2 (A 
> computed value) - so at this version it does not look possible to return more 
> than 1 computed value so using the time column is just a work around to get 
> the count in the results. Not ideal but gets the job done. 
 
In fact, many users wanted us to support UDTFs that can output multiple 
columns, but we haven't figured out how to define this feature: what the Java 
interface should look like, what the SQL should look like, and so on. 
 
I want to hear your suggestions :D 
 
Steve Su 
 
-- Original -- 
From: "mailto:dev@iotdb.apache.orgtre...@ope.nz"; <mailto:tre...@ope.nz>; 
Date: Tue, Nov 2, 2021 09:15 AM 
To: "dev"<mailto:dev@iotdb.apache.org>; 
Cc: 
"qiaojialin"<mailto:qiaojia...@apache.org>;"steveyurongsu"<mailto:steveyuron...@qq.com.invalid>;
 
Subject: Re: New UDTF iotDBExtras 
 
Hello Jialin 
 
 
 
Yes digital time does look better - this is how I will access it through JDBC 
anyway. I treat all times as Long. 
 
 
 
Unless I am mistaken a time column must be returned via UDTF - if you look at 
IoTDB-Quality UDTFDistinct it returns a incrementing Long value as the time 
column. 
 
 
 
The UTDF method to put the values together only accepts two parameters ie time 
and a value. 
 
 
 
Initially I wanted to return Col1 (Distinct Values) and Col2 (Count as Int) but 
at this release it looks like you have to return Col1 (Time) and Col2 (A 
computed value) - so at this version it does not look possible to return more 
than 1 computed value so using the time column is just a work around to get the 
count in the results. Not ideal but gets the job done. 
 
 
 
Thanks 
 
Trevor 
 
 
 
 
 
 
 On Tue, 02 Nov 2021 14:07:03 +1300 Jialin Qiao 
<mailto:qiaojia...@apache.org> wrote  
 
 
 
Hi, 
 
Digital time looks more intuitive. 
Besides, could we rename the column from time to count in UDF? 
@mailto:mailto:steveyuron...@qq.com.invalid 
<mailto:mailto:steveyuron...@qq.com.invalid> 
 
Thanks, 
— 
Jialin Qiao 
 
Trevor Hart <mailto:mailto:tre...@ope.nz> 于2021年11月2日周二 上午6:09写道: 
 
> I have put my first UDTF on GitHub - https://github.com/ope-nz/iotDBExtras 
> 
> 
> 
> The first function I have developed is called UDTFDistinctCount. It 
> returns the distinct values (similar to IoTDB-Quality UDTFDistinct) but it 
> includes the count of each distinct values (as the time column). 
> 
> 
> 
> Example query; 
> 
> 
> 
> IoTDB> select distinct_count(temperature) from root.ln.wf01.wt01 
> 
> 
> +-+-+ 
> 
> | 
> Time|distinct_count(root.ln.wf01.wt01.temperature)| 
> 
> 
> +-+-+ 
> 
> |1970-01-01T12:00:00.020+12:00| 
> 24.37| 
> 
> |1970-01-01T12:00:00.009+12:00| 
> 24.12| 
> 
> |1970-01-01T12:00:00.016+12:00| 
> 24.87| 
> 
> 
> +-+-+ 
> 
> 
> 
> Or as digital time; 
> 
> 
> 
> ++-+ 
> 
> 
> |Time|distinct_count(root.ln.wf01.wt01.temperature)| 
> 
> ++---------+ 
> 
> |  20|24.37| 
> 
> |   9|24.12| 
> 
> |  16|24.87| 
> 
> ++-+ 
> 
> 
> 
> 
> 
> Thanks 
> 
> Trevor Hart 
> 
> Ope Limited 
> 
> w: http://www.ope.nz/ 
> 
> m: +64212728039

Re: New UDTF iotDBExtras

2021-11-01 Thread Trevor Hart
Hello Jialin



Yes digital time does look better - this is how I will access it through JDBC 
anyway. I treat all times as Long.



Unless I am mistaken a time column must be returned via UDTF - if you look at 
IoTDB-Quality UDTFDistinct it returns a incrementing Long value as the time 
column.



The UDTF method to put the values together only accepts two parameters, i.e. time 
and a value.



Initially I wanted to return Col1 (Distinct Values) and Col2 (Count as Int) but 
at this release it looks like you have to return Col1 (Time) and Col2 (A 
computed value) - so at this version it does not look possible to return more 
than 1 computed value, so using the time column is just a workaround to get the 
count in the results. Not ideal, but it gets the job done.



Thanks 

Trevor






 On Tue, 02 Nov 2021 14:07:03 +1300 Jialin Qiao  
wrote 



Hi, 
 
Digital time looks more intuitive. 
Besides, could we rename the column from time to count in UDF? 
@mailto:steveyuron...@qq.com.invalid <mailto:steveyuron...@qq.com.invalid> 
 
Thanks, 
— 
Jialin Qiao 
 
Trevor Hart <mailto:tre...@ope.nz> wrote on Tue, 2 Nov 2021 at 6:09 am: 
 
> I have put my first UDTF on GitHub - https://github.com/ope-nz/iotDBExtras 
> 
> 
> 
> The first function I have developed is called UDTFDistinctCount. It 
> returns the distinct values (similar to IoTDB-Quality UDTFDistinct) but it 
> includes the count of each distinct values (as the time column). 
> 
> 
> 
> Example query; 
> 
> 
> 
> IoTDB> select distinct_count(temperature) from root.ln.wf01.wt01 
> 
> 
> +-+-+ 
> 
> | 
> Time|distinct_count(root.ln.wf01.wt01.temperature)| 
> 
> 
> +-+-+ 
> 
> |1970-01-01T12:00:00.020+12:00| 
> 24.37| 
> 
> |1970-01-01T12:00:00.009+12:00| 
> 24.12| 
> 
> |1970-01-01T12:00:00.016+12:00| 
> 24.87| 
> 
> 
> +-+-+ 
> 
> 
> 
> Or as digital time; 
> 
> 
> 
> ++-+ 
> 
> 
> |Time|distinct_count(root.ln.wf01.wt01.temperature)| 
> 
> ++-+ 
> 
> |  20|24.37| 
> 
> |   9|        24.12| 
> 
> |  16|24.87| 
> 
> ++-+ 
> 
> 
> 
> 
> 
> Thanks 
> 
> Trevor Hart 
> 
> Ope Limited 
> 
> w: http://www.ope.nz/ 
> 
> m: +64212728039

New UDTF iotDBExtras

2021-11-01 Thread Trevor Hart
I have put my first UDTF on GitHub - https://github.com/ope-nz/iotDBExtras



The first function I have developed is called UDTFDistinctCount. It returns the 
distinct values (similar to IoTDB-Quality UDTFDistinct) but it includes the 
count of each distinct values (as the time column).



Example query;



IoTDB> select distinct_count(temperature) from root.ln.wf01.wt01 

+-----------------------------+---------------------------------------------+
|                         Time|distinct_count(root.ln.wf01.wt01.temperature)|
+-----------------------------+---------------------------------------------+
|1970-01-01T12:00:00.020+12:00|                                        24.37|
|1970-01-01T12:00:00.009+12:00|                                        24.12|
|1970-01-01T12:00:00.016+12:00|                                        24.87|
+-----------------------------+---------------------------------------------+



Or as digital time;



+----+---------------------------------------------+
|Time|distinct_count(root.ln.wf01.wt01.temperature)|
+----+---------------------------------------------+
|  20|                                        24.37|
|   9|                                        24.12|
|  16|                                        24.87|
+----+---------------------------------------------+





Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/

m: +64212728039

Re: Upgrading to 0.12 & iotdb-engine.properties

2021-11-01 Thread Trevor Hart
Thanks Yuan!








 On Fri, 29 Oct 2021 08:48:57 +1300 Trevor Hart  wrote 



Hello Team 
 
 
 
I am finally upgrading to v0.12. I noticed that the new iotdb-engine.properties 
file has almost all options commented out (while my v0.11 does not). 
 
 
 
Is this okay to leave commented out unless I specifically changed a setting in 
v0.11? 
 
 
 
Thanks  
 
Trevor

Upgrading to 0.12 & iotdb-engine.properties

2021-10-28 Thread Trevor Hart
Hello Team



I am finally upgrading to v0.12. I noticed that the new iotdb-engine.properties 
file has almost all options commented out (while my v0.11 does not).



Is this okay to leave commented out unless I specifically changed a setting in 
v0.11?



Thanks 

Trevor

Where clause with wildcard?

2021-09-29 Thread Trevor Hart
I have this query that works.



select count(method) 

from root.org.logs.ags.ABC.device01

where method <> 'job'

group by ([now()-1W,now()),1H)



However if I introduce a wildcard all the results are zero. There is no error 
but all values are zero. Is this expected? If so is there any way to filter 
across all timeseries?


select count(method) 

from root.org.logs.ags.*.device01

where method <> 'job'

group by ([now()-1W,now()),1H)



Thanks 

Trevor

ApacheCon & iotDB

2021-06-07 Thread Trevor Hart
Good news everyone, I put in a submission for ApacheCon in the IOT stream which 
has been accepted.


I will be presenting how I moved an application from using a traditional RDBMS 
to IoTDB.



Very excited!


Thanks 

Trevor Hart

Re: [Discussion] The aggregation function name of getting extreme value

2021-06-01 Thread Trevor Hart
My vote is for #3 or "EXT_VALUE" to be consistent with "MAX_VALUE" etc




Thanks 

Trevor Hart

Ope Limited

w: http://www.ope.nz/









 On Wed, 02 Jun 2021 14:50:25 +1200 Xiangwei Wei  
wrote 


Hi guys, 
 
We plan to add a new aggregation function, whose function is to take the 
extreme value, that is, to return the maximum absolute value (the preferred 
positive value). The typical use scenario is the vibration amplitude of the 
bridge. 
It has been contributed by @Liu Yu ~[1]. 
 
One of the issues that we need to discuss with you is the name of this 
aggregate function. At present, it is called `ext`, for example, ext(s1). 
We should take a name which is intuitive and easy to understand. 
Alternative options: 
1. ext 
2. extreme 
3. ext_value 
 
 
 
[1] https://github.com/apache/iotdb/pull/3289 
-- 
Best, 
Xiangwei Wei

Re: High CPU Usage

2021-04-19 Thread Trevor Hart
Just reporting back on this topic again.



The server has been up ~11 days now and CPU is still sitting around 4% 
constantly. Very happy with that!


Thanks 

Trevor Hart

Re: Please submit your talks to the ApacheCons early

2021-04-15 Thread Trevor Hart
Hi Chris,



Do you have a link to the submission page for requirements etc?


Thanks 

Trevor Hart








 On Thu, 15 Apr 2021 23:21:49 +1200 Christofer Dutz 
 wrote 



Hi IoTDb folks, 
 
As the chair of the IoT Track for the NA/EU ApacheCon and Co-Chair of the Asia, 
I regularly review the submissions for the IoT tracks. 
 
I have noticed that there aren't any submissions from your project for the Asia 
ApacheCon. 
 
If you plan on submitting something, I strongly ask to you do it soon ... by 
submitting on one of the last days you eliminate the chance for coordination 
and are more likely to get a "Rejected". If you submit early, we can all work 
on getting the submissions into shape that the chance of getting accepted is 
higher. 
 
AAND you help getting my life a lot easier, because I don't have to do all 
the planning stuff in 2-3 days which I normally have to do. 
 
So ... if you plan on submitting: Do it ASAP ... keep in mind, you can always 
edit proposals, add/remove speakers and even withdraw a talk without any 
effort. 
 
Thanks, 
 
Chris

Re: High CPU Usage

2021-04-10 Thread Trevor Hart
0.11.3 RC3 seems much happier so far. There was an initial spike in CPU after I
upgraded to 11.3 but it has now settled down. I will keep an eye on it over the
next week - see attachment.

Thanks
Trevor Hart

 On Sat, 10 Apr 2021 15:45:36 +1200 tre...@ope.nz wrote ---

Thanks everyone for the replies, I have deployed 0.11.3 RC3. I will try that
before I try altering the level compaction.

Thanks
Trevor Hart

 On Fri, 09 Apr 2021 23:29:07 +1200 Xiangdong Huang wrote ---

And remember to set the strategy back after diagnosis...

---
Xiangdong Huang
School of Software, Tsinghua University

Jialin Qiao wrote on Fri, 9 Apr 2021 at 7:24 pm:

> Hi,
>
> This may be due to the Compaction. To verify the problem, change the
> compaction_strategy to NO_COMPACTION.
>
> This is fixed in the 0.11.3 RC3[1]. You could try this version :)
>
> [1] https://dist.apache.org/repos/dist/dev/iotdb/0.11.3/rc3
>
> Thanks,
> —
> Jialin Qiao
> School of Software, Tsinghua University
>
> Trevor Hart wrote on Fri, 9 Apr 2021 at 10:29 am:
>> Hi Team,
>>
>> I am still struggling with high CPU usage on V11.2. My CPU usage creeps
>> up after a few days and basically got to 100% today and was unusable.
>>
>> If I restart the iotDB server then its fine for a few days. This Linux
>> server has 2 cores and 4Gb RAM. The database is running 24x7.
>>
>> The data ingestion is typically small. Maybe 500 points per minute. I
>> can never see any data ingestion backing up unless the CPU is over 90%.
>>
>> Questions;
>>
>> 1. Does anyone else see this behaviour?
>> 2. Are there any maintenance tasks I should call daily? Should I be
>> calling Flush daily?
>> 3. Should I reduce the WAL threshold?
>> 4. I am using compaction_strategy=LEVEL_COMPACTION
>> 5. And have set enable_unseq_compaction=false
>>
>> Is there anything else I can try or do I just need bigger hardware?
>>
>> Thanks
>> Trevor
just need bigger hardware? >> >> >> >> >> >> Thanks >> Trevor >> >> >> 

Re: High CPU Usage

2021-04-09 Thread Trevor Hart
Thanks everyone for the replies, I have deployed 0.11.3 RC3. I will try that 
before I try altering the level compaction.


Thanks 

Trevor Hart








 On Fri, 09 Apr 2021 23:29:07 +1200 Xiangdong Huang  
wrote 



And remember to set the strategy back after diagnosis... 
 
--- 
Xiangdong Huang 
School of Software, Tsinghua University 
 
 
 
Jialin Qiao <mailto:qiaojia...@apache.org> wrote on Fri, 9 Apr 2021 at 7:24 pm: 
 
> Hi, 
> 
> This may be due to the Compaction. To verify the problem, change the 
> compaction_strategy 
> to NO_COMPACTION. 
> 
> This is fixed in the 0.11.3 RC3[1]. you could try this version :) 
> 
> [1] https://dist.apache.org/repos/dist/dev/iotdb/0.11.3/rc3 
> 
> Thanks, 
> — 
> Jialin Qiao 
> School of Software, Tsinghua University 
> 
> 
> 
> Trevor Hart <mailto:tre...@ope.nz> wrote on Fri, 9 Apr 2021 at 10:29 am: 
> 
>> Hi Team, 
>> 
>> I am still struggling with high CPU usage on V11.2. My CPU usage creeps 
>> up after a few days and basically got to 100% today and was unusable. 
>> 
>> If I restart the iotDB server then its fine for a few days. This Linux 
>> server has 2 cores and 4Gb RAM. The database is running 24x7. 
>> 
>> The data ingestion is typically small. Maybe a 500 points per minute. I 
>> can never see any data ingestion backing up unless the CPU is over 90%. 
>> 
>> Questions; 
>> 
>> 1. Does anyone else see this behaviour? 
>> 2. Are there any maintenance tasks I should call daily? Should I be 
>> calling Flush daily? 
>> 3. Should I reduce the WAL threshold? 
>> 4. I am using compaction_strategy=LEVEL_COMPACTION 
>> 5. And have set enable_unseq_compaction=false 
>> 
>> Is there anything else I can try or do I just need bigger hardware? 
>> 
>> 
>> 
>> 
>> 
>> Thanks 
>> Trevor 
>> 
>> 
>>

High CPU Usage

2021-04-08 Thread Trevor Hart
Hi Team,



I am still struggling with high CPU usage on V11.2. My CPU usage creeps up 
after a few days and basically got to 100% today and was unusable.



If I restart the iotDB server then its fine for a few days. This Linux server 
has 2 cores and 4Gb RAM. The database is running 24x7.



The data ingestion is typically small. Maybe 500 points per minute. I can 
never see any data ingestion backing up unless the CPU is over 90%.



Questions;



1. Does anyone else see this behaviour?

2. Are there any maintenance tasks I should call daily? Should I be calling 
Flush daily?

3. Should I reduce the WAL threshold?

4. I am using compaction_strategy=LEVEL_COMPACTION

5. And have set enable_unseq_compaction=false



Is there anything else I can try or do I just need bigger hardware?










Thanks 

Trevor

Re: start to release v0.11.3 RC2 and v0.12 RC1

2021-04-01 Thread Trevor Hart
+1 me too, very excited for v0.12.

Thanks
Trevor

 On Thu, 01 Apr 2021 20:08:27 +1300 neuyi...@163.com wrote ---

+1, Looking forward to the release of v0.12.
Thanks,
---
Houliang Qi
BONC, Ltd

On 04/1/2021 15:03, Yuxiang Song <1063877...@qq.com> wrote:
+1, look forward to the release of v0.12.
-- Original message -- From: "王超" (Chao Wang)

Re: Considering releasing 0.11.3?

2021-03-15 Thread Trevor Hart
Great news.



Thanks 

Trevor






 On Tue, 16 Mar 2021 13:58:22 +1300 Xiangdong Huang  
wrote 


Hi all, 
 
As all Opened PRs labeled with 0.11 are merged, I will begin to release 
0.11.3 today. 
 
Best, 
--- 
Xiangdong Huang 
School of Software, Tsinghua University 
 
 
 
Xiangdong Huang wrote on Tue, 2 Mar 2021 at 2:18 pm: 
 
> Hi, 
> 
> I see all Opened PRs that labeled with 0.11 are merged. 
> Any known bugs left for releasing 0.11.3? 
> 
> Best, 
> --- 
> Xiangdong Huang 
> School of Software, Tsinghua University 
> 
>

Re: Percentile?

2021-03-12 Thread Trevor Hart
Thanks for the replies. Yes, I was thinking about a UDF to support this. I will
look at the iotdb-quality UDF as that shows a percentile method as well as
distinct.

Thanks
Trevor

 On Sat, 13 Mar 2021 00:33:15 +1300 saint...@gmail.com wrote ---

Hi,

Computing an exact percentile may be a disaster as it needs to sort all data.
Therefore, we have to use some approximate algorithm.

As I know, There is a UDF library [1] that will implement that.

[1] https://thulab.github.io/iotdb-quality/

Best,
---
Xiangdong Huang
School of Software, Tsinghua University



Jialin Qiao wrote on Fri, 12 Mar 2021 at 6:07 pm:

> Hi,
>
> > Is sorting by a value other than time ever going to be supported?
>
> This could be a new feature, you could create an issue for this feature :)
>
>
> > Is a percentile function (eg PERCENTILE_DISC) on the road map?
>
> Percentile function could be implemented through UDF in 0.12.0-SNAPSHOT.
>
> Welcome to contribute this function as a built-in UDF in IoTDB. You could
> refer to [1]
>
>
> [1]
> http://iotdb.apache.org/UserGuide/Master/Operation%20Manual/UDF%20User%20Defined%20Function.html
>
> Thanks,
> --
> Jialin Qiao
> School of Software, Tsinghua University
>
>
> ----- Original Message -----
> *From:* "Trevor Hart" 
> *Sent:* 2021-03-12 17:10:05 (Friday)
> *To:* dev 
> *Cc:*
> *Subject:* Percentile?
>
> Hello,
>
> I'm trying to determine percentile values from sensor values.
>
> In a normal RDBM this can be achieved by sorting sensor values and then
> using OFFSET and LIMIT to determine the 90th percentile etc. However in
> iotDB we cant sort by sensor values
>
> Is sorting by a value other than time ever going to be supported?
>
> Is a percentile function (eg PERCENTILE_DISC) on the road map?
>
> Thanks
> Trevor
>
>
>


Percentile?

2021-03-12 Thread Trevor Hart
Hello,



I'm trying to determine percentile values from sensor values.



In a normal RDBMS this can be achieved by sorting sensor values and then using 
OFFSET and LIMIT to determine the 90th percentile etc. However in IoTDB we can't 
sort by sensor values.
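
For reference, the RDBMS technique I mean looks roughly like the sketch below, 
using H2 with made-up table and column names: sort the values and jump to the 
percentile row with LIMIT/OFFSET.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PercentileSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:sensors", "sa", "");
         Statement stmt = conn.createStatement()) {
      stmt.execute("CREATE TABLE readings (ts BIGINT, value DOUBLE)");
      for (int i = 1; i <= 100; i++) {
        stmt.execute("INSERT INTO readings VALUES (" + i + ", " + (i * 0.5) + ")");
      }
      long count;
      try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM readings")) {
        rs.next();
        count = rs.getLong(1);
      }
      long offset = (long) Math.ceil(0.90 * count) - 1; // 0-based row of the discrete 90th percentile
      try (ResultSet rs = stmt.executeQuery(
          "SELECT value FROM readings ORDER BY value LIMIT 1 OFFSET " + offset)) {
        rs.next();
        System.out.println("90th percentile: " + rs.getDouble(1)); // 45.0 for this sample data
      }
    }
  }
}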



Is sorting by a value other than time ever going to be supported?



Is a percentile function (eg PERCENTILE_DISC) on the road map?



Thanks 

Trevor