[ 
https://issues.apache.org/jira/browse/CALCITE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hujianhong updated CALCITE-5867:
--------------------------------
    Description: 
In java.sql.JDBCType there are three types for binary objects: BINARY, VARBINARY, and BLOB. However, Apache Calcite's SqlTypeName defines only two of them, BINARY and VARBINARY.
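
The gap can be seen directly from the two enums; a minimal sketch (both sets of constants referenced here already exist):
{code:java}
import java.sql.JDBCType;
import org.apache.calcite.sql.type.SqlTypeName;

// JDBC defines three binary-object types ...
JDBCType[] jdbcBinaryTypes = {JDBCType.BINARY, JDBCType.VARBINARY, JDBCType.BLOB};
// ... but Calcite's SqlTypeName currently covers only the first two;
// there is no SqlTypeName.BLOB.
SqlTypeName[] calciteBinaryTypes = {SqlTypeName.BINARY, SqlTypeName.VARBINARY};
{code}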

In addition, for BINARY and VARBINARY columns, JDBC code typically reads and writes the whole value as a byte array:

 
{code:java}
// write
PreparedStatement ps = conn.prepareStatement(sql);
ps.setBytes(1, bytes);           // the whole value is materialized as a byte[]
ps.executeUpdate();
// read
ResultSet rs = stmt.executeQuery(sql);
rs.next();
byte[] b1 = rs.getBytes(1);      // again fully loaded into memory
{code}

For the BLOB type, JDBC instead supports streaming reads and writes:
{code:java}
// write
PreparedStatement ps = conn.prepareStatement(sql);
ps.setBlob(1, inputStream);      // the driver consumes the stream incrementally
ps.executeUpdate();
// read
ResultSet rs = stmt.executeQuery(sql);
rs.next();
Blob b1 = rs.getBlob(1);         // or: InputStream in = rs.getBinaryStream(1);
{code}

Because a BLOB can be very large, it cannot be materialized all at once the way a BINARY or VARBINARY value is; it has to be read and written as a stream.
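
To illustrate what "read as a stream" means in practice, a minimal sketch (table, column, and output path are hypothetical) that copies the BLOB in fixed-size chunks so only one buffer is resident at a time:
{code:java}
// Assumes an open java.sql.Statement 'stmt'; the names below are hypothetical.
try (ResultSet rs = stmt.executeQuery("SELECT content FROM documents WHERE id = 1")) {
  if (rs.next()) {
    try (InputStream in = rs.getBinaryStream(1);
         OutputStream out = Files.newOutputStream(Paths.get("/tmp/blob-copy.bin"))) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);    // at most 8 KB of BLOB data held in memory at once
      }
    }
  }
}
{code}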

In our scenario, the storage layer supports all three of BINARY, VARBINARY, and BLOB, but because Apache Calcite has no native BLOB data type, our current approach is to map the storage layer's BLOB type to Calcite's BINARY/VARBINARY type so that BLOB columns can still be read and written through Calcite.
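
Roughly, that workaround looks like the following sketch, assuming a RelDataTypeFactory is available (the StorageColumn adapter and its accessors are hypothetical):
{code:java}
// Workaround sketch: storage-layer BLOB columns are forced onto Calcite's VARBINARY
// because SqlTypeName has no BLOB constant.
RelDataType toCalciteType(RelDataTypeFactory typeFactory, StorageColumn column) {
  switch (column.typeName()) {
    case "BINARY":
      return typeFactory.createSqlType(SqlTypeName.BINARY, column.length());
    case "VARBINARY":
    case "BLOB":   // mapped to VARBINARY, so values are handled as in-memory byte arrays
      return typeFactory.createSqlType(SqlTypeName.VARBINARY, column.length());
    default:
      throw new IllegalArgumentException("Unsupported type: " + column.typeName());
  }
}
{code}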

However, reading and writing BLOB objects as BINARY/VARBINARY values puts heavy pressure on server memory, easily causes OOM on the server, and is a serious threat to system stability. We therefore propose adding a BLOB type to Apache Calcite and supporting streamed reads and writes for it, so that BLOB objects no longer have to be materialized in one piece and the resulting OOM pressure on the system is avoided.

 

------------------

MySQL supports the BLOB data type: 
[https://dev.mysql.com/doc/refman/8.0/en/blob.html]

In addition, Avatica currently does not support streaming reads and writes for the BLOB data type. To work around this in our scenario, we support the MySQL protocol on the server side, so a standard MySQL client can read and write BLOB objects directly in streaming mode.
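
For example, the write path through a plain MySQL JDBC connection can stay streaming end to end; a minimal sketch (SQL and file path are hypothetical):
{code:java}
// Streaming write over a direct MySQL connection (no Avatica in the path).
try (PreparedStatement ps =
         conn.prepareStatement("INSERT INTO documents (id, content) VALUES (?, ?)");
     InputStream in = Files.newInputStream(Paths.get("/tmp/large-object.bin"))) {
  ps.setLong(1, 1L);
  ps.setBinaryStream(2, in);   // the driver streams the file instead of buffering a byte[]
  ps.executeUpdate();
}
{code}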

> Add BLOB data type in SqlTypeName 
> ----------------------------------
>
>                 Key: CALCITE-5867
>                 URL: https://issues.apache.org/jira/browse/CALCITE-5867
>             Project: Calcite
>          Issue Type: New Feature
>          Components: core
>            Reporter: hujianhong
>            Assignee: hujianhong
>            Priority: Major
>             Fix For: 1.35.0
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
