[ https://issues.apache.org/jira/browse/NIFI-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201413#comment-17201413 ]

ZhangCheng commented on NIFI-6061:
----------------------------------

[~mattyb149] I didn't find any problem with CLOB, and I found that the record
field type gets converted to Array[Number], not Array[Byte], so I only fixed
the BLOB problem in the new class AbstractDatabaseAdapter.java. If there are
other problems with record and SQL types, we can add cases to the switch below:

{code:java}
void psSetValue(PreparedStatement ps, int index, Object value, int sqlType, int recordSqlType) throws SQLException, IOException {
    if (null == value) {
        ps.setNull(index, sqlType);
    } else {
        switch (sqlType) {
            case Types.BLOB:
                // resolve BLOB type for record fields
                if (Types.ARRAY == recordSqlType) {
                    // the record presents the BLOB as an Object[] of Numbers; copy it into a byte[]
                    Object[] objects = (Object[]) value;
                    byte[] byteArray = new byte[objects.length];
                    for (int k = 0; k < objects.length; k++) {
                        Object o = objects[k];
                        if (o instanceof Number) {
                            byteArray[k] = ((Number) o).byteValue();
                        }
                    }
                    try (InputStream inputStream = new ByteArrayInputStream(byteArray)) {
                        ps.setBlob(index, inputStream);
                    } catch (IOException e) {
                        throw new IOException("Unable to parse binary data " + value, e);
                    }
                } else {
                    try (InputStream inputStream = new ByteArrayInputStream(value.toString().getBytes())) {
                        ps.setBlob(index, inputStream);
                    } catch (IOException e) {
                        throw new IOException("Unable to parse binary data " + value, e);
                    }
                }
                break;
            // add other Types here to resolve their data
            default:
                ps.setObject(index, value, sqlType);
        }
    }
}
{code}
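
For example, if CLOB handling turned out to need the same treatment, a case could look roughly like the sketch below. This is only a hypothetical illustration, not part of the patch; it assumes the target driver accepts PreparedStatement#setClob(int, Reader) (JDBC 4.0) and that java.io.Reader/StringReader are imported:

{code:java}
                case Types.CLOB:
                    // hypothetical: write the record value as character data
                    try (Reader reader = new StringReader(value.toString())) {
                        ps.setClob(index, reader);
                    } catch (IOException e) {
                        throw new IOException("Unable to parse character data " + value, e);
                    }
                    break;
{code}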


> PutDatabaseRecord does not properly handle BLOB/CLOB fields
> -----------------------------------------------------------
>
>                 Key: NIFI-6061
>                 URL: https://issues.apache.org/jira/browse/NIFI-6061
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>            Reporter: Matt Burgess
>            Assignee: ZhangCheng
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> BLOB/CLOB fields in NiFi's Record API are returned from the record as 
> Object[Byte], but when PutDatabaseRecord tries to insert Object[] via 
> setObject(), the following error occurs:
> 2019-02-20 15:11:16,216 WARN [Timer-Driven Process Thread-10] 
> o.a.n.p.standard.PutDatabaseRecord 
> PutDatabaseRecord[id=0c84b9de-0169-1000-0164-3fbad7a17664] Failed to process 
> StandardFlowFileRecord[uuid=d739f432-0871-41bb-a0c9-d6ceeac68a6d,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1550690599998-1, container=default, 
> section=1], offset=1728, 
> length=251],offset=0,name=d739f432-0871-41bb-a0c9-d6ceeac68a6d,size=251] due 
> to org.postgresql.util.PSQLException: Can't infer the SQL type to use for an 
> instance of [Ljava.lang.Object;. Use setObject() with an explicit Types value 
> to specify the type to use.: 
> Somewhere in the value conversion/representation, PutDatabaseRecord would 
> likely need to create a java.sql.Blob object and transfer the bytes into it. 
> One issue I see is that the record field type has been converted to 
> Array[Byte], so the information that the field is a BLOB is lost by that 
> point. If this requires DB-specific code, we'd likely need to add a Database 
> Adapter property and delegate out to the various DB adapters.
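
Purely for illustration, the java.sql.Blob approach described in the issue might look like the sketch below. The connection handle {{conn}} and the surrounding conversion are assumptions, not code from the processor:

{code:java}
// hypothetical sketch: build a java.sql.Blob from the record's Object[] of byte values
Object[] objects = (Object[]) value;      // value as returned by the Record API
byte[] bytes = new byte[objects.length];
for (int i = 0; i < objects.length; i++) {
    bytes[i] = ((Number) objects[i]).byteValue();
}
Blob blob = conn.createBlob();            // Connection#createBlob, JDBC 4.0
blob.setBytes(1, bytes);                  // Blob offsets are 1-based
ps.setBlob(index, blob);
{code}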



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
