Hi there,
> I am using the Confluent JDBC sink connector to move data from Kafka to Oracle.
> My Avro value has 3 fields:
>  
> String Id, String name, bytes data
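>  
> As an Avro schema this is roughly the following (a sketch from memory;
> the record name "MyValue" is made up):
>  
>   {
>     "type": "record",
>     "name": "MyValue",
>     "fields": [
>       { "name": "Id",   "type": "string" },
>       { "name": "name", "type": "string" },
>       { "name": "data", "type": "bytes" }
>     ]
>   }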
>  
> My database table also has 3 columns:
>  
> ID      VARCHAR2(255 CHAR),
> NAME     VARCHAR2(255 CHAR),
> DATA    BLOB

> I am using these connector configs:
>  
>   "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
>   "auto.evolve": "false",
>   "value.converter": "io.confluent.connect.avro.AvroConverter",
>   "insert.mode": "insert",
>   "key.converter": "org.apache.kafka.connect.storage.StringConverter",
>   "dialect.name": "OracleDatabaseDialect",
>   "table.name.format": "MY_DB_TABLE",
>   "listeners": "0.0.0.0",
>   "topics": "MY-TOPIC",
>   "value.converter.schema.registry.url": "http://localhost:8081",
>   "connection.user": "xxxx",
>   "name": "my-connector-name",
>   "auto.create": "false",
>   "connection.url": "jdbc:oracle:thin:@XXXXX:16767/XXX",
>   "key.converter.schema.registry.url": "http://localhost:8081",
>   "pk.mode": "record_value",
>   "pk.fields": "ID"
>  
> 
> When I use "insert.mode": "insert" everything works fine, but when I
> switch to "upsert" I can only insert binary values (BLOBs) up to 32767
> bytes. For anything larger I get:
>   SQLException: ORA-01461: can bind a LONG value only for insert into
>   a LONG column
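>  
> If it helps: I suspect the problem is the MERGE statement the Oracle
> dialect generates for upsert. My reconstruction (a guess, not taken
> from the connector logs; the alias "incoming" is assumed) is roughly:
>  
>   MERGE INTO "MY_DB_TABLE" USING (
>     -- the BLOB is bound as a plain bytes parameter inside this subquery
>     SELECT ? "ID", ? "NAME", ? "DATA" FROM dual
>   ) incoming ON ("MY_DB_TABLE"."ID" = incoming."ID")
>   WHEN MATCHED THEN UPDATE SET
>     "MY_DB_TABLE"."NAME" = incoming."NAME",
>     "MY_DB_TABLE"."DATA" = incoming."DATA"
>   WHEN NOT MATCHED THEN INSERT (
>     "MY_DB_TABLE"."ID", "MY_DB_TABLE"."NAME", "MY_DB_TABLE"."DATA")
>   VALUES (incoming."ID", incoming."NAME", incoming."DATA")
>  
> Binding a byte[] outside a plain INSERT seems to make the Oracle JDBC
> driver fall back to a LONG RAW bind, which is capped at 32767 bytes;
> that would match exactly the threshold I am seeing.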
>  
> Do you have an idea how to solve this issue?

Many thx in advance 
Valentin
