gortiz commented on code in PR #16210:
URL: https://github.com/apache/pinot/pull/16210#discussion_r2200793757
##########
pinot-core/src/main/java/org/apache/pinot/core/util/SegmentProcessorAvroUtils.java:
##########
@@ -83,45 +88,90 @@ public static Schema convertPinotSchemaToAvroSchema(org.apache.pinot.spi.data.Sc
     for (FieldSpec fieldSpec : orderedFieldSpecs) {
       String name = fieldSpec.getName();
       DataType storedType = fieldSpec.getDataType().getStoredType();
+
+      SchemaBuilder.BaseFieldTypeBuilder<Schema> type;
+      if (fieldSpec.isNullable()) {
+        type = fieldAssembler.name(name).type().nullable();
+      } else {
+        type = fieldAssembler.name(name).type();
+      }
+
+      String logicalType = "pinot." + fieldSpec.getDataType().toString().toLowerCase(Locale.US);
       if (fieldSpec.isSingleValueField()) {
         switch (storedType) {
           case INT:
-            fieldAssembler = fieldAssembler.name(name).type().intType().noDefault();
+            fieldAssembler = type.intBuilder()
+                .prop("logicalType", logicalType)
+                .endInt()
+                .noDefault();
             break;
           case LONG:
-            fieldAssembler = fieldAssembler.name(name).type().longType().noDefault();
+            fieldAssembler = type.longBuilder()
+                .prop("logicalType", logicalType)
+                .endLong()
+                .noDefault();
             break;
           case FLOAT:
-            fieldAssembler = fieldAssembler.name(name).type().floatType().noDefault();
+            fieldAssembler = type.floatBuilder()
+                .prop("logicalType", logicalType)
+                .endFloat()
+                .noDefault();
             break;
           case DOUBLE:
-            fieldAssembler = fieldAssembler.name(name).type().doubleType().noDefault();
+            fieldAssembler = type.doubleBuilder()
+                .prop("logicalType", logicalType)
+                .endDouble()
+                .noDefault();
             break;
           case STRING:
-            fieldAssembler = fieldAssembler.name(name).type().stringType().noDefault();
+          case BIG_DECIMAL:
Review Comment:
Yes, and that is what I initially did, but it is not correct. The spec says:
> A decimal logical type annotates Avro bytes or fixed types. The byte array must contain the two's-complement representation of the unscaled integer value in big-endian byte order. The scale is fixed, and is specified using an attribute.

Using it here works (the method finishes), but our ingestion pipeline cannot read the value back correctly. IIRC (it was a while ago), the issue is that we end up reading the raw bytes and converting them to a string, which does not produce a BigDecimal-compatible string, so the subsequent string-to-BigDecimal conversion fails.
Given that I tried this weeks ago, I may be wrong. I can try again with the standard decimal logical type in the coming days.
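To make the quoted spec requirement concrete, here is a minimal JDK-only sketch (the class and method names are hypothetical, not from the PR) of how a decimal value is represented under the Avro decimal logical type, and why naively converting those bytes to a string does not yield a BigDecimal-compatible string:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class AvroDecimalSketch {
    // Encode per the Avro decimal spec: the two's-complement bytes of the
    // unscaled integer value in big-endian order; the scale is not in the
    // payload, it is fixed by a schema attribute.
    static byte[] encode(BigDecimal value, int scale) {
        return value.setScale(scale).unscaledValue().toByteArray();
    }

    // Decoding must reapply the schema's fixed scale to the unscaled value.
    static BigDecimal decode(byte[] bytes, int scale) {
        return new BigDecimal(new BigInteger(bytes), scale);
    }

    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("123.45");
        byte[] bytes = encode(original, 2);

        // Round-tripping through the unscaled-integer form works:
        System.out.println(decode(bytes, 2)); // prints 123.45

        // But interpreting the raw bytes as text does not produce a
        // BigDecimal-compatible string, matching the failure mode above:
        System.out.println(new String(bytes));
    }
}
```

A reader that treats these bytes as a string (instead of as an unscaled integer plus the schema's scale) will hand a garbage string to the string-to-BigDecimal conversion, which is consistent with the pipeline failure described above.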
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]