Hi,
I am trying to use the Flink SQL API to read JSON-formatted data from a Kafka topic.
My JSON schema is nested, like this:
{
  "type": "object",
  "properties": {
    "table": {
      "type": "string"
    },
    "str2": {
      "type": "string"
    },
    "obj1": {
      "type": "object",
      "properties": {
        "rkey": {
          "type": "string"
        },
        "val": {
          "type": "string"
        },
        "lastTime": {
          "type": "number"
        }
      },
      "required": ["lastTime", "rkey", "val"]
    },
    "obj2": {
      "type": "object",
      "properties": {
        "val": {
          "type": "string"
        },
        "lastTime": {
          "type": "number"
        }
      },
      "required": ["lastTime", "val"]
    }
  },
  "required": ["table", "str2", "obj1", "obj2"]
}

I defined a table schema like this:

Schema schemaDesc1 = new Schema()
        .......
        .field("tablestr", Types.STRING).from("table")
        .......
        .field("rkey", Types.STRING).from("rkey");


When I run a debug case, I get an error about the "rkey" field (the field
nested inside obj1):
"SQL validation failed. Table field 'rkey' was resolved to TableSource
return type field 'rkey', but field 'rkey' was not found in the return type
Row".

My question is: does the org.apache.flink.table.descriptors.Json format
support a nested JSON schema? If it does, how can I set the right format or
schema? If not, how can I apply the Flink SQL API to a nested JSON data
source?
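For reference, here is a minimal sketch of what I believe the nested
declaration would look like, assuming the nested objects should be declared
as named ROW types via org.apache.flink.api.common.typeinfo.Types rather
than flattening their fields to the top level (the field names come from
the JSON schema above; "schemaDesc" is just my variable name). Is this the
right direction?

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.descriptors.Schema;

// Sketch: declare each nested JSON object as a named ROW type,
// instead of mapping its inner fields at the top level.
Schema schemaDesc = new Schema()
        .field("tablestr", Types.STRING).from("table")
        .field("str2", Types.STRING)
        .field("obj1", Types.ROW_NAMED(
                new String[]{"rkey", "val", "lastTime"},
                Types.STRING, Types.STRING, Types.BIG_DEC))
        .field("obj2", Types.ROW_NAMED(
                new String[]{"val", "lastTime"},
                Types.STRING, Types.BIG_DEC));
```

If that works, I assume the nested field would then be accessed in SQL with
dot notation, e.g. SELECT obj1.rkey FROM myTable.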
