Ryan,
I didn't see you put out a pull request in the last two weeks. Let me know if
you're actively working on this; if not, I can do my own patch.
Thanks,
Mike
On Wed, Aug 8, 2018 at 6:03 AM Mike Thomsen wrote:
> Yes. Use a custom validator.
>
> On Tue, Aug 7, 2018 at 1:58 PM Ryan Hendrickson wrote:
>
I have solved this problem using what you had provided (for the most part).
Here is how I ended up doing it.
1 processor group for keeping the OAuth token up to date every 55 minutes:
GenerateFlowFile -> InvokeHTTP -> PutDistributedMapCache
1 processor group to list files and upload them:
List File ->
Dave,
At one point (perhaps it is still true), in order to have a default
value, its type had to be the first type in the array. For optional
fields, the default is null, so try inverting your type arrays to match
the way they're done in the input schema; namely, ["null", "string"]
rather than ["string", "null"].
Hi Andy – this is the schema we’re getting from the database query:
"type":"record",
"name":"view_Logons",
"namespace":"any.data",
"fields":[
{"name":"id","type":["null","int"]},
{"name":"timestamp","type":["null","string"]},
{"name":"workstation","type":["null","string"]},
I'd also recommend avoiding that processor and instead moving to use
the record-oriented processors, readers, and writers.
On Thu, Aug 23, 2018 at 1:28 PM Andy LoPresto wrote:
>
> Hi Dave,
>
> Can you provide the two schemas (redact anything necessary). There is a way
> to specify an “optional” field
Hi Dave,
Can you provide the two schemas (redact anything necessary)? There is a way to
specify an “optional” field [1] by setting the type to an array of null and the
type you support. You can also specify a default value if you wish, which will
be set for records that do not contain a value there.
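For instance (just a sketch, not your actual schema; "source" is a made-up
field name), an optional field with a null default and a field with a non-null
default could be declared like:

{"name":"workstation","type":["null","string"],"default":null}
{"name":"source","type":["string","null"],"default":"unknown"}

Note that the default has to match the first type in the union, which is why
the second field lists "string" before "null".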
Hi - I've got a scenario where I'm trying to convert the implicit Avro schema
associated with a database record to a different Avro schema. I'm trying to use
the ConvertAvroSchema processor, but it won't validate because there is an
'unmapped' field that exists in the second schema but not the first.
I am having problems reading Hadoop sequence files produced from a combo of
MergeContent -> CreateHadoopSequenceFile -> PutHDFS. I have tried both a
small dataset and a larger dataset, and the result is the same. I can read a few
of the records, but it seems like the sequence files haven’t