I have a batch pipeline that runs well with small inputs but fails on a
larger dataset.
Looking at Stackdriver, I see a fair number of errors like the following:

Request failed with code 400, will NOT retry:
https://dataflow.googleapis.com/v1b3/projects/cgs-nonprod/locations/us-central1/jobs/2017-08-03_13_06_11-1588537374036956973/workItems:reportStatus
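
For context, this is the sort of query I'd use to pull the surrounding
worker logs (the filter is my best guess at the Dataflow log schema, with
the job ID copied from the error above):

    gcloud logging read \
        'resource.type="dataflow_step" AND
         resource.labels.job_id="2017-08-03_13_06_11-1588537374036956973" AND
         severity>=ERROR' \
        --project=cgs-nonprod --limit=50

A few questions: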

How do I investigate to learn more about the cause?
Am I reading this correctly that it is the reason the pipeline failed?
Is this perhaps the result of memory pressure?
How would I monitor the running job to determine its memory needs? (The
naive plan I have in mind is sketched below.)
Is there a better place to ask what is likely a Dataflow-specific question?
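
On the memory question, the naive plan mentioned above would be to find the
worker VMs and watch usage by hand, along these lines (the name filter and
zone are guesses, and WORKER_INSTANCE_NAME is a placeholder):

    # list the GCE instances backing the job's workers
    gcloud compute instances list --project=cgs-nonprod \
        --filter='name~dataflow'

    # SSH into one and watch memory directly
    gcloud compute ssh WORKER_INSTANCE_NAME --zone=us-central1-f
    free -m   # overall memory on the worker
    top       # per-process usage (the Java harness in particular)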

Thanks in advance!
rdm
