I've enabled the Autoscaler in the Flink Kubernetes Operator for several
jobs, and since then I've been observing the following errors:

```
Message: 1 AutoscalerError: Failure executing: PUT at:
https://100.64.0.1:443/api/v1/namespaces/my-namespace/configmaps/autoscaler-my-job.
Message: Operation cannot be fulfilled on configmaps "autoscaler-my-job":
StorageError: invalid object, Code: 4, Key:
/registry/configmaps/my-namespace/autoscaler-my-job, ResourceVersion: 0,
AdditionalErrorMsg: Precondition failed: UID in precondition:
a77dcdcb-6fea-4c3e-bae5-ad876c67c50e, UID in object meta: . Received
status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[],
group=null, kind=configmaps, name=autoscaler-my-job,
retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status,
message=Operation cannot be fulfilled on configmaps "autoscaler-my-job":
StorageError: invalid object, Code: 4, Key:
/registry/configmaps/my-namespace/autoscaler-my-job, ResourceVersion: 0,
AdditionalErrorMsg: Precondition failed: UID in precondition:
a77dcdcb-6fea-4c3e-bae5-ad876c67c50e, UID in object meta: ,
metadata=ListMeta(_continue=null, remainingItemCount=null,
resourceVersion=null, selfLink=null, additionalProperties={}),
reason=Conflict, status=Failure, additionalProperties={}).
```

The affected jobs end up in FAILED state after that.

Do you have any idea why this is happening? Is there a way to resolve
these issues automatically? Ideally the operator would resolve the
conflict and carry on from there rather than failing...

Regards,

Salva
