Hi Davide,
Thanks for the -A switch! I still need to learn some more Kubernetes. :)
So there is no network policy in the "some" namespace, but the -A switch
worked:
kubectl get networkpolicy -A
NAMESPACE          NAME              POD-SELECTOR     AGE
calico-apiserver   allow-apiserver   apiserver=true   232d
kubectl describe networkpolicy allow-apiserver -n calico-apiserver
Name:         allow-apiserver
Namespace:    calico-apiserver
Created on:   2022-05-25 16:39:58 +0200 CEST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     apiserver=true
  Allowing ingress traffic:
    To Port: 5443/TCP
    From: <any> (traffic not restricted by source)
  Not affecting egress traffic
  Policy Types: Ingress
But as you mentioned earlier, this one doesn't seem to be involved in the
issue.
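By the way, since kubectl get networkpolicy only lists the native Kubernetes
kind, I can also check for Calico-specific policies, which live in their own
CRDs (assuming the usual Calico CRD names; GlobalNetworkPolicy is
cluster-scoped, so no -A needed there):

kubectl get networkpolicies.crd.projectcalico.org -A
kubectl get globalnetworkpolicies.crd.projectcalico.org
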
CNI:
I guess it's Calico. A colleague of mine installed our Kubernetes cluster,
but I've found this config at /etc/cni/net.d/10-calico.conflist:
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 0,
      "nodename_file_optional": false,
      "log_level": "Info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "ipam": { "type": "calico-ipam", "assign_ipv4": "true", "assign_ipv6": "false" },
      "container_settings": {
        "allow_ip_forwarding": false
      },
      "policy": {
        "type": "k8s"
      },
      "kubernetes": {
        "k8s_api_root": "https://10.X.Y.1:443",   (I've changed this)
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "bandwidth",
      "capabilities": { "bandwidth": true }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    }
  ]
}
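To double-check that Calico really is the active CNI, the calico-node pods
should be visible. The namespace depends on whether it was installed from
manifests (kube-system) or via the Tigera operator (calico-system); the
operator ownerReference on the allow-apiserver policy above suggests the
latter, in which case a tigerastatus resource should exist too:

kubectl get pods -A -l k8s-app=calico-node
kubectl get tigerastatus
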
Aaand I have some other news: someone on the Bacula-users list is helping me
investigate the issue. She thinks it's a database issue: some INSERT runs
into an error. We'll see.
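If it's the INSERT quoted further down in this thread (the one failing with
"Data too long for column 'PluginName'"), my guess is that the PluginName
column of the RestoreObject table in the catalog is narrower than the
multi-line plugin string. Assuming a MySQL/MariaDB catalog named bacula (the
error format looks like MySQL's), the column definition can be checked with:

mysql -u bacula -p bacula -e "SHOW COLUMNS FROM RestoreObject LIKE 'PluginName';"
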
Best regards,
Zsolt
On Fri, Jan 13, 2023 at 5:16 AM Davide F. <[email protected]> wrote:
> Hello Zsolt,
>
> You’re really welcome
>
> On Thu, 12 Jan 2023 at 16:29 Zsolt Kozak <[email protected]> wrote:
>
>> Hello Davide!
>>
>> I really appreciate your kind help!
>>
>> kubectl get networkpolicy gave the following:
>>
>>
>> "No resources found in default namespace."
>>
>>
> This is because NetworkPolicy CRs are namespaced, so they are listed per
> namespace.
>
> You can use -A to list a given resource kind across all namespaces.
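> For example:
>
> kubectl get networkpolicy -A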
>
>>
>>
>> Actually I've tried to run the Kubernetes plugin in a so-called "some"
>> namespace, but there is no networkpolicy in the "some" NS either. (I've
>> changed the name of the NS to "some" here.)
>>
>>
>> kubectl get networkpolicy -n some
>> No resources found in some namespace.
>>
>>
> I’d suggest you have a look at the Kubernetes documentation about Network
> Policies:
>
> https://kubernetes.io/docs/concepts/services-networking/network-policies/
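>
> For instance, a minimal default-deny policy (purely a hypothetical sketch,
> not something from your cluster) would select every pod in the namespace
> and block their egress; that's the kind of thing that would break the
> backup pod's callback connection:
>
> kubectl apply -f - <<EOF
> apiVersion: networking.k8s.io/v1
> kind: NetworkPolicy
> metadata:
>   name: default-deny-egress
>   namespace: some
> spec:
>   podSelector: {}
>   policyTypes:
>   - Egress
> EOF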
>
> Another question: which CNI (Container Network Interface) are you using
> in your cluster?
>
>>
>>
>> Best regards,
>>
>> Zsolt
>>
>>
> Again, I’ll give it a try on my side and keep you updated.
>
> Best regards
>
> Davide
>
>
>>
>>
>> On Thu, Jan 12, 2023 at 8:00 AM Davide F. <[email protected]> wrote:
>>
>>> Hello Zsolt,
>>>
>>> Indeed, the NetworkPolicy you've provided doesn't seem to be involved in
>>> the issue you're facing.
>>>
>>> Let's keep trying to figure out what's going on with your setup.
>>>
>>> Could you run the commands below:
>>>
>>> kubectl get networkpolicy
>>>
>>>
>>> and if you get a result, run:
>>>
>>> kubectl describe networkpolicy <networkpolicy-name>
>>>
>>>
>>> In the meantime, I'll set up a "test" environment and see if I'm facing
>>> the same problem.
>>>
>>> I'll keep you updated.
>>>
>>> Best regards
>>>
>>> Davide
>>>
>>> On Wed, Jan 11, 2023 at 5:54 PM Zsolt Kozak <[email protected]> wrote:
>>>
>>>> Hi!
>>>>
>>>> Yes, but only a tiny one:
>>>>
>>>> kind: NetworkPolicy
>>>> apiVersion: networking.k8s.io/v1
>>>> metadata:
>>>>   name: allow-apiserver
>>>>   namespace: calico-apiserver
>>>>   ownerReferences:
>>>>     - apiVersion: operator.tigera.io/v1
>>>>       kind: APIServer
>>>>       name: default
>>>>       controller: true
>>>>       blockOwnerDeletion: true
>>>>   managedFields:
>>>>     - manager: operator
>>>>       operation: Update
>>>>       apiVersion: networking.k8s.io/v1
>>>> spec:
>>>>   podSelector:
>>>>     matchLabels:
>>>>       apiserver: 'true'
>>>>   ingress:
>>>>     - ports:
>>>>         - protocol: TCP
>>>>           port: 5443
>>>>   policyTypes:
>>>>     - Ingress
>>>> status: {}
>>>>
>>>> But I guess it's an allow policy, not a blocking one. (I'm somewhat new
>>>> to Kubernetes and not too familiar with network policies...)
>>>>
>>>> Best regards,
>>>> Zsolt
>>>>
>>>> On Wed, Jan 11, 2023 at 5:47 PM Davide F. <[email protected]> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Are you using some kind of network policy within your cluster ?
>>>>>
>>>>> Best,
>>>>>
>>>>> Davide
>>>>>
>>>>> On Wed, 11 Jan 2023 at 10:53 Zsolt Kozak <[email protected]> wrote:
>>>>>
>>>>>> Hello Davide!
>>>>>>
>>>>>> I am running the File Daemon on the master node, i.e. on the control
>>>>>> plane. It's vanilla Kubernetes, version 1.25.4.
>>>>>> No, the master node runs on the same subnet as the workers.
>>>>>>
>>>>>> I suspect it's some network issue.
>>>>>>
>>>>>> Best regards,
>>>>>> Zsolt
>>>>>>
>>>>>> On Wed, Jan 11, 2023 at 8:45 AM Davide F. <[email protected]> wrote:
>>>>>>
>>>>>>> Hello Kozak,
>>>>>>>
>>>>>>> I haven’t tried the k8s plugin yet, but let me try to understand what
>>>>>>> could be the root cause of your problem.
>>>>>>>
>>>>>>> Could you explain point 1 further, please?
>>>>>>> On which node are you running the File Daemon?
>>>>>>>
>>>>>>> Which version/flavor of Kubernetes are you using?
>>>>>>>
>>>>>>> Is it vanilla Kubernetes? OpenShift? Tanzu?
>>>>>>>
>>>>>>> Depending on your answer to the first question: do the master nodes
>>>>>>> run in a different subnet than the workers?
>>>>>>>
>>>>>>> Thanks for your feedback
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> Davide
>>>>>>>
>>>>>>> On Tue, 10 Jan 2023 at 21:12 Zsolt Kozak <[email protected]> wrote:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> I have some problems backing up Kubernetes PVCs with the Bacula
>>>>>>>> Kubernetes Plugin. (I asked on the bacula-users mailing list but got
>>>>>>>> no answer.)
>>>>>>>>
>>>>>>>> I am using the latest Bacula 13.0.1 from the community builds on
>>>>>>>> Debian Bullseye hosts.
>>>>>>>>
>>>>>>>> Backing up only the Kubernetes objects, excluding Persistent Volume
>>>>>>>> Claims (PVCs), works like a charm. I've installed the Kubernetes
>>>>>>>> plugin and the latest Bacula File Daemon on the master node (control
>>>>>>>> plane) of our Kubernetes cluster. Bacula can access the Kubernetes
>>>>>>>> cluster and back up every single object as YAML files.
>>>>>>>>
>>>>>>>> The interesting part comes when trying to back up a PVC...
>>>>>>>>
>>>>>>>> First of all, I managed to build my own Bacula Backup Proxy Pod
>>>>>>>> image from source, and it's deployed to our local Docker image
>>>>>>>> repository (repo). The Bacula File Daemon is configured properly, I
>>>>>>>> guess. The backup process started and the following things happened.
>>>>>>>>
>>>>>>>> 1. The Bacula File Daemon deployed the Bacula Backup Proxy Pod image
>>>>>>>> into the Kubernetes cluster, so the bacula-backup pod started.
>>>>>>>> 2. I got into the pod and could see the Baculatar application
>>>>>>>> started and running.
>>>>>>>> 3. The k8s_backend application started on the Bacula File Daemon
>>>>>>>> host (kubernetes.server) in two instances.
>>>>>>>> 4. From the bacula-backup pod I could verify that Baculatar could
>>>>>>>> connect to k8s_backend on the default port 9104
>>>>>>>> (kubernetes.server:9104).
>>>>>>>> 5. In the job's console messages in Bat I saw that the Bacula File
>>>>>>>> Daemon started to process the configured PVC and started to write a
>>>>>>>> pvc.tar, but then nothing happened.
>>>>>>>> 6. After the default 600-second timeout, the job was cancelled.
>>>>>>>> 7. It may be important that the Bacula File Daemon could not delete
>>>>>>>> the bacula-backup pod. (It could create it but not delete it; see
>>>>>>>> the inspection commands just below.)
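>>>>>>>>
>>>>>>>> The inspection commands mentioned in point 7 are plain kubectl, with
>>>>>>>> the pod and namespace names as above:
>>>>>>>>
>>>>>>>> kubectl describe pod bacula-backup -n namespace
>>>>>>>> kubectl logs bacula-backup -n namespace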
>>>>>>>>
>>>>>>>>
>>>>>>>> Could you please tell me what's wrong?
>>>>>>>>
>>>>>>>>
>>>>>>>> Here are some log excerpts. (I've changed some sensitive data.)
>>>>>>>>
>>>>>>>>
>>>>>>>> Bacula File Daemon configuration:
>>>>>>>>
>>>>>>>> FileSet {
>>>>>>>>   Name = "Kubernetes Set"
>>>>>>>>   Include {
>>>>>>>>     Options {
>>>>>>>>       signature = SHA512
>>>>>>>>       compression = GZIP
>>>>>>>>       Verify = pins3
>>>>>>>>     }
>>>>>>>>     Plugin = "kubernetes: \
>>>>>>>>       debug=1 \
>>>>>>>>       baculaimage=repo/bacula-backup:04jan23 \
>>>>>>>>       namespace=namespace \
>>>>>>>>       pvcdata \
>>>>>>>>       pluginhost=kubernetes.server \
>>>>>>>>       timeout=120 \
>>>>>>>>       verify_ssl=0 \
>>>>>>>>       fdcertfile=/etc/bacula/certs/bacula-backup.cert \
>>>>>>>>       fdkeyfile=/etc/bacula/certs/bacula-backup.key"
>>>>>>>>   }
>>>>>>>> }
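>>>>>>>>
>>>>>>>> As a side note: a quick way to exercise this FileSet without writing
>>>>>>>> to a volume is a bconsole estimate run (assuming the job resource is
>>>>>>>> simply named KubernetesBackup, as the job IDs below suggest):
>>>>>>>>
>>>>>>>> estimate job=KubernetesBackup listing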
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Bacula File Daemon debug log (excerpts):
>>>>>>>>
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/jobs/estimation_job.py:134 in processing_loop]
>>>>>>>> processing get_annotated_namespaced_pods_data:namespace:nrfound:0
>>>>>>>> DEBUG:[baculak8s/plugins/kubernetes_plugin.py:319 in
>>>>>>>> list_pvcdata_for_namespace] list pvcdata for namespace:namespace
>>>>>>>> pvcfilter=True estimate=False
>>>>>>>> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:108 in
>>>>>>>> pvcdata_list_namespaced] pvcfilter: True
>>>>>>>> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:112 in
>>>>>>>> pvcdata_list_namespaced] found:some-claim
>>>>>>>> DEBUG:[baculak8s/plugins/k8sbackend/pvcdata.py:127 in
>>>>>>>> pvcdata_list_namespaced] add pvc: {'name': 'some-claim', 'node_name':
>>>>>>>> None,
>>>>>>>> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
>>>>>>>> <baculak8s.entities.file_info.FileInfo object at 0x7ffaa55bfcc0>}
>>>>>>>> DEBUG:[baculak8s/jobs/estimation_job.py:165 in processing_loop]
>>>>>>>> processing list_pvcdata_for_namespace:namespace:nrfound:1
>>>>>>>> DEBUG:[baculak8s/jobs/estimation_job.py:172 in processing_loop]
>>>>>>>> PVCDATA:some-claim:{'name': 'some-claim', 'node_name': 'node1',
>>>>>>>> 'storage_class_name': 'nfs-client', 'capacity': '2Gi', 'fi':
>>>>>>>> <baculak8s.entities.file_info.FileInfo object at 0x7ffaa55bfcc0>}
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> I000041
>>>>>>>> Start backup volume claim: some-claim
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/jobs/job_pod_bacula.py:298 in prepare_bacula_pod]
>>>>>>>> prepare_bacula_pod:token=xx88M5oggQJ....4YDbSwBRxTOhT
>>>>>>>> namespace=namespace
>>>>>>>> DEBUG:[baculak8s/jobs/job_pod_bacula.py:136 in prepare_pod_yaml]
>>>>>>>> pvcdata: {'name': 'some-claim', 'node_name': 'node1',
>>>>>>>> 'storage_class_name':
>>>>>>>> 'nfs-client', 'capacity': '2Gi', 'fi':
>>>>>>>> <baculak8s.entities.file_info.FileInfo object at 0x7ffaa55bfcc0>}
>>>>>>>> DEBUG:[baculak8s/plugins/k8sbackend/baculabackup.py:102 in
>>>>>>>> prepare_backup_pod_yaml] host:kubernetes.server port:9104
>>>>>>>> namespace:namespace image:repo/bacula-backup:04jan23
>>>>>>>> job:KubernetesBackup.2023-01-04_21.05.03_10:410706
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> I000149
>>>>>>>> Prepare Bacula Pod on: node1 with: repo/bacula-backup:04jan23
>>>>>>>> <IfNotPresent> kubernetes.server:9104
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/jobs/job_pod_bacula.py:198 in
>>>>>>>> prepare_connection_server] prepare_connection_server:New
>>>>>>>> ConnectionServer:
>>>>>>>> 0.0.0.0:9104
>>>>>>>> DEBUG:[baculak8s/util/sslserver.py:180 in listen]
>>>>>>>> ConnectionServer:Listening...
>>>>>>>> DEBUG:[baculak8s/jobs/job_pod_bacula.py:307 in prepare_bacula_pod]
>>>>>>>> prepare_bacula_pod:start pod
>>>>>>>> INFO:[baculak8s/plugins/kubernetes_plugin.py:771 in
>>>>>>>> backup_pod_isready] backup_pod_status:isReady: False / 0
>>>>>>>> INFO:[baculak8s/plugins/kubernetes_plugin.py:771 in
>>>>>>>> backup_pod_isready] backup_pod_status:isReady: True / 1
>>>>>>>> DEBUG:[baculak8s/jobs/estimation_job.py:183 in _estimate_file]
>>>>>>>> {'name': 'some-claim', 'node_name': 'node1', 'storage_class_name':
>>>>>>>> 'nfs-client', 'capacity': '2Gi', 'fi':
>>>>>>>> <baculak8s.entities.file_info.FileInfo object at 0x7ffaa55bfcc0>}
>>>>>>>> DEBUG:[baculak8s/jobs/estimation_job.py:190 in _estimate_file]
>>>>>>>> file_info: {FileInfo
>>>>>>>> name:/@kubernetes/namespaces/namespace/persistentvolumeclaims/some-claim.tar
>>>>>>>> namespace:None type:F objtype:pvcdata cached:False}
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> C000079
>>>>>>>>
>>>>>>>> FNAME:/@kubernetes/namespaces/namespace/persistentvolumeclaims/some-claim.tar
>>>>>>>>
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> C000040
>>>>>>>> TSTAMP:1672861077 1672861077 1672861077
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> C000031
>>>>>>>> STAT:F 2147483648 0 0 100640 1
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> F000000
>>>>>>>> (EOD PACKET)
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/jobs/backup_job.py:77 in __backup_pvcdata]
>>>>>>>> backup_pvcdata:data recv
>>>>>>>> DEBUG:[baculak8s/io/log.py:110 in save_sent_packet] Sent Packet
>>>>>>>> C000005
>>>>>>>> DATA
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/util/sslserver.py:193 in handle_connection]
>>>>>>>> ConnectionServer:Connection from: ('192.168.XX.YY', 10541)
>>>>>>>> DEBUG:[baculak8s/util/sslserver.py:145 in gethello] ['Hello',
>>>>>>>> 'KubernetesBackup.2023-01-04_21.05.03_10', '410706']
>>>>>>>> DEBUG:[baculak8s/util/token.py:57 in check_auth_data]
>>>>>>>> AUTH_DATA:Token: xx88M5oggQJuGsPbtD........ohQjeU7PkA4YDbSwBRxTOhT
>>>>>>>> DEBUG:[baculak8s/util/token.py:59 in check_auth_data]
>>>>>>>> RECV_TOKEN_DATA:Token: xx88M5oggQJuGsPbtD....ohQjeU7PkA4YDbSwBRxTOhT
>>>>>>>> DEBUG:[baculak8s/util/sslserver.py:105 in authenticate]
>>>>>>>> ConnectionServer:Authenticated
>>>>>>>>
>>>>>>>> .... after timeout
>>>>>>>>
>>>>>>>> DEBUG:[baculak8s/jobs/job_pod_bacula.py:121 in
>>>>>>>> handle_pod_data_recv] handle_pod_data_recv:EOT
>>>>>>>> DEBUG:[baculak8s/util/sslserver.py:201 in handle_connection]
>>>>>>>> ConnectionServer:Finish - disconnect.
>>>>>>>> DEBUG:[baculak8s/jobs/backup_job.py:85 in __backup_pvcdata]
>>>>>>>> backup_pvcdata:logs recv
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Job messages:
>>>>>>>>
>>>>>>>> bacula-dir No prior or suitable Full backup found in catalog for
>>>>>>>> the current FileSet. Doing FULL backup.
>>>>>>>> The FileSet "Kubernetes Set" was modified on 2023-01-04 20:20:41,
>>>>>>>> this is after the last successful backup on 2023-01-04 19:19:49.
>>>>>>>> bacula-sd Ready to append to end of Volume "Full-XXX"
>>>>>>>> size=3,838,161,002
>>>>>>>> bacula-fd Connected to Storage at bacula.server:9103 with TLS
>>>>>>>> bacula-sd Volume "Full-XXXX" previously written, moving to end of
>>>>>>>> data.
>>>>>>>> bacula-dir Connected to Client "bacula-fd" at
>>>>>>>> kubernetes.server:9102 with TLS
>>>>>>>> Using Device "FileStorageEeyoreFull" to write.
>>>>>>>> Connected to Storage "InternalStorageFull" at bacula.server:9103
>>>>>>>> with TLS
>>>>>>>> Start Backup JobId 410830,
>>>>>>>> Job=KubernetesBackup.2023-01-04_21.05.03_10
>>>>>>>> bacula-fd kubernetes: Prepare Bacula Pod on: node with:
>>>>>>>> repo/bacula-backup:04jan23 kubernetes.server:9104
>>>>>>>> kubernetes: Processing namespace: namespace
>>>>>>>> kubernetes: Start backup volume claim: some-claim
>>>>>>>> kubernetes: Connected to Kubernetes 1.25 - v1.25.4.
>>>>>>>> bacula-dir
>>>>>>>> Error: Bacula Enterprise bacula-dir 13.0.1 (05Aug22):
>>>>>>>> Build OS: x86_64-pc-linux-gnu-bacula-enterprise
>>>>>>>> debian 11.2
>>>>>>>> JobId: 410830
>>>>>>>> Job: KubernetesBackup.2023-01-04_21.05.03_10
>>>>>>>> Backup Level: Full (upgraded from Differential)
>>>>>>>> Client: "bacula-fd" 13.0.1 (05Aug22)
>>>>>>>> x86_64-pc-linux-gnu-bacula-enterprise,debian,10.11
>>>>>>>> FileSet: "Kubernetes Set" 2023-01-04 20:20:41
>>>>>>>> Pool: "Full-Pool" (From Job FullPool override)
>>>>>>>> Catalog: "MyCatalog" (From Client resource)
>>>>>>>> Storage: "InternalStorageFull" (From Pool resource)
>>>>>>>> Scheduled time: 04-Jan-2023 21:05:03
>>>>>>>> Start time: 04-Jan-2023 21:27:04
>>>>>>>> End time: 04-Jan-2023 21:29:06
>>>>>>>> Elapsed time: 2 mins 2 secs
>>>>>>>> Priority: 10
>>>>>>>> FD Files Written: 23
>>>>>>>> SD Files Written: 0
>>>>>>>> FD Bytes Written: 52,784 (52.78 KB)
>>>>>>>> SD Bytes Written: 0 (0 B)
>>>>>>>> Rate: 0.4 KB/s
>>>>>>>> Software Compression: 100.0% 1.0:1
>>>>>>>> Comm Line Compression: 5.6% 1.1:1
>>>>>>>> Snapshot/VSS: no
>>>>>>>> Encryption: yes
>>>>>>>> Accurate: yes
>>>>>>>> Volume name(s): Full-XXXX
>>>>>>>> Volume Session Id: 43
>>>>>>>> Volume Session Time: 1672853724
>>>>>>>> Last Volume Bytes: 3,838,244,105 (3.838 GB)
>>>>>>>> Non-fatal FD errors: 3
>>>>>>>> SD Errors: 0
>>>>>>>> FD termination status: OK
>>>>>>>> SD termination status: SD despooling Attributes
>>>>>>>> Termination: *** Backup Error ***
>>>>>>>> Fatal error: catreq.c:680 Restore object create error.
>>>>>>>> bacula-fd
>>>>>>>> Error: kubernetes: PTCOMM cannot get packet header from backend.
>>>>>>>> bacula-dir Fatal error: sql_create.c:1273 Create db Object record
>>>>>>>> INSERT INTO RestoreObject
>>>>>>>> (ObjectName,PluginName,RestoreObject,ObjectLength,ObjectFullLength,ObjectIndex,ObjectType,ObjectCompression,FileIndex,JobId)
>>>>>>>> VALUES ('RestoreOptions','kubernetes: \n debug=1 \n
>>>>>>>> baculaimage=repo/bacula-backup:04jan23 \n
>>>>>>>> namespace=namespace \n pvcdata \n
>>>>>>>> pluginhost=kubernetes.server \n timeout=120 \n
>>>>>>>> verify_ssl=0 \n
>>>>>>>> fdcertfile=/etc/bacula/certs/bacula-backup.cert
>>>>>>>> \n
>>>>>>>> fdkeyfile=/etc/bacula/certs/bacula-backup.key','# Plugin
>>>>>>>> configuration file\n# Version 1\nOptPrompt=\"K8S config
>>>>>>>> file\"\nOptDefault=\"*None*\"\nconfig=@STR@\n\n
>>>>>>>> OptPrompt=\"K8S API server
>>>>>>>> URL/Host\"\nOptDefault=\"*None*\"\nhost=@STR@\n\nOptPrompt=\"K8S
>>>>>>>> Bearertoken\"\nOptDefault=\"*None*\"\ntoken=@STR@\n\nOptPrompt=\"K8S
>>>>>>>> API server cert verification\"\n
>>>>>>>> OptDefault=\"True\"\nverify_ssl=@BOOL@\n\nOptPrompt=\"Custom CA
>>>>>>>> Certs file to
>>>>>>>> use\"\nOptDefault=\"*None*\"\nssl_ca_cert=@STR@\n\nOptPrompt=\"Output
>>>>>>>> format when saving to file (JSON, YAML)\"\n
>>>>>>>> OptDefault=\"RAW\"\noutputformat=@STR@\n\nOptPrompt=\"The address
>>>>>>>> for listen to incoming backup pod
>>>>>>>> data\"\nOptDefault=\"*FDAddress*\"\nfdaddress=@STR@\n\n
>>>>>>>> OptPrompt=\"The port for opening socket for
>>>>>>>> listen\"\nOptDefault=\"9104\"\nfdport=@INT32@\n\nOptPrompt=\"The
>>>>>>>> endpoint address for backup pod to connect\"\n
>>>>>>>> OptDefault=\"*FDAddress*\"\npluginhost=@STR@\n\nOptPrompt=\"The
>>>>>>>> endpoint port to connect\"\nOptDefault=\"9104\"\n
>>>>>>>> pluginport=@INT32@\n\n',859,859,0,27,0,1,410830) failed. ERR=Data
>>>>>>>> too long for column 'PluginName' at row 1
>>>>>>>>
>>>>>>>> bacula-sd Sending spooled attrs to the Director. Despooling 8,214
>>>>>>>> bytes ...
>>>>>>>> bacula-fd
>>>>>>>> Error: kubernetes: Error closing backend. Err=Child exited with
>>>>>>>> code 1
>>>>>>>> Fatal error: kubernetes: Wrong backend response to JobEnd command.
>>>>>>>> bacula-sd Elapsed time=00:02:02, Transfer rate=659 Bytes/second
>>>>>>>> bacula-fd
>>>>>>>> Error: kubernetes: PTCOMM cannot get packet header from backend.
>>>>>>>>
>>>>>>>> Error: kubernetes: Cannot successfully start bacula-backup pod in
>>>>>>>> expected time!
>>>>>>>>
>>>>>>>> Error: kubernetes: Job already running in 'namespace' namespace.
>>>>>>>> Check logs or delete bacula-backup Pod manually.
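>>>>>>>>
>>>>>>>> Manual cleanup, as that last message suggests, is presumably just:
>>>>>>>>
>>>>>>>> kubectl delete pod bacula-backup -n namespace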
>>>>>>>>
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Zsolt
>>>>>>>>
>>>>>>>
_______________________________________________
Bacula-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-devel