Hello,
As far as I can see from the directory structure, you are running 3
nodes on the same persistence storage.
So, for a manual restore you should do the following:
- stop the cluster;
- back up the whole `work/db` directory (e.g. using the mv command);
- copy everything from e.g. `snapshot_1/db` back into `work/db`;
- start the cluster.
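For illustration, assuming the defaults where the node's work directory
is $IGNITE_HOME/work and snapshots land in $IGNITE_HOME/work/snapshots,
the sequence might look roughly like this (paths are examples, adjust
them to your setup):

    # stop all Ignite nodes first (e.g. via your service manager)
    cd $IGNITE_HOME/work
    # keep the current persistence files in case something goes wrong
    mv db db.backup
    # put the snapshot's data in place of the old persistence files
    cp -r snapshots/snapshot_1/db db
    # now restart the nodes and, if needed, activate the cluster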
Hey guys,
Can someone please explain why snapshot restore doesn't work with control.sh?
Hey,
Did you get a chance to review my queries, please?
Hi,
So the way I am thinking to use it is: if we lose the EBS volume and we
need to restore the cluster state, I would have a secondary EBS volume as
my snapshot directory so I can restore from it.
It means the application would need to be restarted after the EBS data is
copied back to the work directory.
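For example (assuming the secondary EBS volume is mounted at
/mnt/snapshot-ebs, which is just an example path), I would sync each
snapshot over after it is created:

    # copy the finished snapshot to the secondary EBS volume
    rsync -a "$IGNITE_HOME/work/snapshots/snapshot_1/" /mnt/snapshot-ebs/snapshot_1/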
Hello,
You don't need to stop the cluster or delete/move any snapshot files
if you are using the restore procedure from control.sh, so the
following should work:
- create the snapshot;
- stop the caches you intend to restore;
- run ./control.sh --snapshot restore snapshot_1 --start
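In terms of commands, that flow would be roughly as follows (snapshot_1
is just the example name; how you stop or destroy the caches depends on
your application):

    # 1. take the snapshot on an active cluster
    ./control.sh --snapshot create snapshot_1
    # 2. stop/destroy the cache groups to be restored
    #    (the restore will not start while they still exist)
    # 3. restore the snapshot and start the restored cache groups
    ./control.sh --snapshot restore snapshot_1 --start
    # 4. optionally, check the progress of the restore
    ./control.sh --snapshot restore snapshot_1 --status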
Hi,
Could you please point out if I missed something?
Hey, thanks for your suggestions.
I tried restoring using control.sh but it doesn't seem to work. Below are
the steps:
1. Started 3 nodes and added data using a thick client
2. Created a snapshot with ./control.sh --snapshot create snapshot_1
3. Verified that the snapshot directory has data
4. Stop
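For context, the verification in step 3 can be as simple as listing the
snapshot's db directory (default location shown; adjust it if you changed
the snapshot path):

    # cache group folders and partition files should be present here
    ls $IGNITE_HOME/work/snapshots/snapshot_1/db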
Hello,
Your case looks correct to me; however, I'd like to mention some
important points that may help you:
- the directory structure of a snapshot is the same as that of the
Ignite native persistence, so you may back up the original cluster
node directory (for binary_data, marshaller and db)
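Roughly, the two layouts mirror each other like this (illustrative paths;
<consistentId> stands for the node's consistent ID and cache-myCache for
a cache group folder):

    # native persistence on the node:
    work/db/<consistentId>/cache-myCache/part-0.bin
    # the same files inside the snapshot:
    work/snapshots/snapshot_1/db/<consistentId>/cache-myCache/part-0.bin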
Hi,
After a few hiccups, I managed to restore the cluster state from the
snapshot. Please confirm the steps below look correct. If so, the
documentation page needs to be updated.
1. Create N nodes
2. Add some data to them
3. Create a snapshot
4. Stop all nodes (cluster)
5. Delete the binary_data, marshaller and db directories
Hi,
We are using Ignite 2.11.1 to experiment with Ignite snapshots. We tried
the steps mentioned on the page below to restore Ignite data from a
snapshot:
https://ignite.apache.org/docs/latest/snapshots/snapshots
But we get the below error when we start the cluster after copying the
data manually as mentioned on that page