After the iSCSI initiator logs in to the iSCSI target of the VDI,
getting a snapshot of that volume causes a segfault.
This patch fixes a false loop.
Signed-off-by: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
---
dog/vdi.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On Wed, Nov 26, 2014 at 10:50:46AM +0900, Saeki Masaki wrote:
For day-to-day backups, we repeatedly create and delete snapshots,
but the inode file remains even after the snapshot is deleted.
The dog vdi list command would read all inode files, even if the vdi/snapshot was
deleted.
This behavior is
On Mon, Nov 10, 2014 at 03:02:13PM +0100, Valerio Pachera wrote:
I see in a mail that Yuan applied the patch on 21 May 2014, so it
should be the default behaviour.
I can't find it in the git log, though.
It is not the default option because it is not backward compatible with old
clusters, meaning
At Wed, 3 Dec 2014 17:24:21 +0900,
Teruaki Ishizaki wrote:
After the iSCSI initiator logs in to the iSCSI target of the VDI,
getting a snapshot of that volume causes a segfault.
This patch fixes a false loop.
Signed-off-by: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
---
dog/vdi.c | 2 +-
Hi, Yuan
Thank you for your comment.
But SD_OP_READ_DEL_VDIS is already defined in internal_proto.h, as below:
---
#include "sheepdog_proto.h"
---
So, I think it need not be added again.
Regards,
Masaki Saeki.
(2014/12/03 18:14), Liu Yuan wrote:
On Wed, Nov 26, 2014 at 10:50:46AM +0900, Saeki
2014-12-03 12:24 GMT+03:00 Liu Yuan namei.u...@gmail.com:
It is not the default option because it is not backward compatible with old
clusters, meaning that you need to format your cluster first if you want to
enable it.
If I remember well, it can only be enabled by configure option for now.
2014-12-03 4:38 GMT+03:00 Vladislav Gorbunov vadi...@gmail.com:
What do you think about add
[Service]
Restart=on-abort
to the /usr/lib/systemd/system/sheepdog.service?
Good, but maybe on-failure?
--
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
jabber: v...@selfip.ru
--
sheepdog
2014-12-03 14:26 GMT+03:00 Vasiliy Tolstov v.tols...@selfip.ru:
2014-12-03 4:38 GMT+03:00 Vladislav Gorbunov vadi...@gmail.com:
What do you think about add
[Service]
Restart=on-abort
to the /usr/lib/systemd/system/sheepdog.service?
Good, but maybe on-failure?
Also needs to be added
On Wed, Dec 03, 2014 at 08:01:16PM +0900, Saeki Masaki wrote:
Hi, Yuan
Thank you for your comment.
But SD_OP_READ_DEL_VDIS is already defined in internal_proto.h, as below:
---
#include "sheepdog_proto.h"
---
So, I think it need not be added again.
Really? Am I missing something? At least
See http://jenkins.sheepdog-project.org:8080/job/sheepdog-build/561/changes
Changes:
[mitake.hitoshi] sheep/http: check correct variable
[mitake.hitoshi] script: fix systemd support
[mitake.hitoshi] dog: fix segfault bug of getting snapshot for
LOCK_STATE_SHARED VDI.
If set to on-failure, the service will be restarted when the process exits
with a non-zero exit code, e.g. on a bad configuration. If set to on-abort, the
service will be restarted only if the service process exits due to an
uncaught signal not specified as a clean exit status.
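Combining the suggestions in this thread, the restart policy could look like the fragment below. Note one assumption: this sketch uses a drop-in override directory so the vendor unit stays untouched, whereas the thread proposes editing /usr/lib/systemd/system/sheepdog.service directly.

```ini
# /etc/systemd/system/sheepdog.service.d/restart.conf  (drop-in path is an assumption)
[Service]
# restart on non-zero exit codes and on uncaught signals, per the thread's
# discussion of on-failure vs. on-abort
Restart=on-failure
```

After adding the file, run `systemctl daemon-reload` so systemd picks up the override.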
2014-12-03 21:26
At Wed, 3 Dec 2014 14:56:21 +0800,
[sender's name garbled by ISO-2022-JP mis-encoding] wrote:
The epoch won't increase if there are only gateway nodes in the cluster.
In this way, when the cluster restarts, it will always recover from the
latest epoch version, which
The current recovery process has a data-wipe bug. After an epoch which
consists of only gateway nodes, objects stored on dying nodes will be
wiped when the nodes rejoin the cluster. This patch solves the
problem by removing an invalid call of sd_store->cleanup() during
recovery completion.
Related
At Thu, 4 Dec 2014 16:05:39 +0900,
Hitoshi Mitake wrote:
The current recovery process has a data-wipe bug. After an epoch which
consists of only gateway nodes, objects stored on dying nodes will be
wiped when the nodes rejoin the cluster. This patch solves the
problem by removing an invalid