On Wed, 12 Oct 2016 05:41:26 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> Well, this is very dependent on the NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good performance.
>
netapp is purpose-built, NFS-based storage
>>And what about CIFS?
I can't comment on CIFS, I've never run VMs on it.
But what I do know is that it really depends on the CIFS
implementation version, both client && server.
Old CIFS versions (SMB 1.x, Windows 2003/XP) were a very bad protocol (very
chatty).
I know that the latest Windows 2012
Alexandre Derumier
Ingénieur système et stockage
Manager Infrastructure
Fixe : +33 3 59 82 20 10
125 Avenue de la république
59110 La Madeleine
[ https://twitter.com/OdisoHosting ] [ https://twitter.com/mindbaz ] [ https://www.linkedin.com/company/odiso ]
> I'm curious to see result of:
>
>>It seems to work :-)
Ok, so it shouldn't be too difficult to add|remove the LUN in activate|deactivate
volume :)
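As a rough illustration (in Python, with made-up helper names; the real plugin would implement the PVE storage plugin API in Perl, and the exact iscsiadm invocation is an assumption), activate|deactivate could map to commands like these:

```python
# Hypothetical sketch, NOT the actual PVE plugin API: the commands a
# storage plugin could run when activating or deactivating an iSCSI LUN
# on a node. Commands are built and printed, not executed.

def activate_lun_cmds(target: str) -> list[list[str]]:
    """Rescan the iSCSI session so a newly mapped LUN appears on this node."""
    return [["iscsiadm", "--mode", "node", "--targetname", target, "--rescan"]]

def deactivate_lun_cmds(device: str) -> list[list[str]]:
    """Drop the block device before the LUN is unmapped on the storage side."""
    return [["sh", "-c", f"echo 1 > /sys/block/{device}/device/delete"]]

for cmd in activate_lun_cmds("iqn.2016-10.org.example:storage"):
    print(" ".join(cmd))
```

The point of the split is that activation only needs a rescan (new LUNs show up by themselves), while deactivation must remove the stale block device so an offline node never sees a reference to a disappeared LUN.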
About NFS vs iSCSI, I think we should add both if we can,
because not everybody loves both NFS && iSCSI. (I'm already seeing the long
debate in
> >>NFS does not provide good IOPS and has the ability to bring down a
> >>node. IMHO NFS is only useful as a file storage server for backups and
> >>ISO images.
>
> Well, this is very dependent on the NFS server implementation quality.
> I'm running VMs on a NetApp SAN through NFS 4.1, and I have very good
On Mon, 10 Oct 2016 22:30:04 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> I'm afraid of the behaviour if a node is not reachable to do the delete or rescan.
> It seems difficult to manage with a big cluster.
> I'm curious to see result of:
>
It seems to work :-)
1) After login
On Tue, 11 Oct 2016 21:26:08 +0200 (CEST)
Dietmar Maurer wrote:
>
> Why not NFS? This would make everything easier. IMHO iSCSI is really clumsy.
>
NFS does not provide good IOPS and has the ability to bring down a
node. IMHO NFS is only useful as a file storage server for
> > VMs use KVM live backup feature, which is not available for containers.
> >
> I see. A workaround could be to make a clone of a snapshot which is
> then exposed through iSCSI. Would that be an idea?
Why not NFS? This would make everything easier. IMHO iSCSI is really clumsy.
On Tue, 11 Oct 2016 19:54:05 +0200 (CEST)
Dietmar Maurer wrote:
>
> VMs use KVM live backup feature, which is not available for containers.
>
I see. A workaround could be to make a clone of a snapshot which is
then exposed through iSCSI. Would that be an idea?
--
> > Besides, iSCSI has many other drawbacks, for example it is not possible
> > to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can have
> > all that functionality?
> >
> For what purpose do we need to be able to access a ZFS snapshot?
We need that for vzdump of containers
On Tue, 11 Oct 2016 17:39:02 +0200 (CEST)
Dietmar Maurer wrote:
>
> Besides, iSCSI has many other drawbacks, for example it is not possible
> to access ZFS snapshots over iSCSI. If we use ZFS/NFS instead, we can have
> all that functionality?
>
To what purpose is it needed
> I think this is highly hypothetical since a LUN at any point in time
> can only be active on one node (Proxmox, that is), so the whole operation
> is serializable, which means that every step will be controlled and can
> be rolled back. E.g. we are dealing with a deterministic state machine.
On Tue, 11 Oct 2016 08:33:02 +0200 (CEST)
Alexandre DERUMIER wrote:
>
> I agree with Dietmar.
>
> Think of a missed ssh command to a remote node (a small network timeout,
> for example) during a volume resize, for example.
>
> the LUN will still be there, but with the wrong size, and if
Hi,
This is a first try to implement live local storage migration
at the same time as the VM migration.
Some users have requested this feature:
https://forum.proxmox.com/threads/when-will-kvm-live-suspend-migration-on-zfs-work.29049/
This patch series implements it with one local disk.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 21 +++--
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index b80f8f1..bbee7cd 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -5800,16
This will create a new drive for the first local drive found,
and start the VM with this new drive.
An NBD server is started in QEMU and exposes the volume on a network port.
Signed-off-by: Alexandre Derumier
---
PVE/API2/Qemu.pm | 14 ++-
PVE/QemuServer.pm | 75
This allows migrating a local storage (only one disk for now) to a remote node storage.
When the VM starts on the target node, a new volume is created and exposed through
QEMU's embedded NBD server.
qemu drive-mirror is run on the source VM with the NBD server as target.
When drive-mirror is done, the source VM is
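The flow can be sketched as a QMP sequence (the host name, port, and drive ID `drive-virtio0` are placeholder assumptions; the patches issue the equivalent calls through PVE's QMP helpers). The first two commands run on the target VM, the third on the source VM:

```json
{"execute": "nbd-server-start", "arguments": {"addr": {"type": "inet", "data": {"host": "::", "port": "10809"}}}}
{"execute": "nbd-server-add", "arguments": {"device": "drive-virtio0", "writable": true}}
{"execute": "drive-mirror", "arguments": {"device": "drive-virtio0", "target": "nbd:targethost:10809:exportname=drive-virtio0", "sync": "full", "mode": "existing"}}
```

`mode: "existing"` matters here: the target volume was already allocated on the remote storage, so drive-mirror must write into it rather than create a new image.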
Because when we only write out the options which are true, we
break the ones which default to true (like the mkdir
option on directory storages, where we need the false value
to be written out explicitly).
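The failure mode is easy to demonstrate. A minimal illustration (in Python, not the actual Perl SectionConfig code; `mkdir`/`shared` defaults are taken from the text above):

```python
# Why serializing only truthy options loses information when an option
# defaults to true: the user's "mkdir: false" round-trips back to true.

DEFAULTS = {"mkdir": True, "shared": False}

def write_only_true(opts):
    # buggy: any option set to false is silently dropped from the config
    return {k: v for k, v in opts.items() if v}

def write_changed(opts):
    # fixed: write every value that differs from its default
    return {k: v for k, v in opts.items() if v != DEFAULTS[k]}

user = {"mkdir": False, "shared": False}

# simulate re-reading the config: defaults overlaid with what was written
reload_buggy = {**DEFAULTS, **write_only_true(user)}
reload_fixed = {**DEFAULTS, **write_changed(user)}
print(reload_buggy["mkdir"])  # True  -> user's choice was lost
print(reload_fixed["mkdir"])  # False -> preserved
```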
---
src/PVE/SectionConfig.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
On Tue, Oct 11, 2016 at 08:24:24AM +0200, Fabian Grünbichler wrote:
> On Tue, Oct 11, 2016 at 07:11:45AM +0200, Dietmar Maurer wrote:
> > > On October 10, 2016 at 1:04 PM Fabian Grünbichler
> > >
> > > wrote:
> > >
> > >
> > > this introduces two new options for
> poses another problem, which will be a problem for other things too and
> must be handled by fencing.
Not really that easy - we have a quorum system to handle most situations;
fencing is only required for some special cases.
>>Offline nodes are not a problem because when they get online their SCSI
>>bus will not have references to disappeared LUNs. Not reachable nodes
>>pose another problem, which will be a problem for other things too and
>>must be handled by fencing.
I agree with Dietmar.
Think of a missed ssh
On Tue, Oct 11, 2016 at 07:11:45AM +0200, Dietmar Maurer wrote:
> > On October 10, 2016 at 1:04 PM Fabian Grünbichler
> >
> > wrote:
> >
> >
> > this introduces two new options for non-volume mount points,
> > modeled after the way we define 'shared' storages:
> > -
On Tue, 11 Oct 2016 06:14:14 +0200 (CEST)
Dietmar Maurer wrote:
>
> Such things cannot work, because nodes can be offline (or worse, online
> but not reachable).
>
Offline nodes are not a problem because when they get online their SCSI
bus will not have references to