[DRBD-user] DRBD + ZFS

2011-11-04 Thread Tobias Verbeke

Dear list,

We have set up a storage server
using Ubuntu + the ZFS kernel module and
serve filesystems over NFS.

Would DRBD be a solution to mirror this
system and move towards a high-availability
setup for storage, or would I need to
sacrifice ZFS?

Many thanks in advance for any pointers.

Best,
Tobias


[DRBD-user] DRBD + ZFS

2021-08-17 Thread Eric Robinson
I'm considering deploying DRBD between ZFS layers. The lowest layer RAIDZ will 
serve as the DRBD backing device. Then I would build another ZFS filesystem on 
top to benefit from compression. Any thoughts, experiences, opinions, positive 
or negative?

--Eric
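
For concreteness, a minimal sketch of the stack being described; the pool,
device, host, and address names below are hypothetical placeholders, not from
this thread:

  # Lowest layer: RAIDZ pool with a zvol as the DRBD backing device
  zpool create tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
  zfs create -V 500G tank/drbd0

  # Minimal DRBD resource on top of the zvol (hypothetical hosts/IPs)
  cat > /etc/drbd.d/r0.res <<'EOF'
  resource r0 {
      device    /dev/drbd0;
      disk      /dev/zvol/tank/drbd0;
      meta-disk internal;
      on node-a { address 10.0.0.1:7789; }
      on node-b { address 10.0.0.2:7789; }
  }
  EOF
  drbdadm create-md r0 && drbdadm up r0

  # Upper layer (the part in question): a second ZFS pool on the DRBD
  # device, created while this node is primary, to get compression
  zpool create upper /dev/drbd0
  zfs set compression=lz4 upper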







Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Noah Mehl
Tobias,

Yes, theoretically, you could use drbd in primary/secondary mode as the backing 
for ZFS.  But I would caution you against using ZFS on Linux.  THIS IS NOT 
MATURE, and the people who made it usable haven't worked on it since April, so 
it doesn't look like it's actively supported.  I would NEVER use this on a 
production system.  Consider using OpenIndiana and ZFS send and receive instead.
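
For reference, ZFS send/receive replication is snapshot-based rather than
continuous; a minimal sketch, with hypothetical pool and host names:

  # initial full copy to the standby host
  zfs snapshot tank/export@t1
  zfs send tank/export@t1 | ssh standby zfs receive -F tank/export
  # later runs ship only the delta between snapshots
  zfs snapshot tank/export@t2
  zfs send -i @t1 tank/export@t2 | ssh standby zfs receive tank/export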

~Noah

On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:

> Dear list,
> 
> We currently set up a storage server,
> using Ubuntu + ZFS kernel module and
> serve filesystems using NFS.
> 
> Would DRBD be a solution to mirror this
> system and move towards a high-availability
> setup for storage or would I need to
> sacrifice ZFS ?
> 
> Many thanks in advance for any pointer.
> 
> Best,
> Tobias




Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Alexandre Biancalana
FreeBSD has native, actively maintained ZFS. It also has hastd, which is
similar to drbd.

On Fri, Nov 4, 2011 at 3:41 PM, Noah Mehl  wrote:
> Tobias,
>
> Yes, theoretically, you could use drbd as primary/secondary as backing for 
> ZFS.  But I would caution you against using ZFS on linux.  THIS IS NOT 
> MATURE, and the people who made it usable haven't worked on it since April.  
> So, it doesn't look like it's actively supported.  I would NEVER use this on 
> a production system.  Instead consider using OpenIndiana and ZFS send and 
> receive instead.
>
> ~Noah
>
> On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:
>
>> Dear list,
>>
>> We currently set up a storage server,
>> using Ubuntu + ZFS kernel module and
>> serve filesystems using NFS.
>>
>> Would DRBD be a solution to mirror this
>> system and move towards a high-availability
>> setup for storage or would I need to
>> sacrifice ZFS ?
>>
>> Many thanks in advance for any pointer.
>>
>> Best,
>> Tobias


Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Tobias Verbeke

Hi Noah,

On 11/04/2011 06:41 PM, Noah Mehl wrote:


Yes, theoretically, you could use drbd as primary/secondary as backing for ZFS. 
 But I would caution you against using ZFS on linux.  THIS IS NOT MATURE, and 
the people who made it usable haven't worked on it since April.  So, it doesn't 
look like it's actively supported.  I would NEVER use this on a production 
system.  Instead consider using OpenIndiana and ZFS send and receive instead.


Many thanks for your insights. I saw some commit history at

https://github.com/zfsonlinux/zfs

but must admit I already experienced some surprising behaviour,
so thanks again for sharing your assessment!

Would ZFS send and receive be sufficient to take one system down
for maintenance while the other 'automagically' takes over serving
NFS, or would that require some extra tools that come with OpenIndiana?

Best,
Tobias


On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:


Dear list,

We currently set up a storage server,
using Ubuntu + ZFS kernel module and
serve filesystems using NFS.

Would DRBD be a solution to mirror this
system and move towards a high-availability
setup for storage or would I need to
sacrifice ZFS ?

Many thanks in advance for any pointer.

Best,
Tobias





Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Tobias Verbeke

On 11/04/2011 08:21 PM, Alexandre Biancalana wrote:


FreeBSD has native and active maintained ZFS. Also has hastd that is
similar to drbd.


Many thanks, Alexandre. The common theme is that
I need to get out of the Linux comfort zone.

Best,
Tobias


On Fri, Nov 4, 2011 at 3:41 PM, Noah Mehl  wrote:

Tobias,

Yes, theoretically, you could use drbd as primary/secondary as backing for ZFS. 
 But I would caution you against using ZFS on linux.  THIS IS NOT MATURE, and 
the people who made it usable haven't worked on it since April.  So, it doesn't 
look like it's actively supported.  I would NEVER use this on a production 
system.  Instead consider using OpenIndiana and ZFS send and receive instead.

~Noah

On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:


Dear list,

We currently set up a storage server,
using Ubuntu + ZFS kernel module and
serve filesystems using NFS.

Would DRBD be a solution to mirror this
system and move towards a high-availability
setup for storage or would I need to
sacrifice ZFS ?

Many thanks in advance for any pointer.

Best,
Tobias




Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Nick Khamis
For a premature filesystem with a lot more potential, I would go with Ceph.

Nick.

On Fri, Nov 4, 2011 at 3:54 PM, Tobias Verbeke  wrote:
> Hi Noah,
>
> On 11/04/2011 06:41 PM, Noah Mehl wrote:
>
>> Yes, theoretically, you could use drbd as primary/secondary as backing for
>> ZFS.  But I would caution you against using ZFS on linux.  THIS IS NOT
>> MATURE, and the people who made it usable haven't worked on it since April.
>>  So, it doesn't look like it's actively supported.  I would NEVER use this
>> on a production system.  Instead consider using OpenIndiana and ZFS send and
>> receive instead.
>
> Many thanks for your insights. I saw some commit history at
>
> https://github.com/zfsonlinux/zfs
>
> but must admit I already experienced some surprising behaviour,
> so thanks again for sharing your assessment!
>
> Would ZFS send and receive be sufficient to take one system down
> for maintenance while the other 'automagically' takes over serving
> NFS or would that require some extra tools that come with OpenIndiana ?
>
> Best,
> Tobias
>
>> On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:
>>
>>> Dear list,
>>>
>>> We currently set up a storage server,
>>> using Ubuntu + ZFS kernel module and
>>> serve filesystems using NFS.
>>>
>>> Would DRBD be a solution to mirror this
>>> system and move towards a high-availability
>>> setup for storage or would I need to
>>> sacrifice ZFS ?
>>>
>>> Many thanks in advance for any pointer.
>>>
>>> Best,
>>> Tobias


Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Noah Mehl
Nick,

To be clear, if you're referring to ZFS on Linux as premature, then yes, I 
would agree.  If you're saying that ZFS is premature on Solaris, or even 
FreeBSD, you would probably be very mistaken.  Either way, this isn't about 
drbd…

~Noah

On Nov 4, 2011, at 4:15 PM, Nick Khamis wrote:

> For a premature filesystem with a lot more potential, I would go ceph
> 
> Nick.
> 
> On Fri, Nov 4, 2011 at 3:54 PM, Tobias Verbeke  
> wrote:
>> Hi Noah,
>> 
>> On 11/04/2011 06:41 PM, Noah Mehl wrote:
>> 
>>> Yes, theoretically, you could use drbd as primary/secondary as backing for
>>> ZFS.  But I would caution you against using ZFS on linux.  THIS IS NOT
>>> MATURE, and the people who made it usable haven't worked on it since April.
>>>  So, it doesn't look like it's actively supported.  I would NEVER use this
>>> on a production system.  Instead consider using OpenIndiana and ZFS send and
>>> receive instead.
>> 
>> Many thanks for your insights. I saw some commit history at
>> 
>> https://github.com/zfsonlinux/zfs
>> 
>> but must admit I already experienced some surprising behaviour,
>> so thanks again for sharing your assessment!
>> 
>> Would ZFS send and receive be sufficient to take one system down
>> for maintenance while the other 'automagically' takes over serving
>> NFS or would that require some extra tools that come with OpenIndiana ?
>> 
>> Best,
>> Tobias
>> 
>>> On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:
>>> 
 Dear list,
 
 We currently set up a storage server,
 using Ubuntu + ZFS kernel module and
 serve filesystems using NFS.
 
 Would DRBD be a solution to mirror this
 system and move towards a high-availability
 setup for storage or would I need to
 sacrifice ZFS ?
 
 Many thanks in advance for any pointer.
 
 Best,
 Tobias


Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Arnold Krille
On Friday 04 November 2011 21:15:04 Nick Khamis wrote:
> For a premature filesystem with a lot more potential, I would go ceph

Have any of you tested XtreemFS yet? (www.xtreemfs.org)
It aims to be a mix of gfs2/ocfs2 and drbd, or like Lustre but with a looser 
sync model and thus suitable for low-bandwidth connections.

Have a nice weekend,

Arnold




Re: [DRBD-user] DRBD + ZFS

2011-11-04 Thread Noah Mehl
Tobias,

I guess my biggest question is why you want ZFS in the first place.  If you 
don't need compression and deduplication, then the easiest approach (I'm 
assuming, since you know Linux) would be a redundant DRBD cluster with NFS.  
Check out this guide for that here:

http://www.linbit.com/en/education/tech-guides/highly-available-nfs-with-drbd-and-pacemaker/
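
Schematically, that kind of stack in crm shell terms looks roughly like the
following; the resource names, device, mount point, and IP are hypothetical,
and the linked guide is authoritative:

  crm configure primitive p_drbd ocf:linbit:drbd params drbd_resource=r0 \
      op monitor interval=15s
  crm configure ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true
  crm configure primitive p_fs ocf:heartbeat:Filesystem \
      params device=/dev/drbd0 directory=/srv/nfs fstype=ext4
  crm configure primitive p_ip ocf:heartbeat:IPaddr2 params ip=192.168.0.100
  crm configure group g_nfs p_fs p_ip
  crm configure colocation col_nfs inf: g_nfs ms_drbd:Master
  crm configure order ord_nfs inf: ms_drbd:promote g_nfs:start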

I'm sure the people on this list would be happy to help you achieve that goal.  
If you are more interested in ZFS specific features, feel free to email me 
off-list.  Thanks.

~Noah

On Nov 4, 2011, at 3:57 PM, Tobias Verbeke wrote:

> Hi Noah,
> 
> On 11/04/2011 06:41 PM, Noah Mehl wrote:
> 
>> Yes, theoretically, you could use drbd as primary/secondary as backing for 
>> ZFS.  But I would caution you against using ZFS on linux.  THIS IS NOT 
>> MATURE, and the people who made it usable haven't worked on it since April.  
>> So, it doesn't look like it's actively supported.  I would NEVER use this on 
>> a production system.  Instead consider using OpenIndiana and ZFS send and 
>> receive instead.
> 
> Many thanks for your insights. I saw some commit history at
> 
> https://github.com/zfsonlinux/zfs
> 
> but must admit I already experienced some surprising behaviour,
> so thanks again for sharing your assessment!
> 
> Would ZFS send and receive be sufficient to take one system down
> for maintenance while the other 'automagically' takes over serving
> NFS or would that require some extra tools that come with OpenIndiana ?
> 
> Best,
> Tobias
> 
>> On Nov 4, 2011, at 1:31 PM, Tobias Verbeke wrote:
>> 
>>> Dear list,
>>> 
>>> We currently set up a storage server,
>>> using Ubuntu + ZFS kernel module and
>>> serve filesystems using NFS.
>>> 
>>> Would DRBD be a solution to mirror this
>>> system and move towards a high-availability
>>> setup for storage or would I need to
>>> sacrifice ZFS ?
>>> 
>>> Many thanks in advance for any pointer.
>>> 
>>> Best,
>>> Tobias


Re: [DRBD-user] DRBD + ZFS

2011-11-05 Thread Robert Krig

If you are setting up a storage server, go with what you know and what
has been tried and tested.
A high-volume storage system is not the place to be testing out unstable
or experimental implementations of filesystems.

Well, unless you don't mind backing everything up and reformatting
further down the line once you run into trouble.

Also, seriously consider what you actually need vs. what you want.

The simplest setup with DRBD is a primary/secondary node setup. You can
use a common filesystem like ext3/4 or XFS, and you have one node which
is active and another which is a standby. Switching over can be done
manually in just a few quick commands.
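
Roughly, with a hypothetical resource name and mount point:

  # on the current primary
  umount /mnt/data
  drbdadm secondary r0
  # on the standby
  drbdadm primary r0
  mount /dev/drbd0 /mnt/data
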
If you need automatic failover you might want to check out heartbeat or
pacemaker in conjunction with drbd.

If you want both nodes to be available simultaneously, then you have to
go with a primary/primary setup, which also requires you to use a
cluster-aware filesystem such as GFS or OCFS2. This adds another layer
of complexity to your setup, especially if something goes wrong.

If you don't need fast random-access throughput on tons of small files,
then you might want to take a look at GlusterFS. The advantage of
GlusterFS is that it works with pretty much any filesystem. You
designate a directory to be a GlusterFS export, then you mount that
export on a local or remote machine, and all file operations done
through that mountpoint get replicated/distributed to all your nodes.
If GlusterFS fails to work, you still have your files in the original
directory to work with.
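
The basic GlusterFS workflow, schematically (host, volume, and path names
are hypothetical):

  # after peering the nodes (gluster peer probe node2), on either node:
  gluster volume create gv0 replica 2 node1:/data/brick node2:/data/brick
  gluster volume start gv0
  # mount the volume; writes through this mountpoint hit both bricks
  mount -t glusterfs node1:/gv0 /mnt/gv0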

However, GlusterFS is definitely NOT recommended for serving a lot of
small files.

Anyway, food for thought.





Re: [DRBD-user] DRBD + ZFS

2021-08-19 Thread Emmanuel Florac
On Wed, 18 Aug 2021 03:39:01 +
Eric Robinson  wrote:

> I'm considering deploying DRBD between ZFS layers. The lowest layer
> RAIDZ will serve as the DRBD backing device. Then I would build
> another ZFS filesystem on top to benefit from compression. Any
> thoughs, experiences, opinions, positive or negative?

But isn't ZFS implementing its own replication layer? Why go the DRBD
route instead?

-- 

Emmanuel Florac |   Direction technique
|   Intellique
|   
|   +33 1 78 94 84 02





Re: [DRBD-user] DRBD + ZFS

2021-08-19 Thread Eric Robinson
> -Original Message-
> From: Emmanuel Florac 
> Sent: Thursday, August 19, 2021 8:16 AM
> To: Eric Robinson 
> Cc: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] DRBD + ZFS
>
> On Wed, 18 Aug 2021 03:39:01 +
> Eric Robinson  wrote:
>
> > I'm considering deploying DRBD between ZFS layers. The lowest layer
> > RAIDZ will serve as the DRBD backing device. Then I would build
> > another ZFS filesystem on top to benefit from compression. Any
> > thoughs, experiences, opinions, positive or negative?
>
> But isn't ZFS implementing its own replication layer? Why go the DRBD route
> instead?
>

I'm not well read on ZFS, but I believe ZFS uses periodic scheduled 
replication, not real-time block-level replication. ZFS does not have an 
equivalent to DRBD protocol C. Am I mistaken?
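
For context, protocol C is DRBD's synchronous mode, selected in the resource
configuration; a fragment, with a hypothetical resource name:

  resource r0 {
      net {
          protocol C;  # a write completes only once it is on both nodes' disks
      }
  }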

--Eric




Re: [DRBD-user] DRBD + ZFS

2021-08-19 Thread 🐧 𝓡𝓪𝓫𝓲𝓷 𝓨𝓪𝓼𝓱𝓪𝓻𝔃𝓪𝓭𝓮𝓱𝓮
Not sure ZFS is the right choice as an underlay for a resource;
it is powerful but also complex (as a code base), which will probably
make it slow.

Unless you are going to expose the ZVOL or the dataset directly to be
consumed, stacking ZFS over DRBD over ZFS seems to me like a bad idea.



Rabin


On Wed, 18 Aug 2021 at 09:37, Eric Robinson  wrote:

> I’m considering deploying DRBD between ZFS layers. The lowest layer RAIDZ
> will serve as the DRBD backing device. Then I would build another ZFS
> filesystem on top to benefit from compression. Any thoughs, experiences,
> opinions, positive or negative?
>
>
>
> --Eric


Re: [DRBD-user] DRBD + ZFS

2021-08-19 Thread Denis

On 18/08/21 05:39, Eric Robinson wrote:


I’m considering deploying DRBD between ZFS layers. The lowest layer 
RAIDZ will serve as the DRBD backing device.




Hi Eric,

I'm using drbd over zfs RAID 10 on SSDs without problems.

As far as I know, RAIDZ performs badly with block devices, so I advise 
RAID 10 instead.


Then I would build another ZFS filesystem on top to benefit from 
compression. Any thoughs, experiences, opinions, positive or negative?


--Eric


You don't need this; zfs on the first layer already compresses the data.

Furthermore, zfs works better with physical disks.


Emmanuel Florac: zfs replication is asynchronous. Even if you set it to 
run every minute, you always lose some data after a node failure.


Denis



Re: [DRBD-user] DRBD + ZFS

2021-08-20 Thread Eric Robinson
My main motivation is the desire for a compressed filesystem. I have 
experimented with using VDO for that purpose and it works, but the setup is 
complex and I don’t know if I trust it to work well when VDO is in a stack of 
Pacemaker cluster resources. Is there a better way of getting compression to 
work above DRBD?

-Eric


From: ra...@isoc.org.il 
Sent: Thursday, August 19, 2021 4:43 PM
To: Eric Robinson 
Cc: drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] DRBD + ZFS

Not sure ZFS is the right choice as an underline for a resource,
it is powerful but also complex (as a code base), which will probably will make 
it slow.

unless you are going to expose the ZVOL or the dataset directly to be consumed,
stacking ZFS over DRBD over ZFS, seems to me as a bad idea.



Rabin


On Wed, 18 Aug 2021 at 09:37, Eric Robinson  wrote:
I’m considering deploying DRBD between ZFS layers. The lowest layer RAIDZ will 
serve as the DRBD backing device. Then I would build another ZFS filesystem on 
top to benefit from compression. Any thoughs, experiences, opinions, positive 
or negative?

--Eric







Re: [DRBD-user] DRBD + ZFS

2021-08-21 Thread David Bruzos
Hello folks,
I've used DRBD over ZFS for many years and my experience has been very 
positive.  My primary use case has been virtual machine backing storage for 
Xen hypervisors, with dom0 running ZFS and DRBD.  The realtime nature of DRBD 
replication allows for VM migrations, etc., and ZFS makes remote incremental 
backups awesome.  Overall, it is a combination that is hard to beat.

* Key things to keep in mind:

. The performance of DRBD on ZFS is not the best in the world, but the 
benefits of a properly configured and used setup far outweigh the performance 
costs.
. If you are not limited by storage size (typical when using rotating 
disks), I would absolutely recommend mirror vdevs with ashift=12 for best 
results in most circumstances (see the sketch after this list).
. If space is a limiting factor (typical with SSD/NVMe), I use raidz, but 
careful considerations have to be made so you don't end up wasting tons of 
space because of ashift/blocksize/striping issues.
. Compression works great under the DRBD devices, but volblocksize/ashift 
details are extremely important to get the most out of it.
. I would not create additional ZFS file systems on top of the DRBD devices 
for compression or any other intensive feature; it's just not worth it, since 
you want that as close to the physical storage as possible.
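
As a rough illustration of the mirror-vdev case above (device, pool, and
size values are placeholders):

  # mirror vdevs with 4K sectors, compression below DRBD
  zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd
  zfs set compression=lz4 tank
  # zvol to use as the DRBD backing device; volblocksize affects compression gains
  zfs create -V 200G -o volblocksize=16k tank/drbd0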

I do run a few ZFS file systems on virtual machines that are backed by DRBD 
devices on top of ZFS, but I am after other ZFS features in those cases.  The 
VMs running ZFS have compression=off, no vdev redundancy, optimized 
volblocksize for the situation/workload in question, etc.  My typical go-to 
filesystem for VMs is XFS, because it is lean-and-mean and has the kind of 
features that everyone should want in a general-purpose FS.

If you have specific questions, let me know.

David

-- 
David Bruzos (Systems Administrator)
Jacksonville Port Authority
2831 Talleyrand Ave.
Jacksonville, FL  32206
Cell: (904) 625-0969
Office: (904) 357-3069
Email: david.bru...@jaxport.com

On Fri, Aug 20, 2021 at 11:32:31AM +, Eric Robinson wrote:
> 
> My main motivation is the desire for a compressed filesystem. I have 
> experimented with using VDO for that purpose and it works, but the setup is 
> complex and I don’t know if I trust it to work well when VDO is in a stack of 
> Pacemaker cluster resources. If there a better way of getting compression to 
> work above DRBD?
> 
> -Eric
> 
> 
> From: ra...@isoc.org.il 
> Sent: Thursday, August 19, 2021 4:43 PM
> To: Eric Robinson 
> Cc: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] DRBD + ZFS
> 
> Not sure ZFS is the right choice as an underline for a resource,
> it is powerful but also complex (as a code base), which will probably will 
> make it slow.
> 
> unless you are going to expose the ZVOL or the dataset directly to be 
> consumed,
> stacking ZFS over DRBD over ZFS, seems to me as a bad idea.
> 
> 
> 
> Rabin
> 
> 
> On Wed, 18 Aug 2021 at 09:37, Eric Robinson  wrote:
> I’m considering deploying DRBD between ZFS layers. The lowest layer RAIDZ 
> will serve as the DRBD backing device. Then I would build another ZFS 
> filesystem on top to benefit from compression. Any thoughs, experiences, 
> opinions, positive or negative?
> 
> --Eric
> 
> 
> 
> 
> 

Re: [DRBD-user] DRBD + ZFS

2021-08-24 Thread Eric Robinson

Hi David --

Thanks for your feedback! I do have a couple of follow-up questions/comments.

What degree of performance degradation have you observed with DRBD over ZFS? 
Our servers will be using NVME drives with 25Gbit networking.
Since you don't recommend having ZFS above DRBD, what filesystem do you use 
over DRBD?
Linbit recommends that compression take place above DRBD rather than below. 
What are your thoughts about their recommendation versus your approach?

--Eric




> -Original Message-
> From: David Bruzos 
> Sent: Saturday, August 21, 2021 8:34 AM
> To: Eric Robinson 
> Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] DRBD + ZFS
>
> Hello folks,
> I've used DRBD over ZFS for many years and my experience has been very
> possitive.  My primary use case has been virtual machine backing storage for
> Xen hypervisors, with dom0 running ZFS and DRBD.  The realtime nature of
> DRBD replication allows for VM migrations, etc, and ZFS makes remote
> incremental backups awesome.  Overall, it is a combination that is hard to
> beat.
>
> * Key things to keep in mind:
>
> . The performance of DRBD on ZFS is not the best in the world, but the
> benefits of a properly configured and used setup far outweigh the
> performance costs.
> . If you are not limited buy storage size (typical when using rotating 
> disks), I
> would absolutely recommend mirror vdevs with ashift=12 for best results in
> most circumstances.
> . If space is a limiting factor (typical with SSD/NVME), I use raidz, but 
> careful
> considerations have to be made, so you don't end up wasting tuns of space,
> because of ashift/blocksize/striping issues.
> . Compression works great under the DRBD devices, but volblocksize/ashift
> details are extremely important to get the most out of it.
> . I would not create additional ZFS file systems on top of the DRBD 
> devices
> for compression or any other intensive feature, just not worth it, you want
> that as close to the physical storage as possible.
>
> I do run a few ZFS file systems on virtual machines that are backed by 
> DRBD
> devices on top of ZFS, but I am after other ZFS features in those cases.  The
> VMs running ZFS have compression=off, no vdev redundancy, optimized
> volblocksize for the situation/workload in question, etc.  My typical goto
> filesystem for VMs is XFS, because it is lean-and-mean and has the kind of
> features that everyone should want in a general purpose FS.
>
> If you have specific questions, let me know.
>
> David
>
> --
> David Bruzos (Systems Administrator)
> Jacksonville Port Authority
> 2831 Talleyrand Ave.
> Jacksonville, FL  32206
> Cell: (904) 625-0969
> Office: (904) 357-3069
> Email: david.bru...@jaxport.com
>
> On Fri, Aug 20, 2021 at 11:32:31AM +, Eric Robinson wrote:
> >
> > My main motivation is the desire for a compressed filesystem. I have
> experimented with using VDO for that purpose and it works, but the setup is
> complex and I don’t know if I trust it to work well when VDO is in a stack of
> Pacemaker cluster resources. If there a better way of getting compression to
> work above DRBD?
> >
> > -Eric
> >
> >
> > From: ra...@isoc.org.il 
> > Sent: Thursday, August 19, 2021 4:43 PM
> > To: Eric Robinson 
> > Cc: drbd-user@lists.linbit.com
> > Subject: Re: [DRBD-user] DRBD + ZFS
> >
> > Not sure ZFS is the right choice as an underline for a resource, it is
> > powerful but also complex (as a code base), which will probably will make it
> slow.
> >
> > unless you are going to expose the ZVOL or the dataset directly to be
> > consumed, stacking ZFS over DRBD over ZFS, seems to me as a bad idea.
> >
> >
> >
> > Rabin
> >
> >
> > On Wed, 18 Aug 2021 at 09:37, Eric Robinson  wrote:
> > I’m considering deploying DRBD between ZFS layers. The lowest layer
> RAIDZ will serve as the DRBD backing device. Then I would build another ZFS
> filesystem on top to benefit from compression. Any thoughs, experiences,
> opinions, positive or negative?
> >
> > --Eric
> >
> >
> >
> >
> >

Re: [DRBD-user] DRBD + ZFS

2021-08-24 Thread David Bruzos
Hello Eric:

> What degree of performance degradation have you observed with DRBD over ZFS? 
> Our servers will be using NVME drives with 25Gbit networking:

Unfortunately, I have not had the time to properly benchmark and compare a 
setup like yours with DRBD on top of ZFS.  Very superficial tests show that my 
I/O is more than sufficient for my workload, so I'm more interested in the 
data integrity, snapshotting, compression, etc.  I would not want to create 
misinformation by sharing I/O stats that are not taking into account the many 
aspects of a proper ZFS benchmark and that are not being compared against an 
alternative setup.
In the days of spinning rust storage, I always used mirrored vdevs, always 
added a fast ZIL, lots of RAM for ARC and a couple of caching devices for 
L2ARC, so the performance was great when compared with the alternatives.

> Since you don't recommend having ZFS above DRBD, what filesystem do you use 
> over DRBD?

I've always had good results with XFS on LVM (very thin).  That combination 
usually gives you good flexibility at the VM level and the performance is 
great.  These days, ext4 is a reasonable choice, but I still use XFS most of 
the time.
I would like to see what other folks think about the XFS+LVM combination 
for VMs vs something like ext4+LVM.
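
Schematically, the thin-LVM-plus-XFS combination looks like this (device,
group, and volume names are hypothetical):

  pvcreate /dev/drbd0
  vgcreate vg0 /dev/drbd0
  lvcreate -L 400G --thinpool pool vg0      # thin pool
  lvcreate -V 50G --thin -n vm1 vg0/pool    # thinly provisioned volume per VM
  mkfs.xfs /dev/vg0/vm1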

> Linbit recommends that compression take place above DRBD rather than below. 
> What are your thoughts about their recommendation versus your approach?

If you can provide a link to their recommendation, I can be more specific.  
In any case, I'm sure their recommendation is reasonable depending on what your 
specific workload is.  In my case, I mostly use compression at the backing 
storage level, because it gives me a predictable and well understood VM 
environment where I can run a wide variety of guest operating systems, 
applications, workloads, etc, without having to worry about the specifics for 
each possible VM scenario.
The reason I normally don't use ZFS for VMs is that I believe it best 
serves its purpose at the backing storage level.  ZFS is designed to leverage 
lots of RAM for ARC, to handle the storage directly, and to do many things 
with your hardware that are very much abstracted away at the guest level.

What is your specific usage scenario?


-- 
David Bruzos (Systems Administrator)
Jacksonville Port Authority
2831 Talleyrand Ave.
Jacksonville, FL  32206
Cell: (904) 625-0969
Office: (904) 357-3069
Email: david.bru...@jaxport.com

On Tue, Aug 24, 2021 at 03:21:10PM +, Eric Robinson wrote:
> 
> Hi David --
> 
> Thanks for your feedback! I do have a couple of follow-up questions/comments.
> 
> What degree of performance degradation have you observed with DRBD over ZFS? 
> Our servers will be using NVME drives with 25Gbit networking.
> Since you don't recommend having ZFS above DRBD, what filesystem do you use 
> over DRBD?
> Linbit recommends that compression take place above DRBD rather than below. 
> What are your thoughts about their recommendation versus your approach?
> 
> --Eric
> 
> 
> 
> 
> > -Original Message-
> > From: David Bruzos 
> > Sent: Saturday, August 21, 2021 8:34 AM
> > To: Eric Robinson 
> > Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> > Subject: Re: [DRBD-user] DRBD + ZFS
> >
> > Hello folks,
> > I've used DRBD over ZFS for many years and my experience has been very
> > possitive.  My primary use case has been virtual machine backing storage for
> > Xen hypervisors, with dom0 running ZFS and DRBD.  The realtime nature of
> > DRBD replication allows for VM migrations, etc, and ZFS makes remote
> > incremental backups awesome.  Overall, it is a combination that is hard to
> > beat.
> >
> > * Key things to keep in mind:
> >
> > . The performance of DRBD on ZFS is not the best in the world, but the
> > benefits of a properly configured and used setup far outweigh the
> > performance costs.
> > . If you are not limited buy storage size (typical when using rotating 
> > disks), I
> > would absolutely recommend mirror vdevs with ashift=12 for best results in
> > most circumstances.
> > . If space is a limiting factor (typical with SSD/NVME), I use raidz, 
> > but careful
> > considerations have to be made, so you don't end up wasting tuns of space,
> > because of ashift/blocksize/striping issues.
> > . Compression works great under the DRBD devices, but 
> > volblocksize/

Re: [DRBD-user] DRBD + ZFS

2021-08-24 Thread Eric Robinson
Hi David --

Here is a link to a Linbit article about using DRBD with VDO. While the focus 
of this article is VDO, I assume the compression recommendation would apply to 
other technologies such as ZFS. As the article states, their goal was to 
compress data before it gets passed off to DRBD, because then DRBD replication 
is faster and more efficient. This was echoed in some follow-up conversation I 
had with a Linbit rep (or someone from Red Hat, I forget which).

https://linbit.com/blog/albireo-virtual-data-optimizer-vdo-on-drbd/

My use case is multi-tenant MySQL servers. I'll have 125+ separate instances of 
MySQL running on each cluster node, all out of separate directories and 
listening on separate ports. The instances will be divided into 4 sets of 50, 
which live on 4 separate filesystems, on 4 separate DRBD disks. I've used this 
approach before very successfully with up to 60 MySQL instances, and now I'm 
dramatically increasing the server power and doubling the number of instances. 
4 separate DRBD threads will handle the replication. I'll be using 
corosync+pacemaker for the HA stack. I'd really like to compress the data and 
make the most of the available NVME media. The servers do not have RAID 
controllers. I'll be using ZFS, mdraid, or LVM to create 4 separate arrays for 
my DRBD backing disks.

--Eric

> -Original Message-
> From: David Bruzos 
> Sent: Tuesday, August 24, 2021 2:03 PM
> To: Eric Robinson 
> Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] DRBD + ZFS
>
> Hello Eric:
>
> > What degree of performance degradation have you observed with DRBD
> over ZFS? Our servers will be using NVME drives with 25Gbit networking:
>
> Unfortunately, I have not had the time to properly benchmark and
> compare a setup like yours with DRBD on top of ZFS.  Very superficial tests
> show that my I/O is more than sufficient for my workload, so I'm then more
> interested is the data integrity, snapshotting, compression, etc.  I would not
> want to create misinformation by sharing I/O stats that are not taking into
> account the many aspects of a proper ZFS benchmark and that are not being
> compared against an alternative setup.
> In the days of spinning rust storage, I always used mirrored vdevs, always
> added a fast ZIL, lots of RAM for ARC and a couple of caching devices for
> L2ARC, so the performance was great when compared with the alternatives.
>
> > Since you don't recommend having ZFS above DRBD, what filesystem do
> you use over DRBD?
>
> I've always had good results with XFS on LVM (very thin).  That 
> combination
> usually gives you good flexibility at the VM level and the performance is
> great.  These days, ext4 is a reasonable choice, but I still use XFS most of 
> the
> time.
> I would like to see what other folks think about the XFS+LVM combination
> for VMs vs something like ext4+LVM.
>
> > Linbit recommends that compression take place above DRBD rather than
> below. What are your thoughts about their recommendation versus your
> approach?
>
> If you can provide a link to their recommendation, I can be more specific.
> In any case, I'm sure their recommendation is reasonable depending on what
> your specific workload is.  In my case, I mostly use compression at the
> backing storage level, because it gives me a predictable and well understood
> VM environment where I can run a wide variety of guest operating systems,
> applications, workloads, etc, without having to worry about the specifics for
> each possible VM scenario.
> The reason I normally don't use ZFS for VMs is because I believe it best
> serves its purpose at the backing storage level for many reasons.  ZFS is
> designed to leverage lots of RAM for ARC, to handle the storage directly, to
> do many things with your hardware that are very much abstracted away at
> the guest level.
>
> What is your specific usage scenario?
>
>
> --
> David Bruzos (Systems Administrator)
> Jacksonville Port Authority
> 2831 Talleyrand Ave.
> Jacksonville, FL  32206
> Cell: (904) 625-0969
> Office: (904) 357-3069
> Email: david.bru...@jaxport.com
>
> On Tue, Aug 24, 2021 at 03:21:10PM +, Eric Robinson wrote:
> >
> >
> > Hi David --
> >
> > Thanks for your feedback! I do have a couple of follow-up
> questions/comments.
> >
> > What degree of performance degradation have you observed with DRBD

Re: [DRBD-user] DRBD + ZFS

2021-08-30 Thread David Bruzos
Hi Eric,
Sorry about the delay.  The article you provided is interesting, but rather 
specific to a workload that would show rather dramatic results on VDO.  In your 
case, the main objective is making the most out of your NVMe storage while 
maintaining good performance.  The article would be very much applicable if you 
were doing replication over a slow WAN link or something like that, but I 
imagine that the network is not going to be a bottleneck for you, so saving 
throughput at the DRBD layer is probably not a big advantage.
The real space and performance killer (if done wrong) in your case is going 
to be proper block alignment to optimize the mysql workload.  Depending on 
your underlying storage's optimal block size (usually 4KB) and the vdev type 
you want to use (e.g. raidz, mirror), you will have to make sure that 
everything is optimized for mysql's 16KB writes.  As I pointed out earlier, 
mirror will be simplest/fastest and raidz is doable, but will be slower for 
writes (which may not matter if you have enough iops).  The key is that with 
raidz, you will have to take more factors into account to ensure everything is 
optimal.  In my case, for example, my newest setup uses raidz and compression 
for making the most out of my NVMe, but I use ashift=9 (512-byte blocks) to be 
able to make 4K zvols for my VMs and still greatly benefit from compression.
It is important to point out that the raidz details are not unique to ZFS.  
Most people that use traditional raid5 setups use it in a suboptimal manner 
and actually have terrible performance and either can't tell, or eventually 
move to raid10, because "raid5 sucks".  In any case, to answer your question, 
I would still use ZFS instead of VDO for multiple reasons, and I would still 
use it only under DRBD in this case.  You have a standard workload, so you 
should be able to optimize it to fit your objectives.

Here is a good article about mysql on ZFS that should get you started:

https://shatteredsilicon.net/blog/2020/06/05/mysql-mariadb-innodb-on-zfs/
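
Typical of the tuning that article walks through, schematically (dataset
names are hypothetical; defer to the article for exact values):

  # match InnoDB's 16 KiB page size for the data directory
  zfs create -o recordsize=16k -o compression=lz4 tank/mysql
  # logs are written sequentially; a larger recordsize is commonly used there
  zfs create -o recordsize=128k tank/mysql-logs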


David

-- 
David Bruzos (Systems Administrator)
Jacksonville Port Authority
2831 Talleyrand Ave.
Jacksonville, FL  32206
Cell: (904) 625-0969
Office: (904) 357-3069
Email: david.bru...@jaxport.com

On Tue, Aug 24, 2021 at 09:26:22PM +, Eric Robinson wrote:
> 
> Hi David --
> 
> Here is a link to a Linbit article about using DRBD with VDO. While the focus 
> of this article is VDO, I assume the compression recommendation would apply 
> to other technologies such as ZFS. As the article states, their goal was to 
> compress data before it gets passed off to DRBD, because then DRBD 
> replication is faster and more efficient. This was echoed in some follow-up 
> conversation I had with a Linbit rep (or someone from Red Hat, I forget 
> which).
> 
> https://linbit.com/blog/albireo-virtual-data-optimizer-vdo-on-drbd/
> 
> My use case is multi-tenant MySQL servers. I'll have 125+ separate instances 
> of MySQL running on each cluster node, all out of separate directories and 
> listening on separate ports. The instances will be divided into 4 sets of 50, 
> which live on 4 separate filesystems, on 4 separate DRBD disks. I've used 
> this approach before very successfully with up to 60 MySQL instances, and now 
> I'm dramatically increasing the server power and doubling the number of 
> instances. 4 separate DRBD threads will handle the replication. I'll be using 
> corosync+pacemaker for the HA stack. I'd really like to compress the data and 
> make the most of the available NVME media. The servers do not have RAID 
> controllers. I'll be using ZFS, mdraid, or LVM to create 4 separate arrays 
> for my DRBD backing disks.
> 
> --Eric
> 
> > -Original Message-
> > From: David Bruzos 
> > Sent: Tuesday, August 24, 2021 2:03 PM
> > To: Eric Robinson 
> > Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> > Subject: Re: [DRBD-user] DRBD + ZFS
> >
> > Hello Eric:
> >
> > > What degree of performance degradation have you observed with DRBD
> > over ZFS? Our servers will be using NVME drives with 25Gbit networking:
> >
> > Unfortunately, I have not had the time to properly benchmark and
> > compare a setup like yours with DRBD on top of ZFS.  Very superficial tests
> > show that my I/O is more than sufficient for my workload, so I'm then more
> > interested is the data integrity, snapshotting, compression, etc.  I would 
> > not
> > want to create misinformation by sharing I/O stats that are n

Re: [DRBD-user] DRBD + ZFS

2021-08-31 Thread David Bruzos
Eric,
Cool, I'll try to help where I can.  I am not intimately familiar with 
MySQL internals, but the information in the article will apply to anything that 
writes to ZFS in blocks, so it is probably still applicable in your case, just 
with whatever size adjustments make sense.  The key is to determine what your 
primary goal with ZFS is, and then run some benchmarks and see if your iops are 
where you need them.

Good luck!


-- 
David Bruzos (Systems Administrator)
Jacksonville Port Authority
2831 Talleyrand Ave.
Jacksonville, FL  32206
Cell: (904) 625-0969
Office: (904) 357-3069
Email: david.bru...@jaxport.com

On Tue, Aug 31, 2021 at 08:34:21PM +, Eric Robinson wrote:
> 
> David --
> 
> That is good feedback and thanks much for the link. If I gather correctly, 
> the thrust of the article is related to InnoDB optimization. Believe it or 
> not, we employ a hybrid model. Each of our databases consists of 
> approximately 5000 tables of different sizes and structures. Most of them are 
> still on MyISAM with only 20 or so on InnoDB. (In my experience over the past 
> 15 years of hosting hundreds of MySQL databases, InnoDB is a bloated, 
> fragile, resource-gulping freakshow, so we only use it for the handful of 
> tables that demand it. That said, I realize most other people would see it 
> differently.)
> 
> I hope you won't mind if I circle back and ask you some questions when the 
> new servers get here and I start testing different approaches to storage.
> 
> > -Original Message-
> > From: David Bruzos 
> > Sent: Monday, August 30, 2021 6:26 AM
> > To: Eric Robinson 
> > Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> > Subject: Re: [DRBD-user] DRBD + ZFS
> >
> > Hi Eric,
> > Sorry about the delay.  The article you provided is interesting, but 
> > rather
> > specific to a workload that would show rather dramatic results on VDO.  In
> > your case, the main objective is making the most our of your NVME storage,
> > while maintaining good performance.  The article would be very much
> > applicable if you were doing replication over a slow WAN link or something
> > like that, but I imagine that the network is not going to be a bottleneck 
> > for
> > you, so saving throughput at the DRBD layer is probably not a big advantage.
> > The real space and performance killer (if done wrong) in your case is 
> > going
> > to be proper block alignments to optimize the mysql workload.  Depending
> > on your underlining storage optimal block size (usually 4KB) and the vdev
> > type you want to use (EG. raidz, mirror), you will have to make sure that
> > everything is optimized for mysql's 16KB writes.  As I pointed out earlier,
> > mirror will be simplest/fastest and raidz is doable, but will be slower for
> > writes (may not matter if you got enough iops).  The key is that with raidz,
> > you will have to take more factors into account to ensure everything is
> > optimal.  In my case for example, my newest setup uses raidz and
> > compression for making the most our of my NVME, but I use ashift=9 (512
> > byte blocks) to be able to make 4K zvols for my VMs and still greatly 
> > benefit
> > from compression.
> > It is important to point out that the raidz details are not unique to 
> > ZFS.
> > Most people that use tradditional raid5 setups use it in a suboptimal manner
> > and actually have terrible performance and either can't tell, or eventually
> > move to raid10, because "raid5 sucks".  In any case, to answer your 
> > question,
> > I would still use ZFS instead of VDO for multiple reasons and I would still 
> > use it
> > only under DRBD in this case.  You have a standard workload, so you should
> > be able to optimize it to fit your objectives.
> >
> > Here is a good article about mysql on ZFS that should get you started:
> >
> > https://shatteredsilicon.net/blog/2020/06/05/mysql-mariadb-innodb-on-
> > zfs/
> >
> >
> > David
> >
> > --
> > David Bruzos (Systems Administrator)
> > Jacksonville Port Authority
> > 2831 Talleyrand Ave.
> > Jacksonville, FL  32206
> > Cell: (904) 625-0969
> > Office: (904) 357-3069
> > Email: david.bru...@jaxport.com
> >
> > On Tue, Aug 24, 2021 at 09:26:22PM +, Eric Robinson wrote:

Re: [DRBD-user] DRBD + ZFS

2021-08-31 Thread Eric Robinson
David --

That is good feedback and thanks much for the link. If I gather correctly, the 
thrust of the article is related to InnoDB optimization. Believe it or not, we 
employ a hybrid model. Each of our databases consists of approximately 5000 
tables of different sizes and structures. Most of them are still on MyISAM with 
only 20 or so on InnoDB. (In my experience over the past 15 years of hosting 
hundreds of MySQL databases, InnoDB is a bloated, fragile, resource-gulping 
freakshow, so we only use it for the handful of tables that demand it. That 
said, I realize most other people would see it differently.)

I hope you won't mind if I circle back and ask you some questions when the new 
servers get here and I start testing different approaches to storage.

> -Original Message-
> From: David Bruzos 
> Sent: Monday, August 30, 2021 6:26 AM
> To: Eric Robinson 
> Cc: ra...@isoc.org.il; drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] DRBD + ZFS
>
> Hi Eric,
> Sorry about the delay.  The article you provided is interesting, but 
> rather
> specific to a workload that would show rather dramatic results on VDO.  In
> your case, the main objective is making the most our of your NVME storage,
> while maintaining good performance.  The article would be very much
> applicable if you were doing replication over a slow WAN link or something
> like that, but I imagine that the network is not going to be a bottleneck for
> you, so saving throughput at the DRBD layer is probably not a big advantage.
> The real space and performance killer (if done wrong) in your case is 
> going
> to be proper block alignments to optimize the mysql workload.  Depending
> on your underlining storage optimal block size (usually 4KB) and the vdev
> type you want to use (EG. raidz, mirror), you will have to make sure that
> everything is optimized for mysql's 16KB writes.  As I pointed out earlier,
> mirror will be simplest/fastest and raidz is doable, but will be slower for
> writes (may not matter if you got enough iops).  The key is that with raidz,
> you will have to take more factors into account to ensure everything is
> optimal.  In my case for example, my newest setup uses raidz and
> compression for making the most our of my NVME, but I use ashift=9 (512
> byte blocks) to be able to make 4K zvols for my VMs and still greatly benefit
> from compression.
> It is important to point out that the raidz details are not unique to ZFS.
> Most people that use tradditional raid5 setups use it in a suboptimal manner
> and actually have terrible performance and either can't tell, or eventually
> move to raid10, because "raid5 sucks".  In any case, to answer your question,
> I would still use ZFS instead of VDO for multiple reasons and I would still 
> use it
> only under DRBD in this case.  You have a standard workload, so you should
> be able to optimize it to fit your objectives.
>
> Here is a good article about mysql on ZFS that should get you started:
>
> https://shatteredsilicon.net/blog/2020/06/05/mysql-mariadb-innodb-on-
> zfs/
>
>
> David
>
> --
> David Bruzos (Systems Administrator)
> Jacksonville Port Authority
> 2831 Talleyrand Ave.
> Jacksonville, FL  32206
> Cell: (904) 625-0969
> Office: (904) 357-3069
> Email: david.bru...@jaxport.com
>
> On Tue, Aug 24, 2021 at 09:26:22PM +, Eric Robinson wrote:
> >
> >
> > Hi David --
> >
> > Here is a link to a Linbit article about using DRBD with VDO. While the 
> > focus
> of this article is VDO, I assume the compression recommendation would
> apply to other technologies such as ZFS. As the article states, their goal was
> to compress data before it gets passed off to DRBD, because then DRBD
> replication is faster and more efficient. This was echoed in some follow-up
> conversation I had with a Linbit rep (or someone from Red Hat, I forget
> which).
> >
> > https://linbit.com/blog/albireo-virtual-data-optimizer-vdo-on-drbd/
> >
> > My use case is multi-tenant MySQL servers. I'll have 125+ separate
> instances of MySQL running on each cluster node, all out of separate
> directories and listening on separate ports. The instances will be divided 
> into
> 4 sets of 50, which live on 4 separate filesystems, on 4 separate DRBD disks.
> I've used this approach before very successfully with up to 60 MySQL
> instances, and now I'm dramatically increasing the server power and doubling
> the 

[DRBD-user] DRBD + ZFS on Linux

2013-05-13 Thread Adrian Berlin
Hello,
Has anyone tested ZFS on Linux with DRBD?
Is it working fine?

I've created a DRBD device from a ZVOL on both Linux servers.
The initial replication goes fine.
The problem occurs when I want to write to /dev/drbd0:
dd hangs, and so does DRBD. Nothing special appears in dmesg.

… root@linux:~# No response from the DRBD driver! Is the module loaded?

After the command drbdadm disconnect all, I can't do anything...

root@linux:~# drbdadm disconnect all
^[[A
^[[A
^[[A
(the command hangs; further keypresses only echo escape sequences)

