Re: [Gluster-users] Increase redundancy on existing disperse volume

2018-08-01 Thread Benjamin Kingston
Please ignore, I see your messages; that is the information I'm looking for. On Wed, Aug 1, 2018 at 9:10 AM Benjamin Kingston wrote: > Hello, I accidentally sent this question from an email that isn't > subscribed to the gluster-users list. > I resent from my mailing list address, bu

Re: [Gluster-users] Increase redundancy on existing disperse volume

2018-08-01 Thread Benjamin Kingston
, 2018 at 8:02 PM Ashish Pandey wrote: > > > I think I have replied all the questions you have asked. > Let me know if you need any additional information. > > --- > Ashish > ------ > *From: *"Benjamin Kingston" > *To: *"gluster-

[Gluster-users] Increase redundancy on existing disperse volume

2018-07-31 Thread Benjamin Kingston
I'm working to convert my 3x3 arbiter replicated volume into a disperse volume; however, I have to work with the existing disks, maybe adding another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on one of the replicated nodes and build it into a I'm opting to host this volume on

[Gluster-users] Increase redundancy on existing disperse volume

2018-07-30 Thread Benjamin Kingston
I'm working to convert my 3x3 arbiter replicated volume into a disperse volume; however, I have to work with the existing disks, maybe adding another 1 or 2 new disks if necessary. I'm hoping to destroy the bricks on one of the replicated nodes and build it into a I'm opting to host this volume on
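A disperse volume's data/redundancy geometry is fixed at creation time, so reusing the existing disks means rebuilding rather than converting in place. A minimal sketch of creating one, with hypothetical hostnames and brick paths:

```shell
# Hypothetical 3-node layout, 6 bricks total: disperse 6 with redundancy 2
# means 4 data + 2 redundancy bricks, so any 2 bricks can fail.
# The geometry cannot be changed after creation.
gluster volume create dispvol disperse 6 redundancy 2 \
    node1:/bricks/b1/brick node1:/bricks/b2/brick \
    node2:/bricks/b1/brick node2:/bricks/b2/brick \
    node3:/bricks/b1/brick node3:/bricks/b2/brick
gluster volume start dispvol
```

Note that with two bricks per node, losing one node takes out two bricks at once, so redundancy 2 is the minimum for node-level fault tolerance in this layout.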

Re: [Gluster-users] glusterfs as vmware datastore in production

2018-06-05 Thread Benjamin Kingston
You're better off exporting LUNs via iSCSI. I spent a long time trying to get NFS to work via NFS-Ganesha as a datastore, and the performance is not there, especially since HA NFS isn't an official feature of NFS-Ganesha. Also keep in mind your write speed is cut in half/thirds/etc... with gluster
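The half/thirds write-speed point can be sketched with back-of-envelope arithmetic: a FUSE client replicates writes itself, sending each block to every replica over the same NIC, so usable write bandwidth is roughly link speed divided by replica count. The numbers below are illustrative assumptions, not measurements.

```python
# Rough model of client-side replication cost in replicated Gluster volumes:
# the client transmits each write N times (once per replica) over one link.

def effective_write_mbps(link_mbps: float, replica_count: int) -> float:
    """Approximate client write throughput with replica-N, ignoring protocol overhead."""
    return link_mbps / replica_count

if __name__ == "__main__":
    for n in (1, 2, 3):
        # On a 1 GbE link: replica 1 ~1000, replica 2 ~500, replica 3 ~333 Mb/s
        print(f"replica {n}: ~{effective_write_mbps(1000, n):.0f} Mb/s")
```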

Re: [Gluster-users] Wrong volume size with df

2018-01-04 Thread Benjamin Kingston
I'm also having this issue with a volume, both before and after I broke an arbiter volume down to a single distribute and rebuilt it as arbiter. On Tue, Jan 2, 2018 at 1:51 PM, Tom Fite wrote: > For what it's worth here, after I added a hot tier to the pool, the brick > sizes are

Re: [Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-15 Thread Benjamin Kingston
-shared-storage: enable nfs-ganesha: enable -ben On Sat, May 13, 2017 at 12:20 PM, Benjamin Kingston <b...@nexusnebula.net> wrote: > Hers's some log entries from nfs-ganesha gfapi > > [2017-05-13 19:02:54.105936] E [MSGID: 133010] > [shard.c:1706:shard_common_lookup_shards_cbk]

[Gluster-users] Gluster arbiter with tier

2017-05-14 Thread Benjamin Kingston
Are there any plans to enable tiering with arbiter enabled?

Re: [Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-14 Thread Benjamin Kingston
xlator/features/shard.so(+0xb29b) [0x7f8c495ec29b] ) 0-storage2-shard: Failed to get trusted.glusterfs.shard.file-size for b2745d17-1972-4738-afa9-22e9597fa787 -ben On Fri, May 12, 2017 at 11:46 PM, Benjamin Kingston <b...@nexusnebula.net> wrote: > > Hello all, > > I'm trying to take advantage of t

Re: [Gluster-users] Connect Gluster Node with web interface

2017-05-13 Thread Benjamin Kingston
Why not mount the gluster volume to a subdirectory inside your webroot and point user uploads to that folder? Just make sure you set the mount as a required dependency of the web server service. On Sat, May 13, 2017 at 9:18 AM, Dwijadas Dey wrote: > Hi >list
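The "required dependency" part can be sketched with systemd, assuming an Apache httpd service and hypothetical host/volume/paths:

```ini
# /etc/fstab -- glusterfs mount for the upload directory (paths hypothetical)
# gluster1:/webvol  /var/www/html/uploads  glusterfs  defaults,_netdev  0 0

# /etc/systemd/system/httpd.service.d/gluster.conf -- drop-in that makes the
# web server wait for, and fail without, the gluster mount
[Unit]
RequiresMountsFor=/var/www/html/uploads
```

After adding the drop-in, `systemctl daemon-reload` picks it up; httpd then only starts once the mount unit generated from fstab is active.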

[Gluster-users] Fwd: Reliability issues with Gluster 3.10 and shard

2017-05-13 Thread Benjamin Kingston
Hello all, I'm trying to take advantage of the shard xlator, however I've found it causes a lot of issues that I hope are easily resolvable 1) large file operations work well (copy file from folder a to folder b) 2) seek operations and list operations frequently fail (ls directory, read bytes xyz

[Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-13 Thread Benjamin Kingston
Hello all, I'm trying to take advantage of the shard xlator, however I've found it causes a lot of issues that I hope are easily resolvable 1) large file operations work well (copy file from folder a to folder b) 2) seek operations and list operations frequently fail (ls directory, read bytes xyz

Re: [Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD

2016-09-10 Thread Benjamin Kingston
> > 3. What are the values of the promotion and demotion counters reported? The values have been left at the defaults http://blog.gluster.org/2016/03/automated-tiering-in-gluster/ Thanks! > > > Milind > > On 09/04/2016 10:10 PM, Benjamin Kingston wrote: > >> Thanks for th
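For reference, the promotion/demotion behaviour left at defaults above is driven by a handful of volume options. A sketch of tuning them, with option names taken from the tiering feature docs of that era; the volume name and values are illustrative:

```shell
# Hypothetical volume "myvol"; intervals in seconds, thresholds in I/O counts.
gluster volume set myvol cluster.tier-promote-frequency 120    # how often promotion runs
gluster volume set myvol cluster.tier-demote-frequency 3600    # how often demotion runs
gluster volume set myvol cluster.read-freq-threshold 2         # reads in a cycle before a file heats
gluster volume set myvol cluster.write-freq-threshold 2        # writes in a cycle before a file heats
```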

Re: [Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD

2016-09-04 Thread Benjamin Kingston
Thanks for the help, see below: On Sat, Sep 3, 2016 at 11:41 AM, Mohammed Rafi K C wrote: > Files created before attaching hot tier will be present on hot brick until > it gets heated and migrated completely. During this time interval we won't > get the benefit of hot

Re: [Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD

2016-09-03 Thread Benjamin Kingston
rds > > Rafi KC > > > > > On 09/03/2016 09:16 AM, Benjamin Kingston wrote: > > Hello all, > > I've discovered an issue in my lab that went unnoticed until recently, or > just came about with the latest Centos release. > > When the SSD hot tier is enab

[Gluster-users] Fwd: Very slow performance when enabling tiered storage with SSD

2016-09-02 Thread Benjamin Kingston
Hello all, I've discovered an issue in my lab that went unnoticed until recently, or just came about with the latest Centos release. When the SSD hot tier is enabled read from the volume is 2MB/s, after detaching AND committing, read of the same file is at 150MB/s to /dev/null If I copy the

[Gluster-users] Very slow performance when enabling tiered storage with SSD

2016-09-02 Thread Benjamin Kingston
Hello all, I've discovered an issue in my lab that went unnoticed until recently, or just came about with the latest Centos release. When the SSD hot tier is enabled read from the volume is 2MB/s, after detaching AND committing, read of the same file is at 150MB/s to /dev/null If I copy the
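The "detaching AND committing" step mentioned above, sketched as CLI commands (volume name hypothetical; syntax per the 3.7-era tiering docs):

```shell
# Drain the hot tier back to the cold bricks, then remove it.
# "commit" is what actually detaches the tier once data has migrated.
gluster volume tier myvol detach start
gluster volume tier myvol detach status
gluster volume tier myvol detach commit
```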

[Gluster-users] nfs-ganesha/samba vfs and replica redundancy

2015-06-03 Thread Benjamin Kingston
Can someone give me a hint on the best way to maintain data availability for a share on a third system using nfs-ganesha and samba? I currently have a round-robin DNS entry that nfs-ganesha/samba uses; however, even with a short TTL, there's brief downtime when a replica node fails. I can't see in

[Gluster-users] Speed up heal

2015-05-26 Thread Benjamin Kingston
I have a two-node replicated volume; I recently rebuilt one, and while they resync, even with a gigabit interconnect they only transfer at 300 Mbps with 6 cores at 2.0 utilization. I turned on performance.lower.threads.disable, which didn't change much, and stat'd the whole volume.
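A sketch of the knobs that commonly speed up self-heal (volume name hypothetical; cluster.shd-max-threads needs Gluster 3.7 or later, so check availability on your release):

```shell
# Let the self-heal daemon work on more files and bigger windows in parallel.
gluster volume set myvol cluster.shd-max-threads 4          # parallel heal threads per brick
gluster volume set myvol cluster.self-heal-window-size 8    # blocks healed per file at a time
# Queue every pending entry for heal instead of waiting for lookups:
gluster volume heal myvol full
```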

Re: [Gluster-users] [Gluster-devel] Fwd: Change in ffilz/nfs-ganesha[next]: pNFS code drop enablement and checkpatch warnings fixed

2015-03-27 Thread Benjamin Kingston
will enabling pnfs just be like the VFS FSAL with pnfs = true? otherwise I'll wait for your docs On Tue, Mar 24, 2015 at 1:25 AM, Jiffin Tony Thottan jthot...@redhat.com wrote: On 24/03/15 12:37, Lalatendu Mohanty wrote: On 03/23/2015 12:49 PM, Anand Subramanian wrote: FYI. GlusterFS
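It appears close to that: the Gluster pNFS guide of that era suggests a ganesha.conf fragment along these lines. Treat the block and option names as an assumption; they may differ by nfs-ganesha version:

```ini
# ganesha.conf sketch for pNFS with FSAL_GLUSTER (names per the Gluster
# pNFS documentation of the time; verify against your ganesha release).
GLUSTER {
    PNFS_MDS = true;   # this node acts as the metadata server
    PNFS_DS = true;    # this node also serves data
}
```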

Re: [Gluster-users] Compiling on Solaris 11

2014-10-14 Thread Benjamin Kingston
entry->d_name); ret = lstat (hpath, &stbuf); if (!ret && S_ISDIR (stbuf.st_mode)) continue; } } On Sun, Oct 12, 2014 at 11:56 AM, Benjamin Kingston l

Re: [Gluster-users] Use NFS as bricks?

2014-10-13 Thread Benjamin Kingston
I have tried this and unfortunately NFS doesn't support extended attributes in the way that gluster needs them, which prevents brick creation. On Mon, Oct 13, 2014 at 2:49 AM, technocrat 9000 technocrat9...@gmail.com wrote: Hi, I'm interested in using GlusterFS for my simple home NAS system.
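A quick way to verify this on any candidate backing store is to probe for the trusted.* extended attributes gluster writes on bricks. A minimal sketch, assuming a Linux host; it must run as root, since non-root processes cannot set trusted.* attributes at all (the probe then reports unsupported either way):

```python
import os
import sys
import tempfile

def supports_trusted_xattrs(path: str) -> bool:
    """Create a temp file under `path` and try to set/read a trusted.* xattr."""
    fd, probe = tempfile.mkstemp(dir=path)
    try:
        os.setxattr(probe, b"trusted.glusterfs.probe", b"1")
        return os.getxattr(probe, b"trusted.glusterfs.probe") == b"1"
    except OSError:
        # EPERM (not root), ENOTSUP (fs/NFS without trusted xattrs), etc.
        return False
    finally:
        os.close(fd)
        os.remove(probe)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "."
    ok = supports_trusted_xattrs(target)
    print(f"{target}: trusted.* xattrs {'supported' if ok else 'NOT supported'}")
```

On an NFS mount this returns False, which is exactly why brick creation fails there.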

Re: [Gluster-users] Compiling on Solaris 11

2014-10-10 Thread Benjamin Kingston
I tried building the 3.6.0 tag last night to no avail, but I'll try the newer betas as well as the master branch tonight, maybe even the 3.7 alpha for good measure. Good to hear about the recent cross-platform work, so maybe there's hope. As a side note, I'm considering using Solaris 11 as a tcp/ip NFS brick to

Re: [Gluster-users] Compiling on Solaris 11

2014-10-10 Thread Benjamin Kingston
of ../../contrib/mount/mntent.c:169:1: warning: control reaches end of non-void function [-Wreturn-type] } this may be a bug? On Fri, Oct 10, 2014 at 6:13 PM, Benjamin Kingston l...@nexusnebula.net wrote: I tried building the 3.6.0 tag last night to no avail, but I'll try the newer betas as well

[Gluster-users] Compiling on Solaris 11

2014-10-08 Thread Benjamin Kingston
I'm trying to get gluster 3.5.2, or really any version at this point, to compile on Solaris so I can take advantage of ZFS and encryption. This would be a killer app for me, as I'm a big fan of gluster on Linux, but I'm running into a number of roadblocks with compiling. any pointers or success

Re: [Gluster-users] Compiling on Solaris 11

2014-10-08 Thread Benjamin Kingston
On 10/08/2014 02:31 PM, Benjamin Kingston wrote: I'm trying to get gluster 3.5.2, or really any version at this point, to compile on Solaris so I can take advantage of ZFS and encryption. This would be a killer app for me, as I'm a big fan of gluster on linux, but I'm running into a number

Re: [Gluster-users] Painfully slow volume actions

2014-05-22 Thread Benjamin Kingston
vm, and the vm system drives (where /var/lib/glusterd resides) are all placed on the same host drive? Glusterd updates happen synchronously even in the latest release and the change to use buffered writes + fsync went into master only recently.. On May 21, 2014 1:25 AM, Benjamin Kingston l

[Gluster-users] Painfully slow volume actions

2014-05-21 Thread Benjamin Kingston
I'm trying to get gluster working on a test lab and had excellent success setting up a volume and 14 bricks on the first go around. However I realized the reasoning behind using a subdirectory in each brick and decommissioned the whole volume to start over. I also deleted the /var/lib/glusterd