Please ignore; I see your messages, and that is the information I'm looking for.
On Wed, Aug 1, 2018 at 9:10 AM Benjamin Kingston
wrote:
> Hello, I accidentally sent this question from an email that isn't
> subscribed to the gluster-users list.
> I resent it from my mailing-list address, but
, 2018 at 8:02 PM Ashish Pandey wrote:
>
>
> I think I have replied all the questions you have asked.
> Let me know if you need any additional information.
>
> ---
> Ashish
> ------
> *From: *"Benjamin Kingston"
> *To: *"gluster-
I'm working to convert my 3x3 arbiter replicated volume into a disperse
volume; however, I have to work with the existing disks, maybe adding
another one or two new disks if necessary. I'm hoping to destroy the bricks on
one of the replicated nodes and build it into a
I'm opting to host this volume on
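For context, as far as I know there is no in-place conversion from a replicated
volume to a dispersed one; the usual route is to create a new disperse volume and
copy the data over. A minimal sketch, with made-up hostnames, brick paths, and
disperse/redundancy counts:

    # 4+2 layout: any two bricks can fail
    gluster volume create dispvol disperse 6 redundancy 2 \
        node1:/bricks/d1/brick node1:/bricks/d2/brick \
        node2:/bricks/d1/brick node2:/bricks/d2/brick \
        node3:/bricks/d1/brick node3:/bricks/d2/brick force
    gluster volume start dispvol

(force is needed here only because two bricks of the disperse set land on each host.)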
You're better off exporting LUNs via iSCSI. I spent a long time trying to
get NFS to work as a datastore via NFS-Ganesha, and the performance is not
there, especially since HA NFS isn't an official feature of NFS-Ganesha.
Also keep in mind that your write speed is cut in half, thirds, etc. with
gluster replication.
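To put rough numbers on that: with client-side replication, every write goes to
all replicas over the client's single link, so on 1 GbE (~120 MB/s):

    replica 2: ~120 MB/s / 2 = ~60 MB/s per client stream
    replica 3: ~120 MB/s / 3 = ~40 MB/s per client stream

A disperse volume pays a smaller overhead instead (data plus redundancy fragments,
e.g. about 1.5x for a 4+2 layout) rather than a full copy per replica.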
I'm also having this issue with a volume, both before and after I broke an
arbiter volume down to a single distribute and rebuilt it as arbiter.
On Tue, Jan 2, 2018 at 1:51 PM, Tom Fite wrote:
> For what it's worth here, after I added a hot tier to the pool, the brick
> sizes are
cluster.enable-shared-storage: enable
nfs-ganesha: enable
-ben
On Sat, May 13, 2017 at 12:20 PM, Benjamin Kingston <b...@nexusnebula.net>
wrote:
> Here are some log entries from nfs-ganesha gfapi:
>
> [2017-05-13 19:02:54.105936] E [MSGID: 133010]
> [shard.c:1706:shard_common_lookup_shards_cbk]
Are there any plans to enable tiering with arbiter enabled?
xlator/features/shard.so(+0xb29b)
[0x7f8c495ec29b] ) 0-storage2-shard: Failed to get
trusted.glusterfs.shard.file-size for b2745d17-1972-4738-afa9-22e9597fa787
-ben
On Fri, May 12, 2017 at 11:46 PM, Benjamin Kingston <b...@nexusnebula.net>
wrote:
>
> Hello all,
>
> I'm trying to take advantage of the shard xlator
Why not mount the gluster volume to a subdirectory inside your webroot and
point your users' uploads to that folder? Just make sure you set the mount
as a required dependency of the web server service.
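A rough sketch of how that dependency can be wired up with systemd (mount point,
volume, and unit names here are just examples):

    # /etc/fstab
    server1:/myvol  /var/www/html/uploads  glusterfs  defaults,_netdev  0 0

    # /etc/systemd/system/httpd.service.d/uploads-mount.conf
    [Unit]
    RequiresMountsFor=/var/www/html/uploads

RequiresMountsFor= orders the web server after the mount and keeps it from
starting if the mount is missing.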
On Sat, May 13, 2017 at 9:18 AM, Dwijadas Dey wrote:
> Hi list
Hello all,
I'm trying to take advantage of the shard xlator; however, I've found it
causes a lot of issues that I hope are easily resolvable:
1) large file operations work well (copy file from folder a to folder b)
2) seek operations and list operations frequently fail (ls directory, read
bytes xyz
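A rough sketch of how the shard layout can be inspected, in case it helps anyone
reproduce this (volume name, brick path, and file name below are made up):

    # shard-related volume options
    gluster volume get myvol features.shard
    gluster volume get myvol features.shard-block-size

    # on a brick: the base file carries the shard xattrs,
    # the remaining pieces live under the hidden .shard directory
    getfattr -d -m . -e hex /bricks/b1/myvol/bigfile.iso
    ls /bricks/b1/myvol/.shard | head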
>
> 3. What are the values of the promotion and demotion counters reported?
The values have been left at the defaults.
http://blog.gluster.org/2016/03/automated-tiering-in-gluster/
Thanks!
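For reference, these are the knobs that drive promotion/demotion in a tiered
volume (volume name and values below are examples; check gluster volume set help
for the exact names in your release):

    gluster volume set myvol features.record-counters on
    gluster volume set myvol cluster.write-freq-threshold 5
    gluster volume set myvol cluster.read-freq-threshold 5
    gluster volume set myvol cluster.tier-promote-frequency 120
    gluster volume set myvol cluster.tier-demote-frequency 3600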
>
>
> Milind
>
> On 09/04/2016 10:10 PM, Benjamin Kingston wrote:
>
>> Thanks for the help, see below:
Thanks for the help, see below:
On Sat, Sep 3, 2016 at 11:41 AM, Mohammed Rafi K C
wrote:
> Files created before attaching hot tier will be present on hot brick until
> it gets heated and migrated completely. During this time interval we won't
> get the benefit of hot
> Regards,
>
> Rafi KC
>
>
>
>
> On 09/03/2016 09:16 AM, Benjamin Kingston wrote:
>
> Hello all,
>
> I've discovered an issue in my lab that went unnoticed until recently, or
> just came about with the latest Centos release.
>
> When the SSD hot tier is enabled
Hello all,
I've discovered an issue in my lab that went unnoticed until recently, or
just came about with the latest CentOS release.
When the SSD hot tier is enabled, reads from the volume are 2 MB/s; after
detaching AND committing, a read of the same file runs at 150 MB/s to /dev/null.
If I copy the
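For reference, the detach-and-commit sequence mentioned above is roughly
(volume name is an example):

    gluster volume tier myvol detach start
    gluster volume tier myvol detach status   # wait for hot files to drain back to the cold tier
    gluster volume tier myvol detach commit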
Can someone give me a hint on the best way to maintain data availability to
a share on a third system using nfs-ganesha and samba?
I currently have a round-robin DNS entry that nfs-ganesha/samba uses;
however, even with a short TTL, there's brief downtime when a replica node
fails. I can't see in
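One approach (a sketch, not the only option) is a floating virtual IP that fails
over between the replica nodes, e.g. managed by CTDB; the node IPs, VIP, and
interface below are made up:

    # /etc/ctdb/nodes            (one internal node address per line)
    10.0.0.11
    10.0.0.12

    # /etc/ctdb/public_addresses (the address clients actually mount)
    192.168.1.50/24 eth0

Clients then point at the VIP instead of a round-robin name, so a node failure
moves the address rather than leaving a stale DNS answer behind.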
I have a two-node replicated volume. I recently rebuilt one node, and while
they re-sync, even with a gigabit interconnect they only transfer at
300 Mbps, with 6 cores at 2.0 utilization.
I turned on performance.lower.threads.disable, which didn't change much, and
stat'd the whole volume.
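For reference, the self-heal progress can be watched (and a full sweep triggered)
without stat'ing everything by hand; the volume name below is an example:

    gluster volume heal myvol info                     # pending entries per brick
    gluster volume heal myvol statistics heal-count
    gluster volume heal myvol full                     # trigger a full self-heal sweep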
Will enabling pNFS just be like the VFS FSAL with pnfs = true? Otherwise
I'll wait for your docs.
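For context, a minimal GLUSTER-FSAL export block in ganesha.conf looks roughly
like this (volume name and pseudo path are examples; whatever pNFS-specific
switch turns out to be needed is exactly what the docs should cover):

    EXPORT {
        Export_Id = 1;
        Path = "/myvol";
        Pseudo = "/myvol";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "myvol";
        }
    }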
On Tue, Mar 24, 2015 at 1:25 AM, Jiffin Tony Thottan jthot...@redhat.com
wrote:
On 24/03/15 12:37, Lalatendu Mohanty wrote:
On 03/23/2015 12:49 PM, Anand Subramanian wrote:
FYI.
GlusterFS
], entry->d_name);
ret = lstat (hpath, &stbuf);
/* skip sub-directories */
if (!ret && S_ISDIR (stbuf.st_mode))
        continue;
}
}
On Sun, Oct 12, 2014 at 11:56 AM, Benjamin Kingston l
I have tried this and unfortunately NFS doesn't support extended attributes
in the way that gluster needs them, which prevents brick creation.
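A quick way to see the limitation (paths are examples; gluster stores its
metadata in trusted.* xattrs on the brick filesystem, which an NFS mount
won't carry):

    touch /mnt/nfs-backed-brick/testfile
    setfattr -n trusted.test -v works /mnt/nfs-backed-brick/testfile   # fails on NFS
    getfattr -n trusted.test /mnt/nfs-backed-brick/testfile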
On Mon, Oct 13, 2014 at 2:49 AM, technocrat 9000 technocrat9...@gmail.com
wrote:
Hi, I'm interested in using GlusterFS for my simple home NAS system.
I tried building the 3.6.0 tag last night to no avail, but I'll try the
newer betas as well as the master branch tonight, maybe even the 3.7 alpha
for good measure. Good to hear about recent cross-platform work, so maybe
there's hope.
As a side note, I'm considering using Solaris 11 as a tcp/ip NFS brick to
../../contrib/mount/mntent.c:169:1: warning: control reaches end of
non-void function [-Wreturn-type]
}
Could this be a bug?
On Fri, Oct 10, 2014 at 6:13 PM, Benjamin Kingston l...@nexusnebula.net
wrote:
I tried building the 3.6.0 tag last night to no avail, but I'll try the
newer betas as well
I'm trying to get gluster 3.5.2, or really any version at this point, to
compile on Solaris so I can take advantage of ZFS and encryption. This
would be a killer app for me, as I'm a big fan of gluster on Linux, but I'm
running into a number of roadblocks with compiling.
Any pointers or success stories?
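For reference, the stock source build on Linux is roughly the following; a
Solaris build will at minimum want GNU make and likely extra CPPFLAGS/LDFLAGS,
so treat this as a starting point rather than a recipe:

    git checkout v3.5.2
    ./autogen.sh
    ./configure
    make && make install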
On 10/08/2014 02:31 PM, Benjamin Kingston wrote:
I'm trying to get gluster 3.5.2, or really any version at this point, to
compile on Solaris so I can take advantage of ZFS and encryption. This
would be a killer app for me, as I'm a big fan of gluster on linux, but I'm
running into a number
VM, and the VM
system drives (where /var/lib/glusterd resides) are all placed on the same
host drive? Glusterd updates happen synchronously even in the latest
release, and the change to use buffered writes + fsync went into master only
recently.
On May 21, 2014 1:25 AM, Benjamin Kingston l
I'm trying to get gluster working in a test lab and had excellent success
setting up a volume and 14 bricks on the first go-around. However, I then
realized the reasoning behind using a subdirectory in each brick and
decommissioned the whole volume to start over. I also deleted the
/var/lib/glusterd
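For anyone doing a similar tear-down, the cleanup that usually avoids the
"already part of a volume" errors afterwards looks roughly like this (brick path
is an example, and this deliberately wipes gluster metadata, so only run it on
bricks being reused):

    systemctl stop glusterd
    rm -rf /var/lib/glusterd/vols /var/lib/glusterd/peers

    # per brick directory being reused:
    setfattr -x trusted.glusterfs.volume-id /bricks/b1/brick
    setfattr -x trusted.gfid /bricks/b1/brick
    rm -rf /bricks/b1/brick/.glusterfs

    systemctl start glusterd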