On Wed, Oct 29, 2014 at 1:18 PM, craig w wrote:
> I am trying to run a GlusterFS server in a Docker container. It works fine,
> the problem I have is when I create a volume it's associated with the
> private IP address of the container, which is not accessible to other hosts
> in the network.
>
>
On Sat, Oct 25, 2014 at 11:13 AM, Harshavardhana
wrote:
> On Sat, Oct 25, 2014 at 8:47 AM, Dennis Schafroth wrote:
>> Mounting a distributed replicated linux server sometimes shows duplicate
>> directory entries on mac.
>>
>
> Go ahead and open a bug :-)
>
On Sat, Oct 25, 2014 at 8:47 AM, Dennis Schafroth wrote:
> Mounting a distributed replicated linux server sometimes shows duplicate
> directory entries on mac.
>
Go ahead and open a bug :-)
--
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes
>
> Works on linux, but on mac I am hit by the second issue not being able to
> have multiple options. Something must go wrong in the /sbin/mount_glusterfs
> script.
>
> Work-Around for that is calling glusterfs directly:
>
> /usr/local/sbin/glusterfs --attribute-timeout=1 --entry-timeout=1
> --vol
On Sat, Oct 25, 2014 at 1:16 AM, Dennis Schafroth wrote:
> Trying to test OSX towards a Linux (debian 7.7) server, but seems to get the
> same error on both OS X and Linux when attempting to mount:
>
> Running beta3 on both.
>
> sudo mount -t glusterfs -o
> 'log-file=/usr/local/var/log/glusterfs/d
> I have done a `brew fetch osxfuse` and I've got
> /Library/Caches/Homebrew/osxfuse-2.7.2.yosemite.bottle.tar.gz.
>
> ??? Seems like I could just untar that in /usr/local/Cellar. Seem
> reasonable?
We don't necessarily need the OSXFUSE headers at all; all we need is
/dev/fuse with the
kernel module loaded.
0 233Gi 28Gi 204Gi 13% 7412830 53572512 12%
/private/tmp/gfs
On Thu, Oct 23, 2014 at 12:33 PM, Harshavardhana
wrote:
>> I'm trying to build on my shiny new Yosemite machine... `brew install
>> osxfuse` tells me:
>>
>> osxfuse: OS X Mavericks or older is
> I'm trying to build on my shiny new Yosemite machine... `brew install
> osxfuse` tells me:
>
> osxfuse: OS X Mavericks or older is required for this package
> OS X Yosemite introduced a strict unsigned kext ban which breaks this
> package.
> You should remove this package from your system a
>
> [2014-10-22 22:28:40.765585] I [MSGID: 100030]
> [glusterfsd.c:2018:main]
> 0-/usr/local/Cellar/glusterfs/3.6.0/sbin/glusterfs: Started running
> /usr/local/Cellar/glusterfs/3.6.0/sbin/glusterfs version 3.6.0beta3
> (args: /usr/local/Cellar/glusterfs/3.6.0/sbin/glusterfs
> --volfile-server=bne-
> Does this mean we'll need to learn Go as well as C and Python?
>
> If so, that doesn't sound completely optimal. :/
>
> That being said, a lot of distributed/networked computing
> projects seem to be written in it these days. Is Go specifically
> a good language for our kind of challenges, or is
On Fri, Aug 22, 2014 at 9:55 AM, Andrew Neuschwander
wrote:
> Hi Gluster Users,
>
> I'm configuring glusterfs pools in my libvirt/kvm setup. Does anyone know if
> the XML file accepts a backup volfile server like the
> mount options in /etc/fstab for gluster fuse mounts? If so, what is the XML
>
Would you mind opening up a bug and providing the glusterfs server logs?
Also, could you reproduce it while grabbing a tcpdump on the server?
Thanks
On Wed, Aug 6, 2014 at 1:02 PM, Eric Horwitz wrote:
> I am having an issue where I can write to a glusterfs striped volume via NFS
> but I cannot read from it. The str
>> I am writing more detailed documentation (and my apologies for not having it
>> ready at launch) for how to get involved but the high level overview will be:
>>
>> For documentation:
>> 1) Grab the docs project
>> 2) Make your changes
>> 3) Commit in git
>
> Excuse me that I am completely lost
Great work! -- ecotech++
On Wed, Jul 9, 2014 at 7:56 PM, Eco Willson wrote:
> Greetings all,
>
> Just a quick note to let everyone know that we switched over to the new
> Gluster.org site earlier this evening, please feel free to take a look for
> yourselves at www.gluster.org. In addition to
> 1) Is-it OK to build my storage on top of Ubuntu server?
>
There are no known limitations to using Ubuntu Server, IMHO.
> 2) Is there any glusterfs driver for FreeBSD? I understood that there is
> none so far?
There is an experimental version available to test out - this is fully
GlusterFS for
count. :)
>
> Regards and best wishes,
>
> Justin Clift
>
>
> On 19/06/2014, at 6:40 PM, Harshavardhana wrote:
> > With OSX and NetBSD stability upstream i guess FreeBSD would be easier
> > to approach. Surprised that it was posted for review upstream.
> >
>
Looks like some sort of an automake issue; the argument to ./py-compile
--destdir is missing.
On Sun, Jun 22, 2014 at 11:51 AM, Prof. Dr. Christian Baun
wrote:
> Hi,
>
> I want to build up a Cluster of RaspberryPis and realize a distributed
> and replicated storage with GlusterFS.
>
> Raspbian contain
With OSX and NetBSD stability upstream, I guess FreeBSD would be easier
to approach. Surprised that it was posted for review upstream.
On Thu, Jun 19, 2014 at 7:56 AM, Justin Clift wrote:
> Hi all,
>
> Are there any FreeBSD developers around who are up for a bit of
> a challenge?
>
> There's a Fre
Excellent stuff! +1
On Thu, May 29, 2014 at 4:10 PM, Eco Willson wrote:
> Dear Community members,
>
> We have been working on a new site design and we would love to get your
> feedback. You can check things out at staging.gluster.org. Things are still
> very much in beta (a few pages not disp
Hey Tom,
http://review.gluster.org/7501 - this fixes your issue; it's currently
on the master branch and should be backported to the release-3.5 branch.
Copying Niels; he can explain more.
rpc: implement server.manage-gids for group resolving on the bricks
The new volume option 'server.manage-gid
This is a known issue and pending fix -
https://bugzilla.redhat.com/show_bug.cgi?id=1053579 - this is the
official fix http://review.gluster.org/#/c/7501/
On Thu, May 8, 2014 at 10:23 AM, Tom Young wrote:
> I’ve tested my gluster volume and it seems like there’s a 32 group limit.
> Is there any w
> We're running OSX 10.8.5 with OSXFUSE 2.6.4
> # make -v
> GNU Make 3.81
> Copyright (C) 2006 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.
> There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
> PARTICULAR PURPOSE.
> This program b
>
> Slightly OT, what would it take to setup a jenkins integration for Mac? We
> seem to be getting close to a NetBSD integration and having one for Mac
> would be cool too.
Oh, I didn't know we were doing this; we should definitely get Mac in
place too if infrastructure is not the problem :-)
ret = fflush (fp);
Current language: auto; currently minimal
(gdb)
It segfaults constantly here.
On Sat, May 3, 2014 at 5:23 PM, Harshavardhana
wrote:
>>
>> Btw, if you want remote access to this Mac mini, I can create
>> an account for you on it. :)
>>
>>
>
>
>
> Btw, if you want remote access to this Mac mini, I can create
> an account for you on it. :)
>
>
Sure thing, I would love to take a look. This is some incompatibility
issue with 10.7.3 v/s 10.9.2 (which I am running).
N/A N
N/A -> This is a bug we need to fix
Task Status of Volume dht
--
There are no active volume tasks
On Sat, May 3, 2014 at 3:46 PM, Harshavardhana
wrote:
>>
>> Does it also
>
> Does it also happen for you?
>
>
No sir - :-)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/glust
>
> $ sudo glusterd --debug
> [2014-05-03 20:49:34.684205] I [MSGID: 100030] [glusterfsd.c:2016:main]
> 0-glusterd: Started running glusterd version 3.5qa2 (args: glusterd --debug)
> [2014-05-03 20:49:34.684386] D [MSGID: 0] [glusterfsd.c:614:get_volfp]
> 0-glusterfsd: loading volume file
Justin,
Here you go - http://review.gluster.org/#/c/7651/
On Fri, May 2, 2014 at 11:46 PM, Harshavardhana
wrote:
> Yep happened in a recent change, looking at it :-)
>
> On Fri, May 2, 2014 at 7:10 PM, Justin Clift wrote:
>> On 26/04/2014, at 9:53 AM, Dennis Schafroth wrote:
Yep happened in a recent change, looking at it :-)
On Fri, May 2, 2014 at 7:10 PM, Justin Clift wrote:
> On 26/04/2014, at 9:53 AM, Dennis Schafroth wrote:
>
>> On the server part we are not there yet. I have been running OS X bricks
>> with OS X clients, but only replicated ones seems to work.
Hi Tom,
After adding the brick, have you 'rebalanced' the volume?
# gluster volume rebalance <VOLNAME> start
On Tue, Apr 29, 2014 at 7:28 AM, Tom Young wrote:
> Hi. I have setup a distributed volume called home, and then I expanded it
> by adding another brick from a second server. When I run a job to cre
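For reference, the usual expand-then-rebalance sequence looks roughly like this (a sketch; the volume name `home` is taken from the question, the brick path is hypothetical):

```shell
# Add a brick from the second server to the distributed volume "home"
gluster volume add-brick home server2:/data/brick1

# Spread the existing data across the new brick
gluster volume rebalance home start

# Check progress until it reports completed
gluster volume rebalance home status
```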
On Sun, Apr 27, 2014 at 8:57 PM, Dan Mons wrote:
> Preliminary success - compiled, installed and mounting on MacOSX
> 10.8.5 (our production rollout here).
>
> Finder in preview mode loads up a folder with 406 images quite
> quickly, and scrubs back and forth at reasonable speed (much better
> tha
-- slice -: 0
> tx_pkt_start: 2087737864
> tx_pkt_done: 2087737864
> tx_req: 2508370636
> tx_done: 2508370636
> rx_small_cnt: 1504058385
> rx_big_cnt: 2957794484
> wake_queue: 462814
> stop_queue: 462814
> tx_linearized: 1011916
Perhaps a driver bug? Have you verified the ethtool -S output?
On Wed, Apr 16, 2014 at 2:42 AM, Franco Broi wrote:
>
> I've increased my tcp_max_syn_backlog to 4096 in the hope it will
> prevent it from happening again but I'm not sure what caused it in the
> first place.
>
> On Wed, 2014-04-16 at
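To narrow a NIC/driver issue down, one quick check (a sketch; the interface name eth0 is an assumption) is to watch the error and drop counters between runs:

```shell
# Dump NIC-internal statistics and keep only error/drop/queue counters;
# values that keep growing under load point at the driver or hardware
ethtool -S eth0 | egrep -i 'drop|err|fifo|stop_queue'
```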
>
> Just distributed.
>
With a pure distributed setup you have to take downtime, since the data
isn't replicated.
>>
>> > 3.4.1 to 3.4.3-3 shouldn't cause problems with existing clients and
>> > other servers, right?
>> >
>>
>> You mean 3.4.1 and 3.4.3 co-existent with in a cluster?
>
> Yes, at least
On Mon, Apr 14, 2014 at 4:23 PM, Franco Broi wrote:
>
> Thanks Justin.
>
> Will it be possible to apply this new version as a rolling update to my
> servers keeping the volume online?
>
If you have a distributed + replicated setup, yes, it's possible to do.
But you should make
sure that self-heal is
]: *** [libglusterfs_la-logging.lo] Error 1
>> make[3]: *** [all] Error 2
>> make[2]: *** [all-recursive] Error 1
>> make[1]: *** [all-recursive] Error 1
>> make: *** [all] Error 2
>>
>>
>> How did you get libintl.h in your system? Also, please add a check f
On Tue, Apr 8, 2014 at 10:05 AM, Jimmy Lu wrote:
> Hello Gluster Guru,
>
> I like to deploy about 5 nodes of gluster using gluster-deploy. Would
> someone please point me to the link where I can download? I do not see it
> in the repo. I am using rhel on my 5 nodes.
>
> Thanks in advance!
>
http
Virt-manager / libvirt is perhaps yet to expose this functionality -
but as far as I remember libvirt should be doing this as a
pass-through for the URLs which have been passed as
":///"
Does libvirt 'invoke' fuse when passed "gluster://" schema?
On Thu, Mar 27, 2014 at 5:34 PM, Dave Christians
+1
On Fri, Feb 21, 2014 at 2:26 PM, Brad Childs wrote:
> I would like to announce a new project on Gluster forge - libgfapi-java-io.
> This project aims at creating a Java 1.4+ interface to gluster using libgfapi
> interface.
>
> https://forge.gluster.org/libgfapi-java-io
>
> libgfapi-java-io
You should take a 'gluster' statedump of the process, and open a
bugzilla for analysis with the logs attached.
On Tue, Jan 28, 2014 at 10:30 AM, Jay Vyas wrote:
> Hi folks :
>
> Im running mahout on top of gluster using the GlusterFileSystem hadoop
> plugin.
>
> It works well, but Im noticing that glusterfsd
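Taking a statedump, as a hedged sketch (the volume name is hypothetical, and the dump directory varies by release, typically /var/run/gluster or /tmp):

```shell
# Newer releases: ask glusterd to take a statedump of the brick processes
gluster volume statedump myvol

# Alternatively, signal the process directly (older releases)
kill -USR1 $(pidof glusterfsd)

# Dump files typically land in /var/run/gluster (or /tmp on older builds)
ls /var/run/gluster /tmp | grep dump
```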
>
> Probably by the end of today or so I'll be releasing the long
> requested "Automatic GlusterFS deployments". I'd recommend against
> offering up static iso's, since this solution is a lot nicer, and a
> lot less bandwidth heavy for downloads. You can set how many hosts to
> build, and so on, al
On Tue, Dec 24, 2013 at 8:21 AM, Anirban Ghoshal
wrote:
> Hi, and Thanks a lot, Anand!
>
> I was initially searching for a good answer to why the glusterfs site lists
> knfsd as NOT compatible with the glusterfs. So, now I know. :)
>
> Funnily enough, we didn't have a problem with the failover du
Perhaps here - https://launchpad.net/~semiosis/+archive/ubuntu-glusterfs-3.4
and
http://www.gluster.org/2013/10/glusterfs-3-4-1-packages-for-ubuntu-saucy-13-10/
On Tue, Dec 17, 2013 at 4:56 PM, Knut Moe wrote:
> I added version 3.4 to my Ubuntu repository using the following command:
>
> add-
The default size for a GlusterFS brick is limited by the limit set for the
disk-based filesystem underneath:
ext4 - 1EiB (recommended: 16TiB)
XFS - 8EiB on 64-bit; on 32-bit it's 16TiB
ext3 - 2TiB - 32TiB
ext2 - 16GiB - 2TiB
Is this what you were looking for?
On Tue, Dec 17, 2013 at 3:06 PM, Knut Moe wrote
The limiting factor is the number of servers and bricks you have - in
fact, 'there is no hard limit'.
On Tue, Dec 17, 2013 at 9:28 AM, Randy Breunling wrote:
> What's the max size one can achieve in a single gluster cluster?
>
> --Randy Breunling
>
> ___
> Gl
--- http://joejulian.name/blog/broken-32bit-apps-on-glusterfs/ ---
On Sun, Nov 24, 2013 at 6:13 PM, Sharuzzaman Ahmat Raslan <
sharuzza...@gmail.com> wrote:
> Hi all,
>
> I'm following this thread, and found that this corner case (32 bit client
> connecting to 64 bit cluster) will cause this iss
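The usual workaround discussed for 32-bit clients against a 64-bit cluster is to hand out 32-bit inode numbers; a hedged sketch (volume and server names are hypothetical, and option names may vary by release):

```shell
# NFS clients: have Gluster's NFS server squash inode numbers to 32 bits
gluster volume set myvol nfs.enable-ino32 on

# FUSE clients: request 32-bit inode numbers at mount time
mount -t glusterfs -o enable-ino32 server1:/myvol /mnt/myvol
```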
David,
It should be
# gluster volume set gfsv0 features.quota-deem-statfs on
On Wed, Oct 30, 2013 at 6:06 AM, David Gibbons wrote:
> Hi Lala,
>
> Thank you. I should have been more clear and you are correct, I can't
> write data above the quota. I was referring only to the listing of "disk
> s
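For context: with that option enabled, df/statfs on a client mount reports the configured quota rather than the raw brick size. A sketch (the mount path is hypothetical):

```shell
# Report the quota limit, not the backing brick size, via statfs
gluster volume set gfsv0 features.quota-deem-statfs on

# A client mount should now show the quota-limited size and usage
df -h /mnt/gfsv0
```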
Completely normal! - and very good values indeed
On Wed, Oct 23, 2013 at 9:47 AM, huan wang wrote:
> I test the performance of glusterfs3.4.1.
>
> 1 client--- 2 servers(raid 5 800MB/s, 10Gb )
>
> client dd test write .DHT 450MB/s AFR 330MB/s
>
> is it normal ?
>
> thank you very much.
>
>
You should perhaps open an [RFE]?
On Wed, Oct 23, 2013 at 4:56 AM, David Gibbons wrote:
> Hi All,
>
> I'm setting up a gluster cluster that will be accessed via smb. I was
> hoping that the quotas. I've configured a quota on the path itself:
>
> # gluster volume quota gfsv0 list
> path
>
> I've posted to the list about this issue before actually.
> We had/have a similar requirement for storing a very large number of
> fairly small files, and originally had them all in just a few directories
> in glusterfs.
>
Directory layout also matters here "number of files v/s number of
direc
Joe,
Perhaps a typo
"""So first we move server1:/data/brick2 to server3:/data/brick1""" -
http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
Should be "server3:/data/brick2"
On Sun, Sep 8, 2013 at 12:34 PM, Joe Julian wrote:
> On 09/05/2013 02:16 AM, Anup
+1
On Thu, Sep 5, 2013 at 4:18 PM, Anand Avati wrote:
>
> On Thu, Sep 5, 2013 at 2:53 PM, Stephen Watt wrote:
>
>> Hi Folks
>>
>> We are pleased to announce a major update to the glusterfs-hadoop project
>> with the release of version 2.1. The glusterfs-hadoop project, available at
>> The glus
On Tue, Sep 3, 2013 at 10:08 PM, John Mark Walker wrote:
> Greetings,
>
> As you probably know, the gluster.org web site has been using the same
> design for over two years now, and we still have the problem of a web site
> that uses multiple applications, none of which are integrated. We are
> fi
> Thanks. I may have a look at that.
> Any reason RPMs can't be provided? I think more people would be willing to
> test RPMs/DEBs.
> The VFS plugin might be interesting if I find an easy way to provide HA
> for Samba without overblowing my setup..
>
>
>
RPMs will be provided when there is samba re
this and perhaps give it a try?
http://shell.gluster.com/~avati/gluster-samba
Cheers
On Fri, Aug 30, 2013 at 5:28 PM, Nux! wrote:
> On 31.08.2013 01:18, Harshavardhana wrote:
>
>> For OS/X ideal use case is use NFS/CIFS interface.
>>
>
> There is no built-in CIFS
For OS/X ideal use case is use NFS/CIFS interface.
On Fri, Aug 30, 2013 at 4:52 PM, Justin Clift wrote:
> On 30/08/2013, at 10:59 PM, Peng Yu wrote:
> > Hi,
> >
> > I run ./configure then make. But I see the following error. Does
> > anybody know how to compile gluster 3.4 on Mac OS X 10.8.4? T
There is a hacker-guide -
https://github.com/gluster/glusterfs/tree/master/doc/hacker-guide/en-US/markdown
But it doesn't explain the CLI stuff - would you explain in more
detail what you are trying to accomplish?
On Thu, Aug 29, 2013 at 8:47 PM, haiwei.xie-soulinfo <
haiwei@sou
For checksum verification and continuous data integrity checks, perhaps
use this - https://github.com/avati/arequal
On Thu, Aug 29, 2013 at 8:32 AM, Michael Peek wrote:
> On 08/26/2013 01:14 PM, Anand Avati wrote:
> > Michael,
> > The problem looks very strange. We haven't come across such a
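arequal computes a recursive checksum over a directory tree, so one hedged way to use it is to compare the gluster mount against a known-good copy (paths are hypothetical; the invocation is from memory, check the repository's README):

```shell
# Build arequal from source
git clone https://github.com/avati/arequal
cd arequal && ./autogen.sh && ./configure && make

# Checksum the same tree on the gluster mount and on a local copy;
# identical output means identical data
./arequal-checksum /mnt/gluster/data
./arequal-checksum /backup/data
```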
I might have to test the ctdb setup again and see if enabling io-cache
along with the fix he mentioned fixes the ctdb failover.
On Mon, Aug 26, 2013 at 6:10 PM, Joe Julian wrote:
> On 08/26/2013 02:32 PM, Joe Julian wrote:
>
>> Are these really errors? I have a massive number of them in
Looks like an odd fix - http://review.gluster.org/#/c/4916/ - I've seen,
while using ctdb, that failover doesn't work and this is the message
all the time.
Turning off io-cache helps ctdb failovers. The fix doesn't clearly address
the inode context being NULL.
On Mon, Aug 26, 2013 at 2:32 PM,
Alexey,
Can you try with
$ mount -vv -t nfs -o vers=3 :/
On Fri, Aug 16, 2013 at 9:17 PM, Alexey Shalin wrote:
>
> root@ispcp:~# mount -t nfs 192.168.15.165:/storage /storage
> mount.nfs: Unknown error 521
> root@ispcp:~#
>
> [2013-08-17 04:09:46.444600] E [nfs3.c:306:__nfs3_get_volume_id]
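Spelled out against the failing command quoted above (mountproto=tcp is an extra assumption; Gluster's built-in NFS server speaks NFSv3 over TCP only):

```shell
# Force NFSv3 over TCP when mounting the gluster NFS export
mount -vv -t nfs -o vers=3,mountproto=tcp 192.168.15.165:/storage /storage
```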
On Sun, Jul 28, 2013 at 10:41 PM, Amar Tumballi wrote:
> On 07/29/2013 04:46 AM, Harshavardhana wrote:
>
>>
>> http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
This is something that I wrote recently - options which could be used for
interesting purposes, and perhaps you will.
Feel free to share and edit if anything is missing; perhaps you have found
other interesting opti
>
>> There are many different models some of which are time tested which have
>> worked for more than a decade and at a scale of 100,000's of patches
>> millions of lines of code.
>>
>> 1. Linux kernel -
>> http://www.oss-watch.ac.uk/resources/benevolentdictatorgovernancemodel
>> 2. Mozilla Foundat
> - Be responsible for maintaining release branch.
> - Deciding branch points in master for release branches.
> - Actively scan commits happening in master and cherry-pick those which
> improve stability of a release branch.
> - Handling commits in the release branch.
> - Deciding what outstanding
Detailed reproducible steps, with all of this information filed in
bugzilla, would be appreciated. Let's take this discussion back there.
-Harsha
On Fri, Jan 6, 2012 at 2:17 PM, Matt Cowan wrote:
>
> On Fri, 6 Jan 2012, Harshavardhana wrote:
>>
>> Users have mounted /home for year
>
> Do you have any additional references on that (that = user_xattr fixing
> things like firefox)? I'm not seeing anything similar on google...
>
>
I don't; the problems were related to something else. But there is nothing
wrong in enabling it anyway.
Curious: why do you have a 32-bit server and a 64-bit clien
Users have mounted /home for years now; it's nothing new. Many
customers use NIS/LDAP-based authentication for user home
directories, application runs, etc.
People have even used GlusterFS as a remote booting mechanism over PXE.
Gordan Bobic was the last one I heard of who was running this
successf
> When the same file has mis-match in gfid self-heal looksup the parent
> directory and see which one is the source (based on the entry xattrs of the
> directory), then it will retain the file in that directory removes the other
> file and does the self-heal of the file.
So the no clear source can
> If it can fix the gfid problems, it will fix otherwise the mount will give
> Input/Output error for the files it cant fix.
What is the frequency of 'can't fix' v/s 'can fix'? Was it not the
assumption that the gfid self-heal would fix all the gfid problems?
So if the problems do exist, then we
Strange known issues:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Technical_Notes/ar01s04s02.html
It's called 'rdma' now.
-Harsha
On Mon, Nov 7, 2011 at 10:55 AM, Harshavardhana wrote:
> http://people.redhat.com/dledford/infiniband_get
Ben,
/etc/init.d/openibd start - that should do everything for you, I presume.
-Harsha
On Mon, Nov 7, 2011 at 6:44 AM, Ben England wrote:
> To Harry Mangalam about Gluster/RDMA:
>
> make sure these modules are loaded
>
> # modprobe -v rdma_ucm
> # modprobe -v ib_uverbs
> # modprobe -v ib_ucm
>
This is a 'compliance' problem. Any volume access should adhere to one
standard way; having it work in multiple ways is causing confusion.
I would bet on having a command-line option like '-o transport=rdma'; it
would sound better and be more self-explanatory for sysadmins.
-Harsha
>
> use 'mount -t g
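A sketch of the two styles (server and volume names hypothetical; the volname.rdma suffix is the form some releases already accept):

```shell
# Proposed: pick the transport with an explicit mount option
mount -t glusterfs -o transport=rdma server1:/myvol /mnt/myvol

# Form accepted by some releases: encode the transport in the volume id
mount -t glusterfs server1:/myvol.rdma /mnt/myvol
```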
Basic rule of thumb:
RAID 5, 64k stripe size: use the following. If you have larger files,
go further.
# xfs, RAID 5 with 5 disks (4 data disks), 64K stripe, units in 512-byte sectors
mkfs -t xfs -d sunit=$((64*2)) -d swidth=$((4*64*2))
For better memory alignment with the scheduler:
echo "16" > /proc/sys/vm/page-cluster
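The arithmetic behind those mkfs values, as a runnable sketch; note that for RAID 5 the parity disk is normally excluded, so 5 disks give 4 data disks in the stripe width (units are 512-byte sectors, i.e. KiB x 2):

```shell
# Compute XFS sunit/swidth for a RAID 5 array with a 64 KiB chunk size
stripe_kib=64                    # per-disk chunk size in KiB
data_disks=4                     # 5-disk RAID 5 -> 4 data disks
sunit=$((stripe_kib * 2))        # one chunk, in 512-byte sectors
swidth=$((data_disks * sunit))   # one full stripe across the data disks
echo "sunit=$sunit swidth=$swidth"
# -> sunit=128 swidth=512, i.e. mkfs -t xfs -d sunit=128 -d swidth=512
```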
On Fri, Aug 12, 2011 at 4:03 PM, Harshavardhana wrote:
>> Since I have replication 'ON', there is no downtime as the brick on the
>> second node serves well, but I want the redundancy/replication to be
>> restored with the introduction of a new node (#3) in the clust
> Since I have replication 'ON', there is no downtime as the brick on the
> second node serves well, but I want the redundancy/replication to be restored
> with the introduction of a new node (#3) in the cluster.
>
Exactly. If it's in the cloud, then the disk, be it EBS blocks, can be
reattached ba
> I have a two node cluster, with two bricks replicated, one on each node.
> Lets say one of the node dies and is unreachable.
If you have the disk from the dead node, then all you have to do is plug
it into a new system and run the following commands:
gluster volume replace-brick ... start
gluster
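A hedged sketch of that recovery (volume and brick paths hypothetical; newer releases handle a dead source with 'commit force' rather than a plain start):

```shell
# Swap the dead node's brick for the one plugged into the new server
gluster volume replace-brick myvol deadnode:/data/brick server3:/data/brick start

# Then let self-heal resynchronise the replica
gluster volume heal myvol full
```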
On Thu, Aug 11, 2011 at 8:15 PM, Hraban Luyat wrote:
> Pointer casting: armv5tel forces word-alignment of unsigned integers
> but glusterfs casts buffer pointers (char *) to integer pointers. 75%
> you get a misaligned pointer and when you read from that, well, you
> read some incompatible value t
> The client problem was that, after doing the above to the storage servers,
> the client(s) obviously had cache issues. Doing an “ls” on a native
> GlusterFS mount showed only several subdirectories out of many, and a “du
> –sh” of the mount point gave this message on the client:
>
>
>
gineer
>
> Knight Capital Group
>
>
> *From:* John Mark Walker [mailto:jwal...@gluster.com]
> *Sent:* Wednesday, June 22, 2011 3:04 PM
> *To:* Harshavardhana; Burnash, James
>
> *Cc:* gluster-users@gluster.org
> *Subject:* RE: [Gluster-users] GlusterFS 3.1
On Wed, Jun 22, 2011 at 11:53 AM, Burnash, James wrote:
> Does anyone know how we can get the native GlusterFS clients to clear their
> cached data short of restarting them (or dismounting and remounting their
> gluster storage?
>
> Clients are running 3.1.3, servers are now running 3.1.5.
>
>
Cli
http://bugs.gluster.com/show_bug.cgi?id=2818 - a bug is filed, easily
reproducible through the steps mentioned in the bug.
Native FUSE works.
On Tue, Apr 19, 2011 at 3:11 PM, Whit Blauvelt
wrote:
> Jerry,
>
> You might well be right. But why would the nfs requirement be different
> from
> the
tdated, which needs to be fixed
by the Debian maintainers. Even the init scripts from glusterfsd are
invalid for 3.1.x releases, since "glusterd" handles spawning the
"glusterfsd" process.
We will be filing bugs against the Debian repositories to get it updated.
--
Harshavardha
Native NFS solves it to a large extent with its aggressive client-side
caching of many attributes, since stats are served locally for the NFS client.
Booster never evolved much further, and we saw the side effects. Since the
problem is elsewhere, eventually it had to be dropped.
Regards
--
Harsha
n and add the trace translator. I've sent in a bunch of
trace data and opened a bug:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=1040
-Brian
Thanks a lot Brian, your help is much appreciated.
Regards
--
Harshavardhana
Gluster Inc - http://www.gluster.com
+1(408)-770-188
e, would you mind opening a bug at
http://bugs.gluster.com/
and also "trace" logs from the client side attached with it.
ag" is run. But the files newly created
in new directories will be redistributed properly; it is just for the
older directories that it won't happen. (By directories I mean "top level"
directories.)
own set of
pairs. Only new
directories would help in creating files across evenly.
On 06/14/2010 05:09 AM, Todd Daugherty wrote:
ok I have done all of this. The numbers are the same. With all of the
tuning nothing gets better. What is the next step?
Todd
Well then next step would be to use 3.0.x releases.
ecommended.
But I think you should be getting enough benefit from the above details.
MB/sec over
glusterfs for writes as well as reads; if not, then it's a tuning or
configuration issue.
Thanks
benchmarks to 128k.
hunk writes has large implications on
performance.
Keep us posted.
ow how the performance shows, then we can tune things
further.
(Raid controller).
Backend disk performance should be with "dd if=/dev/zero
of=/ bs=1M count=16386 oflag=direct"
glusterfs mount point.
bove with big_writes enabled for pagecached
writes, we can use --disable-direct-io-mode safely without a performance
penalty.
It all depends on the client side. To serve XEN images from glusterfs, it
is recommended to use the latest kernels.
file without using volgen.