Hi Joshua:
On Wed, Apr 13, 2011 at 2:21 PM, Joshua Baker-LePain wrote:
> I figured that was the case, and it's easy enough to tweak the 3.1.1 spec
> file to build with the 3.1.4 tarball. But there are enough changes and
> things moving about that it's nice to have an "official" spec file to work from.
Hi all:
On Thu, Jan 6, 2011 at 2:09 PM, Fabricio Cannini wrote:
> I second Piotr's suggestion.
> May I also suggest two things to the gluster devs:
>
> - To follow Debian's way of separating packages (server, client, libraries, and
> common data packages). It makes automated installation much easier and
Hi Christian:
On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
wrote:
> No statement from the developers about the usability of the glusterfs client on
> 32-bit systems. But this was probably discussed in earlier threads.
I believe the official comment is that Gluster is not going to support
32-bit systems.
Hi all:
I ran into issues with the gNFS server on a 32-bit OS. Did any of you run
into it as well?
http://gluster.org/pipermail/gluster-users/2010-November/005703.html
Thanks,
Bernard
On Fri, Nov 12, 2010 at 9:51 AM, Ken Bigelow wrote:
> We have all 32bit server / clients for Gluster. We did have
Hi Stefano:
On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
wrote:
> is there a way to have a 32bit Glusterfs client?
You can definitely build it yourself, but it is not officially
supported by Gluster. They recommend you use GlusterFS on 64-bit
architecture servers.
Cheers,
Bernard
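For anyone who wants to try anyway, a 32-bit build from source is the usual autotools sequence; a minimal sketch (the version number and prefix here are illustrative, and this remains an unsupported configuration):

```shell
# Illustrative version number; pick the actual release you need from
# http://download.gluster.com/pub/gluster/glusterfs/
wget http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/glusterfs-3.1.0.tar.gz
tar xzf glusterfs-3.1.0.tar.gz
cd glusterfs-3.1.0
./configure --prefix=/usr
make && sudo make install
```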
Hi Joe:
On Sun, Nov 7, 2010 at 12:03 AM, Joe Landman
wrote:
> Actually, showmount didn't work.
>
> We get permission denied. Even after playing with the auth.allowed flag.
That's an indication that the gNFS server is not running.
I would recommend you review the FAQ and some of the recent posts.
Hi Joe:
On Sat, Nov 6, 2010 at 9:53 PM, Joe Landman
wrote:
> We have a 3.1 cluster set up, and NFS mounting is operational. We are
> trying to get our heads around the mounting of this cluster. What we found
> works (for a 6 brick distributed cluster) is using the same server:/export
> in all
Hi Craig:
On Thu, Nov 4, 2010 at 11:42 PM, Craig Carl wrote:
> Thanks for filing the bug report. We have not been able to recreate the
> issue on CentOS 5.x 64-bit servers, which indicates that the problem is either
> related to your 32-bit servers or CentOS 4, neither of which are supported,
>
Hi Shehjar:
On Tue, Nov 2, 2010 at 9:40 PM, Shehjar Tikoo wrote:
> Please file a bug. I'll need the logs in TRACE level for the nfs server
> daemon while you run ls on the mount point, also the volume files.
Filed: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2057
BTW, I noticed t
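For anyone trying to reproduce this, a sketch of capturing TRACE-level logs from the gNFS daemon (the volfile path is the 3.1-era default mentioned elsewhere in this thread; adjust to your layout):

```shell
# Stop the running gNFS process, then start it by hand at TRACE level:
glusterfs -f /etc/glusterd/nfs/nfs-server.vol \
    --log-level=TRACE \
    --log-file=/var/log/glusterfs/nfs-trace.log
# ...then run `ls` on the NFS mount point and attach the log to the bug.
```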
Hi Horacio:
On Wed, Nov 3, 2010 at 7:50 PM, Horacio Sanson wrote:
> $ sudo mount -v -o mountproto=tcp,nfsvers=3 -t nfs store90:/www /mnt
Are you missing the mountpoint here?
BTW, what does `showmount -e` output when you run it on your gluster servers?
Cheers,
Bernard
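For reference, `showmount -e` asks the server's mount daemon for its export list; run against a gluster server with gNFS running, it should list each started volume:

```shell
showmount -e store90
# A healthy gNFS server prints something like:
#   Export list for store90:
#   /www *
```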
Hi Horacio:
The native NFS server only works with NFSv3. I saw that in some of
your attempts your client might have negotiated v3, but I haven't
seen you explicitly specify it in the mount call. Perhaps you can
try something like this:
# mount -o nfsvers=3 gluster-nfs:/export /share
For
On Tue, Nov 2, 2010 at 2:01 PM, Bernard Li wrote:
> For instance for each brick I have:
>
> /export/share/test/a/b/foo
>
> The resulting NFS mountpoint only shows ./test/a but stops there ("a"
> is an empty directory).
Quick update -- now I can see the directory
Hi Shehjar:
On Fri, Oct 29, 2010 at 12:07 PM, Bernard Li wrote:
> Thanks, that worked. I copied /etc/glusterd/nfs/nfs-server.vol to the
> other server, started glusterfsd and I could mount the volume via NFS
> on a client.
I guess I spoke too soon. While I could successfully moun
Hi Shehjar:
On Thu, Oct 28, 2010 at 12:34 AM, Shehjar Tikoo wrote:
> That's not recommended but I can see why this is needed. The simplest way to
> run the nfs server for the two replicas is to simply copy over the nfs
> volume file from the current nfs server. It will work right away. The volume
type nfs/server
subvolumes dshare
end-volume
and I start glusterfsd
Does this look about right? Is the remote-port correct?
Thanks,
Bernard
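For completeness, a hand-written volfile of the shape being discussed would look roughly like this (host name, port, and volume names are illustrative; in 3.1 glusterd normally generates this file itself):

```
volume dshare
  type protocol/client
  option transport-type tcp
  option remote-host gluster01
  # 24009 is the usual first brick port in 3.1; check your brick's actual port
  option remote-port 24009
  option remote-subvolume dshare
end-volume

volume nfs-server
  type nfs/server
  subvolumes dshare
end-volume
```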
On Tue, Oct 26, 2010 at 9:15 PM, Shehjar Tikoo wrote:
> Regarding this pdf, only the portions which show mount commands and the FAQ
> section are applicable to 3.1. In 3.1, NFS gets started by default for a
> volume started with the volume start command.
So basically you're saying if I have a 2 s
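In other words, under 3.1 no NFS-specific configuration is needed at all; a sketch (the volume name is illustrative):

```shell
# Starting a volume in 3.1 also starts the NFS translator for it:
gluster volume start share
gluster volume info share    # Status should read "Started"
# The volume is now mountable over NFSv3 from any client.
```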
Hi Craig:
On Tue, Oct 26, 2010 at 8:22 PM, Craig Carl wrote:
> You should be using the GA release of Gluster 3.1, it includes our NFS
> translator.
Yes, I'm using 3.1 GA already.
> The download is here -
> http://download.gluster.com/pub/gluster/glusterfs/3.1/LATEST/
> Documentation is here
Hi all:
I'm trying to set up an NFS export using GlusterFS 3.1. I have set up a
replicated volume using the gluster CLI as follows:
Volume Name: share
Type: Replicate
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gluster01:/export/share
Brick2: gluster02:/export/share
Bri
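For reference, a volume with the layout shown above would have been created with something like the following (gluster03 through gluster05 are assumptions extrapolated from the Brick1/Brick2 naming):

```shell
gluster volume create share replica 5 \
    gluster01:/export/share gluster02:/export/share \
    gluster03:/export/share gluster04:/export/share \
    gluster05:/export/share
gluster volume start share
```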
Hi Renee:
On Fri, Oct 22, 2010 at 1:33 PM, Renee Beckloff wrote:
> First, thanks to you all for always providing the feedback we need to
> continue to grow the functionality and stability of our storage platform
> and file system. Due to some unforeseen issues that were not produced
> during be
Hi Craig:
On Wed, Oct 20, 2010 at 5:18 PM, Craig Carl wrote:
> We haven't written installation documents using yum because we can't
> always be sure the repos are up to date and most users are more familiar
> with 'rpm -ivh'. I will investigate changing the installation documentation
> to use yum.
Hi Craig:
On Wed, Oct 20, 2010 at 3:52 PM, Craig Carl wrote:
> The RHEL/CentOS upgrade guide is here -
> http://www.gluster.com/community/documentation/index.php/Gluster_3.0_to_3.1_Upgrade_Guide
You shouldn't need to `rpm -e` the old RPMs if you are using yum,
since I have added the obsoletes
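The "obsoletes" mentioned is the standard RPM spec mechanism; a sketch of what such tags look like (package names and versions here are illustrative, not the actual gluster spec file):

```
# In the new package's spec: tells yum the old package is superseded,
# so it is removed automatically during 'yum update'.
Obsoletes: glusterfs-core < 3.1.0
Provides:  glusterfs-core = %{version}-%{release}
```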
Hi Jenn:
[Replying back to the mailing-list]
On Wed, Sep 22, 2010 at 3:06 PM, Jenn Fountain wrote:
> Yes, it only replicates subdirectories and all of their content, not the
> files.
>
> IE:
>
> [r...@stag02 upload]# ls -la
> total 1140
> drwxr-xr-x 69 xx xx 589824 Sep 22 14:42 .
> drwxrwxr
Hi Jenn:
On Wed, Sep 22, 2010 at 1:11 PM, Jenn Fountain wrote:
> I have a directory that is about 1 GB worth of JPG images. When I add a new
> server into the configuration, it starts to replicate everything but copies
> only a few files in this directory and then stops. I ultimately have t
Hi all:
I currently have 5 servers in a cluster/afr configuration and I have
mounted that volume on another server. What I want to do is set up
that mountpoint as a remote brick so that another server in a remote
location (over WAN) can mount it.
When I try to do this, I get the following error o
Hi guys:
On Mon, Sep 20, 2010 at 11:02 PM, Amar Tumballi wrote:
> Also check if your build msgs have the '-fstack-protector' flag set.. if yes,
> please build with 'CFLAGS="-fno-stack-protector"' and it should work
> fine.
Thanks for the suggestions, but turns out it wasn't stack-protection
b
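For reference, GCC spells the flag -fstack-protector / -fno-stack-protector; a sketch of rebuilding with it disabled:

```shell
./configure CFLAGS="-fno-stack-protector"
make
```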
Hi all:
I'm trying to mount a GlusterFS volume on openSUSE 11.3 x86_64 and it
crashed with:
*** buffer overflow detected ***: glusterfs terminated
in the logs. This is a replicated volume across 5 machines with cluster/afr.
Just wondering if anybody has experienced this before. I am planning
I just tested 3.0.5rc9 on RHEL6 Beta 2 x86_64 and it works just fine.
Perhaps you should post your .vol files (both client and server).
What commands did you use to mount?
Cheers,
Bernard
On Wed, Jul 7, 2010 at 3:54 PM, Nick wrote:
> Anyone get glusterfs working on RHEL 6 beta yet? 3.0.4 and
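For comparison, a 3.0.x-style client mount (volfile path illustrative) looked like either of these:

```shell
# Point the FUSE client at a client volfile directly:
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs
# or via mount(8):
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
```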
Hi all:
I have a simple cluster/afr setup but am having trouble mounting a
volume by retrieving the default volfile from the server via the
option "volume-filename.default".
Here are the volfiles:
[server]
volume posix
type storage/posix
option directory /export/gluster
end-volume
volume l
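For context, volume-filename.default is an option on the protocol/server volume naming the volfile handed to clients that don't request a specific one; a sketch (paths and names illustrative):

```
volume server
  type protocol/server
  option transport-type tcp
  # volfile served to clients that fetch the default volume file:
  option volume-filename.default /etc/glusterfs/client.vol
  subvolumes posix
end-volume
```

A client would then fetch it with something like `glusterfs --volfile-server=<server> /mnt/point`.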
Hi all:
I saw the thread "Netboot / PXE-Boot from glusterfs?" in the online
list archives and decided to subscribe to the mailing-list and share
some notes I have. Sorry for not being able to thread my reply in.
Anyway, I have recently added experimental support for GlusterFS to
Perceus, which i