On Wed, Mar 26, 2014 at 1:18 AM, Lalatendu Mohanty wrote:
> Hey JM,
>
> Two meetings every week would be overkill for me. Can we schedule this
> meeting once a month, or on alternate weeks with the current development
> meeting? We can start with once a month (maybe the first Wednesday of a
>
On 03/20/2014 07:22 PM, John Mark Walker wrote:
Greetings,
The development meetings have been a great success, and I'm very happy to see them.
Once upon a time, we also had some meetings where we discussed things like the web site,
docs, meetups and events, marketing and anything else tha
Hi,
There is a post some time ago about migrating data using "remove-brick".
http://www.gluster.org/pipermail/gluster-users/2012-October/034473.html
Is that approach reliable? Why is it still not officially documented?
I followed the instructions and ran the command on the bricks of a distributed volume.
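For what it's worth, the migration that post describes maps to this command sequence on 3.3 and later (volume and brick names below are placeholders):

```shell
# Placeholder names: volume "myvol", brick "server1:/export/brick1"
gluster volume remove-brick myvol server1:/export/brick1 start

# Repeat until the status column shows "completed" for this brick
gluster volume remove-brick myvol server1:/export/brick1 status

# Only after migration has completed, make the removal permanent
gluster volume remove-brick myvol server1:/export/brick1 commit
```

Committing before status shows "completed" is what typically loses files, so the status check in the middle is the important step.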
Ok, seems I need to create a blog tomorrow :D
IMHO you could also link to the PPA... No plan to remove it...
On 25 March 2014 19:31:01 CET, John Mark Walker wrote:
>If someone wants to put this on their blog, I'll make sure to syndicate
>on gluster.org. Hint, hint... ;)
>
>-JM
>
>
>- Origi
Hi all,
are there any docs for GlusterFS 3.4.2 and its API? I've been looking
around but cannot find anything. I saw a post with the same question for
GlusterFS 3.3, but the links there no longer exist.
Would anybody have any good links, tips?
v
--
Regards
Viktor Villafuerte
Optus Internet Engineering
Also see this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1073217
It may not be the cause of your problems, but it does bad things, and gluster
sees this as a 'crash' even with a graceful shutdown.
v
On Tue 25 Mar 2014 22:24:22, Carlos Capriotti wrote:
> Let's go with the data collection first.
>
> W
Let's go with the data collection first.
What Linux distro?
Anything special about your network configuration?
Any chance your server is taking too long to bring up networking, so gluster
starts before the network is ready?
Can you completely disable iptables and test again?
I am afraid qu
Hello!
I have 2 nodes with GlusterFS 3.4.2. I created one replicated volume using 2
bricks and enabled glusterd autostart. A firewall is also configured, and I
have to run "iptables -F" on the nodes after each reboot. It is clear that the
firewall could simply be disabled inside the LAN, but I'm interested in fixing my case.
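For reference, instead of flushing all rules after boot, the ports gluster 3.4 needs can be opened explicitly. A sketch, assuming default ports and an iptables chain you can append to:

```shell
# glusterd management ports
iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# brick ports (3.4 uses 49152 and up, one port per brick on the node)
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT
# portmapper, needed by gluster's built-in NFS server
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT
# gluster NFS helper ports (mountd/nlm) and the NFS port itself
iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
```

With rules like these persisted, the post-reboot "iptables -F" should no longer be necessary.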
Problem:
Resending it, as I suppose it was not delivered :(
Hi Ryan/Narayan,
Apologies for top posting.
After today's call, we had a chat with Vijay and he added a hint.
You need to enable remote-dio for the volume that serves as the
virt-store:
> gluster volume set <volname> remote-dio enable
Hope this coul
If someone wants to put this on their blog, I'll make sure to syndicate on
gluster.org. Hint, hint... ;)
-JM
- Original Message -
> On 03/25/2014 02:52 PM, André Bauer wrote:
> > On 25.03.2014 04:40, Lalatendu Mohanty wrote:
> >
> >> Yes, Gluster server and Samba server can be on diff
On 03/25/2014 02:52 PM, André Bauer wrote:
On 25.03.2014 04:40, Lalatendu Mohanty wrote:
Yes, Gluster server and Samba server can be on different servers.
Theoretically this should work. I think I have seen some
mails/configuration from community around it. In my test set-up, I had
kept glus
Hi,
I have configured NFS failover for my Gluster volumes using CTDB, which works
just fine when I specify one or more IP address on the same VLAN tagged bonded
interface like this:
# cat /etc/ctdb/public_addresses
192.168.4.4/26 bond0.123
192.168.4.5/26 bond0.123
When I specify addresses fr
Hugues:
I tend to agree with your disk setup for the system. I have that
myself and it is rock solid, and lightning fast.
For your 11 x 1 To (tera octets, eh? The French way), I'd tend to agree,
because those 7.2 Krpm disks feel a bit slow, but for the 11x600 GB 15
Krpm... I think that wi
Hi,
In an effort to reduce the number of connections between nodes (we have 20
right now and will bring up another 20 soon) - for now these are VMs for
testing and analysis.
Is single-process AFR officially supported? The gluster commands don't work
after following the steps for single pro
Hi all,
I can have a Dell PowerEdge server with 3 groups of RAID drives.
1st : 2x146Go in RAID1 for the system
2nd : 11x1To 7.2K RPM in RAID50 (with one spare)
3rd : 11x600Go 15K RPM in RAID50 (with one spare)
Can I make a kind of tiering with two gluster volumes, one for each speed of
dis
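GlusterFS 3.4 has no built-in tiering, but you can approximate it with one volume per disk group and mount them side by side. A sketch, with hypothetical brick paths for the two RAID50 arrays:

```shell
# Hypothetical brick paths on one server, one per RAID group
gluster volume create vol-slow server1:/bricks/raid50-7k2/brick
gluster volume create vol-fast server1:/bricks/raid50-15k/brick
gluster volume start vol-slow
gluster volume start vol-fast

# Clients then mount each volume where the speed is needed, e.g.:
# mount -t glusterfs server1:/vol-fast /srv/fast
# mount -t glusterfs server1:/vol-slow /srv/archive
```

Placement between the two "tiers" is then manual (or scripted), since nothing moves data between the volumes automatically.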
Hi,
I found this issue but don't have any information about it.
Gluster 3.4.1 on CentOS 6.4 has a problem migrating data when using
"remove-brick". It doesn't migrate data correctly. When I have two
subdirectories under each brick, running "remove-brick" with "start" migrates
one subdirector
Steve:
Tested that myself - not the Nagios part, but the gluster commands you
posted later - and no errors or zombies.
Somebody else reported the same, so, sounds consistent.
There must be another process there biting your gluster, turning it into a
haunted scenario.
Cheers,
Carlos
On Thu, M
Hi Carlos,
Yes. You are right. It was using version 4. When I changed it to version 3, it
worked :) .
Thanks
Kumar
From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sent: Tuesday, March 25, 2014 4:32 PM
To: Gnan Kumar, Yalla
Cc: gluster-users
Subject: Re: [Gluster-users] NFS
Your NF
Your NFS mount is trying to use version 4 by default.
Gluster's NFS server uses version 3.
Please, when mounting, use the option vers=3, and let us know.
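Concretely, with the export from earlier in the thread, the mount line would look like this:

```shell
# Force NFSv3 over TCP; gluster's built-in NFS server only speaks v3
mount -t nfs -o vers=3,proto=tcp 10.211.203.66:/gv0 /mnt/glusterfs
```

Without vers=3, modern clients negotiate v4 first and the mount hangs or times out against gluster.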
On Tue, Mar 25, 2014 at 10:52 AM, wrote:
> Hi All,
>
>
>
> This is the output I am getting:
>
> ---
>
> root@glu-client:/home/o
As Carlos suggested earlier, please verify that the native NFS server is not
running on your system.
If it is, then please stop the native NFS server and restart the gluster volume
(needed because of rpcbind).
Once the above prerequisites are met then mount nfs using the following command:
mount -
Hi All,
This is the output I am getting:
---
root@glu-client:/home/oss# mount -vvv -t nfs 10.211.203.66:/gv0 /mnt/glusterfs/
mount: fstab path: "/etc/fstab"
mount: mtab path: "/etc/mtab"
mount: lock path: "/etc/mtab~"
mount: temp path: "/etc/mtab.tmp"
mount: UID:0
mount: eUID:
Hi All,
I have installed glusterfs on two Ubuntu VMs. I am trying to access volumes
using NFS client from a glusterfs client. But the connection is being timed out.
-
root@glu-client:/home/oss# mount -t nfs -o proto=tcp,port=2049
10.211.203.66:/gv0 /mnt/glusterfs/
mount.nfs
On 25.03.2014 04:40, Lalatendu Mohanty wrote:
> Yes, Gluster server and Samba server can be on different servers.
> Theoretically this should work. I think I have seen some
> mails/configuration from community around it. In my test set-up, I had
> kept gluster and Samba on same server and haven
Feels like you have the native NFS server/service running on that Ubuntu
server as well and they are conflicting.
Make sure your native NFS is disabled. Gluster uses its own implementation.
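On Ubuntu that typically means stopping the kernel NFS server and keeping it from coming back at boot. A sketch, assuming the stock nfs-kernel-server package:

```shell
# Stop the kernel NFS server so it releases the NFS ports
service nfs-kernel-server stop
# Prevent it from starting again at boot
update-rc.d -f nfs-kernel-server remove
# rpcbind must keep running; restart the volume (gv0, from this
# thread) so gluster's NFS server re-registers with it
gluster volume stop gv0
gluster volume start gv0
```

Afterwards, "showmount -e localhost" on the server should list /gv0 in the export list.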
On Tue, Mar 25, 2014 at 9:18 AM, wrote:
> Hi Carlos,
>
>
>
>
>
> Thanks for the reply.
>
>
>
> These ar
Hi Carlos,
Thanks for the reply.
These are the results from server and client:
---
root@primary:/home/oss# showmount -e localhost
Export list for localhost:
root@primary:/home/oss#
root@glu-client:~# showmount -e 10.211.203.66
Export list for 10.211.203.66:
root@glu-client:~
No, NFS starts automatically, unless you disabled it in gluster when you
configured your volume.
on your gluster server(s) run:
showmount -e localhost
on your client (if unix) do the same with
showmount -e ip.of.gluster.server
also, make sure you don't have firewalls interfering.
And, just
Hi All,
I have installed glusterfs on two Ubuntu VMs. I am trying to access volumes
using NFS client from a glusterfs client. But the connection is being timed out.
-
root@glu-client:/home/oss# mount -t nfs -o proto=tcp,port=2049
10.211.203.66:/gv0 /mnt/glusterfs/
mount.nfs