Re: [Gluster-users] Data on gluster volume gone

2018-09-19 Thread Raghavendra Gowdappa
On Thu, Sep 20, 2018 at 1:29 AM, Raghavendra Gowdappa 
wrote:

> Can you give volume info? Looks like you are using 2 way replica.
>

Yes indeed.
gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0
gfs02:/glusterdata/brick2/gvol0

+Pranith. +Ravi.

Not sure whether 2-way replication has caused this. From what I understand,
we need either 3-way replication or an arbiter for correct resolution of heals.
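For reference, an arbiter variant of the same volume would look roughly like
this (the third host gfs03 and its brick path are purely illustrative, not
taken from your setup):

  gluster volume create gvol0 replica 3 arbiter 1 gfs01:/glusterdata/brick1/gvol0 \
    gfs02:/glusterdata/brick2/gvol0 gfs03:/glusterdata/brick3/gvol0

The arbiter brick stores only metadata, so it keeps quorum without a third
full copy of the data.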


> On Wed, Sep 19, 2018 at 9:39 AM, Johan Karlsson 
> wrote:
>
>> I have two servers setup with glusterFS in replica mode, a single volume
>> exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS
>>
>> After a package upgrade + reboot of both servers, it was discovered that
>> the data was completely gone. New data written on the volume via the
>> mountpoint is replicated correctly, and gluster status/info commands states
>> that everything is ok (no split brain scenario or any healing needed etc).
>> But the previous data is completely gone, not even present on any of the
>> bricks.
>>
>> The following upgrade was done:
>>
>> glusterfs-server:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>> glusterfs-client:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>> glusterfs-common:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>>
>> The logs only show that connection between the servers was lost, which is
>> expected.
>>
>> I can't even determine if it was the package upgrade or the reboot that
>> caused this issue, but I've tried to recreate the issue without success.
>>
>> Any idea what could have gone wrong, or if I have done some wrong during
>> the setup. For reference, this is how I've done the setup:
>>
>> ---
>> Add a separate disk with a single partition on both servers (/dev/sdb1)
>>
>> Add gfs hostnames for direct communication without DNS, on both servers:
>>
>> /etc/hosts
>>
>> 192.168.4.45    gfs01
>> 192.168.4.46    gfs02
>>
>> On gfs01, create a new LVM Volume Group:
>>   vgcreate gfs01-vg /dev/sdb1
>>
>> And on the gfs02:
>>   vgcreate gfs02-vg /dev/sdb1
>>
>> Create logical volumes named "brick" on the servers:
>>
>> gfs01:
>>   lvcreate -l 100%VG -n brick1 gfs01-vg
>> gfs02:
>>   lvcreate -l 100%VG -n brick2 gfs02-vg
>>
>> Format the volumes with ext4 filesystem:
>>
>> gfs01:
>>   mkfs.ext4 /dev/gfs01-vg/brick1
>> gfs02:
>>   mkfs.ext4 /dev/gfs02-vg/brick2
>>
>> Create a mountpoint for the bricks on the servers:
>>
>> gfs01:
>>   mkdir -p /glusterdata/brick1
>> gfs02:
>>   mkdir -p /glusterdata/brick2
>>
>> Make a permanent mount on the servers:
>>
>> gfs01:
>> /dev/gfs01-vg/brick1  /glusterdata/brick1  ext4  defaults  0 0
>> gfs02:
>> /dev/gfs02-vg/brick2  /glusterdata/brick2  ext4  defaults  0 0
>>
>> Mount it:
>>   mount -a
>>
>> Create a gluster volume mount point on the bricks on the servers:
>>
>> gfs01:
>>   mkdir -p /glusterdata/brick1/gvol0
>> gfs02:
>>   mkdir -p /glusterdata/brick2/gvol0
>>
>> From each server, peer probe the other one:
>>
>>   gluster peer probe gfs01
>> peer probe: success
>>
>>   gluster peer probe gfs02
>> peer probe: success
>>
>> From any single server, create the gluster volume as a "replica" with two
>> nodes; gfs01 and gfs02:
>>
>>   gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0
>> gfs02:/glusterdata/brick2/gvol0
>>
>> Start the volume:
>>
>>   gluster volume start gvol0
>>
>> On each server, mount the gluster filesystem on the /filestore mount
>> point:
>>
>> gfs01:
>>   mount -t glusterfs gfs01:/gvol0 /filestore
>> gfs02:
>>   mount -t glusterfs gfs02:/gvol0 /filestore
>>
>> Make the mount permanent on the servers:
>>
>> /etc/fstab
>>
>> gfs01:
>>   gfs01:/gvol0 /filestore glusterfs defaults,_netdev 0 0
>> gfs02:
>>   gfs02:/gvol0 /filestore glusterfs defaults,_netdev 0 0
>> ---
>>
>> Regards,
>>
>> Johan Karlsson
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Data on gluster volume gone

2018-09-19 Thread Raghavendra Gowdappa
Can you give the volume info? Looks like you are using a 2-way replica.
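Something like the following from either node would do (gvol0 is the volume
name I am assuming from your setup notes):

  gluster volume info gvol0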

On Wed, Sep 19, 2018 at 9:39 AM, Johan Karlsson 
wrote:

> I have two servers setup with glusterFS in replica mode, a single volume
> exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS
>
> After a package upgrade + reboot of both servers, it was discovered that
> the data was completely gone. New data written on the volume via the
> mountpoint is replicated correctly, and gluster status/info commands states
> that everything is ok (no split brain scenario or any healing needed etc).
> But the previous data is completely gone, not even present on any of the
> bricks.
>
> The following upgrade was done:
>
> glusterfs-server:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
> glusterfs-client:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
> glusterfs-common:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
>
> The logs only show that connection between the servers was lost, which is
> expected.
>
> I can't even determine if it was the package upgrade or the reboot that
> caused this issue, but I've tried to recreate the issue without success.
>
> Any idea what could have gone wrong, or if I have done some wrong during
> the setup. For reference, this is how I've done the setup:
>
> ---
> Add a separate disk with a single partition on both servers (/dev/sdb1)
>
> Add gfs hostnames for direct communication without DNS, on both servers:
>
> /etc/hosts
>
> 192.168.4.45    gfs01
> 192.168.4.46    gfs02
>
> On gfs01, create a new LVM Volume Group:
>   vgcreate gfs01-vg /dev/sdb1
>
> And on the gfs02:
>   vgcreate gfs02-vg /dev/sdb1
>
> Create logical volumes named "brick" on the servers:
>
> gfs01:
>   lvcreate -l 100%VG -n brick1 gfs01-vg
> gfs02:
>   lvcreate -l 100%VG -n brick2 gfs02-vg
>
> Format the volumes with ext4 filesystem:
>
> gfs01:
>   mkfs.ext4 /dev/gfs01-vg/brick1
> gfs02:
>   mkfs.ext4 /dev/gfs02-vg/brick2
>
> Create a mountpoint for the bricks on the servers:
>
> gfs01:
>   mkdir -p /glusterdata/brick1
> gfs02:
>   mkdir -p /glusterdata/brick2
>
> Make a permanent mount on the servers:
>
> gfs01:
> /dev/gfs01-vg/brick1  /glusterdata/brick1  ext4  defaults  0 0
> gfs02:
> /dev/gfs02-vg/brick2  /glusterdata/brick2  ext4  defaults  0 0
>
> Mount it:
>   mount -a
>
> Create a gluster volume mount point on the bricks on the servers:
>
> gfs01:
>   mkdir -p /glusterdata/brick1/gvol0
> gfs02:
>   mkdir -p /glusterdata/brick2/gvol0
>
> From each server, peer probe the other one:
>
>   gluster peer probe gfs01
> peer probe: success
>
>   gluster peer probe gfs02
> peer probe: success
>
> From any single server, create the gluster volume as a "replica" with two
> nodes; gfs01 and gfs02:
>
>   gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0
> gfs02:/glusterdata/brick2/gvol0
>
> Start the volume:
>
>   gluster volume start gvol0
>
> On each server, mount the gluster filesystem on the /filestore mount point:
>
> gfs01:
>   mount -t glusterfs gfs01:/gvol0 /filestore
> gfs02:
>   mount -t glusterfs gfs02:/gvol0 /filestore
>
> Make the mount permanent on the servers:
>
> /etc/fstab
>
> gfs01:
>   gfs01:/gvol0 /filestore glusterfs defaults,_netdev 0 0
> gfs02:
>   gfs02:/gvol0 /filestore glusterfs defaults,_netdev 0 0
> ---
>
> Regards,
>
> Johan Karlsson
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Data on gluster volume gone

2018-09-19 Thread Johan Karlsson
I have two servers set up with GlusterFS in replica mode, with a single volume
exposed via a mountpoint. The servers are running Ubuntu 16.04 LTS.

After a package upgrade + reboot of both servers, it was discovered that the
data was completely gone. New data written on the volume via the mountpoint is
replicated correctly, and the gluster status/info commands state that everything
is ok (no split-brain scenario or any healing needed, etc.). But the previous
data is completely gone, not even present on any of the bricks.

The following upgrade was done:

glusterfs-server:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
glusterfs-client:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)
glusterfs-common:amd64 (4.1.0-ubuntu1~xenial3 -> 4.1.4-ubuntu1~xenial1)

The logs only show that connection between the servers was lost, which is 
expected.

I can't even determine if it was the package upgrade or the reboot that caused 
this issue, but I've tried to recreate the issue without success.

Any idea what could have gone wrong, or whether I have done something wrong
during the setup? For reference, this is how I did the setup:

---
Add a separate disk with a single partition on both servers (/dev/sdb1)

Add gfs hostnames for direct communication without DNS, on both servers:

/etc/hosts

192.168.4.45    gfs01
192.168.4.46    gfs02

On gfs01, create a new LVM Volume Group:
  vgcreate gfs01-vg /dev/sdb1

And on gfs02:
  vgcreate gfs02-vg /dev/sdb1

Create logical volumes named "brick1" and "brick2" on the servers:
  
gfs01:
  lvcreate -l 100%VG -n brick1 gfs01-vg
gfs02:
  lvcreate -l 100%VG -n brick2 gfs02-vg

Format the volumes with ext4 filesystem:

gfs01:
  mkfs.ext4 /dev/gfs01-vg/brick1
gfs02:
  mkfs.ext4 /dev/gfs02-vg/brick2

Create a mountpoint for the bricks on the servers:

gfs01:
  mkdir -p /glusterdata/brick1
gfs02:
  mkdir -p /glusterdata/brick2

Make a permanent mount on the servers:

gfs01:
/dev/gfs01-vg/brick1  /glusterdata/brick1  ext4  defaults  0 0
gfs02:
/dev/gfs02-vg/brick2  /glusterdata/brick2  ext4  defaults  0 0

Mount it:
  mount -a

Create a gluster volume mount point on the bricks on the servers:

gfs01:
  mkdir -p /glusterdata/brick1/gvol0
gfs02:
  mkdir -p /glusterdata/brick2/gvol0

From each server, peer probe the other one:

  gluster peer probe gfs01
peer probe: success

  gluster peer probe gfs02
peer probe: success

From any single server, create the gluster volume as a "replica" with two
nodes; gfs01 and gfs02:

  gluster volume create gvol0 replica 2 gfs01:/glusterdata/brick1/gvol0 
gfs02:/glusterdata/brick2/gvol0

Start the volume:

  gluster volume start gvol0

On each server, mount the gluster filesystem on the /filestore mount point:

gfs01:
  mount -t glusterfs gfs01:/gvol0 /filestore
gfs02:
  mount -t glusterfs gfs02:/gvol0 /filestore

Make the mount permanent on the servers:

/etc/fstab

gfs01:
  gfs01:/gvol0 /filestore glusterfs defaults,_netdev 0 0
gfs02:
  gfs02:/gvol0 /filestore glusterfs defaults,_netdev 0 0
---

Regards,

Johan Karlsson
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Mountpoint attendee survey

2018-09-19 Thread Amye Scavarda
Hi there!
If you attended mountpoint, August 27-28 in Vancouver, BC, we have a survey
out for feedback.
https://docs.google.com/forms/d/e/1FAIpQLSfp2VqFX5I6AgKb6hOLjjQTjHR6iaCbexOAFl8I0bNJF-K7ow/viewform


We'll use this to help plan future mountpoint events.
Thanks!
- amye

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] vfs_gluster broken

2018-09-19 Thread Anoop C S
On Wed, 2018-09-12 at 10:37 -0600, Terry McGuire wrote:
> > Can you please attach the output of `testparm -s` so as to look through how 
> > Samba is setup?

I have a setup where I could browse and work with a GlusterFS volume share made 
available to Windows
via vfs_glusterfs module on CentOS 7.5.1804 with glusterfs-3.10.12-1.el7 and 
samba-4.7.1-9.el7_5.

What am I missing? Is there any specific operation that leads to the abnormal 
behaviour?

> From our test server (“nomodule-nofruit” is currently the only well-behaved 
> share):
> 
> root@mfsuat-01 ~]#testparm -s
> Load smb config files from /etc/samba/smb.conf
> rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
> Processing section "[share1]"
> Processing section "[share2]"
> Processing section "[nomodule]"
> Processing section "[nomodule-nofruit]"
> Processing section "[module]"
> Processing section "[IPC$]"
> WARNING: No path in service IPC$ - making it unavailable!
> NOTE: Service IPC$ is flagged unavailable.

On an unrelated note:

I don't think your intention to make [IPC$] unavailable using the 'available' 
parameter would work
at all. 

> Loaded services file OK.
> idmap range not specified for domain '*'
> ERROR: Invalid idmap range for domain *!

On an unrelated note:

Why haven't you specified a range for the default ('*') domain? I think it is a 
must to set a range for the default configuration.
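For example, a minimal default-domain block would be something like this (the
range values below are only placeholders; pick a range that does not overlap
your AD domain ranges):

  idmap config * : backend = tdb
  idmap config * : range = 3000-7999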

> WARNING: You have some share names that are longer than 12 characters.
> These may not be accessible to some older clients.
> (Eg. Windows9x, WindowsMe, and smbclient prior to Samba 3.0.)
> WARNING: some services use vfs_fruit, others don't. Mounting them in 
> conjunction on OS X clients
> results in undefined behaviour.
> 
> Server role: ROLE_DOMAIN_MEMBER
> 
> # Global parameters
> [global]
>   log file = /var/log/samba/log.%m
>   map to guest = Bad User
>   max log size = 50
>   realm = .AD.UALBERTA.CA
>   security = ADS
>   workgroup = STS
>   glusterfs:volume = mfs1
>   idmap config * : backend = tdb
>   access based share enum = Yes
>   force create mode = 0777
>   force directory mode = 0777
>   include = /mfsmount/admin/etc/mfs/smb_shares.conf
>   kernel share modes = No
>   read only = No
>   smb encrypt = desired
>   vfs objects = glusterfs
> [share1]
>   path = /share1
>   valid users = @mfs-...@.ad.ualberta.ca
> [share2]
>   path = /share2
>   valid users = @mfs-test-gr...@.ad.ualberta.ca

Oh, you are sharing sub-directories, which is also fine.

> [nomodule]
>   kernel share modes = Yes
>   path = /mfsmount/share1
>   valid users = @mfs-...@.ad.ualberta.ca
>   vfs objects = fruit streams_xattr

Interesting..

Even this FUSE-mounted GlusterFS share is not behaving normally? What errors do 
you see in the glusterfs fuse mount log (/var/log/glusterfs/mfsmount-.log) while 
accessing this share?

> > 
> [nomodule-nofruit]
>   kernel share modes = Yes
>   path = /mfsmount/share1
>   valid users = @mfs-...@.ad.ualberta.ca
>   vfs objects = 
> 
> 
> [module]
>   path = /share1
>   valid users = @mfs-...@.ad.ualberta.ca
> [IPC$]
>   available = No
>   vfs objects = 

You may remove the whole [IPC$] section.

> > > My gluster version initially was 3.10.12.  I’ve since updated to gluster 
> > > 3.12.13, but the
> > > symptoms
> > > are the same.
> > > 
> > > Does this sound familiar to anyone?
> > 
> > All mentioned symptoms point towards a disconnection. We need to find out 
> > the origin of this
> > disconnection. What do we have in logs under /var/log/samba/? Any errors?
> 
> Actually, yes.  Large numbers of:
> 
> [2018/09/12 09:37:17.873711,  0] 
> ../source3/modules/vfs_glusterfs.c:996(vfs_gluster_stat)
>   glfs_stat(.) failed: Invalid argument
> 
> There appears to be some sort of connection remaining, as I can continue to 
> cause these errors in
> the server log by attempting I/O with the share.
> 
> This seems like the most promising lead to find the root cause.  Hopefully 
> you (or someone) can
> interpret what it means, and what I might do about it (besides not using 
> vfs_gluster anymore).
> 
> Regards,
> Terry

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.12.12: performance during heal and in general

2018-09-19 Thread Hu Bert
Hi Pranith,

I recently upgraded to version 3.12.14; still no change in
load/performance. Have you received any feedback?

At the moment I have 3 options:
- the problem can be fixed within version 3.12
- upgrade to 4.1 and magically/hopefully "fix" the problem (might not
help if the problem is within the brick process)
- replace glusterfs with $whatever (defeat... :-( )
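In case it helps narrow this down, here is a rough sketch of what I could run
to compare per-brick load (standard volume profile commands; "shared" is the
volume name from the status output quoted below):

  gluster volume profile shared start
  gluster volume profile shared info
  gluster volume profile shared stop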

thx
Hubert


2018-09-03 7:55 GMT+02:00 Pranith Kumar Karampuri :
>
>
> On Fri, Aug 31, 2018 at 1:18 PM Hu Bert  wrote:
>>
>> Hi Pranith,
>>
>> i just wanted to ask if you were able to get any feedback from your
>> colleagues :-)
>
>
> Sorry, I didn't get a chance to. I am working on a customer issue which is
> taking away cycles from any other work. Let me get back to you once I get
> time this week.
>
>>
>>
>> btw.: we migrated some stuff (static resources, small files) to a nfs
>> server that we actually wanted to replace by glusterfs. Load and cpu
>> usage has gone down a bit, but still is asymmetric on the 3 gluster
>> servers.
>>
>>
>> 2018-08-28 9:24 GMT+02:00 Hu Bert :
>> > Hm, i noticed that in the shared.log (volume log file) on gluster11
>> > and gluster12 (but not on gluster13) i now see these warnings:
>> >
>> > [2018-08-28 07:18:57.224367] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 3054593291
>> > [2018-08-28 07:19:17.733625] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 2595205890
>> > [2018-08-28 07:19:27.950355] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 3105728076
>> > [2018-08-28 07:19:42.519010] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 3740415196
>> > [2018-08-28 07:19:48.194774] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 2922795043
>> > [2018-08-28 07:19:52.506135] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 2841655539
>> > [2018-08-28 07:19:55.466352] W [MSGID: 109011]
>> > [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for
>> > hash (value) = 3049465001
>> >
>> > Don't know if that could be related.
>> >
>> >
>> > 2018-08-28 8:54 GMT+02:00 Hu Bert :
>> >> a little update after about 2 hours of uptime: still/again high cpu
>> >> usage by one brick processes. server load >30.
>> >>
>> >> gluster11: high cpu; brick /gluster/bricksdd1/; no hdd exchange so far
>> >> gluster12: normal cpu; brick /gluster/bricksdd1_new/; hdd change
>> >> /dev/sdd
>> >> gluster13: high cpu; brick /gluster/bricksdd1_new/; hdd change /dev/sdd
>> >>
>> >> The process for brick bricksdd1 consumes almost all 12 cores.
>> >> Interestingly there are more threads for the bricksdd1 process than
>> >> for the other bricks. Counted with "ps huH p  | wc
>> >> -l"
>> >>
>> >> gluster11:
>> >> bricksda1 59 threads, bricksdb1 65 threads, bricksdc1 68 threads,
>> >> bricksdd1 85 threads
>> >> gluster12:
>> >> bricksda1 65 threads, bricksdb1 60 threads, bricksdc1 61 threads,
>> >> bricksdd1_new 58 threads
>> >> gluster13:
>> >> bricksda1 61 threads, bricksdb1 60 threads, bricksdc1 61 threads,
>> >> bricksdd1_new 82 threads
>> >>
>> >> Don't know if that could be relevant.
>> >>
>> >> 2018-08-28 7:04 GMT+02:00 Hu Bert :
>> >>> Good Morning,
>> >>>
>> >>> today i update + rebooted all gluster servers, kernel update to
>> >>> 4.9.0-8 and gluster to 3.12.13. Reboots went fine, but on one of the
>> >>> gluster servers (gluster13) one of the bricks did come up at the
>> >>> beginning but then lost connection.
>> >>>
>> >>> OK:
>> >>>
>> >>> Status of volume: shared
>> >>> Gluster process TCP Port  RDMA Port
>> >>> Online  Pid
>> >>>
>> >>> --
>> >>> [...]
>> >>> Brick gluster11:/gluster/bricksdd1/shared 49155 0
>> >>> Y   2506
>> >>> Brick gluster12:/gluster/bricksdd1_new/shared49155 0
>> >>> Y   2097
>> >>> Brick gluster13:/gluster/bricksdd1_new/shared49155 0
>> >>> Y   2136
>> >>>
>> >>> Lost connection:
>> >>>
>> >>> Brick gluster11:/gluster/bricksdd1/shared  49155 0
>> >>>  Y   2506
>> >>> Brick gluster12:/gluster/bricksdd1_new/shared 49155 0
>> >>> Y   2097
>> >>> Brick gluster13:/gluster/bricksdd1_new/shared N/A   N/A
>> >>> N   N/A
>> >>>
>> >>> gluster volume heal shared info:
>> >>> Brick gluster13:/gluster/bricksdd1_new/shared
>> >>> Status: Transport endpoint is not connected
>> >>> Number of entries: -
>> >>>
>> >>> reboot was at 06:15:39; brick then worked for a short period, but then
>> >>> somehow disconnected.
>> >>>
>> >>> from gluster13:/var/log/glusterfs/glusterd.log:
>> >>>
>> >>> [2018-08-28 04:27:36.944608] I [MSGID: 106005]
>> >>> 

Re: [Gluster-users] Glusterfs removal issue

2018-09-19 Thread Anoop C S
On Wed, 2018-09-19 at 16:54 +0530, aneesh mathai wrote:
> yea glusterfs 6dev
> i did source install...

'dnf' is the package management tool on Fedora; it does not manage files 
installed from source. You have to manually run `make uninstall` to remove 
glusterfs completely.
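Roughly (the path below is only a placeholder for wherever you originally
built from):

  cd /path/to/glusterfs-source    # the tree you ran 'make install' from
  sudo make uninstall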

> 
> On Wed, Sep 19, 2018, 4:37 PM Anoop C S  wrote:
> > On Mon, 2018-09-17 at 21:44 +0530, aneesh mathai wrote:
> > > I want to remove glusterfs  from my system how can i remove it. i tried 
> > > "dnf remove glusterfs"
> > .
> > > It shows that the command exicuted successfully but still the gluster 
> > > remains.
> > 
> > glusterfs 6dev ??
> > 
> > seems like a `gluster` binary from previous source installation. Is that 
> > the case?
> > 
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > 

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs removal issue

2018-09-19 Thread Anoop C S
On Mon, 2018-09-17 at 21:44 +0530, aneesh mathai wrote:
> I want to remove glusterfs  from my system how can i remove it. i tried "dnf 
> remove glusterfs" .
> It shows that the command exicuted successfully but still the gluster remains.

glusterfs 6dev ??

Seems like a `gluster` binary from a previous source installation. Is that the 
case?

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] vfs_gluster for samba ?

2018-09-19 Thread Laurent Bardi
On 19/09/2018 at 09:24, Anoop C S wrote:
> On Wed, 2018-09-19 at 08:25 +0200, Laurent Bardi wrote:
>> Hi,
>>
>> Sorry if it has bee discussed, but i ve spend more than 2 hours on
>> goolgle to find answers, and nothing.
>>
>> i am on debian stretch and i use gluster from  your native repos.
>>
>> gluster version 4.1.4, samba version 4.5.12+dfsg-2+deb9u3 (the one from
>> native debian)
>>
>> The gluster vfs is not compiled by default with debian , everytime, i
>> had to download the deb src and recompile it(apt-get source samba, then
>> debuild -us -uc)
>>
>> after that the new samba-vfs-modulesxxx.deb contains the gluster_vfs.so.
>>
>> A) it doesnt work anymore (no glusterfs.so after rebuilding packages). I
>> ve read the log of the configure script it contains "gluster detected
>> api version 4" ?
> Further down the line were you able to read the logs to find those lines 
> which compiles
> vfs_glusterfs.c source?

in configure:

Checking for glusterfs-api >= 4 :
  16:07:16 runner /usr/bin/pkg-config "glusterfs-api >= 4" --cflags --libs glusterfs-api
  yes

Checking for header api/glfs.h :
  16:07:16 runner /usr/bin/gcc -g -O2
  -fdebug-prefix-map=/usr/local/archives/compile/samba/samba-4.5.12+dfsg=.
  -fstack-protector-strong -Wformat -Werror=format-security -MD
  -Wdate-time -D_FORTIFY_SOURCE=2 -I/usr/local/include -I/usr/include/uuid
  -D_SAMBA_BUILD_=4 -DHAVE_CONFIG_H=1 -D_GNU_SOURCE=1
  -D_XOPEN_SOURCE_EXTENDED=1 -D_FILE_OFFSET_BITS=64 -D__USE_FILE_OFFSET64
  -DUSE_POSIX_ACLS=1 ../test.c -c -o default/test_1.o
  no

Checking for library gfapi :
  16:07:16 runner /usr/bin/gcc -g -O2
  -fdebug-prefix-map=/usr/local/archives/compile/samba/samba-4.5.12+dfsg=.
  -fstack-protector-strong -Wformat -Werror=format-security -MD -fPIC
  -DPIC -Wdate-time -D_FORTIFY_SOURCE=2 -I/usr/local/include
  -I/usr/include/uuid -D_SAMBA_BUILD_=4 -DHAVE_CONFIG_H=1 -D_GNU_SOURCE=1
  -D_XOPEN_SOURCE_EXTENDED=1 -D_FILE_OFFSET_BITS=64 -D__USE_FILE_OFFSET64
  -DUSE_POSIX_ACLS=1 ../test.c -c -o default/test_1.o
  16:07:16 runner /usr/bin/gcc default/test_1.o -o
  /usr/local/archives/compile/samba/samba-4.5.12+dfsg/bin/.conf_check_0/testbuild/default/libtestprog.so
  -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -lpthread -Wl,-no-undefined
  -Wl,--export-dynamic -shared -L/usr/local/lib -Wl,-Bdynamic -lgfapi
  -lacl -lglusterfs -lgfrpc -lgfxdr -luuid
  yes

It seems that glfs.h is not found, but an `apt-file search glfs.h` finds it
at: glusterfs-common: /usr/include/glusterfs/api/glfs.h

After that there is only a reference to the gluster_vfs manpage; it does
not seem to compile anything.
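A quick way to compare what pkg-config hands out with what the samba configure
probe tries (the gcc one-liner below is only an illustration of the same
include test, not the exact probe):

  pkg-config --cflags --libs glusterfs-api
  echo '#include <api/glfs.h>' | gcc -x c -E $(pkg-config --cflags glusterfs-api) - >/dev/null && echo "header ok"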

I suspect a problem with the samba package? (In that case I am asking the
wrong list :-})

Many thanks for your answers.
PS: I can send you the log; I don't want to bother the list.

>
>> B) in the doc i don t see anymore the use of
>>
>>  vfs objects = glusterfs
>> glusterfs:volfile_server = 
>> glusterfs:volume = 
>> glusterfs:logfile = 
>>
>> Is there any change i ve missed ?
> Hm.. That's interesting. I will get the documentation fixed to reflect the 
> usage of VFS module for
> GlusterFS with Samba.
>
>

-- 
Laurent BARDI /  RSI CNRS-IPBS / CRSSI DR14
INSTITUT  de PHARMACOLOGIE et de BIOLOGIE STRUCTURALE
Tel : 05-61-17-59-05 http://www.ipbs.fr/
GSM : 06-23-46-06-28    Laurent.BardiATipbs.fr
CNRS-IPBS 205 Route de Narbonne 31400 TOULOUSE FRANCE
...
I was undeniably a misanthrope.
I set out to ford a swamp infested with imbeciles.
When I reached the other bank, I had become a philanthropist.

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] vfs_gluster for samba ?

2018-09-19 Thread Anoop C S
On Wed, 2018-09-19 at 08:25 +0200, Laurent Bardi wrote:
> Hi,
> 
> Sorry if it has bee discussed, but i ve spend more than 2 hours on
> goolgle to find answers, and nothing.
> 
> i am on debian stretch and i use gluster from  your native repos.
> 
> gluster version 4.1.4, samba version 4.5.12+dfsg-2+deb9u3 (the one from
> native debian)
> 
> The gluster vfs is not compiled by default with debian , everytime, i
> had to download the deb src and recompile it(apt-get source samba, then
> debuild -us -uc)
> 
> after that the new samba-vfs-modulesxxx.deb contains the gluster_vfs.so.
> 
> A) it doesnt work anymore (no glusterfs.so after rebuilding packages). I
> ve read the log of the configure script it contains "gluster detected
> api version 4" ?

Further down the line, were you able to read the logs and find the lines where 
the vfs_glusterfs.c source gets compiled?

> B) in the doc i don t see anymore the use of
> 
>  vfs objects = glusterfs
> glusterfs:volfile_server = 
> glusterfs:volume = 
> glusterfs:logfile = 
> 
> Is there any change i ve missed ?

Hm, that's interesting. I will get the documentation fixed to reflect the usage 
of the VFS module for GlusterFS with Samba.
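In the meantime, the share-level parameters are still along these lines (a
minimal sketch; the share name, volume name, server and log path below are
just placeholders):

[gluster-share]
  vfs objects = glusterfs
  glusterfs:volfile_server = localhost
  glusterfs:volume = gvol0
  glusterfs:logfile = /var/log/samba/glusterfs-gvol0.%M.log
  kernel share modes = no
  path = /
  read only = no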


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] vfs_gluster for samba ?

2018-09-19 Thread Laurent Bardi
Hi,

Sorry if this has been discussed, but I've spent more than 2 hours on
Google looking for answers, and found nothing.

I am on Debian stretch and I use gluster from your native repos.

gluster version 4.1.4, samba version 4.5.12+dfsg-2+deb9u3 (the one from
native Debian)

The gluster VFS module is not compiled by default on Debian; every time, I
have had to download the deb source and recompile it (apt-get source samba,
then debuild -us -uc).

After that the new samba-vfs-modulesxxx.deb contains the gluster_vfs.so.

A) It doesn't work anymore (no glusterfs.so after rebuilding the packages).
I've read the log of the configure script; it contains "gluster detected
api version 4"?

B) In the docs I no longer see any mention of:

 vfs objects = glusterfs
glusterfs:volfile_server = 
glusterfs:volume = 
glusterfs:logfile = 

Is there any change I've missed?

-- 
Laurent BARDI /  RSI CNRS-IPBS / CRSSI DR14
INSTITUT  de PHARMACOLOGIE et de BIOLOGIE STRUCTURALE
Tel : 05-61-17-59-05 http://www.ipbs.fr/
GSM : 06-23-46-06-28    Laurent.BardiATipbs.fr
CNRS-IPBS 205 Route de Narbonne 31400 TOULOUSE FRANCE
...
I was undeniably a misanthrope.
I set out to ford a swamp infested with imbeciles.
When I reached the other bank, I had become a philanthropist.

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] sharding in glusterfs

2018-09-19 Thread Ashayam Gupta
Please find our workload details as requested:

* Only 1 write mount point as of now.
* Read mounts: since we auto-scale our machines, this can be as many as
300-400 machines during peak times.
* "multiple concurrent reads means that Reads will not happen until the
file is completely written to": yes, in our current scenario we can ensure
that this is indeed the case.

But when you say it only supports a single-writer workload, we would like to
understand the following scenarios with respect to multiple writers and the
current behaviour of glusterfs with sharding:

   - Multiple writers write to different files
   - Multiple writers write to the same file
  - they write to the same file but to different shards of that file
  - they write to the same file (no guarantee whether they write to
  different shards)

There might be some more cases known to you; it would be helpful if you could
describe those scenarios to us as well, or point us to the relevant documents.
Also, it would be helpful if you could suggest the most stable version of
glusterfs with the sharding feature to use, since we would like to use this in
production.
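For context, this is the kind of setting we were planning to start testing
with (a sketch only; the volume name is a placeholder and the block size is
just an initial guess on our side):

  gluster volume set <volname> features.shard on
  gluster volume set <volname> features.shard-block-size 64MB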

Thanks
Ashayam Gupta

On Tue, Sep 18, 2018 at 11:00 AM Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Mon, Sep 17, 2018 at 4:14 AM Ashayam Gupta <
> ashayam.gu...@alpha-grep.com> wrote:
>
>> Hi All,
>>
>> We are currently using glusterfs for storing large files with write-once
>> and multiple concurrent reads, and were interested in understanding one of
>> the features of glusterfs called sharding for our use case.
>>
>> So far from the talk given by the developer [
>> https://www.youtube.com/watch?v=aAlLy9k65Gw] and the git issue [
>> https://github.com/gluster/glusterfs/issues/290] , we know that it was
>> developed for large VM images as use case and the second link does talk
>> about a more general purpose usage , but we are not clear if there are some
>> issues if used for non-VM image large files [which is the use case for us].
>>
>> Therefore it would be helpful if we can have some pointers or more
>> information about the more general use-case scenario for sharding and any
>> shortcomings if any , in case we use it for our scenario which is non-VM
>> large files with write-once and multiple concurrent reads.Also it would be
>> very helpful if you can suggest the best approach/settings for our use case
>> scenario.
>>
>
> Sharding is developed for Big file usecases and at the moment only
> supports single writer workload. I also added the maintainers for sharding
> to the thread. May be giving a bit of detail about access pattern w.r.t.
> number of mounts that are used for writing/reading would be helpful. I am
> assuming write-once and multiple concurrent reads means that Reads will not
> happen until the file is completely written to. Could you explain  a bit
> more about the workload?
>
>
>>
>> Thanks
>> Ashayam Gupta
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Pranith
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users