[Gluster-users] Announcing Glusterfs release 3.12.1 (Long Term Maintenance)

2017-09-13 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.1 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry one major issue, which is reported in the release notes as follows:


- Expanding a gluster volume that is sharded may cause file corruption

Sharded volumes are typically used for VM images; if such volumes are 
expanded or possibly contracted (i.e. bricks are added/removed and the 
volume rebalanced), there are reports of VM images getting corrupted.


The last known cause for corruption (Bug #1465123) has a fix in 
this release. As further testing is still in progress, the issue is 
retained as a major issue.
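
For anyone unsure whether a volume uses sharding, a quick check before 
adding or removing bricks (just a sketch; substitute your volume name for 
VOLNAME):

$ gluster volume get VOLNAME features.shard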


The status of this bug can be tracked here: Bug #1465123


Thanks,
Gluster community

[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.1/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs

[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.1/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-13 Thread Sam McLeod
Hi Serkan,

I was wondering if you resolved your issue with the high CPU usage and hang 
after starting gluster?

I'm setting up a 3-server (replica 3, arbiter 1), 300-volume Gluster 3.12 
cluster on CentOS 7 and am having what looks to be exactly the same issue as 
you.

With no volumes created, CPU usage / load is normal, but after creating all the 
volumes, even with no data, CPU and RAM usage keep creeping up and the logs 
fill up with:

[2017-09-14 05:47:45.447772] E [client_t.c:324:gf_client_ref] 
(-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7fe3f2a1b7e8] 
-->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7fe3f2a1893f] 
-->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7fe3f2cb2e59] ) 0-client_t: 
null client [Invalid argument]
[2017-09-14 05:47:45.486593] E [client_t.c:324:gf_client_ref] 
(-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7fe3f2a1b7e8] 
-->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7fe3f2a1893f] --

etc...

It's not an overly helpful error message: although it says a null client gave 
an invalid argument, it doesn't state which client or what the argument was.

I've tried strace and valgrind on glusterd as well as starting glusterd with 
--debug to no avail.

--
Sam McLeod 
@s_mcleod
https://smcleod.net

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS don't expose iSCSI target for Window server

2017-09-13 Thread GiangCoi Mr
Hi all

Question 1:
I followed the instructions at https://github.com/gluster/gluster-block. I use
two nodes, gluster01 (192.168.101.110) and gluster02 (192.168.101.111), created
one gluster volume (block-storage), and used gluster-block to create block
storage (block-store/win):

[root@gluster01 ~]# gluster-block create block-store/win ha 2
192.168.101.110,192.168.101.111 40GiB
IQN: iqn.2016-12.org.gluster-block:5b1b5077-50d0-49ce-bfca-6db4656a3599
PORTAL(S):  192.168.101.110:3260 192.168.101.111:3260
RESULT: SUCCESS

[root@gluster01 gluster-block]# gluster-block list block-store
win

[root@gluster01 gluster-block]# gluster-block info block-store/win
NAME: win
VOLUME: block-store
GBID: 5b1b5077-50d0-49ce-bfca-6db4656a3599
SIZE: 40.0 GiB
HA: 2
PASSWORD:
EXPORTED NODE(S): 192.168.101.110 192.168.101.111

When I use Windows Server 2012 (iSCSI Initiator) to connect to the iSCSI target
above, it connects, but Windows does not recognize the storage.


Question 2:
I created the block storage (block-store/win) with 40 GB. How do I scale
block-store/win out from 40 GB to 100 GB?

Please help me resolve these two questions. Thanks so much.

Regards.
Giang
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode

2017-09-13 Thread Diego Remolina
Correct, you should stop clients, then servers, and make 100% sure all
processes are dead.

Sometimes the brick processes don't die, so killall glusterfs and killall glusterfsd
will be needed.
will be needed.

Once all gluster services are shut down, apply your updates. Then start
glusterd on the servers, run gluster v status to ensure everything
is happy, and then start the clients.
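
For example, per server, the sequence looks roughly like this (a sketch
assuming a systemd- and yum-based install; adjust package names and service
handling to your distribution):

  systemctl stop glusterd
  killall glusterfs glusterfsd      # make sure no brick or client processes remain
  yum update 'glusterfs*'           # apply the new packages
  systemctl start glusterd
  gluster volume status             # confirm bricks come back online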

Read the upgrade guides; I think that if you use quotas, you have to do
some specific tasks.

Are you using the official RHEL 3.8.x release? If not, then perhaps you want
to go straight to 3.10.x, because 3.8.x is EOL for the community releases now
that 3.12 has been released. See:

https://www.gluster.org/release-schedule/

HTH,

Diego





On Wed, Sep 13, 2017 at 3:40 PM, Hemant Mamtora  wrote:
> Thanks for your reply.
>
> So the way I understand is that it can be upgraded but with a downtime,
> which means that there are no clients writing to this gluster volume, as
> the volume is stopped.
>
> But post a upgrade we will still have the data on the gluster volume
> that we had before the upgrade(but with downtime).
>
> - Hemant
>
>
> On 9/13/17 2:33 PM, Diego Remolina wrote:
>> Nope, not gonna work... I could never go even from 3.6. to 3.7 without
>> downtime cause of the settings change, see:
>>
>> http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html
>>
>> Even when changing options in the older 3.6.x I had installed, my new
>> 3.7.x server would not connect, so had to pretty much stop gluster in
>> all servers, update to 3.7.x offline, then start gluster and all
>> servers would see each other just fine.
>>
>> Diego
>>
>> On Tue, Sep 12, 2017 at 6:15 PM, Hemant Mamtora  wrote:
>>> I was looking to upgrade Gluster server from ver 3.5.X to 3.8.X.
>>>
>>> I have already tried it in offline upgrade mode and that works, I am
>>> interested in knowing if this upgrade of gluster server version can be
>>> in online upgrade mode.
>>>
>>> Many thanks in advance.
>>>
>>> --
>>> - Hemant Mamtora
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> - Hemant Mamtora
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] one brick one volume process dies?

2017-09-13 Thread Ben Werthmann
These symptoms appear to be the same as I've recorded in this post:

http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html

On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee 
wrote:

> Additionally the brick log file of the same brick would be required.
> Please look for if brick process went down or crashed. Doing a volume start
> force should resolve the issue.
>
> On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav  wrote:
>
>> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>>
>>
>> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek  wrote:
>>
>>>
>>>
>>> On 13/09/17 06:21, Gaurav Yadav wrote:
>>>
>> Please provide the output of gluster volume info, gluster volume status
 and gluster peer status.

 Apart  from above info, please provide glusterd logs, cmd_history.log.

 Thanks
 Gaurav

 On Tue, Sep 12, 2017 at 2:22 PM, lejeczek >>> pelj...@yahoo.co.uk>> wrote:

 hi everyone

 I have 3-peer cluster with all vols in replica mode, 9
 vols.
 What I see, unfortunately, is one brick fails in one
 vol, when it happens it's always the same vol on the
 same brick.
 Command: gluster vol status $vol - would show brick
 not online.
 Restarting glusterd with systemclt does not help, only
 system reboot seem to help, until it happens, next time.

 How to troubleshoot this weird misbehaviour?
 many thanks, L.

 .
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org

>>> 
 http://lists.gluster.org/mailman/listinfo/gluster-users
 



>>> hi, here:
>>>
>>> $ gluster vol info C-DATA
>>>
>>> Volume Name: C-DATA
>>> Type: Replicate
>>> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Options Reconfigured:
>>> performance.md-cache-timeout: 600
>>> performance.cache-invalidation: on
>>> performance.stat-prefetch: on
>>> features.cache-invalidation-timeout: 600
>>> features.cache-invalidation: on
>>> performance.io-thread-count: 64
>>> performance.cache-size: 128MB
>>> cluster.self-heal-daemon: enable
>>> features.quota-deem-statfs: on
>>> changelog.changelog: on
>>> geo-replication.ignore-pid-check: on
>>> geo-replication.indexing: on
>>> features.inode-quota: on
>>> features.quota: on
>>> performance.readdir-ahead: on
>>> nfs.disable: on
>>> transport.address-family: inet
>>> performance.cache-samba-metadata: on
>>>
>>>
>>> $ gluster vol status C-DATA
>>> Status of volume: C-DATA
>>> Gluster process TCP Port  RDMA Port Online
>>> Pid
>>> 
>>> --
>>> Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
>>> TERs/0GLUSTER-C-DATA N/A   N/A N   N/A
>>> Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
>>> STERs/0GLUSTER-C-DATA49152 0 Y   9376
>>> Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
>>> TERs/0GLUSTER-C-DATA 49152 0 Y   8638
>>> Self-heal Daemon on localhost   N/A   N/A Y   387879
>>> Quota Daemon on localhost   N/A   N/A Y   387891
>>> Self-heal Daemon on rider.private.ccnr.ceb.
>>> private.cam.ac.uk   N/A   N/A Y   16439
>>> Quota Daemon on rider.private.ccnr.ceb.priv
>>> ate.cam.ac.uk   N/A   N/A Y   16451
>>> Self-heal Daemon on 10.5.6.32   N/A   N/A Y   7708
>>> Quota Daemon on 10.5.6.32   N/A   N/A Y   8623
>>> Self-heal Daemon on 10.5.6.17   N/A   N/A Y   20549
>>> Quota Daemon on 10.5.6.17   N/A   N/A Y   9337
>>>
>>> Task Status of Volume C-DATA
>>> 
>>> --
>>> There are no active volume tasks
>>
>>
>>>
>>>
>>>
>>> .
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
> --
> --Atin
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___

Re: [Gluster-users] Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode

2017-09-13 Thread Hemant Mamtora
Thanks for your reply.

> So the way I understand it, the upgrade can be done, but with downtime,
> which means that no clients are writing to this gluster volume, as
> the volume is stopped.
>
> But after the upgrade we will still have the data on the gluster volume
> that we had before the upgrade (just with downtime).

- Hemant


On 9/13/17 2:33 PM, Diego Remolina wrote:
> Nope, not gonna work... I could never go even from 3.6. to 3.7 without
> downtime cause of the settings change, see:
>
> http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html
>
> Even when changing options in the older 3.6.x I had installed, my new
> 3.7.x server would not connect, so had to pretty much stop gluster in
> all servers, update to 3.7.x offline, then start gluster and all
> servers would see each other just fine.
>
> Diego
>
> On Tue, Sep 12, 2017 at 6:15 PM, Hemant Mamtora  wrote:
>> I was looking to upgrade Gluster server from ver 3.5.X to 3.8.X.
>>
>> I have already tried it in offline upgrade mode and that works, I am
>> interested in knowing if this upgrade of gluster server version can be
>> in online upgrade mode.
>>
>> Many thanks in advance.
>>
>> --
>> - Hemant Mamtora
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users

-- 
- Hemant Mamtora
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Issues with bricks and shd failing to start

2017-09-13 Thread Ben Werthmann
If you encounter issues where bricks and/or the self-heal daemon sometimes fail
to start, please see these bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060

The above bugs are filed against 3.10.4.

and this post where the OP was running 3.11.2:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032433.html

Hope this helps.

Ben Werthmann
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode

2017-09-13 Thread Diego Remolina
Nope, not gonna work... I could never go even from 3.6 to 3.7 without
downtime because of the settings change; see:

http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html

Even when changing options in the older 3.6.x I had installed, my new
3.7.x server would not connect, so I had to pretty much stop gluster on
all servers, update to 3.7.x offline, then start gluster, and all
servers would see each other just fine.

Diego

On Tue, Sep 12, 2017 at 6:15 PM, Hemant Mamtora  wrote:
> I was looking to upgrade Gluster server from ver 3.5.X to 3.8.X.
>
> I have already tried it in offline upgrade mode and that works, I am
> interested in knowing if this upgrade of gluster server version can be
> in online upgrade mode.
>
> Many thanks in advance.
>
> --
> - Hemant Mamtora
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR

2017-09-13 Thread Ben Werthmann
I ran into something like this in 3.10.4 and filed two bugs for it:

https://bugzilla.redhat.com/show_bug.cgi?id=1491059
https://bugzilla.redhat.com/show_bug.cgi?id=1491060

Please see the above bugs for full detail.

In summary, my issue was related to glusterd's handling of pid files
when it starts self-heal and bricks. The issues are:

a. the brick pid file leaves a stale pid and the brick fails to start when
glusterd is started. pid files are stored in `/var/lib/glusterd`, which
persists across reboots. When glusterd is started (or restarted, or the
host rebooted) and the pid in the brick pid file matches any running
process, the brick fails to start.

b. the self-heal-daemon pid file leaves a stale pid and glusterd
indiscriminately kills that pid when it is started. pid files are stored in
`/var/lib/glusterd`, which persists across reboots. When glusterd is
started (or restarted, or the host rebooted), any process whose pid
matches the pid in the shd pid file is killed.

Due to the nature of these bugs, sometimes bricks/shd will start and
sometimes they will not; restart success may be intermittent. The bug
is most likely to occur when services were running with a low pid and
the host is then rebooted, since reboots tend to densely group pids in
lower pid numbers. You might also see it if you have high pid churn
due to short-lived processes.

In the case of self-heal daemon, you may also see other processes
"randomly" being terminated.

resulting in:

1a. pid file /var/lib/glusterd/glustershd/run/glustershd.pid remains
after shd is stopped
2a. glusterd kills any process number in the stale shd pid file.
1b. brick pid file(s) remain after brick is stopped
2b. glusterd fails to start brick when the pid in a pid file matches
any running process
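
A quick way to check whether a stale shd pid file points at an unrelated
process (just a sketch, not taken from the bug reports):

  PIDFILE=/var/lib/glusterd/glustershd/run/glustershd.pid
  # print the command name owning the recorded pid, if any process has it
  [ -f "$PIDFILE" ] && ps -o comm= -p "$(cat "$PIDFILE")"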

Workaround:

In our automation, when we stop all gluster processes (reboot,
upgrade, etc.) we ensure all processes are stopped and then clean up
the pid files with:
'find /var/lib/glusterd/ -name '*pid' -delete'

This is not a complete solution, but works in our most critical times.
We may develop something more complete if the bug is not addressed
promptly.
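
One way to automate that cleanup (our own assumption, not something from the
bug reports) is a systemd drop-in for glusterd that removes the pid files
before every start:

  # /etc/systemd/system/glusterd.service.d/clean-pids.conf
  [Service]
  ExecStartPre=/usr/bin/find /var/lib/glusterd/ -name '*pid' -delete

followed by a systemctl daemon-reload.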




On Sat, Aug 5, 2017 at 11:54 PM, Leonid Isaev <
leonid.is...@jila.colorado.edu> wrote:

> Hi,
>
> I have a distributed volume which runs on Fedora 26 systems with
> glusterfs 3.11.2 from gluster.org repos:
> --
> [root@taupo ~]# glusterd --version
> glusterfs 3.11.2
>
> gluster> volume info gv2
> Volume Name: gv2
> Type: Distribute
> Volume ID: 6b468f43-3857-4506-917c-7eaaaef9b6ee
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 6
> Transport-type: tcp
> Bricks:
> Brick1: kiwi:/srv/gluster/gv2/brick1/gvol
> Brick2: kiwi:/srv/gluster/gv2/brick2/gvol
> Brick3: taupo:/srv/gluster/gv2/brick1/gvol
> Brick4: fox:/srv/gluster/gv2/brick1/gvol
> Brick5: fox:/srv/gluster/gv2/brick2/gvol
> Brick6: logan:/srv/gluster/gv2/brick1/gvol
> Options Reconfigured:
> performance.readdir-ahead: on
> nfs.disable: on
>
> gluster> volume status gv2
> Status of volume: gv2
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick kiwi:/srv/gluster/gv2/brick1/gvol 49152 0  Y
>  1128
> Brick kiwi:/srv/gluster/gv2/brick2/gvol 49153 0  Y
>  1134
> Brick taupo:/srv/gluster/gv2/brick1/gvolN/A   N/AN
>  N/A
> Brick fox:/srv/gluster/gv2/brick1/gvol  49152 0  Y
>  1169
> Brick fox:/srv/gluster/gv2/brick2/gvol  49153 0  Y
>  1175
> Brick logan:/srv/gluster/gv2/brick1/gvol49152 0  Y
>  1003
> --
>
> The machine in question is TAUPO which has one brick that refuses to
> connect to
> the cluster. All installations were migrated from glusterfs 3.8.14 on
> Fedora
> 24: I simply rsync'ed /var/lib/glusterd to new systems. On all other
> machines
> glusterd starts fine and all bricks come up. Hence I suspect a race
> condition
> somewhere. The glusterd.log file (attached) shows that the brick connects,
> and
> then suddenly disconnects from the cluster:
> --
> [2017-08-06 03:12:38.536409] I [glusterd-utils.c:5468:glusterd_brick_start]
> 0-management: discovered already-running brick /srv/gluster/gv2/brick1/gvol
> [2017-08-06 03:12:38.536414] I [MSGID: 106143] 
> [glusterd-pmap.c:279:pmap_registry_bind]
> 0-pmap: adding brick /srv/gluster/gv2/brick1/gvol on port 49153
> [2017-08-06 03:12:38.536427] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2017-08-06 03:12:38.536500] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
> 0-snapd: setting frame-timeout to 600
> [2017-08-06 03:12:38.536556] I [rpc-clnt.c:1059:rpc_clnt_connection_init]
> 0-snapd: setting frame-timeout to 600
> [2017-08-06 03:12:38.536616] I [MSGID: 106492] 
> [glusterd-handler.c:2717:__glusterd_handle_friend_update]
> 0-glusterd: Received friend update from uuid: d5a487e3-4c9b-4e5a-91ff-
> b8d85fd51da9
> [2017-08-06 03:12:38.584598] I [MSGID

[Gluster-users] Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode

2017-09-13 Thread Hemant Mamtora
I was looking to upgrade the Gluster server from version 3.5.x to 3.8.x.

I have already tried it in offline upgrade mode and that works; I am 
interested in knowing whether this gluster server upgrade can be done 
in online upgrade mode.

Many thanks in advance.

-- 
- Hemant Mamtora
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] pausing scrub crashed scrub daemon on nodes

2017-09-13 Thread FNU Raghavendra Manjunath
Hi Amudhan,

Replies inline.

On Fri, Sep 8, 2017 at 6:37 AM, Amudhan P  wrote:

> Hi,
>
> I am using glusterfs 3.10.1 with 30 nodes each with 36 bricks and 10 nodes
> each with 16 bricks in a single cluster.
>
> By default I have paused the scrub process in order to run it manually. The
> first time, I was trying to run scrub-on-demand and it was running fine,
> but after some time I decided to pause the scrub process due to high CPU
> usage and users reporting that folder listing was taking time.
> But pausing scrub resulted in the message below on some of the nodes.
> Also, I can see that the scrub daemon is not showing in volume status for some
> nodes.
>
> Error msg type 1
> --
>
> [2017-09-01 10:04:45.840248] I [bit-rot.c:1683:notify]
> 0-glustervol-bit-rot-0: BitRot scrub ondemand called
> [2017-09-01 10:05:05.094948] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-09-01 10:05:06.401792] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-09-01 10:05:07.544524] I [MSGID: 118035] 
> [bit-rot-scrub.c:1297:br_scrubber_scale_up]
> 0-glustervol-bit-rot-0: Scaling up scrubbe
> rs [0 => 36]
> [2017-09-01 10:05:07.552893] I [MSGID: 118048] 
> [bit-rot-scrub.c:1547:br_scrubber_log_option]
> 0-glustervol-bit-rot-0: SCRUB TUNABLES::
>  [Frequency: biweekly, Throttle: lazy]
> [2017-09-01 10:05:07.552942] I [MSGID: 118038] 
> [bit-rot-scrub.c:948:br_fsscan_schedule]
> 0-glustervol-bit-rot-0: Scrubbing is schedule
> d to run at 2017-09-15 10:05:07
> [2017-09-01 10:05:07.553457] I [glusterfsd-mgmt.c:1778:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2017-09-01 10:05:20.953815] I [bit-rot.c:1683:notify]
> 0-glustervol-bit-rot-0: BitRot scrub ondemand called
> [2017-09-01 10:05:20.953845] I [MSGID: 118038] 
> [bit-rot-scrub.c:1085:br_fsscan_ondemand]
> 0-glustervol-bit-rot-0: Ondemand Scrubbing s
> cheduled to run at 2017-09-01 10:05:21
> [2017-09-01 10:05:22.216937] I [MSGID: 118044] 
> [bit-rot-scrub.c:615:br_scrubber_log_time]
> 0-glustervol-bit-rot-0: Scrubbing started a
> t 2017-09-01 10:05:22
> [2017-09-01 10:05:22.306307] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-09-01 10:05:24.684900] I [glusterfsd-mgmt.c:1778:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2017-09-06 08:37:26.422267] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-09-06 08:37:28.351821] I [glusterfsd-mgmt.c:52:mgmt_cbk_spec]
> 0-mgmt: Volume file changed
> [2017-09-06 08:37:30.350786] I [MSGID: 118034] 
> [bit-rot-scrub.c:1342:br_scrubber_scale_down]
> 0-glustervol-bit-rot-0: Scaling down scr
> ubbers [36 => 0]
> pending frames:
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2017-09-06 08:37:30
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.10.1
> /usr/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7fda0ab0b4f8]
> /usr/lib/libglusterfs.so.0(gf_print_trace+0x324)[0x7fda0ab14914]
> /lib/x86_64-linux-gnu/libc.so.6(+0x36d40)[0x7fda09ef9d40]
> /usr/lib/libglusterfs.so.0(syncop_readv_cbk+0x17)[0x7fda0ab429e7]
> /usr/lib/glusterfs/3.10.1/xlator/protocol/client.so(+
> 0x2db4b)[0x7fda04986b4b]
> /usr/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7fda0a8d5490]
> /usr/lib/libgfrpc.so.0(rpc_clnt_notify+0x1e7)[0x7fda0a8d5777]
> /usr/lib/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fda0a8d17d3]
> /usr/lib/glusterfs/3.10.1/rpc-transport/socket.so(+0x7194)[0x7fda05826194]
> /usr/lib/glusterfs/3.10.1/rpc-transport/socket.so(+0x9635)[0x7fda05828635]
> /usr/lib/libglusterfs.so.0(+0x83db0)[0x7fda0ab64db0]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x8182)[0x7fda0a290182]
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fda09fbd47d]
> --
>
> Error msg type 2
>
> [2017-09-01 10:01:20.387248] I [MSGID: 118035] 
> [bit-rot-scrub.c:1297:br_scrubber_scale_up]
> 0-glustervol-bit-rot-0: Scaling up scrubbe
> rs [0 => 36]
> [2017-09-01 10:01:20.392544] I [MSGID: 118048] 
> [bit-rot-scrub.c:1547:br_scrubber_log_option]
> 0-glustervol-bit-rot-0: SCRUB TUNABLES::
>  [Frequency: biweekly, Throttle: lazy]
> [2017-09-01 10:01:20.392571] I [MSGID: 118038] 
> [bit-rot-scrub.c:948:br_fsscan_schedule]
> 0-glustervol-bi

Re: [Gluster-users] glusterfs expose iSCSI

2017-09-13 Thread GiangCoi Mr
Hi Prasanna.

I followed the instructions at https://github.com/gluster/gluster-block. I use
two nodes, gluster01 and gluster02, and created one gluster volume. When I use
Windows Server 2012 (iSCSI Initiator) to connect to the iSCSI target, it
connects, but Windows does not recognize the storage.


The size of the gluster volume is 40 GB. How do I fix this?

2017-09-13 18:34 GMT+07:00 Prasanna Kalever :

> On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr  wrote:
> > Hi all
> >
>
> Hi GiangCoi,
>
> The Good news is that now we have gluster-block [1] which will help
> you configure block storage using gluster very easy.
> gluster-block will take care of all the targetcli and tcmu-runner
> configuration for you, all you need as a pre-requisite is a gluster
> volume.
>
> And the sad part is we haven't tested gluster-block on centos, but
> just source compile should work IMO.
>
> > I want to configure glusterfs to expose iSCSI target. I followed this
> > artical
> > https://pkalever.wordpress.com/2016/06/23/gluster-
> solution-for-non-shared-persistent-storage-in-docker-container/
> > but when I install tcmu-runner. It doesn't work.
>
> What is your environment, do you want to setup guster block storage in
> a container environment or is it just in a non-container centos
> environment ?
>
> >
> > I setup on CentOS7 and installed tcmu-runner by rpm. When I run
> targetcli,
> > it not show user:glfs and user:gcow
> >
> > /> ls
> > o- / .. [...]
> >   o- backstores ... [...]
> >   | o- block ... [Storage Objects: 0]
> >   | o- fileio .. [Storage Objects: 0]
> >   | o- pscsi ... [Storage Objects: 0]
> >   | o- ramdisk . [Storage Objects: 0]
> >   o- iscsi . [Targets: 0]
> >   o- loopback .. [Targets: 0]
> >
>
> BTW - have you started your tcmu-runner.service ?
> If your tcmu-runner service is running but you still cannot see them
> listed in the 'targetcli ls' output, then it looks like your handlers
> were not loaded properly.
>
> In fedora, the default handler location will be at /usr/lib64/tcmu-runner
>
> [0] ॐ 04:55:22@~ $ ls /usr/lib64/tcmu-runner/
> handler_glfs.so
>
> Just try using --handler-path option
> [0] ॐ 04:56:05@~ $ tcmu-runner --handler-path /usr/lib64/tcmu-runner/ &
>
> [0] ॐ 05:00:54@~ $ targetcli ls | grep glfs
>   | o- user:glfs
> 
> ..
> [Storage Objects: 0]
>
> If it works, then may be you can tweak the systemd unit, in case if
> you want to run it as a service
>
> > How I configure glusterfs to expose iSCSI. Please help me.
>
> Feel free to parse gluster-block ReadMe [2]
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/blob/master/README.md
>
> Cheers!
> --
> Prasanna
>
> >
> > Regards,
> >
> > Giang
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs expose iSCSI

2017-09-13 Thread GiangCoi Mr
Thanks so much your answers.
I will try gluster-block.

2017-09-13 18:34 GMT+07:00 Prasanna Kalever :

> On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr  wrote:
> > Hi all
> >
>
> Hi GiangCoi,
>
> The Good news is that now we have gluster-block [1] which will help
> you configure block storage using gluster very easy.
> gluster-block will take care of all the targetcli and tcmu-runner
> configuration for you, all you need as a pre-requisite is a gluster
> volume.
>
> And the sad part is we haven't tested gluster-block on centos, but
> just source compile should work IMO.
>
> > I want to configure glusterfs to expose iSCSI target. I followed this
> > artical
> > https://pkalever.wordpress.com/2016/06/23/gluster-
> solution-for-non-shared-persistent-storage-in-docker-container/
> > but when I install tcmu-runner. It doesn't work.
>
> What is your environment, do you want to setup guster block storage in
> a container environment or is it just in a non-container centos
> environment ?
>
> >
> > I setup on CentOS7 and installed tcmu-runner by rpm. When I run
> targetcli,
> > it not show user:glfs and user:gcow
> >
> > /> ls
> > o- / .. [...]
> >   o- backstores ... [...]
> >   | o- block ... [Storage Objects: 0]
> >   | o- fileio .. [Storage Objects: 0]
> >   | o- pscsi ... [Storage Objects: 0]
> >   | o- ramdisk . [Storage Objects: 0]
> >   o- iscsi . [Targets: 0]
> >   o- loopback .. [Targets: 0]
> >
>
> BTW - have you started your tcmu-runner.service ?
> If your tcmu-runner service is running but you still cannot see them
> listed in the 'targetcli ls' output, then it looks like your handlers
> were not loaded properly.
>
> In fedora, the default handler location will be at /usr/lib64/tcmu-runner
>
> [0] ॐ 04:55:22@~ $ ls /usr/lib64/tcmu-runner/
> handler_glfs.so
>
> Just try using --handler-path option
> [0] ॐ 04:56:05@~ $ tcmu-runner --handler-path /usr/lib64/tcmu-runner/ &
>
> [0] ॐ 05:00:54@~ $ targetcli ls | grep glfs
>   | o- user:glfs
> 
> ..
> [Storage Objects: 0]
>
> If it works, then may be you can tweak the systemd unit, in case if
> you want to run it as a service
>
> > How I configure glusterfs to expose iSCSI. Please help me.
>
> Feel free to parse gluster-block ReadMe [2]
>
>
> [1] https://github.com/gluster/gluster-block
> [2] https://github.com/gluster/gluster-block/blob/master/README.md
>
> Cheers!
> --
> Prasanna
>
> >
> > Regards,
> >
> > Giang
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] one brick one volume process dies?

2017-09-13 Thread lejeczek

I emailed the logs earlier to just you.

On 13/09/17 11:58, Gaurav Yadav wrote:
Please send me the logs as well i.e glusterd.logs and 
cmd_history.log.



On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <pelj...@yahoo.co.uk> wrote:




On 13/09/17 06:21, Gaurav Yadav wrote:

Please provide the output of gluster volume info,
gluster volume status and gluster peer status.

Apart  from above info, please provide glusterd
logs, cmd_history.log.

Thanks
Gaurav

On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <pelj...@yahoo.co.uk> wrote:

    hi everyone

    I have 3-peer cluster with all vols in replica
mode, 9
    vols.
    What I see, unfortunately, is one brick fails
in one
    vol, when it happens it's always the same vol
on the
    same brick.
    Command: gluster vol status $vol - would show
brick
    not online.
    Restarting glusterd with systemclt does not
help, only
    system reboot seem to help, until it happens,
next time.

    How to troubleshoot this weird misbehaviour?
    many thanks, L.

    .
    ___
    Gluster-users mailing list
Gluster-users@gluster.org

    >
http://lists.gluster.org/mailman/listinfo/gluster-users

   
>



hi, here:

$ gluster vol info C-DATA

Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1:
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2:
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3:
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 64
performance.cache-size: 128MB
cluster.self-heal-daemon: enable
features.quota-deem-statfs: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet
performance.cache-samba-metadata: on


$ gluster vol status C-DATA
Status of volume: C-DATA
Gluster process   TCP Port RDMA Port Online  Pid

--
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
TERs/0GLUSTER-C-DATA    N/A   N/A N   N/A
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
STERs/0GLUSTER-C-DATA    49152 0 Y   9376
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
TERs/0GLUSTER-C-DATA    49152 0 Y   8638
Self-heal Daemon on localhost   N/A  
N/A Y   387879
Quota Daemon on localhost   N/A  
N/A Y   387891
Self-heal Daemon on rider.private.ccnr.ceb.
private.cam.ac.uk  N/A  
N/A Y   16439
Quota Daemon on rider.private.ccnr.ceb.priv
ate.cam.ac.uk  N/A   N/A
Y   16451
Self-heal Daemon on 10.5.6.32   N/A  
N/A Y   7708
Quota Daemon on 10.5.6.32   N/A  
N/A Y   8623
Self-heal Daemon on 10.5.6.17   N/A  
N/A Y   20549
Quota Daemon on 10.5.6.17   N/A  
N/A Y   9337

Task Status of Volume C-DATA

--
There are no active volume tasks




.
___
Gluster-users mailing list
Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users






.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfs expose iSCSI

2017-09-13 Thread Prasanna Kalever
On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr  wrote:
> Hi all
>

Hi GiangCoi,

The good news is that we now have gluster-block [1], which will help
you configure block storage using gluster very easily.
gluster-block takes care of all the targetcli and tcmu-runner
configuration for you; all you need as a prerequisite is a gluster
volume.

And the sad part is that we haven't tested gluster-block on CentOS, but
just building from source should work IMO.
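
Once built, carving a block device out of an existing gluster volume looks
roughly like the following (a sketch; the volume name, block name, size and
IPs below are only examples):

  gluster-block create block-volume/sample-block ha 2 192.168.101.110,192.168.101.111 10GiB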

> I want to configure glusterfs to expose iSCSI target. I followed this
> artical
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
> but when I install tcmu-runner. It doesn't work.

What is your environment? Do you want to set up gluster block storage in
a container environment, or is it just a non-container CentOS
environment?

>
> I setup on CentOS7 and installed tcmu-runner by rpm. When I run targetcli,
> it not show user:glfs and user:gcow
>
> /> ls
> o- / .. [...]
>   o- backstores ... [...]
>   | o- block ... [Storage Objects: 0]
>   | o- fileio .. [Storage Objects: 0]
>   | o- pscsi ... [Storage Objects: 0]
>   | o- ramdisk . [Storage Objects: 0]
>   o- iscsi . [Targets: 0]
>   o- loopback .. [Targets: 0]
>

BTW - have you started tcmu-runner.service?
If your tcmu-runner service is running but you still cannot see the handlers
listed in the 'targetcli ls' output, then it looks like your handlers
were not loaded properly.

In fedora, the default handler location will be at /usr/lib64/tcmu-runner

[0] ॐ 04:55:22@~ $ ls /usr/lib64/tcmu-runner/
handler_glfs.so

Just try using --handler-path option
[0] ॐ 04:56:05@~ $ tcmu-runner --handler-path /usr/lib64/tcmu-runner/ &

[0] ॐ 05:00:54@~ $ targetcli ls | grep glfs
  | o- user:glfs
..
[Storage Objects: 0]

If it works, then maybe you can tweak the systemd unit, in case you want
to run it as a service.
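
For instance, a drop-in along these lines (a sketch; the binary and handler
paths are assumptions, adjust to your install):

  # /etc/systemd/system/tcmu-runner.service.d/handler-path.conf
  [Service]
  ExecStart=
  ExecStart=/usr/bin/tcmu-runner --handler-path /usr/lib64/tcmu-runner/

then systemctl daemon-reload and restart the service.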

> How I configure glusterfs to expose iSCSI. Please help me.

Feel free to parse gluster-block ReadMe [2]


[1] https://github.com/gluster/gluster-block
[2] https://github.com/gluster/gluster-block/blob/master/README.md

Cheers!
--
Prasanna

>
> Regards,
>
> Giang
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Basic questions

2017-09-13 Thread Roman
Hi guys.
It's been a long time with no word from me.
My GlusterFS setup is still alive and running VMs on a 3.6.x version.

Now I'm going to build a pretty large and scalable distributed storage setup
and a somewhat smaller replicated one.

Which version is recommended at the moment? I've read some notes on the list
about the performance of 3.12.0 vs 3.10.5.

Another question: are there any fresh benchmarks of native and NFS mounts
with glusterfs? Back when I was using glusterfs to build storage, NFS was
way faster.
Is there any way to mount a replicated volume using a single node name? Or how
should I do it to achieve automated failover if something goes wrong while
using an NFS mount of a replicated volume? Should I just edit the hosts file
to point the same name at two different IPs?
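
(For the native FUSE client this is usually handled with the
backup-volfile-servers mount option rather than hosts-file tricks; a sketch
with placeholder host and volume names:

  mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol

For NFS mounts, a floating/virtual IP is the usual approach.)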

The distributed setup will grow to about 300 TB overall. File sizes are
10 MB - 1.5 GB. Are there any recommendations for performance tuning on the
volumes? I will not use tiering with SSDs. The network is going to be
10 Gbps. Any advice on this topic is highly appreciated. I will start with one
server and 50 TB of disks on hardware RAID 10 and add servers as the data
grows.

-- 
Best regards,
Roman.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] one brick one volume process dies?

2017-09-13 Thread Atin Mukherjee
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
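
For the volume discussed in this thread that would be something like (sketch):

  gluster volume start C-DATA force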

On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav  wrote:

> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek  wrote:
>
>>
>>
>> On 13/09/17 06:21, Gaurav Yadav wrote:
>>
> Please provide the output of gluster volume info, gluster volume status
>>> and gluster peer status.
>>>
>>> Apart  from above info, please provide glusterd logs, cmd_history.log.
>>>
>>> Thanks
>>> Gaurav
>>>
>>> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <pelj...@yahoo.co.uk> wrote:
>>> hi everyone
>>>
>>> I have 3-peer cluster with all vols in replica mode, 9
>>> vols.
>>> What I see, unfortunately, is one brick fails in one
>>> vol, when it happens it's always the same vol on the
>>> same brick.
>>> Command: gluster vol status $vol - would show brick
>>> not online.
>>> Restarting glusterd with systemclt does not help, only
>>> system reboot seem to help, until it happens, next time.
>>>
>>> How to troubleshoot this weird misbehaviour?
>>> many thanks, L.
>>>
>>> .
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>>
>> 
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>> 
>>>
>>>
>>>
>> hi, here:
>>
>> $ gluster vol info C-DATA
>>
>> Volume Name: C-DATA
>> Type: Replicate
>> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Options Reconfigured:
>> performance.md-cache-timeout: 600
>> performance.cache-invalidation: on
>> performance.stat-prefetch: on
>> features.cache-invalidation-timeout: 600
>> features.cache-invalidation: on
>> performance.io-thread-count: 64
>> performance.cache-size: 128MB
>> cluster.self-heal-daemon: enable
>> features.quota-deem-statfs: on
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> features.inode-quota: on
>> features.quota: on
>> performance.readdir-ahead: on
>> nfs.disable: on
>> transport.address-family: inet
>> performance.cache-samba-metadata: on
>>
>>
>> $ gluster vol status C-DATA
>> Status of volume: C-DATA
>> Gluster process TCP Port  RDMA Port Online
>> Pid
>>
>> --
>> Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
>> TERs/0GLUSTER-C-DATA N/A   N/A N   N/A
>> Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
>> STERs/0GLUSTER-C-DATA49152 0 Y   9376
>> Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
>> TERs/0GLUSTER-C-DATA 49152 0 Y   8638
>> Self-heal Daemon on localhost   N/A   N/A Y   387879
>> Quota Daemon on localhost   N/A   N/A Y   387891
>> Self-heal Daemon on rider.private.ccnr.ceb.
>> private.cam.ac.uk   N/A   N/A Y   16439
>> Quota Daemon on rider.private.ccnr.ceb.priv
>> ate.cam.ac.uk   N/A   N/A Y   16451
>> Self-heal Daemon on 10.5.6.32   N/A   N/A Y   7708
>> Quota Daemon on 10.5.6.32   N/A   N/A Y   8623
>> Self-heal Daemon on 10.5.6.17   N/A   N/A Y   20549
>> Quota Daemon on 10.5.6.17   N/A   N/A Y   9337
>>
>> Task Status of Volume C-DATA
>>
>> --
>> There are no active volume tasks
>
>
>>
>>
>>
>> .
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

-- 
--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] one brick one volume process dies?

2017-09-13 Thread Gaurav Yadav
Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.


On Wed, Sep 13, 2017 at 1:45 PM, lejeczek  wrote:

>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart  from above info, please provide glusterd logs, cmd_history.log.
>>
>> Thanks
>> Gaurav
>>
>> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <pelj...@yahoo.co.uk> wrote:
>> hi everyone
>>
>> I have 3-peer cluster with all vols in replica mode, 9
>> vols.
>> What I see, unfortunately, is one brick fails in one
>> vol, when it happens it's always the same vol on the
>> same brick.
>> Command: gluster vol status $vol - would show brick
>> not online.
>> Restarting glusterd with systemclt does not help, only
>> system reboot seem to help, until it happens, next time.
>>
>> How to troubleshoot this weird misbehaviour?
>> many thanks, L.
>>
>> .
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> 
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>>
>>
>>
> hi, here:
>
> $ gluster vol info C-DATA
>
> Volume Name: C-DATA
> Type: Replicate
> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Options Reconfigured:
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> performance.io-thread-count: 64
> performance.cache-size: 128MB
> cluster.self-heal-daemon: enable
> features.quota-deem-statfs: on
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on
> nfs.disable: on
> transport.address-family: inet
> performance.cache-samba-metadata: on
>
>
> $ gluster vol status C-DATA
> Status of volume: C-DATA
> Gluster process TCP Port  RDMA Port Online
> Pid
> 
> --
> Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
> TERs/0GLUSTER-C-DATA N/A   N/A N   N/A
> Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
> STERs/0GLUSTER-C-DATA49152 0 Y   9376
> Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
> TERs/0GLUSTER-C-DATA 49152 0 Y   8638
> Self-heal Daemon on localhost   N/A   N/A Y   387879
> Quota Daemon on localhost   N/A   N/A Y   387891
> Self-heal Daemon on rider.private.ccnr.ceb.
> private.cam.ac.uk   N/A   N/A Y   16439
> Quota Daemon on rider.private.ccnr.ceb.priv
> ate.cam.ac.uk   N/A   N/A Y   16451
> Self-heal Daemon on 10.5.6.32   N/A   N/A Y   7708
> Quota Daemon on 10.5.6.32   N/A   N/A Y   8623
> Self-heal Daemon on 10.5.6.17   N/A   N/A Y   20549
> Quota Daemon on 10.5.6.17   N/A   N/A Y   9337
>
> Task Status of Volume C-DATA
> 
> --
> There are no active volume tasks
>
>
>
>
> .
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] one brick one volume process dies?

2017-09-13 Thread lejeczek



On 13/09/17 06:21, Gaurav Yadav wrote:
Please provide the output of gluster volume info, gluster 
volume status and gluster peer status.


Apart  from above info, please provide glusterd logs, 
cmd_history.log.


Thanks
Gaurav

On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <pelj...@yahoo.co.uk> wrote:


hi everyone

I have 3-peer cluster with all vols in replica mode, 9
vols.
What I see, unfortunately, is one brick fails in one
vol, when it happens it's always the same vol on the
same brick.
Command: gluster vol status $vol - would show brick
not online.
Restarting glusterd with systemclt does not help, only
system reboot seem to help, until it happens, next time.

How to troubleshoot this weird misbehaviour?
many thanks, L.

.
___
Gluster-users mailing list
Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users





hi, here:

$ gluster vol info C-DATA

Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2: 
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3: 
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA

Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 64
performance.cache-size: 128MB
cluster.self-heal-daemon: enable
features.quota-deem-statfs: on
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
nfs.disable: on
transport.address-family: inet
performance.cache-samba-metadata: on


$ gluster vol status C-DATA
Status of volume: C-DATA
Gluster process TCP Port  RDMA 
Port Online  Pid

--
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUS
TERs/0GLUSTER-C-DATA N/A   N/A 
N   N/A

Brick 10.5.6.100:/__.aLocalStorages/0/0-GLU
STERs/0GLUSTER-C-DATA    49152 0 Y   
9376

Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUS
TERs/0GLUSTER-C-DATA 49152 0 Y   
8638
Self-heal Daemon on localhost   N/A   N/A 
Y   387879
Quota Daemon on localhost   N/A   N/A 
Y   387891

Self-heal Daemon on rider.private.ccnr.ceb.
private.cam.ac.uk   N/A   N/A 
Y   16439

Quota Daemon on rider.private.ccnr.ceb.priv
ate.cam.ac.uk   N/A   N/A 
Y   16451
Self-heal Daemon on 10.5.6.32   N/A   N/A 
Y   7708
Quota Daemon on 10.5.6.32   N/A   N/A 
Y   8623
Self-heal Daemon on 10.5.6.17   N/A   N/A 
Y   20549
Quota Daemon on 10.5.6.17   N/A   N/A 
Y   9337


Task Status of Volume C-DATA
--
There are no active volume tasks



.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] man pages - incomplete

2017-09-13 Thread lejeczek



On 12/09/17 12:59, Niels de Vos wrote:

On Tue, Sep 12, 2017 at 10:01:14AM +0100, lejeczek wrote:

@devel

hi, I wonder who takes care of man pages when it comes to rpms?
I'd like to file a bugzilla report and would like to make sure it's the package
maintainer(s) who are responsible for incomplete man pages.
Man pages are neglected by authors too often; man is, and should always be,
"the place", and we users/admins should not have to search around for info
almost every time.

Please file a bug against the "doc" component in Bugzilla:
   https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=doc

It helps a lot if you explain in detail what is missing or incorrect,
and what you expect to read instead.


It suffices to do a quick comparison (@10.5.x) of gluster help vs 
man gluster, which reveals that subcommands like reset, 
sync, clear-locks and maybe more are missing from the man pages.
This stuff must be there, it's vital; man pages must always 
be complete, and then an admin's/user's life gets so much better.
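
A crude way to spot-check a single subcommand (just a sketch):

  # count mentions in the CLI help vs. the man page
  gluster help 2>&1 | grep -c clear-locks
  man gluster | grep -c clear-locks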

thanks, L.


The man-pages themselves are under the doc/ directory in the GlusterFS
sources, the *.8 files:
   https://github.com/gluster/glusterfs/tree/master/doc

If you want to send patches to address the changes you like to see,
please follow the "Simplified development workflow":
   
http://gluster.readthedocs.io/en/latest/Developer-guide/Simplified-Development-Workflow/

In case you run into difficulties with sending patches, don't hesitate
to email gluster-de...@gluster.org or ask in #gluster-devel on Freenode
IRC.

Thanks,
Niels



.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] glusterfs expose iSCSI

2017-09-13 Thread GiangCoi Mr
Hi all

I want to configure glusterfs to expose an iSCSI target. I followed this
article
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
but when I installed tcmu-runner, it didn't work.

I set it up on CentOS 7 and installed tcmu-runner from an rpm. When I run targetcli,
it does not show *user:glfs* and *user:gcow*

*/>* ls
o- / .. [...]
  o- backstores ... [...]
  | o- block ... [Storage Objects: 0]
  | o- fileio .. [Storage Objects: 0]
  | o- pscsi ... [Storage Objects: 0]
  | o- ramdisk . [Storage Objects: 0]
  o- iscsi . [Targets: 0]
  o- loopback .. [Targets: 0]

How do I configure glusterfs to expose iSCSI? Please help me.

Regards,

Giang
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users