[Gluster-users] INFO: task "####" blocked for more than 120 seconds.

2014-09-15 Thread Roman
Hi,

This morning we had a VM crash with the logs below (see attachment).

There are no logs on the gluster servers, nor on the virtual host where the
glusterfs mount is attached.
Any ideas?



-- 
Best regards,
Roman.
Sep 16 06:36:58 munin kernel: [1025760.825197] jbd2/vda1-8 D 88003fc93780 0   179  2 0x
Sep 16 06:36:58 munin kernel: [1025760.825204]  88003bb30740 0046  88003e216180
Sep 16 06:36:58 munin kernel: [1025760.825210]  00013780 880036d77fd8 880036d77fd8 88003bb30740
Sep 16 06:36:58 munin kernel: [1025760.825214]  880036d76000 000181066311 88003e1d5c20 88003fc93fd0
Sep 16 06:36:58 munin kernel: [1025760.825218] Call Trace:
Sep 16 06:36:58 munin kernel: [1025760.825262]  [] ? wait_on_buffer+0x28/0x28
Sep 16 06:36:58 munin kernel: [1025760.825282]  [] ? io_schedule+0x59/0x71
Sep 16 06:36:58 munin kernel: [1025760.825286]  [] ? sleep_on_buffer+0x6/0xa
Sep 16 06:36:58 munin kernel: [1025760.825289]  [] ? __wait_on_bit+0x3e/0x71
Sep 16 06:36:58 munin kernel: [1025760.825292]  [] ? out_of_line_wait_on_bit+0x6f/0x78
Sep 16 06:36:58 munin kernel: [1025760.825295]  [] ? wait_on_buffer+0x28/0x28
Sep 16 06:36:58 munin kernel: [1025760.825311]  [] ? autoremove_wake_function+0x2a/0x2a
Sep 16 06:36:58 munin kernel: [1025760.825389]  [] ? jbd2_journal_commit_transaction+0xbeb/0x10bf [jbd2]
Sep 16 06:36:58 munin kernel: [1025760.825403]  [] ? load_TLS+0x7/0xa
Sep 16 06:36:58 munin kernel: [1025760.825407]  [] ? __switch_to+0x133/0x258
Sep 16 06:36:58 munin kernel: [1025760.825413]  [] ? kjournald2+0xc0/0x20a [jbd2]
Sep 16 06:36:58 munin kernel: [1025760.825417]  [] ? add_wait_queue+0x3c/0x3c
Sep 16 06:36:58 munin kernel: [1025760.825422]  [] ? commit_timeout+0x5/0x5 [jbd2]
Sep 16 06:36:58 munin kernel: [1025760.825426]  [] ? kthread+0x76/0x7e
Sep 16 06:36:58 munin kernel: [1025760.825436]  [] ? kernel_thread_helper+0x4/0x10
Sep 16 06:36:58 munin kernel: [1025760.825440]  [] ? kthread_worker_fn+0x139/0x139
Sep 16 06:36:58 munin kernel: [1025760.825444]  [] ? gs_change+0x13/0x13
Sep 16 06:36:58 munin kernel: [1025760.826398] flush-254:0 D 88003fc13780 0  2672  2 0x
Sep 16 06:36:58 munin kernel: [1025760.826410]  880037075740 0046  8160d020
Sep 16 06:36:58 munin kernel: [1025760.826421]  00013780 88003b955fd8 88003b955fd8 880037075740
Sep 16 06:36:58 munin kernel: [1025760.826424]  88003b955770 00013b955770 88003d424c90 88003fc13fd0
Sep 16 06:36:58 munin kernel: [1025760.826428] Call Trace:
Sep 16 06:36:58 munin kernel: [1025760.826432]  [] ? io_schedule+0x59/0x71
Sep 16 06:36:58 munin kernel: [1025760.826447]  [] ? get_request_wait+0x105/0x18f
Sep 16 06:36:58 munin kernel: [1025760.826452]  [] ? add_wait_queue+0x3c/0x3c
Sep 16 06:36:58 munin kernel: [1025760.826456]  [] ? blk_queue_bio+0x17f/0x28c
Sep 16 06:36:58 munin kernel: [1025760.826459]  [] ? generic_make_request+0x90/0xcf
Sep 16 06:36:58 munin kernel: [1025760.826463]  [] ? submit_bio+0xd3/0xf1
Sep 16 06:36:58 munin kernel: [1025760.826473]  [] ? ext4_io_submit+0x21/0x4a [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826481]  [] ? mpage_da_submit_io+0x359/0x36f [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826488]  [] ? mpage_da_map_and_submit+0x2aa/0x2f9 [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826495]  [] ? ext4_mark_inode_dirty+0x1af/0x1da [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826502]  [] ? mpage_da_map_and_submit+0x2e3/0x2f9 [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826509]  [] ? ext4_da_writepages+0x239/0x45d [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826519]  [] ? ext4_journal_start_sb+0x139/0x14f [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826526]  [] ? ext4_da_writepages+0x2c4/0x45d [ext4]
Sep 16 06:36:58 munin kernel: [1025760.826533]  [] ? writeback_single_inode+0x11d/0x2cc
Sep 16 06:36:58 munin kernel: [1025760.826537]  [] ? writeback_sb_inodes+0x16b/0x204
Sep 16 06:36:58 munin kernel: [1025760.826541]  [] ? __writeback_inodes_wb+0x6d/0xab
Sep 16 06:36:58 munin kernel: [1025760.826545]  [] ? wb_writeback+0x128/0x21f
Sep 16 06:36:58 munin kernel: [1025760.826553]  [] ? arch_local_irq_save+0x11/0x17
Sep 16 06:36:58 munin kernel: [1025760.826558]  [] ? wb_do_writeback+0x146/0x1a8
Sep 16 06:36:58 munin kernel: [1025760.826562]  [] ? bdi_writeback_thread+0x85/0x1e6
Sep 16 06:36:58 munin kernel: [1025760.826565]  [] ? wb_do_writeback+0x1a8/0x1a8
Sep 16 06:36:58 munin kernel: [1025760.826569]  [] ? kthread+0x76/0x7e
Sep 16 06:36:58 munin kernel: [1025760.826573]  [] ? kernel_thread_helper+0x4/0x10
Sep 16 06:36:58 munin kernel: [1025760.826576]  [] ? kthread_worker_fn+0x139/0x139
Sep 16 06:36:58 munin kernel: [1025760.826579]  [] ? gs_change+0x13/0x13
Sep 16 06:36:58 munin kernel: [1025760.827533] munin-graph D 88003fc13780 0 29913  29580 0x
Sep 16 06:36:58 munin kernel: [1025760.827537]  88003d
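For context on the subject line: the "blocked for more than 120 seconds"
warning comes from the kernel's hung-task watchdog, whose threshold is the
hung_task_timeout_secs sysctl (shown below for reference only, not as a fix):

   cat /proc/sys/kernel/hung_task_timeout_secs    # default is 120 seconds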

Re: [Gluster-users] Do the libgfapi interfaces support asynchronous calls?

2014-09-15 Thread Prashanth Pai
Is this what you're looking for?
https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h#L470-L473
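A minimal sketch of the asynchronous read API declared in that header (the
volume name, server, and file path below are assumptions, not from this
thread; signatures match the 3.x glfs.h):

#include <glusterfs/api/glfs.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* completion callback, invoked from a gluster event thread */
static void read_done (glfs_fd_t *fd, ssize_t ret, void *cookie)
{
    printf ("async read completed: %zd bytes\n", ret);
    *(volatile int *) cookie = 1;
}

int main (void)
{
    char buf[4096];
    volatile int done = 0;

    glfs_t *fs = glfs_new ("testvol");                     /* assumed volume name */
    glfs_set_volfile_server (fs, "tcp", "server1", 24007); /* assumed server */
    if (glfs_init (fs) != 0)
        return 1;

    glfs_fd_t *fd = glfs_open (fs, "/somefile", O_RDONLY); /* assumed path */
    if (!fd) {
        glfs_fini (fs);
        return 1;
    }

    /* returns immediately; read_done fires when the read finishes */
    glfs_pread_async (fd, buf, sizeof (buf), 0, 0, read_done, (void *) &done);

    while (!done)          /* crude wait for this demo only; a real      */
        usleep (1000);     /* application would drive its own event loop */

    glfs_close (fd);
    glfs_fini (fs);
    return 0;
}

Compile with something like: gcc async-read.c -o async-read -lgfapi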

Regards,
 -Prashanth Pai

- Original Message -
From: "zhihua jiang" 
To: gluster-users@gluster.org
Sent: Tuesday, September 16, 2014 7:20:06 AM
Subject: [Gluster-users] Do the libgfapi interfaces support asynchronous calls?



Do the libgfapi interfaces support asynchronous calls?



[Gluster-users] glusterfs replica volume self-heal of lots of small files is very slow. Why, and how can it be improved?

2014-09-15 Thread justgluste...@gmail.com




justgluste...@gmail.com
 
From: justgluste...@gmail.com
Sent: 2014-09-09 16:23
To: gluster-devel
Cc: gluster-users
Subject: glusterfs replica volume self-heal of lots of small files is very slow. Why?
Hi all:
  I ran the following test:
I created a glusterfs replica volume (replica count 2) with two server nodes
(server A and server B), using XFS as the underlying filesystem, and then
mounted the volume on a client node.
Then I shut down the network on server A. On the client, I copied in a
directory containing a lot of small files; the directory size is 2.9 GB.
When the copy finished, I unmounted the volume from the client and restored
the network on server A. The glusterfs self-heal daemon then started healing
the directory from server B to server A.
In the end, I found the self-heal daemon took 40 minutes to heal the
directory. That is too slow! Why?

   I found the following options related to self-heal:
   cluster.self-heal-window-size
   cluster.self-heal-readdir-size
   cluster.background-self-heal-count


  I then configured:
  cluster.self-heal-window-size to 1024 (the maximum value)
  cluster.self-heal-readdir-size to 131072 (the maximum value)

  and ran the same test case again (the commands are sketched below). This
time the heal took 35 minutes; the improvement is not significant.
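For reference, a minimal sketch of how these options are set from the CLI
(the volume name "repvol" is hypothetical):

   gluster volume set repvol cluster.self-heal-window-size 1024
   gluster volume set repvol cluster.self-heal-readdir-size 131072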
  

  I want to ask: are there better ways to improve self-heal performance for a
replica volume with lots of small files?
  
  thanks!



justgluste...@gmail.com

Re: [Gluster-users] [Gluster-devel] Proposal for GlusterD-2.0

2014-09-15 Thread Justin Clift
On 15/09/2014, at 8:19 PM, Kaushal M wrote:

> For the present we (GlusterD maintainers, KP and me, and other
> GlusterD contributors) would like to start off GlusterD-2.0 by using
> Consul for membership and config storage. The initial implementation
> would probably only just have the minimum cluster management
> functions, and would mainly be a POC or prototype kind of
> implementation. We'll try to keep it clean and reusable for later. We
> have been planning to use Go as the language of development, as we
> believe it is easy for C developers to pick up and provides features
> that make it easier to write distributed systems. There has been
> another mail thread on the topic of language choices, which should
> probably answer any questions on this decision.
> 
> What does the list think of my thoughts above?

To be clear, this is just an experimental proof-of-concept, yeah?

If so, how much time is expected to be needed for it, what are
the success/failure criteria (to judge it at the end), and when
is this PoC expected to be completed?

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



[Gluster-users] Do the libgfapi interfaces support asynchronous calls?

2014-09-15 Thread zhihua jiang
Do the libgfapi interfaces support asynchronous calls?

Re: [Gluster-users] Use geo-replication without passwordless ssh login

2014-09-15 Thread M S Vishwanath Bhat

On 15/09/14 20:48, Bo Yu wrote:

Hi,

I wonder if it is possible to configure Gluster geo-replication in a 
manner that it does not require passwordless ssh login, since in our 
system passwordless ssh is not allowed.


Or, is it possible to configure passwordless ssh for Gluster only, not
for every user or program?
TBH, the passwordless ssh configured by the "push-pem" option is very
specific to gluster (gsyncd, to be more specific). But it is only used
after the session is created. During the create step, gluster needs the
passwordless ssh to get the details of the slave cluster (its status,
available size, whether files are present, etc.).


So you need to have passwordless ssh from one node in the master to one in
the slave *only* during the "geo-rep create push-pem" step (sketched below).
After the session is created, you can actually remove the passwordless ssh
and, ideally, geo-rep should still work.
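For reference, a minimal sketch of that sequence (the volume and host names
here are hypothetical):

   gluster volume geo-replication mastervol slavehost::slavevol create push-pem
   gluster volume geo-replication mastervol slavehost::slavevol start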


HTH

 Best Regards,
Vishwanath



Thanks.

Bo





Re: [Gluster-users] [Gluster-devel] Who's who ?

2014-09-15 Thread Niels de Vos
On Mon, Sep 15, 2014 at 11:20:45AM -0400, Jeff Darcy wrote:
> > > For "new columns which may be useful", these ones spring to mind:
> 
> > * Twitter username - many people have them these days
> > * A free form text description - eg "I'm Justin, I'm into databases, 
> > storage,
> > and developing embedded human augmentation systems." ;)
> > * Some kind of thumbnail photo - probably as the first column on the left
> 
> I think the current table is already quite wide, and adding more columns
> is going to be very problematic design-wise.  Instead, I suggest that we
> make each person's name a link to their wiki user page, where they can
> put whatever contact or other info makes sense.  I just did that for
> myself, and it barely takes more time than updating the Who's Who page
> itself (plus it cuts down on the update notifications for that page).

+1

Everyone with a wiki account already has a user page anyway. A link to
your own page would look like this, assuming a username of
"ExampleYou":

   [[User:ExampleYou|Your full Name]]

You can get to your own page when you click on your username in the left
menu in the wiki (after you have logged in).

Cheers,
Niels


[Gluster-users] Cancelled: Gluster Community Bug triage meeting

2014-09-15 Thread Niels De Vos
BEGIN:VCALENDAR
PRODID:Zimbra-Calendar-Provider
VERSION:2.0
METHOD:CANCEL
BEGIN:VTIMEZONE
TZID:Europe/Berlin
BEGIN:STANDARD
DTSTART:16010101T03
TZOFFSETTO:+0100
TZOFFSETFROM:+0200
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=10;BYDAY=-1SU
TZNAME:CET
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T02
TZOFFSETTO:+0200
TZOFFSETFROM:+0100
RRULE:FREQ=YEARLY;WKST=MO;INTERVAL=1;BYMONTH=3;BYDAY=-1SU
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:2b343380-48a4-4972-a9e6-6671c536bb97
SUMMARY:Cancelled: Gluster Community Bug triage meeting
COMMENT:A single instance of a recurring meeting has been cancelled.
LOCATION:#gluster-meeting on Freenode IRC
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE:mailto:gluster
 -de...@gluster.org
ATTENDEE;CN=gluster-users@gluster.org;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-AC
 TION;RSVP=TRUE:mailto:gluster-users@gluster.org
ATTENDEE;PARTSTAT=DECLINED:mailto:jan.dre...@bertelsmann.de
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=TENTATIVE:mailto:schsu...@msn.com
ATTENDEE;CN=Peter Auyeung;PARTSTAT=TENTATIVE:mailto:pauye...@shopzilla.com
ATTENDEE;CN="Heggland, Christian";PARTSTAT=DECLINED:mailto:Christian.Hegglan
 d...@nov.com
ATTENDEE;CN=Ragusa Mario;PARTSTAT=DECLINED:mailto:m.rag...@eurodata.de
ATTENDEE;CN=Peter Portante;PARTSTAT=DECLINED:mailto:pport...@redhat.com
ATTENDEE;CN=Kaleb Keithley;PARTSTAT=ACCEPTED:mailto:kkeit...@redhat.com
ATTENDEE;CN=Jeff McClain;PARTSTAT=DECLINED:mailto:jmccl...@skopos.us
ATTENDEE;CN=Owen Hau;PARTSTAT=DECLINED:mailto:o...@heha.org
ATTENDEE;CN=Gaurav Garg;PARTSTAT=ACCEPTED:mailto:gg...@redhat.com
ATTENDEE;CN="Zaitz, John";PARTSTAT=DECLINED:mailto:jza...@netsuite.com
ATTENDEE;CN=Lalatendu Mohanty;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED:mailto:
 lmoha...@redhat.com
ATTENDEE;CN=Dan Cyr;PARTSTAT=DECLINED:mailto:d...@truenorthmanagement.com
ATTENDEE;CN=Krishnan Parthasarathi;PARTSTAT=ACCEPTED:mailto:kparthas@redhat.
 com
ORGANIZER;CN=Niels de Vos:mailto:nde...@redhat.com
DTSTART;TZID="Europe/Berlin":20140916T14
DTEND;TZID="Europe/Berlin":20140916T15
STATUS:CANCELLED
CLASS:PUBLIC
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
TRANSP:OPAQUE
RECURRENCE-ID;TZID="Europe/Berlin":20140916T14
LAST-MODIFIED:20140915T192527Z
DTSTAMP:20140915T192527Z
SEQUENCE:1
DESCRIPTION:A single instance of the following meeting has been cancelled:\n
 \nSubject: Gluster Community Bug triage meeting \nOrganiser: "Niels De Vos" 
  \n\nLocation: #gluster-meeting on Freenode IRC \nTime: T
 uesday\, 16 September\, 2014\, 2:00:00 PM - 3:00:00 PM GMT +01:00 Amsterdam\
 , Berlin\, Bern\, Rome\, Stockholm\, Vienna\n \nRequired: Jan.Dreyer@bertels
 mann.de\; schsu...@msn.com\; pauye...@shopzilla.com\; Christian.Heggland@nov
 .com\; m.rag...@eurodata.de\; pport...@redhat.com\; kkeit...@redhat.com\; jm
 ccl...@skopos.us\; o...@heha.org\; gg...@redhat.com\; jza...@netsuite.com ..
 . \nOptional: gluster-de...@gluster.org\; gluster-users@gluster.org \n\n*~*~
 *~*~*~*~*~*~*~*\n\nTomorrows (Tuesday) Gluster Bug Triage meeting has been c
 ancelled. Many of the\nregular active participants are unavailable. The next
  bug triage meeting will\ntake place next week.\n\nIf there are any issues o
 r questions about the bug triage\, please send them by\nemail and I (or any 
 other community member) will try to take care of it.\n\nThanks\,\nNiels
END:VEVENT
END:VCALENDAR

Re: [Gluster-users] [Gluster-devel] Proposal for GlusterD-2.0

2014-09-15 Thread Kaushal M
I was away on a small vacation last week, so I haven't been able to
reply to this thread till now.

There has been quite some discussion while I was away. This kind of
discussion was exactly what we wanted.
I've read through the thread, and I'd like to summarize what I feel is
the general feeling shared by all.

- GlusterD has shortcomings in its store and membership functionality
and is in need of improvement.
This is the immediate goal that we were targeting. We started this
discussion because we did not want to start the development without
hearing what the community has to say.

- We'd like to move to using external utilities to manage some of the
things GlusterD currently does.
This is because we don't want to burden ourselves with creating new
implementations for doing tasks other tools do well already. This is
the reason for our exploration of consul. Consul does distributed
simple storage and membership well, and could replace the existing
store and peer membership mechanism as implemented in GlusterD.
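To make that concrete, a hedged sketch of what config storage and membership
queries look like through Consul's HTTP API (the key layout under glusterd/
is purely hypothetical; assumes a local agent on the default port 8500):

   # store and fetch one piece of volume configuration
   curl -X PUT -d 'Started' http://127.0.0.1:8500/v1/kv/glusterd/volumes/gv0/status
   curl http://127.0.0.1:8500/v1/kv/glusterd/volumes/gv0/status

   # list cluster members as seen by the local agent
   curl http://127.0.0.1:8500/v1/agent/members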

- There is also a need to improve and build upon the current cluster
management facilities provided by GlusterD.
Users want to be able to do more using GlusterD than what is currently
possible. This, IMO, is mainly with regard to monitoring and
automation. Users want more information from GlusterD regarding the
cluster state, and want to run automated tasks based on the obtained
information. We'd like to move these kinds of improvements away from
core GlusterD as well, and this is where automation tools like
Puppet, Saltstack, etc. come in.

- Users want GlusterFS to remain easy to deploy
With the talk of all the external utilities, some users are concerned
GlusterFS is going to become hard to deploy, with even small
deployments needing lots of dependencies to be pulled in. This is
something I am concerned about as well, as ease of deployment is one of
the notable characteristics of GlusterFS and we shouldn't lose it.

With these details in mind, I have an initial idea on how GlusterD-2.0
and GlusterFS-Quattro are going to shape up.

GlusterD will continue to exist and will do most of the functions it
performs today. Wherever possible it would delegate its functions
to external utilities. We need to make sure that these external
utilities don't need any further configuration from the user to be
usable (how we are going to accomplish this is a problem on its own).
Retaining GlusterD in this way should satisfy the easy to deploy
character.
The higher level automation and monitoring capabilities will be
handled by tools like Puppet, Saltstack, Chef etc.  We would probably
need to pick one among these for initial community support. I have no
preference among these, but since we already have a puppet-gluster
module, I think Puppet would be the most likely choice. GlusterD would
be extended to provide more internal, cluster information to these
tools, for them to make their intelligent decisions.

The choice of programming language and external utilities we make,
will decide how hard achieving the above would be.

For the present we (GlusterD maintainers, KP and me, and other
GlusterD contributors) would like to start off GlusterD-2.0 by using
Consul for membership and config storage. The initial implementation
would probably only just have the minimum cluster management
functions, and would mainly be a POC or prototype kind of
implementation. We'll try to keep it clean and reusable for later. We
have been planning to use Go as the language of development, as we
believe it is easy for C developers to pick up and provides features
that make it easier to write distributed systems. There has been
another mail thread on the topic of language choices, which should
probably answer any questions on this decision.

What does the list think of my thoughts above?

~kaushal


[Gluster-users] Can not change brick with replace-brick.

2014-09-15 Thread Mike Sirius
Hi guys,

I need to change a server for one brick on two volumes; however, I'm getting
an error and am not sure why.

ubuntu@ip-10-250-154-164:/var/lib$ sudo gluster volume info

Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ec2-46-137-0-188.eu-west-1.compute.amazonaws.com:/export/brick1
Brick2: ec2-46-137-52-99.eu-west-1.compute.amazonaws.com:/export/brick1

Volume Name: gv1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ec2-46-137-0-188.eu-west-1.compute.amazonaws.com:/export/wp-brick1
Brick2: ec2-46-137-52-99.eu-west-1.compute.amazonaws.com:/export/wp-brick1

~~~

What happened is: the server hosting brick1 got taken down, upgraded and
launched again.
Of course, after the new launch it got a new DNS name; everything else stayed
the same.

Other servers which are using gluster (have a gluster mount) are fine as well.
I have updated /etc/fstab and mounted gluster via the new DNS name.

However, if I go to one of the gluster servers and run 'gluster volume info',
I see that brick1 still shows the old DNS name (see above).

So, I tried to replace the brick:

ubuntu@ip-10-250-154-164:/var/lib$ sudo gluster volume replace-brick gv0 
ec2-46-137-0-188.eu-west-1.compute.amazonaws.com:/export/brick1 
ec2-54-228-85-244.eu-west-1.compute.amazonaws.com:/export/brick1 start
brick: ec2-46-137-0-188.eu-west-1.compute.amazonaws.com:/export/brick1 does not 
exist in volume: gv0

~~

I clearly did not make any typo… I'm totally unsure why it is saying 'brick
does not exist in that volume' when volume info shows that brick.

Please help ☺







[Gluster-users] confused with heal output

2014-09-15 Thread Khoi Mai
Can someone please help me understand how to interpret, and possibly
resolve, the following issue?

I have a distributed-replicated volume across 8 storage nodes.


[root@omht1140~]# gluster volume info dyn_cfu

Volume Name: dyn_cfu
Type: Distributed-Replicate
Volume ID: dc402619-13cf-54fe-9ded-59df26bc44b6
Status: Started
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: omht1140:/export/dynamic/coldfusion
Brick2: omdt1c5d:/export/dynamic/coldfusion
Brick3: omht11ad:/export/dynamic/coldfusion
Brick4: omdt1781:/export/dynamic/coldfusion
Brick5: omht1c56:/export/dynamic/coldfusion
Brick6: omdt1c58:/export/dynamic/coldfusion
Brick7: omht1c57:/export/dynamic/coldfusion
Brick8: omdt1c59:/export/dynamic/coldfusion
Options Reconfigured:
features.quota: on
features.limit-usage: /:50GB
network.ping-timeout: 21
server.allow-insecure: on

[root@omht1140 ~]# gluster volume heal dyn_cfu info
Gathering Heal info on volume dyn_cfu has been successful

Brick omht1140:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omdt1c5d:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omht11ad:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omdt1781:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omhq1c56:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omdt1c58:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omht1c57:/export/dynamic/coldfusion
Number of entries: 1
/

Brick omdt1c59:/export/dynamic/coldfusion
Number of entries: 1
/


Meanwhile, heal info split-brain shows 0 files needing attention. My
client/storage logs do not mention anything about files being missing,
misplaced, or corrupted.

Thanks,
Khoi



Re: [Gluster-users] Use geo-replication without passwordless ssh login

2014-09-15 Thread Marcus Bointon
On 15 Sep 2014, at 17:18, Bo Yu  wrote:

> I wonder if it is possible to configure Gluster geo-replication in a manner 
> that it does not require passwordless ssh login, since in our system 
> passwordless ssh is not allowed.


Why would you disable that? Using passwords on top of public keys doesn't add a 
great deal of security. Using them without public keys is more of an issue.

Marcus
-- 
Marcus Bointon
Technical Director, Synchromedia Limited

Creators of http://www.smartmessages.net/
UK 1CRM solutions http://www.syniah.com/
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/




Re: [Gluster-users] [Gluster-devel] Who's who ?

2014-09-15 Thread Jeff Darcy
> > For "new columns which may be useful", these ones spring to mind:

> * Twitter username - many people have them these days
> * A free form text description - eg "I'm Justin, I'm into databases, storage,
> and developing embedded human augmentation systems." ;)
> * Some kind of thumbnail photo - probably as the first column on the left

I think the current table is already quite wide, and adding more columns
is going to be very problematic design-wise.  Instead, I suggest that we
make each person's name a link to their wiki user page, where they can
put whatever contact or other info makes sense.  I just did that for
myself, and it barely takes more time than updating the Who's Who page
itself (plus it cuts down on the update notifications for that page).


[Gluster-users] Use geo-replication without passwordless ssh login

2014-09-15 Thread Bo Yu
Hi,

I wonder if it is possible to configure Gluster geo-replication in a manner
that it does not require passwordless ssh login, since in our system
passwordless ssh is not allowed.

Or, is it possible to configure passwordless ssh for Gluster only, not for
every user or program?

Thanks.

Bo

Re: [Gluster-users] [Gluster-devel] Who's who ?

2014-09-15 Thread Humble Devassy Chirammal
>Good thinking. :)

Thanks :)

> For "new columns which may be useful", these ones spring to mind:

 * Twitter username - many people have them these days
 * A free form text description - eg "I'm Justin, I'm into databases,
storage, and developing embedded human augmentation systems." ;)
 * Some kind of thumbnail photo - probably as the first column on the left

Maybe also a LinkedIn column (?), though I hesitate to encourage LinkedIn
these days... ;)

Please go ahead. :) I second whatever helps in one way or another.
I see people have already added a blog/URL column on the same page.

--Humble

On Fri, Sep 12, 2014 at 12:48 PM, Justin Clift  wrote:

> On 12/09/2014, at 7:13 AM, Humble Devassy Chirammal wrote:
> 
> > Some of you may have noticed our new "Who is Who" (
> http://www.gluster.org/community/documentation/index.php/Who_is_Who)
> page  in gluster.org. The idea here is to gather information about our
> gluster community members, so that that it would be easy for all to reach
> out to one another.  We are assembling the information we need for a good
> 'Who's Who' page. Please add details about yourself. You may also add any
> new columns which you think may be useful (such as a column for social
> networking profile, blog).
>
> Good thinking. :)
>
> For "new columns which may be useful", these ones spring to mind:
>
>  * Twitter username - many people have them these days
>  * A free form text description - eg "I'm Justin, I'm into databases,
> storage, and developing embedded human augmentation systems." ;)
>  * Some kind of thumbnail photo - probably as the first column on the left
>
> Maybe also a LinkedIn column (?), though I hesitate to encourage LinkedIn
> these days... ;)
>
> + Justin
>
> --
> GlusterFS - http://www.gluster.org
>
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
>
> My personal twitter: twitter.com/realjustinclift
>
>

Re: [Gluster-users] error when using mount point as a brick directory.

2014-09-15 Thread Paul Guo
This is a good example. I do not know whether there are other cases, but for
this case, I'd prefer another design: add a parameter that tells glusterfs to
double-check whether the bricks are mount points when creating or starting a
volume.


-- Original --
From:  "Claudio Kuenzler";;
Date:  Sep 12, 2014
To:  "Juan José Pavlik Salles"; 
Cc:  "gluster-users"; "Paul 
Guo"; 
Subject:  Re: [Gluster-users] error when using mount point as a brick directory.



Thanks for the hint about "force", I didn't try that.
 On the other hand, I am thankful for the error/warning which prevented me
from creating the volume, because I just thought of a scenario which could
result in problems.
 Imagine you have an LV you want to use for the gluster volume. Now you mount
this LV at /mnt/gluster1. You do this on the other host(s) too, and you create
the gluster volume with /mnt/gluster1 as the brick.
 By mistake you forget to add the mount entry to fstab, so the next time you
reboot server1, /mnt/gluster1 will be there (because it's the mountpoint) but
the data is gone (because the LV is not mounted).
 I don't know how gluster would handle that, but it's actually easy to try it
out :)
 So using a subfolder within the mountpoint makes sense, because that
subfolder will not exist when the mount of the LV didn't happen. A sketch
follows below.
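A minimal sketch of that practice (the device, mountpoint, volume and host
names below are assumptions):

   mount /dev/vg0/lv_gluster1 /mnt/gluster1   # mount the LV first
   mkdir -p /mnt/gluster1/brick               # the brick lives below the mountpoint
   gluster volume create gv0 replica 2 serverA:/mnt/gluster1/brick serverB:/mnt/gluster1/brick

If the LV is ever not mounted, /mnt/gluster1/brick does not exist and the
brick should simply fail to start, instead of silently writing into the empty
mountpoint.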
 On Sep 12, 2014 4:16 PM, "Juan José Pavlik Salles"  wrote:

Maybe it's not a warning, but you can use "force" to avoid that behaviour, so
it's not that you can't create the volume. Anyway, it's good practice to
create a subdirectory, as far as I've read.
2014-09-12 11:02 GMT-03:00 Claudio Kuenzler :

Hi




On Fri, Sep 12, 2014 at 3:00 PM, Juan José Pavlik Salles  
wrote:

Hi Paul, that's more a warning than an error. This advice helps you avoid
situations like this:

Not so sure it's "only a warning". The volume cannot be created as long as
the fs mountpoint is used directly as the brick in "gluster volume create".


I wrote a post about that last month: 
http://www.claudiokuenzler.com/blog/499/glusterfs-bricks-should-be-subfolder-of-mountpoint


But as long as it is documented, I don't think it's a real issue to follow
that rule.



 -- 

Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com

Re: [Gluster-users] Geo-replication upgrade to 3.5.2 ( SOLVED )

2014-09-15 Thread HL

Hello,

I've found out what the problem was ...

I am running gluster on Debian wheezy

and the
/var/lib/glusterd/geo-replication/gsyncd_template.conf

was full of wrong paths like this one:
[peersrx . %5Essh%3A]
remote_gsyncd = /nonexistent/gsyncd
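For reference, a corrected entry would look like this (the gsyncd path below
is an assumption for a Debian system; verify the actual location with
something like "dpkg -L glusterfs-common | grep gsyncd"):

[peersrx . %5Essh%3A]
remote_gsyncd = /usr/lib/x86_64-linux-gnu/glusterfs/gsyncd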

After changing them all, everything ran OK!



On 15/09/2014 08:03 πμ, Aravinda wrote:

Please share the log snippets(on every master brick node) from
/var/log/glusterfs/geo-replication//*.log if you see any
errors.

--
regards
Aravinda


On 09/13/2014 07:02 PM, HL wrote:

Hello

I've upgraded all my nodes from 3.3.x to 3.5.2 glusterfs

since the geo-replication was confed under 3.3.x I'deleted all configs
and files on the remote system ...

Created the a new volume as the howtos say ...

how ever after stoping and starting georeplication a ZILION times on
the remote fresh volume (xfs) on geo NODE

I get status "faulty" and no replication at all!!

Could U please help ??

What is going on ??


gluster> volume geo-replication GLVol 192.168.92.2::GLVol_GEO stop
Stopping geo-replication session between GLVol &
192.168.92.2::GLVol_GEO has been successful
gluster> volume geo-replication GLVol 192.168.92.2::GLVol_GEO status

MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE                      STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------
fm             GLVol         /export/brick01    192.168.92.2::GLVol_GEO    Stopped    N/A                  N/A
fe             GLVol         /export/brick02    192.168.92.2::GLVol_GEO    Stopped    N/A                  N/A
gluster> volume geo-replication GLVol 192.168.92.2::GLVol_GEO start
Starting geo-replication session between GLVol &
192.168.92.2::GLVol_GEO has been successful
gluster> volume geo-replication GLVol 192.168.92.2::GLVol_GEO status

MASTER NODE    MASTER VOL    MASTER BRICK       SLAVE                      STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------
fm             GLVol         /export/brick01    192.168.92.2::GLVol_GEO    faulty     N/A                  N/A
fe             GLVol         /export/brick02    192.168.92.2::GLVol_GEO    faulty     N/A                  N/A
gluster>



