Re: [Gluster-devel] [Gluster-users] Automated regression tests now part of development work flow

2013-02-24 Thread James
On Sat, 2012-10-20 at 14:23 -0700, Anand Avati wrote:
> Hello all,
> 
> The recent ongoing changes to the development work flow are now complete.
> The highlight of the rework is the automated regression tests.
This is awesome!

> 
> So far we have only had fixed smoke tests that were run against every patch
> pre-commit (dbench, POSIX compliance). Here after, every change submitted
> to Gerrit must be accompanied with test case scripts which (attempts to)
> prove the correctness of the code change - as part of the new regression
> test in the work flow.
> 
> These test cases will be much more concentrated and focused on the changes
> brought in by the patch itself. The test cases, once committed will be part
> of every future pre-commit test. This will make sure no commit in the
> future will accidentally break or change the current commit either directly
> or with side effects. If a future change must change the behavior of the
> current patch, then the future patch must include corresponding changes to
> the test cases as well. This policy will keep code and test cases in sync.
I don't suppose someone knows offhand of a way to automatically "watch"
for proposed changes to certain test cases? This could be a useful way
for a programmer to get a chance to speak up about an upcoming
incompatible change. Anyway, it's low priority; if someone has already
hacked this together with git, or if you'd be interested in this sort of
thing, let me know.

> 
> The wiring and framework for supporting this are already available in
> glusterfs.git and Jenkins. Currently the infrastructure supports test cases
> limited to single node. Most of the test cases can actually be implemented
> in single node by having multiple instances of bricks and client mounts.
> Support of multi node test cases will be added soon.
> 
> The automated smoke test's voting power in Gerrit is now downgraded to +0
> Verified for PASS and -1 Verified for FAILURE. Regression test's voting
> powers are now +1 Verified and -1 Verified. This means just passing the
> smoke test can no longer qualify a patch for verification. Patches for
> which a regression test is required but cannot currently be automated as
> a test script must be tested and voted on manually for the Verification
> vote.
> 
> As the above described changes are already in effect, most of the currently
> submitted patches under review will have to be resubmitted with test cases.
> This is necessary to make the changes start being effective immediately. If
> you have currently
> 
> Note that this opens up a new avenue for contributors in the community. You
> can now contribute test cases. These contributions need not include source
> code changes but just extend our regression test set. These test cases can
> either be for coverage of old code and functionality, or for coverage of
> your use cases which could be applicable to others as well.
I will put it onto my TODO list to write some test cases that "watch"
the functionality used by my puppet-gluster module.

> 
> The workflow document is also updated at
> http://www.gluster.org/community/documentation/index.php/Development_Work_Flow.
> Specifically:
> Sec 1.4
> http://www.gluster.org/community/documentation/index.php/Development_Work_Flow#Commit_policy
> Sec 1.9
> http://www.gluster.org/community/documentation/index.php/Development_Work_Flow#Regression_tests_and_test_cases
> 
> More details on how to write test cases with examples and how the framework
> works can be found at -
> 
> 1. "tests: pre-commit regression tests"
> https://github.com/gluster/glusterfs/commit/bb41c8ab88f1a3d8c54b635674d0a72133623496
> 2. tests/README in glusterfs.git
> 3. Example new change - http://review.gluster.org/4114
> 
> Please offer feedback on how the framework can be improved (inclusion of
> more tools?) to capture test cases better.
> 
> Happy Hacking!
> Avati
Thanks!!
James

> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] What will 3.5 look like?

2013-03-06 Thread James
On Wed, 2013-03-06 at 02:05 -0500, John Mark Walker wrote:
> I realize that we haven't released 3.4 yet, but we're quickly approaching a 
> new release cycle that will ultimately produce GlusterFS 3.5.
Sweet.

> 
> The Gluster Dev Summit starts tomorrow, and one of the things we'll discuss 
> is the new roadmap. What we should all do is take a look at what is on the 
> table, discuss which should take priority, and vote on your favorites. 
For the 3.4 cycle you had asked me to produce a "feature page" for
puppet. I guess it got forgotten about, but here's the link if you're
still interested:
http://www.gluster.org/community/documentation/index.php/Planning34/PuppetModuleWIP
If you'd like to include this in 3.5, please feel free to move the page
to where it should belong.

It currently does most of what I want; however, I think it could use
some changes and optimizations to make it perfect for everyone else. The
easiest way for me to get it there is probably some one-on-one dev time
with a Gluster pro to sit and discuss some issues.

It could then get distributed along with the puppet packages to help out
admins. One advantage is that it provides a sort of de facto
documentation for those who want even more. I think this could help
address some recent (whether valid or not) complaints about a lack of
documentation.

I hope this helps!
Cheers,
James - Gluster volunteer

> See the very early list of proposed features here:
> http://www.gluster.org/community/documentation/index.php/Planning35
> 
> If you want to submit new features, please use this template:
> http://www.gluster.org/community/documentation/index.php/Features/Feature_Template
> 
> - save as a new wiki page and then link to it from the planning page, above
> 
> 
> To see what we did for 3.4, see these pages:
> http://www.gluster.org/community/documentation/index.php/Features
> http://www.gluster.org/community/documentation/index.php/Features34
> http://www.gluster.org/community/documentation/index.php/Planning34
> 
> I've added space on the Planning35 page for new feature submissions and 
> discussions on proposed features. 
> 
> If one of the features you'd like to propose is not for the software itself, 
> but rather for infrastructure pieces, ie. website, documentation, dev 
> process, etc., those are also welcome. 
> 
> We will look to stream and record the dev summit sessions, so look for links 
> here and on the blog soon!
> 
> -John Mark
> Gluster Community Lead
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Phasing out replace-brick for data migration in favor of remove-brick.

2013-09-27 Thread James
On Fri, 2013-09-27 at 00:35 -0700, Anand Avati wrote:
> Hello all,
Hey,

Interesting timing for this post...
I've actually started working on automatic brick addition/removal. (I'm
planning to add this to puppet-gluster of course.) I was hoping you
could help out with the algorithm. I think it's a bit different if
there's no replace-brick command as you are proposing.

Here's the problem:
Given a logically optimal initial volume:

volA: rep=2; h1:/b1 h2:/b1 h3:/b1 h4:/b1 h1:/b2 h2:/b2 h3:/b2 h4:/b2

suppose I know that I want to add/remove bricks such that my new volume
(if I had created it new) looks like:

volB: rep=2; h1:/b1 h3:/b1 h4:/b1 h5:/b1 h6:/b1 h1:/b2 h3:/b2 h4:/b2
h5:/b2 h6:/b2

What is the optimal algorithm for determining the correct sequence of
transforms needed to accomplish this task? Obviously there are some
simpler corner cases, but I'd like to solve the general case.

The transforms are obviously things like running the add-brick {...} and
remove-brick {...} commands.

Obviously we have to take into account that it's better to add bricks
and rebalance before we remove bricks, rather than risk the file system
while a replica is missing. The algorithm should work for any replica
count N. We also want to make sure the new layout still replicates the
data across different servers. In many cases, this will require creating
a circular "chain" of bricks, as illustrated at the bottom of this image:
http://joejulian.name/media/uploads/images/replica_expansion.png
for example. I'd like to optimize for safety first, and then time, I
imagine.
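
To make the question concrete, here is a minimal Python sketch of the easy
half of the problem: computing which bricks differ between the two layouts.
This is an illustration only (the brick names and replica count come from
the example above); the safe sequencing of the resulting commands is the
open question being asked.

```python
# Hypothetical sketch of the volA -> volB example above. The set
# difference is only the easy half of the problem: the safe *sequencing*
# of the resulting add-brick/remove-brick commands is the hard part.

REPLICA = 2

vol_a = ['h1:/b1', 'h2:/b1', 'h3:/b1', 'h4:/b1',
         'h1:/b2', 'h2:/b2', 'h3:/b2', 'h4:/b2']
vol_b = ['h1:/b1', 'h3:/b1', 'h4:/b1', 'h5:/b1', 'h6:/b1',
         'h1:/b2', 'h3:/b2', 'h4:/b2', 'h5:/b2', 'h6:/b2']

to_add = [b for b in vol_b if b not in vol_a]     # only in the target
to_remove = [b for b in vol_a if b not in vol_b]  # only in the source

# safety first: grow (and rebalance) before shrinking
print('add-brick:', ' '.join(to_add))
print('remove-brick:', ' '.join(to_remove))

# each step must keep the brick count a multiple of the replica count
assert (len(vol_a) + len(to_add)) % REPLICA == 0
assert len(vol_b) % REPLICA == 0
```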

Many thanks in advance.

James

Some comments below, although I'm a bit tired so I hope I said it all
right.

> DHT's remove-brick + rebalance has been enhanced in the last couple of
> releases to be quite sophisticated. It can handle graceful decommissioning
> of bricks, including open file descriptors and hard links.
Sweet

> 
> This in a way is a feature overlap with replace-brick's data migration
> functionality. Replace-brick's data migration is currently also used for
> planned decommissioning of a brick.
> 
> Reasons to remove replace-brick (or why remove-brick is better):
> 
> - There are two methods of moving data. It is confusing for the users and
> hard for developers to maintain.
> 
> - If server being replaced is a member of a replica set, neither
> remove-brick nor replace-brick data migration is necessary, because
> self-healing itself will recreate the data (replace-brick actually uses
> self-heal internally)
> 
> - In a non-replicated config if a server is getting replaced by a new one,
> add-brick  + remove-brick  "start" achieves the same goal as
> replace-brick   "start".
> 
> - In a non-replicated config,  is NOT glitch free
> (applications witness ENOTCONN if they are accessing data) whereas
> add-brick  + remove-brick  is completely transparent.
> 
> - Replace brick strictly requires a server with enough free space to hold
> the data of the old brick, whereas remove-brick will evenly spread out the
> data of the brick being removed amongst the remaining servers.

Can you talk more about the replica = N case (where N is 2 or 3)?
With remove-brick + add-brick you will need to add/remove N (replica
count) bricks at a time, right? With replace-brick, you could just swap
out one, right? Isn't that a missing feature if you remove
replace-brick?

> 
> - Replace-brick code is complex and messy (the real reason :p).
> 
> - No clear reason why replace-brick's data migration is better in any way
> to remove-brick's data migration.
> 
> I plan to send out patches to remove all traces of replace-brick data
> migration code by 3.5 branch time.
> 
> NOTE that the replace-brick command itself will still exist, and you can
> replace one server with another in case a server dies. It is only the data
> migration functionality that is being phased out.
> 
> Please do ask any questions / raise concerns at this stage :)
I heard with 3.4 you can somehow change the replica count when adding
new bricks... What's the full story here please?

Thanks!
James

> 
> Avati
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Phasing out replace-brick for data migration in favor of remove-brick.

2013-10-10 Thread James
Nomenclature:
It is:
/path/bxxx#vyyy

where b is a constant char 'b'
where xxx is a zero padded int for brick #
where #v is a constant '#v' followed by yyy
where yyy is a zero padded int for version #

each time new bricks are added, you increment the max visible version #
and use that. if no version number is specified, then we assume version
1. The length of padding must be decided on in advance and can't be
changed.

valid brick names include:

/data/b04

/data/b22#v0003

and so on...
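
A hypothetical parser for this naming convention. The function name
parse_brick and the regex are my own invention, not part of any Gluster
tooling; the default of version 1 for unversioned bricks follows the
prose above.

```python
import re

# /path/bxxx#vyyy: 'xxx' is the zero-padded brick number, and the
# optional '#vyyy' suffix carries the zero-padded version number
BRICK_RE = re.compile(r'^(?P<path>.*/)b(?P<num>\d+)(?:#v(?P<ver>\d+))?$')

def parse_brick(name):
    """Split a conventionally named brick into its components."""
    m = BRICK_RE.match(name)
    if m is None:
        raise ValueError('not a conventionally named brick: %s' % name)
    return {
        'path': m.group('path'),
        'brick': int(m.group('num')),
        # unversioned bricks are assumed to be version 1
        'version': int(m.group('ver')) if m.group('ver') else 1,
    }

print(parse_brick('/data/b04'))        # {'path': '/data/', 'brick': 4, 'version': 1}
print(parse_brick('/data/b22#v0003'))  # version 3
```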

Hostnames are simple: hostname where  is a padded int, and you
distribute your hosts sequentially across racks or switches or whatever
your commonality for SPOF is.

Technically, for the transforms, I'm not even sure the version # is
necessary.

The big problem with my algorithms is that they don't work for chained
configurations. I'd love to be able to make that so!!!

Why is all this relevant ? Because if I can solve these problems,
Gluster users can have fully decentralized elastic volumes that
grow/shrink on demand, without ever having to manually run add/remove
brick commands. I'll be able to do all of this with puppet-gluster for
example. Users will just run puppet, without changing any
configuration, and hosts will automatically come up and grow to the
size the hardware supports. Most of the code is already published. More
to come.

Hope that was all understandable. It's probably hard to talk about this
by email, but I'm trying. :)

Cheers,
James

> 
> Avati



brick_logic_ordering_wip.rb
Description: application/ruby


brick_logic_ordering_v2_wip.rb
Description: application/ruby


brick_logic_transform_v1_wip.rb
Description: application/ruby


signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-11-26 Thread James
Sorry for being a bit confused. I'd like to participate, but I'm a bit
fuzzy about when exactly. Can someone confirm the time and timezone?

Cheers
James

On Fri, Nov 22, 2013 at 9:58 AM, Vijay Bellur  wrote:
> On 11/22/2013 07:44 PM, John Mark Walker wrote:
>>
>> Thank you for setting this up, Vijay!
>>
>> In the interests of getting west coast USA participation, I'm wondering
>> if this is too early (6am Pacific Time). Or we can trade off - alternate
>> between this time and 12 hours later on successive weeks.
>
>
> 6 AM Pacific Time is considered one of the better time slots for a global
> meeting. When folks in US spring forward, it would be 7 AM which might work
> better. Also, UTC + 2 translates to 11 PM in the far east. Irrespective of
> the slot we select, we will miss some part of the world :-/.
>
> We can even set up an online poll to determine what time works best for
> majority of us. I will send out recurring invites after we finalize the
> schedule.
>
>
>>
>> I'm looking to hear from others who would be interested in participating.
>
>
> +1
>
> Thanks,
> Vijay
>
>>
>>
>> 
>>
>>
>> The following is a new meeting request:
>>
>> Subject: Gluster Community Weekly Meeting
>> Organizer: "Vijay Bellur" 
>>
>> Location: #gluster-meeting on irc.freenode.net
>> Time: Wednesday, November 27, 2013, 7:30:00 PM - 8:30:00 PM GMT
>> +05:30 Chennai, Kolkata, Mumbai, New Delhi
>>
>> Invitees: gluster-devel@nongnu.org; gluster-us...@gluster.org
>>
>>
>> *~*~*~*~*~*~*~*~*~*
>>
>> Greetings,
>>
>> We have had discussions around weekly IRC meetings for our community
>> in the past but we have not got to do that so far. Here is an
>> attempt to start the first of it at UTC + 2 on Wednesday next. Let
>> us discuss all aspects related to the Gluster community then!
>>
>> Etherpad for the meeting -
>> http://titanpad.com/gluster-community-meetings. Please feel free to
>> add your agenda items there.
>>
>> Cheers,
>> Vijay
>>
>>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Mechanisms for automatic management of Gluster

2013-11-27 Thread James
Hi,

This is along the lines of "tools for sysadmins". I plan on using
these algorithms for puppet-gluster, but will try to maintain them
separately as a standalone tool.

The problem: Given a set of bricks and servers, if they have a logical
naming convention, can an algorithm decide the ideal order? This could
allow for parameters such as replica count, and
chained=true/false/offset#.

The second problem: Given a set of bricks in a volume, if someone adds
X bricks and removes Y bricks, is this valid, and what is the correct
sequence of add/remove-brick commands?

I've written some code with test cases to try and figure this all out.
I've left out a lot of corner cases, but the boilerplate is there to
make it happen. Hopefully it's self-explanatory (gluster.py). Read and
run it.

Once this all works, the puppet-gluster use case is magic. It will be
able to take care of these operations for you (if you want).

For non puppet users, this will give admins the confidence to know
what commands they should _probably_ run in what order. I say probably
because we assume that if there's an error, they'll stop and inspect
first.

I haven't yet tried to implement the chained cases, or anything
involving striping. There are also some corner cases with some of the
current code. Once you add chaining and striping, etc, I realized it
was time to step back and ask for help :)

I hope this all makes sense. Comments, code, test cases are appreciated!

Cheers,

James
@purpleidea (irc/twitter)
https://ttboj.wordpress.com/
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (C) 2012-2013+ James Shubin
# Written by James Shubin 

"""This is brick logic for GlusterFS."""

import random
import unittest

def brick_str_to_dict(s):
	a = s.split(':')
	assert len(a) == 2

	p = a[1]
	p = p if p.endswith('/') else p+'/'

	return {'host': a[0], 'path': p}

def brick_dict_to_str(d):
	return str(d['host'])+':'+str(d['path'])

def get_version_from_path(path):
	"""Return the version encoded in a brick path, or 0 if there is none."""
	p = path.rstrip('/')	# tolerate the trailing slash added in brick_str_to_dict
	rindex = p.rfind('/')
	if rindex == -1:
		raise ValueError('brick path needs a / character: %s' % path)

	base = p[rindex+1:]	# e.g. 'b022#v0003'
	findv = base.rfind('#v')
	if findv == -1:
		return 0	# version 0 (non-existent)

	version = base[findv+2:]
	if len(version) < 1:
		# version string is missing; flag as invalid
		return -1

	return int(version)

def get_versions(group):
	versions = []
	for x in group:
		v = get_version_from_path(x['path'])
		if not v in versions:
			versions.append(v)

	return sorted(versions)		# should be all int's


def filter_version(group, version=0):	# TODO: empty version is 0 or None ?

	result = []
	for x in group:
		v = get_version_from_path(x['path'])
		if v == version:
			result.append(x)

	return result

def natural_brick_order(bricks):
	"""Put bricks in logical ordering."""

	# XXX: technically we should specify the replica to this function, but it might not be required. maybe it's a good idea as a checksum type thing...

	vfinal = []	# versioned final
	versions = get_versions(bricks)	# list of available versions...
	for version in versions:

		subset = filter_version(bricks, version)

		collect = {}

		for x in subset:
			key = x['host']
			val = x['path']

			if key not in collect:
				collect[key] = []	# initialize

			collect[key].append(val)	# save in array
			# TODO: ensure this array is always sorted (we could also do this after
			# or always insert elements in the correct sorted order too :P)
			collect[key] = sorted(collect[key])


		# we could also do this sort here...
		for x in collect.keys():
			collect[x] = sorted(collect[x])


		final = []	# final order...
		while len(collect) > 0:
			for x in sorted(collect.keys()):

				# NOTE: this array should already be sorted!
				p = collect[x].pop(0)	# assume an array of at least one element
				final.append({'host': x, 'path': p})	# save

				if len(collect[x]) == 0:	# maybe the array is empty now
					del collect[x]		# remove that empty list's key


		vfinal = vfinal + final	# concat array on...

	return vfinal


def brick_delta(a, b):

	ai = 0
	bi = 0

	add = []
	rem = []
	while ((len(a) - ai) > 0) and ((len(b) - bi) > 0):
		# same element, keep going
		if a[ai] == b[bi]:
			ai = ai + 1
			bi = bi + 1
			continue

		# the elements must differ

		# if the element in a, doesn't exist in b...
		if not a[ai] in b:
			# then it must be a delete operation of a[ai]
			rem.append(a[ai])	# push onto delete queue...
			ai = ai + 1
			continue

		# if the element in b, doesn't exist in a...
		if not b[bi] in a:
			# then it must be an add operation of b[bi]
			add.append(b[bi])	# push onto add queue...
			bi = bi + 1
			continue

		# both heads exist in the other list but in a different order;
		# this simple walk doesn't handle reordering, so flag it
		raise ValueError('brick lists differ by more than adds/removes')

	# one list is exhausted; drain whatever remains of the other
	rem.extend(a[ai:])
	add.extend(b[bi:])

	return add, rem

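For what it's worth, the round-robin-by-host idea that natural_brick_order
implements can be sketched in a few self-contained lines (hostnames here
are illustrative, and the versioning is left out):

```python
from collections import defaultdict

def round_robin_order(bricks):
    """Order 'host:/path' bricks one per host, so that consecutive
    replica sets land on distinct servers."""
    by_host = defaultdict(list)
    for brick in bricks:
        host, path = brick.split(':')
        by_host[host].append(path)
    for host in by_host:
        by_host[host].sort()

    out = []
    while by_host:
        # one pass takes the next path from every remaining host
        for host in sorted(by_host):
            out.append('%s:%s' % (host, by_host[host].pop(0)))
            if not by_host[host]:
                del by_host[host]  # host has no bricks left
    return out

print(round_robin_order(['h1:/b1', 'h1:/b2', 'h2:/b1', 'h2:/b2']))
# ['h1:/b1', 'h2:/b1', 'h1:/b2', 'h2:/b2']
```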
[Gluster-devel] puppet-glusterfs module

2013-11-27 Thread James
Hi,

There's already a fairly advanced puppet-gluster module.

https://github.com/purpleidea/puppet-gluster

also mirrored at:

https://forge.gluster.org/puppet-gluster

Please let me know if there are features that it doesn't support that
your module does. I've been working on this for quite a while, so you
may want to use it instead.

Cheers,
James



signature.asc
Description: This is a digitally signed message part
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] puppet-glusterfs module

2013-11-27 Thread James
On Wed, Nov 27, 2013 at 10:54 AM, Jiri Hoogeveen
 wrote:
> Hi James,
>
> It looks like it is a Redhat OS family only module.
I've only actually tested it on CentOS. I would like to expand this;
however, I am working for free and am short on resources.

>
> puppet-gluster/manifests/client/base.pp
>
>   package { ['glusterfs', 'glusterfs-fuse']:
> ensure => present,
> }
>
> Do you mind, if I add some debian OS family support?
I think this would be a great idea. I actually have some patches
staged; they aren't public yet because I do not have test machines
other than my two vm's :(

I mentioned this in gluster-meeting today. Hopefully I'll have some
hardware soon. If you have test vm's or hardware I can use, that would
be great too!

Does this make sense?
Cheers,

James


>
> Grtz, Jiri
> On 27 Nov 2013, at 16:46, James  wrote:
>
>> Hi,
>>
>> There's already a fairly advanced puppet-gluster module.
>>
>> https://github.com/purpleidea/puppet-gluster
>>
>> also mirrored at:
>>
>> https://forge.gluster.org/puppet-gluster
>>
>> Please let me know if there are features that it doesn't support that
>> your module does. I've been working on this for quite a while, so you
>> may want to use it instead.
>>
>> Cheers,
>> James
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@nongnu.org
>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] puppet-glusterfs module

2013-11-27 Thread James
On Wed, Nov 27, 2013 at 11:51 AM, Jiri Hoogeveen
 wrote:
> Great to hear that you did not forget debian os family support :)

Of course not! A free software hacker only has so many resources available.

>
> I'm happy to help you with the debian support in code and some test hardware 
> for a few days.
Great!

>
> From 8-12 to i think 11-12 I have some hardware to test with.
> I need to do some tests with it in combination with libvirt/qemu, before we 
> take it in production on 16-12.

I'm not sure what these number ranges mean.

>
> The hw is 6 servers with each 5 sas disks. This should be enough to do some 
> tests :)

For my tests, I don't need powerful machines. It's not about
performance, it's about configuration management. So they can be cheap
hardware, or even virtual machines.

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] puppet-glusterfs module

2013-11-27 Thread James
On Thu, Nov 28, 2013 at 2:46 AM, Jiri Hoogeveen
 wrote:
> I have some hardware for testing from 8 december till 11 december. After this 
> date I need to do some tests and make the systems production ready before 16 
> december.


Okay, so you have 8 Dec -> 11 Dec free? That only gives me a very short
window to access the machines, and not much time to work on things.

Furthermore, if I don't have (mostly) constant access to even simple
machines, then I can't test new features and releases...

If at some point you have some vm's that you can give me access to, or
your company wants to donate some funds, I'll most likely end up with a
very capable puppet-gluster for Debian-style OSes.

Cheers,

James

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Add the maillist

2013-11-28 Thread James
Check here:

https://lists.nongnu.org/mailman/listinfo/gluster-devel

Cheers,
James

On Thu, Nov 28, 2013 at 3:16 AM, jiademing...@gmail.com
 wrote:
> Hi All:
> I want to subscribe the maillist,Thanks
>
> jiademing.dd
> 
> jiademing...@gmail.com
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] puppet-glusterfs module

2013-11-28 Thread James
On Thu, Nov 28, 2013 at 5:23 AM, Gilles Dubreuil  wrote:
> Hi James,
>
> Your module looks quite advanced effectively.
Thanks. You're welcome to write to me in French if you prefer (just
guessing). (Je suis bilingue!)

>
> At the time I needed a customized module, I looked at the available ones
> (not sure if yours was in the list), and they seemed not to offer what I
> needed or too complicated to use (but often it's just a documentation
> issue).
I'm pretty confident that my module has all the features you might
want and it also permits you to only use the subset you want. There is
documentation here:
https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
It's also available as a pdf in the same directory.

>
> Ultimately I'm not saying we won't use yours but at the moment I'm
> running out of time to investigate.
Whether you do or not is up to you. I'm trying my best to ensure we
can all work together on one codebase, so that work isn't duplicated,
and then maybe you can work on harder problems.

>
> Will try to get back to you later.
Sure thing,

>
> Cheers,
> Gilles

Salut,
James

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Qemu glusterfs, exposing complete bricks instead of individual images as shared storage to VM's ?

2013-11-29 Thread James
On Fri, Nov 29, 2013 at 5:06 PM, Sander Eikelenboom wrote:

> Hi,
>
> I'm using glusterfs for quite some time on my server for shared-storage to
> VM's.
> At the moment this had to go over tcp/ip bridge between host and guests, so
> i was interested in the option to use glusterfs directly with qemu. But it
> seems it
> only supports to expose individual images files that reside on a glusterfs
> brick.
>
> Would it be possible to extend this and make a complete brick available as
> disk to qemu as shared storage ?
> (so multiple vm's and the host can share this same storage space)
>

Sounds like you want to use Gluster as a backing store for the VM images
through qemu, but in addition, you probably want to mount a common
glusterfs volume inside the vms as well. That's how you do it!

Cheers,
James


>
> --
> Sander
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Qemu glusterfs, exposing complete bricks instead of individual images as shared storage to VM's ?

2013-11-29 Thread James
On Fri, Nov 29, 2013 at 6:56 PM, Sander Eikelenboom wrote:

> Erhmm well that's why glusterfs is momentarily in between :-)
>
> I have a LVM volume "shared_data" on the host .. which I export as a brick
> with glusterfs.
> Multiple VM's mount this brick over the tcp/ip transport, and all seems to
> go well with locking.
>
> I have looked at GFS2 and Ceph as well, though glusterfs served me well.
> It's just to see if it would be possible to eliminate the use of the
> tcp/ip transport for
> the VM's that use Qemu to reduce that overhead.
>

Okay, this should be on gluster-users first of all.

Second of all, the regular fuse mount and libgfapi both use tcp/ip.

Thirdly, you have to understand the difference between a block device
(qemu-libgfapi integration) and a gluster fuse mount (a filesystem). Read
about those a bit more, and hopefully this will make my comments make sense.

Fourthly, it's not a GFS2 vs. Gluster question. They are DIFFERENT
technologies, not competing technologies. GlusterFS is one piece. If you
_really_ want to have a shared block device, be used for a mounted
filesystem, then the individual writers _need_ to coordinate. That's what
GFS2+cman does. Also, I've never tested GlusterFS through qemu for a GFS2
fs. I'd be curious to hear if it works without bugs though.

Fifthly?, it's dinner time!

Cheers
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs file locking

2013-12-02 Thread James
This type of snapshotting is coming soon in 3.5, with LVM support, AFAICT:
http://www.gluster.org/community/documentation/index.php/Features/File_Snapshot

On Mon, Dec 2, 2013 at 11:32 PM, kane  wrote:
> Hi Raghavendra,
>
> What I want to do is to take volume snapshots with btrfs, so I need 
> the volume locked.
> When snapshot is taking, all handles(IO,  gluster cli) except snap hung on.
>
> As I know, the xlator lock does not lock the whole volume bricks, but only a 
> range of a file in bricks.
>
> Any good advice?
>
> Thanks,
> Kane
>
> 在 2013年12月3日,下午12:06,Raghavendra Gowdappa  写道:
>
>> Hi Kane,
>>
>> You don't have to add the xlator explicitly through cli. The xlator is 
>> configured by default on bricks when you create a volume.
>>
>> regards,
>> Raghavendra.
>>
>> - Original Message -
>>> From: "kane" 
>>> To: rxsi...@gmail.com
>>> Cc: gluster-devel@nongnu.org
>>> Sent: Tuesday, December 3, 2013 8:41:30 AM
>>> Subject: [Gluster-devel] glusterfs file locking
>>>
>>> Hi, Rohit:
>>>
>>>  I have googled the issue you mailed last year as below:
>>> ==
>>> Does Gluster provide POSIX compatible file locking transparently with
>>> multiple clients simultaneously trying to lock files on a Gluster share
>>> with the native Gluster client?
>>>
>>> A lot of the code in our application framework relies on file locking for
>>> synchronization. I was running across some errors, and wanted to rule out
>>> file locking issues – hence this question.
>>>
>>> The gluster documentation mentions a features/posix-locks translator that
>>> should be used for distributed file level locking. This translator can be
>>> added using the config files as shown in the example at
>>>
>>> http://gluster.org/community/documentation/index.php/Translators/features/locks.
>>>
>>> However, this method of using config files doesn't seem to be available in
>>> newer versions of Gluster, and I couldn't find an equivalent way of using
>>> this translator using the command line.  Can someone confirm if I need to
>>> use this translator, and how I could enable it with Gluster 3.3?
>>> ==
>>>
>>> Since I've run into this file locking issue in 3.3 too, I want to ask how
>>> you dealt with the lock issue in the end: was the locks xlator useful?
>>>
>>> Thanks,
>>> Kane
>>>
>>> ___
>>> Gluster-devel mailing list
>>> Gluster-devel@nongnu.org
>>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>
>
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-12-04 Thread James
Is this happening today in about 3 hours?

Cheers,

James

On Thu, Nov 28, 2013 at 3:02 AM, Vijay Bellur  wrote:
> The following is a new meeting request:
>
> Subject: Gluster Community Weekly Meeting
> Organizer: "Vijay Bellur" 
>
> Location: #gluster-meeting on irc.freenode.net
> Time: 8:30:00 PM - 9:30:00 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
>  Recurrence : Every Wednesday No end date Effective Dec 4, 2013
>
> Invitees: gluster-us...@gluster.org; gluster-devel@nongnu.org
>
>
> *~*~*~*~*~*~*~*~*~*
>
> Greetings,
>
> This is the weekly slot to discuss all aspects concerning the Gluster 
> community.
>
> Etherpad for the meeting - http://titanpad.com/gluster-community-meetings. 
> Please feel free to add your agenda items in the Etherpad before the meeting.
>
> Cheers,
> Vijay
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-12-04 Thread James
On Wed, Dec 4, 2013 at 7:33 AM, Jay Vyas  wrote:
> wait so this is just IRC or a google hangout?


irc #gluster-meeting



Re: [Gluster-devel] Could we add a MAINTAINERS file to the glusterfs sources?

2013-12-11 Thread James
> I did not apply any sophisticated logic, and just asked around who the
> current maintainers are. A mostly reviewed MAINTAINERS file has now been
> proposed for inclusion:
> - http://review.gluster.org/6480
>

Puppet-Gluster sounds like a notable "Related project" to add to that
file section...

/biased,

James



Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-12-11 Thread James
RE: meeting, sorry I couldn't make it, but I have some comments:

1) About the pre-packaged VM comments: I've gotten Vagrant working on
Fedora. I'm using this to rapidly spin up and test GlusterFS.
https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
In the coming week or so, I'll be publishing the Vagrant file for my
GlusterFS setup, but if you really want it now I can send you an early
version. This obviously integrates with Puppet-Gluster, but whether
you use that or not is optional. I think this is the best way to test
GlusterFS. If someone gives me hosting, I could publish "pre-built"
images very easily. Let me know what you think.

2) I never heard back about the action items from two weeks ago. I think
someone was going to connect me with a way to get access to some VMs
for testing!

3) Hagarth: RE: typos, I have at least one spell-check patch against
3.4.1. I sent it to the list before, but someone told me to enroll in
the Jenkins thing, which wasn't worth it for a small patch. Let me know
if you want it.

4a) Someone mentioned documentation. Please feel free to merge in
https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
(markdown format). I have gone to great lengths to format this so that
it displays properly in github markdown, and standard (pandoc)
markdown. This way it works on github, and can also be rendered to a
pdf easily. Example:
https://github.com/purpleidea/puppet-gluster/raw/master/puppet-gluster-documentation.pdf
 You can use the file as a template!

4b) I think the documentation should be kept in the same repo as
GlusterFS. This way, when you submit a feature branch, it can also
come with documentation. Lots of people work this way. It helps you
get minimal docs there, and/or at least some example code or a few
sentences. Also, looking at the docs, you can see which commits each
change came with.

Thanks!

James



Re: [Gluster-devel] [Gluster-users] Mechanisms for automatic management of Gluster

2013-12-11 Thread James
On Wed, Dec 11, 2013 at 6:06 PM, Anand Avati  wrote:
> James,
> This is the right way to think about the problem. I have more specific
> comments in the script, but just wanted to let you know this is a great
> start.
>
> Thanks!

Thanks... So I'd like to solve these problems, but I think I need some
heavy lifting help with some of the algorithms... If I had all of
those answers today, I would probably have most of the management
features for 4.0 working (via puppet-gluster) in a week. Not sure if
you're interested in working on this. Please feel free to use the code
and test case infrastructure as a template.

Cheers!



Re: [Gluster-devel] Could we add a MAINTAINERS file to the glusterfs sources?

2013-12-11 Thread James
On Wed, Dec 11, 2013 at 12:46 PM, Niels de Vos  wrote:
> On Wed, Dec 11, 2013 at 11:28:31AM -0500, James wrote:
>> > I did not apply any sophisticated logic, and just asked around who the
>> > current maintainers are. A mostly reviewed MAINTAINERS file has now been
>> > proposed for inclusion:
>> > - http://review.gluster.org/6480
>> >
>>
>> Puppet-Gluster sounds like a notable "Related project" to add to that
>> file section...
>
> I tried to include only projects that deal with the internals of
> Gluster. It is unclear to me how much the Puppet-Gluster maintainer
> needs to get informed about changes.

Well maybe Puppet-Gluster is the black-sheep child that doesn't get no
love, but I believe that GlusterFS isn't very useful without some sort
of higher-level configuration management tool, specifically for when
users want to scale. At the moment, few people really use GlusterFS at
100+ hosts / petabyte scale. What about 1000? 10k? How are we going to
manage that?

It would be helpful to be notified if things were going to break.
Because of Gluster's weird configuration system (instead of a simple
stable configuration file), Puppet-Gluster does this manually, eg:
https://github.com/purpleidea/puppet-gluster/blob/master/manifests/host.pp#L86
https://github.com/purpleidea/puppet-gluster/blob/master/manifests/host.pp#L191

Another place it tracks internals is through the xml properties:
https://github.com/purpleidea/puppet-gluster/blob/master/files/xml.py#L39
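For readers curious what "tracking the xml properties" involves: gluster's `--xml` output can be walked with the standard library. The element names in this sample are illustrative only; check them against the actual `gluster volume info --xml` output of your release.

```python
import xml.etree.ElementTree as ET

# Illustrative sample only: real `gluster volume info --xml` output may use
# different element names depending on the GlusterFS release.
SAMPLE = """\
<cliOutput>
  <opRet>0</opRet>
  <volInfo>
    <volumes>
      <volume><name>examplevol</name><status>1</status></volume>
    </volumes>
  </volInfo>
</cliOutput>
"""

def volume_names(xml_text):
    """Return the <name> of every <volume> element in the CLI output."""
    root = ET.fromstring(xml_text)
    return [v.findtext("name") for v in root.iter("volume")]

print(volume_names(SAMPLE))  # ['examplevol']
```

This is exactly why a stable `--xml` schema matters to tools like Puppet-Gluster: a renamed element silently breaks every consumer written this way.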

>
> A lot of the projects that are on http://forge.gluster.org are related
> and important to the community, but most of them should not need to
> track changes in the glusterfs-sources. If you feel that Puppet-Gluster
> really is relying on the internals of Gluster, we can add the project
> without issues. Please leave a note with the projects details in the
> code review for that,

Anyways, I don't care about being in the maintainers file, but at
least try and have a look at what Puppet-Gluster does and uses so
someone can give me a heads up if something big is going to change.

>
> Thanks,
> Niels

Cheers,
James



Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-12-12 Thread James
On Thu, Dec 12, 2013 at 1:43 PM, Vijay Bellur  wrote:
>> 4a) Someone mentioned documentation. Please feel free to merge in
>> https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
>> (markdown format). I have gone to great lengths to format this so that
>> it displays properly in github markdown, and standard (pandoc)
>> markdown. This way it works on github, and can also be rendered to a
>> pdf easily. Example:
>>
>> https://github.com/purpleidea/puppet-gluster/raw/master/puppet-gluster-documentation.pdf
>>   You can use the file as a template!
>
>
> Again having this in gerrit would be useful for merging the puppet
> documentation.


Okay, I'll try to look into Gerrit and maybe submit a fake patch for testing.
When, and where in the tree, would be a good place to submit a doc
patch? It's probably best to wait until after your docs hackathon,
right?



[Gluster-devel] Gerrit doesn't use HTTPS

2013-12-12 Thread James
I just noticed that the Gluster Gerrit [1] doesn't use HTTPS!

Can this be fixed ASAP?

Cheers,
James

[1] http://review.gluster.org/



Re: [Gluster-devel] puppet-gluster on F19

2013-12-13 Thread James
On Thu, Dec 12, 2013 at 11:36 PM, Paul Cuzner  wrote:
>
> Hi,
>
> Has anybody tested James's puppet-gluster module on F19? I'm trying to use it
> and hitting problems with dependencies for python-argparse, which is in
> rhel/centos but not in upstream Fedora as far as I can tell (F19 or F20).

Of course it's available in F19 (package python-libs). It's built into
python 2.7. You're probably getting a Puppet dependency issue because
I didn't test against Fedora 19, and Puppet will try to install
'python-argparse' (which isn't necessary for fedora, but is for
centos/rhel).

You can replace the 'python-argparse' strings with 'python-libs', or
wait until I port Puppet-Gluster to F19. Feel free to keep track of
any other issues that you find and I'll patch those too.
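A minimal sketch of the portable import pattern being described here, assuming a helper script that must run on both Fedora (argparse in the stdlib since Python 2.7) and EL5/EL6 (separate python-argparse package):

```python
# argparse ships in the stdlib on Python >= 2.7 (Fedora's python-libs);
# older RHEL/CentOS need the separate python-argparse package. Importing
# it directly works on both, and a clear error beats a traceback.
try:
    import argparse
except ImportError:
    raise SystemExit("argparse missing: install python-argparse (EL5/EL6) "
                     "or use Python >= 2.7")

parser = argparse.ArgumentParser(description="probe for argparse")
parser.add_argument("--probe", action="store_true")
args = parser.parse_args(["--probe"])
print(args.probe)  # True
```

This also shows why the Puppet dependency on the 'python-argparse' package name is only needed on EL platforms, not Fedora.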

>
> Trying to pull the rpm from the epel repo also backfires with a dependency on 
> the python 2.6 abi - and f19 is at 2.7.
>
> Any pointers?
>
> I was trying to put together a test environment to show foreman/puppet 
> unattended provisioning of gluster nodes.

I'm currently working on Vagrant with various OSes, along with adding
more explicit compatibility for other OSes in Puppet-Gluster. F19 is in
the list. I currently provision with cobbler+puppet and this works great.

As I mentioned in my other email, I don't have resources to test on
Foreman at the moment. If you're able to sponsor this work, I'm happy
to look into it for you, but until then maybe someone else has got the
answer.

>
> Cheers,
> Paul C
>
>
Cheers,
James

>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Gluster Community Weekly Meeting

2013-12-13 Thread James
On Fri, Dec 13, 2013 at 3:30 AM, Niels de Vos  wrote:
>
>> Niels - do you have any thoughts here?
>
> I was thinking of using virt-builder[1] which is already part of Fedora.

Vagrant was supposed to be:
https://fedoraproject.org/wiki/Releases/20/ChangeSet#Vagrant Maybe you
know the official status?

In the meantime, I've written an article on how to get it going. I've
done the hard work of figuring it out, so it's pretty easy if you
follow my steps:
https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/

> Personally I would stick with the Fedora tools, and not use yet
> something else again. However, I'm more than happy if James builds and
> publishes one or more VMs for the test days, of course he is free to use
> whatever tools he likes :-)

Phew ;)

>
> The images should be minimal, and I do not expect them to be bigger than
> 512MB when compressed. Best would be to host them on the
> download.gluster.org server, at the same location of the packages.
>
> Niels
>
> 1. http://libguestfs.org/virt-builder.1.html

I think we're mixing up tools here. What is the goal of this?

I'm a big fan of guestfish and RWM Jones's tools, but Virt-builder and
Vagrant have different use cases.

Virt-builder is good for building an OS image from scratch, configuring
it, and having it available to boot and use. You _can't_ easily iterate
through different test environments from a clean image without
distributing that many different OS images. AFAIK, it only works with
one machine at a time. The equivalent of this in "ruby land" is Veewee.

Vagrant on the other hand is great at iterating different test
environments, with multiple machines running together. All you need to
do is publish one base image, and a Vagrantfile, and you'll have the
tools locally to bring up multiple different clusters and test each
one. Want to start over from scratch, you can do so in 30 seconds. I'm
not sure if there is an alternate tool that does this.

From my point of view, I'm going to be doing the Vagrant stuff
anyways. I'll be publishing the code as usual, but I won't have
anywhere to host images. If Gluster/RedHat wants to host this
somewhere, others are welcome to use it.

Niels can also work on the virt-builder stuff if he likes, but the
usefulness will depend on the intended use case.

Cheers,
James



Re: [Gluster-devel] Early review/feedback for glusterfs 4.0 plan

2013-12-13 Thread James
On Thu, 2013-12-12 at 15:33 -0800, Anand Avati wrote:
> > 2b) If someone can help with the algorithms I mentioned in:
> >
> https://lists.gnu.org/archive/html/gluster-devel/2013-11/msg00135.html
> >   I think I'll be able to provide a convincing case that Gluster
> > management isn't as bad as you might be alluding to in your 4.0
> > comments. I think a lot of the GlusterFS core team are allergic to
> > Puppet. I think this is quite normal, because Gluster core is super
> > low level C, where as Puppet (a mostly declarative language) is
> super
> > high level and on the opposite side of this spectrum. However, I
> think
> > most people are discounting the importance of having something like
> > this. The future is _all_ configuration management. The early
> adopters
> > are almost mostly there.
> >
> 
> That's probably not entirely the case. I don't think there is anybody
> who
> disagrees on the need/importance of puppet and puppet-gluster. I also
> see
> how it becomes almost necessary on a 10k node cluster, and why it is
> important to make gluster integrate nicely with the puppet ecosystem.
> 
> At the same time, the reason why puppet-gluster module is having to
> solve
> complex issues is because gluster management isn't doing the right
> job.
Agreed.

>  The
> algorithms you are working for puppet-gluster are not puppet specific
> and
> their right "location" is in the core of gluster.
I agree that building some of these things into Gluster core makes
sense. At some point you end up building configuration management into
Gluster core. Somewhere along that road, I'd argue that it makes more
sense to have that logic externally in a declarative language, but I'm
sure I get there earlier than you do ;)

It doesn't mean that I won't like seeing some of it in Gluster, but I
think there are some benefits (LOC, reasoning about logic) that Puppet
can provide.

One problem is that doing this "blesses" Puppet as _the_ tool, when
someone else might prefer Chef. The good news there is that it would be
easy for someone to port between the two.

>  It doesn't feel right to
> see such "dense" logic in the puppet layer while that intelligence is
> needed pretty much for every one.
I try to "Push Puppet" to see how far it can go before it falls down :)
There are cracks, but I'm not done yet!
> 
> I'm hoping 4.0 will make the gluster-puppet module simple and elegant,
> and
> not "unnecessary".
I think it's elegant now, but I don't see it becoming simple.
Distributed things are by their very nature, quite complex. I'm sure you
probably know this more than I do.
> 
> > 3) Between early Gluster x.y and 3.x I had to completely change
> > Puppet-Gluster to support the new management style. The current
> > Puppet-Gluster stuff is easily one of the most complicated Puppet
> > modules that exists _anywhere_! This is partly due to the fact that
> > while Gluster might be easy to manage (and even logical) for a
> human,
> > it is _not_ from an automation point of view.
> 
> 
> That's precisely my point as well. The complexity in the puppet module
> is
> for a problem which has to be fixed somewhere. And I think the right
> place
> for that extra logic to reside is within the gluster management layer.
> Puppet module should not need to know how gluster is distributing and
> replicating data - it is too low level/internal knowledge to be
> exposed up
> to a puppet-like layer.
> 
Don't users want to be able to decide these things?

Well I'm happy to try and work on these things. Since 4.0 isn't due for
some time, and also because current Puppet-Gluster could be seen as a
"research project", if you're able to help contribute some of the
algorithms that I'm trying to work on:

1) "Legacy" Gluster users (when 4.0 comes out) will still have awesome
management support. I think RHS might appreciate this.

2) We'll be able to test some of the algorithms well in advance before
all the Gluster code gets written.

Code is here:
https://lists.gnu.org/archive/html/gluster-devel/2013-11/txtdAJcI6dR34.txt

Nice having this chat with you,

Cheers!
James





Re: [Gluster-devel] Providing VM images for beta testing? (was: Gluster Community Weekly Meeting)

2013-12-13 Thread James
On Fri, Dec 13, 2013 at 11:02 AM, Niels de Vos  wrote:
>
> The intended use-case is to provide testers with an easy way to start
> going. I assume that for a lot of people downloading+importing
> a VM-image is easier than creating+installing a VM from scratch.
Both Vagrant and Virt-Builder would provide an image. So this is the
same end result.

Do you intend to provide 4 different images if the user wants to set up
a 4-host cluster? What about 8 hosts? What about more? What about
different environments, eg: 1 brick vs. 4 bricks vs. N bricks per
host? How do you plan to install Gluster on each host? a shell script?
Puppet-Gluster ?

With Vagrant you only need to provide one image, and Vagrant does the rest.
In any case, I'll have all this ready in the next week or two and
you'll all be able to test.

Maybe RWM Jones has some new secret add-ons to virt-builder that could
be helpful!

Cheers,
James


> We
> currently offer only packages that users need to install on a system (VM
> or physical hardware) before they can start testing. The idea of
> providing VM images would hopefully attract more people to join the
> Gluster Test Fests. Which tool or VM environment we should provide would
> mainly depend on the wishes from the users.
>
> Anyone with suggestions or preferences is more than welcome to let us
> know. How do testers want to (and can) setup some systems for beta
> testing? What would enable the biggest group of testers to do at least
> some testing of some functionality of their own interest?
>
> Thanks,
> Niels



Re: [Gluster-devel] Providing VM images for beta testing? (was: Gluster Community Weekly Meeting)

2013-12-13 Thread James
On Fri, Dec 13, 2013 at 12:08 PM, Niels de Vos  wrote:
>
> The image should come with the version of Gluster that should be used
> for the Testdays. I do not really care how it gets installed, but
> I prefer to make it as simple as possible for users to start testing.
>
> No doubt that we need to point to some explanations on how to import
> such an image in VMware/oVirt/virt-manager/... and attach additional
> disks for the bricks. We could provide an image with some bricks on
> a partition to allow simple tests for new users too.
>
>> With Vagrant you only need to provide one image, and Vagrant does the
>> rest.
>> In any case, I'll have all this ready in the next week or two and
>> you'll all be able to test.
>
> Nice!
>
>> Maybe RWM Jones has some new secret add ons to virt-builder that could
>> be helpful!
>
> I think the manual is pretty clear. It explains how to add files to
> a VM, install additional packages and the like:
> - http://libguestfs.org/virt-builder.1.html
>
> Anyway, I hope some readers that are interested in testing the upcoming
> 3.5 release will express their preference for a VM image or images.
>
> Thanks,
> Niels

Okay, so how about you work on your virt-builder method, and since I'm
already working on my Vagrant method we'll have both scenarios
available.
Then everyone can try out every method and see what works best/what they prefer.

Can someone provide me access to hosting some images for Vagrant?

Thanks,
James



Re: [Gluster-devel] [Gluster-users] Gerrit doesn't use HTTPS

2013-12-14 Thread James
On Sat, Dec 14, 2013 at 3:28 AM, Vijay Bellur  wrote:
> On 12/13/2013 04:05 AM, James wrote:
>>
>> I just noticed that the Gluster Gerrit [1] doesn't use HTTPS!
>>
>> Can this be fixed ASAP?
>>
>
> Configured now, thanks!
Thanks for looking into this promptly!

>
> Please check and let us know if you encounter any problems with https.
1) None of the CN information (name, location, etc) has been filled
in... Either that or I'm hitting a MITM (less likely).

2) Ideally the certificate would be signed. If it's not signed, you
should at least publish the "correct" fingerprint somewhere we trust.

If you need help wrangling any of the SSL, I'm happy to help!
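For illustration, the kind of subject check being discussed can be sketched against the dict shape that Python's `ssl.SSLSocket.getpeercert()` returns; the certificate values below are made up. (In real client code, `ssl.create_default_context()` performs hostname verification for you.)

```python
def cert_names(cert):
    """Collect the DNS names a certificate claims, from subjectAltName and
    the subject's commonName, given the dict shape of getpeercert()."""
    names = [val for key, val in cert.get("subjectAltName", ()) if key == "DNS"]
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                names.append(value)
    return names

# Made-up example certificates in getpeercert()'s dict format:
good = {"subject": ((("commonName", "review.gluster.org"),),)}
bad = {"subject": ((("commonName", "someorganization"),),)}

print("review.gluster.org" in cert_names(good))  # True
print("review.gluster.org" in cert_names(bad))   # False
```

A placeholder subject like "someorganization" fails this check, which is exactly the symptom reported in this thread.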
>
> -Vijay
>
Thanks!

James



Re: [Gluster-devel] [Gluster-users] Gerrit doesn't use HTTPS

2013-12-14 Thread James
On Sat, Dec 14, 2013 at 10:56 AM, Anand Avati  wrote:
> IIRC we should be having a CA signed cert for *.gluster.org. Copying JM.
>
> Avati


Yikes! That cert is "someorganization"... All the cool kids are doing
HTTPS these days ;)

I'm happy to generate, sign and upload these if given access... It's a
normal sysadmin thing I've been known to do :P

James



[Gluster-devel] Introducing... JMWBot (the alter-ego of johnmark)

2013-12-15 Thread James
Yes, it's true.
I've been up late hacking on Gluster (and Puppet-Gluster)...

While waiting for my code to compile, patch review (*cough*), and for
JMW (aka johnmark) to take care of a few todo items, I realized I had
never written an IRC bot!! Now I never aspired to be the bot master that
JoeJulian is, but I figured I needed this notch on my hacker belt...

Therefore, I'd like to introduce: JMWBot. (now with 20% more twisted!)

JMWBot is the affectionate alter-ego of johnmark. JMWBot should most
likely be found hanging out in #gluster, and waiting for johnmark to
talk. If he does, JMWBot will bug johnmark up to once a day [1] to
remind him of pending todo items... set by you!

To add a public reminder in #gluster for johnmark:
JMWBot: @remind bring home some milk

To add a private reminder for johnmark:
/msg JMWBot @remind bring home some bacon

To list items, you can highlight the bot or /msg the bot with the @list
command. Only johnmark has the ability to @done  items. When your
item gets closed, you should get a message if you're on Freenode (and
you haven't changed your nick.)
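For the curious, the core of such a reminder bot is tiny. This toy sketch (hypothetical names, not JMWBot's actual code) parses the commands described above and keeps items on a list:

```python
class ReminderStore:
    """Toy reminder tracker: @remind adds an item, @list shows open items,
    @done N closes item N. Hypothetical sketch, not JMWBot's real code."""

    def __init__(self):
        self.items = []        # list of (nick, text) pairs

    def handle(self, nick, msg):
        if msg.startswith("@remind "):
            self.items.append((nick, msg[len("@remind "):]))
            return "noted"
        if msg.startswith("@done "):
            _, text = self.items.pop(int(msg.split()[1]))
            return "closed: " + text
        if msg == "@list":
            return [text for _, text in self.items]
        return None           # not a bot command; ignore

store = ReminderStore()
store.handle("purpleidea", "@remind bring home some milk")
print(store.handle("purpleidea", "@list"))  # ['bring home some milk']
print(store.handle("johnmark", "@done 0"))  # closed: bring home some milk
```

The real bot additionally persists items to disk and restricts @done to johnmark; the parsing logic is no more complicated than this.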

FAQ:

* Why did you do this?
This is a hack, it was written for fun!

* Does this really work?
Yes, I think so. I tested it briefly. It stores your reminders on disk,
so your data should be safe across (currently manual) rejoin's. I don't
back up the server often, and all of this is WITHOUT WARRANTY, etc...

* Really?
Yeah, I think so. Test it out and let me know!

* Can this be done for other people/channels than johnmark/#gluster?
Yes! Please feel free to run your own bot, code is "open source" [2].

* I <3 puppet-gluster [3], where can I send $$$, resources and praise?
/msg purpleidea in #gluster or @purpleidea on irc! Thanks!

* Yikes! This code is terrible.
Well it's not that bad. But it was meant as a dirty hack. Feel free to
send patches or bug reports.

* I didn't find this funny/cool/amusing or even stable.
Sorry! It was written with good intentions.

* THE BOT HAZ MISS BEHAVEDD AND IS TAKING OVER THE CHANNEL AKA W0RLD.
I for one, welcome our new IRC overlords. Please /kick it, and let me
know how it misbehaved. /ban-ing the bot will make it sad :(

Well, enjoy and Happy hacking!
Cheers,

James
https://ttboj.wordpress.com/
@purpleidea (irc / twitter)


[1] configurable on request
[2] /msg purpleidea please send me JMWBot, ps: i <3 puppet-gluster
[3] https://github.com/purpleidea/puppet-gluster





Re: [Gluster-devel] Fwd: Snapshot feature design review

2013-12-18 Thread James
Just saw this now. I haven't had time to review it in depth. One question:

Will all the commands have --xml options like the normal Gluster commands?
It is very useful to have these stable interfaces.

Cheers,
James


On Wed, Dec 18, 2013 at 10:58 PM, Vijay Bellur  wrote:
> On 10/17/2013 09:52 PM, Nagaprasad Sathyanarayana wrote:
>>
>>
>>>>
>>>> Hi All,
>>>>
>>>> The design document has been updated, and we have tried to address
>>>> all the review comments and design issues to the best of our ability.
>>>>
>>>> Please review the design and the document when possible.
>>>>
>>>> The design document can be found @
>>>> https://forge.gluster.org/snapshot/pages/Home
>>>>
>>>> Please feel free to critique/comment.
>
>
>
> Belated feedback on the CLI. CLI does seem to have a lot of hyphen/dash
> options. This is something that we have avoided in gluster so far.
> Conforming to the same design principle would be nice to have.
>
> The only reason why a hyphenated option seems necessary is due to the
> consistency groups. It would be better to treat consistency-group as a first
> class entity and have a separate mechanism to define that. For e.g.:
>
> #gluster consistency-group create 
>
> #gluster consistency-group add 
>
> #gluster consistency-group del 
>
>
> Once this is accomplished, we can avoid using the hyphenated options and
> distinguish between single volume snapshots and consistency-group snapshots
> by using a keyword for consistency groups.
>
> For e.g., creation of a snapshot can be structured as:
>
> #gluster snapshot create [consistency-group]  []
>
> Other snapshot commands can be structured on similar lines.
>
> Thoughts?
>
> Thanks,
> Vijay
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] Gluster Community members visiting FOSDEM and DevConf in early February 2014

2014-01-03 Thread James
On Fri, Jan 3, 2014 at 10:40 AM, Niels de Vos  wrote:
> Hello all,
>
> on the weekend on 1-2 Febuary FOSDEM[1] will take place in Brussels, Belgium.
> Several users and Gluster community members will be attending the event. Some
> of us are trying to arrange a Gluster Community table with some promotional
> materials, demo's, Q+A and face-to-face chats.
>
> If you are interested in joining us, please let us know by responding to this
> email with some details, or add your note to the TitanPad[2]. In case you want
> to discuss a specific topic or would like to see a certain GlusterFS
> use-case/application, tell us in advance and we'll try to be prepared for it.

I'd love to attend if someone can cover my travel expenses. I'm happy
to give a talk too. I've got all the Vagrant stuff mostly done, and I
should have a public release of it with some new puppet-gluster code
within the week.

Cheers,
James

>
> Part of the same group of people will be going to Brno, Czech Republic for
> DevConf [3] the weekend after FOSDEM. We might be able to arrange a workshop 
> or
> presentation there too. However, we would like to hear from others what they
> prefer to see/hear/do, so please post your wishes!
>
> I hope to see a bunch of you in Brussels,
> Niels
>
>
> [1] https://fosdem.org
> [2] http://titanpad.com/gluster-at-fosdem2014
> [3] http://devconf.cz
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-devel] [Gluster-users] Agenda for Community meeting today

2014-01-08 Thread James
Hey all,

Sorry I could not attend the meeting, but I was up late hacking, and
couldn't wake up early enough.

One quick comment to add:

I'll be done with the initial release of my Puppet-Gluster+Vagrant
work, and will send out an email and blog post about this either later
today or tomorrow, most likely.

This will give you "1 click" automatic Gluster builds, even for anyone
not familiar with Puppet. Should also be the solution for testing any
point release...

One thing to look at: it can currently use any RPMs (CentOS only for
the initial release) from download.gluster.org. As it stands, QA
releases aren't available there. I'd like to add those into the
process, but I'm not sure where the recommended (stable) place to get
them from will be.

Hope you all try it out tomorrow.

James


On Wed, Jan 8, 2014 at 11:30 AM, Vijay Bellur  wrote:
> On 01/08/2014 03:15 PM, Vijay Bellur wrote:
>>
>> Agenda for the weekly community meeting has been updated at:
>>
>> http://titanpad.com/gluster-community-meetings
>>
>> Please update the agenda if you have items for discussion.
>>
>
> Meeting minutes available here:
>
> http://meetbot.fedoraproject.org/gluster-meeting/2014-01-08/gluster-meeting.2014-01-08-15.01.txt
>
> Thanks,
>
> Vijay
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



[Gluster-devel] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-08 Thread James
Okay, It's ready for you to try!

You don't need to know anything about Puppet, and it's easy to follow
along if you're not even comfortable with a shell.

https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/

This has been quite a large amount of work to "get right", and I hope
you appreciate it! Let me know your experience!

Special thanks to John Mark, who hooked me up with hosting for the
vagrant base image "box".

Cheers,

James



Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-08 Thread James
I would recommend you read my earlier Vagrant-related articles. I
mention that issue in
http://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/

Sadly, the author liked the patch but seems to have been AWOL since.
If I don't hear anything shortly, I'll fork it. The patch should be
easy to apply manually for now.

Thanks for the comments, let me know if you find any other issues.

James

On Thu, Jan 9, 2014 at 2:07 AM, Kaushal M  wrote:
> vagrant-cachier complains about 'mount_options'. Your patch hasn't
> been accepted into the upstream vagrant-cachier yet.
> It'll be better for other users if you could add some steps on applying
> your patch.
>
> ~kaushal
>
> On Thu, Jan 9, 2014 at 9:35 AM, James  wrote:
>> Okay, It's ready for you to try!
>>
>> You don't need to know anything about Puppet, and it's easy to follow
>> along if you're not even comfortable with a shell.
>>
>> https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
>>
>> This has been quite a large amount of work to "get right", and I hope
>> you appreciate it! Let me know your experience!
>>
>> Special thanks to John Mark, who hooked me up with hosting for the
>> vagrant base image "box".
>>
>> Cheers,
>>
>> James
>> ___
>> Gluster-users mailing list
>> gluster-us...@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-devel] [Gluster-users] Hangout with Semiosis (Louis Z) Today - Gluster on AWS, Java Filesystem and more

2014-01-10 Thread James
YouTube link for anyone who doesn't G+:

https://www.youtube.com/watch?v=usoY_FPc2EY

On Fri, Jan 10, 2014 at 10:56 AM, John Mark Walker
 wrote:
> Here's the Youtube link:
> https://plus.google.com/events/c1e0kmili7gfqndhjdj66hnpdt4
>
> We're starting in 5 minutes. Please go to the irc channel #gluster-meeting
> for Q&A
>
>
>
> On Fri, Jan 10, 2014 at 9:41 AM, John Mark Walker 
> wrote:
>>
>> Fyi, if anyone wants to participate on the hangout roundtable, just let me
>> know.
>>
>> -- Forwarded message --
>> From: "John Mark Walker" 
>> Date: Jan 10, 2014 9:32 AM
>> Subject: Hangout with Semiosis (Louis Z) Today - Gluster on AWS, Java
>> Filesystem and more
>> To: "announce" 
>> Cc:
>>
>> In about 90 minutes, Louis Zuckerman and I will be "hanging out" and
>> talking about how he came to deploy GlusterFS on AWS, and why he's
>> developing a Java Filesystem integration with GlusterFS. I'll post the
>> embedded YouTube link here when we're about to go live. Hangout starts at
>> 11am EST, 8am PST, 16:00GMT - follow along on YouTube and ask questions in
>> #gluster-meeting on IRC.gnu.org.
>>
>> http://www.gluster.org/2014/01/hangout-with-semiosis-louis-z-today/
>>
>>
>>
>> ___
>> Announce mailing list
>> annou...@gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/announce
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



[Gluster-devel] Force issue

2014-01-10 Thread James
I came across this issue while working on Puppet Gluster...

volume create: puppet: failed: The brick
annex1.example.com:/var/lib/puppet/tmp/gluster/data/puppet is being
created in the root partition. It is recommended that you don't use
the system's root partition for storage backend. Or use 'force' at the
end of the command if you want to override this behavior.

For automation purposes, this is a problem, because while I'm happy to
add in the 'force' argument for all commands to avoid the above error,
if a different type of error that can be overridden by force occurs,
then I'll be unknowingly allowing it.

For this reason, it probably makes sense to add in an alternate
syntax, such as --allow-root-storage or similar... One for each
possible override.

Also:

As a side note, I would _LOVE_ to see an --allow-reusing-prefix to
forcibly clear the prefixes if they are empty and create the volume.
The problem is failed gluster volume create commands make folders on
all the hosts, and cause you to hit the prefix problem.
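
For reference, the manual cleanup I end up scripting today looks
something like this (a sketch only; the brick path is an example, and
it assumes the brick directory really is safe to reset):

```shell
# on every host, reset the brick directory left behind by a failed create:
brick=/var/lib/puppet/tmp/gluster/data/puppet
setfattr -x trusted.glusterfs.volume-id "$brick"
setfattr -x trusted.gfid "$brick"
rm -rf "$brick/.glusterfs"
```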

Cheers,
James



Re: [Gluster-devel] Gluster Development Environment

2014-01-15 Thread James
On Wed, Jan 15, 2014 at 8:13 PM, Lluís Pàmies i Juárez
 wrote:
> Hello,
>
> I'm starting some gluster development, and having to deal with virtual
> machines for test/debug seems a little bit too tedious. I was wondering if
> there is a way to start two (or more) servers on the same machine using
> different ports. Something like this:
>
>  # glusterfsd -f simple-server-0.vol
>  # glusterfsd -f simple-server-1.vol
>  # glusterfs -f simple-client.vol /mnt/data
>
> where simple-server-0.vol simple-server-1.vol use different TCP ports?
>
> Thank you.

You probably want to look at using vagrant to make it easier:

https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/

If you don't want to use the Puppet parts, you can just use the base
machine however you like.
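
That said, the single-machine approach you describe does work with
hand-written volfiles, since the listen port is just a translator
option. A minimal sketch (untested; the directory, port, and the
second server's values are examples):

```
# simple-server-0.vol -- a second server would use e.g. /export/brick1 and port 24010
volume posix0
  type storage/posix
  option directory /export/brick0
end-volume

volume server0
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 24009
  subvolumes posix0
end-volume
```

The client volfile then points its protocol/client translators at
localhost with the matching remote-port options.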

James


>
> --
> Lluis
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>



Re: [Gluster-devel] [Gluster-users] Ask for help : a question about glusterfs dht

2014-01-19 Thread James
If you could send your future messages as plain text that follows the
normal email line length maximums, it will be easier for this list to
read.

Thanks,
James

On Sun, Jan 19, 2014 at 9:26 PM, 刘新国  wrote:
> Hello,
> I am a graduate student from the Computer Department of Sichuan University in China,
> and I joined the mailing lists today. Two days ago I sent an email
> to the lists with a question, but I have not received any reply until now,
> so I thought perhaps I had not described the question clearly, and am
> sending this email again.
> I want to know what happens during the "fix layout" procedure, so I
> am debugging it in gdb. I built the source with "make CFLAGS='-g
> -O0' && make install". But when I attached to the glusterd process,
> set breakpoints in it, and executed the "rebalance ... fix-layout"
> command, the process did not stop at these breakpoints. My
> breakpoints are at the functions gf_defrag_start_crawl() and
> gf_defrag_fix_layout(). What should I do to make the process stop at the
> breakpoints so I can follow the "fix layout" procedure? I have tried many
> methods and none of them helped. Please help me, thanks!
>--Xinguo Liu
>2014.1.20
>
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



[Gluster-devel] Building base VM images for GlusterFS

2014-01-20 Thread James
Hey Niles and Gluster,

A little while back, we were both looking at building base images for
testing GlusterFS [1]. My objective was to use Vagrant, which I've done
and published [2]. As it turned out, I wasn't able to find any suitable
base images to use, so I had to build and customize my own [3]. As I
understood it, this might have been one of the prerequisites for your
work too.

In any case, I've now also published my base box builder tools, which
you and/or the Gluster community might find helpful. I've written a
short article about the process here:

https://ttboj.wordpress.com/2014/01/20/building-base-images-for-vagrant-with-a-makefile/

and if you want to dive right to the source it is available here:

https://github.com/purpleidea/puppet-gluster/tree/master/builder

and also mirrored here for johnmark:

https://forge.gluster.org/puppet-gluster/puppet-gluster/trees/master/builder

(hi johnmark)

I ended up using a Makefile [4] to manage the process, which I think is
pretty elegant and lean.

The only known issue at the moment is that I had to disable SELinux in
the guest images. If anyone is able to look at the reasoning [5] and
perhaps suggest a fix or workaround, it would be appreciated!

I hope you all find this useful.

Happy hacking,

James


[1] http://gluster.org/pipermail/gluster-users/2013-December/038312.html
[2]
https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
[3]
https://ttboj.wordpress.com/2014/01/20/building-base-images-for-vagrant-with-a-makefile/
[4]
https://github.com/purpleidea/puppet-gluster/blob/master/builder/Makefile
[5]
https://github.com/purpleidea/puppet-gluster/blob/master/builder/Makefile#L57





Re: [Gluster-devel] Building base VM images for GlusterFS

2014-01-20 Thread James
On Mon, Jan 20, 2014 at 12:43 PM, John Mark Walker  wrote:
> James,
>
> This is awesome stuff!
Thanks!

> I think we can use this to create live images to hand out on USB keys.
You could also use Puppet-Gluster+Vagrant to build a cluster of X
machines (say four), shut them down, and then save the four images to
your USB key. The cool thing is that you only need to store one base
image. The four other images work as deltas (so they're very small).
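
The delta trick is just qcow2 backing files, so assembling the USB
copies could look like this (a sketch; file names are examples, and it
assumes qemu-img is available):

```shell
# each guest image references the shared base; only its changes are stored
qemu-img create -f qcow2 -b base.qcow2 annex1.qcow2
qemu-img info annex1.qcow2    # reports the backing file and the (tiny) disk size
```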

The one problem with either setup is that you might need something
to make sure the guest network is working as expected. It's probably
fairly easy to get this all working off a stick that's plugged into,
say, a Fedora machine.

James

PS: btw 3.5.0beta works automatically with Puppet-Gluster+Vagrant.

>
> -JM



Re: [Gluster-devel] Force issue

2014-01-20 Thread James
On Mon, Jan 20, 2014 at 5:01 PM, Paul Cuzner  wrote:
> Bit of a delay on this one (on hols), but +1 from me
>
> I use the force parameter in gluster-deploy - specifically for enabling
> snapshot support by provisioning thin LV's.
Cool - maybe there are other specific arguments that we should add.
Could you add your comments to bz? I opened this last week:

https://bugzilla.redhat.com/show_bug.cgi?id=1051993

>
> PC
PS: Once upon a time I remember we spoke about porting gluster deploy
to use Puppet-Gluster. FWIW, if you look at the Puppet-Gluster+Vagrant
stuff I just published, it shows how to integrate something (vagrant)
with Puppet-Gluster.
https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/

>
>
> ____
>
> From: "James" 
> To: "Gluster Devel" 
> Sent: Saturday, 11 January, 2014 6:50:50 PM
> Subject: [Gluster-devel] Force issue
>
>
> I came across this issue while working on Puppet Gluster...
>
> volume create: puppet: failed: The brick
> annex1.example.com:/var/lib/puppet/tmp/gluster/data/puppet is being
> created in the root partition. It is recommended that you don't use
> the system's root partition for storage backend. Or use 'force' at the
> end of the command if you want to override this behavior.
>
> For automation purposes, this is a problem, because while I'm happy to
> add in the 'force' argument for all commands to avoid the above error,
> if a different type of error that can be overridden by force occurs,
> then I'll be unknowingly allowing it.
>
> For this reason, it probably makes sense to add in an alternate
> syntax, such as --allow-root-storage or similar... One for each
> possible override.
>
> Also:
>
> As a side note, I would _LOVE_ to see an --allow-reusing-prefix to
> forcibly clear the prefixes if they are empty and create the volume.
> The problem is failed gluster volume create commands make folders on
> all the hosts, and cause you to hit the prefix problem.
>
> Cheers,
> James
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>



Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-21 Thread James
On Tue, 2014-01-21 at 14:08 +0530, Kaushal M wrote:
> I tried again and I failed again.
Oh no! Let's see if I can help you through this. Your experiences are
great information so that I can fix things for other new users.

Short answer: use vagrant 1.3.5

Quick questions: Are you using Fedora 20? Are you using my Vagrantfile
without modification?

> 
> I had to modify your patch for vagrant-cachier as the plugin has been
> updated to 0.5.1. After this, I tried to get the puppet vm up and
> failed.
I could simply exclude vagrant-cachier from the default setup. It's only
an optimization, and isn't required.
I'll patch this to change the default shortly.

> I'm using vagrant-1.4 and it appears to have some issues with
> vagrant-libvirt and nfs.
Unfortunately, 1.4 is a new release, and it's known to break a lot of
plugins (like vagrant-libvirt). I'm currently using 1.3.5 -- the RPMs
are still available. I'd recommend trying again using that version. Once
vagrant-libvirt catches up with the breakages in 1.4, I'll test and
ensure it's working out of the box.

>  I commented out a couple of lines in
> vagrant-libvirt based on a comment on your first vagrant blog post. I
> tried getting the puppet vm up again. This time the domain starts, but
> got stuck at "Waiting for domain to get an ip address" and I'm still
> stuck.
Can you run your 'up' command and paste the logs? Search for 'vlog' here
for an easy way to save logs:
https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/

The other thing that I found very helpful was to open up virt-manager
(or virsh) and login through the console interface at any point after
you run the 'vagrant up' command. That way you can login and see what's
going on from the machine side. You can see if the machine actually got
an IP, or if it's not configured for DHCP properly, etc...

> 
> I did some digging around the code. Found that, vagrant-libvirt uses
> fog (ruby cloud services library)[1] to perform some interactions with
> libvirt. I'm stuck at the place where vagrant-libvirt is waiting on
> fog to obtain the ip address of the vm [2]. Fog requires a setup
> involving arpwatch and rsyslog to get the ip of the domain [3]. I did
> the setup, but even then fog couldn't get the address. Might be a
> problem with the way I performed the setup (the instructions are old
> and based on ubuntu, but I use ArchLinux), as the log file which fog
> parses remained empty.
I believe the IP comes from looking through dnsmasq logs. dnsmasq is
used for DHCP for the guests.
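
You can usually cross-check by hand, too (a sketch; the domain name and
lease file path are examples and vary by distro):

```shell
# get the guest's MAC address, then look it up in the dnsmasq leases
virsh -c qemu:///system domiflist puppet
grep -i '52:54:00' /var/lib/libvirt/dnsmasq/default.leases
```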

> 
> Do you have any tips on how I can resolve this problem? In the mean
> time, I'll continue digging around and see if I can fix it myself.
Yes! - I'll need the logs, but:

1) Please try with vagrant 1.3.5 first
2) Feel free to ping me on IRC for more interactive help
3) Post bugs you find at:
https://github.com/pradels/vagrant-libvirt/issues/
They've been pretty good and responsive, and I've even landed some
patches pretty quickly.


> 
> ~kaushal
Cheers,
James

> 
> [1] - http://fog.io/
> [2] - 
> https://github.com/pradels/vagrant-libvirt/blob/master/lib/vagrant-libvirt/action/wait_till_up.rb#L32
> [3] - http://jedi.be/blog/2011/09/13/libvirt-fog-provider/
> 
> On Thu, Jan 9, 2014 at 1:27 PM, James  wrote:
> > I would recommend you read my earlier vagrant related articles. I
> > mention that issue in
> > http://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/
> >
> > Sadly the author liked the patch, but seems to have been awol since.
> > If I don't hear anything shortly, I'll fork it. The patch should be
> > easy to apply manually for now.
> >
> > Thanks for the comments, let me know if you find any other issues.
> >
> > James
> >
> > On Thu, Jan 9, 2014 at 2:07 AM, Kaushal M  wrote:
> >> vagrant-cachier complains about 'mount_options'. Your patch hasn't
> >> been accepted into the upstream vagrant-cachier yet.
> >> It'll be better for other users if you could add some steps for applying
> >> your patch.
> >>
> >> ~kaushal
> >>
> >> On Thu, Jan 9, 2014 at 9:35 AM, James  wrote:
> >>> Okay, It's ready for you to try!
> >>>
> >>> You don't need to know anything about Puppet, and it's easy to follow
> >>> along if you're not even comfortable with a shell.
> >>>
> >>> https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
> >>>
> >>> This has been quite a large amount of work to "get right", and I hope

Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-21 Thread James
On Tue, 2014-01-21 at 03:53 -0500, James wrote:
> > 
> > I had to modify your patch for vagrant-cachier as the plugin has
> been
> > updated to 0.5.1. After this, I tried to get the puppet vm up and
> > failed.
> I could simply exclude vagrant-cachier from the default setup. It's
> only
> an optimization, and isn't required.
> I'll patch this to change the default shortly.

Try the latest master. It includes this patch:

https://github.com/purpleidea/puppet-gluster/commit/bee4993d4304730da27424dbdc73819b99d8ab5b

Let me know if it fixes your issue.

James





Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-21 Thread James
Top posting because it's bed time...

I just pushed a HUGE bunch of changes... They are mostly new features,
rather than some magical fix to make vagrant run everywhere...

In any case, it adds in easy client support. So you can now do:

$ vagrant up puppet
$ sudo -v && vagrant up annex{1..4} --gluster-replica=2 --gluster-clients=2 --no-parallel
$ # wait until the volume has been started... (maybe there's an easy ssh spin lock someone can write)
$ vagrant up client{1..2} # boom, now you have two clients to throw workloads at

The default mount path is /mnt/gluster/. If this should be different,
let me know.
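
The "ssh spin lock" could be as simple as polling the volume status (an
untested sketch; host and volume names are from the example above):

```shell
until vagrant ssh annex1 -c 'sudo gluster volume info puppet' | grep -q 'Status: Started'; do
    sleep 5
done
```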

Cheers,
James


On Tue, Jan 21, 2014 at 4:13 AM, James  wrote:
> On Tue, 2014-01-21 at 03:53 -0500, James wrote:
>> >
>> > I had to modify your patch for vagrant-cachier as the plugin has
>> been
>> > updated to 0.5.1. After this, I tried to get the puppet vm up and
>> > failed.
>> I could simply exclude vagrant-cachier from the default setup. It's
>> only
>> an optimization, and isn't required.
>> I'll patch this to change the default shortly.
>
> Try the latest master. It includes this patch:
>
> https://github.com/purpleidea/puppet-gluster/commit/bee4993d4304730da27424dbdc73819b99d8ab5b
>
> Let me know if it fixes your issue.
>
> James
>



Re: [Gluster-devel] Glusterfs SSL capability

2014-01-23 Thread James
Hi there,

Just saw these notes on Gluster+SSL:

https://lists.gnu.org/archive/html/gluster-devel/2013-05/msg00139.html

Questions:

1) How permanent are these interfaces? Is this expected to be unchanged
(and will it be the recommended method) for future GlusterFS versions ?
What about in 4.0 ?

I ask because if so, this looks like something which would be elegant to
add to Puppet-Gluster, and I'm pretty sure all the user would have to do
is say ssl => true.

2) Can you give me the _exact and full_ openssl command line that you'd
recommend someone run. This way I won't make mistakes or hurt my brain.

Can you also be more specific about which files to concatenate to
produce the glusterfs.ca file, and if it's a literal cat * > or if you
need to use a special program to merge them.
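
To be concrete, here is what I would guess at (a sketch only; the key
size, the CN, and the idea that glusterfs.ca is a plain `cat` of every
peer's .pem are exactly the assumptions I'd like confirmed):

```shell
dir=$(mktemp -d)
# one private key and self-signed certificate per node:
openssl genrsa -out "$dir/glusterfs.key" 2048
openssl req -new -x509 -key "$dir/glusterfs.key" \
    -subj '/CN=Anyone' -days 365 -out "$dir/glusterfs.pem"
# is glusterfs.ca really just every peer's .pem concatenated like this?
cat "$dir/glusterfs.pem" "$dir/glusterfs.pem" > "$dir/glusterfs.ca"
```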

3) Are the /etc/ssl/glusterfs.* paths configurable (without re-compile)
somehow?

4) Does this change any of the ports that are used anywhere?

5) Anything else you think I should know?

Thanks!
James





[Gluster-devel] Gluster Volume Properties

2014-01-24 Thread James
Hi there,

I've been taking another look at some of gluster volume properties. If
you know of some that are missing from my list or have incorrect
entries, please let me know! My list is here:

https://github.com/purpleidea/puppet-gluster/blob/master/manifests/volume/property/data.pp#L18

This curated list makes it easy to manage properties with
Puppet-Gluster. The list isn't complete though. This is where I need
your help!
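
If you're cross-checking the list against a running node, glusterd can
dump its own view of the settable options:

```shell
# prints every option it knows, with defaults and descriptions
gluster volume set help
```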


Semiosis: The latest git master adds support for the options you
requested in gluster volume properties. It also contains this patch:
https://github.com/purpleidea/puppet-gluster/commit/221e3049f04fb608d013d7092bcfb258010b2d6d

which adds support for adding the rpc-auth-allow-insecure option to the
glusterd.vol file. You can use these two together like:

class { '::gluster::simple':
    volume => 'yourvolumename',
    rpcauthallowinsecure => true,
}

gluster::volume::property { 'yourvolumename#server.allow-insecure':
    value => on,    # you can use true/false, on/off
}

which should hopefully make testing your libgfapi-jni easy!

If anyone has any other questions, please let me know.

James
@purpleidea (twitter / irc)
https://ttboj.wordpress.com/





Re: [Gluster-devel] gluster SSL support

2014-01-26 Thread James
On Sun, Jan 26, 2014 at 8:07 AM, Zbyszek Żółkiewski
 wrote:
> Thank you for the response. Yes, I have finally managed to enable SSL mode;
> it seems like gluster only takes specific certificates into account, i.e. ones
> generated with CN "Anyone" - I am not sure why it did not work with the
> previously generated certs (as I mentioned, it hung on verify).

Am I to understand that you were _only_ able to make this work with
"CN=Anyone" ?



Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread James
On Mon, Jan 27, 2014 at 12:04 AM, Kaushal M  wrote:
> I finally got vagrant working.
Great! Now the fun stuff starts!

> Had to roll back to v1.3.5 to get it
As suspected. I've updated my original blog post to make it more clear
to only use 1.3.5

> working. I can get the puppet vm up, but the provisioning step always
> fails with a puppet error [1]. Puppet complains about not finding a
> declared class.
Interesting! I don't see this error. The puppet server always builds
properly for me. Can you verify that you did these steps:

1) git clone --recursive https://github.com/purpleidea/puppet-gluster
2) cd puppet-gluster/vagrant/gluster/
3) vagrant up puppet

In particular can you verify that you used --recursive and that the
puppet-gluster/vagrant/gluster/puppet/modules/ directory contains a
puppet/ folder?

Other than those things, I'm looking into this too... It seems some of
the time, I've been getting similar errors too. I'm not quite sure
what's going on. I got the feeling that maybe the puppet server didn't
have enough memory. Now I'm not sure. Maybe there's a libvirt
networking bug? Do you get the same errors when you repeat the
process, or different errors each time?

> This happens with and without vagrant-cachier. I'm
> using the latest box (uploaded on 22-Jan).



Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-27 Thread James
On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M  wrote:
> This. I hadn't done a recursive clone. I cloned the repo correctly
> again and everything works now. The vms are being provisioned as I
> type this. Finally, time to test puppet-gluster.


Awesome! I'm actually recording a screencast of the whole process. I
figured I might as well, in the hope that visualizing the process helps
others!

I'll post shortly, it might help with any other confusion.

Cheers,
James



Re: [Gluster-devel] [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!

2014-01-28 Thread James
Despite some recording issues, it's done:

http://ttboj.wordpress.com/2014/01/27/screencasts-of-puppet-gluster-vagrant/

James

On Mon, Jan 27, 2014 at 10:11 AM, John Mark Walker  wrote:
>
>
> - Original Message -
>> On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M  wrote:
>> > This. I hadn't done a recursive clone. I cloned the repo correctly
>> > again and everything works now. The vms are being provisioned as I
>> > type this. Finally, time to test puppet-gluster.
>>
>>
>> Awesome! I'm actually recording a screencast of the whole process. I
>> figured, I might as well in the hopes visualizing the process helps
>> others!
>>
>> I'll post shortly, it might help with any other confusion.
>
>
> Perfect! That will be awesome. Post on Youtube - I'll make sure Josephus Ant 
> pulls it in :)
>
> -JM
>
>
>>
>> Cheers,
>> James
>>
>> ___
>> Gluster-devel mailing list
>> Gluster-devel@nongnu.org
>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>>



Re: [Gluster-devel] libgfapi threads

2014-01-30 Thread James
On Thu, Jan 30, 2014 at 4:15 PM, Paul Cuzner  wrote:
> Wouldn't the thread count relate to the number of bricks in the volume,
> rather that peers in the cluster?


My naive understanding is:

1) Yes, you should expect to see one connection to each brick.

2) Some of the "scaling gluster to 1000 nodes" work might address the
issue, so as to avoid 1000 * (bricks per server) connections.

But yeah, Kelly: I think you're seeing the right number of threads.
This is outside of my expertise, though.

James



Re: [Gluster-devel] [Gluster-users] 3.5.0beta2 RPMs are available now

2014-01-31 Thread James
On Mon, Jan 27, 2014 at 2:55 PM, Kaleb S. KEITHLEY  wrote:
>
> 3.5.0beta2 RPMs for el6, el7, fedora 19, fedora 20, and fedora 21 (rawhide)
> are available at
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.0beta2/

Great...

As requested to fill out:

https://docs.google.com/spreadsheet/ccc?key=0ApI11Dqpup82dGZCYi1PT3pHdTNFYlZXMEdBOGdweUE#gid=0

Puppet-Gluster finished its second test... FWIW the magic commands are:

sudo -v && time vup puppet --gluster-replica=2 --gluster-clients=2 --gluster-version='3.5.0-0.4.beta2.el6'
sudo -v && time vup annex{1..4} --no-parallel
sudo -v && time vup client{1,2} --no-parallel

Test results: My laptop is very very hot.
First run, it didn't come up correctly.
Second run, it seemed to.

I suspect there is something different/strange/buggy with the way the
Gluster peering (state machine?) is working.
One bug that may be related is:
https://bugzilla.redhat.com/show_bug.cgi?id=1051992

Other than that, it seems the --no-parallel flag is required for now.
I suspect puppet is misbehaving as usual and has a race condition, a
weird OOM-killer interaction, and/or is just plain buggy and shows
unrelated error messages when something goes wrong. IOW, it sometimes
fails when running in parallel!

James



[Gluster-devel] Testing replication and HA

2014-02-10 Thread James
It's been a while since I did some gluster replication testing, so I
spun up a quick cluster *cough, plug* using puppet-gluster+vagrant (of
course) and here are my results.

* Setup is a 2x2 distributed-replicated cluster
* Hosts are named: annex{1..4}
* Volume name is 'puppet'
* Client vm's mount (fuse) the volume.

* On the client:

# cd /mnt/gluster/puppet/
# dd if=/dev/urandom of=random.51200 count=51200
# sha1sum random.51200
# rsync -v --bwlimit=10 --progress random.51200 root@localhost:/tmp

* This gives me about an hour to mess with the bricks...
* By looking on the hosts directly, I see that the random.51200 file is
on annex3 and annex4...

* On annex3:
# poweroff
[host shuts down...]

* On client1:
# time ls
random.51200

real0m42.705s
user0m0.001s
sys 0m0.002s

[hangs for about 42 seconds, and then returns successfully...]

* I then power annex3 back up, and then pull the plug on annex4. The
same sort of thing happens... It hangs for 42 seconds, but then
everything works as normal. This is of course the cluster timeout
value, and the answer to life, the universe, and everything.

Question: Why doesn't glusterfs automatically flip over to using the
other available host right away? If you agree, I'll report this as a
bug. If there's a way to do this, let me know.

Apart from the delay, glad that this is of course still HA ;)

Cheers,
James
@purpleidea (twitter/irc)
https://ttboj.wordpress.com/





Re: [Gluster-devel] [Gluster-users] Testing replication and HA

2014-02-11 Thread James
Thanks to everyone for their replies...

On Tue, Feb 11, 2014 at 2:37 AM, Kaushal M  wrote:
> The 42 second hang is most likely the ping timeout of the client translator.
Indeed I think it is...

>
> What most likely happened was that, the brick on annex3 was being used
> for the read when you pulled its plug. When you pulled the plug, the
> connection between the client and annex3 isn't gracefully terminated
> and the client translator still sees the connection as alive. Because
> of this the next fop is also sent to annex3, but it will timeout as
> annex3 is dead. After the timeout happens, the connection is marked as
> dead, and the associated client xlator is marked as down. Since afr
> now know annex3 is dead, it sends the next fop to annex4 which is
> still alive.
I think this sounds right... My thought was that maybe Gluster could
do better somehow. For example, once a short timeout (say 1 sec)
passes, it could immediately start looking for a different brick to
continue from. That way a routine failover wouldn't interrupt activity
for 42 seconds. Maybe this is a feature that could be part of the
new-style replication?

>
> These kinds of unclean connection terminations are only handled by
> request/ping timeouts currently. You could set the ping timeout values
> to be lower, to reduce the detection time.
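
(For reference, tuning that is a single volume option; 42 seconds is
the default, and 'puppet' is my test volume name:)

```shell
gluster volume set puppet network.ping-timeout 10
```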
The reason I don't want to set this value significantly lower is that
in the case of a _real_ disaster or high-load condition, I want to
have the 42 seconds to give things a chance to recover without having
to kill the "in process" client mount. So it makes sense to keep it
like this.

>
> ~kaushal

Cheers,
James



Re: [Gluster-devel] custom ssl file locations

2014-02-17 Thread James
On Mon, Feb 17, 2014 at 5:35 PM, Banio  wrote:
> Any help would be much appreciated.

IIRC, I was told the paths were only configurable at compile time, but
I haven't verified this myself.



Re: [Gluster-devel] custom ssl file locations

2014-02-17 Thread James
On Mon, Feb 17, 2014 at 7:30 PM, Banio  wrote:
> This thread:
> http://lists.gnu.org/archive/html/gluster-devel/2013-05/msg00139.html makes
> me think you can configure them at any time.

I guess so, although this one: (#3)
https://lists.gnu.org/archive/html/gluster-devel/2014-01/msg00183.html
says otherwise :P

Good luck!
James



Re: [Gluster-devel] Proposal for some RPM packaging changes

2014-02-18 Thread James
On Tue, Feb 18, 2014 at 9:13 AM, Niels de Vos  wrote:
> 1. glusterfs-server (glusterd) and glusterfs-geo-replication depend on
>each other. It is not possible to install glusterfs-server without
>glusterfs-geo-replication, or the other way around.

The only question I would have is: why does glusterfs-server depend on
geo-replication?



Re: [Gluster-devel] Proposal for some RPM packaging changes

2014-02-18 Thread James
On Tue, Feb 18, 2014 at 11:41 AM, Kaleb KEITHLEY  wrote:
>
> Actually, it doesn't.
>
> glusterfs-geo-replication _does_ require glusterfs-server.
>
> glusterfs-server _does not_ require glusterfs-geo-replication.

This makes sense, and is what I'd expect... And with that, as long as
it's not pedantic, I'd recommend they stay as separate packages. It
makes total sense that geo-replication pulls in glusterfs-server, but
not the other way around.



Re: [Gluster-devel] custom ssl file locations

2014-02-18 Thread James
On Tue, Feb 18, 2014 at 12:08 PM, Jeff Darcy  wrote:
> To apply these, you have to forego "mount -t glusterfs" and mount.glusterfs
> in favor of running the "glusterfs" command directly with "--xlator-option"
> like this:
>
>glusterfs --volfile-server=any_server --volfile-id=fubar \
>   --xlator-option fubar-client-N.transport.socket.ssl-own-cert=xxx \
>   ...
>
> Unfortunately this gets a bit tedious, because you have to add each option
> for each brick from 0 to N-1.  You'll probably want to wrap that in a script,
> or use Puppet (hi James).  As far as I can tell, though, the option does get
> through and is used to make the connections.

I'm waiting on the new (even awesome-er) SSL support coming in 3.6 (hi
Jeff) before I add the SSL features to puppet-gluster [1,2]. As an
aside, in an upcoming version of puppet-gluster I'm switching the
/etc/fstab based mounting of volumes to use an exec{} based
"glusterfs" command, so that we can support other options such as
these ones if they're needed.
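
(For anyone scripting this by hand in the meantime, a wrapper along the lines Jeff suggests might look like the sketch below. The volume name, brick count, and certificate path are placeholders, not values from this thread, and the final command is only echoed rather than executed.)

```shell
# Hypothetical wrapper: emit one --xlator-option per brick 0..N-1,
# as Jeff describes, then print the resulting glusterfs command.
VOLUME=fubar
NBRICKS=4
CERT=/etc/ssl/glusterfs/own-cert.pem   # placeholder path

opts=""
i=0
while [ "$i" -lt "$NBRICKS" ]; do
    opts="$opts --xlator-option ${VOLUME}-client-${i}.transport.socket.ssl-own-cert=${CERT}"
    i=$((i + 1))
done

# Dry run: print the full command instead of running it.
echo glusterfs --volfile-server=any_server --volfile-id="$VOLUME" $opts
```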

If anyone wants this sooner, or wants SSL support for 3.3/3.4 in
puppet-gluster, let me know.

James

[1] https://github.com/purpleidea/puppet-gluster
[2] https://forge.gluster.org/puppet-gluster # (hi johnmark)



Re: [Gluster-devel] GlusterFS unit test framework

2014-02-27 Thread James
Sweet...

FWIW, I think it would be fairly easy to add in support to have the
infrastructure use Puppet-Gluster to automatically build and test a
few different complete clusters. I'm happy to help out if someone
wants to undertake this, or work on it outright if someone wants to
sponsor it.

With more engineering this could even test things like: "build
cluster, add files, measure performance, add hosts, expand cluster,
change files, re-measure performance" ...

Cheers,
James

On Thu, Feb 20, 2014 at 2:00 PM, Luis Pabon  wrote:
> Hi all,
> I have uploaded my patch to add unit test support to GlusterFS. The unit
> test framework provides integration with Jenkins and coverage support.  The
> patch is:
>
> http://review.gluster.org/#/c/7145/
>
> * Documentation:
> https://github.com/lpabon/glusterfs/blob/xunit/doc/hacker-guide/en-US/markdown/unittest.md
>
> * Integration with Jenkins:
> http://build.gluster.org/job/glusterfs-unittests/
>
> * Tracker Bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1067059
>
> This is just the start.  If this is accepted, we would need developers to
> start adding unit tests as part of their patches. Imagine, hundreds, no
> wait, thousands of unit tests running on every patch submit :-).
>
> Let me know what you think.
>
> - Luis
>



Re: [Gluster-devel] GlusterFS unit test framework

2014-02-27 Thread James
On Thu, Feb 27, 2014 at 10:23 PM, Jeff Darcy  wrote:
> We're definitely going to need more test servers.

So I had this dream... To build a test cluster on ARM servers... I've
saved up a bunch of cheap USB keys for storage... Anyone have a rack
of ARM servers or a bunch of Beaglebone's or similar? Could be a cool
lightweight way to test GlusterFS... Could be a cool way to provide
better scaling/performance for smaller (TB wise) clusters...

Someone with hardware, ping me :)

James



Re: [Gluster-devel] Proposal: GlusterFS Quattro

2014-03-07 Thread James
On Fri, Mar 7, 2014 at 11:26 AM, Justin Clift  wrote:
> Port usage at the moment is such a pain (and not just for scalability). :/

I agree! The only sane way I know to manage the whitelist of ports is
with Puppet-Gluster... It took me a while to get this code right, so
hopefully it can be of benefit to you.



Re: [Gluster-devel] [Gluster-users] RFC: Gluster bundles for 3.5

2014-03-10 Thread James
On Mon, Mar 10, 2014 at 4:16 PM, John Mark Walker  wrote:
> Greetings,
>
> As you may or may not be aware, there are quite a few maturing projects that 
> may warrant inclusion in a GlusterFS 3.5 bundle:
>
> - gluster-swift
> - gluster-hdfs/gluster-hadoop
> - oVirt 3.4 (for RESTful API and management GUI)
> - pmux - for distributed MapReduce jobs
> - gflocator - implements some measure of data locality, used in pmux
> - glubix - for monitoring with zabbix
> - puppet-gluster - puppet module for deploying GlusterFS

Obviously I agree ;)

I would also nominate libgfapi-python [1] and glupy [2].

Although I wouldn't expect this (though I'd certainly welcome it), if
major breakages in GlusterFS break Puppet-Gluster, that could be treated
as a blocker for the change. I don't really see this being an issue,
though, since the user-facing --xml interface is stable.

>
> I mention these in particular because there are actual, real-life users for 
> each of them. If there are other projects out there that you feel have been 
> overlooked, please nominate them.

Thanks for saving me from doing this :)

>
>
> How do we distribute this software? I've long thought that the easiest way to 
> release them is as packages that we make available for download with each 
> major release. We could call the major releases "Gluster Software 
> Distribution" or "Gluster++" or some other witty name that makes it clear 
> that it's more than just GlusterFS.

The amazing kkeithley [3]* might be helping me rpm-ify Puppet-Gluster
this week. This will serve a few use cases including the above, the
CentOS storage SIG, and maybe even RHS if they ever wanted to use
Puppet-Gluster.

>
>
> Please comment on both the projects I've listed above, as well as how we 
> should go about making this part of a release.
>
> -JM

Cheers,
James
@purpleidea (twitter / irc)
https://ttboj.wordpress.com/

[1] https://github.com/gluster/libgfapi-python/
[2] https://github.com/jdarcy/glupy/
[3]* kkeithley is even cooler in person than on IRC





Re: [Gluster-devel] [Gluster-users] RFC: Gluster bundles for 3.5

2014-03-10 Thread James
On Mon, Mar 10, 2014 at 5:46 PM, Kaleb KEITHLEY  wrote:
> James---
>
> Independent of that we can proceed with packaging puppet-gluster for Fedora
> and download.gluster.org without it necessarily being part of the gluster
> ecosystem or on the Forge.

Sounds good. I'd love to maintain the packaging .spec upstream. I
think it would help interested parties.

>
> You should get a FAS (Fedora Account System) account as the first step for
> packaging in Fedora. We can start the packaging review process and work on
> getting you sponsored as a packager.

I'm purpleidea over there... Big surprise, my handle wasn't taken :)

>
> On the glupy and python-gfapi bindings if I'm not very much mistaken they're
> in glusterfs already or will emerge there eventually.

Not my projects, just nominated them, but I (sadly) don't actually use
them at the moment :P



Re: [Gluster-devel] [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community

2014-03-18 Thread James
On Tue, 2014-03-18 at 12:35 +0530, Kaushal M wrote:
> I had a discussion with some developers here in the office regarding
> this. We created a list of ideas which we thought could be suitable
> for student projects. I've added these to [1]. But I'm also putting
> them on here for more visibility.
> 
> (I've tried to arrange the list in descending order of difficulty as I find 
> it)
> 
> . Glusterd services high availability
> Glusterd should restart the processes it manages, bricks, nfs
> server, self-heal daemon & quota daemon, whenever it detects they have
> died.

It might make sense to think about the interplay between this and the
systemd feature set... 

> . glusterfsiostat - Top like utility for glusterfs
> These are client side tools which will display stats from the
> io-stats translator. I'm not currently sure of the difference between
> the two.
> . ovirt gui for stats
> Have pretty graphs and tables in ovirt for the GlusterFS top and
> profile commands.
> . monitoring integrations - munin others.
> The more monitoring support we have for GlusterFS the better.
> . More compression algorithms for compression xlator
> The onwire compression translator should be extended to support
> more compression algorithms. Ideally it should be pluggable.
> . cinder glusterfs backup driver
> Write a driver for cinder, a part of openstack, to allow backup
> onto GlusterFS volumes
> . rsockets - sockets for rdma transport
> Coding for RDMA using the familiar socket api should lead to a
> more robust rdma transport
> . data import tool
> Create a tool which will allow importing already-existing
> data in the brick directories into the gluster volume. This is most
> likely going to be a special rebalance process.
> . rebalance improvements
> Improve rebalance performance.
> . Improve the meta translator
> The meta xlator provides a /proc like interface to GlusterFS
> xlators. We could further improve this and make it a part of the
> standard volume graph.
> . geo-rep using rest-api
> This might be suitable for geo replication over WAN. Using
> rsync/ssh over WAN isn't too nice.
> . quota using underlying fs quota
> GlusterFS quota is currently maintained completely in GlusterFS's
> namespace using xattrs. We could make use of the quota capabilities of
> the underlying fs (XFS) for better performance.
> . snapshot pluggability
> Snapshot should be able to make use of snapshot support provided
> by btrfs for example.

This would be very useful :)

> . compression at rest
> Lessons learnt while implementing encryption at rest can be used
> with the compression at rest.
> . file-level deduplication
> GlusterFS works on files. So why not have dedup at the level of files as
> well.
> . composition xlator for small files
> Merge smallfiles into a designated large file using our own custom
> semantics. This can improve our small file performance.
> . multi master geo-rep
> Nothing much to say here. This has been discussed many times.
> 
> Any comments on this list?
> ~kaushal
> 
> [1] http://www.gluster.org/community/documentation/index.php/Projects
> 
> On Tue, Mar 18, 2014 at 9:07 AM, Lalatendu Mohanty  
> wrote:
> > On 03/13/2014 11:49 PM, John Mark Walker wrote:
> >>
> >> - Original Message -
> >>
> >>> Welcome, Carlos.  I think it's great that you're taking initiative here.
> >>
> >> +1 - I love enthusiastic fresh me^H^H^H^H^H^H^H^Hcommunity members! :)
> >>
> >>
> >>> However, it's also important to set proper expectations for what a GSoC
> >>> intern
> >>> could reasonably be expected to achieve.  I've seen some amazing stuff
> >>> out of
> >>> GSoC, but if we set the bar too high then we end up with incomplete code
> >>> and
> >>> the student doesn't learn much except frustration.
> >>
> >> This. The reason we haven't really participated in GSoC is not because we
> >> don't want to - it's because it's exceptionally difficult for a project of
> >> our scope, but that doesn't mean there aren't any possibilities. As an
> >> example, last year the Open Source Lab at OSU worked with a student to
> >> create an integration with Ganeti, which was mostly successful, and I think
> >> work has continued on that project. That's an example of a project with the
> >> right scope.
> >
> >
> > IMO integration projects are ideal fits for GSoc. I can see some information
> > in Trello back log i.e. under "Ecosystem Integration". But not sure of their
> > current status. I think we should again take look on these and see if
> > something can be done through GSoc.
> >
> >
>  3) Accelerator node project. Some storage solutions out there offer an
>  "accelerator node", which is, in short, a, extra node with a lot of RAM,
>  eventually fast disks (SSD), and that works like a proxy to the regular
>  volumes. active chunks of files are moved there, logs (ZIL style) are
>  recorded on fast media, among other things. There is NO active pro

Re: [Gluster-devel] DHT idea: rebalance-specific layout

2014-03-24 Thread James
On Mon, Mar 24, 2014 at 10:08 AM, Jeff Darcy  wrote:
> I was talking to a user about my size-weighted (or optionally
> free-space-weighted) rebalance script.  This led to thinking about ways to
> bring a system back into balance without migrating any old data, as some of
> our users already do.  Here's the example I was using.
>
> * Four existing 1TB bricks, which are 90% full.
>
> * One new 2TB brick, which is empty.
>
> Therefore, total free space is 2.4TB, of which the new brick has 2.0TB. If
> we set up the layouts so that the new brick has 5/6 of the hash space then
> as new files are added they should all reach 100% full at the same time
> without ever needing to migrate any old data.  Yay.
>
> Unfortunately, there's still a problem.  For these kinds of users (e.g.
> CDNs) the newest data also tends to remain hottest.  What happens when they
> want to retire some of their oldest hardware?  That *does* involve migrating
> old data, and the load for that will disproportionately fall on the newest
> servers which really should be spending as much of their time as possible
> serving new content.  That's not good.
>
> So (finally) here's the idea.  Have a *separate* set of layout values that
> are used specifically for rebalance, so that we can rebalance data one way
> even as new files are placed another way.  Let's consider a slightly
> different example.

I think this is a proper clever idea. (Assuming it would work.)

One question: would(n't) there be a chance for "thrashing" (maybe
there's a better word) where new files are getting put on brick X, but
the rebalance is then trying to move them to brick Y? (Well maybe call it a
single thrash, not thrashing.)

As a side note, I don't see this as a high priority feature that I'm
interested in.
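
(Jeff's first example above can be sanity-checked with a few lines. This is only a sketch of proportional-to-free-space weighting, not DHT's actual layout code:)

```python
# Sketch: assign hash-space weights proportional to free space,
# so every brick reaches 100% full at the same time.
def layout_weights(free_bytes):
    total = sum(free_bytes)
    return [f / total for f in free_bytes]

# Four 1TB bricks at 90% full (0.1TB free each) plus one empty 2TB brick:
weights = layout_weights([0.1] * 4 + [2.0])
# The new brick gets 2.0/2.4 = 5/6 of the hash space, matching the example.
print(weights[-1])
```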

>
> * 4 ancient 1TB bricks, 75% full
>
> * 16 medium-age 1.5TB bricks, also 75% full
>
> * 4 new 2TB bricks, empty
>
> Here's one possible way to use dual layouts:
>
> * currently 8TB free on the medium bricks, goal is 5TB
>
> * 4TB free on the new bricks
>
> * set regular layout to 44% new, 55% medium
>
> * set rebalance layout to 100% medium
>
> This way 44% of the new files but *none* of the files from the oldest bricks
> will flow toward the newest bricks.  100% of that traffic will be from the
> oldest bricks to the medium ones, and shouldn't affect the newest machines
> at all.  This would all be a lot easier if we had layout inheritance or
> default layouts instead of every single directory with its own layout, but
> we can probably find ways to deal with that.
>
> Any reactions?
>
>
>
>



Re: [Gluster-devel] Introducing a new option to gluster peer command.

2014-03-31 Thread James
On Mon, Mar 31, 2014 at 10:29 PM, Nagaprasad Sathyanarayana
 wrote:
> In the current design, gluster peer probe does the job of both probing the
> server and adding it to trusted pool. Once the server is added to trusted
> pool, it can be detached using the peer detach command.
>
> Wondering if it makes sense to bring in gluster peer attach command to add
> the server to trusted pool. The peer probe command will only prove the
> server mentioned and tells if it is reachable. It can also be enhanced to do
> some diagnostics such as probing specific ports.

Do I understand correctly:

gluster peer attach would attach the probing server into the pool it
is probing, correct?
If so, and if it is already a member of a pool, could you join two
different pools together?
I don't know what the gluster internals implications are, but as long
as I understand this correctly, then I think it would benefit the
management side of glusterfs.

It would certainly make peering more decentralized, as long as double
peering or running a simultaneous peer attach and peer probe don't
cause issues. This last point is very important :)


Cheers,
James



Re: [Gluster-devel] Regression tests: Should we test non-XFS too?

2014-04-05 Thread James
On Sat, Apr 5, 2014 at 11:37 AM, Justin Clift  wrote:
> So, are people interested in us running the tests on other
> brick filesystem types, such as ext4? (or whatever else)

Yes, absolutely, but I think it's btrfs that will matter, not ext4.



[Gluster-devel] Puppet-Gluster+ThinP

2014-04-09 Thread James
Okay,

Here's a first draft of puppet-gluster w/ thin-p. This patch includes
documentation updates too! (w00t!)

https://github.com/purpleidea/puppet-gluster/tree/feat/thinp

FYI: I'll probably rebase this branch.
FYI: Somewhat untested. Read the commit message.

Comments welcome :)

I'm most interested to hear whether everyone is pleased with the way I
run the thin-p lv command. I think this makes the most sense, but let me
know if anyone has improvements. Also I'd love to hear about what the
default values for the parameters should be, but that's a one line
patch, so no rush for me.

Cheers,
James







Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-09 Thread James
On Wed, 2014-04-09 at 11:56 -0400, Keith Schincke wrote:
> Here is a quick set of reviews.
> 
> What package and distro version includes the lvmthin man page?
I think it's still in git, but it's available online.

> On line 653 of Documentation.md, you refer to "man -7 thin". This should 
> be "man -7 lvmthin"
Good catch, thanks!

> The same is done in the README.md. Never mind, I see the symlink.
Yeah. This makes the github people happy.

> From line 178 to 208, you are doing a lot of work to build the thin lvm
> command line. Should you wrap this in an if $lvm_thinp conditional? This
> will keep all the thin provisioning code and a possible vgs system call
> from occurring if $lvm_thinp is not true.
Actually it's all declarative, so it's safe without it. The conditional
is here at 211:

$lvm_lvcreate = $lvm_thinp ? {
    true    => "${lvm_thinp_lvcreate}",
    default => "/sbin/lvcreate --extents 100%PVS -n ${lvm_lvname} ${lvm_vgname}",
}
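
(Editorial aside: for readers new to thin provisioning, a rough idea of the LVM commands a string like `${lvm_thinp_lvcreate}` would expand to, per lvmthin(7). Device names, VG/LV names, and sizes here are all made-up examples, and the pool is deliberately sized below 100% of the VG to leave headroom:)

```shell
pvcreate /dev/sdb
vgcreate vg_gluster /dev/sdb
# Thin pool sized to part of the VG, leaving headroom for snapshots/metadata:
lvcreate --type thin-pool -l 80%VG -n thinpool vg_gluster
# Thin (virtually-sized) LV for the brick, carved from the pool:
lvcreate --type thin -V 1T --thinpool thinpool -n lv_brick1 vg_gluster
mkfs.xfs -i size=512 /dev/vg_gluster/lv_brick1
```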


> 
> Could you also update your lines 218 - 220 to this:
> 
> $dev2 = $lvm ? {
>  true => "/dev/${lvm_vgname}/${lvm_lvname}",
>  default => "${dev1}",
> }
> 
> The long "if" with a short/trivial "else" is almost an "oh, by the way" 
> kind of statement. It can be easy to overlook. The separate conditional 
> can help an other reviewer follow along more easily.
Yeah, I like this actually. Thanks for the idea.

Branch updated (rebased):
https://github.com/purpleidea/puppet-gluster/tree/feat/thinp

Commit at:
https://github.com/purpleidea/puppet-gluster/commit/d204fe5c4b80f0bc6d7850da6ccc90cb695c5873

Also I added a small warning if someone enables thinp but disables LVM.

James

> 
> Keith
> 
> 
> On 04/09/2014 11:13 AM, James wrote:
> > Okay,
> >
> > Here's a first draft of puppet-gluster w/ thin-p. This patch includes
> > documentation updates too! (w00t!)
> >
> > https://github.com/purpleidea/puppet-gluster/tree/feat/thinp
> >
> > FYI: I'll probably rebase this branch.
> > FYI: Somewhat untested. Read the commit message.
> >
> > Comments welcome :)
> >
> > I'm most interested to hear whether everyone is pleased with the way I
> > run the thin-p lv command. I think this makes the most sense, but let me
> > know if anyone has improvements. Also I'd love to hear about what the
> > default values for the parameters should be, but that's a one line
> > patch, so no rush for me.
> >
> > Cheers,
> > James
> >
> >
> >
> 





Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-09 Thread James
On Wed, 2014-04-09 at 12:54 -0400, Keith Schincke wrote:
> James,
> 
> Here is an other small one:
> The number of extents used on the lvcreate should be a changeable
> value.
> The RHS 2.1 Admin guide recommends leaving 15% to 20% of the space
> available for future snapshotting. There may also be other times
> where the admin does not wish to use all of the remaining space
> within the VG.
Agreed. I'm waiting to hear from someone what the recommended values
should be.

I'm not sure how this affects the patch. What do you think should be
different? Keep in mind, I'm new to understanding LVM thin-p so I may
have overlooked something.


> 
> Keith





Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-09 Thread James
On Wed, 2014-04-09 at 18:18 -0400, Paul Cuzner wrote:
> I'm really interested in the thinp best practices too. gluster-deploy
> has had thinp support for a while now
Can you paste the list of commands that gluster-deploy runs to set up
physical storage, including thin-p LVM?

>  - and I asked the question about best practices a while back - but
> nothing came back.. 
> 
> Hopefully - your timing is better than mine!
> 
> cc'ing Rajesh since the thinp is all about snapshot enablement. 





Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-10 Thread James
On Thu, 2014-04-10 at 00:28 -0400, Paul Cuzner wrote:
> I can do that :) 
lol... okay, what are they??

> 
> Perhaps this could be a topic of conversation at the hackathon on
> Sunday in SF? 





[Gluster-devel] Puppet-Gluster Bugzilla

2014-04-11 Thread James
Hi folks,

I hope there are no objections, but I requested a bug tracker for
Puppet-Gluster, and they stuck it under the GlusterFS project. I did
this because many new features and tweaks were requested, and I was
loosing track and figured I'd experiment more with BZ.

Now, if you want a feature or find a bug, I'll ask that you open a
ticket, and when I patch it, I'll ask for testing and ideally, at least
one ACK before merging to master and closing the bug.

New bugs here:
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=puppet-gluster

If you have issues to raise, although an email to me is probably a good
first point of contact too.

HTH, and let me know if you have any feedback about this process.

James

PS: The good news is that Puppet-Gluster has lots of new magic in git
master-- I haven't blogged about it all yet, but most patches typically
have documentation updates too.

PPS: Special mention if you can help me patch:
https://github.com/pradels/vagrant-libvirt/issues/162





Re: [Gluster-devel] Puppet-Gluster Bugzilla

2014-04-12 Thread James
On Sat, 2014-04-12 at 07:16 +0100, Justin Clift wrote:
> On 12/04/2014, at 5:37 AM, James wrote:
> > Hi folks,
> > 
> > I hope there are no objections, but I requested a bug tracker for
> > Puppet-Gluster, and they stuck it under the GlusterFS project. I did
> > this because many new features and tweaks were requested, and I was
> > losing track and figured I'd experiment more with BZ.
> > 
> > Now, if you want a feature or find a bug, I'll ask that you open a
> > ticket, and when I patch it, I'll ask for testing and ideally, at least
> > one ACK before merging to master and closing the bug.
> > 
> > New bugs here:
> > https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=puppet-gluster
> > 
> > If you have issues to raise, although an email to me is probably a good
> > first point of contact too.
> > 
> > HTH, and let me know if you have any feedback about this process.
> 
> Is there value in adding this to Gerrit, so people can use the Gluster review
> process for it as well? :)
Actually, I think no. Puppet-Gluster is still small, (just a hobby,
won't be big and professional like gnu). If it gets out of hand I'll
reconsider the need.

Cheers!
James

> 
> + Justin
> 
> --
> Open Source and Standards @ Red Hat
> 
> twitter.com/realjustinclift
> 





Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-20 Thread James
On Sun, Apr 20, 2014 at 7:59 PM, Ric Wheeler  wrote:
> The amount of space you set aside is very much workload dependent (rate of
> change, rate of deletion, rate of notifying the storage about the freed
> space).

From the Puppet-Gluster perspective, this will be configurable. I
would like to set a vaguely sensible default though, which I don't
have at the moment.

>
> Keep in mind with snapshots (and thinly provisioned storage, whether using a
> software target or thinly provisioned array) we need to issue the "discard"
> commands down the IO stack in order to let the storage target reclaim space.
>
> That typically means running the fstrim command on the local file system
> (XFS, ext4, btrfs, etc) every so often. Less typically, you can mount your
> local file system with "-o discard" to do it inband (but that comes at a
> performance penalty usually).

Do you think it would make sense to have Puppet-Gluster add a cron job
to do this operation?
Exactly what command should run, and how often? (Again for having
sensible defaults.)
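
(One possible shape for such a job, purely as a sketch: the schedule and brick mount path below are made-up placeholders, not recommendations from this thread.)

```shell
# /etc/cron.d/gluster-fstrim (hypothetical): trim each brick filesystem
# nightly, off-peak, per Ric's suggestion of periodic fstrim runs.
30 3 * * * root /usr/sbin/fstrim -v /bricks/brick1
```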

>
> There is also a event mechanism to help us get notified when we hit a target
> configurable watermark ("help, we are running short on real disk, add more
> or clean up!").
Can you point me to some docs about this feature?
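
(Editorial note: the watermark mechanism Ric describes is commonly surfaced through the dmeventd auto-extend settings in lvm.conf. A sketch, with illustrative values; see lvm.conf(5) for the exact semantics:)

```shell
# /etc/lvm/lvm.conf fragment: dmeventd grows a thin pool by 20%
# once it crosses 80% full (values here are illustrative).
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}
```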

>
> Definitely worth following up with the LVM/device mapper people on how to do
> this best,
>
> Ric

Thanks for the comments. From everyone I've talked to, it seems some
of the answers are still in progress. The good news is, that I'm ahead
of the curve for being ready for when this becomes more mainstream. I
think Paul is in the same position too.

James



Re: [Gluster-devel] Puppet-Gluster+ThinP

2014-04-23 Thread James
On Sun, Apr 20, 2014 at 8:44 PM, Ric Wheeler  wrote:
> On 04/20/2014 05:11 PM, James wrote:
>>
>> On Sun, Apr 20, 2014 at 7:59 PM, Ric Wheeler  wrote:
>>>
>>> The amount of space you set aside is very much workload dependent (rate
>>> of
>>> change, rate of deletion, rate of notifying the storage about the freed
>>> space).
>>
>> From the Puppet-Gluster perspective, this will be configurable. I
>> would like to set a vaguely sensible default though, which I don't
>> have at the moment.
>
>
> This will require a bit of thinking as you have noticed, but let's start
> with some definitions.
>
> The basic use case is one file system backed by an exclusive dm-thinp target
> (no other file system writing to that dm-thinp pool or contending for
> allocation).
>
> The goal is to get an alert in time to intervene before things get ugly, so
> we are hoping to get a sense of rate of change in the file system and how
> long any snapshot will be retained for.
>
> For example, if we have a 10TB file system (presented as such to the user)
> and we write say 500GB of new data/day, daily snapshots will need that space
> for as long as we retain them.  If you write much less (5GB/day), it will
> clearly take a lot less.
>
> The above makes this all an effort to predict the future, but that is where
> the watermark alert kicks in to help us recover from a bad prediction.
>
> Maybe we use a default of setting aside 20% of raw capacity for snapshots
> and set that watermark at 90% full?  For a lot of use cases, I suspect a
> fairly low rate of change and that means pretty skinny snapshots.
>
> We will clearly need to have a lot of effort here in helping explain this to
> users so they can make the trade off for their particular use case.
>
>
>>
>>> Keep in mind with snapshots (and thinly provisioned storage, whether
>>> using a
>>> software target or thinly provisioned array) we need to issue the
>>> "discard"
>>> commands down the IO stack in order to let the storage target reclaim
>>> space.
>>>
>>> That typically means running the fstrim command on the local file system
>>> (XFS, ext4, btrfs, etc) every so often. Less typically, you can mount
>>> your
>>> local file system with "-o discard" to do it inband (but that comes at a
>>> performance penalty usually).
>>
>> Do you think it would make sense to have Puppet-Gluster add a cron job
>> to do this operation?
>> Exactly what command should run, and how often? (Again for having
>> sensible defaults.)
>
>
> I think that we should probably run fstrim once a day or so (hopefully late
> at night or off peak)?  Adding in Lukas who led a lot of the discard work.

I decided I'd kick off this party by writing a patch, and opening a
bug against my own product (is it cool to do that?)
Bug is: https://bugzilla.redhat.com/show_bug.cgi?id=1090757
Patch is: 
https://github.com/purpleidea/puppet-gluster/commit/1444914fe5988cc285cd572e3ed1042365d58efd
Please comment on the bug if you have any advice or recommendations
about fstrim.

Thanks!

>
>
>>
>>> There is also a event mechanism to help us get notified when we hit a
>>> target
>>> configurable watermark ("help, we are running short on real disk, add
>>> more
>>> or clean up!").
>>
>> Can you point me to some docs about this feature?
>
>
> My quick google search only shows my own very shallow talk slides, so let me
> dig around for something better :)
>
>
>>
>>> Definitely worth following up with the LVM/device mapper people on how to
>>> do
>>> this best,
>>>
>>> Ric
>>
>> Thanks for the comments. From everyone I've talked to, it seems some
>> of the answers are still in progress. The good news is, that I'm ahead
>> of the curve for being ready for when this becomes more mainstream. I
>> think Paul is in the same position too.
>>
>> James
>
>
> This is all new stuff - even without gluster on top of it - so this will
> mean hitting a few bumps, I fear.  Definitely worth putting thought into this
> now and working on the documentation,
>
> Ric
>

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [dm-devel] Puppet-Gluster+ThinP

2014-04-23 Thread James
On Tue, Apr 22, 2014 at 10:30 AM, David Teigland  wrote:
> This topical man page is a recent addition.  If there are questions not
> covered here, we may want to add information to it.
>
> http://man7.org/linux/man-pages/man7/lvmthin.7.html


Hey, I've actually read this and it was extremely helpful.
Someone in #lvm pointed it out a few weeks ago.
I do have some questions, but I think they're more along the lines of
"what thin-p setup does glusterfs expect/prefer?"
Please feel free to have a quick look at:

https://bugzilla.redhat.com/show_bug.cgi?id=1090757

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] [bug #19231] glusterfsd: needs full path to config file with -f parameter

2007-03-07 Thread James Dyer

URL:
  

 Summary: glusterfsd: needs full path to config file with -f
parameter
 Project: Gluster
Submitted by: jad
Submitted on: Wednesday 07/03/07 at 16:52
Category: None
Severity: 3 - Normal
Priority: 5 - Normal
  Item Group: User Interface Bug
  Status: None
 Privacy: Public
 Assigned to: None
 Open/Closed: Open
 Discussion Lock: Any
Operating System: GNU/Linux

___

Details:

Nothing major but:

[EMAIL PROTECTED] gluster]# pwd
/home/jad/gluster
[EMAIL PROTECTED] gluster]# glusterfsd -l /dev/stdout -L DEBUG -f ./server0.vol 
[Mar 07 16:49:55] [ERROR/glusterfsd.c:197/main()] glusterfsd:FATAL: could not
open specfile: './server0.vol'
[EMAIL PROTECTED] gluster]# glusterfsd -l /dev/stdout -L DEBUG -f
/home/jad/gluster/server0.vol 
[Mar 07 16:50:05] [DEBUG/spec.y:113/new_section()]
libglusterfs/parser:new_section: New node for 'brick'


I would have thought that ./server0.vol should work as a parameter to
-f - I guess there's a chdir happening somewhere beforehand.
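The chdir theory is easy to demonstrate outside glusterfsd: a relative path recorded before a chdir (which daemons typically do when detaching) no longer resolves afterwards, while a path canonicalized up front still does. A quick shell sketch (not the actual glusterfsd code):

```shell
d=$(mktemp -d)
echo test > "$d/server0.vol"
cd "$d"
spec=./server0.vol
cd /                               # what a daemonizing process typically does
cat "$spec" 2>/dev/null || echo "cannot open $spec"
cd "$d"
spec=$(readlink -f ./server0.vol)  # canonicalize before any chdir
cd /
cat "$spec"                        # now works: prints "test"
```

So resolving the -f argument to an absolute path before daemonizing would fix this.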




___

Reply to this item at:

  

___
  Message sent via/by Savannah
  http://savannah.nongnu.org/



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Can't get afr to work

2007-03-08 Thread James Dyer
I've been trying for ages to get afr to work, but something's either 
broken, or I'm just not 'getting it'...

I'm running glusterfs 1.3.0-pre2 on RHEL 4

At the moment, I'm just trying to get two instances of glusterfsd running 
on the same server, with the directories /home/jad/gluster/cfs0 and cfs1 
respectively.  If I understand things correctly (and it's quite possible 
I'm not understanding properly), the following configuration should mean 
that any file I create in the glusterfs directory should appear in both 
cfs0 and cfs1, due to the replicate option.

Configuration for the first glusterfsd:
volume brick
  type storage/posix
  option directory /home/jad/gluster/cfs0
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 127.0.0.1
  option listen-port 6996
  subvolumes brick
  option auth.ip.brick.allow *
end-volume

Configuration for the second glusterfsd:
volume brick
  type storage/posix
  option directory /home/jad/gluster/cfs1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 127.0.0.1
  option listen-port 6997
  subvolumes brick
  option auth.ip.brick.allow *
end-volume

Configuration for glusterfs mounting:
volume brick
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-port 6996
  option remote-subvolume brick
end-volume

volume brick-afr
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option report-port 6997
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes brick brick-afr
  option replicate *:2
end-volume



First instance of glusterfsd run with: 
glusterfsd -f /home/jad/gluster/server0.vol

Second instance:
glusterfsd -f /home/jad/gluster/server1.vol

glusterfs mount run with:
glusterfs -f ./client.vol glusterfs



What I'm finding, however, is that if I do a 'touch glusterfs/foo.bar', the 
file appears in cfs0 but not in cfs1, which seems to mean that 
gluster is ignoring my 'option replicate *:2' entry in the afr volume.


Any thoughts on what I'm doing wrong???

James



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Can't get afr to work

2007-03-08 Thread James Dyer
Daniel,

Tried the telnet to port 6997, and the process responded as expected; 
also tried changing the second server volume name to server2, but that 
made no difference either.

Thanks for trying,

j

On Thu, 8 Mar 2007, Daniel van Ham Colchete wrote:

> James,
> 
> I'm not a Gluster developer but I think I know what the problem is.
> 
> To test my hypothesis, try a telnet at 127.0.0.1:6997. If I'm right it won't
> work.
> 
> The cause of the problem is that the volume name for both servers is the
> same and this shouldn't be right. Try changing the second server volume name
> from 'server' to 'server2'. It's not necessary to change anything else.
> 
> Your gluster mountpoint works because AFR will work even if one of the
> servers is down.
> 
> Tell-me what you found.
> 
> Best regards,
> Daniel Colchete
> 
> On 3/8/07, James Dyer <[EMAIL PROTECTED]> wrote:
> >
> > I've been trying for ages to get afr to work, but somethings either
> > broken, or I'm just not 'getting it'...
> >
> > I'm running glusterfs 1.3.0-pre2 on RHEL 4
> >
> > At the moment, I'm just trying to get two instances of glusterdfsd running
> > on the same server, with the directories /home/jad/gluster/cfs0 and cfs1
> > respectively.  If I understand things correctly (and it's quite possible
> > I'm not understanding properly), the following configuration should mean
> > that any file I create in the glusterfs directory should appear in both
> > cfs0 and cfs1, due to the replicate option.
> >
> > Configuration for first glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs0
> > end-volume
> >
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6996
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> >
> > Configuration for second glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs1
> > end-volume
> >
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6997
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> >
> > Configuration for glusterfs mounting:
> > volume brick
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option remote-port 6996
> >   option remote-subvolume brick
> > end-volume
> >
> > volume brick-afr
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option report-port 6997
> >   option remote-subvolume brick
> > end-volume
> >
> > volume afr
> >   type cluster/afr
> >   subvolumes brick brick-afr
> >   option replicate *:2
> > end-volume
> >
> >
> >
> > First instance of glusterdfs run with:
> > glusterfsd -f /home/jad/gluster/server0.vol
> >
> > Second instance:
> > glusterfsd -f /home/jad/gluster/server1.vol
> >
> > glusterfs mount run with:
> > glusterfs -f ./client.vol glusterfs
> >
> >
> >
> > What I'm finding however, is that if I do a 'touch glusterfs/foo.bar', the
> > file is appearing in cfs0, but not in cfs1, which seems to me means that
> > gluster is ignoring my 'option replicate *:2' entry in the afr volume.
> >
> >
> > Any thoughts on what I'm doing wrong???
> >
> > James
> >
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> 



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Can't get afr to work

2007-03-08 Thread James Dyer
BUG/afr.c:2148/init()] afr:xlator name is brick-afr
[Mar 08 18:38:34] [DEBUG/afr.c:2152/init()] afr:child count 2
[Mar 08 18:38:34] [DEBUG/afr.c:2168/init()] afr:afr->init: afr server not 
specified, defaulting to brick
[Mar 08 18:38:34] [DEBUG/transport.c:83/transport_load()] 
libglusterfs/transport:transport_load: attempt to load type tcp/client
[Mar 08 18:38:34] [DEBUG/transport.c:88/transport_load()] 
libglusterfs/transport:transport_load: attempt to load file 
/usr/local/lib/glusterfs/1.3.0-pre2/transport/tcp/client.so
[Mar 08 18:38:34] [DEBUG/tcp-client.c:174/tcp_connect()] transport: tcp: 
:try_connect: socket fd = 7
[Mar 08 18:38:34] [DEBUG/tcp-client.c:196/tcp_connect()] transport: tcp: 
:try_connect: finalized on port `1023'
[Mar 08 18:38:34] [DEBUG/tcp-client.c:220/tcp_connect()] 
tcp/client:try_connect: defaulting remote-port to 6996
[Mar 08 18:38:34] [DEBUG/tcp-client.c:255/tcp_connect()] 
tcp/client:connect on 7 in progress (non-blocking)
[Mar 08 18:38:34] [DEBUG/tcp-client.c:301/tcp_connect()] 
tcp/client:connection on 7 success, attempting to handshake
[Mar 08 18:38:34] [DEBUG/tcp-client.c:58/do_handshake()] 
transport/tcp-client:dictionary length = 50
[Mar 08 18:38:34] [DEBUG/transport.c:83/transport_load()] 
libglusterfs/transport:transport_load: attempt to load type tcp/client
[Mar 08 18:38:34] [DEBUG/transport.c:88/transport_load()] 
libglusterfs/transport:transport_load: attempt to load file 
/usr/local/lib/glusterfs/1.3.0-pre2/transport/tcp/client.so
[Mar 08 18:38:34] [DEBUG/tcp-client.c:174/tcp_connect()] transport: tcp: 
:try_connect: socket fd = 8
[Mar 08 18:38:34] [DEBUG/tcp-client.c:196/tcp_connect()] transport: tcp: 
:try_connect: finalized on port `1022'
[Mar 08 18:38:34] [DEBUG/tcp-client.c:255/tcp_connect()] 
tcp/client:connect on 8 in progress (non-blocking)
[Mar 08 18:38:34] [DEBUG/tcp-client.c:301/tcp_connect()] 
tcp/client:connection on 8 success, attempting to handshake
[Mar 08 18:38:34] [DEBUG/tcp-client.c:58/do_handshake()] 
transport/tcp-client:dictionary length = 50


So the question now seems to be: Why is the client not connecting to the 
second glusterfsd?

Thanks,

j


On Thu, 8 Mar 2007, Anand Avati wrote:

> 
> James,
> Just to make sure the second glusterfsd (of cfs1) is connected to the
> client, can you attach the log file of glusterfsd for cfs1?
> 
> avati
> 
> On Thu, Mar 08, 2007 at 01:58:26PM +, James Dyer wrote:
> > I've been trying for ages to get afr to work, but somethings either 
> > broken, or I'm just not 'getting it'...
> > 
> > I'm running glusterfs 1.3.0-pre2 on RHEL 4
> > 
> > At the moment, I'm just trying to get two instances of glusterdfsd running 
> > on the same server, with the directories /home/jad/gluster/cfs0 and cfs1 
> > respectively.  If I understand things correctly (and it's quite possible 
> > I'm not understanding properly), the following configuration should mean 
> > that any file I create in the glusterfs directory should appear in both 
> > cfs0 and cfs1, due to the replicate option.
> > 
> > Configuration for first glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs0
> > end-volume
> > 
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6996
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> > 
> > Configuration for second glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs1
> > end-volume
> > 
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6997
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> > 
> > Configuration for glusterfs mounting:
> > volume brick
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option remote-port 6996
> >   option remote-subvolume brick
> > end-volume
> > 
> > volume brick-afr
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option report-port 6997
> >   option remote-subvolume brick
> > end-volume
> > 
> > volume afr
> >   type cluster/afr
> >   subvolumes brick brick-afr
> >   option replicate *:2
> > end-volume
> > 
> > 
> > 
> > First instance of glusterdfs run with: 
> > glusterfsd -f /home/jad/gluster/server0.vol
> > 
> > Second instance:
> >

Re: [Gluster-devel] Can't get afr to work

2007-03-09 Thread James Dyer
Many thanks - I had a feeling I was doing something really stupid - 
working fine now.

j

On Fri, 9 Mar 2007, Krishna Srinivas wrote:

> volume brick-afr
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 127.0.0.1
>  option report-port 6997
>  option remote-subvolume brick
> end-volume
> 
> Here it has to be remote-port :-)
> 
> On 3/8/07, James Dyer <[EMAIL PROTECTED]> wrote:
> > I've been trying for ages to get afr to work, but somethings either
> > broken, or I'm just not 'getting it'...
> >
> > I'm running glusterfs 1.3.0-pre2 on RHEL 4
> >
> > At the moment, I'm just trying to get two instances of glusterdfsd running
> > on the same server, with the directories /home/jad/gluster/cfs0 and cfs1
> > respectively.  If I understand things correctly (and it's quite possible
> > I'm not understanding properly), the following configuration should mean
> > that any file I create in the glusterfs directory should appear in both
> > cfs0 and cfs1, due to the replicate option.
> >
> > Configuration for first glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs0
> > end-volume
> >
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6996
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> >
> > Configuration for second glusterdfs:
> > volume brick
> >   type storage/posix
> >   option directory /home/jad/gluster/cfs1
> > end-volume
> >
> > volume server
> >   type protocol/server
> >   option transport-type tcp/server
> >   option bind-address 127.0.0.1
> >   option listen-port 6997
> >   subvolumes brick
> >   option auth.ip.brick.allow *
> > end-volume
> >
> > Configuration for glusterfs mounting:
> > volume brick
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option remote-port 6996
> >   option remote-subvolume brick
> > end-volume
> >
> > volume brick-afr
> >   type protocol/client
> >   option transport-type tcp/client
> >   option remote-host 127.0.0.1
> >   option report-port 6997
> >   option remote-subvolume brick
> > end-volume
> >
> > volume afr
> >   type cluster/afr
> >   subvolumes brick brick-afr
> >   option replicate *:2
> > end-volume
> >
> >
> >
> > First instance of glusterdfs run with:
> > glusterfsd -f /home/jad/gluster/server0.vol
> >
> > Second instance:
> > glusterfsd -f /home/jad/gluster/server1.vol
> >
> > glusterfs mount run with:
> > glusterfs -f ./client.vol glusterfs
> >
> >
> >
> > What I'm finding however, is that if I do a 'touch glusterfs/foo.bar', the
> > file is appearing in cfs0, but not in cfs1, which seems to me means that
> > gluster is ignoring my 'option replicate *:2' entry in the afr volume.
> >
> >
> > Any thoughts on what I'm doing wrong???
> >
> > James
> >
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> 



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Best practices?

2007-06-03 Thread James Porter

That is a good question - and how would you compile glusterfs and glusterfsd?

On 6/3/07, Brandon Lamb <[EMAIL PROTECTED]> wrote:


I was wondering if there was any input on best practices of setting up
a 2 or 3 server cluster.

My question has to do with where to run glusterfsd (server) and where
to run glusterfs (mounting as a client).

Should the servers that are actually handling the drives and
exporting glusterfs act ONLY as servers?

i.e. should I run glusterfsd on 2 or 3 servers and then have another 1+
client machines that mount?

Or can I safely have the server and client running on the same machines?

Does that make sense? Not sure how else to ask it, haha.


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Unify namespace-size convention

2007-07-24 Thread James Porter

I don't know about the number of files, but you can certainly limit the size
with the min-free-disk option in the rr scheduler. I assume you could also
just use ulimit. Anyone else with suggestions / knowledge?

On 7/16/07, Sebastien LELIEVRE <[EMAIL PROTECTED]> wrote:


Hi everyone.

I just have a little question:

Would there be a way to define the namespace volume size with regard to the
bricks in use?

Let's say it simply : Is there any rule that would say :

"I have X bricks of Y Gb-size with Z thousand of files on each,
so I need a namespace volume of *how-to define it* Mb"

Does anyone have a clue on this ?

Cheers,

Sebastien.



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Unify namespace-size convention

2007-07-24 Thread James Porter

Couldn't you make a ramdisk with a small block size? The only downside I
can see with that method would be if you are only using unify on a few
bricks and a machine is powered off or crashes.

On 7/24/07, Amar S. Tumballi <[EMAIL PROTECTED]> wrote:


Hi,
 Sorry for the late reply.

 The size of the namespace, as it contains an entry for each file/directory on
the storage nodes, is directly dependent on the number of files:

Size of namespace = total number of files * size of one block in the
namespace filesystem.

So it is useful to create the namespace directory on a partition
with a very small block size.

-amar
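To make that formula concrete, a quick back-of-the-envelope calculation (the file count and block size here are assumptions for illustration):

```shell
# namespace size ~= number of files * block size of the namespace filesystem
files=2000000   # 2 million files/directories across all bricks
block=1024      # 1 KiB blocks on the namespace partition
echo "$(( files * block / 1024 / 1024 )) MiB"   # prints "1953 MiB"
```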


On 7/24/07, James Porter < [EMAIL PROTECTED]> wrote:
>
> I don't know about number of files but you certainly can limit the size
> with
> the min-free-disk option in the rr scheduler. I assume you could also
> just
> use ulimit. Anyone else with suggestions / knowledge?
>
> On 7/16/07, Sebastien LELIEVRE < [EMAIL PROTECTED]> wrote:
> >
> > Hi everyone.
> >
> > I just have a little question:
> >
> > Would there be a way to define the namespace volume size regards to
> the
> > bricks in use?
> >
> > Let's say it simply : Is there any rule that would say :
> >
> > "I have X bricks of Y Gb-size with Z thousand of files on each,
> > so I need a namespace volume of *how-to define it* Mb"
> >
> > Does anyone have a clue on this ?
> >
> > Cheers,
> >
> > Sebastien.
> >
> >
> >
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> >
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



--
Amar Tumballi
http://amar.80x25.org
[bulde on #gluster/irc.gnu.org]

___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] glusterfs 1.3.0-pre6 runtime error...

2007-07-24 Thread James Porter

did you make uninstall before you upgraded?
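A missed uninstall typically leaves the old version's shared objects alongside the new ones, which is one common cause of such symbol lookup errors. A simulated sketch of the check (the directory layout below is made up, not the real install tree):

```shell
prefix=$(mktemp -d)   # stand-in for the real install prefix
mkdir -p "$prefix/lib/glusterfs/1.2.3" "$prefix/lib/glusterfs/1.3.0-pre6"
# Any version directory other than the one just installed is suspect:
stale=$(ls "$prefix/lib/glusterfs" | grep -v '1\.3\.0-pre6')
echo "stale version dirs: $stale"   # prints "stale version dirs: 1.2.3"
```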

On 7/24/07, Jonathan Newman <[EMAIL PROTECTED]> wrote:


Well, I successfully compiled 1.3.0-pre6 and prepared some basic configs for
testing. However, upon invocation I receive an error such as this:
/usr/sbin/glusterfsd: symbol lookup error: /usr/sbin/glusterfsd: undefined
symbol: set_transport_register_cbk

If I recall from my C days (a long, long time ago), that is a linking
problem... can someone please point me to what could be the cause and/or
solution to this problem? Any Google search has thus far returned no
results. I am using:
fuse-2.7.0
libibverbs-1.0.4
sysfsutils-2.1.0
...on a Gentoo 2006.1 system. Here are the configs:

SERVERS:
# serv0.vol
volume brick
type storage/posix
option directory /gluster/0
end-volume

volume server
type protocol/server
option transport-type tcp/server
option listen-port 6996
option bind-address 127.0.0.1
subvolumes brick
option auth.ip.brick.allow 127.0.0.1
end-volume

# serv1.vol
volume brick
type storage/posix
option directory /gluster/1
end-volume

volume server
type protocol/server
option transport-type tcp/server
option listen-port 6997
option bind-address 127.0.0.1
subvolumes brick
option auth.ip.brick.allow 127.0.0.1
end-volume


# serv2.vol
volume brick
type storage/posix
option directory /gluster/2
end-volume

volume server
type protocol/server
option transport-type tcp/server
option listen-port 6998
option bind-address 127.0.0.1
subvolumes brick
option auth.ip.brick.allow 127.0.0.1
end-volume

# serv3.vol
volume brick
type storage/posix
option directory /gluster/3
end-volume

volume server
type protocol/server
option transport-type tcp/server
option listen-port 6999
option bind-address 127.0.0.1
subvolumes brick
option auth.ip.brick.allow 127.0.0.1
end-volume

# client.vol
volume client0
type protocol/client
option transport-type tcp/client
option remote-host 127.0.0.1
option remote-port 6996
option remote-subvolume brick
end-volume

volume client1
type protocol/client
option transport-type tcp/client
option remote-host 127.0.0.1
option remote-port 6997
option remote-subvolume brick
end-volume

volume client2
type protocol/client
option transport-type tcp/client
option remote-host 127.0.0.1
option remote-port 6998
option remote-subvolume brick
end-volume

volume client3
type protocol/client
option transport-type tcp/client
option remote-host 127.0.0.1
option remote-port 6999
option remote-subvolume brick
end-volume

volume bricks
  type cluster/unify
  subvolumes client0 client1 client2 client3
  option rr.limits.min-free-disk 10GB
  option scheduler rr
end-volume

### Add writeback feature
volume writeback
  type performance/write-back
  option aggregate-size 131072 # unit in bytes
  subvolumes bricks
end-volume

### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 65536 # unit in bytes
  option page-count 16   # cache per file  = (page-count x page-size)
  subvolumes writeback
end-volume


Thanks...any help is much appreciated :).

PS: These configs were working with 1.2.3 without issue.

-Jonathan
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Quick Question: glusterFS and kernel (stat) caching

2007-08-07 Thread James Porter
I second this request - any info on the problem?

Jim

On 8/6/07, Bernhard J. M. Grün <[EMAIL PROTECTED]> wrote:
>
> Hello developers!
>
> At the moment we try to optimize our web server setup again.
> We tried to parallelize the stat calls in our web server software
> (lighttpd). But it seems the kernel does not cache the stat
> information from one request for further requests. But the same setup
> works fine on a local file system. So it seems that glusterFS and/or
> FUSE is not able to communicate with the kernel (stat) cache. Is this
> right? And is this problem solvable?
>
> Here is some schematic diagram of our approach:
> request -> lighttpd -> Threaded FastCGI program, that does only a stat
> (via fopen) -> lighttpd opens the file for reading and writes the data
> to the socket
>
>
> In a local scenario the second open uses the cached stat and so it
> does not block other reads in lighty at that point. But with a
> glusterFS mount it still blocks there.
>
> Maybe you can give me some advice. Thank you!
>
> Bernhard J. M. Grün
>
>
> ___
> Gluster-devel mailing list
> Gluster-devel@nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


[Gluster-devel] Server Side AFR gets transport endpoint is not connected

2008-08-27 Thread James E Warner
ted
2008-08-27 12:54:11 D [client-protocol.c:4063:client_setvolume_cbk]
cluster: SETVOLUME on remote-host succeeded
2008-08-27 12:54:12 D [client-protocol.c:4129:client_protocol_reconnect]
cluster: breaking reconnect chain
2008-08-27 12:54:17 D [fuse-bridge.c:352:fuse_entry_cbk] glusterfs-fuse:
64: (op_num=34) / => 1
2008-08-27 12:54:17 D [fuse-bridge.c:1640:fuse_opendir] glusterfs-fuse: 65:
OPEN /
2008-08-27 12:54:17 D [fuse-bridge.c:585:fuse_fd_cbk] glusterfs-fuse: 65:
(op_num=22) / => 0x86819b8
2008-08-27 12:54:17 D [fuse-bridge.c:352:fuse_entry_cbk] glusterfs-fuse:
66: (op_num=34) / => 1


Client Configuration File:

volume cluster
  type protocol/client
  option transport-type tcp/client
  option remote-host storage.frankenlab.com
  option remote-subvolume gfs
  option transport-timeout 10
end-volume

Server Configuration File:
=
volume gfs-ds
  type storage/posix
  option directory /mnt/test
end-volume

volume gfs-ds-locks
  type features/posix-locks
  subvolumes gfs-ds
end-volume

### Add remote client
volume gfs-storage2-ds
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.6
  option remote-subvolume gfs-ds
  option transport-timeout 10
end-volume

volume gfs-storage3-ds
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.7
  option remote-subvolume gfs-ds
  option transport-timeout 10
end-volume

volume gfs-ds-afr
  type cluster/afr
  subvolumes gfs-ds-locks gfs-storage2-ds gfs-storage3-ds
end-volume

volume gfs
  type performance/io-threads
  option thread-count 1
  option cache-size 32MB
  subvolumes gfs-ds-afr
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes gfs
  option auth.addr.gfs-ds-locks.allow *
  option auth.addr.gfs.allow *
end-volume

Thanks,

James Warner

Computer Sciences Corporation
Registered Office: 3170 Fairview Park Drive, Falls Church, Virginia 22042,
USA
Registered in Nevada, USA No: C-489-59

-

This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery.
NOTE: Regardless of content, this e-mail shall not operate to bind CSC to
any order or other contract unless pursuant to explicit written agreement
or government initiative expressly permitting the use of e-mail for such
purpose.
-




___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Server Side AFR gets transport endpoint is not connected

2008-08-28 Thread James E Warner
Thanks for the prompt reply.  One final question: is the HA translator
still planned for the upcoming 1.4 release, and if not, do you have a rough
idea of which release it will go into?

Thanks Again,

James Warner




   
 "Krishna  
 Srinivas" 
 <[EMAIL PROTECTED]  To 
 h.com>    James E Warner/DEF/[EMAIL PROTECTED] 
 
 Sent by:   cc 
 krishna.srinivas@ gluster-devel@nongnu.org
 gmail.com Subject 
   Re: [Gluster-devel] Server Side AFR 
   gets transport endpoint is not  
 08/28/2008 01:03  connected   
 AM
   
   
   
   
   




On Thu, Aug 28, 2008 at 12:45 AM, James E Warner <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm currently testing gluster to see if I can make it work for our HA
> filesystem needs.  And in initial testing things seem to be very good
> especially with client side AFR performing replication to our server nodes.
> However, we would like to keep our client network free of replication
> traffic so I set up server side afr with three storage bricks replicating
> data between themselves and round robin DNS for the node failover.  The
> round robin dns is working and the failover between the nodes is kind of
> working, but if I pull the network cable on the currently active server
> (the host that the glusterfs client is connected to) the next filesystem
> operation (such as ls /mnt/glusterfs) fails with a "transport endpoint is
> not connected" error.  Similarly, if I have a large copy operation in
> progress the copy will exit with a failure. All of the operations after
> that work fine and netstat shows that the node has failed over to the next
> server in the list, but by that point the current file system operation
> has failed.  Anyway, this leads me to a few questions:
>
> 0.  Do my config files look OK or does it look like I've configured this
> thing incorrectly? :)
> 1.  Is this the expected behavior or is this a bug?  From reading the
> mailing list I had the impression that on failure the operation would be
> tried on the remaining IPs that were cached in the client's list, so I was
> surprised that the operation failed and I think that it is probably a bug,
> but I could see an argument for how this might be considered normal
> operation.

That is the expected behavior.

>
> 2.  If this is expected behavior is there any plan to change the behavior
> in the future, or is server side AFR always expected to work this way? I've
> seen references to round robin DNS being an interim measure on the mailing
> list, so I'm not sure if there is another translator in the works or not.
> If there is something in the works is that available in the current
> glusterfs 1.4 snapshot releases or is that planned for a much later
> version?

Yes we plan to bring in a HA translator which will make this work fine.
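Until the HA translator lands, one crude client-side mitigation (a sketch, not a supported GlusterFS feature) is to retry the first failed operation, since the retry goes out over the next round-robin address once the client has reconnected:

```shell
# Retry a command once if the first attempt fails, e.g. with
# "transport endpoint is not connected" right after pulling a cable.
retry_once() {
    "$@" || { sleep 1; "$@"; }
}
# against the mount this would be: retry_once ls /mnt/glusterfs
retry_once echo "mount reachable"   # prints "mount reachable"
```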

>
> 3.  Can you think of any option that I might have missed that would correct
> the problem and allow the currently running file operation to succeed
> during a failover?
>

Re: [Gluster-devel] issued compiling glusterfs--mainline--2.5--patch-802 on debian lenny

2008-12-01 Thread James Watkins-Harvey
Hi,


There seem to be two problems with your config. First, the fuse
header files are not installed on your system, so configure disables
compilation of the fuse client, as shown in the configure summary:

> GlusterFS configure summary
> ===
> Fuse client : no
> Infiniband verbs : no
> epoll IO multiplex : yes

Try fetching libfuse-dev first:
  apt-get install libfuse-dev

This is not the reason why make fails, though... I haven't looked at the
source, but it is possible that this is a very recent bug that will be
fixed as soon as a developer updates their tree...

However, if you don't mind not having the most recent development
code, maybe you should use the Gluster Debian package instead; it is
on sid (unstable) right now, but IMHO this has more to do with Debian
politics than package stability. If you want to go that route, I
suggest that you add the sid source repository to your
/etc/apt/sources.list file, then "fetch and build" the source package,
as in:

# echo "deb-src http://ftp.debian.org/debian/ unstable main contrib
non-free" >> /etc/apt/sources.list
# apt-get update
#
# apt-get build-dep glusterfs
# apt-get source -b glusterfs
#
# dpkg -i *gluster*.deb

And by the way, if you get an error when running 'apt-get update', it
may be because the apt cache is too big. Just edit your
/etc/apt/apt.conf, and add the following line:
 APT::Cache-Limit "16777216";


Good luck,

James Watkins-Harvey


___
Gluster-devel mailing list
Gluster-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel