[Gluster-users] (no subject)

2016-08-02 Thread David Cowe
Hello,

Glusterfs 3.7.10

We have a glusterfs volume made up of 1 brick on each of our 4 nodes.

The volume is setup using tiering. The hot tier has 2 bricks in a replicate
and the cold tier has 2 bricks in a replicate.

We use samba (4.2) and ctdb to mount the volume to our windows clients via
cifs.
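For reference, a tiered volume exported through Samba along the lines described here might look roughly like this. This is a sketch only: the volume name, hostnames, and brick paths are assumptions, not taken from this post, and the attach-tier syntax is from the 3.7.x series.

```shell
# Cold tier: two bricks in a replica pair (hypothetical names)
gluster volume create tank replica 2 node3:/bricks/cold node4:/bricks/cold
gluster volume start tank
# Hot tier attached on top (gluster 3.7.x syntax)
gluster volume attach-tier tank replica 2 node1:/bricks/hot node2:/bricks/hot

# smb.conf share exported through Samba's glusterfs VFS module:
#   [tank]
#       vfs objects = glusterfs
#       glusterfs:volume = tank
#       read only = no
```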

We cannot delete a file from the CIFS-mounted volume on Windows. The delete
appears to succeed on the Windows side without error, but the file is not
removed from the glusterfs volume on the storage nodes! When refreshing the
Windows CIFS-mounted volume (with F5), the file reappears.

We can install the Gluster client on a Linux machine, mount the Gluster
volume, and delete a file without it reappearing as above. We can also do
this on Linux when mounting via NFS.

So our problem appears to be specific to the combination of Gluster and Samba.

Any thoughts?

Regards,
David
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] (no subject)

2015-12-29 Thread Kyle Harris
Hello All,

I recently discovered the new arbiter functionality of the 3.7 branch so I
decided to give it a try.  First off, I too am looking forward to the
ability to add an arbiter to an already existing volume as discussed in the
following thread:
https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html.

However, my first question is: can someone go into a bit of detail about the
difference between using this new arbiter functionality and adding a dummy
node, with regard to helping eliminate split-brain? In other words, which is
best and why?

Second, I noticed that the URL below, where this new functionality is
discussed, says: "By default, client quorum (cluster.quorum-type) is set to
auto . . .", which I found a bit confusing. After setting up my new cluster I
noticed that none of the quorum settings, including cluster.quorum-type,
appear to have a value set.

Arbiter Link
https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-arbiter-volumes.md
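For anyone following along, an arbiter volume is created as below in 3.7 (hostnames and brick paths here are made up). Note that `gluster volume info` only lists options that have been explicitly reconfigured, which may be why cluster.quorum-type looks unset; `gluster volume get` (available in 3.7) shows the effective value:

```shell
# Replica 3 where the third brick is a metadata-only arbiter
gluster volume create testvol replica 3 arbiter 1 \
    node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/arb
# Show the effective quorum setting even when "volume info" lists nothing:
gluster volume get testvol cluster.quorum-type
```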

Thank you.

[Gluster-users] (no subject)

2015-09-28 Thread Dan Bretherton

Sent from my Sony Xperia™ smartphone

Re: [Gluster-users] (no subject)

2014-07-28 Thread Justin Clift
On 28/07/2014, at 3:38 PM, carlos alberto catanejo wrote:
> banco hsbc

Apologies for this spam.  Looks like the sender subscribed
first.  (now terminated)

/me hopes spammers aren't starting to figure out mailman

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



[Gluster-users] (no subject)

2014-07-28 Thread carlos alberto catanejo
banco hsbc

Re: [Gluster-users] (no subject)

2013-12-02 Thread Mark Morlino
I use the same CentOS and Gluster versions and also see high load averages.
I'm not sure I can help, but I can relate since I have similar problems.

How many virtual machine images do you have on one brick?

I started out putting several VM images on a single gluster volume and have
since started creating a dedicated volume for each VM. My thinking was that
each brick gets a single process on each peer and that having separate
processes would prevent VMs from having to compete for slices of a single
brick process. I have not experienced a noticeable difference between the
setups so it is quite possible that my hypothesis was incorrect.


What is the underlying hardware setup?

I noticed significant VM performance improvements when I reconfigured the
LVM LVs that back my VM images from RAID 1 to RAID 10. The high load average
did not go away, but the VMs now behave normally instead of really slowly.

How many CPU cores do you have? On my system, which is a pair of ~5-year-old
Supermicro servers with 8 cores and 8 GB of memory, I consistently see load
averages in the mid-20s. That's higher than I would like, but the VMs that
use the storage are still running well.
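As a rough, generic sanity check (not from the original thread): load average is most meaningful relative to the number of cores, so normalizing it per core helps decide whether a number in the mid-20s is alarming:

```python
import os

def load_per_core():
    """Return the 1-, 5-, and 15-minute load averages divided by core count."""
    cores = os.cpu_count() or 1
    return tuple(load / cores for load in os.getloadavg())

# A sustained value well above 1.0 per core suggests CPU or IO saturation;
# e.g. a load average of 24 on 8 cores is 3.0 runnable tasks per core.
print(load_per_core())
```

Whether a given per-core value is a problem still depends on the workload; IO-bound glusterfsd processes in uninterruptible sleep inflate Linux load averages without pegging the CPUs.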

-Mark


On Mon, Dec 2, 2013 at 5:30 AM, gandalf istari wrote:

> I'm running version glusterfs 3.4.1 on a Centos 6.4
>
> now Load average: 3.37 3.41 3.29
>
>
>  the program that uses the most cpu:
> /usr/sbin/glusterfsd but several instances.
>
> The logs are not showing any errors.
>
>
> I have never used atop, but this looks like a problem
>
> LVM | s02-LogVol00 | busy 80% | read   0 | write   1166
>
> gr
>
>
> hi
> which version of gluster
> on which OS system version
> Did you run "top" (or even better "atop") to see which programm uses up
> how much resources (IO,RAM,CPU)?
> anything unusual in the logfiles?
>
>
> Am 02.12.2013 14:57:32, schrieb gandalf istari:
>
> Hi, I have two glusterfs server in replicated mode.
> brick one has my virtual machines and brick two server receives only the
> replicated parts.
>
> Now for the last 3 day's I get following message from my monitoring system.
>
> WARNING - load average: 6.04, 6.55, 6.07
>
> And the load sometime goes on for about 16 hours.
>
> For me this seems not to be normal, anyone some advice on this ?
>
> The load on brick one is not more the 1.8
>
>
> Thanks
>
>
> --
> Bernhard Glomm
> IT Administration
> Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

[Gluster-users] (no subject)

2013-12-02 Thread gandalf istari
I'm running glusterfs 3.4.1 on CentOS 6.4.

Current load average: 3.37 3.41 3.29

The program that uses the most CPU is /usr/sbin/glusterfsd, running as
several instances.

The logs are not showing any errors.


I have never used atop, but this looks like a problem

LVM | s02-LogVol00 | busy 80% | read   0 | write   1166

gr


Hi,
Which version of Gluster, and on which OS version? Did you run "top" (or,
even better, "atop") to see which program uses how much of which resources
(IO, RAM, CPU)? Anything unusual in the logfiles?


Am 02.12.2013 14:57:32, schrieb gandalf istari:

Hi, I have two glusterfs servers in replicated mode. Brick one holds my
virtual machines, and the brick-two server receives only the replicated
copies.

For the last 3 days I have been getting the following message from my
monitoring system:

WARNING - load average: 6.04, 6.55, 6.07

The load sometimes stays high for about 16 hours. This does not seem normal
to me; does anyone have advice? The load on brick one is not more than 1.8.

Thanks





--
Bernhard Glomm
IT Administration
Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH

Re: [Gluster-users] (no subject)

2013-10-06 Thread James
On Fri, Oct 4, 2013 at 10:55 AM, Jay Vyas  wrote:
> thanks james :)  how about a INSTALL_FROM_ZERO instructions in the readme?
Sure. If I don't have all that code and related docs published and
good enough to meet your needs within 2 weeks, ping me and I'll double
down and make it right. I hope to have it done before then though.

James
> :)
>
>
>
> On Fri, Oct 4, 2013 at 10:31 AM, James  wrote:
>>
>> On Fri, Oct 4, 2013 at 9:36 AM, Jay Vyas  wrote:
>> > Ah yes thanks James..!  Forgot about this - haven't adopted it yet
>> > because I still don't > know puppet to well..
>> I'm happy to help if you get stuck.
>>
>> > but the two node vagrant recipe on the forge is more for beginners who
>> > don't know what config they want - but just want to play with an 
>> > operational
>> > multi node gluster stack. > Similar to people wanting to download an ISO.
>>
>> Sure thing!
>>
>> >
>> > So, I guess someday lets join forces and put a vagrant recipe to use
>> > puppet to create a Virtual fedora cluster on the fly.  That will take the
>> > best of both worlds: vagrant for setting up machines and puppet for
>> > transparently configuring them.
>>
>> I'm happy to help hack on this front. I have a new puppet-gluster
>> class coming up which is a one line "include gluster::simple" It's
>> actually ready, but I am adding a few extra features first.
>>
>> Cheers
>
>
>
>
> --
> Jay Vyas
> http://jayunit100.blogspot.com


Re: [Gluster-users] (no subject)

2013-10-04 Thread Jay Vyas
Thanks James :) How about an INSTALL_FROM_ZERO instructions section in the README?
:)



On Fri, Oct 4, 2013 at 10:31 AM, James  wrote:

> On Fri, Oct 4, 2013 at 9:36 AM, Jay Vyas  wrote:
> > Ah yes thanks James..!  Forgot about this - haven't adopted it yet
> because I still don't > know puppet to well..
> I'm happy to help if you get stuck.
>
> > but the two node vagrant recipe on the forge is more for beginners who
> don't know what config they want - but just want to play with an
> operational multi node gluster stack. > Similar to people wanting to
> download an ISO.
>
> Sure thing!
>
> >
> > So, I guess someday lets join forces and put a vagrant recipe to use
> puppet to create a Virtual fedora cluster on the fly.  That will take the
> best of both worlds: vagrant for setting up machines and puppet for
> transparently configuring them.
>
> I'm happy to help hack on this front. I have a new puppet-gluster
> class coming up which is a one line "include gluster::simple" It's
> actually ready, but I am adding a few extra features first.
>
> Cheers
>



-- 
Jay Vyas
http://jayunit100.blogspot.com

Re: [Gluster-users] (no subject)

2013-10-04 Thread James
On Fri, Oct 4, 2013 at 9:36 AM, Jay Vyas  wrote:
> Ah yes thanks James..!  Forgot about this - haven't adopted it yet because I 
> still don't > know puppet to well..
I'm happy to help if you get stuck.

> but the two node vagrant recipe on the forge is more for beginners who don't 
> know what config they want - but just want to play with an operational multi 
> node gluster stack. > Similar to people wanting to download an ISO.

Sure thing!

>
> So, I guess someday lets join forces and put a vagrant recipe to use puppet 
> to create a Virtual fedora cluster on the fly.  That will take the best of 
> both worlds: vagrant for setting up machines and puppet for transparently 
> configuring them.

I'm happy to help hack on this front. I have a new puppet-gluster class
coming up which is a one-line "include gluster::simple". It's actually
ready, but I am adding a few extra features first.

Cheers


Re: [Gluster-users] (no subject)

2013-10-04 Thread Jay Vyas
Ah yes, thanks James! I forgot about this - I haven't adopted it yet because
I still don't know Puppet too well. But the two-node Vagrant recipe on the
Forge is more for beginners who don't know what config they want, but just
want to play with an operational multi-node Gluster stack. Similar to people
wanting to download an ISO.

So, I guess someday let's join forces and put together a Vagrant recipe that
uses Puppet to create a virtual Fedora cluster on the fly. That would take
the best of both worlds: Vagrant for setting up machines and Puppet for
transparently configuring them.


> On Oct 3, 2013, at 8:40 PM, James  wrote:
> 
>> On Thu, 2013-10-03 at 14:14 -0400, Jay Vyas wrote:
>> FYI If youre interested in "Trying" to play with a gluster distribtued
>> set
>> up on VMs,  you can try to spin up and have  vagrant installed ,
>> Checkout
>> this post :
>> http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/.
>> Im using this for my VM's at the moment.  Its the simplest way (in my
>> humble opinion) to get started, i.e. a bare bones distributed gluster
>> setup
> 
> *cough* or use puppet-gluster instead :)
> https://github.com/purpleidea/puppet-gluster
> 
> 
> 
>> in VMs.  And its also a playground for learniing because you can hack
>> the
>> shell scripts manually.
>> 
>> im hoping over time maybe a few others will use / test it and help
>> provide
>> feedback so we can have more easy setup recipes for POC gluster
>> clusters,
> 


Re: [Gluster-users] (no subject)

2013-10-03 Thread Vishvendra Singh Chauhan
Okay Guys,

I got it




On Fri, Oct 4, 2013 at 6:10 AM, James  wrote:

> On Thu, 2013-10-03 at 14:14 -0400, Jay Vyas wrote:
> > FYI If youre interested in "Trying" to play with a gluster distribtued
> > set
> > up on VMs,  you can try to spin up and have  vagrant installed ,
> > Checkout
> > this post :
> >
> http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/
> .
> >  Im using this for my VM's at the moment.  Its the simplest way (in my
> > humble opinion) to get started, i.e. a bare bones distributed gluster
> > setup
>
> *cough* or use puppet-gluster instead :)
> https://github.com/purpleidea/puppet-gluster
>
>
>
> > in VMs.  And its also a playground for learniing because you can hack
> > the
> > shell scripts manually.
> >
> > im hoping over time maybe a few others will use / test it and help
> > provide
> > feedback so we can have more easy setup recipes for POC gluster
> > clusters,
>
>


-- 
*Thanks and Regards.*
*Vishvendra Singh Chauhan*
*(**RHC{SA,E,SS,VA}CC{NA,NP})*
*+91-9711460593*
http://linux-links.blogspot.in/
God First Work Hard Success is Sure...

Re: [Gluster-users] (no subject)

2013-10-03 Thread James
On Thu, 2013-10-03 at 14:14 -0400, Jay Vyas wrote:
> FYI If youre interested in "Trying" to play with a gluster distribtued
> set
> up on VMs,  you can try to spin up and have  vagrant installed ,
> Checkout
> this post :
> http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/.
>  Im using this for my VM's at the moment.  Its the simplest way (in my
> humble opinion) to get started, i.e. a bare bones distributed gluster
> setup

*cough* or use puppet-gluster instead :)
https://github.com/purpleidea/puppet-gluster



> in VMs.  And its also a playground for learniing because you can hack
> the
> shell scripts manually.
> 
> im hoping over time maybe a few others will use / test it and help
> provide
> feedback so we can have more easy setup recipes for POC gluster
> clusters,




Re: [Gluster-users] (no subject)

2013-10-03 Thread Jay Vyas
FYI: if you're interested in trying a distributed Gluster setup on VMs, you
can spin one up if you have Vagrant installed. Check out this post:
http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/.
I'm using this for my VMs at the moment. It's the simplest way (in my humble
opinion) to get started, i.e. a bare-bones distributed Gluster setup in VMs.
It's also a playground for learning, because you can hack the shell scripts
manually.

I'm hoping that over time a few others will use and test it and help provide
feedback, so we can have more easy setup recipes for POC Gluster clusters.

On Thu, Oct 3, 2013 at 1:11 PM, John Mark Walker wrote:

> The Gluster Storage Platform was a very old attempt to provide a
> downloadable ISO. It no longer exists - and hasn't existed for 2.5 years.
>
> If you're looking for an easily usable product with GlusterFS, the only
> one currently available is from Red Hat under the "Red Hat Storage" brand.
>
> If you want a GUI for GlusterFS, try oVirt at http://www.ovirt.org/ - you
> can use ovirt as management for KVM + Gluster storage pools, or as
> Gluster-only.
>
> Thanks,
> JM
>
>
> --
>
> Hello Friends,
>
>
> Please give me the link, How can i download the gluster storage platform
> iso. And how can i get the graphical console of gluster.
>
> --
> *Thanks and Regards.*
> *Vishvendra Singh Chauhan*
> *+91-8750625343*
> http:// linux-links.blogspot.com
> God First Work Hard Success is Sure...
>
>
>



-- 
Jay Vyas
http://jayunit100.blogspot.com

Re: [Gluster-users] (no subject)

2013-10-03 Thread John Mark Walker
The Gluster Storage Platform was a very old attempt to provide a downloadable 
ISO. It no longer exists - and hasn't existed for 2.5 years. 

If you're looking for an easily usable product with GlusterFS, the only one 
currently available is from Red Hat under the "Red Hat Storage" brand. 

If you want a GUI for GlusterFS, try oVirt at http://www.ovirt.org/ - you can 
use ovirt as management for KVM + Gluster storage pools, or as Gluster-only. 

Thanks, 
JM 

- Original Message -

> Hello Friends,

> Please give me the link, How can i download the gluster storage platform iso.
> And how can i get the graphical console of gluster.

> --
> Thanks and Regards.
> Vishvendra Singh Chauhan
> +91-8750625343
> http:// linux-links.blogspot.com
> God First Work Hard Success is Sure...


[Gluster-users] (no subject)

2013-10-03 Thread Vishvendra Singh Chauhan
Hello Friends,


Please give me the link. How can I download the Gluster Storage Platform
ISO? And how can I get the graphical console for Gluster?

-- 
*Thanks and Regards.*
*Vishvendra Singh Chauhan*
*+91-8750625343*
http://linux-links.blogspot.com
God First Work Hard Success is Sure...

Re: [Gluster-users] (no subject)

2013-10-03 Thread Kaleb KEITHLEY

On 10/03/2013 07:00 AM, Vishvendra Singh Chauhan wrote:

Please suggest me, From where i can download the gluster storage
platform ISO.



The only ISO is for RHS (Red Hat Storage), which is only available to 
purchasers of a support license. If you have purchased RHS you should 
have been given the necessary information to download the ISO. If you 
don't have that information you should open a Support Ticket.


You can get community supported RPMs for Fedora, RHEL, CentOS, etc., and 
.debs for Ubuntu and Debian from 
http://download.gluster.org/pub/gluster/glusterfs/


--

Kaleb


[Gluster-users] (no subject)

2013-10-03 Thread Vishvendra Singh Chauhan
Please suggest where I can download the Gluster Storage Platform ISO.

-- 
*Thanks and Regards.*
*Vishvendra Singh Chauhan*
*(**RHC{SA,E,SS,VA}CC{NA,NP})*
*+91-9711460593*
http://linux-links.blogspot.in/
God First Work Hard Success is Sure...

[Gluster-users] (no subject)

2013-06-18 Thread J . Alejandro Sánchez O .
Hello people, I have 2 servers, each with 4 storage devices (RAID 6), and
with them we configured a distributed-replicated Gluster volume, but the
load average is not the same on both (it is high on one of them). The
question is: if I remove a server (a trusted peer) and later add another
server, what happens? Is that OK?

I have Ubuntu Server 10.04 with GlusterFS 3.2.7-1 on both servers.

Additional data: I see messages like this in the logs:
/var/log/glusterfs/bricks/gluster-brick2.log:
[2013-06-18 09:31:39.587857] W [inode.c:844:inode_lookup]
(-->/usr/lib/glusterfs/3.2.7/xlator/features/marker.so(marker_lookup_cbk+0xfa)
[0x7f277df95e5a]
(-->/usr/lib/glusterfs/3.2.7/xlator/debug/io-stats.so(io_stats_lookup_cbk+0xff)
[0x7f277dd7f60f]
(-->/usr/lib/glusterfs/3.2.7/xlator/protocol/server.so(server_lookup_cbk+0x10b)
[0x7f277db67beb]))) 0-: inode not found

Does somebody have an idea about this?

-- 
---
*J. Alejandro Sánchez O.*
*Skype / MSN / GTalk: jasanch...@gmail.com*

[Gluster-users] (no subject)

2013-06-10 Thread Ziemowit Pierzycki
Hi,

I created the following volume some time ago:

gluster> volume info

Volume Name: dist
Type: Striped-Replicate
Volume ID: 045d90f6-3881-4c63-88d6-5b2b024f2db5
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: galatea-ib:/data/dist1
Brick2: mimas-ib:/data/dist1
Brick3: galatea-ib:/data/dist2
Brick4: mimas-ib:/data/dist2

... and ever since I've been having the following problems:

# cp -rf /data/shares/pictures/Home\ Theatre/ .
cp: writing ‘./Home Theatre/Home Theatre 002.jpg’: Invalid argument
cp: failed to extend ‘./Home Theatre/Home Theatre 002.jpg’: Invalid argument
cp: writing ‘./Home Theatre/Home Theatre 003.jpg’: Invalid argument
cp: failed to extend ‘./Home Theatre/Home Theatre 003.jpg’: Invalid argument
cp: writing ‘./Home Theatre/Home Theatre 005.jpg’: Invalid argument
cp: failed to extend ‘./Home Theatre/Home Theatre 005.jpg’: Invalid argument
cp: writing ‘./Home Theatre/Home Theatre 006.jpg’: Invalid argument
cp: failed to extend ‘./Home Theatre/Home Theatre 006.jpg’: Invalid argument
cp: closing ‘./Home Theatre/Thumbs.db’: Invalid argument

So empty files get created.  When I attempt to write to the empty file, it
works.  Any ideas what could be causing this?

Gluster packages installed:

glusterfs-fuse-3.3.1-15.fc18.x86_64
glusterfs-server-3.3.1-15.fc18.x86_64
glusterfs-3.3.1-15.fc18.x86_64

Thanks.

[Gluster-users] (no subject)

2013-01-07 Thread Jay Vyas
Hi guys: oddly, when I try to start glusterd using

sudo service glusterd start

I get no logs in /var/log/gluster*.

On Jeff's advice, I ran glusterd this way:

glusterd --debug -f /etc/glusterfs/...

Below is the output. It appears to be related to ports, but I'm not sure ...
any idea what's going on?


[jay@fedoravm ~]$ glusterd --debug -f /etc/glusterfs/glusterd.vol
[2013-01-07 20:34:59.163535] I [glusterfsd.c:1877:main] 0-glusterd:
Started running glusterd version 3git (glusterd --debug -f
/etc/glusterfs/glusterd.vol)
[2013-01-07 20:34:59.163784] D [glusterfsd.c:549:get_volfp]
0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol
[2013-01-07 20:34:59.165230] I [glusterd.c:929:init] 0-management:
Using /var/lib/glusterd as working directory
[2013-01-07 20:34:59.165324] D
[glusterd.c:330:glusterd_rpcsvc_options_build] 0-: listen-backlog
value: 128
[2013-01-07 20:34:59.165485] D [rpcsvc.c:1900:rpcsvc_init]
0-rpc-service: RPC service inited.
[2013-01-07 20:34:59.165539] D [rpcsvc.c:1665:rpcsvc_program_register]
0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver:
1, Port: 0
[2013-01-07 20:34:59.165588] D
[rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to
load file /usr/lib/glusterfs/3git/rpc-transport/socket.so
[2013-01-07 20:34:59.166966] I [socket.c:3390:socket_init]
0-socket.management: SSL support is NOT enabled
[2013-01-07 20:34:59.167078] I [socket.c:3405:socket_init]
0-socket.management: using system polling thread
[2013-01-07 20:34:59.167106] D [name.c:557:server_fill_address_family]
0-socket.management: option address-family not specified, defaulting
to inet
[2013-01-07 20:34:59.167222] E [socket.c:665:__socket_server_bind]
0-socket.management: binding to  failed: Address already in use
[2013-01-07 20:34:59.167254] E [socket.c:668:__socket_server_bind]
0-socket.management: Port is already in use
[2013-01-07 20:34:59.167280] W [rpcsvc.c:1395:rpcsvc_transport_create]
0-rpc-service: listening on transport failed
[2013-01-07 20:34:59.167304] E [glusterd.c:1023:init] 0-management:
creation of listener failed
[2013-01-07 20:34:59.167351] E [xlator.c:408:xlator_init]
0-management: Initialization of volume 'management' failed, review
your volfile again
[2013-01-07 20:34:59.167370] E [graph.c:292:glusterfs_graph_init]
0-management: initializing translator failed
[2013-01-07 20:34:59.167387] E [graph.c:479:glusterfs_graph_activate]
0-graph: init failed
[2013-01-07 20:34:59.167513] W [glusterfsd.c:969:cleanup_and_exit]
(-->glusterd(main+0x39d) [0x40493d]
(-->glusterd(glusterfs_volumes_init+0xb7) [0x407527]
(-->glusterd(glusterfs_process_volfp+0x103) [0x407433]))) 0-: received
signum (0), shutting down
[2013-01-07 20:34:59.167554] D
[glusterfsd-mgmt.c:2214:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt:
portmapper signout arguments not given
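The key lines above are the `__socket_server_bind` errors: something is already bound to glusterd's management port (24007 by default), often a stale glusterd still running. A quick check, independent of Gluster itself (port number assumed to be the default):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# glusterd listens on 24007 by default; if this prints True while you think
# glusterd is stopped, look for a stale process (e.g. with pgrep glusterd).
print(port_in_use(24007))
```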

[Gluster-users] (no subject)

2011-08-12 Thread Jürgen Maurer

Sent from Samsung Mobile


Re: [Gluster-users] (no subject)

2010-12-20 Thread Lakshmipathi
Hi Jeroen,
Sorry for the delay. We recommend the glusterfs-volgen generated
configuration/volume files for gluster-3.0.x, and the gluster CLI for 3.1.x.


-- 

Cheers,
Lakshmipathi.G
FOSS Programmer.
- Original Message -
From: "Jeroen Koekkoek" 
To: "Lakshmipathi (lakshmipa...@gluster.com)" 
Cc: "gluster-users@gluster.org" 
Sent: Wednesday, December 15, 2010 10:10:18 PM
Subject: RE: [Gluster-users] (no subject)

Hi Lakshmipathi,

I decoupled the client and server and tested again. The problem did not occur. 
That leads me to the following question: Is my original configuration a 
supported one? Meaning is my original configuration supposed to work? Because 
that configuration was really, really fast compared to the traditional 
client/server model.

I tested this with GlusterFS 3.0.7. I'll repeat the steps with 3.1.1 tomorrow.

Regards,
Jeroen

> -Original Message-
> From: Jeroen Koekkoek
> Sent: Wednesday, December 15, 2010 2:00 PM
> To: 'Lakshmipathi'
> Subject: RE: [Gluster-users] (no subject)
> 
> Hi Lakshmipathi,
> 
> I forgot to mention that I use a single volfile for both the server and
> the client, so that the client is actually the server and vice-versa.
> The same process is connected to the mount point and serving the brick
> over tcp. Below is my configuration for a single host.
> 
> - cut -
> ### file: glusterfs.vol
> 
> 
> ###  GlusterFS Server and Client Volume File  ##
> 
> 
> volume posix
>   type storage/posix
>   option directory /var/vmail_local
> end-volume
> 
> volume local_brick_mta1
>   type features/locks
>   subvolumes posix
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp
>   option transport.socket.bind-address 172.16.104.21
>   option auth.addr.local_brick_mta1.allow 172.16.104.22
>   subvolumes local_brick_mta1
> end-volume
> 
> volume remote_brick
>   type protocol/client
>   option transport-type tcp
>   option remote-host 172.16.104.22
>   option remote-subvolume local_brick_mta2 end-volume
> 
> volume afr
>   type cluster/replicate
> #  option read-subvolume local_brick
>   subvolumes remote_brick local_brick_mta1 end-volume
> 
> volume writebehind
>   type performance/write-behind
>   option cache-size 1MB
>   subvolumes afr
> end-volume
> 
> volume quickread
>   type performance/quick-read
>   option cache-timeout 1
>   option max-file-size 1MB
>   subvolumes writebehind
> end-volume
> ----- /cut -
> 
> Regards,
> Jeroen
> 
> > -Original Message-
> > From: Lakshmipathi [mailto:lakshmipa...@gluster.com]
> > Sent: Wednesday, December 15, 2010 10:34 AM
> > To: Jeroen Koekkoek
> > Subject: Re: [Gluster-users] (no subject)
> >
> > Hi Jeroen Koekkoek,
> > We are unable to  reproduce your issue with 3.1.1.
> >
> > Steps -
> > setup 2 afr server and mount it on 2 clients.
> >
> > client1-mntpt#touch file.txt
> >
> > this file avail. on both client mounts - verified with ls command.
> > client1-mntpt#ls -l file.txt
> > client2-mntpt#ls -l file.txt
> >
> > Now unmounted client2.
> > umount 
> >
> > now removed the file from client1.
> > client1-mntpt#rm file.txt
> >
> > then mount client2 again and did a ls - client2-mntpt#ls -l
> >
> > file.txt is not available on both clients now,as expected.
> >
> >
> > If you are still facing this issue,sent us the server and client logs
> > along with exact steps to reproduce this issue.
> > Thanks.
> >
> >
> >
> > --
> > 
> > Cheers,
> > Lakshmipathi.G
> > FOSS Programmer.
> >
> > - Original Message -
> > From: "Jeroen Koekkoek" 
> > To: gluster-users@gluster.org
> > Sent: Wednesday, December 15, 2010 1:05:50 PM
> > Subject: [Gluster-users] (no subject)
> >
> > Hi,
> >
> > I have a question regarding glusterfs and replicate. I have a two node
> > setup. The following problem arises: if I create a file on the mount
> > point, then unmount gfs on the 2nd machine, remove the file from the
> > 1st (through the mount point) and bring the mount point on the 2nd
> > machine back up. The file is removed (from the 2nd) if I `ls` the
> > mount point on the 1st machine, the file is created (on the 1st) if I
> > `ls` the mount point on the 2nd.
> >
> > If I update the file instead of removing it, everything goes fine. The
> > file is up-to-date on both machines.
> >
> > I looked at the .landfill directory, but that is only used in a self-
> > heal situation. Is there a way I can work around this? Maybe using the
> > trash translator?
> >
> > Best regards,
> > Jeroen Koekkoek
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] (no subject)

2010-12-15 Thread Jeroen Koekkoek
Hi Lakshmipathi,

I decoupled the client and server and tested again. The problem did not occur. 
That leads me to the following question: Is my original configuration a 
supported one? Meaning is my original configuration supposed to work? Because 
that configuration was really, really fast compared to the traditional 
client/server model.

I tested this with GlusterFS 3.0.7. I'll repeat the steps with 3.1.1 tomorrow.

Regards,
Jeroen

> -Original Message-
> From: Jeroen Koekkoek
> Sent: Wednesday, December 15, 2010 2:00 PM
> To: 'Lakshmipathi'
> Subject: RE: [Gluster-users] (no subject)
> 
> Hi Lakshmipathi,
> 
> I forgot to mention that I use a single volfile for both the server and
> the client, so that the client is actually the server and vice-versa.
> The same process is connected to the mount point and serving the brick
> over tcp. Below is my configuration for a single host.
> 
> ----- cut -----
> ### file: glusterfs.vol
> 
> 
> ###  GlusterFS Server and Client Volume File  ##
> 
> 
> volume posix
>   type storage/posix
>   option directory /var/vmail_local
> end-volume
> 
> volume local_brick_mta1
>   type features/locks
>   subvolumes posix
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp
>   option transport.socket.bind-address 172.16.104.21
>   option auth.addr.local_brick_mta1.allow 172.16.104.22
>   subvolumes local_brick_mta1
> end-volume
> 
> volume remote_brick
>   type protocol/client
>   option transport-type tcp
>   option remote-host 172.16.104.22
>   option remote-subvolume local_brick_mta2
> end-volume
> 
> volume afr
>   type cluster/replicate
> #  option read-subvolume local_brick
>   subvolumes remote_brick local_brick_mta1
> end-volume
> 
> volume writebehind
>   type performance/write-behind
>   option cache-size 1MB
>   subvolumes afr
> end-volume
> 
> volume quickread
>   type performance/quick-read
>   option cache-timeout 1
>   option max-file-size 1MB
>   subvolumes writebehind
> end-volume
> ----- /cut -----
> 
> Regards,
> Jeroen
> 
> > -Original Message-
> > From: Lakshmipathi [mailto:lakshmipa...@gluster.com]
> > Sent: Wednesday, December 15, 2010 10:34 AM
> > To: Jeroen Koekkoek
> > Subject: Re: [Gluster-users] (no subject)
> >
> > Hi Jeroen Koekkoek,
> > We are unable to  reproduce your issue with 3.1.1.
> >
> > Steps -
> > setup 2 afr server and mount it on 2 clients.
> >
> > client1-mntpt#touch file.txt
> >
> > the file is available on both client mounts - verified with the ls command.
> > client1-mntpt#ls -l file.txt
> > client2-mntpt#ls -l file.txt
> >
> > Now unmounted client2.
> > umount 
> >
> > now removed the file from client1.
> > client1-mntpt#rm file.txt
> >
> > then mount client2 again and did a ls - client2-mntpt#ls -l
> >
> > file.txt is not available on both clients now,as expected.
> >
> >
> > If you are still facing this issue, send us the server and client logs
> > along with the exact steps to reproduce it.
> > Thanks.
> >
> >
> >
> > --
> > 
> > Cheers,
> > Lakshmipathi.G
> > FOSS Programmer.
> >
> > - Original Message -
> > From: "Jeroen Koekkoek" 
> > To: gluster-users@gluster.org
> > Sent: Wednesday, December 15, 2010 1:05:50 PM
> > Subject: [Gluster-users] (no subject)
> >
> > Hi,
> >
> > I have a question regarding glusterfs and replicate. I have a two node
> > setup. The following problem arises: if I create a file on the mount
> > point, then unmount gfs on the 2nd machine, remove the file from the
> > 1st (through the mount point), and bring the mount point on the 2nd
> > machine back up. The file is removed (from the 2nd) if I `ls` the
> > mount point on the 1st machine, and the file is re-created (on the 1st)
> > if I `ls` the mount point on the 2nd.
> >
> > If I update the file instead of removing it, everything goes fine. The
> > file is up-to-date on both machines.
> >
> > I looked at the .landfill directory, but that is only used in a self-
> > heal situation. Is there a way I can work around this? Maybe using the
> > trash translator?
> >
> > Best regards,
> > Jeroen Koekkoek
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
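For context, the single-volfile setup Jeroen describes implies a mirror-image volfile on the other host (mta2). A sketch of what that side might look like, with addresses and volume names inferred from his config above (unverified, for illustration only; the writebehind and quickread volumes would be duplicated unchanged):

```
volume posix
  type storage/posix
  option directory /var/vmail_local
end-volume

volume local_brick_mta2
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.16.104.22
  option auth.addr.local_brick_mta2.allow 172.16.104.21
  subvolumes local_brick_mta2
end-volume

volume remote_brick
  type protocol/client
  option transport-type tcp
  option remote-host 172.16.104.21
  option remote-subvolume local_brick_mta1
end-volume

volume afr
  type cluster/replicate
  subvolumes remote_brick local_brick_mta2
end-volume
```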


[Gluster-users] (no subject)

2010-12-14 Thread Jeroen Koekkoek
Hi,

I have a question regarding glusterfs and replicate. I have a two node setup. 
The following problem arises: if I create a file on the mount point, then 
unmount gfs on the 2nd machine, remove the file from the 1st (through the mount
point), and bring the mount point on the 2nd machine back up. The file is
removed (from the 2nd) if I `ls` the mount point on the 1st machine, and the
file is re-created (on the 1st) if I `ls` the mount point on the 2nd.

If I update the file instead of removing it, everything goes fine. The file is 
up-to-date on both machines.

I looked at the .landfill directory, but that is only used in a self-heal 
situation. Is there a way I can work around this? Maybe using the trash 
translator?

Best regards,
Jeroen Koekkoek
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] (no subject)

2010-05-04 Thread Dennis Arkhangelski
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hm... DRBD is:
a) not scalable at all
b) not really a parallel filesystem

IBM GPFS perhaps, but the GPL Linux kernel module versions have quite
limited functionality compared to their closed modules. Alternatively, an
unsupported Sun Lustre setup with a DRBD/heartbeat-driven hybrid MGS/MDT.
Lustre itself is pretty whimsical though, from my personal experience,
and failover isn't smooth in the above setup. To make a long story short,
GlusterFS is the most reliable parallel FS (and probably the slowest
one) I ever used.

As Marcus said, a caching frontend of any nature has absolutely nothing in
common with the storage subsystem. Caching itself has very limited usage
indeed; it's basically suitable for archaic sites with mostly static
content and, sometimes, for media-rich sites (lol, why don't you use
Akamai then?). Caching frontends are next to useless for modern Web-2.0
style stuff (well, what is the webapp in our case?), which obviously
relies heavily on small files on the storage backend.

Speaking of GlusterFS performance, don't forget its underlying
filesystem and physical storage performance. I had a semi-production
GlusterFS setup a while back (2.0.7) based on a ReiserFS backend, which was
showing noticeably better performance on small files (PHP session cache
- - sic!) than the previous ext3-based deployment, I'd say +20% to +30% at
low-to-mid data throughput. RAID-0 or RAID-10 gives an even bigger
performance boost. Side note, any type of checksum RAID (5 or 6) is a
never-do for small files (around the stripe size); XOR is a poor man's RAID. :)

Further, as Gluster has a userspace implementation, consider tuning the
kernel process scheduler. Linux CFS is a very bad idea on the client side
where live services are running. An I/O scheduler other than CFQ is a
very bad idea on the server side. Renicing the Gluster processes is
actually a must; -20 is enough. ;)
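Dennis's renicing advice can be sketched as below. The `sleep` process stands in for a real gluster daemon (in practice you would target e.g. `$(pgrep glusterfsd)`), and a positive nice value is used because raising priority with negative values, like the -20 he suggests, requires root:

```shell
# Stand-in for a gluster daemon process.
sleep 30 &
PID=$!

# Re-nice it. Unprivileged users may only lower priority (positive values);
# the suggested -20 would need root.
renice -n 10 -p "$PID" >/dev/null

# Confirm the new nice value as seen by ps.
NI=$(ps -o ni= -p "$PID" | tr -d ' ')
echo "nice value now: $NI"

kill "$PID"
```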

Further, what's the interconnect? Jenn, I'm guessing you're not running
Gluster over 100Mbit ethernet shared with public webapp traffic, right?
These VLAN thingies don't count, of course. ;) Anything less than 1Gbit
makes no sense. I'd suggest Infiniband DDR4x and above if you're really
concerned about Gluster performance.

On 05.05.2010 3:14, David Simas wrote:
> On Tue, May 04, 2010 at 09:43:01PM +0200, Marcus Bointon wrote:
>> On 4 May 2010, at 21:27, Larry Bates wrote:
>>
>>> Seems to me that this problem should more likely be solved with Squid,
>>> Nginx, or
>>> some caching software.
>>
>> The speed problem could be fixed with them, but it's not a replacement for 
>> what glusterfs is doing. I'm in the same boat: users upload images to a 
>> synchronously replicated gluster content area available to multiple web 
>> servers. Caching on individual servers is likely to run into coherency 
>> problems. With no server stickiness, we need to be able to guarantee that an 
>> uploaded file is immediately available to all front-ends without introducing 
>> a SPOF.
>>
>> While gluster might not be ideal for this, it is the *only* solution I've 
>> found that does it all. Do you have any better suggestions?
> 
> A suggestion, not necessarily better: DRBD in dual-primary mode with
> OCFS2 or GFS2.  See
> 
> http://www.drbd.org/docs/applications/
> 
> David Simas
> 
>>
>> Marcus
>> --
>> Marcus Bointon
>> Synchromedia Limited: Creators of http://www.smartmessages.net/
>> UK resellers of i...@hand CRM solutions
>> mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

- -- 
Regards,
Dennis Arkhangelski
Technical Manager
WHB Networks LLC.
http://www.webhostingbuzz.com/

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEUEARECAAYFAkvg1GAACgkQH77FUyBB2YWJawCcDj6982NZxwFDG/dHOydeN77/
pUsAl1u/GY9tuELkhrhtGUTesTVsGZY=
=oigW
-END PGP SIGNATURE-
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] (no subject)

2010-05-04 Thread David Simas
On Tue, May 04, 2010 at 09:43:01PM +0200, Marcus Bointon wrote:
> On 4 May 2010, at 21:27, Larry Bates wrote:
> 
> > Seems to me that this problem should more likely be solved with Squid,
> > Nginx, or
> > some caching software.
> 
> The speed problem could be fixed with them, but it's not a replacement for 
> what glusterfs is doing. I'm in the same boat: users upload images to a 
> synchronously replicated gluster content area available to multiple web 
> servers. Caching on individual servers is likely to run into coherency 
> problems. With no server stickiness, we need to be able to guarantee that an 
> uploaded file is immediately available to all front-ends without introducing 
> a SPOF.
> 
> While gluster might not be ideal for this, it is the *only* solution I've 
> found that does it all. Do you have any better suggestions?

A suggestion, not necessarily better: DRBD in dual-primary mode with
OCFS2 or GFS2.  See

http://www.drbd.org/docs/applications/

David Simas
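The dual-primary mode David suggests is enabled in the DRBD resource definition; a minimal sketch in DRBD 8.x-style syntax (resource name, hostnames, devices, and addresses are hypothetical, for illustration only):

```
resource r0 {
  protocol C;
  net {
    allow-two-primaries;
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Both nodes would then be promoted to primary and the device formatted with a cluster filesystem such as OCFS2 or GFS2, as he describes.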

> 
> Marcus
> -- 
> Marcus Bointon
> Synchromedia Limited: Creators of http://www.smartmessages.net/
> UK resellers of i...@hand CRM solutions
> mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] (no subject)

2010-05-04 Thread Burnash, James
Not sure if this is helpful, but just in case:

http://evolvingweb.ca/story/drupal-cloud-deploying-rackspace-nginx-and-boost

I also found this reference, which doesn't involve Drupal:

http://nowlab.cse.ohio-state.edu/publications/conf-papers/2008/noronha-icpp08.pdf

Supposedly some of the benefits achieved by the use of memcached have now been 
accommodated by some of the Gluster performance translators, but ...

James



DISCLAIMER:
This e-mail, and any attachments thereto, is intended only for use by the 
addressee(s) named herein and may contain legally privileged and/or 
confidential information. If you are not the intended recipient of this e-mail, 
you are hereby notified that any dissemination, distribution or copying of this 
e-mail, and any attachments thereto, is strictly prohibited. If you have 
received this in error, please immediately notify me and permanently delete the 
original and any copy of any e-mail and any printout thereof. E-mail 
transmission cannot be guaranteed to be secure or error-free. The sender 
therefore does not accept liability for any errors or omissions in the contents 
of this message which arise as a result of e-mail transmission.
NOTICE REGARDING PRIVACY AND CONFIDENTIALITY Knight Capital Group may, at its 
discretion, monitor and review the content of all e-mail communications. 
http://www.knight.com
-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Marcus Bointon
Sent: Tuesday, May 04, 2010 3:43 PM
To: gluster-users@gluster.org Users
Subject: Re: [Gluster-users] (no subject)

On 4 May 2010, at 21:27, Larry Bates wrote:

> Seems to me that this problem should more likely be solved with Squid, Nginx, 
> or
> some caching software.

The speed problem could be fixed with them, but it's not a replacement for what 
glusterfs is doing. I'm in the same boat: users upload images to a 
synchronously replicated gluster content area available to multiple web 
servers. Caching on individual servers is likely to run into coherency 
problems. With no server stickiness, we need to be able to guarantee that an 
uploaded file is immediately available to all front-ends without introducing a 
SPOF.

While gluster might not be ideal for this, it is the *only* solution I've found 
that does it all. Do you have any better suggestions?

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of i...@hand CRM solutions
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] (no subject)

2010-05-04 Thread Marcus Bointon
On 4 May 2010, at 21:27, Larry Bates wrote:

> Seems to me that this problem should more likely be solved with Squid, Nginx, 
> or
> some caching software.

The speed problem could be fixed with them, but it's not a replacement for what 
glusterfs is doing. I'm in the same boat: users upload images to a 
synchronously replicated gluster content area available to multiple web 
servers. Caching on individual servers is likely to run into coherency 
problems. With no server stickiness, we need to be able to guarantee that an 
uploaded file is immediately available to all front-ends without introducing a 
SPOF.

While gluster might not be ideal for this, it is the *only* solution I've found 
that does it all. Do you have any better suggestions?

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of i...@hand CRM solutions
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] (no subject)

2010-05-04 Thread Larry Bates
I thought I would at least chime in here about read performance on small files.
Seems to me that this problem should more likely be solved with Squid, Nginx, or
some caching software.  If I'm understanding the problem, the files are mostly
read-only .html files which would react well to being cached by a well
configured cache that shouldn't be too hard to set up.  Hope this info helps in some
way.

Larry Bates
vitalEsafe, Inc.
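A caching layer of the sort Larry suggests could be a small Nginx proxy cache in front of the web servers; a minimal sketch (zone name, cache path, and backend address are made up for illustration, not taken from anyone's setup):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m
                 max_size=1g inactive=10m;

upstream backend {
    server 127.0.0.1:8080;  # web server reading from the gluster mount
}

server {
    listen 80;
    location / {
        proxy_cache static;
        proxy_cache_valid 200 60s;  # short TTL narrows the staleness window
        proxy_pass http://backend;
    }
}
```

Note that, per the coherency concern raised elsewhere in the thread, a short TTL only narrows the window in which front-ends can serve stale content; it does not eliminate it.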
--

Message: 3
Date: Tue, 4 May 2010 14:24:44 -0400
From: "Burnash, James" 
Subject: Re: [Gluster-users] Performance Issue with Webserver
To: "gluster-users@gluster.org" 
Message-ID:
<9ad565c4a8561349b7227b79ddb988733179ea4...@exchange3.global.knight.com>

Content-Type: text/plain; charset="us-ascii"

Hi Jenn,

You may not have seen the posts, but small files do not, as a general rule, do
well on parallel file systems. There are numerous posts on this list
concerning this subject, and the Gluster developers have devoted a good bit
of energy into trying to address this, but ... this is not a general purpose
file system. It is designed to be efficient with large(r) file sizes.

Throughput is (hopefully) limited by the available bandwidth, but latency
is a factor as well in determining throughput.

Others on this list can give you much better details.

James

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Jenn Fountain
Sent: Tuesday, May 04, 2010 1:32 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Performance Issue with Webserver

We are running our webapp on a gluster mount.  We are finding that performance
is a lot slower than local disk.  We expected it to be slower but not this much
slower.  So, I am looking to you for some guidance on what to do.  IE: Not run
off the gluster mount or change config settings, etc.

Here are some numbers on performance:

Gluster Mount html:

Document Path:  /tmp/test.html
Document Length:17 bytes

Concurrency Level:  1
Time taken for tests:   0.269 seconds
Complete requests:  1
Failed requests:0
Write errors:   0
Total transferred:  302 bytes
HTML transferred:   17 bytes
Requests per second:3.72 [#/sec] (mean)
Time per request:   268.621 [ms] (mean)
Time per request:   268.621 [ms] (mean, across all concurrent requests)
Transfer rate:  1.10 [Kbytes/sec] received

Connection Times (ms)
  min  mean[+/-sd] median   max
Connect:   16   16   0.0 16  16
Processing:   253  253   0.0253 253
Waiting:  253  253   0.0253 253
Total:269  269   0.0269 269


Local disk html:

Document Path:  /tmp2/test.html
Document Length:16 bytes

Concurrency Level:  1
Time taken for tests:   0.035 seconds
Complete requests:  1
Failed requests:0
Write errors:   0
Total transferred:  301 bytes
HTML transferred:   16 bytes
Requests per second:28.24 [#/sec] (mean)
Time per request:   35.409 [ms] (mean)
Time per request:   35.409 [ms] (mean, across all concurrent requests)
Transfer rate:  8.30 [Kbytes/sec] received

Connection Times (ms)
  min  mean[+/-sd] median   max
Connect:   20   20   0.0 20  20
Processing:16   16   0.0 16  16
Waiting:   16   16   0.0 16  16
Total: 35   35   0.0 35  35
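Reading the two ab runs together: for this single tiny file the gluster mount is roughly 7.6x slower, and nearly all of the difference is added per-request latency rather than transfer time. The arithmetic, using the mean time-per-request figures quoted above:

```shell
# Mean time-per-request from the two ab runs quoted above.
GLUSTER_MS=268.621
LOCAL_MS=35.409

awk -v g="$GLUSTER_MS" -v l="$LOCAL_MS" \
    'BEGIN { printf "gluster adds %.0f ms per request (%.1fx slower than local disk)\n", g - l, g / l }'
# prints: gluster adds 233 ms per request (7.6x slower than local disk)
```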



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users