Re: [Discuss] cracked boston site?

2014-05-14 Thread Eric Chadbourne
On 05/14/2014 07:52 PM, Tom Metro wrote: > Eric Chadbourne wrote: >> Does it appear hacked to you? ... The links appear odd. >> http://bostonbuilt.org/ > > So this is a site promoting a badge proclaiming "Built in Boston" that > they are encouraging site owners/developers to add to their sites to

Re: [Discuss] cracked boston site?

2014-05-14 Thread Tom Metro
Eric Chadbourne wrote: > Does it appear hacked to you? ... The links appear odd. > http://bostonbuilt.org/ So this is a site promoting a badge proclaiming "Built in Boston" that they are encouraging site owners/developers to add to their sites to promote that the site was developed using Boston ar

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Bill Bogstad
On Wed, May 14, 2014 at 4:47 PM, Richard Pieri wrote: > Bill Bogstad wrote: >> done. I don't think POSIX requires this, but since most >> filesystems seem to work this way we have gotten used to that >> behavior. This is why the FAQ suggests doing something like: > > POSIX requires it. Per the

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
Bill Bogstad wrote: > done. I don't think POSIX requires this, but since most > filesystems seem to work this way we have gotten used to that > behavior. This is why the FAQ suggests doing something like: POSIX requires it. Per the POSIX fsync() manual: >> The fsync() function shall not retur
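The fsync() contract Richard cites can be illustrated with a short sketch (Python on a POSIX system; the function name and file path are illustrative, not anyone's production code). Per POSIX, os.fsync() does not return until the kernel reports the data has been transferred to the storage device:

```python
import os

def durable_write(path, data):
    # Write `data` to `path` and force it to stable storage before returning.
    # POSIX: fsync() shall not return until the file's data has been
    # transferred to the storage device (modulo lying hardware/drivers,
    # which is exactly the MooseFS complaint in this thread).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # blocks until the kernel reports the data is on disk
    finally:
        os.close(fd)
```

A file system (or network file system driver) that silently ignores this call returns early, and the write is only as durable as the power supply.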

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Bill Bogstad
On Wed, May 14, 2014 at 11:25 AM, F. O. Ozbek wrote: > > > On 05/14/2014 11:13 AM, Richard Pieri wrote: >> >> F. O. Ozbek wrote: >>> >>> The data gets written. We have tested it. >> >> >> Ignoring fsync/O_SYNC means that the file system driver doesn't flush >> its write buffers when instructed to

[Discuss] cracked boston site?

2014-05-14 Thread Eric Chadbourne
I just had an interview today. Real nice company. _Could really use a new job!_ Anyway, they had this Boston Built image. Does it appear hacked to you? I looked at it with my NoScript plugin in Firefox. The links appear odd. http://bostonbuilt.org/ Thanks, Eric

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > I mean, come on, look at the claim you are making: > "MooseFS isn't reliable", have you ever done any tests? I know that if the power suddenly fails before data has been written to non-volatile storage then that data will be lost. That's how computers work (or don't work as t

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/10/2014 04:00 PM, Rich Braun wrote: > Googling for performance-tuning on glusterFS makes me shake my head, > ultimately the technology is half-baked if it takes 500ms or more just to > transfer an inode from a source path into a glusterFS volume. I should be > able to "cp -a" or rsync any direct
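Rich's 500ms-per-inode complaint is easy to measure for yourself. A minimal sketch (Python; the directory, file count, and size are illustrative — point it at a glusterFS mount versus local disk to compare):

```python
import os
import time

def time_small_file_writes(directory, count=100, size=1024):
    # Create `count` small files in `directory` and return the mean
    # wall-clock seconds per file. On a local ext4/xfs disk this is
    # typically well under a millisecond; the thread's complaint is
    # that glusterFS can take 500ms+ per file.
    payload = b"x" * size
    start = time.monotonic()
    for i in range(count):
        path = os.path.join(directory, "small-%d.dat" % i)
        with open(path, "wb") as f:
            f.write(payload)
    return (time.monotonic() - start) / count
```

Running the same loop through `rsync` or `cp -a` adds per-file stat/chmod round trips on top of this, which is why small-file trees are the worst case for network file systems.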

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 01:45 PM, Richard Pieri wrote: > F. O. Ozbek wrote: >> I think the discussion is losing its focus. If you are a bank >> moving billions of dollars around, go make EMC even richer. >> If you need a reliable HPC storage cluster that you can actually afford, >> use moosefs! > That's been my point:

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > I think the discussion is losing its focus. If you are a bank > moving billions of dollars around, go make EMC even richer. > If you need a reliable HPC storage cluster that you can actually afford, > use moosefs! That's been my point: MooseFS isn't reliable. Parallel clusters

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 12:52 PM, Richard Pieri wrote: > F. O. Ozbek wrote: >> If you lose power to your entire storage cluster, you will lose >> some data, this is true on almost all filesystems (including moosefs and >> glusterfs). > If writes are atomic -- that is, the file system driver and underlying hardware

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 12:20 PM, Dan Ritter wrote: > On Wed, May 14, 2014 at 12:13:04PM -0400, F. O. Ozbek wrote: >> Like I said, we are not Cambridge. > I think you're missing the point. > Are all your servers connected to battery UPS systems? Yes. > Are those connected to automatic transfer switched gen

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > If you lose power to your entire storage cluster, you will lose > some data, this is true on almost all filesystems (including moosefs and > glusterfs). If writes are atomic -- that is, the file system driver and underlying hardware honor fsync calls -- then you won't lose anyt
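The atomic, power-loss-safe update Richard describes is commonly built from write-to-temp, fsync, then rename, since POSIX rename() over an existing file is atomic. A sketch of that pattern (Python on POSIX; names are illustrative, not anyone's production code):

```python
import os

def atomic_replace(path, data):
    # Crash-safe update: write to a temp file, fsync it, then rename()
    # over the target. Readers see either the old contents or the new,
    # never a torn half-write -- provided the stack honors fsync.
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)          # data durable before the rename makes it visible
    finally:
        os.close(fd)
    os.rename(tmp, path)      # atomic replacement per POSIX
    # fsync the directory so the rename itself survives a power loss
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

Note the whole construction collapses if any layer (drive cache, network FS driver) ignores the fsync calls, which is the crux of the MooseFS argument in this thread.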

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Dan Ritter
On Wed, May 14, 2014 at 12:13:04PM -0400, F. O. Ozbek wrote: > > Like I said, we are not Cambridge. I think you're missing the point. Are all your servers connected to battery UPS systems? Are those connected to automatic transfer switched generators? Do you have a sufficient amount of fuel to r

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 11:58 AM, Richard Pieri wrote: > F. O. Ozbek wrote: >> That is the whole point, "doesn't flush its write buffers when >> instructed to do so". You don't need to instruct. The data gets written >> all the time. When we have done the tests, we have done tens of >> thousands of writes (basically

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > That is the whole point, "doesn't flush its write buffers when > instructed to do so". You don't need to instruct. The data gets written > all the time. When we have done the tests, we have done tens of > thousands of writes (basically checksum'ed test files) and > read tests s
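The checksummed-test-file methodology the thread keeps referring to can be sketched roughly like this (Python; a stand-in for whatever harness was actually used — write many files with recorded SHA-256 digests, then re-read and verify after a failover or power cycle):

```python
import hashlib

def sha256_of(path):
    # Stream the file through SHA-256 so large files need not fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_and_checksum(path, data):
    # Write the payload and return its digest for later verification.
    with open(path, "wb") as f:
        f.write(data)
    return hashlib.sha256(data).hexdigest()

def verify(path, expected):
    # Re-read the file (ideally after remount/failover) and compare digests.
    return sha256_of(path) == expected
```

Richard's counterpoint stands regardless of the harness: passing this test on a healthy cluster only shows the data was written eventually, not that an fsync-ignoring stack would have survived a power cut mid-write.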

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 11:13 AM, Richard Pieri wrote: > F. O. Ozbek wrote: >> The data gets written. We have tested it. > Ignoring fsync/O_SYNC means that the file system driver doesn't flush > its write buffers when instructed to do so. Maybe the data gets written. > Maybe not. You can't be sure of it unless w

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > The data gets written. We have tested it. Ignoring fsync/O_SYNC means that the file system driver doesn't flush its write buffers when instructed to do so. Maybe the data gets written. Maybe not. You can't be sure of it unless writes are atomic, and you don't get that from Moo

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 10:39 AM, Richard Pieri wrote: > F. O. Ozbek wrote: >> We have tested moosefs extensively. The commercial version has >> redundant metadata servers and redundant chunk servers. >> Ignoring fsync is not a problem. We will use it in production >> for real data (not scratch). > Um. Yeah. Good luck w

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > We have tested moosefs extensively. The commercial version has > redundant metadata servers and redundant chunk servers. > Ignoring fsync is not a problem. We will use it in production > for real data (not scratch). Um. Yeah. Good luck with that. I think you'll need it. Because

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/14/2014 10:07 AM, Richard Pieri wrote: > F. O. Ozbek wrote: >> We have tested ceph, glusterfs and moosefs and decided to use moosefs. > Be careful with MooseFS. Last I knew it ignores fsync and O_SYNC. I call > that a deal breaker for anything other than scratch storage. See previous > commentary a

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread Richard Pieri
F. O. Ozbek wrote: > We have tested ceph, glusterfs and moosefs and decided to use moosefs. Be careful with MooseFS. Last I knew it ignores fsync and O_SYNC. I call that a deal breaker for anything other than scratch storage. See previous commentary about SSDs that ignore fsync/O_SYNC and the asso

Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek
On 05/10/2014 04:00 PM, Rich Braun wrote: > Greetings...after concluding cephfs and ocfs2 are beyond-the-pale too > complicated to get working, and xtreemfs fails to include automatic > healing, I'm back to glusterfs. None of the docs seem to address the > question of how to get the systemd (or sysvin

[Discuss] Boston Linux Meeting Wednesday, May 21, 2014 OpenStack from Scratch, Part II

2014-05-14 Thread Jerry Feldman
When: May 21, 2014, 7PM (6:30PM for Q&A) Topic: OpenStack from Scratch, Part II Moderator: Federico Lucifredi Location: MIT Building E-51, Room 315 ### Please note that Wadsworth St. is still closed. ### Proceed West on Memorial Drive to Ames St. Ames will be ### 2-way during construction. Take a r