Re: [Discuss] SD Cards, cheap and here for a while?

2016-08-24 Thread F. O. Ozbek



On 08/24/2016 06:48 PM, Rich Pieri wrote:

On 8/24/2016 8:12 AM, Robert Krawitz wrote:



I've had USB flash drives show up DOA, too. Electronics fail. It
happens. That said, none of my Samsung and SanDisk SD and micro SD cards
have failed on me. I can't speak to cheaper brands like PNY; I tend to
avoid them because of poor performance.


I purchased exclusively Samsung micro SD cards, thinking they would be more
reliable. They also fail, believe me.

-- F. Ozbek
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] SD Cards, cheap and here for a while?

2016-08-24 Thread F. O. Ozbek


On 08/24/2016 08:12 AM, Robert Krawitz wrote:


I've also found SD cards to be unreliable (sometimes but not always
DOA) compared to SSD.  I bought a new micro SD for my phone; I tested
it prior to entering it into service, and it locked up hard after
about 4 GB of testing, after which it went permanently catatonic.  The
card it replaced developed silent write errors after about 2 years.
And I had had another one that was, *ahem*, mismarked.


I am using micro SD cards as "hard drives" on Pi 2 nodes, and
about 25% of them have gone bad. It is OK to use them
for temporary storage of data, but you don't want to use them as your
primary storage.
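
If you want to screen a card before putting it into service, the way Robert
describes, the following is only a minimal sketch; /dev/sdX is a placeholder
for wherever the card reader shows up, and both commands destroy everything
on the card:

  # destructive write-then-verify pass over every block (badblocks is in e2fsprogs)
  badblocks -wsv /dev/sdX

  # if the f3 package is available, f3probe also catches "mismarked" cards
  # that report more capacity than they really have
  f3probe --destructive /dev/sdX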

-- F. Ozbek

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Digital Right to Repair Bill, H.3383

2015-07-28 Thread F. O. Ozbek



On 7/28/15 12:27 PM, Rajiv Aaron Manglani wrote:


fyi. from http://massachusetts.digitalrighttorepair.org:

Massachusetts,
It's time to speak out for your right to repair

The people of Massachusetts have always stood up for their right to
repair. In 2012, voters passed a law that ensured residents' right to
repair their car wherever they wanted. Now, it's time to do the same for
electronics.

With the Digital Right to Repair Bill, H.3383, we have a chance to
guarantee our right to repair electronics --like smartphones, computers,
and even farm equipment. We have a chance to help the environment and
stand up for local repair jobs--the corner mom-and-pop repair shops that
keep getting squeezed out by manufacturers.


Yeah, I don't think this makes sense.



The Digital Right to Repair Bill requires manufacturers to provide
owners and independent repair businesses with fair access to service
information, security updates, and replacement parts.



___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss




Re: [Discuss] Digital Right to Repair Bill, H.3383

2015-07-28 Thread F. O. Ozbek



On 7/28/15 5:48 PM, F. O. Ozbek wrote:



On 7/28/15 12:27 PM, Rajiv Aaron Manglani wrote:


fyi. from http://massachusetts.digitalrighttorepair.org:

Massachusetts,
It's time to speak out for your right to repair

The people of Massachusetts have always stood up for their right to
repair. In 2012, voters passed a law that ensured residents' right to
repair their car wherever they wanted. Now, it's time to do the same for
electronics.

With the Digital Right to Repair Bill, H.3383, we have a chance to
guarantee our right to repair electronics --like smartphones, computers,
and even farm equipment. We have a chance to help the environment and
stand up for local repair jobs--the corner mom-and-pop repair shops that
keep getting squeezed out by manufacturers.


Yeah, I don't think this makes sense.


Meaning, I think it is unrealistic to repair ever-shrinking
and ever-cheaper electronic parts...





The Digital Right to Repair Bill requires manufacturers to provide
owners and independent repair businesses with fair access to service
information, security updates, and replacement parts.



___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss




Re: [Discuss] NAS: lots of bays vs. lots of boxes

2015-07-10 Thread F. O. Ozbek



On 07/10/2015 03:36 PM, Richard Pieri wrote:

On 7/10/2015 3:08 PM, Kent Borg wrote:

But like many nice ideas, this one only goes so far: after a few cycles
you can't get matched replacement disks and a bit later technology
likely changes enough you need to rethink everything.


This is one way that ZFS and Btrfs are superior: they don't require
matching drives like traditional RAID.



MooseFS goes one step further, being RAIN: not
only does it not require matching hard drives, it doesn't
require matching chunk servers either.
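
As a rough sketch of what that looks like in practice (the paths and the goal
value here are only examples): each chunk server lists whatever local disks it
happens to have, and redundancy is a per-directory replication goal rather
than a matched drive set:

  # /etc/mfs/mfshdd.cfg on a chunk server -- disks can be any size or mix
  /mnt/disk1
  /mnt/disk2

  # on a client mount, ask MooseFS to keep two copies of this tree
  mfssetgoal -r 2 /mnt/mfs/data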

-- F. Ozbek
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: lots of bays vs. lots of boxes

2015-07-10 Thread F. O. Ozbek



On 07/10/2015 01:15 PM, Gordon Marx wrote:

Clearly the answer is RAIN (Redundant Array of Inexpensive NASes).



I love it!

Well, MooseFS is exactly that: RAIN. (Along with GlusterFS, etc.)



/me rushes to trademark, monetize

On Fri, Jul 10, 2015 at 1:13 PM, Kent Borg kentb...@borg.org wrote:

On 07/10/2015 12:36 PM, Richard Pieri wrote:


The answer to this conundrum is simple: disks are consumables like toner
and paper and batteries.



Certainly. But as with batteries, the technology changes, and there are
qualitative consequences. For example, the Wikipedia article on RAID says
that Dell recommends against RAID 5 with disks 1TB or larger on some Dell
product-or-other, because the very act of rebuilding the array will possibly
kill other old drives in your array before the data has been copied. RAID 6,
as I understand it, is better by surviving two failures, but it only pushes
the problem back and probably also becomes too risky with 2015-sized drives.

I can imagine someone putting together a swell RAID 5 package of the
slickest 8TB disks available, with plenty of spares to be extra safe, and
after a couple years of great performance one disk dies and the rest commit
suicide over the next few days in a sickening cascade as the array tries to
rebuild itself. Performing admirably the entire time!--until the data is
lost. Doesn't matter if the 8TB drives cost $50 or $800, they could all die
in a horrible capacity-induced pile up, taking some vital 24x7x365 system
with it.

Declaring "they're consumables!" doesn't answer questions about how one
would wisely fill up and use a 24-bay box.

-kb

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss



Re: [Discuss] NAS: lots of bays vs. lots of boxes

2015-07-10 Thread F. O. Ozbek



On 07/10/2015 01:27 PM, Richard Pieri wrote:

On 7/10/2015 1:13 PM, Kent Borg wrote:

Certainly. But as with batteries, the technology changes, and there are
qualitative consequences. For example, the Wikipedia article on RAID
says that Dell recommends against RAID 5 with disks 1TB or larger on
some Dell product-or-other, because the very act of rebuilding the array
will possibly kill other old drives in your array before the data has
been copied. RAID 6, as I understand it, is better by surviving two
failures, but it only pushes the problem back and probably also becomes
too risky with 2015-sized drives.


Because if one disk reaches the end of its life and fails then the rest
of the disks in the set are soon to follow. The problem isn't RAID 5 or
Dell. It's poor maintenance. Perhaps a better comparison is engine oil
and filters, fan belts and hoses in a car. Consumables need to be
replaced /before/ they fail if you want the operation to continue smoothly.



That assumes drives only die of old age, which is not true.
Some batches of hard drives are simply bad and may die at high rates
well within their three-to-five-year expected life span.
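
Either way, watching for the early signs is cheap. A minimal sketch with
smartmontools (device names are placeholders); the attributes below are the
ones that usually start moving before a batch-style failure:

  # one-shot health verdict plus the usual early-warning counters
  smartctl -H /dev/sda
  smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

  # or let smartd watch continuously -- e.g. in /etc/smartd.conf:
  # DEVICESCAN -a -m root -s (S/../.././02|L/../../6/03)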


I can imagine someone putting together a swell RAID 5 package of the
slickest 8TB disks available, with plenty of spares to be extra safe,

[snip]

The "extra safe" means nothing without a good backup plan. RAID does not
protect data. It keeps the system running after single disk failures.
RAID 6 just gives you one extra single disk failure before the whole
thing crashes.


Declaring "they're consumables!" doesn't answer questions about how one
would wisely fill up and use a 24-bay box.


The same way you would a single drive: put data on it and perform
regular backups, and replace the drive when it approaches the end of its
usable life.


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: lots of bays vs. lots of boxes

2015-07-10 Thread F. O. Ozbek



On 07/10/2015 03:15 PM, Mike Small wrote:

I guess my point is people shouldn't be slavish to that advice. It
would be good if there were some to take up the detritus from those
companies and enthusiasts who are churning through all this junk
every couple of years. Oh, and it would be nice if people would be a
little less paranoid and not figure hard drives need to have holes
drilled through them before turning them loose (not meaning to
dredge up that debate again). I don't have a scanning electron
microscope, scout's honour.


Most of the people who do the drilling (I admit, I drilled hundreds
of drives) will actually agree with you. I agree with you 100% myself.
The data destruction is usually mandated by the
institution/organization/company. And, no, unfortunately, you can't
convince them otherwise by technical arguments. If I had my own large
company, I would overwrite the entire drive with /dev/urandom and hand
it to whoever wants it.
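
For what it's worth, the overwrite itself is a one-liner; a sketch (the device
name is a placeholder, and this destroys everything on the drive):

  # one pass of random data over the whole device, flushed at the end
  dd if=/dev/urandom of=/dev/sdX bs=1M status=progress conv=fsync

  # shred from coreutils does the same thing with less typing
  shred -v -n 1 /dev/sdX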

-- F. Ozbek


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: lots of bays vs. lots of boxes

2015-07-05 Thread F. O. Ozbek

On 07/05/2015 09:04 AM, Edward Ned Harvey (blu) wrote:

From: Discuss [mailto:discuss-bounces+blu=nedharvey@blu.org] On
Behalf Of Tom Metro

I'm more interested in clever ways of using multiple, cheap, commodity
NAS boxes, Google-style. For example, for the same cost as that $600+
(diskless) DIY NAS I linked to, I can get 4 of the QNAP 2-bay boxes and
maybe combine them with something like MooseFS. You get redundancy
where
some number of the boxes can go down, and it still keeps working, and
you can expand capacity by adding more boxes (if drive density increases
don't keep pace).


I think the leaders in this space are GlusterFS and Ceph. But I'm sure each
one has their own individual strengths and weaknesses. Among them is
compatibility -


I don't think you're going to get anything like this to work with windows or 
mac clients,

You can connect to MooseFS via Samba. There is a production installation with
hundreds of Windows boxes
connecting to MooseFS via SMB.
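
A sketch of that kind of setup (host names, paths, and the share name are made
up for illustration): one gateway box mounts MooseFS with the FUSE client and
re-exports the mount point over Samba:

  # on the gateway: mount the MooseFS namespace
  mfsmount /mnt/mfs -H mfsmaster.example.com

  # /etc/samba/smb.conf -- share it out to the Windows clients
  [shared]
      path = /mnt/mfs
      read only = no
      browseable = yes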

 or have an android or ios app.

No Android or iOS app, yet.

-- F. Ozbek

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: distributed file systems

2015-07-04 Thread F. O. Ozbek



On 07/04/2015 12:07 PM, Tom Metro wrote:

F. O. Ozbek wrote:

You can use moosefs to essentially do what Isilon does at a fraction
of the price. https://moosefs.com/


Thanks for the reminder of MooseFS.

I've run across it before. There are a bunch like it. They tend to be
either very complicated to set up, or immature and incomplete. One
really needs some first-hand reviews.


MooseFS is relatively easy to set up (one or two hours maximum).

MooseFS is not immature: it has been around for almost 10 years,
and it is used in production environments around the world
(including Harvard University and Gemius S.A. in Poland).

MooseFS does not have all the features you might expect yet, but
it is under ongoing development.

I am not sure what you mean by "hacked up Linux firmware," but
as long as it is a relatively recent and decent Linux installation
it should run. (If you contact them and ask about specific
hardware, they may test it on that hardware for you.)
I think it even runs on the Pi 2 now. :-)




I know Rich Braun has written here many times on GlusterFS (gluster.org).

Whether one can get one of these to run on appliance hardware with their
hacked up Linux firmware is another matter.

  -Tom


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: distributed file systems

2015-07-04 Thread F. O. Ozbek



On 07/04/2015 12:07 PM, Tom Metro wrote:

F. O. Ozbek wrote:

You can use moosefs to essentially do what Isilon does at a fraction
of the price. https://moosefs.com/


Thanks for the reminder of MooseFS.

I've run across it before. There are a bunch like it. They tend to be
either very complicated to set up, or immature and incomplete. One
really needs some first-hand reviews.


You can invite the MooseFS people to come and give a presentation
at one of the Fall meetings of BLU.
It will cost them a roundtrip ticket to Boston, but they will
probably do it as part of their marketing effort.
(By the way, I don't work for MooseFS and I don't benefit financially
from MooseFS.)

The person to contact would be Krzysztof Kielak
at krzysztof.kie...@moosefs.com




I know Rich Braun has written here many times on GlusterFS (gluster.org).

Whether one can get one of these to run on appliance hardware with their
hacked up Linux firmware is another matter.

  -Tom


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] NAS: buy vs. build

2015-07-04 Thread F. O. Ozbek



On 07/04/2015 10:06 AM, Rich Braun wrote:

I have two other requirements that at least until now have favored build rather than 
buy: encryption at rest and incremental upgradability. By the latter, I mean being able 
to expand storage by adding or replacing one or more of the drives vs. swapping out a whole chassis.

What I really want is an open-source equivalent of the Isilon system. You build 
up a NAS system as 3 or more of these auto-replicating chassis. But EMC bought 
that company out, and even back in their early days it was a high-priced system 
(albeit half the price per-gig of anything EMC).



You can use MooseFS to essentially do what Isilon does at a fraction of the
price.
You can choose your own hardware.
Official commercial support is also available.
Here is the link:
https://moosefs.com/

-- F. Ozbek




In the future I'd like to have replication to cloud systems I manage myself, 
once the price of cloud servers comes down and fiber-optic gigabit Internet 
comes to my house. Hopefully I don't have to move to Korea where you can 
already do that. :-/ So the cluster-software management tools should work the 
same on local hardware as cloud servers. There's not much investment in that 
niche anymore, though.

-rich
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss





Re: [Discuss] NAS: buy vs. build

2015-07-04 Thread F. O. Ozbek



On 07/03/2015 08:26 PM, Richard Pieri wrote:

Last year I specced out some high-end compute servers (2 by Xeon 10-core, 
384GB) with a pile of storage (80TB) from Dell and IBM. The compute portions 
were essentially identical
prices. The Dell storage was 2 x 12-bay SAS storage arrays with SATA (NL-SAS) 
drives; the IBM storage was 2 x 12-bay SAS storage arrays with SAS drives. Both 
sets of storage arrays
cost about the same.

The Dell SATA drives cost ~$14K *MORE* than the IBM SAS drives. You read that 
right. SATA drives on the order of twice the cost of SAS drives. Our Dell rep 
refused to budge on the
price even after we showed him the IBM quote.

Dell did not get the sale.


Didn't IBM leave the hardware business entirely (including the servers)
last year?
You probably purchased Lenovo servers labeled as IBM servers.
A much better option would have been to purchase SuperMicro servers instead.

-- F. Ozbek.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek



On 05/10/2014 04:00 PM, Rich Braun wrote:

Greetings...after concluding cephfs and ocfs2 are beyond-the-pale too
complicated to get working, and xtreemfs fails to include automatic healing,
I'm back to glusterfs.

None of the docs seem to address the question of how to get the systemd (or
sysvinit) scripts working properly so I've had to customize my own.
Basically, the boot sequence has to be this:

   Mount local volumes (including the bricks which are xfs)
   Start up networking
   Initialize glusterd
   Wait for glusterd to come up
   Mount gluster volumes
   Launch LXC and/or VirtualBox instances

It's that 4th step which isn't obvious:  if I don't insert some sort of a
wait, the mount(s) will fail because glusterd isn't fully up-and-running.
Also, in step 5 if I don't wait for all the volumes to mount, one or more
instances come up without their dependent volumes.

I'll include my cobbled-together systemd scripts below.  Systemd is new enough
that I'm not completely familiar with it; for example it's easy to get a
circular dependency and so far the only way I've found to debug it is
trial-and-error, rebooting each time.

Googling for performance-tuning on glusterFS makes me shake my head;
ultimately the technology is half-baked if it takes 500ms or more just to
transfer an inode from a source path into a glusterFS volume.  I should be
able to cp -a or rsync any directory on my system, whether it's got 100
files or 1 million files, into a gluster volume without waiting for hours or
days.  It falls apart completely after about 10,000 files, which means I can
only use the technology for a small portion of my deployments.  That said, if
any of y'all are *actually using* glusterfs for *anything* I'd love to share
stories.

-rich


Rich,

We have tested ceph, glusterfs and moosefs and decided to use moosefs.
We have been testing these products for the last 18 months.
We are about to use moosefs in production this summer.

If you are interested in getting more details, you can contact me
off the list and I will explain what we have tested and where we are.
There is no point in going into details of why moosefs is better
than glusterfs in this list because glusterfs die-hards start attacking
immediately. We are located in the Boston Longwood Medical Area (Harvard
Med School).

If you are serious about looking for alternatives to glusterfs,
let me know and I will give you all the details.

Thanks
Fevzi




---glusterd.service---
[Unit]
Description=Gluster elastic volume management daemon

[Service]
ExecStart=/usr/sbin/glusterd -N

[Install]
WantedBy=multi-user.target


---gluster-vols.service---
[Unit]
Description=Mount glusterFS volumes
Conflicts=shutdown.target
After=glusterd.service
ConditionPathExists=/etc/fstab

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/bash -c "sleep 10; /usr/bin/mount -a -O _netdev"

[Install]
WantedBy=multi-user.target


---lxc@.service---
[Unit]
Description=Linux Container
After=network.target gluster-vols.service

[Service]
Type=forking
ExecStart=/usr/bin/lxc-start -dn %I
ExecStop=/usr/bin/lxc-stop -n %I

[Install]
WantedBy=multi-user.target
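
One way to get rid of the fixed sleep in step 4 (offered only as a sketch; gv0
is a placeholder volume name): order the mount unit after network-online.target
and glusterd, and poll the gluster CLI until the volume actually answers before
running mount.

---gluster-vols.service (alternative)---
[Unit]
Description=Mount glusterFS volumes
After=network-online.target glusterd.service
Wants=network-online.target
ConditionPathExists=/etc/fstab

[Service]
Type=oneshot
RemainAfterExit=yes
# wait until the volume is really being served instead of sleeping blindly
ExecStartPre=/bin/bash -c 'until gluster volume status gv0 >/dev/null 2>&1; do sleep 1; done'
ExecStart=/usr/bin/mount -a -O _netdev

[Install]
WantedBy=multi-user.target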


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss




Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek


On 05/14/2014 10:39 AM, Richard Pieri wrote:

F. O. Ozbek wrote:

We have tested moosefs extensively. The commercial version has
redundant metadata servers and redundant chunk servers.
Ignoring fsync is not a problem. We will use it in production
for real data (not scratch).


Um. Yeah. Good luck with that. I think you'll need it. Because all the
redundancy in the world does you no good if the data doesn't get written
in the first place.



The data gets written. We have tested it.
I don't need the luck, thank you very much.

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek



On 05/14/2014 11:13 AM, Richard Pieri wrote:

F. O. Ozbek wrote:

The data gets written. We have tested it.


Ignoring fsync/O_SYNC means that the file system driver doesn't flush
its write buffers when instructed to do so. Maybe the data gets written.
Maybe not. You can't be sure of it unless writes are atomic, and you
don't get that from MooseFS.

So, like I said, good luck with that. Because outages like the one that
clobbered Cambridge back in November 2012 happen.



That is the whole point: "doesn't flush its write buffers when
instructed to do so." You don't need to instruct; the data gets written
all the time. When we did the tests, we did tens of
thousands of writes (basically checksummed test files), and the
read tests succeeded every time. OK, I admit, you probably
do not want to run your transactional financial applications on MooseFS,
but the reality is that these filesystems are used in research
environments where high I/O bandwidth is the key.
The fact that it doesn't support forcing the buffer to the disk
is not the problem in this case. GlusterFS will start giving
you random I/O errors under heavy load. How is that any good?
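
For anyone who wants to reproduce that kind of test, a minimal sketch of the
idea (paths and file counts are arbitrary): write files with known checksums,
then verify every one of them, ideally from another client or after a remount:

  # write 10,000 one-megabyte files of random data and record their checksums
  for i in $(seq -w 1 10000); do
    dd if=/dev/urandom of=/mnt/mfs/test/f$i bs=1M count=1 2>/dev/null
    md5sum /mnt/mfs/test/f$i >> /tmp/manifest
  done

  # later: every file must still read back with the same checksum
  md5sum -c --quiet /tmp/manifest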

I don't know what you are referring to in Cambridge but
we are not Cambridge.

Fevzi
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek



On 05/14/2014 11:58 AM, Richard Pieri wrote:

F. O. Ozbek wrote:

That is the whole point: "doesn't flush its write buffers when
instructed to do so." You don't need to instruct; the data gets written
all the time. When we did the tests, we did tens of
thousands of writes (basically checksummed test files), and the
read tests succeeded every time.


But what it seems you haven't done is kill the power while writing to
see how messy the damage is and how difficult the cleanup will be.



If you lose power to your entire storage cluster, you will lose
some data; this is true of almost all filesystems (including MooseFS and
GlusterFS).





The fact that it doesn't support forcing the buffer to the disk
is not the problem in this case. Glusterfs will start giving
you random I/O errors under heavy load. How is that any good?


It's not. It's also not relevant to lack of atomic writes on MooseFS.

Yes, I understand the desire for throughput. I don't like the idea of
sacrificing reliability to get it.


It is possible to set up a MooseFS cluster with redundant metadata
servers in separate locations (and chunk servers in separate locations).
This will save you from power outages as long as you don't lose power
in all the locations at the same time. Keep in mind these servers
are in racks with UPS units and generator backups.

Basically, this fsync stuff is the same argument you hear over and
over again from the GlusterFS folks. It is pretty much
irrelevant unless you are in a financial-industry transactional
environment. Even in those environments, you can provide
transactional integrity at the application level using multiple
separate MooseFS clusters.




I don't know what you are referring to in Cambridge but
we are not Cambridge.


http://www.boston.com/metrodesk/2012/11/29/cambridge-power-outage-thousands-hit-blackout/0r93dJVZglkOagAFw8w9bK/story.html



Like I said, we are not Cambridge.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek



On 05/14/2014 12:20 PM, Dan Ritter wrote:

On Wed, May 14, 2014 at 12:13:04PM -0400, F. O. Ozbek wrote:


Like I said, we are not Cambridge.


I think you're missing the point.

Are all your servers connected to battery UPS systems?

Yes.

Are those connected to automatic transfer switched generators?

Yes.

Do you have a sufficient amount of fuel to run them as long as you will
need to?

72 hours.

Do you have a monitoring system that will tell you about all power
transitions?

Yes.

Do you run fire-drills so that everyone knows what to do in a power outage?

Our data hosting vendor does.

Is every system tested from end to end?

Yes.

And finally, have you compared the cost of all these things to the value
of your data?

Yes.



The lesson of Cambridge is that stable power isn't.

The lesson of http://en.wikipedia.org/wiki/Northeast_blackout_of_2003
is that local power outages aren't.


-dsr-



We know that. That is why we have all the stuff above.

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek

On 05/14/2014 01:45 PM, Richard Pieri wrote:

F. O. Ozbek wrote:

I think the discussion is losing its focus. If you are a bank
moving billions of dollars around, go make EMC even richer.
If you need a reliable HPC storage cluster that you can actually afford,
use moosefs!


That's been my point: MooseFS isn't reliable. Parallel clusters will
have overlapping sync cycles which reduces, but does not eliminate, the
windows during which no data or partial data has been flushed to disk.
More parallel clusters means smaller windows but you still can't be 100%
certain 100% of the time.

This is getting to be a lot like that "MongoDB is Web Scale" video.
GlusterFS is slow because it writes to disk. :)

But seriously, in a purely compute environment where data on the shared
file system are replicated from other sources, MooseFS may be a good
choice. But as a primary data store? No chance.



We have been told by the GlusterFS developers in the past
that they don't store their important data on GlusterFS, yet
somehow you seem to claim
that GlusterFS is more reliable than MooseFS?!

I mean, come on, look at the claim you are making:
"MooseFS isn't reliable." Have you ever done any tests?
Well, we have done tests, and a lot of them. At the time
we did the tests, GlusterFS started giving random I/O
errors under heavy load. We would get error messages about
files not being there, and on random files as well.
In the same tests, we received no errors from MooseFS.
It takes some serious chutzpah to make the general claim
you are making, which is "MooseFS isn't reliable."

The fact is MooseFS is pretty much rock solid. This thread started
with someone asking for advice on an alternative to GlusterFS,
which I provided, and look at the state of the discussion
it has taken us to! We make our living providing reliable storage.
We have done extensive tests, and at the time we did them,
MooseFS was far better than GlusterFS (reliability and performance).
Is GlusterFS not worth testing again? On the contrary, we will probably test
the latest version again. (The same way we keep on testing
other solutions.)

Fevzi
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Gluster startup, small-files performance

2014-05-14 Thread F. O. Ozbek



On 05/10/2014 04:00 PM, Rich Braun wrote:

Googling for performance-tuning on glusterFS makes me shake my head;
ultimately the technology is half-baked if it takes 500ms or more just to
transfer an inode from a source path into a glusterFS volume.  I should be
able to cp -a or rsync any directory on my system, whether it's got 100
files or 1 million files, into a gluster volume without waiting for hours or
days.  It falls apart completely after about 10,000 files, which means I can
only use the technology for a small portion of my deployments.  That said, if
any of y'all are *actually using* glusterfs for *anything* I'd love to share
stories.

-rich



Yes, we are using MooseFS. If you want to learn about
it, you can contact me off the list.
I can give you detailed info about our tests and our environment,
show you our hardware, etc.

The list unfortunately is not a good medium to discuss this;
the discussion quickly deteriorates.

Thanks
Fevzi

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] The Ceph storage system, the CephFS filesystem

2014-04-23 Thread F. O. Ozbek

Have you tried moosefs?

We tested ceph, glusterfs and moosefs
and decided to use moosefs in production.

Fevzi

On 04/23/2014 02:19 AM, Rich Braun wrote:

I've fiddled unsuccessfully with the Ceph storage system and the
similarly-named (but separate codebase) CephFS cluster filesystem.  What I was
hoping for was a more broadly-supported/easier-to-install cluster technology
than OCFS2, that will hopefully work better than GlusterFS.

Before I give up on it (already gave up trying to get OCFS2 to work) and turn
to GlusterFS, I'd love to hear from any of you who are using CephFS.  Does it
work well?  How do I get it working without a lot of effort?  Are some distros
better-supported than others?

The official Ceph website is nothing short of overwhelming, and the
ceph-deploy python scripts are a buggy mess, at least on my personal
distro-of-choice (OpenSuSE) and with the most recent long-term supported Ceph
version (0.72 Emperor).  Yet the promise of this platform is quite tantalizing
so I want to get it working.

-rich


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss




Re: [Discuss] The Manchurian Computer?

2012-07-28 Thread F. O. Ozbek



On 07/26/2012 02:44 PM, Kent Borg wrote:


My Manchurian musings are on the theory that the Chinese are
aggressively trying to break into western systems, and that a Chinese
company (not just Chinese manufacturer) would have extra opportunity to
embed lots of stuff in products they ship.  Not just software, but in
firmware and silicon.  Maybe keyboard recorders that can be read back
given physical control of the computer.  Maybe network hardware features
that can offer more deluxe remote access.

Running Linux would break (most) Windows assumptions that remote access
might depend upon, but clever attackers might be more OS agnostic.
Lenovo seems like the make that would most likely have backdoors.  If
the US government makes US manufacturers put in backdoors (as has been
rumored) the Chinese are certainly not above similar behavior.


-kb


I think it is unlikely that foreign governments
will be modifying the hardware. Most of the spying, etc.,
seems to be limited to the software level (so far).


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] where to advertise?

2012-05-18 Thread F. O. Ozbek


What are the best websites for advertising open tech positions?
(This is a Sr. Sys Admin position.)

We will advertise on Monster and Dice.

Any other suggestions? (LinkedIn?)

Thanks in advance.

Fevzi
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss