Re: [Gluster-users] To GlusterFS or not...

2014-09-23 Thread Alexey Zilber
Yes, Roman is correct.  Also, if you have lots of random IO you're better
off with many smaller SAS drives: the more spindles you have, the more
random IO you can sustain.  This is also why we went with SSD drives;
SAS drives weren't cutting it on the random IO front.

Another option you may try is using SAS drives with ZFS compression.
Compression will be especially helpful if you're using SATA drives.
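
If you go the ZFS route, turning compression on is a one-liner; a rough
sketch (pool layout and device names below are placeholders, not a
recommendation):

# mirrored pool out of four hypothetical disks
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# lz4 is cheap on CPU and a sane default for mixed data
zfs set compression=lz4 tank
# check the achieved ratio once some data has landed on the pool
zfs get compressratio tank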

-Alex
On Sep 23, 2014 2:10 PM, Roman rome...@gmail.com wrote:

 Hi,

 SAS 7200 RPM disks are not that small at all (basically the same sizes
 as SATA). If I remember right, the reason for switching to SAS here
 would be that SAS is full duplex (you can read and write to the disk at
 the same time), while SATA is half duplex (either read or write at any
 one moment).

 2014-09-23 9:02 GMT+03:00 Chris Knipe sav...@savage.za.org:

 Hi,

 SSD has been considered but is not an option due to cost.  SAS has
 been considered but is not an option due to the relatively small sizes
 of the drives.  We are *rapidly* growing towards a PB of actual online
 storage.

 We are exploring RAID controllers with onboard SSD cache, which may help.

 On Tue, Sep 23, 2014 at 7:59 AM, Roman rome...@gmail.com wrote:
  Hi,
 
  just a question ...
 
  Would SAS disks be better in a situation with lots of seek time when
  using GlusterFS?
 
  2014-09-22 23:03 GMT+03:00 Jeff Darcy jda...@redhat.com:
 
 
   The biggest issue that we are having is that we are talking about
   -billions- of small (max 5MB) files. Seek times are killing us
   completely from what we can make out. (OS, HW/RAID has been tweaked
   to kingdom come and back.)
 
  This is probably the key point.  It's unlikely that seek times are
  going to get better with GlusterFS, unless it's because the new servers
  have more memory and disks, but if that's the case then you might as well
  just deploy more memory and disks in your existing scheme.  On top of
  that, using any distributed file system is likely to mean more network
  round trips, to maintain consistency.  There would be a benefit from
  letting GlusterFS handle the distribution (and redistribution) of files
  automatically instead of having to do your own sharding, but that's not
  the same as a performance benefit.
 
   I’m not yet too clued up on all the GlusterFS naming, but essentially
   if we do go the GlusterFS route, we would like to use non-replicated
   storage bricks on all the front-end as well as back-end servers, in
   order to maximize storage.
 
  That's fine, so long as you recognize that recovering from a failed
  server becomes more of a manual process, but it's probably a moot point
  in light of the seek-time issue mentioned above.  As much as I hate to
  discourage people from using GlusterFS, it's even worse to have them be
  disappointed, or for other users with other needs to be so as we spend
  time trying to fix the unfixable.
 
  --
  Best regards,
  Roman.



 --

 Regards,
 Chris Knipe




 --
 Best regards,
 Roman.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster = RAID 10 over the network?

2014-09-21 Thread Alexey Zilber
Hi Ryan,

   I think if you could provide more info on the storage systems it would
help.  Things like total drives per RAID set and the size of each drive.
This is a complicated question, but a quick Google search brings up this
interesting article:
http://wolfcrow.com/blog/which-is-the-best-raid-level-for-video-editing-and-post-production-part-three-number-soup-for-the-soul/

  Imho, without knowing any of these details, my personal preference,
unless you're running a database, is to do multiple RAID-1 sets, stripe
them with LVM, and drop XFS on top.
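
Something along these lines, assuming four disks in two mirrors (device
names and paths are placeholders, adjust to your actual layout):

# two RAID-1 pairs out of four hypothetical disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# stripe the two mirrors together with LVM
pvcreate /dev/md0 /dev/md1
vgcreate vg_bricks /dev/md0 /dev/md1
lvcreate -i 2 -l 100%FREE -n brick1 vg_bricks

# XFS on top, mounted where the brick will live
mkfs.xfs /dev/vg_bricks/brick1
mount /dev/vg_bricks/brick1 /data/brick1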

  I would like to add that if your storage provider only offers RAID-5 or
RAID-10, it might behoove you to look for another storage provider.  :)

-Alex
On Sep 21, 2014 8:24 PM, Ryan Nix ryan@gmail.com wrote:

 Hi All,

 So my boss and I decided to make a good-sized investment in a Gluster
 cluster.  I'm super excited, and I will be taking a Red Hat Storage class
 soon.

 However, we're debating the hardware configuration we intend to purchase.
 We agree that each brick/node (we're buying four, each configured as
 RAID 10) will help us sleep at night, but to me it seems like such an
 unfortunate waste of disk space.  Our graduate and PhD students work with
 lots of video, and they filled up our proof-of-concept 4 TB ownCloud/Gluster
 setup in 2 months.

 I stumbled upon Howtoforge's Gluster setup guide from two years ago, and
 I'm wondering if it is correct and/or still relevant:

 http://bit.ly/1qkLoVe

 *This tutorial shows how to combine four single storage servers (running
 Ubuntu 12.10) to a distributed replicated storage with GlusterFS
 http://www.gluster.org/. Nodes 1 and 2 (replication1) as well as 3 and 4
 (replication2) will mirror each other,
 and replication1 and replication2 will be combined to one larger storage
 server (distribution). Basically, this is RAID10 over network. If you lose
 one server from replication1 and one from replication2, the distributed
 volume continues to work. The client system (Ubuntu 12.10 as well) will be
 able to access the storage as if it was a local filesystem.*
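
 If I'm reading the guide right, the whole thing boils down to a single
 volume built from four bricks; a rough sketch (hostnames and brick paths
 are placeholders, not the guide's exact names):

 # replica 2 pairs bricks in the order listed: (node1,node2) mirror each
 # other, (node3,node4) mirror each other, and files are distributed
 # across the two pairs, i.e. RAID 10 over the network
 gluster volume create gv0 replica 2 transport tcp \
   node1:/data/brick1/gv0 node2:/data/brick1/gv0 \
   node3:/data/brick1/gv0 node4:/data/brick1/gv0
 gluster volume start gv0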

 The vendor we have chosen, System 76, offers either RAID 5 or RAID 10 in
 each server.  Does anyone have insights or opinions on this?  It would
 seem to me that RAID 5 would be okay, and that some kind of drive
 monitoring (opinions also welcome, please) would be sufficient given the
 inherent redundancy of Gluster's distributed/replicated setup.  RAID 5 at
 System 76 allows us to max out at 42 TB of usable space.  RAID 10 makes
 it 24 TB usable.

 I'd love to hear any insights or opinions on this.  To me, RAID 5 with
 Gluster in a distributed replicated setup should be sufficient and help us
 sleep well each night.  :)

 Thanks in advance!

 Ryan


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Alexey Zilber
Changelog?


On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY kkeit...@redhat.com
wrote:


 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide) are
 now available in YUM repos at

   http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST

 There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.

 Debian and Ubuntu DPKGs should also be appearing soon.

 --

 Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] GlusterFS-3.4.4 RPMs on download.gluster.org

2014-06-16 Thread Alexey Zilber
And I found it myself:
https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.4.md




On Mon, Jun 16, 2014 at 11:13 PM, Alexey Zilber alexeyzil...@gmail.com
wrote:

 Changelog?


 On Mon, Jun 16, 2014 at 9:24 PM, Kaleb S. KEITHLEY kkeit...@redhat.com
 wrote:


 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora (19, 20, 21/rawhide) are
 now available in YUM repos at

   http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST

 There are also RPMs available for Pidora 20, SLES 11sp3 and OpenSuSE 13.1.

 Debian and Ubuntu DPKGs should also be appearing soon.

 --

 Kaleb



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Cannot create volume on fresh install of 3.4.3 on CentOs 5.10

2014-06-14 Thread Alexey Zilber
Hi All,

   I'm having a horrid time getting gluster to create a volume.  Initially,
I messed up a path and hit the error mentioned here:
http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

 I fixed it, restarted gluster on both nodes, and now I just get a
straight-up failure:

# gluster volume create devroot replica 2 transport tcp
sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot

volume create: devroot: failed

The only thing that gets created is the extended attributes in
/data/brick1/devroot on sfdev1.
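
For reference, the cleanup from Joe's post that I ran on both nodes was
roughly the following (brick path as above):

# inspect the leftover attributes
getfattr -d -m . -e hex /data/brick1/devroot
# strip the stale volume markers and the .glusterfs tree
setfattr -x trusted.glusterfs.volume-id /data/brick1/devroot
setfattr -x trusted.gfid /data/brick1/devroot
rm -rf /data/brick1/devroot/.glusterfs
service glusterd restart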

Here's the cli.log.. not much in there:

---

[2014-06-14 05:42:03.519769] W
[rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing
'option transport-type'. defaulting to socket

[2014-06-14 05:42:03.523525] I [socket.c:3480:socket_init]
0-glusterfs: SSL support is NOT enabled
[2014-06-14 05:42:03.523580] I [socket.c:3495:socket_init]
0-glusterfs: using system polling thread
[2014-06-14 05:42:03.600482] I
[cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate
cluster type found. Checking brick order.

[2014-06-14 05:42:03.600844] I
[cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order
okay
[2014-06-14 05:42:03.668257] I
[cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to
create volume
[2014-06-14 05:42:03.668365] I [input.c:36:cli_batch] 0-: Exiting with: -1
---

.cmd_log_history shows:

[2014-06-14 05:42:03.668051]  : volume create devroot replica 2
transport tcp sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot
: FAILED :


etc-glusterfs-glusterd.vol.log seems good too.


Any ideas?


-Alex
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Cannot create volume on fresh install of 3.4.3 on CentOs 5.10

2014-06-14 Thread Alexey Zilber
:16:30.472989] D
[glusterd-utils.c:4980:glusterd_friend_find_by_hostname] 0-management:
Friend sfdev2 found.. state: 3



-Alex


On Sat, Jun 14, 2014 at 6:32 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 06/14/2014 11:38 AM, Alexey Zilber wrote:

 Hi All,

 I'm having a horrid time getting gluster to create a volume.
   Initially, I messed up a path and had the error mentioned here:
 http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

   I fixed it, restarted gluster on both nodes, and now I just get a
 straight-up failure:

 # gluster volume create devroot replica 2 transport tcp
 sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot

 volume create: devroot: failed

 The only thing that gets created is the extended attributes in
 /data/brick1/devroot on sfdev1.

 Here's the cli.log.. not much in there:

 ---


 [2014-06-14 05:42:03.519769] W [rpc-transport.c:175:rpc_transport_load]
 0-rpc-transport: missing 'option transport-type'. defaulting to socket
 [2014-06-14 05:42:03.523525] I [socket.c:3480:socket_init] 0-glusterfs:
 SSL support is NOT enabled
 [2014-06-14 05:42:03.523580] I [socket.c:3495:socket_init] 0-glusterfs:
 using system polling thread


 [2014-06-14 05:42:03.600482] I
 [cli-cmd-volume.c:392:cli_cmd_volume_create_cbk] 0-cli: Replicate
 cluster type found. Checking brick order.
 [2014-06-14 05:42:03.600844] I
 [cli-cmd-volume.c:304:cli_cmd_check_brick_order] 0-cli: Brick order okay
 [2014-06-14 05:42:03.668257] I
 [cli-rpc-ops.c:805:gf_cli_create_volume_cbk] 0-cli: Received resp to
 create volume


 [2014-06-14 05:42:03.668365] I [input.c:36:cli_batch] 0-: Exiting with: -1
 ---

 .cmd_log_history shows:

 [2014-06-14 05:42:03.668051]: volume create devroot replica 2 transport
 tcp sfdev1:/data/brick1/devroot sfdev2:/data/brick1/devroot : FAILED :


 etc-glusterfs-glusterd.vol.log seems good too.


 Can you please check again the part of glusterd.vol.log relating to the
 attempt to create volume devroot? glusterd's log file normally contains
 information about why an operation failed.
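
 Something like this should pull out the relevant lines (log path assuming
 a default install):

 grep -A 5 devroot /var/log/glusterfs/etc-glusterfs-glusterd.vol.log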

 -Vijay


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users