Oh... I run XBMC on mine too... I just have to add folders, and you do that once and you're done. That's when you get a nice interface.

BTW, I had initially ripped to ISO... then I decided I don't want ISOs... so I'm re-ripping to mkv. That is taking a long time, but I do a few each day. The recent stuff is already mkv, but stuff I ripped two years ago is what I'm working on now. I assume you guys are all using mkv, right?
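In case anyone else is chewing through the same backlog, this is roughly the sort of thing I'd script to grind through a folder of ISOs a batch at a time (just a sketch, assuming HandBrakeCLI is installed and on the PATH; the folder paths and preset name are placeholders, not my real setup):

# Rough batch re-rip sketch. Assumes HandBrakeCLI is on the PATH; the
# folders and preset below are placeholders.
import subprocess
from pathlib import Path

ISO_DIR = Path(r"D:\Rips\ISO")   # where the old ISO rips live (placeholder)
MKV_DIR = Path(r"D:\Rips\MKV")   # where the new mkv files go (placeholder)

for iso in sorted(ISO_DIR.glob("*.iso")):
    out = MKV_DIR / (iso.stem + ".mkv")
    if out.exists():             # skip titles already re-ripped
        continue
    subprocess.run(
        ["HandBrakeCLI", "-i", str(iso), "-o", str(out),
         "--preset", "High Profile"],   # swap in whatever preset you use
        check=True,
    )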

On 2/24/2014 9:45 AM, Brian Weeden wrote:
Anthony, I'd also add to Jim's comments that once you have one big central
drive, you can use something like XBMC to get a very nice interface on all
your HTPCs in the house and access to all your content.



---------
Brian



On Mon, Feb 24, 2014 at 9:42 AM, James Maki <jwm_maill...@comcast.net> wrote:

That's how I started! :) But the desire for ease of use for my family (if
it's not in plain sight, they can't find the drive, folder, or location of a
desired movie or TV show) won out, and it just got "out of control!" A couple
of drives here. A Sans Digital tower there. A new HTPC in the family room. A
gigabit network hooking the upstairs bedroom to the main computer downstairs.
You name it, it got added. I ended up spending lots of time "cataloging,"
especially when adding drives. The pooling aspect of FlexRAID allows me to
have one BIG drive with a folder for Blu-rays, one for DVDs, and another for
recorded TV shows. Previously, a desired file might have been on any one of 4
computers and any one of approximately 30 drives. I did compromise a while
back and create 8 and 10 TB JBODs on the Sans Digital towers and internally
in the main HTPC. This made it slightly easier to catalog.

Of course, all of this ignores the "building computers, etc., is fun" factor
of this hobby. :)

If nothing else, I have learned lots about SAS (which had intimidated me
before), building my own NAS, and a little about server software. Always a
fun (if occasionally frustrating) experience.

To Brian: I am doing exactly that: one big drive with 3 shared folders. The
multiple-pool idea was to facilitate smaller updates/validates that could be
done overnight rather than over 3 or 4 days. Once I get the drive set up as
desired, I will give the parity backup another try and see whether, once it
is set, the periodic updates of a static pool are quick. Thanks for the input
and feedback.

Jim

-----Original Message-----
From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Anthony Q. Martin
You guys are so sophisticated! I'm just stringing all my drives off a PC with
external enclosures (10 drives inside the box, 8 more in two four-bay
enclosures), using 3 and 4 TB drives (greens, mostly, from WD and Seagate).
Mine are just NTFS-mounted volumes, all shared over my GB network. That way,
I can just navigate to any drive and any folder to play my rips from my other
HTPCs. Easy setup. If a drive goes down, I just re-rip, as I have all the
optical discs as backup. Poor man's setup. Lazy man's setup. :) RAID is too
complicated for my brain, and I don't see my use as super critical. Ripping
to mkv is mostly done in the background while working on other stuff.

On 2/24/2014 8:30 AM, Brian Weeden wrote:
Jim, have you thought about setting up multiple shares instead of
multiple pools? For example, you could have one big drive pool with
all your data but share out any folder on that pool as a separate
network share.
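Something like this is all it takes on the Windows side (a quick sketch
using Python to drive the built-in net share command from an admin prompt;
the V: drive letter and folder names are placeholders for whatever your pool
uses):

import subprocess

POOL = "V:\\"   # drive letter the big pool is mounted on (placeholder)
for name in ("Blu-rays", "DVDs", "RecordedTV"):   # example folder names
    # e.g. net share Blu-rays=V:\Blu-rays -- one share per top-level folder
    subprocess.run(["net", "share", "%s=%s%s" % (name, POOL, name)],
                   check=True)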


---------
Brian



On Sun, Feb 23, 2014 at 2:46 PM, Brian Weeden <brian.wee...@gmail.com> wrote:
If you're doing an initialization and building parity for 23 TB of
data, I can see that taking quite a while. The update I'm not so
sure about. It should only need to update parity for whatever files
were changed. So if the update takes just as long, that indicates
maybe all your data changed. But if it's just video files, then it
shouldn't.

I do know people have talked about exempting things like nfo files
and thumbnails from the RAID so the parity process will skip them.
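If you want a feel for how much churn those files account for before
deciding what to exempt, a quick scan like this will tot it up (just a
sketch; the pool path and extension list are placeholders):

import os

POOL = r"V:\Movies"   # wherever the pool is mounted (placeholder)
SKIP = (".nfo", ".tbn", ".jpg", ".png")   # typical XBMC metadata extensions

count = total = 0
for root, _dirs, files in os.walk(POOL):
    for f in files:
        if f.lower().endswith(SKIP):
            count += 1
            total += os.path.getsize(os.path.join(root, f))
print("%d metadata files, %.1f MB that parity could skip"
      % (count, total / 1e6))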



---------
Brian



On Sun, Feb 23, 2014 at 2:17 PM, James Maki <jwm_maill...@comcast.net> wrote:
Hi Brian,

I switched to FlexRAID to combine a total of 23 2 TB drives spread
over 5 Sans Digital port-multiplier towers, plus extra drives on
several PCs used as HTPCs. I have ripped all my Blu-rays, DVDs, and
recorded TV to the various arrays, and over time it had just gotten too
large to easily manage. I wanted to centralize everything on one
system. The system I started with utilized an AMD FM2 motherboard
with 8 onboard SATA ports, 2 SAS ports on an add-on card (for a
total of 8 additional SATA ports), and 3 of the Sans Digital towers
(5 disks each), for a total of 31 drives distributed as 1 OS drive, 4
parity drives, and 26 data drives (several were empty). When this
continued to fail on creation, I moved the Sans Digital-based drives
to a 6-port SAS controller card.

When I still had problems, I found that several drives were bad
(via a disk scan), including the 1st parity drive. Replacing the drives
gave me a successful creation, but it took 4 days. The update took
another 4 days. That's when I started having second thoughts on
using the parity backup option. I guess I was just expecting too much
from the software. That's when I thought creating several pools
would reduce the strain of each update/validate.

I am using a modestly powered AMD dual-core 3.2 GHz processor and
mostly consumer drives (mixed with a few WD Reds). I went with
Windows Home Server for economy reasons ($50 vs. $90-130 for Windows
7 Home Premium/Professional). I utilized a HighPoint RocketRAID
2760A SAS RAID controller card. I am using RAID over File System
2.0u12, SnapRAID 1.4 Stable, and Storage Pool 1.0 Stable (although
not using the SnapRAID at this point).

Overall, I am happy with the pooling facility of the software. I
just wish my large setup would not choke the parity option. Thanks
for all the input.

Not sure if there is an answer to my problem. More powerful hardware?
Reading the forums seems to indicate that hardware should NOT be the
bottleneck. There seems to be the option of updating/validating only
portions of the RAID each night. More research is needed on that
front. My current plan at this point is to fill the RAID in
pooling-only mode, make sure all names and organization are correct,
then lock in a stable, unchanging file system that I will commit to
the SnapRAID parity option. That way I will only need to
validate/verify periodically.
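The nightly rotation itself would be trivial to script; the open question is
what command to hand each slice to. Something like this, run from the task
scheduler (a sketch only -- the flexraid-update command below is a made-up
placeholder, since I haven't dug into FlexRAID's actual command line yet):

import datetime
import subprocess

FOLDERS = ["Blu-rays", "DVDs", "RecordedTV"]   # example top-level folders
tonight = FOLDERS[datetime.date.today().toordinal() % len(FOLDERS)]

# Placeholder command -- substitute whatever your FlexRAID install
# actually runs to update parity for a single path.
subprocess.run(["flexraid-update", "--path", "V:\\" + tonight], check=True)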
Thanks,

Jim

-----Original Message-----
From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Brian Weeden
Sent: Sunday, February 23, 2014 6:06 AM
To: hardware
Cc: hwg
Subject: Re: [H] What are we up to (Was-Are we alive?)

Hi Jim. Sorry to hear you're having such troubles, especially since I think
I'm the one who introduced FlexRAID to the list.

I've been running it on my HTPC for several years now and (knock on wood)
it's been running fine. Not sure how big your setup is; I'm running 7 DRUs
and 2 PRUs of 2 TB each. I have them mounted as a single pool that is shared
on my LAN. I run nightly parity updates.

Initializing my setup did take several hours, but my updates don't take very
long. Sometimes when I add several ripped HD movies at once it might take a
few hours, but that's it. How much data are you calculating parity for at
the initialization? Do you have a lot of little files (like thousands of
pictures) or lots of files that change often? Either of those could greatly
increase the time it takes to calculate parity.
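If you're not sure, a quick scan for directories stuffed with small files
will usually point at the culprits (a rough sketch; the pool path and the
1 MB cutoff are placeholders):

import os
from collections import Counter

ROOT = "V:\\"         # top of the pool (placeholder)
SMALL = 1024 * 1024   # call anything under 1 MB "small"; tweak to taste

counts = Counter()
for root, _dirs, files in os.walk(ROOT):
    for f in files:
        try:
            if os.path.getsize(os.path.join(root, f)) < SMALL:
                counts[root] += 1
        except OSError:
            pass      # file vanished mid-scan; ignore it

for path, n in counts.most_common(10):
    print("%6d small files  %s" % (n, path))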

I'm running it under Win7, and unfortunately I don't have any
experience with Server 2011 or any of the Windows Server builds.

From what I've gathered, you can only have one pool per system. I
think that's a limit of how things work. But I've never needed more
than one pool, so it hasn't bothered me.

For hardware, I'm running the following, based largely on an HTPC
hardware guide I found online. It's based on a server chipset to
maximize the bandwidth to the drives.

Intel Xeon E3-1225
Asus P8B WS LGA 1155 Intel C206
8 GB DDR3 SDRAM
Corsair TX750 V2 750W
2x Intel RAID Controller Card SATA/SAS PCI-E x8
Antec 1200 V3 case
3x 5-in-1 hot-swap HDD cages

Part of the key is the controller cards. I'm not actually using the on-board
RAID, just using the cards for the ports and the bandwidth. I've got two
SAS-to-SATA cables plugged into each card, which gives me a total of 16 SATA
ports. The cards are each on an 8x PCIe bus that gives them a lot of
bandwidth. The boot drive is an older SSD attached to one of the SATA ports
on the mobo.
One trick I figured out early on was to initialize your array with the
largest number of DRUs you think you'll eventually have, even if you don't
actually have that many drives at the start. That way you can add new DRUs
and not have to reinitialize the array.

When I started using FlexRAID it was basically a part-time project being run
by Brahim. He's now created a fully-fledged business out of it and has gone
way beyond just FlexRAID. Apparently he now has two products. I think the
classic FlexRAID system I'm still using has become RAID-F (RAID over
filesystem), and he's got a new Transparent RAID product as well:
http://www.flexraid.com/faq/

I'm still running 2.0u8 (snapshot 1.4 stable), so I guess at some point I'll
need to move over to the commercial version. But for now it's working fine,
so I don't want to disturb it.

Hope all this helps, and happy to answer any other questions however I can.

---------
Brian

