Re: [H] What are we up to (Was-Are we alive?)
In our family we don't really care about extra features or stuff. We have two small kids, so having all their movies and TV shows on demand is a big bonus. What I like about XBMC is that it keeps an entire library of movies and TV shows up to date, can be sorted by genre or whatever, and can be driven from the same remote control we use for the TV. Very wife/kid friendly. It also supports AirPlay, so you can push iPad/iOS content to it. The downside is that integration with browser-based streaming services like Netflix still needs work, but it's there.

- Brian

On Mon, Feb 24, 2014 at 3:08 PM, James Maki wrote:
> What is the main advantage of XBMC over, for instance, MPC and PowerDVD? It looks like an interesting program that needs additional investigation on my part. I support movie-only ripping, but my wife and daughter often spend hours watching the extras from some movies. It takes me extra time to try and catalog the extras, whereas using the ISO and the PowerDVD menu structure, it is simple.
>
> Thanks for any input.
>
> Jim
>
> > -Original Message-
> > From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Anthony Q. Martin
> > Sent: Monday, February 24, 2014 7:43 AM
> > To: hardw...@lists.hardwaregroup.com
> > Subject: Re: [H] What are we up to (Was-Are we alive?)
> >
> > Oh...I run XBMC on mine too. I just have to add folders...and you do that once and you're done. That's when you get a nice interface.
> >
> > BTW, I had initially ripped to ISO...then I decided I don't want ISOs...so I'm re-ripping to mkv. That is taking a long time, but I do a few each day. The recent stuff is already mkv...but stuff I ripped two years ago is what I'm working on now. I assume you guys are all using mkv, right?
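For anyone curious what "just add folders" looks like under the hood: XBMC keeps its video sources in a `sources.xml` file in its userdata folder. A minimal sketch is below; the server and share names are made-up examples, not anything from this thread.

```xml
<sources>
  <video>
    <source>
      <name>Movies</name>
      <!-- SMB share on the central storage box; path is hypothetical -->
      <path pathversion="1">smb://HTPC-SERVER/Movies/</path>
    </source>
    <source>
      <name>TV Shows</name>
      <path pathversion="1">smb://HTPC-SERVER/TV/</path>
    </source>
  </video>
</sources>
```

After adding a source (in the GUI or by editing this file), you set its content type to Movies or TV shows so the library scraper pulls metadata; that is what enables the sort-by-genre library view mentioned above.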
Re: [H] What are we up to (Was-Are we alive?)
What is the main advantage of XBMC over, for instance, MPC and PowerDVD? It looks like an interesting program that needs additional investigation on my part. I support movie-only ripping, but my wife and daughter often spend hours watching the extras from some movies. It takes me extra time to try and catalog the extras, whereas using the ISO and the PowerDVD menu structure, it is simple.

Thanks for any input.

Jim

> -Original Message-
> From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Anthony Q. Martin
> Sent: Monday, February 24, 2014 7:43 AM
> To: hardw...@lists.hardwaregroup.com
> Subject: Re: [H] What are we up to (Was-Are we alive?)
>
> Oh...I run XBMC on mine too. I just have to add folders...and you do that once and you're done. That's when you get a nice interface.
>
> BTW, I had initially ripped to ISO...then I decided I don't want ISOs...so I'm re-ripping to mkv. That is taking a long time, but I do a few each day. The recent stuff is already mkv...but stuff I ripped two years ago is what I'm working on now. I assume you guys are all using mkv, right?
>
> On 2/24/2014 9:45 AM, Brian Weeden wrote:
> > Anthony, I'd also add to Jim's comments that once you have one big central drive you can use something like XBMC to have a very nice interface on all your HTPCs in the house and accessibility to all your content.
> >
> > - Brian
> >
> > On Mon, Feb 24, 2014 at 9:42 AM, James Maki wrote:
> > > That's how I started! :) But the desire for ease of use for my family (if it's not in plain sight, they can't find the drive, folder or location of a desired movie or TV show) and it just got "out of control!" A couple of drives here. A Sans Digital tower there. A new HTPC in the family room. Gigabit network hooking the upstairs bedroom to the main computer downstairs. You name it, it got added.
Re: [H] What are we up to (Was-Are we alive?)
Oh...I run XBMC on mine too. I just have to add folders...and you do that once and you're done. That's when you get a nice interface.

BTW, I had initially ripped to ISO...then I decided I don't want ISOs...so I'm re-ripping to mkv. That is taking a long time, but I do a few each day. The recent stuff is already mkv...but stuff I ripped two years ago is what I'm working on now. I assume you guys are all using mkv, right?

On 2/24/2014 9:45 AM, Brian Weeden wrote:
> Anthony, I'd also add to Jim's comments that once you have one big central drive you can use something like XBMC to have a very nice interface on all your HTPCs in the house and accessibility to all your content.
>
> - Brian
>
> On Mon, Feb 24, 2014 at 9:42 AM, James Maki wrote:
> > That's how I started! :) But the desire for ease of use for my family (if it's not in plain sight, they can't find the drive, folder or location of a desired movie or TV show) and it just got "out of control!" A couple of drives here. A Sans Digital tower there. A new HTPC in the family room. Gigabit network hooking the upstairs bedroom to the main computer downstairs. You name it, it got added. I ended up spending lots of time "cataloging," especially when adding drives. The pooling aspect of FlexRAID allows me to have one BIG drive with a folder for Blu-rays, one for DVDs, and another for recorded TV shows. Previously, a desired file might have been on one of 4 computers and any one of the approximately 30 drives. I did compromise awhile back and create 8 and 10 TB JBODs on the Sans Digital towers and internal in the main HTPC. This made it slightly easier to catalog.
> >
> > Of course, all of this ignores the "building computers, etc. is fun" factor of this hobby. :)
> >
> > If nothing else, I have learned lots about SAS (which had intimidated me before), building my own NAS, and a little about server software. Always a fun (if occasionally frustrating) experience.
> >
> > To Brian: I am doing exactly that: one big drive with 3 shared folders.
Re: [H] What are we up to (Was-Are we alive?)
Are you using the SnapRAID function of FlexRAID? If so, how long does it take for updates to the parity drives?

You are not REQUIRED to use RAID with the Sans Digital port expander towers. I was using them as JBOD, or using Windows 7's built-in JBOD function to create a 10 TB "ARRAY." When I started having problems creating the FlexRAID pool, I thought it MIGHT be due to slow access to the Sans Digital tower from a small "pipeline," squeezing 5 drives' output through 1 SATA port. It turned out it was probably some defective drives.

Jim

> -Original Message-
> From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Chris Reeves
>
> I've never combined Sans Digital with FlexRAID. Aren't they creating their own RAID 5 internally?
>
> I currently keep 70 TB online in FlexRAID with no issues; I know I had mentioned it before.
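As an aside on what a snapshot-parity setup looks like: the thread is about FlexRAID's own engine, but the standalone open-source SnapRAID tool works on the same idea, and its config file shows the shape of parity-over-JBOD clearly. A sketch only; drive letters and paths are invented for illustration.

```
# snapraid.conf -- illustrative only; drive letters are made up
parity E:\snapraid.parity
content C:\snapraid\snapraid.content
content D:\snapraid.content

data d1 D:\
data d2 F:\
data d3 G:\

# keep small, frequently rewritten metadata out of parity
exclude *.nfo
exclude *.tbn
exclude Thumbs.db
```

With a layout like this, `snapraid sync` only recomputes parity for files that changed since the last run, which is why nightly updates on a mostly static movie collection should be fast even when the initial build takes days.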
Re: [H] What are we up to (Was-Are we alive?)
I've never combined Sans Digital with FlexRAID. Aren't they creating their own RAID 5 internally?

I currently keep 70 TB online in FlexRAID with no issues; I know I had mentioned it before.

-Original Message-
From: "Brian Weeden"
Sent: 2/23/2014 8:07 AM
To: "hardware"
Cc: "hwg"
Subject: Re: [H] What are we up to (Was-Are we alive?)

Hi Jim. Sorry to hear you're having such troubles, especially since I think I'm the one who introduced FlexRAID to the list. I've been running it on my HTPC for several years now and (knock on wood) it's been running fine. Not sure how big your setup is; I'm running 7 DRUs and 2 PRUs of 2 TB each. I have them mounted as a single pool that is shared on my LAN. I run nightly parity updates. Initializing my setup did take several hours, but my updates don't take very long. Sometimes when I add several ripped HD movies at once it might take a few hours, but that's it.

How much data are you calculating parity for at the initialization? Do you have a lot of little files (like thousands of pictures) or lots of files that change often? Either of those could greatly increase the time it takes to calculate parity.

I'm running it under Win7, and unfortunately I don't have any experience with Server 2011 or any of the Windows Server builds.

From what I've gathered you can only have one pool per system. I think that's a limit of how things work. But I've never needed more than one pool, so it hasn't bothered me.

For hardware, I'm running the following, based largely on an HTPC hardware guide I found online. It's based on a server chipset to maximize the bandwidth to the drives.

Intel Xeon E3-1225
Asus P8B WS LGA 1155 Intel C206
8 GB DDR3 SDRAM
Corsair TX750 V2 750W
2x Intel RAID Controller Card SATA/SAS PCI-E x8
Antec 1200 V3 Case
3x 5-in-1 hot swap HDD cages

Part of the key is the controller cards. I'm not actually using the on-board RAID, just using it for the ports and the bandwidth.
I've got two SAS-to-SATA cables plugged into each card, which gives me a total of 16 SATA ports. The cards are each on an 8x PCIe bus that gives them a lot of bandwidth. The boot drive is an older SSD that is attached to one of the SATA ports on the mobo.

One trick I figured out early on was to initialize your array with the biggest number of DRUs you think you'll eventually have, even if you don't actually have that many drives at the start. That way you can add new DRUs and not have to reinitialize the array.

When I started using FlexRAID it was basically a part-time project being run by Brahim. He's now created a fully-fledged business out of it and has gone way beyond just FlexRAID. Apparently he now has two products. I think the classic FlexRAID system I'm still using has become RAID-F (RAID over filesystem) and he's got a new Transparent RAID product as well: http://www.flexraid.com/faq/

I'm still running 2.0u8 (snapshot 1.4 stable) so I guess at some point I'll need to move over to the commercial version. But for now it's working fine so I don't want to disturb it.

Hope all this helps, and happy to answer any other questions however I can.

- Brian

On Sat, Feb 22, 2014 at 8:00 PM, James Maki wrote:
> I have been beating my head against the wall trying to install FlexRAID on Windows Home Server 2011 since the beginning of the year. I spent the first month trying to install using several Sans Digital port-expanding towers. I kept having errors/crashes when the system tried to calculate parity on the initial install. I thought it might be the slow access to the port-multiplier set-up, but I finally ran scan disk on all the drives and found one parity drive (out of 4) had disk errors that were probably causing the problem. Then, the initial install was taking over 4 days. I found this unacceptable and kept looking for a reason and whether this was typical. I upgraded to a hardware RAID card with multiple SAS ports.
> The Create process was still very slow. The Parity Update took a similar length of time, as did the Validate Parity procedure, and I assume the Verify procedure would take a similar period of time. The suggestion is to run the Update every night, the Validate weekly, and the Verify monthly. With the length of time for an Update, it would be impossible to keep up with this schedule.
>
> The program seems to be in a period of flux, with the developer not sure of its direction. There is no firm documentation, just the wiki, forums, and some how-tos. It is easy to find how to do the general set-up, but I believe most are going with small RAID sizes. Now, my storage needs are more for convenience and video access rather than any important, can't-be-replaced files (for the most part). My business and personal files are saved to a different system. The attraction of FlexRAID is its ability to combine multiple hard drives into a "single pool" for the user. This is what attracted me to the software. Also, remov
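A rough back-of-the-envelope on why a first parity build over ~23 TB takes days: a snapshot-parity Create has to read every data byte at least once, and consumer drives hanging off port multipliers may only sustain on the order of 100 MB/s aggregate. Both figures here are assumptions for illustration, not measurements from anyone's setup.

```python
# Illustrative only: the throughput figure is an assumption, not a measurement.
TB = 10**12  # drive makers' decimal terabytes

def build_days(data_tb: float, throughput_mb_s: float) -> float:
    """Days needed to stream data_tb terabytes at throughput_mb_s MB/s."""
    seconds = data_tb * TB / (throughput_mb_s * 10**6)
    return seconds / 86400

# ~23 TB at an assumed 100 MB/s aggregate -> roughly 2.7 days of pure
# reading, so a 4-day Create is slow but not wildly out of line.
print(round(build_days(23, 100), 1))
```

The same arithmetic shows why incremental updates should be fast: a nightly run that only touches a few new 30 GB rips has a few hundred gigabytes to read, not 23 TB.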
Re: [H] What are we up to (Was-Are we alive?)
Anthony, I'd also add to Jim's comments that once you have one big central drive you can use something like XBMC to have a very nice interface on all your HTPCs in the house and accessibility to all your content.

- Brian

On Mon, Feb 24, 2014 at 9:42 AM, James Maki wrote:
> That's how I started! :) But the desire for ease of use for my family (if it's not in plain sight, they can't find the drive, folder or location of a desired movie or TV show) and it just got "out of control!" A couple of drives here. A Sans Digital tower there. A new HTPC in the family room. Gigabit network hooking the upstairs bedroom to the main computer downstairs. You name it, it got added. I ended up spending lots of time "cataloging," especially when adding drives. The pooling aspect of FlexRAID allows me to have one BIG drive with a folder for Blu-rays, one for DVDs, and another for recorded TV shows. Previously, a desired file might have been on one of 4 computers and any one of the approximately 30 drives. I did compromise awhile back and create 8 and 10 TB JBODs on the Sans Digital towers and internal in the main HTPC. This made it slightly easier to catalog.
>
> Of course, all of this ignores the "building computers, etc. is fun" factor of this hobby. :)
>
> If nothing else, I have learned lots about SAS (which had intimidated me before), building my own NAS, and a little about server software. Always a fun (if occasionally frustrating) experience.
>
> To Brian: I am doing exactly that: one big drive with 3 shared folders. The multiple pool idea was to facilitate doing smaller Updates/Validates that could be done overnight rather than over 3 or 4 days. Once I get the drive set up as desired, I will give the parity backup another try and see whether, once it is set, the periodic updates of a static pool are quick. Thanks for the input and feedback.
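Brian's "multiple shares instead of multiple pools" suggestion is cheap to try: keep one FlexRAID pool mounted as a single drive letter, and export each top-level folder as its own SMB share. A sketch using the standard Windows `net share` command; the drive letter, share names, and permissions are made-up examples.

```
:: Illustrative Windows commands; assumes the pool mounts as V:
:: and that read-only access for everyone is acceptable.
net share Movies=V:\Movies /grant:Everyone,READ
net share TV=V:\TV /grant:Everyone,READ
net share DVDs=V:\DVDs /grant:Everyone,READ
```

Each HTPC then sees three tidy shares while parity and pooling still run once, over the whole pool.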
Re: [H] What are we up to (Was-Are we alive?)
That's how I started! :) But the desire for ease of use for my family (if it's not in plain sight, they can't find the drive, folder or location of a desired movie or TV show) and it just got "out of control!" A couple of drives here. A Sans Digital tower there. A new HTPC in the family room. Gigabit network hooking the upstairs bedroom to the main computer downstairs. You name it, it got added. I ended up spending lots of time "cataloging," especially when adding drives. The pooling aspect of FlexRAID allows me to have one BIG drive with a folder for Blu-rays, one for DVDs, and another for recorded TV shows. Previously, a desired file might have been on one of 4 computers and any one of the approximately 30 drives. I did compromise awhile back and create 8 and 10 TB JBODs on the Sans Digital towers and internal in the main HTPC. This made it slightly easier to catalog.

Of course, all of this ignores the "building computers, etc. is fun" factor of this hobby. :)

If nothing else, I have learned lots about SAS (which had intimidated me before), building my own NAS, and a little about server software. Always a fun (if occasionally frustrating) experience.

To Brian: I am doing exactly that: one big drive with 3 shared folders. The multiple pool idea was to facilitate doing smaller Updates/Validates that could be done overnight rather than over 3 or 4 days. Once I get the drive set up as desired, I will give the parity backup another try and see whether, once it is set, the periodic updates of a static pool are quick. Thanks for the input and feedback.

Jim

> -Original Message-
> From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Anthony Q. Martin
>
> You guys are so sophisticated! I'm just stringing all my drives off a PC with external enclosures (10 drives inside the box, 8 more in two four-bay enclosures). Using 3 and 4 TB drives (greens, mostly, from WD and Seagate).
Re: [H] What are we up to (Was-Are we alive?)
You guys are so sophisticated! I'm just stringing all my drives off a PC with external enclosures (10 drives inside the box, 8 more in two four-bay enclosures). Using 3 and 4 TB drives (greens, mostly, from WD and Seagate). Mine are just NTFS mount volumes, all shared over my GB network. That way, I can just navigate to any drive and any folder to play my rips from my other HTPCs. Easy setup. If a drive goes down, I just re-rip, as I have all the optical discs as backup. Poor man's setup. Lazy man's setup. :) RAID is too complicated for my brain and I don't see my use as super critical. Ripping to mkv is mostly done in the background while working on other stuff.

On 2/24/2014 8:30 AM, Brian Weeden wrote:

Jim, have you thought about setting up multiple shares instead of multiple pools? For example, you could have one big drive pool with all your data but share out any folder on that pool as a separate network share.

- Brian

On Sun, Feb 23, 2014 at 2:46 PM, Brian Weeden wrote:

If you're doing an initialization and building parity for 23 TB of data, I can expect that to take quite a while. The update I'm not so sure about. It should only need to update parity for whatever files were changed. So if the update takes just as long, that indicates maybe all your data changed. But if it's just video files then it shouldn't.

I do know people have talked about exempting things like nfo files and thumbnails from the RAID so the parity process will skip them.

- Brian

On Sun, Feb 23, 2014 at 2:17 PM, James Maki wrote:

Hi Brian,

I switched to FlexRAID to combine a total of 23 2 TB drives spread over 5 Sans Digital port multiplier towers plus extra drives on several PCs used as HTPCs. I have ripped all my Blu-rays, DVDs and recorded TV to the various arrays, and over time it had just gotten too large to easily manage. I wanted to centralize everything on one system.
The system I started with utilized an AMD FM2 motherboard with 8 onboard SATA ports, 2 SAS ports on an add-on card (for a total of 8 additional SATA ports), and 3 of the Sans Digital towers (5 disks each), for a total of 31 drives distributed as 1 OS drive, 4 parity drives and 26 data drives (several were empty). When this continued to fail on creation, I moved the Sans Digital based drives to a 6 port SAS controller card.

When I still had problems, I found that several drives were bad (found via disk scans), including the 1st parity drive. Replacing the drives gave me a successful creation, but it took 4 days. The Update took another 4 days. That's when I started having second thoughts on using the Parity backup option. I guess I was just expecting too much from the software. That's when I thought creating several pools would reduce the strain for each update/validate.

I am using a modestly powered AMD dual core 3.2 GHz processor and mostly consumer drives (mixed with a few WD Reds). I went with Windows Home Server for economy reasons ($50 vs. $90-130 for Windows 7 Home Premium/Professional). I utilized a HighPoint RocketRAID 2760A SAS RAID controller card. I am using RAID over File System 2.0u12, SnapRAID 1.4 Stable and Storage Pool 1.0 Stable (although I am not using the SnapRAID at this point).

Overall, I am happy with the pooling facility of the software. I just wish my large setup would not choke the parity option. Thanks for all the input.

Not sure if there is an answer to my problem. More powerful hardware? Reading the forums seems to indicate that hardware should NOT be the bottleneck. There seems to be the option of Updating/Validating only portions of the RAID each night. More research is needed on that front. My current plan at this point is to fill the RAID in pooling-only mode, make sure all names and organization are correct, then settle on a stable, unchanging file system that I will then commit to the SnapRAID parity option.
That way I will only need to Validate/Verify periodically.

Thanks,

Jim

-Original Message-
From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Brian Weeden
Sent: Sunday, February 23, 2014 6:06 AM
To: hardware
Cc: hwg
Subject: Re: [H] What are we up to (Was-Are we alive?)

Hi Jim. Sorry to hear you're having such troubles, especially since I think I'm the one who introduced FlexRAID to the list.

I've been running it on my HTPC for several years now and (knock on wood) it's been running fine. Not sure how big your setup is; I'm running 7 DRUs and 2 PRUs of 2 TB each. I have them mounted as a single pool that is shared on my LAN. I run nightly parity updates.

Initializing my setup did take several hours, but my updates don't take very long. Sometimes when I add several ripped HD movies at once it might take a few hours, but that's it. How much data are you calculating parity for at the initialization? Do you have a lot of little files (like thousands of pictures) or lots of files that change often? Either of those could greatly increase the time it takes to calculate parity.
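The snapshot parity Brian and Jim are discussing boils down to a byte-wise XOR across drives. A toy sketch of the idea (illustrative Python only, not FlexRAID's actual implementation; real tools work block-by-block across whole disks):

```python
# Toy single-parity scheme: parity is the byte-wise XOR of the same
# block from every data drive. Any ONE lost drive is recoverable,
# because XOR-ing parity with the surviving blocks cancels them out.
# Illustrative only -- not how FlexRAID is actually implemented.

def xor_blocks(blocks):
    """Byte-wise XOR of a list of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild_lost_block(surviving, parity):
    """Recover the single missing block from parity plus survivors."""
    return xor_blocks(surviving + [parity])

# Three "drives" holding equal-sized blocks, plus one parity block:
d1, d2, d3 = b"MOVIE", b"SHOWS", b"DISC1"
parity = xor_blocks([d1, d2, d3])

# Drive 2 dies; rebuild its contents from the rest:
assert rebuild_lost_block([d1, d3], parity) == d2
```

This is also why an update only needs to re-XOR the blocks of files that actually changed, which is Brian's point about nightly updates normally being far faster than the initial multi-day build.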
Re: [H] What are we up to (Was-Are we alive?)
Jim, have you thought about setting up multiple shares instead of multiple pools? For example, you could have one big drive pool with all your data but share out any folder on that pool as a separate network share.

- Brian

On Sun, Feb 23, 2014 at 2:46 PM, Brian Weeden wrote:
> If you're doing an initialization and building parity for 23 TB of data, I would expect that to take quite a while. The update I'm not so sure about. It should only need to update parity for whatever files were changed. So if the update takes just as long, that indicates maybe all your data changed. But if it's just video files then it shouldn't.
>
> I do know people have talked about exempting things like nfo files and thumbnails from the RAID so the parity process will skip them.
>
> - Brian
>
> On Sun, Feb 23, 2014 at 2:17 PM, James Maki wrote:
>> Hi Brian,
>>
>> I switched to FlexRAID to combine a total of 23 2 TB drives spread over 5 Sans Digital port multiplier towers plus extra drives on several PCs used as HTPCs. I have ripped all my Blu-rays, DVDs and recorded TV to the various arrays, and over time it had just gotten too large to easily manage. I wanted to centralize everything on one system. The system I started with utilized an AMD FM2 motherboard with 8 onboard SATA ports, 2 SAS ports on an add-on card (for a total of 8 additional SATA ports), and 3 of the Sans Digital towers (5 disks each), for a total of 31 drives distributed as 1 OS drive, 4 parity drives and 26 data drives (several were empty). When this continued to fail on creation, I moved the Sans Digital based drives to a 6 port SAS controller card.
>>
>> When I still had problems, I found that several drives were bad (found via disk scans), including the 1st parity drive. Replacing the drives gave me a successful creation, but it took 4 days. The Update took another 4 days. That's when I started having second thoughts on using the Parity backup option.
>> I guess I was just expecting too much from the software. That's when I thought creating several pools would reduce the strain for each update/validate.
>>
>> I am using a modestly powered AMD dual core 3.2 GHz processor and mostly consumer drives (mixed with a few WD Reds). I went with Windows Home Server for economy reasons ($50 vs. $90-130 for Windows 7 Home Premium/Professional). I utilized a HighPoint RocketRAID 2760A SAS RAID controller card. I am using RAID over File System 2.0u12, SnapRAID 1.4 Stable and Storage Pool 1.0 Stable (although I am not using the SnapRAID at this point).
>>
>> Overall, I am happy with the pooling facility of the software. I just wish my large setup would not choke the parity option. Thanks for all the input.
>>
>> Not sure if there is an answer to my problem. More powerful hardware? Reading the forums seems to indicate that hardware should NOT be the bottleneck. There seems to be the option of Updating/Validating only portions of the RAID each night. More research is needed on that front. My current plan at this point is to fill the RAID in pooling-only mode, make sure all names and organization are correct, then settle on a stable, unchanging file system that I will then commit to the SnapRAID parity option. That way I will only need to Validate/Verify periodically.
>>
>> Thanks,
>>
>> Jim
>>
>>> -Original Message-
>>> From: hardware-boun...@lists.hardwaregroup.com [mailto:hardware-boun...@lists.hardwaregroup.com] On Behalf Of Brian Weeden
>>> Sent: Sunday, February 23, 2014 6:06 AM
>>> To: hardware
>>> Cc: hwg
>>> Subject: Re: [H] What are we up to (Was-Are we alive?)
>>>
>>> Hi Jim. Sorry to hear you're having such troubles, especially since I think I'm the one who introduced FlexRAID to the list.
>>>
>>> I've been running it on my HTPC for several years now and (knock on wood) it's been running fine.
>>> Not sure how big your setup is; I'm running 7 DRUs and 2 PRUs of 2 TB each. I have them mounted as a single pool that is shared on my LAN. I run nightly parity updates.
>>>
>>> Initializing my setup did take several hours, but my updates don't take very long. Sometimes when I add several ripped HD movies at once it might take a few hours, but that's it. How much data are you calculating parity for at the initialization? Do you have a lot of little files (like thousands of pictures) or lots of files that change often? Either of those could greatly increase the time it takes to calculate parity.
>>>
>>> I'm running it under Win7, and unfortunately I don't have any experience with Server 2011 or any of the Windows Server builds.
>>>
>>> From what I've gathered you can only have one pool per system. I think that's a limit of how things work. But I've never needed more than one pool, so it hasn't bothered me.
>>>
>>> For har
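Brian's suggestion about exempting nfo files and thumbnails from the parity run can be sketched generically. A minimal sketch, assuming typical XBMC metadata file names; the extension list and helper names below are hypothetical illustrations, not FlexRAID's actual exclude syntax:

```python
import os

# Hypothetical pre-filter for a parity pass: skip small, frequently
# rewritten metadata files (XBMC .nfo files, thumbnails) so they do
# not force parity recalculation every night. The extension and name
# lists are assumptions, not FlexRAID's real exclude configuration.
SKIP_EXTENSIONS = {".nfo", ".tbn"}
SKIP_NAMES = {"Thumbs.db", "fanart.jpg", "poster.jpg"}

def should_skip(path):
    """True for metadata files that churn but are easy to regenerate."""
    name = os.path.basename(path)
    ext = os.path.splitext(name)[1].lower()
    return name in SKIP_NAMES or ext in SKIP_EXTENSIONS

def files_for_parity(paths):
    """Keep only the files worth protecting with parity."""
    return [p for p in paths if not should_skip(p)]

movies = ["Alien/Alien.mkv", "Alien/Alien.nfo", "Alien/Thumbs.db"]
assert files_for_parity(movies) == ["Alien/Alien.mkv"]
```

The payoff is exactly what Brian describes: with the churn-prone metadata excluded, a nightly update only touches parity for genuinely new or changed video files.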