d6jg wrote: 
> In a LMS on one machine data on another via NFS/CIFS scenario the
> transcoding (if required) is always going to be done on the LMS machine.
> If the OS and software are on one drive and the data is on another in
> the same machine then any transcoding will be done on the OS/LMS drive
> so I really don’t understand your point.
> Surely transcoding is precisely the sort of task where an SSD is a good
> idea.
> 
> Streaming is a low level network activity and you would be very hard
> pressed to choke a gigabit Ethernet connection by reading a music file
> from one machine to another and then streaming it out again.
> 
> People successfully run Pi based LMS with data on NAS. Personally I
> don’t because my experience of running this way was that the Pi’s
> network card was its weakest point so I moved to an HP54 with single SSD
> and all files on NAS but I regularly build new Pi’s and connect them to
> the NAS. If every time I wanted to build a new LMS server I had to build
> new Raid arrays on attached USB enclosure (also a Pi weak spot as the
> LAN port shares the same bus) then it would be a nightmare.

You misunderstood my point about transcoding over a network connection. 
I was not talking about on-the-fly stream transcoding through LMS.  I
was speaking of batch file transcoding on the server machine using
dBpoweramp or a similar application, with the NAS over Ethernet as the
host volume.  In that environment, processing will often stall and
plummet in speed.  Once you move up to video files, processing can slow
to a crawl or fail outright.  File transcoding, container conversions,
and metadata revisions are far more common in today's larger home A/V
libraries than they were ten or fifteen years ago.  These tasks are
performed much more efficiently over a wide local bus, and a local
Thunderbolt or USB 3 target volume provides that luxury while avoiding
write-cycle wear on an SSD OS drive (which is much more perishable in
that environment).
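To put rough numbers on the local-bus point, here is a back-of-envelope
sketch.  All figures are assumptions for illustration (effective
throughput varies widely with hardware and protocol), not measurements:

```python
# Rough, illustrative throughput comparison (all rates are assumptions):
# moving a large video file for transcoding over gigabit Ethernet
# vs. over a local USB 3.0 bus.

GIGABIT_MB_S = 110   # ~110 MB/s effective gigabit after overhead (assumed)
USB3_MB_S = 400      # ~400 MB/s for a decent USB 3.0 external drive (assumed)
FILE_GB = 40         # e.g. a large remuxed Blu-ray (assumed size)

file_mb = FILE_GB * 1024

# Transcoding reads the source and writes the result, so the data path
# carries roughly double the file size in each case.
network_min = (file_mb * 2) / GIGABIT_MB_S / 60
local_min = (file_mb * 2) / USB3_MB_S / 60

print(f"Over the NAS link: ~{network_min:.1f} min of wire time")
print(f"Over local USB 3:  ~{local_min:.1f} min of bus time")
```

With these assumed rates the local bus is roughly 3-4x faster on raw
transfer alone, before any of the stalling the NAS path can add.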

I stand by my opinion that a remotely networked working data volume can
effectively double the network traffic through the server machine's
interface, and network health and traffic buffering then become more
critical to server performance than with a local data volume.  As the
number of clients grows, that added traffic grows proportionally.  For
most smaller client groups it should not pose much of an issue, provided
the aggregate traffic does not saturate the effective network bandwidth.
In a very busy environment, however, QoS rules and router configuration
best practices can become critical.
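A quick sketch of the "doubled traffic" arithmetic.  The bitrate figure
is an assumed average for 16/44 FLAC, and the function name is mine, not
anything from LMS:

```python
# Back-of-envelope aggregate traffic estimate (rates are assumptions):
# with the library on a NAS, each stream crosses the server's NIC twice,
# once inbound from the NAS and once outbound to the player.

FLAC_MBPS = 1.0    # ~1 Mbps average for 16/44 FLAC (assumed)
LINK_MBPS = 1000   # nominal gigabit Ethernet

def server_nic_load_mbps(clients, local_data=False):
    """Approximate traffic through the server interface, in Mbps."""
    multiplier = 1 if local_data else 2   # NAS-hosted data doubles each stream
    return clients * FLAC_MBPS * multiplier

for n in (5, 50, 250):
    nas = server_nic_load_mbps(n)
    local = server_nic_load_mbps(n, local_data=True)
    print(f"{n:3d} clients: NAS data ~{nas:.0f} Mbps, "
          f"local data ~{local:.0f} Mbps of {LINK_MBPS} Mbps")
```

Even at 250 clients the doubled audio traffic is well under a gigabit
link, which is why this mostly matters on busy or already-loaded
networks rather than in typical homes.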


But again, maintaining multiple networked data arrays is too expensive
for most home environments, so I completely understand the widespread
practice of using a NAS as a working data volume target.

The OP's local six-drive data stack is somewhat excessive even for me,
however, and juggling multiple bare drives in an ad hoc rotation for
backups would drive me crazy.  That is why I think a more robust
prosumer- or enterprise-grade four-drive hardware RAID solution is a
better option for a local data target in that scenario.  A well-designed
stack will perform automatic file checking and parity checks and
corrections as part of regular RAID scrubbing (and may mitigate some of
the hash errors the OP is describing with a less sophisticated stack
machine).  For anyone who does not want to constantly juggle bare drives
in rotation, a networked and/or cloud backup solution is a more
automated alternative.
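For what an automated one-way backup pass amounts to, here is a minimal
sketch.  The paths are hypothetical, and in practice rsync or a NAS's
own replication tools would be the usual choice; this just shows the
copy-if-new-or-changed logic such tools automate:

```python
# Minimal one-way mirror sketch: copy files that are new or whose
# size/mtime changed; never delete anything on the backup side.

import os
import shutil

def mirror(src: str, dst: str) -> int:
    """Copy new/changed files from src to dst; return how many were copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            if (not os.path.exists(d)
                    or os.stat(d).st_size != s_stat.st_size
                    or os.stat(d).st_mtime < s_stat.st_mtime):
                shutil.copy2(s, d)  # copy2 preserves mtime for the next pass
                copied += 1
    return copied

# Example with hypothetical paths:
# mirror("/music", "/backup/music")
```

Run from cron or Task Scheduler, a pass like this gives the "continuous
active backup" behavior without any drive juggling, though it is not
fault-tolerant the way RAID is.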

Backup strategies are in the eye of the beholder, and depend on how one
prioritizes and values the particular data.  Some treat this type of
data as static and consider off-line archive drive backups sufficient.
Others, like me, treat it as dynamic and deserving of continuous,
fault-tolerant active backup systems.  Each has to make their own
decisions based on how they manage and value their data.


------------------------------------------------------------------------
sgmlaw's Profile: http://forums.slimdevices.com/member.php?userid=13995
View this thread: http://forums.slimdevices.com/showthread.php?t=109946

_______________________________________________
discuss mailing list
discuss@lists.slimdevices.com
http://lists.slimdevices.com/mailman/listinfo/discuss