Re: ZFS and power management

2020-01-05 Thread Karl Denninger
On 1/5/2020 16:10, Peter wrote:
> On Wed, 18 Dec 2019 17:22:16 +0100, Karl Denninger wrote:
>
>> I'm curious if anyone has come up with a way to do this...
>>
>> I have a system here that has two pools -- one comprised of SSD disks
>> that are the "most commonly used" things including user home directories
>> and mailboxes, and another that is comprised of very large things that
>> are far less-commonly used (e.g. video data files, media, build
>> environments for various devices, etc.)
>
> I've been using such a configuration for more than 10 years and haven't
> run into the problems you describe.
> Disks are powered down with gstopd or other means, and they stay
> powered down until filesystems in the pool are actively accessed.
> One difficulty for me was that postgres autovacuum must be completely
> disabled if there are tablespaces on the quiesced pools. Another thing
> that comes to mind is smartctl in daemon mode (but I have never used that).
> There are probably a whole bunch more of potential culprits, so I
> suggest you work through all the housekeeping stuff (daemons, cron jobs,
> etc.) to find it.

I found a number of things that were actively touching the pool and
managed to kill them off, and now it is behaving.  I'm using "camcontrol
idle -t 240 da{xxx}", which, interestingly enough, appears NOT to survive
a reboot but otherwise does what's expected.
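
Since the timer doesn't survive a reboot, I'll probably just re-apply it
from /etc/rc.local at boot.  A sketch of the sort of thing I have in mind
(the da2-da4 device names are placeholders for whatever disks actually
make up the bulk pool):

    # /etc/rc.local -- re-apply the idle (spin-down) timer after every boot.
    # Device names are placeholders; list the members of the bulk pool here.
    for d in da2 da3 da4; do
        camcontrol idle -t 240 "$d"
    done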

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Re: ZFS and power management

2020-01-05 Thread Peter
On Wed, 18 Dec 2019 17:22:16 +0100, Karl Denninger wrote:

> I'm curious if anyone has come up with a way to do this...
>
> I have a system here that has two pools -- one comprised of SSD disks
> that are the "most commonly used" things including user home directories
> and mailboxes, and another that is comprised of very large things that
> are far less-commonly used (e.g. video data files, media, build
> environments for various devices, etc.)


I've been using such a configuration for more than 10 years and haven't
run into the problems you describe.
Disks are powered down with gstopd or other means, and they stay powered
down until filesystems in the pool are actively accessed.
One difficulty for me was that postgres autovacuum must be completely
disabled if there are tablespaces on the quiesced pools. Another thing
that comes to mind is smartctl in daemon mode (but I have never used
that; see the note below).
There are probably a whole bunch more of potential culprits, so I suggest
you work through all the housekeeping stuff (daemons, cron jobs, etc.) to
find it.
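
As for smartd specifically: I have never used it myself, but I believe it
can be told not to spin a disk up just to poll SMART.  A sketch of the
kind of smartd.conf line I mean (/dev/da4 is only a placeholder device):

    # /usr/local/etc/smartd.conf -- skip the periodic check while the
    # disk is in standby instead of waking it up (placeholder device).
    /dev/da4 -a -n standby,q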



Re: ZFS and power management

2019-12-18 Thread Alan Somers
On Wed, Dec 18, 2019 at 9:22 AM Karl Denninger  wrote:

> I'm curious if anyone has come up with a way to do this...
>
> I have a system here that has two pools -- one comprised of SSD disks
> that are the "most commonly used" things including user home directories
> and mailboxes, and another that is comprised of very large things that
> are far less-commonly used (e.g. video data files, media, build
> environments for various devices, etc.)
>
> The second pool has perhaps two dozen filesystems that are mounted but,
> again, rarely accessed.  However, despite them being rarely accessed, ZFS
> appears to perform various maintenance and checkpoint functions on a
> nearly-continuous basis, because there's a low, but not zero, level of
> I/O traffic to and from them.  Thus if I set power control (e.g. spin
> down after 5 minutes of inactivity) the disks never actually spin down.
> I could simply export the pool, but I greatly prefer not to do that,
> because some of the data on that pool (e.g. backups from PCs) ought to
> "just work" if a user wants to get to it.
>
> Well, one disk is no big deal.  A rack full of them is another matter.
> I could materially cut the power consumption of this box down (likely by
> a third or more) if those disks were spun down during 95% of the time
> the box is up, but with the "standard" way ZFS does things that doesn't
> appear to be possible.
>
> Has anyone taken a crack at changing the paradigm (e.g. using the
> automounter, perhaps?) to get around this?
>
> --
> Karl Denninger
> k...@denninger.net 
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/
>

I have, and I found that it wasn't actually ZFS's fault.  By itself, ZFS
wasn't initiating any background I/O whatsoever.  I used a combination of
fstat and dtrace to track down the culprit processes (rough sketch below).
Once I had shut down, patched, or reconfigured each of those processes,
the disks stayed idle indefinitely.  You might have success using the same
strategy.  I
suspect that the automounter wouldn't help you, because any access that
ought to "just work" for a normal user would likewise "just work" for
whatever background process is hitting your disks right now.
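
If it helps, the sort of thing I used looks roughly like this (a sketch;
/bulk is a placeholder mount point for the rarely-used pool):

    # Who has files open on the quiet pool right now?
    fstat -f /bulk

    # Which processes are opening paths on it?  (This covers plain open();
    # openat() would need arg1 instead of arg0.)
    dtrace -n 'syscall::open:entry { printf("%s %s", execname, copyinstr(arg0)); }' | grep /bulk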
-Alan


ZFS and power management

2019-12-18 Thread Karl Denninger
I'm curious if anyone has come up with a way to do this...

I have a system here that has two pools -- one comprised of SSD disks
that are the "most commonly used" things including user home directories
and mailboxes, and another that is comprised of very large things that
are far less-commonly used (e.g. video data files, media, build
environments for various devices, etc.)

The second pool has perhaps two dozen filesystems that are mounted but,
again, rarely accessed.  However, despite them being rarely accessed, ZFS
appears to perform various maintenance and checkpoint functions on a
nearly-continuous basis, because there's a low, but not zero, level of
I/O traffic to and from them.  Thus if I set power control (e.g. spin
down after 5 minutes of inactivity) the disks never actually spin down.
I could simply export the pool, but I greatly prefer not to do that,
because some of the data on that pool (e.g. backups from PCs) ought to
"just work" if a user wants to get to it.
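
(For what it's worth, the low-level traffic is easy to see with the stock
tools; "tank2" and the device pattern below are placeholders for the
actual pool and disks:)

    # Per-vdev I/O on the bulk pool, sampled every 60 seconds
    zpool iostat -v tank2 60

    # Per-provider GEOM statistics, filtered to the spinning disks
    gstat -f 'da[4-9]'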

Well, one disk is no big deal.  A rack full of them is another matter. 
I could materially cut the power consumption of this box down (likely by
a third or more) if those disks were spun down during 95% of the time
the box is up, but with the "standard" way ZFS does things that doesn't
appear to be possible.

Has anyone taken a crack at changing the paradigm (e.g. using the
automounter, perhaps?) to get around this?

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/

