> > I'd rather educate other developers that this may happen. dmesg
> > timestamps should already make it easy to see.
> >
> > And actually... if you do "time sync" in userspace just before
> > programing the RTC and suspending, this whole issue should go away.
>
> I agree w/ you on both comments basically.
That said, when it comes to dmesg: readers would have to guess from the current implementation that the two lines (pr_info and pm_pr_dbg) are controlled by compilation flags as well as the printk log level. I think the information is enough, but it is not guaranteed to be present for this subject.

Another reason: months ago I worked w/ my community to illustrate this oddity. Adding a 'sync' policy in the userspace script [1] mitigated the longer sync (issued by the kernel) during suspend. However, I realized rare cases remain, because the userspace sync happens before the process freeze; the script is potentially competing w/ other high-load tasks, which means there is still a small window (sync -> program alarm -> suspend until freeze) that can produce the same oddity.

Short recap: this topic tries to provide a clear indication, as a simple mechanism, for platform and OS developers who care about suspend time under some sort of time constraint. Given a clear metric, developers can more easily triage such a hard-to-reproduce issue toward virtual memory/filesystem, rather than examining whether each PM sub-state, along w/ its device callbacks, costs longer by walking through a long suspend log.

Lastly, I understand this data might not be so interesting to kernel developers; my role sits in between, trying to bridge kernel and OS developers, and I fully respect the reviewers' comments and justification.

Sincerely,
Harry

[1] https://chromium-review.googlesource.com/c/chromiumos/platform2/+/458560/14/power_manager/tools/suspend_stress_test#202
(Apologies for the long URL and context as reference)
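P.S. For readers following along, the userspace mitigation discussed above can be sketched roughly as below. This is a hedged sketch, not the actual suspend_stress_test script; it assumes rtcwake from util-linux is available, and the rtcwake call is left commented out because it really suspends the machine.

```shell
#!/bin/sh
# Sketch of the mitigation: flush dirty pages in userspace *before*
# programming the RTC alarm and suspending, so the kernel's own sync
# during suspend has little left to do.

start=$(date +%s%N)
sync                              # flush dirty page cache to storage first
end=$(date +%s%N)
echo "userspace sync took $(( (end - start) / 1000000 )) ms"

# rtcwake -m mem -s 30            # program the RTC wake alarm, then suspend to RAM
# NOTE: even with this ordering, a small window remains between the
# userspace sync and the in-kernel process freeze in which other
# high-load tasks can dirty new pages, so the kernel-side sync can
# still occasionally take long.
```

The timing printout is the kind of simple metric the paragraph above argues for: if the userspace sync is already long, the triage points at virtual memory/filesystem load rather than at PM sub-states and device callbacks.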