Acked-by: Arnd Bergmann
Signed-off-by: Brad Campbell
Signed-off-by: Henrik Rydberg
---
Changelog :
v1 : Initial attempt
v2 : Address logic and coding style based on comments received
v3 : Removed some debug hangover. Added tested-by. Modifications for
MacBookAir1,1
- Significant rework of wait logic b
On 12/11/20 7:05 am, Henrik Rydberg wrote:
> On 2020-11-11 14:06, Brad Campbell wrote:
>> Commit fff2d0f701e6 ("hwmon: (applesmc) avoid overlong udelay()")
>> introduced an issue whereby communication with the SMC became
>> unreliable with write errors like
Tested-by: Andreas Kemnade # MacBookAir6,2
Acked-by: Arnd Bergmann
Signed-off-by: Brad Campbell
Signed-off-by: Henrik Rydberg
---
Changelog :
v1 : Initial attempt
v2 : Address logic and coding style
v3 : Removed some debug hangover. Added tested-by. Modifications for
MacBookAir1,1
v4 : Re-factored logic based on Apple driver. Simplified wait_stat
On 11/11/20 4:56 pm, Guenter Roeck wrote:
> On 11/10/20 7:38 PM, Brad Campbell wrote:
>> Commit fff2d0f701e6 ("hwmon: (applesmc) avoid overlong udelay()")
>> introduced an issue whereby communication with the SMC became
>> unreliable with write errors like:
Length and error consolidation suggested by Henrik Rydberg
Signed-off-by: Brad Campbell
Index: linux-stable/drivers/hwmon/applesmc.c
===================================================================
--- linux-stable.orig/drivers/hwmon/applesmc.c
+++ linux-stable/drivers/hwmon/applesmc.c
Acked-by: Arnd Bergmann
Signed-off-by: Brad Campbell
Signed-off-by: Henrik Rydberg
---
Changelog :
v1 : Initial attempt
v2 : Address logic and coding style
v3 : Removed some debug hangover. Added tested-by. Modifications for
MacBookAir1,1
v4 : Re-factored logic based on Apple driver. Simplified wait_stat
G'day All,
Versions 1-3 of this patch were various attempts to simplify and clarify
the communication to the SMC in order to remove the timing sensitivity which
was exposed by Commit fff2d0f701e6 ("hwmon: (applesmc) avoid overlong
udelay()"). As with the original author(s), we were limited
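The timing sensitivity under discussion comes down to how long and how often the driver polls the SMC status port. A minimal userspace sketch of the bounded, doubling-backoff wait that replaced the overlong udelay() might look like the following; the mocked read_status() and the exact delay bounds are assumptions for illustration, not the driver's actual constants:

```c
#include <stdint.h>

/* Mock of the SMC status register read.  In the real driver this is an
 * inb() from the SMC's ACPI I/O port; here the device reports busy for
 * a few polls and then becomes ready, so the loop runs in userspace. */
int polls_until_ready = 3;

uint8_t read_status(void)
{
	if (polls_until_ready > 0) {
		polls_until_ready--;
		return 0x00;		/* still busy */
	}
	return 0x01;			/* data ready */
}

/* Poll the status register with progressively longer sleeps instead of
 * one overlong udelay(), and give up after a fixed total budget. */
int wait_status(uint8_t val, uint8_t mask)
{
	unsigned int us;

	for (us = 16; us < 0x20000; us <<= 1) {
		if ((read_status() & mask) == val)
			return 0;
		/* in-kernel this would be usleep_range(us, us * 2); */
	}
	return -1;			/* timed out */
}
```

Doubling the sleep keeps latency low when the SMC responds quickly, while still bounding the worst-case wait for a slow controller.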
On 10/11/20 3:55 pm, Guenter Roeck wrote:
> On Tue, Nov 10, 2020 at 01:04:04PM +1100, Brad Campbell wrote:
>> On 9/11/20 3:06 am, Guenter Roeck wrote:
>>> On 11/8/20 2:14 AM, Henrik Rydberg wrote:
>>>> On Sun, Nov 08, 2020 at 09:35:28AM +0100, Henrik Rydberg wrote:
On 9/11/20 3:06 am, Guenter Roeck wrote:
> On 11/8/20 2:14 AM, Henrik Rydberg wrote:
>> On Sun, Nov 08, 2020 at 09:35:28AM +0100, Henrik Rydberg wrote:
>>> Hi Brad,
>>>
>>> On 2020-11-08 02:00, Brad Campbell wrote:
>>>> G'day Henrik,
>>&
On 10/11/20 4:08 am, Henrik Rydberg wrote:
> Hi Brad,
>
>> Out of morbid curiosity I grabbed an older MacOS AppleSMC.kext (10.7) and
>> ran it through the disassembler.
>>
>> Every read/write to the SMC starts the same way with a check to make sure
>> the SMC is in a sane state. If it's not, a r
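Following that disassembly observation, the pre-command sanity check could be sketched as below. Everything here (names, the drain limit, the mock accessors) is a guess at the kext's logic from the description above, not code lifted from it:

```c
#include <stdint.h>

#define STATUS_AWAITING_DATA 0x01	/* stale byte waiting to be read */
#define STATUS_IB_CLOSED     0x02	/* previous write still pending */

/* Hypothetical register accessors; in a driver these would wrap
 * inb()/outb() on the SMC data and command ports. */
uint8_t smc_status_value = 0;
uint8_t smc_read_status(void) { return smc_status_value; }
void smc_read_data(void) { smc_status_value &= ~STATUS_AWAITING_DATA; }

/* Before starting a new command, make sure the SMC is idle: drain any
 * stale response bytes, and refuse to proceed if a write is still open. */
int smc_sane(void)
{
	int tries;

	for (tries = 0; tries < 16; tries++) {
		uint8_t st = smc_read_status();

		if (st & STATUS_AWAITING_DATA) {
			smc_read_data();	/* discard leftover byte */
			continue;
		}
		return (st & STATUS_IB_CLOSED) ? -1 : 0;
	}
	return -1;	/* could not drain; caller would reset the SMC */
}
```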
On 8/11/20 11:04 pm, Henrik Rydberg wrote:
> On 2020-11-08 12:57, Brad Campbell wrote:
>> On 8/11/20 9:14 pm, Henrik Rydberg wrote:
>>> On Sun, Nov 08, 2020 at 09:35:28AM +0100, Henrik Rydberg wrote:
>>>> Hi Brad,
>>>>
>>>> On 202
On 9/11/20 7:44 pm, Andreas Kemnade wrote:
> On Sun, 8 Nov 2020 11:14:29 +0100
> Henrik Rydberg wrote:
>
>> On Sun, Nov 08, 2020 at 09:35:28AM +0100, Henrik Rydberg wrote:
>>> Hi Brad,
>>>
>>> On 2020-11-08 02:00, Brad Campbell wrote:
>>>>
On 8/11/20 9:14 pm, Henrik Rydberg wrote:
> On Sun, Nov 08, 2020 at 09:35:28AM +0100, Henrik Rydberg wrote:
>> Hi Brad,
>>
>> On 2020-11-08 02:00, Brad Campbell wrote:
>>> G'day Henrik,
>>>
>>> I noticed you'd also loosened up the re
nges previously committed.
Tested on : MacbookAir6,2 MacBookPro11,1 iMac12,2
Fixes: fff2d0f701e6 ("hwmon: (applesmc) avoid overlong udelay()")
Reported-by: Andreas Kemnade
Tested-by: Andreas Kemnade # MacBookAir6,2
Acked-by: Arnd Bergmann
Signed-off-by: Brad Campbell
Signed-off-by: Henrik Rydberg
On 8/11/20 5:31 am, Henrik Rydberg wrote:
> On 2020-11-06 21:02, Henrik Rydberg wrote:
>>> So as it stands, it does not work at all. I will continue to check another
>>> machine, and see if I can get something working.
>>
>> On the MacBookAir3,1 the situation is somewhat better.
>>
>> The first th
On 7/11/20 3:26 am, Henrik Rydberg wrote:
>>> I can't guarantee it won't break older machines which is why I've asked for
>>> help testing it. I only have a MacbookPro 11,1 and an iMac 12,2. It fixes
>>> both of those.
>>>
>>> Help testing would be much appreciated.
>>
>> I see, this makes much m
On 6/11/20 3:12 am, Guenter Roeck wrote:
> On 11/4/20 11:26 PM, Brad Campbell wrote:
>> Commit fff2d0f701e6 ("hwmon: (applesmc) avoid overlong udelay()") introduced
>> an issue whereby communication with the SMC became unreliable with write
>> errors like :
>>
On 5/11/20 6:56 pm, Henrik Rydberg wrote:
> Hi Brad,
>
> Great to see this effort, it is certainly an area which could be improved.
> After having seen several generations of Macbooks while modifying much of
> that code, it became clear that the SMC communication got refreshed a few
> times ove
nd restore function with the changes previously committed.
v2 : Address logic and coding style
Reported-by: Andreas Kemnade
Fixes: fff2d0f701e6 ("hwmon: (applesmc) avoid overlong udelay()")
Signed-off-by: Brad Campbell
---
diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.
nd restore function with the changes previously committed.
Reported-by: Andreas Kemnade
Signed-off-by: Brad Campbell
---
diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
index a18887990f4a..22cc5122ce9a 100644
--- a/drivers/hwmon/applesmc.c
+++ b/drivers/hwmon/applesmc.c
@@ -4
On 5/11/20 3:43 pm, Guenter Roeck wrote:
On 11/4/20 6:18 PM, Brad Campbell wrote:
On 5/11/20 12:20 am, Andreas Kemnade wrote:
On Tue, 3 Nov 2020 16:56:32 +1100
Brad Campbell wrote:
If anyone with a Mac having a conventional SMC and seeing issues on 5.9 could test this it'd be
apprec
On 5/11/20 1:18 pm, Brad Campbell wrote:
I'm not entirely sure where to go from here. I'd really like some wider testing
before cleaning this up and submitting it. It puts extra checks & constraints
on the comms with the SMC that weren't there previously.
I guess given ther
On 5/11/20 12:20 am, Andreas Kemnade wrote:
On Tue, 3 Nov 2020 16:56:32 +1100
Brad Campbell wrote:
If anyone with a Mac having a conventional SMC and seeing issues on 5.9 could test this it'd be
appreciated. I'm not saying this code is "correct", but it "works for
On 3/11/20 10:56 am, Brad Campbell wrote:
I've examined the code in VirtualSMC and I'm not convinced we were not waiting
on the wrong bits.
#define SMC_STATUS_AWAITING_DATA BIT0 ///< Ready to read data.
#define SMC_STATUS_IB_CLOSED BIT1 ///< A write is pending.
#define SM
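Reading the bits that way, the two waits the driver needs are different: writes should wait for IB_CLOSED to clear, reads for AWAITING_DATA to be set. As a sketch (the BIT() macro and helper names are illustrative, not the VirtualSMC source):

```c
#include <stdint.h>

#define BIT(n)				(1u << (n))
#define SMC_STATUS_AWAITING_DATA	BIT(0)	/* ready to read data */
#define SMC_STATUS_IB_CLOSED		BIT(1)	/* a write is pending */

/* Given a raw status byte, decide whether each direction may proceed. */
int smc_can_write(uint8_t status)
{
	return !(status & SMC_STATUS_IB_CLOSED);
}

int smc_can_read(uint8_t status)
{
	return !!(status & SMC_STATUS_AWAITING_DATA);
}
```

If the original code polled the same bit for both directions, that alone would explain write errors appearing only under certain timings.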
On 6/10/20 6:02 pm, Andreas Kemnade wrote:
On Thu, 1 Oct 2020 21:07:51 -0700
Guenter Roeck wrote:
On 10/1/20 3:22 PM, Andreas Kemnade wrote:
On Wed, 30 Sep 2020 22:00:09 +0200
Arnd Bergmann wrote:
On Wed, Sep 30, 2020 at 6:44 PM Guenter Roeck wrote:
On Wed, Sep 30, 2020 at 10:54:42AM
G'day Sean,
With the addition of this patch on a vanilla v5.7 :
Tested-by: Brad Campbell
On 8/6/20 12:34 am, Sean Christopherson wrote:
On Sat, Jun 06, 2020 at 05:08:38AM +0800, kernel test robot wrote:
arch/x86/kernel/cpu/centaur.c: In function 'init_centaur':
arch
On 25/5/20 7:46 pm, Maxim Levitsky wrote:
On Sun, 2020-05-24 at 18:43 +0800, Brad Campbell wrote:
On 24/5/20 12:50 pm, Brad Campbell wrote:
G'day all.
Machine is a Macbook Pro Retina ~ 2014. Kernels are always vanilla kernel and
compiled on the machine. No additional patches.
vend
On 24/5/20 12:50 pm, Brad Campbell wrote:
G'day all.
Machine is a Macbook Pro Retina ~ 2014. Kernels are always vanilla kernel and
compiled on the machine. No additional patches.
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i5-4278
G'day all.
Machine is a Macbook Pro Retina ~ 2014. Kernels are always vanilla kernel and
compiled on the machine. No additional patches.
vendor_id : GenuineIntel
cpu family : 6
model : 69
model name : Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz
stepping: 1
microco
Campbell
Tested-by: Brad Campbell
Signed-off-by: Mika Westerberg
---
drivers/thunderbolt/switch.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 410bf1bc..8e712fbf8233 100644
On 28/8/19 21:19, Mika Westerberg wrote:
On Wed, Aug 28, 2019 at 06:43:35PM +0800, Brad Campbell wrote:
On 28/8/19 6:23 pm, Mika Westerberg wrote:
On Wed, Aug 28, 2019 at 05:12:00PM +0800, Brad Campbell wrote:
Apart from the warning in the log (which is not fatal, I'll look into
it)
On 28/8/19 6:23 pm, Mika Westerberg wrote:
On Wed, Aug 28, 2019 at 05:12:00PM +0800, Brad Campbell wrote:
Apart from the warning in the log (which is not fatal, I'll look into
it) to me the second path setup looks fine.
Can you do one more experiment? Boot the system up without any
On 28/8/19 5:12 pm, Brad Campbell wrote:
On 28/8/19 3:33 pm, Mika Westerberg wrote:
I'm suspecting that the boot firmware does configure second DP path also
and we either fail to discover it properly or the boot firmware fails to
set it up.
Also if you boot with one monitor connecte
On 06/04/17 08:30, Brad Campbell wrote:
G'day All,
This is a vaguely current git head kernel compiled yesterday.
Oopsed and rebooted itself, and then oopsed and rebooted again. There
was no sign of a raid rebuild in the kernel logs, and it's a staging
machine so there is nothing run
G'day All,
This is a vaguely current git head kernel compiled yesterday.
Oopsed and rebooted itself, and then oopsed and rebooted again. There
was no sign of a raid rebuild in the kernel logs, and it's a staging
machine so there is nothing running after a reboot that goes near these
disks. Th
On 11/07/2013 06:54 PM, Justin Piszcz wrote:
On Mon, Nov 4, 2013 at 5:25 AM, Justin Piszcz wrote:
Hi,
I run two SSDs in a RAID-1 configuration and I have a swap partition on a
third SSD. Over time, the mismatch_cnt between the two devices grows higher
and higher.
Are both SSDs identical?
On 13/07/13 18:34, Justin Piszcz wrote:
And possibly:
discard_zeroes_data: 1
Does it though?
Here's my 6 x SSD RAID10 that definitely discards.
brad@srv:~$ grep . /sys/block/md2/queue/*
/sys/block/md2/queue/add_random:0
/sys/block/md2/queue/discard_granularity:33553920
/sys/block/md2/queue/di
G'day all,
I'm building a bit of hardware. It's basically a serial multiplexer that communicates to the PC
using a single usb-serial port. It has the ability to run between 2 and 8 standard async ports over
this single interface.
I'd rather not write a kernel driver if possible as I figure thi
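One userspace route for a multiplexer like this is pseudo-terminals: a daemon holds the single usb-serial fd plus one pty master per channel, and applications open the pty slaves as if they were ordinary serial ports. A minimal sketch of allocating one channel (function name and error handling are illustrative assumptions):

```c
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* Allocate one pty pair for a mux channel: the daemon keeps the
 * returned master fd and shuttles its bytes over the shared usb-serial
 * link; the slave path is what an application opens in place of a real
 * serial device node. */
int open_mux_channel(const char **slave_path)
{
	int master = posix_openpt(O_RDWR | O_NOCTTY);

	if (master < 0)
		return -1;
	if (grantpt(master) < 0 || unlockpt(master) < 0) {
		close(master);
		return -1;
	}
	*slave_path = ptsname(master);	/* e.g. /dev/pts/4 */
	return master;
}
```

The daemon then select()s across the usb-serial fd and all masters, prefixing each frame with a channel number on the wire.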
Paolo Ciarrocchi wrote:
Brad, is it possible for you to do some more test with the latest
version of both SD and CFS and post some more detailed feedbacks?
That would help a lot.
Err.. Ok. I have the latest version of SD (that I know about). I could upgrade CFS, but unlike those
doing real s
Con Kolivas wrote:
I've had a few requests for a standalone patch implementing swap prefetch for
mainline.
Here is a patch that is a current rollup that should apply and work for
vanilla 2.6.21 (ie not a -ck kernel):
http://ck.kolivas.org/patches/swap-prefetch/2.6.21-swap_prefetch-38.patch
Neil Brown wrote:
I wonder if we should avoid bypassing the stripe cache if the needed stripes
are already in the cache... or if at least one needed stripe is or
if the array is degraded...
Probably in the degraded case we should never bypass the cache, as if
we do, then a sequential read of
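The policy being suggested reduces to a single predicate: bypass the stripe cache only for reads, only when the array is fully redundant, and only when the stripe is not already cached. A sketch (the function and its boolean inputs are illustrative, not md's actual interface):

```c
#include <stdbool.h>

/* Decide whether a read may skip the stripe cache.  On a degraded
 * array the missing data must be reconstructed from parity, which only
 * the cache path can do; and if the stripe is already cached, serving
 * it from the cache is cheaper than going back to disk. */
bool raid5_bypass_cache(bool is_read, bool degraded, bool stripe_cached)
{
	return is_read && !degraded && !stripe_cached;
}
```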
Neil Brown wrote:
You could test this theory by putting a
WARN_ON(cfqq->next_rq == NULL);
at the end of cfq_reposition_rq_rb, just after the cfq_add_rq_rb call.
[ 756.311074] BUG: at block/cfq-iosched.c:543 cfq_reposition_rq_rb()
[ 756.329615] [] cfq_merged_request+0x71/0x80
[ 756.34
Jens Axboe wrote:
It looks to be extremely rare. Aliases are extremely rare, front merges
are rare. And you need both to happen with the details you outlined. But
it's a large user base, and we've had 3-4 reports on this in the past
months. So it obviously does happen. I could not make it trigge
Neil Brown wrote:
On Wednesday April 25, [EMAIL PROTECTED] wrote:
BUT... That may explain while we are only seeing it on md. Would md
ever be issuing such requests that trigger this condition?
Can someone remind me which raid level(s) was/were involved?
RAID-5 degraded here, but I've had it
Neil Brown wrote:
How likely it would be to get two requests with the same sector number
I don't know. I wouldn't expect it to ever happen - I have seen it
before, but it was due to a bug in ext3. Maybe XFS does it
intentionally some times?
It certainly sounds like an odd thing to occur.
Ev
Jens Axboe wrote:
Ok, can you try and reproduce with this one applied? It'll keep the
system running (unless there are other corruptions going on), so it
should help you a bit as well. It will dump some cfq state info when the
condition triggers that can perhaps help diagnose this. So if you can
Jens Axboe wrote:
Thanks for testing Brad, be sure to use the next patch I sent instead.
The one from this mail shouldn't even get you booted. So double check
that you are still using CFQ :-)
[184901.576773] BUG: unable to handle kernel NULL pointer dereference at
virtual address 005c
[1
Alex Dubov wrote:
Have you looked at the last version (0.8)? It fixed all outstanding issues (as
far as I know).
Seconded. I've been running Alex's latest driver since its release. I routinely suspend/resume
60-100 times between boots to S3 and disk, I've suspended with cards in the socket a
Jens Axboe wrote:
I had something similar for generic_unplug_request() as well, but didn't
see/hear any reports of it being tried out. Here's a complete debugging
patch for this and other potential dangers.
I had a clean 2.6.21-rc7 that I forgot to change the default sched on take down my mai
Neil Brown wrote:
On Monday April 16, [EMAIL PROTECTED] wrote:
cfq_dispatch_insert() was called with rq == 0. This one is getting really
annoying... and md is involved again (RAID0 this time.)
Yeah... weird.
RAID0 is so light-weight and so different from RAID1 or RAID5 that I
feel fairly safe
Adrian Bunk wrote:
[ Cc's added, additional information is in http://lkml.org/lkml/2007/4/15/32 ]
On Sun, Apr 15, 2007 at 02:49:29PM +0400, Brad Campbell wrote:
Brad Campbell wrote:
G'day all,
All I have is a digital photo of this oops. (It's 3.5mb). I have serial
console
Brad Campbell wrote:
G'day all,
All I have is a digital photo of this oops. (It's 3.5mb). I have serial
console configured, but Murphy is watching me carefully and I just can't
seem to reproduce it while logging the console output.
And as usual, after trying to capture on
G'day all,
All I have is a digital photo of this oops. (It's 3.5mb). I have serial console configured, but
Murphy is watching me carefully and I just can't seem to reproduce it while logging the console output.
http://www.fnarfbargle.com/CIMG0736.JPG
I had it die the same way using plain 2.6.
Eric Sandeen wrote:
Samuel Thibault wrote:
Hi,
Distribution installers usually try to probe OSes for building a suited
grub menu. Unfortunately, mounting an ext3 partition, even in read-only
mode, does perform some operations on the filesystem (log recovery).
This is not a good idea since it m
Willy Tarreau wrote:
Probably that you got the wrong laptop. If you buy an ultra-thin with highly
proprietary hardware, it may be hard. But if you choose in profesionnal ranges,
there is rarely any problem. I have a compaq nc8000 on which everything works
fine, and it boots in about 20 seconds.
Pierre Ossman wrote:
Brad Campbell wrote:
[EMAIL PROTECTED]:/sys/block/mmcblk0$ ls -laR
.:
total 0
drwxr-xr-x  6 root root    0 2007-02-11 23:29 .
drwxr-xr-x 13 root root    0 2007-02-11 23:27 ..
-r--r--r--  1 root root 4096 2007-02-11 23:28 dev
lrwxrwxrwx  1 root root    0 2007-02-11 23:27
Pierre Ossman wrote:
Brad Campbell wrote:
[EMAIL PROTECTED]:/$ find sys/devices | grep mmc
sys/devices/pci0000:00/0000:00:1e.0/0000:06:05.3/tifm_sd0:3/mmc_host:mmc0
This is strange. You should be getting more entries below that.
I believe that should be the case..
/sys/block/mmcblk0
Pierre Ossman wrote:
Brad Campbell wrote:
I've tested both with and without CONFIG_SYSFS_DEPRECATED on, both fail
the same way.
hald reports that the device has no parent and decides to ignore it.
Works fine here. The device tree is:
/sys/devices/pnp0/00:02/mmc0/mmc0:0001/block:mm
Pierre Ossman wrote:
Alex Dubov wrote:
One more problem (you may already know about it) - I was contacted by somebody
from the hald
project and indeed I can confirm that on 2.6.20 kernel hald fails to take
action on card
insertion. I can't see anything in my code so this may be a general mmc p
Michael McConnell wrote:
Hello,
Is there a mailing list and/or a website tracing the development of the
tifm_* drivers? I've got a new Vaio that has this chipset for its SD
card reader and would like to track this driver development (and maybe
help with its development).
From the bottom
Brad Campbell wrote:
Herbert Poetzl wrote:
sounds great! where can I get that version?
should it be in 2.6.20-rc* or is there a separate
patch available somewhere?
The patch was contained in the message from Alan to you that I replied
to. I just applied it to a vanilla 2.6.20-rc3 tree and
Herbert Poetzl wrote:
sounds great! where can I get that version?
should it be in 2.6.20-rc* or is there a separate
patch available somewhere?
The patch was contained in the message from Alan to you that I replied to. I just applied it to a
vanilla 2.6.20-rc3 tree and fired it up.
(I've pas
Alan wrote:
On Tue, 2 Jan 2007 08:01:45 +0100
Herbert Poetzl <[EMAIL PROTECTED]> wrote:
if you are interested in investigating this, please
let me know what kind of data you would like to see
and/or what kind of tests would be appreciated.
I reviewed the 374 code a bit further to see what mig
[EMAIL PROTECTED] wrote:
Running x86-32 using kernel 2.6.8 (from Debian sarge), although can always
roll my own if necessary. Preferred filesystem would be ext3, and I
anticipate no need to grow beyond the initial 2.5TB.
I'm running 2.1TB and 3TB filesystems on ext3 here. It's probably not fast or
Neil Whelchel wrote:
Hello,
I have two Promise SATA TX4 cards connected to a total of 6 Maxtor 250 GB
drives (7Y250M0) configured into a RAID 5. All works well with small
disk load, but when a large number of requests are issued, it causes crash
similar to the attached, except that the errors befor
Florian Engelhardt wrote:
Neat trick which I only discovered in desperation last week when
battling a RAID lockup on the -rc4-mm1 kernel on a remote box.
I was also having hard lockup issues, but reseating all my PCI cards
appear to have rectified that one.
Well, there are not many PCI cards in th
J.A. Magallon wrote:
Hi...
I posted this in other mail, but now I can confirm this.
I have a box with a SATA RAID-5, and with 2.6.11-rc3-mm2+libata-dev1
works like a charm as a samba server, I dropped it 12Gb from an
osx client, and people do backups from W2k boxes and everything was fine.
With 2
Florian Engelhardt wrote:
I activated the raid (/dev/md0), then mounted it, and after
that I was starting nfs. I was able to mount the share
on my desktop; creating directories was no problem, but
as soon as I was copying a file to the share, the server
froze.
Creating files locally (while logged in
Neil Brown wrote:
Could you please confirm if there is a problem with
2.6.11-rc4-bk4->bk10
as reported, and whether it seems to be the same problem.
Ok.. are we all ready? I had applied your development patches to all my vanilla 2.6.11-rc4-*
kernels. Thus they all exhibited the same problem in
Neil Brown wrote:
On Friday February 25, [EMAIL PROTECTED] wrote:
Turning on debugging in raid6main.c and md.c make it much harder to hit. So I'm assuming something
timing related.
raid6d --> md_check_recovery --> generic_make_request --> make_request --> get_active_stripe
Yes, there is a real p
Brad Campbell wrote:
G'day all,
I have a painful issue with a RAID-6 box. It only manifests itself on a
fully complete and synced up array, and I can't reproduce it on an array
smaller than the entire drives which means after every attempt at
debugging I have to endure a 12 hour resyn
G'day all,
I have a painful issue with a RAID-6 box. It only manifests itself on a fully complete and synced up
array, and I can't reproduce it on an array smaller than the entire drives which means after every
attempt at debugging I have to endure a 12 hour resync before I try again.
I have a s