+/* r3 = n (where n = [0-1023])
+ * The maximum number of BHRB entries supported with the PPC_MFBHRBE
+ * instruction is 1024. We have limited the number of table entries here
+ * as POWER8 implements 32 BHRB entries.
+ */
+
+/* .global read_bhrb */
+_GLOBAL(read_bhrb)
+ cmpldi r3,1023
This
+ /*
+  * Only need to copy the first 512 bytes from address 0. But since
+  * the compiler emits a warning if src == NULL for memcpy, use
+  * copy_page instead. Copies more than needed but this code is not
+  * performance critical.
+  */
+ copy_page(lowcore, &S39
Oh, that's strange. I'm pretty sure I've used -x assembler when I've
experimented with using cpp on dts manually before, and it seems to
have worked.
Maybe you used "-x assembler-with-cpp"? That should work better ;-)
Or just use the "-traditional-cpp" option, i.e. "gcc -E -traditional-cpp".
...since some architectures don't support __udivdi3() (and
we don't want to use that, anyway).
Signed-off-by: Segher Boessenkool <[EMAIL PROTECTED]>
---
 include/linux/time.h |    4 ++++
 1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/include/linux/time.h b/incl
Sorry to follow up this late...
Is whitespace (in any form) allowed in the compatible value?
No. Only printable characters are allowed, that is, byte values
0x21..0x7e and 0xa1..0xfe; each text string is terminated by a
0x00; there can be several text strings concatenated in one
"compatible" property.
Yes, whitespace is used at
I choose the spec. If an implementation is not conformant to the spec,
it doesn't "work".
Not to say that Linux doesn't have to work around bugs in actual
implementations, of course. And there's a lot of those. Too bad ;-)
Yah, well.. ok, let's say we have a spec... and an implementation that
+ /* Early out if we are an invalid form of lswi */
+ if ((instword & INST_STRING_MASK) == INST_LSWX)
Typo ^
Segher
The fbmem.c bug made "less /proc/fb" segfault, as it made read() return
more bytes than were requested.
The offb.c bug caused /proc/fb output to be incorrect, and potentially
could cause kernel data structure corruption.
Enjoy,
Segher
--->SNIP HERE<---
diff -ur linux-2.2.19/drivers/video/fb
While experimenting with framebuffer access performances, we noticed a
very significant improvement in write access to it when not setting
the "guarded" bit on the MMU mappings. This bit basically says that
reads and writes won't have side effects (it allows speculation).
Unless the data is already
I still suspect that because the behaviour is different between 4.1 and
4.2, it might be a regression in 4.2,
The kernel code is wrong. It might have accidentally worked
with GCC-4.1, but that doesn't mean GCC-4.2 has regressed.
Only supported features that stop working are regressions;
invalid
More importantly, "reg-shift" doesn't say what part of
the bigger words to access. A common example is byte-wide
registers on a 32-bit-only bus; it's about 50%-50% between
connecting the registers to the low byte vs. connecting them to the
byte with the lowest address.
We already have "big-end
The hardware is called (E)IDE, the protocol is called ATA.
Or that's what I was told -- I think there's some historic
revisionism involved, too.
ATA is the interface and standards for the ANSI standards based disk
attachment. IDE "Integrated Drive Electronics" is a marketing name used
to cover a
You don't need to patch Linux at all. In fact for silly things like
this I would recommend against it :)
If the workaround doesn't go into the kernel, everybody with affected
hardware has to individually find out about the bug (probably by
experiencing an annoying keyboardless boot) and fix i
Second issue as reported earlier: allmodconfig fails to build on imac
g3.
CC arch/powerpc/kernel/lparmap.s
AS arch/powerpc/kernel/head_64.o
lparmap.c: Assembler messages:
lparmap.c:84: Error: file number 1 already allocated
make[1]: *** [arch/powerpc/kernel/head_64.o] Blad 1
make:
On 2 aug 2007, at 12:14, Mariusz Kozlowski wrote:
Second issue as reported earlier: allmodconfig fails to build on
imac g3.
Do you really mean g3? If so it's a 32-bit kernel and it shouldn't be
building lparmap.s.
It might be a bug nevertheless, there are more "issues" with
the interesting
Somehow your defconfig is targeting a PPC64 box:
CONFIG_PPC64=y
shouldn't be set if you want to build a kernel for a G3 imac.
allyesconfig/allmodconfig select a 64-bit build always. Maybe
it shouldn't.
Segher
Well, top-level assembly is usually nasty. Setting the section in the
assembly statement as you said is probably the only thing you *can* do.
You'll probably need to (at the end of the asm block) restore
the current section to what it was before (".previous"), too.
I don't think there is any
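To illustrate what that advice looks like in practice (a minimal sketch,
not from any real patch -- the section and symbol names are made up): a
file-scope asm that switches sections and then restores the previous
one with .previous:

asm("	.section \".init.text\",\"ax\"\n"
    "example_stub:\n"
    "	nop\n"
    "	.previous\n");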
Second issue as reported earlier: allmodconfig fails to build on imac
g3.
CC arch/powerpc/kernel/lparmap.s
AS arch/powerpc/kernel/head_64.o
lparmap.c: Assembler messages:
lparmap.c:84: Error: file number 1 already allocated
make[1]: *** [arch/powerpc/kernel/head_64.o] Blad 1
make: *
2) The fix was in the wrong place anyway, if it was going to be done
anywhere at all it needs to be in
arch/powerpc/kernel/prom_init.c:fixup_device_tree_chrp()
like the ISA ranges breakage (which is on Briq) and IDE IRQ
misnumbering fix.
Not the keyboard platform driver.
Yeah. In the bootwra
It seems like things go wrong when lparmap.s is generated with
(DWARF) debug info; could you try building it (manually) with -g0
added on the end of the compile line, and see if head_64.o compiles
okay for you then? If so, I'll prepare a proper patch for it, I
have a similar one (also for lparmap
We can't have split stores because we don't use atomic64_t on 32-bit
architectures.
That's not true; the compiler is free to split all stores
(and reads) from memory however it wants. It is debatable
whether "volatile" would prevent this as well, certainly
it is unsafe if you want to be portabl
Historically this has been
+accomplished by declaring the counter itself to be volatile, but the
+ambiguity of the C standard on the semantics of volatile makes this
+practice vulnerable to overly creative interpretation by compilers.
It's even worse when accessing through a volatile casted poi
That's hardly the only reason. But yeah, that's one way to
implement the workaround, but _we_ (the Linux community) cannot
do it like that (easily) for all users.
But you're the guy who told us our firmware sucks and we should fix our
firmware
Yes, and? You _should_ fix your firmware, it is
The only safe way to get atomic accesses is to write
assembler code. Are there any downsides to that? I don't
see any.
The assumption that aligned word reads and writes are atomic, and that
words are aligned unless explicitly packed otherwise, is endemic in
the kernel. No sane compiler viol
The compiler is within its rights to read a 32-bit quantity 16 bits at
a time, even on a 32-bit machine. I would be glad to help pummel any
compiler writer that pulls such a dirty trick, but the C standard
really does permit this.
Yes, but we don't write code for these compilers. There are
The only safe way to get atomic accesses is to write
assembler code. Are there any downsides to that? I don't
see any.
The assumption that aligned word reads and writes are atomic, and
that words are aligned unless explicitly packed otherwise, is
endemic in the kernel. No sane compiler viol
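As a rough illustration of the "write it in assembler" idea above (a
sketch only, not the actual powerpc patch; the type name is made up),
a single lwz is one aligned 32-bit load, so the compiler cannot tear
or elide it:

typedef struct { int counter; } example_atomic_t;  /* stand-in for atomic_t */

static inline int example_atomic_read(const example_atomic_t *v)
{
	int t;

	/* One lwz = one 32-bit load from v->counter. */
	__asm__ __volatile__("lwz %0,0(%1)"
			     : "=r" (t)
			     : "b" (&v->counter)
			     : "memory");
	return t;
}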
If you need to guarantee that the value is written to memory at a
particular time in your execution sequence, you either have to read it
from memory to force the compiler to store it first
That isn't enough. The CPU will happily read the datum back from
its own store queue before it ever hit m
+Explicit casting in atomic_read() ensures consistent behavior across
+architectures and compilers.
Even modulo compiler bugs, what makes you believe that?
When you declare a variable volatile, you don't actually tell the
compiler where you want to override its default optimization behavior,
Anyway, what's the supposed advantage of *(volatile *) vs. using
a real volatile object? That you can access that same object in
a non-volatile way?
That's my understanding. That way accesses where you don't care about
volatility may be optimised.
But those accesses might be done non-atomic
Anyway, what's the supposed advantage of *(volatile *) vs. using
a real volatile object? That you can access that same object in
a non-volatile way?
You'll have to take that up with Linus and the minds behind Volatile
Considered Harmful, but the crux of it is that volatile objects are
prone t
So, why not use the well-defined alternative?
Because we don't need to, and it hurts performance.
It hurts performance by implementing 32-bit atomic reads in assembler?
No, I misunderstood the question. Implementing 32-bit atomic reads in
assembler is redundant, because any sane compiler, *p
+- #address-cells : Address representation for "rapidio" devices.
+ This field represents the number of cells needed to represent
+ the RapidIO address of the registers.
Can you explain this a little further. I'm a bit confused by
'RapidIO address of the registers'.
I want to
+ l) RapidIO
"FSL PowerPC bridge RapidIO" or something like that -- you
aren't doing a _generic_ rapidio binding here.
+ RapidIO is a definition of a system interconnect. This node adds
+ support for the RapidIO processor to the kernel. The suggested node
+ name is 'rapidio'.
+
+ Re
As of 2.6.22 the kernel doesn't recognize the i8042 keyboard/mouse
controller on the PegasosPPC. This is because of a feature/bug in the
OF device tree: the "device_type" attribute is an empty string instead
of "8042" as the kernel expects. This patch (against 2.6.22.1) adds a
secondary detecti
This doesn't mean that shift is better anyway. If everyone considers it
better, I give up. But be warned that shift (stride) is not the only
property characterizing register accesses -- the regs might be only
accessible as 16/32-bit quantities, for example (16-bit is a real world
example --
+ [EMAIL PROTECTED] {
+ compatible = "mmio-ide";
+ device_type = "ide";
Why not "ata"?
The hardware is called (E)IDE, the protocol is called ATA.
Or that's what I was told -- I think there's some historic
revisionism involved, too.
Also, what mmio-i
I never suggested that -- what I did suggest was make of_serial.c
recognize certain chip types and register them with 8250 driver.
What would be the advantage of maintaining a list of chips whose only
difference is register spacing, rather than just using reg-shift and
being done with it?
r
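For context, a driver typically consumes such a shift/stride property
along these lines (a sketch only; the function and parameter names are
made up, this is not of_serial.c code):

#include <linux/io.h>
#include <linux/types.h>

/* Scale the register index by the stride before the MMIO access. */
static inline u8 uart_reg_read(void __iomem *base, unsigned int reg,
			       unsigned int reg_shift)
{
	return readb(base + (reg << reg_shift));
}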
The compiler is within its rights to read a 32-bit quantity 16 bits at
a time, even on a 32-bit machine. I would be glad to help pummel any
compiler writer that pulls such a dirty trick, but the C standard
really does permit this.
Code all over the kernel assumes that 32-bit reads/writes
are
, nothing shocking).
Signed-off-by: Segher Boessenkool <[EMAIL PROTECTED]>
---
include/asm-powerpc/atomic.h | 34 --
1 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/include/asm-powerpc/atomic.h b/include/asm-powerpc/atomic.h
index c44810b..bc17506
That means GCC cannot compile Linux; it already optimises
some accesses to scalars to smaller accesses when it knows
it is allowed to. Not often though, since it hardly ever
helps in the cost model it employs.
Please give an example code snippet + gcc version + arch
to back this up.
u
You'd have to use "+m".
Yes, though I would use "=m" on the output list and "m" on the input
list. The reason is that I've seen gcc fall on its face with an ICE on
s390 due to "+m". The explanation I've got from our compiler people was
quite esoteric, as far as I remember gcc splits "+m" to an i
Note that last line.
Segher, how about you just accept that Linux uses gcc as per reality,
and that sometimes the reality is different from your expectations?
"+m" works.
It works _most of the time_. Ask Martin. Oh you don't even have to,
he told you two mails ago. My last mail simply po
Yes, though I would use "=m" on the output list and "m" on the input
list. The reason is that I've seen gcc fall on its face with an ICE on
s390 due to "+m". The explanation I've got from our compiler people was
quite esoteric, as far as I remember gcc splits "+m" to an input
operand and an out
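A sketch of that workaround (illustrative only, written x86-style here;
the code under discussion was s390): the same memory location is named
once as an output and once as an input, instead of a single "+m"
operand:

static inline void example_inc(int *v)
{
	__asm__ __volatile__("incl %0"
			     : "=m" (*v)
			     : "m" (*v));
}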
"+m" works. We use it. It's better than the alternatives. Pointing to
stale documentation doesn't change anything.
Well, perhaps on i386. I've seen some older versions of the s390 gcc
die with an ICE because I have used "+m" in some kernel inline
assembly. I'm happy to hear that this issue is
Yeah. Compiler errors are more annoying though I dare say ;-)
Actually, compile-time errors are fine,
Yes, they don't cause data corruption or anything like that,
but I still don't think the 390 people want to ship a kernel
that doesn't build -- and it seems they still need to support
GCC ver
Well if there is only one memory location involved, then smp_rmb()
isn't going to really do anything anyway, so it would be incorrect to
use it.
rmb() orders *any* two reads; that includes two reads from the same
location.
Consider that smp_rmb basically will do anything from flushing the
pip
"Volatile behaviour" itself isn't consistently defined (at least
definitely not consistently implemented in various gcc versions across
platforms),
It should be consistent across platforms; if not, file a bug please.
but it is /expected/ to mean something like: "ensure that
every such access a
Well if there is only one memory location involved, then smp_rmb()
isn't going to really do anything anyway, so it would be incorrect to
use it.
rmb() orders *any* two reads; that includes two reads from the same
location.
If the two reads are to the same location, all CPUs I am aware of
will
How does the compiler know that msleep() has got barrier()s?
Because msleep_interruptible() is in a separate compilation unit,
the compiler has to assume that it might modify any arbitrary global.
No; compilation units have nothing to do with it, GCC can optimise
across compilation unit bounda
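(For reference, barrier() in the kernel is essentially an empty asm
with a "memory" clobber -- roughly, see include/linux/compiler*.h:

#define barrier() __asm__ __volatile__("" : : : "memory")

so the compiler must assume any memory may have been read or written
across that point.)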
I think this was just terminology confusion here again. Isn't "any code
that it cannot currently see" the same as "another compilation unit",
and wouldn't the "compilation unit" itself expand if we ask gcc to
compile more than one unit at once? Or is there some more specific
"definition" for "com
What you probably mean is that the compiler has to assume any code
it cannot currently see can do anything (insofar as allowed by the
relevant standards etc.)
I think this was just terminology confusion here again. Isn't "any code
that it cannot currently see" the same as "another compilation un
What volatile does are a) never optimise away a read (or write)
to the object, since the data can change in ways the compiler
cannot see; and b) never move stores to the object across a
sequence point. This does not mean other accesses cannot be
reordered wrt the volatile access.
If the abstract
Of course, if we find there are more callers in the kernel who want the
volatility behaviour than those who don't care, we can re-define the
existing ops to such variants, and re-name the existing definitions to
something else, say "atomic_read_nonvolatile" for all I care.
Do we really need a
No; compilation units have nothing to do with it, GCC can optimise
across compilation unit boundaries just fine, if you tell it to
compile more than one compilation unit at once.
Last I checked, the Linux kernel build system did compile each .c file
as a separate compilation unit.
I have some
Possibly these were too trivial to expose any potential problems that
you may have been referring to, so it would be helpful if you could
write a more concrete example / sample code.
The trick is to have a sufficiently complicated expression to force
the compiler to run out of registers.
You ca
Please check the definition of "cache coherence".
Which of the twelve thousand such definitions? :-)
Summary: the CPU is indeed within its rights to execute loads and
stores to a single variable out of order, -but- only if it gets the
same result that it would have obtained by executing them
"compilation unit" is a C standard term. It typically boils down
to "single .c file".
As you mentioned later, "single .c file with all the other files
(headers or other .c files) that it pulls in via #include" is actually
"translation unit", both in the C standard as well as gcc docs.
Yeah
Part of the motivation here is to fix heisenbugs. If I knew where
they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Almost everything is a tradeoff; and so is this. I don't
believe most people would find disabling all compiler
op
Part of the motivation here is to fix heisenbugs. If I knew where
they
By the same token we should probably disable optimisations
altogether since that too can create heisenbugs.
Precisely the point -- use of volatile (whether in casts or on asms)
in these cases are intended to disable those
A volatile default would disable optimizations for atomic_read.
atomic_read without volatile would allow for full optimization by the
compiler. Seems that this is what one wants in many cases.
Name one such case.
An atomic_read should do a load from memory. If the programmer puts
an atomic_rea
The only thing volatile on an asm does is create a side effect
on the asm statement; in effect, it tells the compiler "do not
remove this asm even if you don't need any of its outputs".
It's not disabling optimisation likely to result in bugs,
heisen- or otherwise; _not_ putting the volatile on a
I'd go so far as to say that anywhere where you want a non-"volatile"
atomic_read, either your code is buggy, or else an int would work just
as well.
Even, the only way to implement a "non-volatile" atomic_read() is
essentially as a plain int (you can do some tricks so you cannot
assign to the r
I can't speak for this particular case, but there could be similar code
examples elsewhere, where we do the atomic ops on an atomic_t object
inside a higher-level locking scheme that would take care of the kind
of problem you're referring to here. It would be useful for such or
similar code if
Note that "volatile"
is a type-qualifier, not a type itself, so a cast of the _object_
itself
to a qualified-type i.e. (volatile int) would not make the access
itself
volatile-qualified.
There is no such thing as "volatile-qualified access" defined
anywhere; there only is the concept of a "vo
Here, I should obviously admit that the semantics of *(volatile int *)&
aren't any neater or well-defined in the _language standard_ at all.
The standard does say (verbatim) "precisely what constitutes an access
to an object of volatile-qualified type is implementation-defined", but
GCC does help u
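As a concrete illustration of the construct in question (a sketch only;
the type name is made up as a stand-in for atomic_t), reading through a
pointer cast to volatile forces a real load for this one access while
the object itself stays an ordinary int:

typedef struct { int counter; } example_atomic_t;

static inline int example_atomic_read(example_atomic_t *v)
{
	/* Only this access is volatile-qualified; other accesses to
	 * v->counter may still be optimised. */
	return *(volatile int *)&v->counter;
}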
ment the atomic operations in assembler,
like Segher Boessenkool did for powerpc in response to my previous
patchset.
Puh-lease. I DO NOT DISTRUST THE COMPILER, I just don't assume
it will do whatever I would like it to do without telling it.
It's a machine you know, and it is very we
atomic_dec() already has volatile behavior everywhere, so this is
semantically okay, but this code (and any like it) should be calling
cpu_relax() each iteration through the loop, unless there's a
compelling reason not to. I'll allow that for some hardware drivers
(possibly this one) such a co
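A sketch of the pattern being asked for (the function and variable
names are made up for the example):

#include <asm/atomic.h>
#include <asm/processor.h>	/* cpu_relax() */

static void wait_for_last_user(atomic_t *refcnt)
{
	/* Busy-wait, but tell the CPU/compiler that we are spinning. */
	while (atomic_read(refcnt) != 0)
		cpu_relax();
}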
+ if (len > TASK_SIZE)
+ return -ENOMEM;
Shouldn't that be addr+len instead? The check looks incomplete
otherwise. And you meant ">=" I guess?
- /* Paranoia, caller should have dealt with this */
- BUG_ON((addr + len) > 0x1UL);
-
Any
+ if (len > TASK_SIZE)
+ return -ENOMEM;
Shouldn't that be addr+len instead? The check looks incomplete
otherwise. And you meant ">=" I guess?
No. Have a look at the other hugetlb_get_unmapped_area()
implementations. Because this is in the get_unmapped_area() path,
'add
Thanks Jean. Your compressed/head.o looks fine.
No it doesn't -- the .text.head section doesn't have
the ALLOC attribute set. The section then ends up not
being assigned to an output segment (during the linking
of vmlinux) and all hell breaks loose. The linker gives
you a warning about this bt
.text.head is not type AX so it will be left out from the linked
output.
No, it does get added, but the section is not added to
any segment, so a) it ends up near the end of the address
map instead of being first thing, and b) it won't be loaded
at run time.
This reminds me that I have put ano
I think what might be happening is that pdflush writes them out fine,
however we don't trap writes by the application _during_ that writeout.
Yeah. I believe that more exactly it happens if the very last
write to the page causes a writeback (due to dirty balancing)
while another writeback for t
In this case, the second form
should be used when the macro needs to return a value (and you can't
use an inline function for whatever reason), whereas the first form
should be used at all other times.
that's a fair point, although it's certainly not the coding style
that's in play now. for exa
+#define KFREE(x) \
+ do {\
+ kfree(x); \
+ x = NULL; \
+ } while(0)
This doesn't work correctly if "x" has side effects --
double evaluation. Use a temporary variable instead,
or better, an inline function.
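One way to do the temporary-variable version (a sketch; typeof is a GNU
extension, which is fine for kernel code) is to take the address of the
argument so it is evaluated exactly once:

#include <linux/slab.h>

#define KFREE(p)				\
	do {					\
		typeof(p) *__pp = &(p);		\
		kfree(*__pp);			\
		*__pp = NULL;			\
	} while (0)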
All we've done is created a trivial implementation for exporting
the device tree to userland that isn't burdened by the powerpc
and sparc legacy code that's in there now.
So now we'll have _3_ different implementations of exporting
the OFW device tree via procfs. Yours, the proc_devtree
of pow
+A regular file in ofwfs contains the exact byte sequence that
+comprises the OFW property value. Properties are not reformatted
+into text form, so numeric property values appear as binary
+integers. While this is inconvenient for viewing, it is generally
+easier for programs that read property
I would not exactly call what we have for powerpc
"exporting the OFW device tree". I don't quite know
what it is, but it isn't as simple as exporting the
OFW device tree. I don't think we really wanted to
get into any of that here.
The Linux PowerPC port uses an OF-like device tree on
*every* pl
Some comments, mostly coding style:
- 0xb0 - 0x13f Free. Add more parameters here if you really need them.
+ 0xb0 16 bytes Open Firmware information (magic, version, callback, idt)
Is there an OF ISA binding for x86 somewhere? And don't
point me to the source code, I'd like to see
#define setcc(cc) ({ \
partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3); \
partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3); })
This _does_ return a value though, bad example.
Where does it return a value?
partial_status |=
I don't see any uses of it
Ah, that's a separate thing --
Not the same exact thing -- using a text representation for
the property contents is a very different thing (and completely
braindead).
The filesystem bit is for groveling around and getting information
from the shell prompt, or shell scripts. Text processing.
If you want the binary bits, expo
If people want to return something from a ({ }) construct, they should
do it
explicitly, e.g.
#define setcc(cc) ({ \
partial_status &= ~(SW_C0|SW_C1|SW_C2|SW_C3); \
partial_status |= (cc) & (SW_C0|SW_C1|SW_C2|SW_C3); \
partial_status; \
})
No, they generally should use
If you *really* want (the option of) showing things as text
in the filesystem, you better make it so that there is a
one-to-one translation back to binary. For example, what
does this mean, is it a text string or two bytes:
01.02
Yes you as a user can guess, but scripts can't (reliably).
We h
It has proved a good idea in general as I can easily get an exact
device-tree dump from users by asking for a tarball of
/proc/device-tree
and in some case, the data in there -is- binary (For example, the EDID
properties for monitors left by video drivers, or things like that).
Yes and with o
There is one big problem: text representation is useless
(to scripts etc.) unless it can be transformed back to binary;
i.e., it has to be possible to reliably detect _how_ some
property is represented into text, something that cannot be
done with how openpromfs handles it.
Text is text is text
So please do this crap right.
I strongly agree. Nowadays, both powerpc and sparc use an in-memory
copy of the tree (whether you use the flattened format during the
trampoline from OF runtime to the kernel or not is a different matter,
we created that for the sake of kexec and embedded devices w
In addition, I haven't given up on the idea of one day actually merging
the powerpc and sparc implementations of a lot of that stuff. Mostly the
device-tree accessors proper, the of_device/of_platform bits etc...
into
something like drivers/of1394 maybe.
1394? :-)
Thus if i386 is going to have
Except that none of the powerpc platforms can keep OF alive after the
kernel has booted, which is why we do an in-memory copy of the tree.
Adding that functionality hasn't gotten easier at all since
we use the flattened tree for everything, heh.
We have well defined interfaces to access that c
IMHO, the directory entries in the filesystem
should be in the form "node-name@unit-address" (eg: /pci@1f,0), where
"pci" is the node name, "@" is the separator character defined
by IEEE 1275, and "1f,0" is the unit-address,
which are always guaranteed to be unique.
They should be. The problem is
Simple system tools should not need to interpret binary data in
order to provide access to simple structured data like this, that's
just stupid.
I would agree with you if the data was properly typed in the first
place
but it's not,
OF device tree properties are "properly typed" just fine --
Segher had suggested to use .section command to specifically mark
.text.head section as AX (allocatable and executable) to solve the
problem.
Great to hear it works in real life too.
Here, have a From: line (or how should this patch history be
encoded?) :-)
From: Segher Boessenkool <[EM
static void __devinit vio_dev_release(struct device *dev)
{
-	if (dev->archdata.of_node) {
-		/* XXX should free TCE table */
-		of_node_put(dev->archdata.of_node);
-	}
+	/* XXX should free TCE table */
+	of_node_put(dev->archdata.of_node);
Are you really suggesting that using a kernel copy of the
device tree is the correct thing to do, and the only correct
thing to do -- with the sole argument that "that's what the
current ports do"?
Well, there are reasons why that's what the current ports do :-)
Sure. It might have been a goo
Not single thread -- but a "global OF lock" yes. Not that
it matters too much, (almost) all property accesses are init
time anyway (which is effectively single threaded).
Not that true anymore. A lot of driver probe is being threaded
nowadays, either because of the new multithread probing bits, o
I do object basically to having something that doesn't also provide
in-kernel interfaces to access the device nodes & properties.
That's the wrong way around. Work is underway to instead
have the devicetreefs *use* the in-kernel interfaces. Would
that be acceptable?
I don't
agree with the re