On 2012-12-10T13:21:23, NeilBrown wrote:
> The problem with this approach is that it slows down resync even when there
> is no other IO happening.
> If that is deemed to be acceptable, then the patch set seems fine, though I
> would probably make the default a lot higher so as not to change
On 2012-11-22T14:27:52, Guangliang Zhao wrote:
Hi Guangliang,
thanks for adding this. I think this approach is a good direction to
take, just one feedback:
> Add ioctl to control resync speed, userspace tool
> is dmsetup message, message format is:
> dmsetup message $device 0 "set
On 2012-08-27T12:52:24, Toshi Kani wrote:
> kdump can be interrupted by watchdog timer when the timer is left
> activated on the crash kernel. Changed the hpwdt driver to disable
> watchdog timer at boot-time. This assures that watchdog timer is
> disabled until /dev/watchdog is opened, and
On 2007-06-25T17:14:11, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Actually, I surprised Lars a lot by telling him ln /etc/shadow /tmp/
> allows any user to make AA ineffective on large part of systems -- in
> internal discussion. (It is not actually a _bug_, but it is certainly
> unexpected).
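The hard-link point above can be demonstrated without touching /etc/shadow: a hard link gives the same inode a second pathname, so a purely pathname-based rule that names only the original path will not match accesses through the new name. A minimal sketch, using a temporary file as a stand-in:

```shell
# A hard link is a second name for the same inode; pathname-based
# mediation that matches only the original name misses the new one.
f=$(mktemp)                # stand-in for /etc/shadow
ln "$f" "$f.link"          # same inode, different pathname
stat -c %i "$f" "$f.link"  # both lines print the same inode number
```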
Pavel,
On 2007-06-22T08:41:51, Stephen Smalley <[EMAIL PROTECTED]> wrote:
> The issue arises even for a collection of collaborating confined
> processes with different profiles, and the collaboration may be
> intentional or unintentional (in the latter case, one of the confined
> processes may be taking
On 2007-06-22T07:53:47, Stephen Smalley <[EMAIL PROTECTED]> wrote:
> > No the "incomplete" mediation does not flow from the design. We have
> > deliberately focused on doing the necessary modifications for pathname
> > based mediation. The IPC and network mediation are a wip.
> The fact that
On 2007-06-22T07:19:39, Stephen Smalley <[EMAIL PROTECTED]> wrote:
> > > Or can access the data under a different path to which their profile
> > > does give them access, whether in its final destination or in some
> > > temporary file processed along the way.
> > Well, yes. That is intentional.
On 2007-06-21T23:45:36, Joshua Brindle <[EMAIL PROTECTED]> wrote:
> >remember, the policies define a white-list
>
> Except for unconfined processes.
The argument that AA doesn't mediate what it is not configured to
mediate is correct, yes, but I don't think that's a valid _design_ issue
with AA.
On 2007-06-21T20:16:25, Joshua Brindle <[EMAIL PROTECTED]> wrote:
> not. One need only look at the wonderful marketing literature for AA to
> see what you are telling people it can do, and your above statement
> isn't consistent with that, sorry.
I'm sorry. I don't work in marketing.
On 2007-06-21T16:59:54, Stephen Smalley <[EMAIL PROTECTED]> wrote:
> Or can access the data under a different path to which their profile
> does give them access, whether in its final destination or in some
> temporary file processed along the way.
Well, yes. That is intentional.
Your point is?
On 2007-06-21T22:07:40, Pavel Machek <[EMAIL PROTECTED]> wrote:
> > AA is supposed to allow valid access patterns, so for non-buggy apps +
> > policies, the rename will be fine and does not change the (observed)
> > permissions.
> That still breaks POSIX, right? Hopefully it will not break any apps,
On 2007-06-21T15:42:28, James Morris <[EMAIL PROTECTED]> wrote:
> > A veto is not a technical argument. All technical arguments (except for
> > "path name is ugly, yuk yuk!") have been addressed, have they not?
> AppArmor doesn't actually provide confinement, because it only operates on
> filesystem
On 2007-06-21T12:30:08, [EMAIL PROTECTED] wrote:
> well, if you _really_ want people who are interested in this to do weekly
> "why isn't it merged yet you $%#$%# developers" threads that can be
> arranged.
>
> the people who want this have been trying to be patient and let the system
> work.
On 2007-06-21T20:33:11, Pavel Machek <[EMAIL PROTECTED]> wrote:
> inconvenient, yes, insecure, no.
Well, only if you use the most restrictive permissions. And then you'll
suddenly hit failure cases which you didn't expect to, which can
possibly cause another exploit to become visible.
> I
I've caught up on this thread with growing disbelief while reading the
mails, so much that I've found it hard to decide where to reply to.
So people are claiming that AA is ugly, because it introduces pathnames
and possibly a regex interpreter. Ok, taste differs. We've got many
different flavours
On 2007-06-21T12:30:08, [EMAIL PROTECTED] wrote:
well, if you _really_ want people who are interested in this to do weekly
why isn't it merged yet you $%#$%# developers threads that can be
arranged.
the people who want this have been trying to be patient and let the system
work. if it
On 2007-06-10T23:05:47, Pavel Machek <[EMAIL PROTECTED]> wrote:
> But you have that regex in _user_ space, in a place where policy
> is loaded into kernel.
>
> AA has regex parser in _kernel_ space, which is very wrong.
That regex parser only applies user defined policy. The logical
connection
On 2005-09-03T01:57:31, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> The only current users of dlms are cluster filesystems. There are zero users
> of the userspace dlm api.
That is incorrect, and you're contradicting yourself here:
> What does have to be resolved is a common API for node
On 2005-09-03T09:27:41, Bernd Eckenfels <[EMAIL PROTECTED]> wrote:
> Oh thats interesting, I never thought about putting data files (tablespaces)
> in a clustered file system. Does that mean you can run supported RAC on
> shared ocfs2 files and anybody is using that?
That is the whole point why
On 2005-09-01T16:28:30, Alan Cox <[EMAIL PROTECTED]> wrote:
> Competition will decide if OCFS or GFS is better, or indeed if someone
> comes along with another contender that is better still. And competition
> will probably get the answer right.
Competition will come up with the same situation
On 2005-08-10T12:05:11, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > What would a syntax look like which in your opinion does not remove
> > totally valid symlink targets for magic mushroom bullshit? Prefix with
> > // (which, according to POSIX, allows for implementation-defined
> > behaviour)?
On 2005-08-10T11:54:50, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> It works now. Unlike context link which steal totally valid symlink
> targets for magic mushroom bullshit.
Right, that is a valid concern. Avoiding context dependent symlinks
entirely certainly is one possible path around this.
On 2005-08-10T11:32:56, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > Would a generic implementation of that higher up in the VFS be more
> > acceptable?
> No. Use mount --bind
That's a working and less complex alternative for upto how many places
at once? That works for non-root users how...?
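For readers unfamiliar with the feature being argued over: the per-node view that a context-dependent symlink provides can be approximated in plain userspace with an ordinary symlink, which any user can create, whereas `mount --bind` requires root. A hypothetical sketch, with made-up paths:

```shell
# Each node points the shared name "local" at its own directory;
# mount --bind can give the same per-node view, but only for root.
demo=$(mktemp -d)
mkdir -p "$demo/node-$(hostname)"
ln -sfn "node-$(hostname)" "$demo/local"
readlink "$demo/local"      # prints node-<this host's name>
```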
On 2005-08-10T08:03:09, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > Kindly lose the "Context Dependent Pathname" crap.
> Same for ocfs2.
Would a generic implementation of that higher up in the VFS be more
acceptable?
It's not like context-dependent symlinks are an arbitrary feature, but
On 2005-08-03T11:56:18, David Teigland <[EMAIL PROTECTED]> wrote:
> > * Why use your own journalling layer and not say ... jbd ?
> Here's an analysis of three approaches to cluster-fs journaling and their
> pros/cons (including using jbd): http://tinyurl.com/7sbqq
Very instructive read, thanks
On 2005-08-02T10:52:00, Lee Revell <[EMAIL PROTECTED]> wrote:
> > Power consumption matters to server, desktop, and laptop.
> >
> > Assuming this is a laptop issue is wildly incorrect.
>
> I would think you'd get the best power/performance ratio from a desktop
> by just having it suspend after 5 or 10
On 2005-08-02T10:02:59, Lee Revell <[EMAIL PROTECTED]> wrote:
> > Maybe new desktop systems - but what about the tens of millions of old
> > systems that don't.
> Does anyone really give a shit about saving power on the desktop anyway?
> This is basically a laptop issue.
Desktops? Screw desktops.
On 2005-07-20T11:39:38, Joel Becker <[EMAIL PROTECTED]> wrote:
> In turn, let me clarify a little where configfs fits in to
> things. Configfs is merely a convenient and transparent method to
> communicate configuration to kernel objects. It's not a place for
> uevents, for netlink
On 2005-07-20T09:55:31, "Walker, Bruce J (HP-Labs)" <[EMAIL PROTECTED]> wrote:
> Like Lars, I too was under the wrong impression about this configfs
> "nodemanager" kernel component. Our discussions in the cluster
> meeting Monday and Tuesday were assuming it was a general service that
> other
On 2005-07-20T11:35:46, David Teigland <[EMAIL PROTECTED]> wrote:
> > Also, eventually we obviously need to have state for the nodes - up/down
> > et cetera. I think the node manager also ought to track this.
> We don't have a need for that information yet; I'm hoping we won't ever
> need it in
On 2005-07-18T14:15:53, David Teigland <[EMAIL PROTECTED]> wrote:
> Some of the comments about the dlm concerned how it's configured (from
> user space.) In particular, there was interest in seeing the dlm and
> ocfs2 use common methods for their configuration.
>
> The first area I'm looking at
On 2005-07-08T13:36:12, David Howells <[EMAIL PROTECTED]> wrote:
> The attached patch prevents oopses interleaving with characters from other
> printks on other CPUs by only breaking the lock if the oops is happening on
> the machine holding the lock.
>
> It might be better if the oops generator
On 2005-07-05T07:09:47, "Richard B. Johnson" <[EMAIL PROTECTED]> wrote:
> This problem will continue. Eventually there will be no general
> exported symbols. The apparent idea is to prevent the use of the
> kernel in proprietary systems.
... with proprietary kernel extensions. There's a difference.
On 2005-04-15T00:56:35, Domen Puncer <[EMAIL PROTECTED]> wrote:
> This is permissions in sysfs (or 0 if no file is to be created).
Duh. Should have caught that. Try this one.
Index: linux-2.6.11/drivers/block/nbd.c
===
---
From: Lars Marowsky-Bree <[EMAIL PROTECTED]>
This patch adds the "nbds_max" parameter to the nbd kernel module,
which limits the number of nbds allocated. Previously, all 128
entries were allocated unconditionally, which used to waste resources
and needlessly flood the hotplug system
On 2005-04-13T08:59:21, Lennart Sorensen <[EMAIL PROTECTED]> wrote:
> It is becoming harder and harder to find supported cards it seems.
> Finding a card with decent 2D drivers for X can still be done, but 3D is
> just not really an option it seems. Even 2D seems to be a problem on
> many cards
On 2005-03-09T18:36:37, Alex Aizman <[EMAIL PROTECTED]> wrote:
> Heartbeat is good for reliability, etc. WRT "getting paged-out" -
> non-deterministic (things depend on time), right?
Right, if we didn't get scheduled often enough for us to send our
heartbeat messages to the other peers, they'll
On 2005-03-08T22:25:29, Alex Aizman <[EMAIL PROTECTED]> wrote:
> There's (or at least was up until today) an ongoing discussion on our
> mailing list at http://groups-beta.google.com/group/open-iscsi. The
> short and long of it: the problem can be solved, and it will. Couple
> simple things we
On 2005-03-04T01:44:06, Junfeng Yang <[EMAIL PROTECTED]> wrote:
> > That would be a bug. Please send the e2fsck output.
>
> Here is the trace
>
> 1. file system is made with sbin/mkfs.ext2 -F -b 1024 /dev/hda9 60
> and mounted with -o sync,dirsync
>
> 1. operations FiSC did:
> creat(/mnt/sbd0/0001)
On 2005-03-02T15:23:49, Greg KH <[EMAIL PROTECTED]> wrote:
> > This could be improved: _All_ new features have to go through -mm first
> > for a period (of whatever length) / one cycle. 2.6.x only directly picks
> > up "obvious" bugfixes, and a select set of features which have ripened
> > in
On 2005-03-02T14:21:38, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> We'd still do the -rcX candidates as we go along in either case, so as a
> user you wouldn't even _need_ to know, but the numbering would be a rough
> guide to intentions. Ie I'd expect that distributions would always try to
>
On 2005-02-11T19:58:41, Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > +/* Code borrowed from dm-lsi-rdac by Mike Christie */
>
> Any reason that module isn't submitted?
No idea why.
> > + bio->bi_bdev = path->dev->bdev;
> > + bio->bi_sector = 0;
> > + bio->bi_private = path;
> > +
On 2005-01-21T17:12:30, Jan Kasprzak <[EMAIL PROTECTED]> wrote:
> Just FWIW, I've got the following crash when trying to boot a 2.6.11-rc1-bk9
> kernel on my dual opteron Fedora Core 3 box. I will try -bk8 now.
Attached is a likely candidate for a fix.
(It's been discussed on linux-raid already.)
On 2005-01-18T22:18:01, "Kiniger, Karl (GE Healthcare)" <[EMAIL PROTECTED]>
wrote:
> idea for enhancement of software raid 1:
>
> every time the raid determines that a sector cannot
> be read it could at least try to overwrite the bad area
> with good data from the other disk.
The idea is good
On 2001-06-26T15:31:04,
"SATHISH.J" <[EMAIL PROTECTED]> said:
> I would like to know how I can use gdb to debug some function in the
> kernel. Please help me out with this detail.
The easiest way would be user-mode-linux, hosted on sourceforge.net.
Sincerely,
Lars Marowsky-Brée
On 2001-05-24T10:45:25,
Tobias Ringstrom <[EMAIL PROTECTED]> said:
> > if (!printed_version++)
> > - printk(version);
> > + printk("%s", version);
> >
> > DMFE_DBUG(0, "dmfe_init_one()", 0);
> >
>
> Could you please explain the purpose of this change? To me it looks less efficient
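The diff above is the classic format-string fix: printk's first argument is a format, so a version string that happened to contain '%' would be misparsed, while `printk("%s", version)` treats it purely as data. The same hazard exists in shell's `printf` and makes a quick illustration (the version text here is made up):

```shell
version='dmfe 1.36.4 (100% tested)'   # hypothetical string containing '%'
printf '%s\n' "$version"              # safe: the string is data, not a format
# printf "$version\n"                 # unsafe: '%' would be parsed as a directive
```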
On 2001-05-19T16:25:47,
Daniel Phillips <[EMAIL PROTECTED]> said:
> How about:
>
> # mkpart /dev/sda /dev/mypartition -o size=1024k,type=swap
> # ls /dev/mypartition
> base  size  device  type
> # cat /dev/mypartition/size
> 1048576
> # cat /dev/mypartition/device
> /dev/sda
> #
On 2001-05-16T08:34:00,
Christoph Biardzki <[EMAIL PROTECTED]> said:
> I was investigating redundant path failover with FibreChannel disk devices
> during the last weeks. The idea is to use a second, redundant path to a
> storage device when the first one fails. Ideally one could also
On 2001-05-06T01:36:05,
Mike Castle <[EMAIL PROTECTED]> said:
> On Sun, May 06, 2001 at 10:12:17AM +0200, Lars Marowsky-Bree wrote:
> > You assign a new EXTRAVERSION to the new kernel you are building, and keep the
> > old kernel at the old name.
>
> Except that
On 2001-05-06T17:45:06,
Keith Owens <[EMAIL PROTECTED]> said:
> You already have a working kernel which you want to rename to use as a
> backup version. Changing EXTRAVERSION and recompiling builds a new
> kernel and adds uncertainty about whether the kernel still works - did
> you change
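For context on what EXTRAVERSION does: the kernel's top-level Makefile appends it to the version triple to form the release string (and hence the module directory name), which is why renaming via EXTRAVERSION implies a rebuild. A sketch of that composition, with hypothetical values:

```shell
# How the kernel's top-level Makefile composes the release string
# (values here are hypothetical; the real ones live in the Makefile).
VERSION=2; PATCHLEVEL=4; SUBLEVEL=4; EXTRAVERSION=-backup
KERNELRELEASE="${VERSION}.${PATCHLEVEL}.${SUBLEVEL}${EXTRAVERSION}"
echo "$KERNELRELEASE"       # → 2.4.4-backup
```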
On 2001-03-31T09:36:33,
James Lewis Nance <[EMAIL PROTECTED]> said:
> > > 4) What is the time frame of releasing 2.5.x-final (or 2.6.x) ?
> > wow that's jumping the gun a bit.
> But its easy to answer. It will come out about 1 year after whatever
> target date we initially set :-)
Sorry,