Re: Kernel dumps [was Re: possible changes from Panzura]

2013-07-10 Thread Bakul Shah
On Wed, 10 Jul 2013 14:50:19 PDT Jordan Hubbard j...@mail.turbofuzz.com wrote:
 
 On Jul 10, 2013, at 1:04 PM, asom...@gmail.com wrote:
 
  I don't doubt that it would be useful to have an emergency network
  stack.  But have you ever looked into debugging over firewire?
 
 My point was more that actually being able to debug a machine over the
 network is such a step up in terms of convenience/awesomeness that if
 anyone is thinking of putting any time and attention into this area at
 all, that's definitely the target to go for.

You have to use this just once to see how convenient it is!

For a previous company James Da Silva did this in 1997 by
adding a network console (IIRC in a day or two).  A new
ethernet type was used + a host specific ethernet multicast
address so you could connect from any machine on the same
ethernet segment.  Either as a remote console for the usual
console IO & ddb, or to run remote gdb.  Quite insecure but
that didn't matter as this was used in a test network.  There
was no emergency network stack; just a polling function added
to an ethernet driver since this had to work even when the
kernel was on the operating table under anaesthetic! No new
gdb hacks were necessary since the invoking program set things
up for it.

If I was doing this today, I'd probably still do the same and
make sure that the interface used for remote debugging is on
an isolated network.
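
A minimal sketch of that scheme, with hypothetical names (this is not
the original 1997 code): the debugger console polls the NIC directly,
so it keeps working while the kernel sits in ddb/remote gdb with
interrupts off.

struct netcons_softc {
	void *drv;				/* driver instance */
	int (*poll_rx)(void *drv, char *c);	/* returns 1 if a byte arrived */
	void (*poll_tx)(void *drv, char c);	/* transmit one byte */
};

/* Used in place of a UART getc by the console/remote-gdb layer. */
static int
netcons_getc(struct netcons_softc *sc)
{
	char c;

	while (sc->poll_rx(sc->drv, &c) == 0)
		;	/* busy-wait; nothing else is running anyway */
	return ((unsigned char)c);
}

static void
netcons_putc(struct netcons_softc *sc, char c)
{
	sc->poll_tx(sc->drv, c);
}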
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: close(2) while accept(2) is blocked

2013-03-30 Thread Bakul Shah
On Sat, 30 Mar 2013 09:14:34 PDT John-Mark Gurney j...@funkthat.com wrote:
 
 As someone else pointed out in this thread, if a userland program
 depends upon this behavior, it has a race condition in it...
 
 Thread 1                 Thread 2                  Thread 3
                          enters routine to read
 enters routine to close
 calls close(3)
                                                    open() returns 3
                          does read(3) for original fd
 
 How can the original threaded program ensure that thread 2 doesn't
 create a new fd in between?  So even if you use a lock, this won't
 help, because as far as I know, there is no enter read and unlock
 mutex call yet...

It is worse. Consider:

fd = open(file,...);
read(fd, ...);

No guarantee read() gets data from the same opened file!
Another thread could've come along, closed fd and pointed it
to another file. So nothing is safe. Might as well stop using
threads, right?!

We are talking about cooperative threads where you don't have
to assume the worst case.  Here not being notified on a close
event can complicate things. As an example, I have done
something like this in the past: a frontend process validates
TCP connections and then passes the valid connections on to
another process for actual service (via sendmsg() over a unix
domain socket). All the worker threads in the service process
can do a recvmsg() on the same fd and process whatever TCP
connection they get. Now what happens when the frontend
process is restarted for some reason?  All the worker threads
eventually need to reconnect to a new unix domain socket
posted by the new frontend process. You can handle this in
multiple ways but terminating all the blocking syscalls on the
now invalid fd is the simplest solution from a user
perspective.
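
A minimal sketch of the fd-passing step (a hypothetical helper, not
the original frontend code), using SCM_RIGHTS over the unix domain
socket:

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

static int
send_fd(int unix_fd, int tcp_fd)
{
	union { struct cmsghdr hdr; char buf[CMSG_SPACE(sizeof(int))]; } u;
	struct msghdr msg;
	struct cmsghdr *cmsg;
	char byte = 0;
	struct iovec iov = { .iov_base = &byte, .iov_len = 1 };

	memset(&msg, 0, sizeof(msg));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = u.buf;
	msg.msg_controllen = sizeof(u.buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &tcp_fd, sizeof(int));

	/* Workers in the service process recvmsg() on their end to pick this up. */
	return (sendmsg(unix_fd, &msg, 0) < 0 ? -1 : 0);
}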

 I decided long ago that this is only solvable by proper use of locking
 and ensuring that if you call close (the syscall), that you do not have
 any other thread that may use the fd.  It's the close routine's (not
 syscall) function to make sure it locks out other threads and all other
 are out of the code path that will use the fd before it calls close..

If you lock before close(), you have to lock before every
other syscall on that fd. That complicates userland coding and
slows things down when this can be handled more simply in the
kernel.

Another use case is where N worker threads all accept() on the
same fd. Single threading them with a lock defeats any
performance gain.
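
Something like this (illustrative names only); each of the N worker
threads runs this loop with no lock around the syscall:

#include <sys/socket.h>
#include <unistd.h>

static int listen_fd;	/* assumed: already bound and listening */

static void *
worker(void *arg)
{
	(void)arg;
	for (;;) {
		int cfd = accept(listen_fd, NULL, NULL);
		if (cfd < 0)
			break;	/* with the proposed semantics, a close() of
				   listen_fd elsewhere wakes us up here */
		/* ... serve the connection ... */
		close(cfd);
	}
	return (NULL);
}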

 If someone could describe how this new eject a person from read could
 be done in a race safe way, then I'd say go ahead w/ it...  Otherwise
 we're just moving the race around, and letting people think that they
 have solved the problem when they haven't...

In general it just makes sense to notify everyone waiting on
something that the situation has changed and that otherwise
they would be waiting forever.  The kernel should already have
the necessary information about which threads are sleeping on
a fd. Wake them all up. On being awakened they see that the fd
is no longer valid and all return with a count of data already
read, or -1 and EBADF. Doing the equivalent in userland is
complicated.

Carl has pointed out how BSD and Linux have required a
workaround compared to Solaris and OS X (in Java and, IIRC,
the Go runtime). Seems like we have a number of use cases and
this is something worth fixing.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: close(2) while accept(2) is blocked

2013-03-29 Thread Bakul Shah
On Fri, 29 Mar 2013 14:30:59 PDT Carl Shapiro carl.shap...@gmail.com wrote:
 
 In other operating systems, such as Solaris and MacOS X, closing the
 descriptor causes blocked system calls to return with an error.

What happens if you select() on a socket and another thread
closes this socket?  Ideally select() should return (with
EINTR?) so that the blocking thread can take some cleanup
action.  And if you do that, the blocking accept() case is not
really different.

There is no point in *not* telling blocking threads that the
descriptor they're waiting on is now EBADF and nothing is
going to happen.

 It is not obvious whether there is any benefit to having the current
 blocking behaviour. 

This may need some new kernel code but IMHO this is worth fixing.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: using FreeBSD to create a completely new OS

2012-12-10 Thread Bakul Shah
On Sun, 09 Dec 2012 23:48:12 EST Aryeh Friedman aryeh.fried...@gmail.com  
wrote:
 For personal hobby reasons I want to write an OS completely from
 scratch (due to some aspects of the design no existing OS is a
 suitable starting place)... what I mean is I want to start with the
 MBR (boot0) and go on from there... I only have one *REAL* machine to
 work with which means I need to work with something like
 emulators/virtualbox-ose... I also want to do as many automated tests
 as possible (for example seeing if the installer copied the MBR [and
 later other stuff] correctly to the virtual HDD) for this reason I
 have a few questions on vb (or perhaps QEMU if not possible in vb):
 
 1. Can it be scripted?
 2. Is there any documentation on the various virtual HDD formats and
 such (that way I can check the physical drive and not by indirect
 query)?
 
 Also can people give me some idea of a good general
 development/testing framework the one I have in mind so far is:

You may wish to check out 
http://wiki.osdev.org/Expanded_Main_Page
http://wiki.osdev.org/Projects
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: lib for working with graphs

2012-11-29 Thread Bakul Shah
On Nov 29, 2012, at 7:12 AM, Andriy Gapon a...@freebsd.org wrote:

 on 28/11/2012 18:36 Mehmet Erol Sanliturk said the following:
 
 
 On Wed, Nov 28, 2012 at 6:37 AM, Andriy Gapon a...@freebsd.org
 mailto:a...@freebsd.org wrote:
 
on 28/11/2012 16:31 David Wolfskill said the following:
 On Wed, Nov 28, 2012 at 04:20:28PM +0200, Andriy Gapon wrote:
 
 Does anyone know a light-weight BSD-licensed (or analogous) library / 
 piece of
 code for doing useful things with graphs?
 Thank you.
 
 
 Errr graphs is fairly ambiguous, and things with graphs covers a
 very wide range of activities.
 
Graphs as in vertices, edges, etc :)
And things like graph basics: BFS, DFS, connected components, topological
sort, etc
 
 ports/math/R may be useful for this -- I use it to generate graphs (and
 perform statistical analyses).
 
 ports/graphics/plotmtv is possibly of some interest, as well, as it
 allows a certain level of interactivity (though the code hasn't been
 updated in quite some time -- but it still works).
 
 If neither of those suits your intent, perhaps you could expand a bit on
 what that intent is?
 
And, big oops sorry, forgot one very important detail - it has to be C.
 
 http://en.wikipedia.org/wiki/JUNG
 http://en.wikipedia.org/wiki/Xfig
 http://en.wikipedia.org/wiki/SVG-edit
 
 
 http://en.wikipedia.org/wiki/Category:Graph_drawing_software
 http://en.wikipedia.org/wiki/Comparison_of_vector_graphics_editors
 http://en.wikipedia.org/wiki/Category:Free_diagramming_software
 
 
 Thank you very much .
 
 Thank you, but all of these appear to be off-mark.
 They all are end-user oriented applications for drawing/editing graphs, etc.
 While I need a light-weight library for embedding graph analysis.

What about Prof. Knuth's Stanford GraphBase library? It is in the public
domain.  And there is a whole book about it!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Replace bcopy() to update ether_addr

2012-08-20 Thread Bakul Shah
On Mon, 20 Aug 2012 13:05:51 MDT Warner Losh i...@bsdimp.com  wrote:
 
 On Aug 20, 2012, at 10:48 AM, Wojciech Puchar wrote:
 
  #if defined(__i386__) || defined(__amd64__)
	*dst = *src;
  #else
	bcopy(src, dst, ETHER_ADDR_LEN);
  #else
  short *tmp1=((*short)src),*tmp2=((*short)dst);
  *tmp2=*tmp1; *(tmp2+1)=*(tmp1+1); *(tmp2+2)=*(tmp1+2);
 
  or use ++.
 
  i think it is always aligned to 2 bytes and this should produce usable
 code on any CPU? should be 6 instructions on MIPS and PPC IMHO.
 
 We should tag it as __aligned(2) then, no?  If so, then the compiler
 should generate the code you posted.

Doesn't gcc inline memcpy these days? 
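
A minimal sketch of the idea being discussed (a hypothetical helper,
not the committed FreeBSD change): declare the address 2-byte aligned
and let the compiler inline the copy, typically as three 16-bit
loads/stores:

#include <stdint.h>
#include <string.h>

#define ETHER_ADDR_LEN	6

struct ether_addr {
	uint8_t octet[ETHER_ADDR_LEN];
} __attribute__((aligned(2)));

static inline void
ether_addr_copy(struct ether_addr *dst, const struct ether_addr *src)
{
	/* With the alignment known, gcc/clang can expand this memcpy inline. */
	memcpy(dst, src, ETHER_ADDR_LEN);
}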
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Graphical Terminal Environment

2012-03-07 Thread Bakul Shah
On Tue, 06 Mar 2012 14:08:51 EST Brandon Falk bfalk_...@brandonfa.lk  wrote:
You seem to understand exactly what I want. Just small font terminals on all
screens, and I was actually thinking `screen` would do the trick for the
splitting/management of them. As for stripping down X, I might do so as a proof
of concept, but in the end I want to develop my own for my own learning.
 
When I mention lines, circles, etc I was thinking moreso at the very low level
of fonts being drawn by lines and dots (although I would like to branch it
eventually to support 2d graphics where people could maybe make some 2d games,
but keep the high-res terminal on the side to keep it minimal). I also may want
to draw some lines to border terminal windows (screen would eliminate this
obviously).

You might want to look at /dev/draw of plan9. And rio, as
someone else also suggested.  /sys/src/9/port/devdraw.c on
plan9 is only about 2200 lines. You may also wish to read Rob
Pike's paper on 8 1/2 and man pages for draw(2), draw(3),
graphics(2) and rio(1) @ cat-v.org. See rio presentation @
http://herpolhode.com/rob/lec5.pdf as it lays out window
system design issues very nicely.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: dwarf2 reader

2011-06-21 Thread Bakul Shah
On Mon, 13 Jun 2011 10:05:15 EDT Ewart Tempest etemp...@jnpr.net  wrote:
 I have developed some flight recording capability in the JUNOS FreeBSD 
 based kernel, with the flight recorded data being captured in binary 
 form for performance. All the subsequent formatting and display of this 
 data is performed by a user-space application. I would like to reduce 
 the amount of time that designers spend writing formatters to display 
 their flight recorded data. kgdb is perfectly capable of displaying all 
 kernel resident data structures, and  the manner in which it does so is 
 perfectly acceptable for flight recording purposes. The code that kgdb 
 uses to support this framework is difficult to break out - does anyone 
 know of a dwarf2 reader s/w implementation that is more re-usable?

In addition to lldb, there is the Path_DB debugger from
Pathscale that has a dwarf reader. Path_DB seems fairly
portable but I don't know how hard it would be to break out
the dwarf2 code, about 2K+ lines of C++. Open sourced under CDDL.

git://github.com/path64/debugger.git
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: [RFC] Replacing our regex implementation

2011-05-09 Thread Bakul Shah
On Sun, 08 May 2011 21:35:04 CDT Zhihao Yuan lich...@gmail.com  wrote:
 1. This lib accepts many popular grammars (PCRE, POSIX, vim, etc.),
 but it does not allow you to change the mode.
 http://code.google.com/p/re2/source/browse/re2/re2.h

The mode is decided when an RE2 object is instantiated so this
is ok. You can certainly instantiate multiple objects with
different options if so desired.

 2. It focuses on speed and features, not stability and standardization.

Look at the open issues. Seems stable enough to me. re2 has a
posix only mode. It also does unicode.

 3. It uses C++. We seldom accepts C++ code in base system, and does
 not accept it in libc.

This is the show stopper.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: [RFC] Replacing our regex implementation

2011-05-09 Thread Bakul Shah
On Mon, 09 May 2011 17:51:46 EDT David Schultz d...@freebsd.org  wrote:
 On Sun, May 08, 2011, Bakul Shah wrote:
  On Sun, 08 May 2011 21:35:04 CDT Zhihao Yuan lich...@gmail.com  wrote:
   1. This lib accepts many popular grammars (PCRE, POSIX, vim, etc.),
   but it does not allow you to change the mode.
   http://code.google.com/p/re2/source/browse/re2/re2.h
  
  The mode is decided when an RE2 object is instantiated so this
  is ok. You can certainly instantiate multiple objects with
  different options if so desired.
  
   2. It focuses on speed and features, not stability and standardization.
  
  Look at the open issues. Seems stable enough to me. re2 has a
  posix only mode. It also does unicode.

s/posix only mode/posix only mode as well/

  
   3. It uses C++. We seldom accepts C++ code in base system, and does
   not accept it in libc.
  
  This is the show stopper.
 
 Use of C++ is a clear show-stopper if it introduces new runtime
 requirements, e.g., dependencies on STL or exceptions.  Aside from
 that, however, I can't think of any fundamental, technical reasons
 why a component of libc couldn't be written in C++.  (Perhaps the
 toolchain maintainers could name some, and they'd be the best
 authority on the matter.)  You can expect some resistance
 regardless, however, so make sure the technical merits of RE2 are
 worth the trouble.

Ok, I just verified there are no additional runtime
requirements by running a simple test: I added a C wrapper
around an RE2 C++ call, compiled it with c++, then compiled
the client C code with cc, and linked everything with cc.
This works (tested on x86_64, under 8.1).
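
The wrapper was along these lines (hypothetical names, not the actual
test program); the C client just declares
int re2_full_match(const char *, const char *) and is compiled with cc:

// re2wrap.cc -- built with c++, exposes a C-callable entry point
#include <re2/re2.h>

extern "C" int
re2_full_match(const char *text, const char *pattern)
{
	// true iff the whole text matches the pattern
	return RE2::FullMatch(text, pattern) ? 1 : 0;
}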

I do think RE2 is very well done (see the swtch.com/~rsc/regexp
articles); it is actively maintained and has a battery of
pretty exhaustive tests.  Seems TRE's author also likes re2:
http://hackerboss.com/is-your-regex-matcher-up-to-snuff/

So if we want to consider this, it is a real possibility.

 IIRC, some of the prior discussions on using more C++ in the base
 system got derailed by tangents on multiple inheritance, operator
 overloading, misfeatures of STL, and what subset of C++ ought to
 be considered kosher in FreeBSD.  You don't have to get involved
 in any of that because you'd only be proposing to import a
 self-contained third-party library.

Indeed; we would just use it via a C wrapper API.  But I can
see someone thinking this is the camel's nose in the tent :-)
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: [RFC] Replacing our regex implementation

2011-05-08 Thread Bakul Shah
As per the following URLs re2 is much faster than TRE (on the
benchmarks they ran):

http://lh3lh3.users.sourceforge.net/reb.shtml
http://sljit.sourceforge.net/regex_perf.html

re2 is in C++ & has a PCRE API, while TRE is in C & has a
POSIX API.  Both have BSD copyright. Is it worth considering
making re2 posix compliant?
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: [RFC] Replacing our regex implementation

2011-05-08 Thread Bakul Shah
On Mon, 09 May 2011 02:37:10 BST Gabor Kovesdan ga...@kovesdan.org  wrote:
 Em 09-05-2011 02:17, Bakul Shah escreveu:
  As per the following URLs re2 is much faster than TRE (on the
  benchmarks they ran):
 
  http://lh3lh3.users.sourceforge.net/reb.shtml
  http://sljit.sourceforge.net/regex_perf.html
 
  re2 is in C++  has a PCRE API, while TRE is in C  has a
  POSIX API.  Both have BSD copyright. Is it worth considering
  making re2 posix compliant?
 Is it wchar-clean and is it actively maintained? C++ is quite 
 anticipated for the base system and I'm not very skilled in it so atm I 
 couldn't promise to use re2 instead of TRE. And anyway, can C++ go into 
 libc? According to POSIX, the regex code has to be there. But let's see 
 what others say... If we happen to use re2 later, my extensions that I 
 talked about in points 2, and 3, would still be useful.
 
 Anyway, according to some earlier vague measures, TRE seems to be slower 
 in small matching tasks but scales well. These tests seem to compare 
 only short runs with the same regex. It should be seem how they compare 
 e.g. if you grep the whole ports tree with the same pattern. If the 
 matching scales well once the pattern is compiled, that's more important 
 than the overall result for such short tasks, imho.

re2 is certainly maintained. Don't know about wchar cleanliness.
See 
http://code.google.com/p/re2/
Also check out Russ Cox's excellent articles on implementing it
http://swtch.com/~rsc/regexp/
and this:

http://google-opensource.blogspot.com/2010/03/re2-principled-approach-to-regular.html

C++ may be an impediment for it to go into libc but one can
certainly put a C interface on a C++ library.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: [RFC] Replacing our regex implementation

2011-05-08 Thread Bakul Shah
On Mon, 09 May 2011 08:30:57 +0400 Lev Serebryakov l...@freebsd.org  wrote:
 Hello, Bakul.
 You wrote 9 May 2011, 5:17:09:
 
  As per the following URLs re2 is much faster than TRE (on the
  benchmarks they ran):
 
  http://lh3lh3.users.sourceforge.net/reb.shtml
  http://sljit.sourceforge.net/regex_perf.html
   re2 is much faster at price of memory. I don't remember details now,
 but I've found (simple) situations when re2 consumes a HUGE amount of
 memory (read: hundreds of megabytes). It work faster than tre, yes. If
 you have this memory to RE engine alone.

As per http://swtch.com/~rsc/regexp/regexp3.html RE2 requires
about 10 KB per regexp, in contrast to PCRE's half a KB.  This
is not excessive in this day and age. But 100s of megabytes
sounds very strange & I'd appreciate a reference to an
actual example (and I am sure so would the author of re2).

But I do not want to defend re2 here. My intent was to just
make sure re2 was at least considered.  Mainly because it was
actually quite surprising to see TRE is 10 to 45 times slower
than re2!

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: man 3 getopt char * const argv[] - is const wrong ?

2011-02-13 Thread Bakul Shah
On Sun, 13 Feb 2011 13:20:58 +0100 Julian H. Stacey j...@berklix.com  wrote:
 Hi Hackers
 Ref.: man 3 getopt
   int getopt(int argc, char * const argv[], const char *optstring);
 
 Ref.: KR 2nd Ed P.211 last indent, 2nd sentence
   The purpose of const is to announce objects that may be
   placed in read-only memory, and perhaps to increase opportunities
   for optimization
 
 optstring is obviously const, 
 but I don't see that argv can claim to be const ?
 
 Did some ISO standard redefine const ? If so URL please ?
 (I learnt my C from KR #1 decades ago :-)

Not quite what you asked for but this may help in making
sense of const

$ cdecl # from /usr/ports/devel/cdecl
explain const char* x
declare x as pointer to const char
explain char * const x
declare x as const pointer to char
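
A small illustration (not from the thread) of what each placement
allows; getopt()'s "char * const argv[]" only promises that the
pointer elements of argv won't be replaced, not that the strings they
point to are read-only:

void
example(char *const argv[], const char *optstring)
{
	argv[0][0] = 'x';	/* ok: the characters are not const */
#if 0
	argv[0] = "prog";	/* error: the pointer elements are const */
	optstring[0] = 'x';	/* error: the characters are const */
#endif
	(void)optstring;
}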
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: find(1): Is this a bug or not?

2010-11-30 Thread Bakul Shah
On Tue, 30 Nov 2010 12:33:54 +0100 Dag-Erling Smørgrav d...@des.no wrote:
 Bakul Shah ba...@bitblocks.com writes:
  Index: function.c
  --- function.c  (revision 212707)
  +++ function.c  (working copy)
  @@ -560,7 +560,7 @@
  empty = 1;
   dir = opendir(entry->fts_accpath);
   if (dir == NULL)
   -   err(1, "%s", entry->fts_accpath);
   +   return 0;
   for (dp = readdir(dir); dp; dp = readdir(dir))
   if (dp->d_name[0] != '.' ||
   (dp->d_name[1] != '\0' &&
 
 You should replace the err() call with a warn() call instead of removing
 it outright.

That would print the err msg twice as opendir (or something)
already seems to report the error. Try it!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: find(1): Is this a bug or not?

2010-11-29 Thread Bakul Shah
On Mon, 29 Nov 2010 12:39:43 PST Matthew Jacob m...@feral.com  wrote:
 can you report out the actual command line you're using and what release 
 it's from?
 
 On 11/29/2010 12:08 PM, Denise H. G. wrote:
  Hi,
 
  I found that, while searching for empty directories, find(1) will not
  continue if it encounters a dir it can't enter (e.g. no privilege). I
  don't know if it's so designed... I've checked NetBSD and OpenBSD's
  implementations (almost identical to that of FreeBSD's). And they behave
  the same way as FreeBSD's find(1) does under the circumstance.
 
  I'm wondering if this is a bug or not.

This looks like a long standing bug:

% mkdir -p a/{b,c}/d/e/f
% find a -empty
% chmod 000 a/b/d/e/f
% find a -empty

The fix:

% cd /usr/src/usr.bin/find
% svn diff
Index: function.c
===
--- function.c  (revision 212707)
+++ function.c  (working copy)
@@ -560,7 +560,7 @@
empty = 1;
dir = opendir(entry->fts_accpath);
if (dir == NULL)
-   err(1, "%s", entry->fts_accpath);
+   return 0;
for (dp = readdir(dir); dp; dp = readdir(dir))
if (dp->d_name[0] != '.' ||
(dp->d_name[1] != '\0' &&

___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Support for WD Advanced Format disks

2010-08-10 Thread Bakul Shah
On Tue, 10 Aug 2010 19:44:48 +0200 Dag-Erling Smørgrav d...@des.no wrote:
 I'm looking into a clean, permanent solution for WD Green drives that
 use 4096-byte physical sectors.  To summarize the information I've
 collected so far:
 
  - There are several types of WD Green disks.  I am primarily interested
in the 1+ TB models: EARS and EADS.
 
  - According to WD's own documentation, EARS disks are Advanced Format
while EADS disks are not; furthermore, EARS disks have 64 MB cache
while EADS disks have only 32 MB.

http://www.wdc.com/en/library/2579-001028.pdf gives an
explanation of what the drive letters mean but they don't
talk about 4k sector size.
 
  - There is at least one source that provides model and serial numbers
for two EADS disks that seem have the performance characteristics of
an Advanced Format disk.  One of them actually reports 4096-byte
sectors, the other does not.  I am not entirely certain that source
is reliable.

See below.

  - Advanced Format disks should have a label that clearly describes them
as such:
 
 http://media.bestofmicro.com/U/O/238272/original/Western-Digital-WD10EARS-top.jpg

From http://www.wdc.com/en/products/advancedformat/

What models utilize Advanced Format technology?

Some models of the WD Caviar Green and WD Scorpio Blue
product families are built using Advanced Format technology.
Over time more models and capacities will be added. WD drives
with Advanced Format technology include special installation
information on the drive label so be sure to read the label
on your drive before installing it.

So it seems that only the label distinguishes 4k sector drives.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Support for WD Advanced Format disks

2010-08-10 Thread Bakul Shah
After poking around some, it seems ATA/ATAPI-7 Identify
Device word 106 bit 13 is set to 1 and bits 0-3 are set to 3
(for 2^3 or 8 LBAs per sector) for a 4KB sector size (pin 7-8
jumper on WD AF disks presumably changes this setting to
0,0).  See page 121 of Atapi-7 volume 1 (google for
ata-atapi-7.pdf).

Hopefully this helps in whatever `clean solution' you are
looking for?
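
A hedged sketch (not FreeBSD driver code) of decoding word 106 as
described above, including the standard validity check in bits 15:14:

#include <stdint.h>

static uint32_t
ata_physical_sector_size(uint16_t word106, uint32_t logical_size)
{
	/* Bit 15 clear and bit 14 set mean the word carries valid data. */
	if ((word106 & 0xc000) != 0x4000)
		return (logical_size);
	/* Bit 13: multiple logical sectors per physical sector;
	   bits 3:0: log2 of logical sectors per physical sector. */
	if (word106 & (1 << 13))
		return (logical_size << (word106 & 0x000f));
	return (logical_size);
}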
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: an alternative to powerpoint

2010-07-13 Thread Bakul Shah
On Tue, 13 Jul 2010 10:21:40 +0200 Luigi Rizzo ri...@iet.unipi.it  wrote:
 latex based solutions are great when it comes to show formulas.
 I normally use prosper or similar things.
 But placing figures is a bit of a nightmare, though, and at least
 for slides there is a lot of visual clutter in the latex formatting
 (of course one could write a preprocessor from plain text to latex/prosper).

Basically a unified work flow with latex is what appeals to
me.  But agreed on the visual clutter.  A plain text to
prosper preprocessor is a great idea!

Very nice work, BTW! And I can already think of new uses for it!

Ed Schouten asks:

 Why not use the `beamer' class?
http://bitbucket.org/rivanvx/beamer/wiki/Home

I didn't know about it.  I will certainly check it out now.
Thanks!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: an alternative to powerpoint

2010-07-12 Thread Bakul Shah
On Tue, 13 Jul 2010 06:15:14 +0200 Luigi Rizzo ri...@iet.unipi.it  wrote:
 Maybe you all love powerpoint for presentations, but sometimes
 one just needs to put together a few slides, perhaps a few bullets
 or images grabbed around the net, so i was wondering how hard
 would it be to do something that accepts a plain text file
 as input (without a ton of formatting) and lets you do a decent
 slide show, and supports editing the slides on the fly within
 the browser.
 
 Well, it's not too hard:
 
   http://info.iet.unipi.it/~luigi/sttp/
 
 just 400 lines of javascript and 100 lines of css, plus
 your human-readable text.
 
 Have fun, it would be great if you could report how it works
 on fancy devices (iphone, ipad, androids...) as my testing
 platforms are limited to Firefox, IE and chrome (which unfortunately
 cannot save the edited file)

Seems to work fine in Safari & Opera.

Your note inspired me to search the 'Net!  Since I prefer
\latex{goop} to <html>goop</html> I went looking for a latex
class and found 'Prosper'.  Looks like it can produce some
really nice slides! See the examples here:

http://amath.colorado.edu/documentation/LaTeX/prosper/

And here is a tutorial:

http://www.math.umbc.edu/~rouben/prosper/

And of course, it is already in /usr/ports/textproc/prosper!
I will have to give it a try as I was getting tired of
fiddling around in Keynote (and I don't like powerpoint).

[Hope you don't mind my mentioning Prosper!]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: head behaviour

2010-06-06 Thread Bakul Shah
On Mon, 07 Jun 2010 00:13:28 +0200 Dag-Erling Smørgrav d...@des.no wrote:
 
 The reason why head(1) doesn't work as expected is that it uses buffered
 I/O with a fairly large buffer, so it consumes more than it needs.  The
 only way to make it behave as the OP expected is to use unbuffered I/O
 and never read more bytes than the number of lines left, since the worst
 case is input consisting entirely of empty lines.  We could add an
 option to do just that, but the same effect can be achieved more
 portably with read(1) loops:

Except read doesn't do it quite right:

$ ps | (read a; echo $a ; grep zsh)
PID  TT  STAT  TIME COMMAND
 1196  p0  Is 0:02.23 -zsh (zsh)
 1209  p1  Is 0:00.35 -zsh (zsh)

Alignment of column titles is messed up. Using egrep we can
get the right alignment but egrep also shows up.

$ ps | egrep 'TIME|zsh'
  PID  TT  STAT  TIME COMMAND
 1196  p0  Is 0:02.23 -zsh (zsh)
 1209  p1  Is 0:00.35 -zsh (zsh)
71945  p2  DL+0:00.01 egrep TIME|zsh

A small point but it is not trivial to get it exactly right.
head -n directly expresses what one wants.

But there is a deeper point.

Several people pointed out alternatives for the examples
given but in general you can't use a single command to
replace a sequence of commands where each operates on part of
the shared input in a different way.

The reason we can't do this is buffering for efficiency.
Usually there is no further use for the buffered but
unconsumed input & it can be safely thrown away. So this is
almost always the right thing to do but not when there *is*
further use for the unconsumed input.  Some programs already
do the right thing (dd, for instance, as you pointed out).
Some other commands do give you this option in a limited way.
man grep & you will find:

   -m NUM, --max-count=NUM
  Stop reading a file after NUM matching lines.  If the  input  is
  standard  input  from a regular file, and NUM matching lines are
  output, grep ensures that the standard input  is  positioned  to
  just  after the last matching line before exiting, regardless of
  the presence of trailing context lines.  This enables a  calling
  process  to resume a search.

So for instance

$ < /usr/share/dict/words (grep -m 1 ''; grep -m 1 '')
A
a

But pipe the file in and see what you get:

$ cat /usr/share/dict/words | (grep -m 1 ''; grep -m 1 '') 
A
nterectasia

Grep does the right thing for files but not pipes!  Now I do
understand *why* this happens but still, it is annoying.  So
I believe there is value in providing an option to read *as
much as needed* but not more.  It will be slower but will
handle the cases we are discussing.  This will enhance
*composability* -- supposedly part of the unix philosophy.
The slow-but-read-just-as-much-as-needed option would be used
when you need a certain kind of composability and there is no
other way.  And yes, now I do think this is useful not just
for head but also for any other program that quits before
reading to the end!

[cc'ed Rob in case he wishes to chime in]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


head behaviour

2010-06-05 Thread Bakul Shah
Consider:

$ yes | cat -n | (read a; echo $a; head -1)
1   y
 2  y

$ yes | cat -n | (head -1; read a; echo $a)
 1  y
456 y

As you can see, head reads far more than it should.  This is
fine most of the time but often it results in surprising
output:

# print ps header and all lines with sh in it
$ ps|(head -1; grep sh)
  PID  TT  STAT  TIME COMMAND

# print first and last two lines
$ look xa | (head -2; tail -2)
xanthaline
xanthamic

Not quite what you expected, right?

Yes, you can use read and echo N times but this is not
as convenient as using head:

$ look xa | (read a; echo $a; read a; echo $a; tail -2)
xanthaline
xanthamic
xarque
Xaverian

The fix is to make sure head reads no more than $N bytes
where $N is the number of *remaining* lines to be read.
Yes this slows head down some but makes it more useful.
[Ideally all commands that quit after partially reading
their input ought to behave like this but that would slow
down their common use far too much]
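
One simple way to get that effect (a hedged sketch, not the actual
head(1) source): read a byte at a time so head never consumes input
beyond the last line it prints, leaving the rest for the next command
in the pipeline.

#include <unistd.h>

static int
head_lines(int fd, int nlines)
{
	char c;
	ssize_t n;

	while (nlines > 0) {
		n = read(fd, &c, 1);	/* unbuffered, one byte at a time */
		if (n <= 0)
			return (n == 0 ? 0 : -1);	/* EOF or error */
		if (write(STDOUT_FILENO, &c, 1) != 1)
			return (-1);
		if (c == '\n')
			nlines--;
	}
	return (0);
}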

Comments?

Thanks to Rob Warnock for pointing out the head problem.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: head behaviour

2010-06-05 Thread Bakul Shah
On Sat, 05 Jun 2010 13:32:08 PDT Doug Barton do...@freebsd.org  wrote:
 On 06/05/10 13:12, Bakul Shah wrote:
  Consider:
 
  $ yes | cat -n | (read a; echo $a; head -1)
  1   y
2 y
 
  $ yes | cat -n | (head -1; read a; echo $a)
1 y
  456 y
 
 It's not at all clear to me what you are trying to accomplish here. If 
 what you want is to read only the first line of the output of yes, then 
 what you'd want to do is:
 
 yes | cat -n | head -1 | (read a; echo $a)
 1 y
 
  As you can see, head reads far more than it should.  This is
  fine most of the time but often it results in surprising
  output:
 
  # print ps header and all lines with sh in it
  $ ps|(head -1; grep sh)
 PID  TT  STAT  TIME COMMAND
 
 I don't understand why you think this would work. There is no input to 
 the grep command. The only reason it exits at all is that you are 
 executing in a subshell.
 
  # print first and last two lines
  $ look xa | (head -2; tail -2)
  xanthaline
  xanthamic
 
 Same problem here. There is no input to the tail command.

In general this is not true.  Without running the following
can you guess its output?

$ look '' | (head -2; head -2)

Will it produce
A
a
or
A
a
aa
aal
or
A
a
sive
abrastol
or something else?

Yes, we can always find a work around for a given case but
the issue is that head buffers up more than it needs to.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: head behaviour

2010-06-05 Thread Bakul Shah
On Sat, 05 Jun 2010 14:02:16 PDT Doug Barton do...@freebsd.org  wrote:
 On 06/05/10 13:48, Bakul Shah wrote:
  Without running the following can you guess its output?
 
  $ look '' | (head -2; head -2)
 
 Again, it's not clear to me what you expect is going to happen with the 
 second 'head -2' there. I agree that the actual output of your example 
 is wacky and unexpected, but what I'm trying to get you say is what YOU 
 think should happen. The examples that you pasted in your previous post 
 did not and could not do what you said you wanted them to do, so I don't 
 quite understand what the bug is.

There is no bug per se. What I am saying that it would be
*less surprising* if

$ look '' | (head -2; head -2)

behaved the same as

$ look '' | head -4

[And yes, I would use head -4 if I wanted four lines but the
example was to illustrate the issue that head buffers more
than it needs to].

It would be less surprising and more useful if

$ ps | (head -1; grep ssh)

showed

PID  TT  STAT  TIME COMMAND
all lines with ssh in it

The change in head behaviour I am suggesting wouldn't break
anything that already works but would make it more useful for
what you call 'wacky command lines'!

 Put more simply, if you generate wacky command lines you should not be 
 surprised when they produce wacky results. :)

I didn't realize that use of ;(|) constitutes wackiness :-)
They are simply exercising the power of shell!

We have used these commands for so long that we take them for
granted and we learn to avoid their use in such ways.  When
Rob Warnock first mentioned this, my initial reaction was the
same as yours. But thinking more about it, I felt head can be
made more useful with this change and felt it was worth
bringing it to people's attention. But we can let it rest.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: head behaviour

2010-06-05 Thread Bakul Shah
On Sat, 05 Jun 2010 17:02:42 EDT Mike Meyer m...@mired.org  wrote:
 As a general rule, programs don't expect to share their input with
 other programs, nor do they make guarantees about what is and isn't
 read from that input under those conditions. I'd say that shell
 scripts that depend on what some command does with it's unprocessed
 input are buggy.

Fair enough. Thanks!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Announcing PathDB

2010-05-30 Thread Bakul Shah
[Added -hackers as this may be of some interest to others.
Hope you don't mind]

On Sun, 30 May 2010 01:27:12 +0700 "C. Bergström" cbergst...@pathscale.com wrote:

 ps.  Tell me what you need to make it interesting and we'll try to make 
 it happen..

Ok, here are some interesting ideas!

* Add a ups like interface and I will be very happy!
http://ups.sourceforce.net/

  Supposedly the following is needed to make it work on Linux.

cvs -d:pserver:anonym...@ups.cvs.sourceforge.net:/cvsroot/ups login 
cvs -z3 -d:pserver:anonym...@ups.cvs.sourceforge.net:/cvsroot/ups co -P ups
cd ups
./configure --enable-longlong

  I haven't tried this on linux but with a couple of patches
  it builds and runs fine on freebsd-i386.

  It has a built in C interpreter which is very handy; you
  can add C code at breakpoints for conditional bkpts or
  patch a variable etc.

  But the GUI is the best part  -- I won't try explaining it,
  you have to experience it! Perhaps it can be used somehow?

* multi-{thread,core,process,program,machine} debugging. A
  GUI can be very useful here as you can show state of each
  thread in a separate window, pop open a new window as
  threads or processes get created etc. Basically debugging
  distributed programming support!

* A debugger language like say plan9 Acid's. With it for
  instance you can implement code coverage. See section 15
  http://www.vitanuova.com/inferno/papers/acidpaper.html for
  the code coverage trick.  Google debugging with acid to
  see more stuff. ups's c interpreter can be considered a
  debugger language. There is a good paper on a dataflow
  language for debugging from Brown U.

http://www.cs.brown.edu/~sk/Publications/Papers/Published/mcskr-dataflow-lang-script-debug-journal/

* Better integration with testing. There are IDEs that
  integrate debugging with code development but I am not
  aware of any that integrates it well with testing. Testing
  is still a batch process. When a test fails, I want to dive
  right in, figure out what went wrong, fix it and rerun or
  continue! I admit I have the vaguest idea of even what this
  means.  The dataflow lang paper refed may be relevant as it
  talks about automatic assertion checking etc.

* There is a lot that can be done to improve debugging GUIs.
  For instance I'd love to see 3D use. Not for eye-candy but
  for better visualization. Things like as you zoom in, you
  see more details (zoom in on a function name and enter
  function body -- very Fantastic Voyage-ish (1966 movie) but
  for code!), control flow shows up as color change, the more
  a code path is visited the more it lights up or gets
  fatter, IPC is shown as a flash from one thread to another
  and so on. IIRC I have seen the color change idea in some
  verilog code coverage tools.

-- bakul
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Announcing PathDB

2010-05-30 Thread Bakul Shah
On Mon, 31 May 2010 00:34:13 +0700 "C. Bergström" cbergst...@pathscale.com wrote:
 Bakul Shah wrote:
  [Added -hackers as this may be of some interest to others.
  Hope you don't mind]

 I don't mind at all..
  On Sun, 30 May 2010 01:27:12 +0700 "C. Bergström" cbergst...@pathscale.com wrote:
 

  ps.  Tell me what you need to make it interesting and we'll try to make 
  it happen..
  
 
  Ok, here are some interesting ideas!
 
  * Add a ups like interface and I will be very happy!
  http://ups.sourceforce.net/

 (I think you meant http://ups.sourceforge.net/ )

Yes. I must've been thinking of Star Wars!

 ups is ugly based on the screenshoot on the homepage and it would be 
 really cool if it had an ncurses based version.. (maybe it does?)

Its visual interface is sparse (like twm or xfig) and
reflects the era it was designed in.  It is very mouse driven
so ncurses wouldn't make sense.  Its debugging capabilities
are outstanding but you have to use it to see that.

 Very nice feedback!

Thanks. Hope it fired a few greycells!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: GSoC: BSD text tools

2010-05-26 Thread Bakul Shah
On Wed, 26 May 2010 16:54:35 +1000 Greg 'groggy' Lehey g...@freebsd.org  
wrote:
 On Tuesday, 25 May 2010 at 16:16:10 -0700, Bakul Shah wrote:
 
  If you must kick groff out, why not port plan9 troff which
  now does unicode, has 27 macro packages including ms, weighs
  in at about 10K lines of C code written by Joe Ossanna, Brian
  Kernighan, Ken Thompson, Jaap Akkerhuis & others, is now open
  source (subject to Lucent Public License), and traces its
  lineage back to Joe Ossanna's original troff?  There is also
  pic, tbl, eqn and grap (for drawing graphs).  Also
  troff2html.  AFAIK plan9 troff doesn't do dvi but I think
  most people can live with that.
 
 This sounds too good to be true.  I'd certainly be in favour of such a
 change, *if* it proves feasible.

pkg_add -r plan9port

to play with these programs. People who use *roff a lot
should satisfy themselves that the p9p versions meet their needs.
[p9p has a lot of other goodies worth nibbling on]

There are two issues in integrating with BSD: licensing and
the amount of effort required.

For licensing issues a good place to start would be to look
at /usr/local/plan9/LICENSE (once you install the p9p port).
For what it's worth, my sense is that the relevant licenses
should allow bundling with *BSD but I am not a lawyer.

As for effort, p9p has already made the changes needed to
allow compiling with gcc (plan9 C is not std C but close
enough). p9p programs rely on a porting layer that emulates
some of plan9 environment.  If these porting/ported libraries
are imported into BSD, porting is almost a trivial task. If
you just want troff & co, I suspect one would need a small
subset of these libraries.  Only one way to find out!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: GSoC: BSD text tools

2010-05-25 Thread Bakul Shah
On Wed, 26 May 2010 01:21:20 +0300 Eitan Adler li...@eitanadler.com  wrote:
 On Tue, May 25, 2010 at 7:55 PM, Matthew Jacob m...@feral.com wrote:
  On 5/25/2010 9:52 AM, Julian Elischer wrote:
 
  On 5/25/10 8:33 AM, Eitan Adler wrote:
 
  No. Do not remove groff or associated tools from /usr/src !
  Roff has been in Unix /usr/src since '77 or earlier.
  A lot of people use tools from that descendancy as production tools.
 
  So? If it isn't a very commonly used tool and isn't necessary for 99%
  of cases I don't seem the harm of removing it from base and making it
  a port?
 
  BSD has always been able to produce its documentation as part of its build
 
  Please keep this true.
 
 This is what mdocml will be for. I never advocated removing the
 utilities required for building documentation from the base - just the
 soon to be superfluous groff utility (once the GSOC project is done).

mdocml handles -mdoc and -man but not other formats. There
are documents in /usr/src/share/doc needing -ms and -me etc.
groff can't be replaced with mdocml.

If you must kick groff out, why not port plan9 troff which
now does unicode, has 27 macro packages including ms, weighs
in at about 10K lines of C code written by Joe Ossanna, Brian
Kernighan, Ken Thompson, Jaap Akkerhuis & others, is now open
source (subject to Lucent Public License), and traces its
lineage back to Joe Ossanna's original troff?  There is also
pic, tbl, eqn and grap (for drawing graphs).  Also
troff2html.  AFAIK plan9 troff doesn't do dvi but I think
most people can live with that.

All of these already work under FreeBSD, Linux, MacOS & others
others as part of Plan 9 from User space.  Integrating them
into FreeBSD base is bound to be far easier than replicating
its functionality.

See http://plan9.bell-labs.com/sys/doc/troff.pdf for troff
details, http://www.swtch.com/plan9port for Plan9 from
User Space.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: /etc in CVS (was: Another tool for updating /etc)

2010-04-20 Thread Bakul Shah
On Tue, 20 Apr 2010 00:14:13 +0200 Jeremie Le Hen jere...@le-hen.org  wrote:
 Hi Bakul,
 
 Sorry for the late reply, I'm lagging behind in my FreeBSD mailbox :).
 
 On Wed, Mar 24, 2010 at 09:57:48AM -0700, Bakul Shah wrote:
  
  But I wonder... why not build something like this around cvs?
  Basically a three way merge is exactly what we want for /etc,
  right?  cvs because it is in the base system.  I used to
  maintain /etc changes in cvs and that was useful in keeping
  track of configuration changes on shared machines.
 
 By the way, I've been storing my configuration in CVS for a long time
 and I have created a full-fledged tool to help this.  Given you're using
 cvs(1) to store your changes in /etc, you may find it useful.  The main
 purpose of the script if to verify that everything is checked in and you
 didn't overlook to commit a change.  This can very easily be run from
 a crontab(5).
...
   http://jlh.chchile.org/cvsconfchk/

Thanks I will check it out.

My suggestion was in the context of upgrading a system to a
new release. There are changes to /**/etc/**/*(.) files going
from release R to R+1.  I was pointing out that what
mergemaster does (merging in these changes to your locally
modified etc files) is almost exactly the same as merging in
a vendor branch under CVS (vendor here would be freebsd.org).
But merge conflicts have to be resolved carefully and before
any reboots!

I understand John Baldwin's response as to why not cvs but I
haven't thought more about this until today.  It may be
possible to create a separate tool around SCM of one's
choice  Anyway, I'll shut up about this unless I can come
up with such a tool. But don't hold your breath.

Conversely, a mergemaster-like interactive scheme can be handy
for managing cvs vendor branch merge conflicts (or three way
merges in git or hg). Here's an idea... Maybe just that
logic can be factored out in a separate script that can be
run *after* a merge or update.

The problem really worth solving is an atomic rollback in
case an upgrade fails. This is either trivial (if you use
zfs) or very hard! But instinctively I shy away from relying
on a complicated FS like zfs for something as basic as this.

[/**/etc/**/*(.) is a zsh expression to specify every file in
 every etc dir in a system.]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Another tool for updating /etc

2010-03-24 Thread Bakul Shah
On Tue, 23 Mar 2010 11:08:45 EDT John Baldwin j...@freebsd.org  wrote:
 or 'cvs up'.  If the local changes I made do not conflict, then just merge the
 changes automatically (e.g. enabling a serial console in /etc/ttys should not
 conflict with $FreeBSD$ changing when moving from 7.2 to 7.3).
 
 To that end, I wrote a new tool that I think does a decent job of solving 
 these goals.  It does not force you to read the diffs of any files updated in
 /etc, but there are other tools available for that.  However, if you are ok 
 with reading UPDATING, commit logs, and/or release notes for that sort of 
 info, then this tool may work for you.
 
 It also has a nice feature in that you can generate a 'diff' of your current 
 /etc tree against the stock tree allowing you to easily see what local 
 changes you have made.  I have already found this feature to be far more 
 useful than I first expected.
 
 The UI is (hopefully) minimalist.  The default output looks like the output of
 'svn up' or 'cvs up'.
 
 If you'd like to give it a shot, you can find the script and manpage at 
 http://www.FreeBSD.org/~jhb/etcupdate/  There is a README file that gives a 
 brief overview and instructions on how to bootstrap the needed metadata before
 the first update.  There is also an HTML version of the manpage.

Looks good!

But I wonder... why not build something like this around cvs?
Basically a three way merge is exactly what we want for /etc,
right?  cvs because it is in the base system.  I used to
maintain /etc changes in cvs and that was useful in keeping
track of configuration changes on shared machines.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: C++ incompatability, was C99: Suggestions for style(9)

2009-05-01 Thread Bakul Shah
On Fri, 01 May 2009 08:57:34 PDT Matthew Fleming matthew.flem...@isilon.com 
 wrote:
 [snip exciting discussion on style]
 
  There are several C99 features used already, e.g. designated initializers:
  bla bli = { .blub = foo, .arr[0] = 42 };
  Do you suggest that this should not be used, because it is inconsistent
  with all the other existing compound initialisations?
 
 Regarding this great feature of C99, sadly, it's not C++ compatible.  So
 while designated initializers in a C source file are great, in a header
 file they will give a compile error if included in e.g. a C++ kernel
 module (which otherwise would work fine).

Why would you put initializers in a header file? If included
in more than one file, the linker will complain that the
initialized variable is multiply defined.  If creating header
files that get included in only one file *and* you want to
use initializers, why not use the right language for include
file code.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: pahole - Finding holes in kernel structs

2009-02-12 Thread Bakul Shah
  So I ran the tool pahole over a 7.1 FreeBSD Kernel, and found that
  many of the struct had holes, and some of which could be rearranged to
  fill the gap.

...

 Certainly plugging holes can also be beneficial but just cautioning that 
 changes of this sort need to be checked if made to critical data 
 structures.  OTOH there aren't that many that matter in practice.

But why do it?  Are the benefits worth churning any ABIs?
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


cc -m32 (was Re: critical floating point incompatibility

2009-01-29 Thread Bakul Shah
On Fri, 30 Jan 2009 05:44:00 +1100 Peter Jeremy peterjer...@optushome.com.au  
wrote:
 
 On 2009-Jan-28 11:24:21 -0800, Bakul Shah ba...@bitblocks.com wrote:
 On a mac, cc -m64 builds 64 bit binaries and cc -m32 builds
 32 bit binaries.  The following script makes it as easy to do
 so on a 64 bit FreeBSD -- at least on the few programs I
 tried.  Ideally the right magic needs to be folded in gcc's
 builtin specs.
 
 #!/bin/sh
 args="/usr/bin/cc"
 while [ ."$1" != . ]
 do
 a="$1"; shift
 case $a in
 -m32) args="$args -B/usr/lib32 -I/usr/include32 -m32";;
 *) args="$args $a";;
 esac
 done
 $args
 
 You also need to manually populate /usr/include32 since it doesn't
 exist by default and may still get bitten by stuff in
 /usr/local/include.  Do you have a script (or installworld patches) to
 do this?

Yes, includes for native programs may cause trouble --
but you can't use -nostdinc (as that would take away that
feature from a user), which is why this needs to be in the
gcc specs.

I don't have a script as I just copied include directories
from an i386 system.  But a script would be better.  This
script was an initial stab at proper -m32 support and needs
more work.  I will be happy to work with you or anyone else
to make this happen.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: critical floating point incompatibility

2009-01-28 Thread Bakul Shah
On Mon, 26 Jan 2009 16:51:28 EST John Baldwin j...@freebsd.org  wrote:
 On Friday 21 December 2007 3:16:33 pm Kostik Belousov wrote:
  On Fri, Dec 21, 2007 at 10:11:24AM -0800, Bakul Shah wrote:
   Peter Jeremy peterjer...@optushome.com.au wrote:
On Wed, Dec 19, 2007 at 09:40:34PM -0800, Carl Shapiro wrote:
The default setting of the x87 floating point control word on the i386
port is 0x127F.  Among other things, this value sets the precision
control to double precision.  The default setting of the x87 floating
point control word on the AMD64 is 0x37F.
...
It seems clear that the right thing to do is to set the floating point
environment to the i386 default for i386 binaries.  Is the current
behavior intended?

I believe this is an oversight.  See the thread beginning

 http://lists.freebsd.org/pipermail/freebsd-stable/2007-November/037947.html
   
   From reading Bruce's last message in that thread, seems to me
   may be default for 64bit binaries should be the same as on
   i386. Anyone wanting different behavior can always call
   fpsetprec() etc.
   
   I think the fix is to change __INITIAL_FPUCW__ in
   /sys/amd64/include/fpu.h to 0x127F like on i386.
  I think this shall be done for 32-bit processes only, or we get into
  another ABI breaking nightmare.
 
 How about something like this:  (Carl, can you please test this?)

Your patch works fine on a recent -current.  Here is a
program Carl had sent me more than a year ago for testing
this.  Maybe some variation of it can be added to
compatibility tests.

#include <stdio.h>
int main(void)
{
 unsigned short cw;
 __asm__ __volatile__ ("fnstcw %0" : "=m" (*&cw));
 printf("cw=%#x\n", cw);
 return 0;
}

-- bakul

PS:
<tangent>

On a mac, cc -m64 builds 64 bit binaries and cc -m32 builds
32 bit binaries.  The following script makes it as easy to do
so on a 64 bit FreeBSD -- at least on the few programs I
tried.  Ideally the right magic needs to be folded in gcc's
builtin specs.

#!/bin/sh
args="/usr/bin/cc"
while [ ."$1" != . ]
do
a="$1"; shift
case $a in
-m32) args="$args -B/usr/lib32 -I/usr/include32 -m32";;
*) args="$args $a";;
esac
done
$args

Ideally x86_64 platforms would run *all* i386 programs (that
don't depend on a 32 bit kernel).

</tangent>
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: Remote kernel debugging in FreeBSD using serial communication

2009-01-17 Thread Bakul Shah
On Sat, 17 Jan 2009 17:57:10 PST Kamlesh Patel shilp.ka...@yahoo.com  wrote:
 Hi All,
 
 I am trying remote kernel debugging in FreeBSD using serial communication. I 
 got the following link.
 
 http://www.ibm.com/developerworks/aix/library/au-debugfreebsd.html#list1
 
 My problem is my developing and target system does not have DS25 female port.
 
 Anyone have any idea about Remote kernel debugging in FreeBSD using DS9 F/F S
 erial cable or any other remote debugging idea?

Get one or more modular kits like the one below:

http://www.cablesnmor.com/modular-kit-db9f-rj45.aspx

Now wire it according to the pinouts here:

http://yost.com/computers/RJ45-serial/index.html

If (in the unlikely event) you have any more RS-232 devices,
you can attach a similar db{9,25}{f,m}-rj45 adapter to each
-- just make sure that on the RJ-45 side signals come out as
specified in the web page above (also reproduced below).

Now you can use a half-twist 8 conductor cable to connect
*any* two such devices.  You can even use a 4 or 6 conductor
half-twist cable with the same RJ-45 jacks since the layout
is such that the more important signals are towards the
middle.

 1   2   3   4   5   6   7   8
CTS DCD RD  SG  SG  TD  DTR RTS 
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: FreeBSD kernel Debugging tools for Virtual Memory Module

2009-01-02 Thread Bakul Shah
ddb and kgdb are two useful and often indispensable tools for kernel
 debugging on FBSD. ddb won't allow you source level debugging, kgdb will,
 but you'll need an extra machine. 

If the code you are debugging doesn't depend on specific
hardware, one option is to run FreeBSD (with the kernel being
debugged) under qemu and run kgdb on the host FreeBSD.
Something like

In Window1
$ qemu -s freebsd-disk-img ...

In Window2
$ cd where the kernel under test was built
$ kgdb kernel.debug
(gdb) target remote localhost:1234
do your debugging
(gdb) detach
Ending remote debugging.
(gdb) q
$

Note: I have not tried this recently but it should work.

 AFAIK, if you are modifying the kernel source directly  there is no option
 but to recompile all the changed and dependent files.

Well... there used to be a debugger called ups with a builtin
C interpreter. It allowed you to add code at run time.  This
was quite handy when you wanted to temporarily patch things
up and continue debugging or set conditional breakpoints or
insert assertion verification code on the fly.  The C
interpreter is worth adding to gdb but I am not sure if any
of ups code can be reused.  See http://ups.sourceforge.net/
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to freebsd-hackers-unsubscr...@freebsd.org


Re: continuous backup solution for FreeBSD

2008-10-08 Thread Bakul Shah
On Wed, 08 Oct 2008 10:18:47 +0200 Dag-Erling Smørgrav [EMAIL PROTECTED]  wrote:
 Bakul Shah [EMAIL PROTECTED] writes:
  Dag-Erling Smørgrav [EMAIL PROTECTED] writes:
   What really annoys me with this thread is that nobody has provided any
   information at all that would allow someone to understand what needs to
   be done and estimate how hard it would be.
  From their http://forum.r1soft.com/CDP.html page:  [...]
 
 You completely missed the mark.  I know what R1Soft's product is.  What
 I want to know is what needs to be done to port it to FreeBSD.

Sorry, my mindreader is broken.  If this is what you really
wanted to know, why not ask R1Soft?  -hackers is not going to
shed any light on the specifics of R1Soft's product.  I
replied to say that implementing a similar solution is not
hard (I am sure you knew that too but I wasn't responding
just to you).  It may even be worth doing.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: continuous backup solution for FreeBSD

2008-10-08 Thread Bakul Shah
On Wed, 08 Oct 2008 22:19:48 +0200 Dag-Erling Smørgrav [EMAIL PROTECTED]  wrote:
 Bakul Shah [EMAIL PROTECTED] writes:
  Dag-Erling Smørgrav [EMAIL PROTECTED] writes:
   Bakul Shah [EMAIL PROTECTED] writes:
Dag-Erling Smørgrav [EMAIL PROTECTED] writes:
 What really annoys me with this thread is that nobody has
 provided any information at all that would allow someone to
 understand what needs to be done and estimate how hard it would
 be.
From their http://forum.r1soft.com/CDP.html page:  [...]
   You completely missed the mark.  I know what R1Soft's product is.
   What I want to know is what needs to be done to port it to FreeBSD.
  Sorry, my mindreader is broken.  If this is what you really wanted to
  know, why not ask R1Soft?  -hackers is not going to shed any light on
  the specifics of R1Soft's product.  I replied to say that implementing
  a similar solution is not hard (I am sure you knew that too but I
  wasn't responding just to you).  It may even be worth doing.
 
 I didn't actually ask a question, and I don't mind that you don't have
 the answer.  What I do mind is that you interpreted my statement of
 frustration as a question, and provided a completely irrelevant answer.
 You don't need to read minds to understand this, just English.

Interpreting an expression of frustration as a request for a
solution is a common engineering trait:-)  I can see you may
not prefer my interpretation but I can't understand why you
mind it.  But so be it.  I do not wish to annoy you.

 If you take a step back and go through and read the entire thread again
 from the start, though, I think you will understand my frustration.

I understand your frustration but I chose to instead focus on
the technical part.  I too can get frustrated in similar
situations but every time that happens I can trace it back to
my own stress.  I can't really control what others say so the
way I deal with it is to ignore it or joke about it.  I do
try to clear misunderstandings but people don't always
understand my point of view!  As for feeling frustrated, I
now view that as a warning signal.

 Evren asked a question which everybody else is doing their best not to
 answer in as many words as possible; and when I try to answer, I find
 out that Evren doesn't really know what the question is.

Actually that was clear in his very first email.

 This is where - in an ideal world - somebody at R1Soft would jump in and
 start asking the right questions...

Don't bet on it.  My musings even make me wonder if they do it right.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: continuous backup solution for FreeBSD

2008-10-08 Thread Bakul Shah
Sorry about that.  Didn't mean to continue this discussion in
-hackers but forgot to remove the cc list.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: continuous backup solution for FreeBSD

2008-10-07 Thread Bakul Shah
On Tue, 07 Oct 2008 11:56:09 +0200 Dag-Erling Smørgrav [EMAIL PROTECTED]  wrote:
 Evren Yurtesen [EMAIL PROTECTED] writes:
  They actually do not think that it is an easy job to adapt their
  software to support FreeBSD even. See this post:
  http://forum.r1soft.com/showpost.php?p=4224&postcount=3
 
 All this shows is that they don't know anything about FreeBSD at all
 (plus they need a refresher course in OS design; Linux is also a
 monolithic kernel)
 
 What really annoys me with this thread is that nobody has provided any
 information at all that would allow someone to understand what needs to
 be done and estimate how hard it would be.

From their http://forum.r1soft.com/CDP.html page:

R1Soft's Continuous Data Protection solution is a
==near-Continuous Backups system capable of providing ==
hundreds of recovery points per day scheduled as little
as 5 or 10 minutes apart.
...
CDP Server works by reading your hard disk volumes at the
sector level, bypassing the file system for the ultimate
in performance and recovery.  This disk sector
synchronization is performed while the server is online
and provides no interruption to other I/O requests even
on a busy server.

Clearly near-Continuous is *not* the same as continuous
but never mind -- truthiness in business is so last century!
But this could be the cause of some confusion.  What they do
is backups, not mirroring.  A remote mirror would essentially
require a continuous backup -- every disk write must be
sent right away, but in pure mirroring there is no access to
previous snapshots.  In a true backup solution you can
restore disk state to some number of previous backup points,
regardless of whether you have *online* access to them.

My guess is they have a driver that keeps track of disk
writes.  Something like set bit N of a bitmap when sector N
is to be written.  Then once every 10 minutes (or whatever
snapshot interval you have selected) a client app scans the
bitmap and sends these sectors to the backup server.

If they did *just* this, there'd be consistency issues --
between the time a snapshot is taken and some sector N is
actually backed up, there may be new writes to N by the OS.
To deal with this, the new write must be delayed until N has
been backed up.  Another alternative is to slide forward
the snapshot point.  That is, if the snapshot was taken at
time T1 and the backup finished by T2, and there were
conflicting writes during [T1..T2), backup these writes as
well and slide forward this snapshot time from T1 to T2.
Repeat until there are no conflicting writes.  This latter
method won't block any writes from the OS.

So my guess is they need an interface where they get notified
for every disk write and optionally a way to delay a write.
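
A rough sketch of the kind of bitmap I mean (the sector count,
the names and the write-notification hook are all made up for
illustration, not R1Soft's actual design):

#include <limits.h>
#include <stdint.h>

#define NSECTORS (1024u * 1024u)	/* made-up device size, in sectors */

static uint8_t dirty[NSECTORS / CHAR_BIT];

/* Called from the (hypothetical) write-notification hook. */
static void
mark_dirty(uint32_t lba)
{
	dirty[lba / CHAR_BIT] |= 1u << (lba % CHAR_BIT);
}

/*
 * At each snapshot interval walk the bitmap, hand the marked sectors
 * to whatever ships them to the backup server, then clear the bits.
 */
static void
flush_dirty(void (*send_sector)(uint32_t))
{
	uint32_t lba;

	for (lba = 0; lba < NSECTORS; lba++)
		if (dirty[lba / CHAR_BIT] & (1u << (lba % CHAR_BIT))) {
			send_sector(lba);
			dirty[lba / CHAR_BIT] &= ~(1u << (lba % CHAR_BIT));
		}
}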

[To respond to an earlier point raised by others, I believe
 OS X Time Machine does a filesystem level backup and not
 at the disk level.]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: continuous backup solution for FreeBSD

2008-10-06 Thread Bakul Shah
On Mon, 06 Oct 2008 18:09:06 +0300 Vlad GALU [EMAIL PROTECTED]  wrote:
 On Mon, Oct 6, 2008 at 5:33 PM, Evren Yurtesen [EMAIL PROTECTED] wrote:
  Bob Bishop wrote:
 
  Does anybody have free time and skills to give a hand? Please see:
  http://forum.r1soft.com/showpost.php?p=3414&postcount=9
 
  Should be possible to do this with a geom(4) class?
 
 
  I am not saying it is impossible. They just need somebody to put them to
  right track I guess. I personally cant do that. It would be nice if somebod
 y
  who has knowledge in this area contacts r1soft. At the very least r1soft
  seems to be willing to communicate on this issue.
 
  Continuous backups as well as bare-metal-restore seem to be a key feature
  for many hosters. FreeBSD is loosing users because of this issue.
 
gmirror+ggate come to mind as a nifty solution ...

My guess is these guys do something simpler like keeping
track of changed blocks since the last backup and
periodically dump those blocks to a server.  This is good
enough for backups (but not mirroring) and it has low memory
overhead (1 or 2 bits per block), lower network overhead than
remote mirroring (you send a block at most once every sync
interval), and a tiny loss of performance (over no backups).
Maybe someone ought to do a garchive device!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Fwd: strdup(NULL) supposed to create SIGSEGV?

2008-04-23 Thread Bakul Shah
On Wed, 23 Apr 2008 11:03:10 BST Robert Watson [EMAIL PROTECTED]  wrote:
 On Wed, 23 Apr 2008, Garrett Cooper wrote:
  Of course I did some more research after you guys gave me some replies and 
  realized I'm not the first person to bumble across this fact, but I haven't
  found FreeBSD or Linux documentation supporting that errata. It was harmless
  in my tiny program, but I would hate to be someone adding that assumption to
  a larger project with multiple threads and a fair number of lines...
 
 Consider the following counter-arguments:
 
 - In C, a string is a sequence of non-nul characters followed by a nul
character terminating the string.  NULL is therefore not a valid string.
 
 - Currently, strdup(3) has an unambiguous error model: if it returns a
non-NULL string has succeeded, and if it has failed, it returns NULL and
sets errno.  If NULL becomes a successful return from strdup(3), then this
is no longer the case, breaking the assumptions of currently correct
consumers.

I suspect Garrett has a more fundamental misunderstanding.

C is a low level language and for efficiency's sake most of its
standard functions *do not check* that their inputs are legal
-- it is the caller's responsibility to give valid inputs and
when that is not done, all bets are off!  In general a NULL
is an illegal value to pass in place of any kind of pointer.

The *exception* is where a function is explicitly prepared to
handle NULLs.  One must read its man page carefully and if it
doesn't say anything about how NULLs in place of ptrs are
handled, one must not pass in NULLs!
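
If you want a belt-and-suspenders version in your own code, a
trivial wrapper keeps the check where it belongs -- with the
caller.  (A sketch of mine, not a proposal to change libc:)

#include <err.h>
#include <stdlib.h>
#include <string.h>

static char *
xstrdup(const char *s)
{
	char *p;

	if (s == NULL)
		abort();		/* caller bug: NULL is not a valid string */
	if ((p = strdup(s)) == NULL)
		err(1, "strdup");
	return (p);
}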

He should also note that function specifications (e.g. man
pages) will specify what are legal inputs but usually they
will *not* specify what happens when illegal inputs are given
since a) that set is usually much much larger, and b) the
effect is likely to be machine dependent.

FWIW!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Architectures with strict alignment?

2007-12-29 Thread Bakul Shah
 (though the AMD29K could apparently generate
 dummy bus cycles to limit the number of bit transitions on any cycle
 to reduce the I/O load).

Are you sure it was the amd29k?  I don't recall anything like
that (and am too lazy to dig out its datasheets!).

It too required strict alignment, though you could fix up
unaligned accesses in a trap handler at some cost.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: critical floating point incompatibility

2007-12-21 Thread Bakul Shah
Peter Jeremy [EMAIL PROTECTED] wrote:
 On Wed, Dec 19, 2007 at 09:40:34PM -0800, Carl Shapiro wrote:
 The default setting of the x87 floating point control word on the i386
 port is 0x127F.  Among other things, this value sets the precision
 control to double precision.  The default setting of the x87 floating
 point control word on the AMD64 is 0x37F.
 ...
 It seems clear that the right thing to do is to set the floating point
 environment to the i386 default for i386 binaries.  Is the current
 behavior intended?
 
 I believe this is an oversight.  See the thread beginning
 http://lists.freebsd.org/pipermail/freebsd-stable/2007-November/037947.html

From reading Bruce's last message in that thread, it seems to me
the default for 64-bit binaries should perhaps be the same as on
i386. Anyone wanting different behavior can always call
fpsetprec() etc.
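
(For example, something along these lines -- assuming I remember
the <ieeefp.h> interface on FreeBSD/x86 correctly, with fpgetprec(),
fpsetprec() and FP_PD; treat it as a sketch rather than gospel:)

#include <stdio.h>
#include <ieeefp.h>

int
main(void)
{
	printf("precision was %d\n", (int)fpgetprec());
	fpsetprec(FP_PD);		/* 53-bit (double) precision */
	printf("precision now %d\n", (int)fpgetprec());
	return 0;
}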

I think the fix is to change __INITIAL_FPUCW__ in
/sys/amd64/include/fpu.h to 0x127F like on i386.

Also, while at it, comments above this constant in this file
and above __INITIAL_NPXCW__ in /sys/i386/include/npx.h need
to reflect what was chosen and why.

Filing a PR would help ensure this doesn't get lost.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


timezone printing in date messed up?

2007-11-03 Thread Bakul Shah
$ sh <<'EOF'
for a in 0 1 2 3 4 5 6 7 8 9 10 11 12 
do
  date -j -f %s `expr 1194163200 + 600 \* $a`
done
EOF
Sun Nov  4 01:00:00 PDT 2007
Sun Nov  4 01:10:00 PDT 2007
Sun Nov  4 01:20:00 PDT 2007
Sun Nov  4 01:30:00 PST 2007 ---
Sun Nov  4 01:40:00 PST 2007 ---
Sun Nov  4 01:50:00 PST 2007 ---
Sun Nov  4 01:00:00 PDT 2007 ---
Sun Nov  4 01:10:00 PDT 2007 ---
Sun Nov  4 01:20:00 PDT 2007 ---
Sun Nov  4 01:30:00 PST 2007
Sun Nov  4 01:40:00 PST 2007
Sun Nov  4 01:50:00 PST 2007
Sun Nov  4 02:00:00 PST 2007
$

Look at the lines with ---!  This is with the latest
timezone files.  OS X Leopard has the same bug.  I assume
this is a bug and not due to an act of congress that mandates
a flip flop timezone?
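
The same probe can be done without going through date(1); a quick
(untested) test program, run with TZ=America/Los_Angeles, should
show tm_isdst dropping from 1 to 0 exactly once if the zone data
is sane, so any flip-flopping here points at the tz lookup rather
than date's -f parsing:

#include <stdio.h>
#include <time.h>

int
main(void)
{
	time_t t0 = 1194163200;		/* Sun Nov  4 08:00:00 2007 UTC */
	char buf[64];
	int i;

	for (i = 0; i <= 12; i++) {
		time_t t = t0 + 600 * i;
		struct tm *tm = localtime(&t);

		strftime(buf, sizeof buf, "%a %b %e %H:%M:%S %Z %Y", tm);
		printf("%s (isdst=%d)\n", buf, tm->tm_isdst);
	}
	return 0;
}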
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: timezone printing in date messed up?

2007-11-03 Thread Bakul Shah
  OS X Leopard has the same bug ...
 
 How did you test it in Leopard?  I tried it in Tiger, intending to
 contribute another data point, and I got:

Leopard's /bin/date accepts -j.  You can try compiling FreeBSD
date on Tiger.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Looking for speed increases in make index and pkg_version for ports

2007-05-30 Thread Bakul Shah
Peter Jeremy [EMAIL PROTECTED] wrote:
 On 2007-May-27 16:12:54 -0700, Bakul Shah [EMAIL PROTECTED] wrote:
 Given the size and complexity of the port system I have long
 felt that rather than do everything via more and more complex
 Mk/*.mk what is is needed is a ports server and a thin CLI
 frontend to it.
 
 I don't believe this is practical.  Both package names and
 port dependencies depend on the options that are selected as
 well as what other ports are already installed.  A centralised
 ports server is not going to have access to this information.

I didn't mean a centralized server at freebsd.org but one on your
FreeBSD system, which can know what ports are installed.
Conditional dependencies have to be dealt with.  Perhaps the
underlying reason for changing package names can be handled
in a different way.

What happens now is that mostly static information from
various files is recomputed many times.  While that can be
handled by a local database, it seems to me a daemon provides
a lot of benefits.

Come to think of it, even a centralized server can work as
there are a finite number of combinations and it can cache
the ones in use.  But all this is just an educated guess.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Looking for speed increases in make index and pkg_version for ports

2007-05-27 Thread Bakul Shah
Not quite what you asked for but...
 
Given the size and complexity of the port system I have long
felt that rather than do everything via more and more complex
Mk/*.mk, what is needed is a ports server and a thin CLI
frontend to it.

This server can store dependency data in an efficient manner,
deal with conditional dependencies, port renames, security
and what not.  It can build or fetch or serve packages,
handle updates etc.  Things mentioned in UPDATING file can
instead be done by the server.  In general it can automate a
lot of stuff, remove error prone redundancies etc.  If it is
small enough and written in C, it can even be shipped with
the base system instead of various pkg_* programs.

It can provide two interfaces, one for normal users (with
commands like add, check, config, delete, info, search,
update, which) and one for port developers (command for
adding/remove/renaming ports, etc.).  Initially it must work
with existing Makefiles.

 I have been thinking a lot about looking for speed increases for make 
 index and pkg_version and things like that.  So for example, in 
 pkg_version, it calls make -V PKGNAME for every installed package. 
 Now make -V PKGNAME should be a speedy operation, but the make has to 
 load in and analyze bsd.port.mk, a quite complicated file with about 
 200,000 characters in it, when all it is needing to do is to figure out 
 the value of the variable PKGNAME.
 
 I suggest rewriting make so that variables are only evaluated on a 
 need to know basis.  So, for example, if all we need to know is 
 PKGNAME, there is no need to evaluate, for example, _RUN_LIB_DEPENDS, 
 unless the writer of that particular port has done something like having 
 PORTNAME depend on the value of _RUN_LIB_DEPENDS.  So make should 
 analyze all the code it is given, and only figure it out if it is needed 
 to do so.  This would include, for example, figuring out .for and .if 
 directives on a need to know basis as well.
 
 I have only poked around a little inside the source for make, but I have 
 a sense that this would be a major undertaking.  I certainly have not 
 thought through what it entails in more than a cursory manner.  However 
 I am quite excited about the possibility of doing this, albeit I may 
 well put off the whole thing for a year or two or even forever depending 
 upon other priorities in my life.
 
 However, in the mean time I want to throw this idea out there to get 
 some feedback, either of the form of this won't work, or of the form 
 I will do it, or I have tried to do this.
 
 Best regards, Stephen
 ___
 freebsd-hackers@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
 To unsubscribe, send any mail to [EMAIL PROTECTED]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [patch] rm can have undesired side-effects

2006-10-31 Thread Bakul Shah
 Having thought this over some more, if a
 shred/scramble/scrub command is created in its own
 right, then a number of new features could be added
 that do not currently exist.

 - The command could be writen to protect a single
 file, or, it could also write to an entire file
 system/media.

These won't share much beyond what patterns to write
and how many times.

 - The command could offer many types of randomising
 possiblities, eg the current 0xff, 0x00, 0xff; or
 perhaps /dev/random could be written; or perhaps the
 user could specify exactly what is to be used to
 overwrite the file/file system - from memory some
 large organistations (govt depts) have specific rules
 about how files/file systems should be overwritten
 before old medie is thrown out and replaced (so no-one
 can scavenge the media and read sensitive data)

IMHO even this does not address paranoia very well.  The
point of rm -P is to make sure freed blocks on the disk don't
have any useful information.  But if the bad guy can read the
disk *while* it also holds other files on it, the battle is
already lost as presumably he can also read data in live
files.  If you are using rm -P in preparation for throwing a
disk away, you may as well just use a whole disk scrubber.
If you are using rm -P to prevent a nosy admin from looking at
your sensitive data, you will likely lose.  He can easily
replace rm with his own command.  A separate scrub command
may help since you can verify the data is erased.

This is not to say rm -P or scrub is not helpful.  If you
know what you are doing it is perfectly adequate.  But if you
don't or you make mistakes, it will give you a false sense of
security.  For example, once a file is unlinked through some
other means (such as mv) you don't have a handle on it any
more to scrub.  Basically you lost the ability to scrub your
data due to a mistake.  Worse, editing such a file may free
unscrubbed blocks.  A separate command won't help.

This is why I suggested to have the system do this for you
(through a mount option -- I don't care enough to want to
implement it).

 Kind of thinking out loud here, apologies if its
 noisy, Tim.

If the end result is clear headed go right ahead!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [patch] rm can have undesired side-effects

2006-10-30 Thread Bakul Shah
Sorry if I tuned in late:-)

I vote for taking *out* -P.  It is an ill-designed feature.
Or if you keep it, also add it to mv, cp -f & ln -f, since
these commands can also unlink a file, and once unlinked in
this manner you can't scrub it.  And also fix up the behavior
for -P when multiple links.  And since mv can use rename(2),
you will have to also dirty up the kernel interface somehow.
Not to mention even editing such a sensitive file can leave
stuff all over the disk that a bad guy can get at.  If you
are truly paranoid (as opposed to paranoid only when on
meds) you know how bad that is!

If you are that conscious about scrubbing, why not add
scrubbing as a mount option (suggested option: -o paranoid)?
Then at least it will be handled consistently.

What's the world come to when even the paranoid are such
amateurs.

-- bakul

Doug Barton writes:
 Peter Jeremy wrote:
  On Sun, 2006-Oct-29 18:11:54 -0800, [EMAIL PROTECTED] wrote:
  I think a very strong case can be made that the *intent* of -P --
  to prevent retrieval of the contents by reading the filesystem's
  free space -- implies that it should affect only the real removal
  of the file, when its blocks are released because the link count
  has become zero.
  ...
  In this interpretation, rm -P when the link count exceeds 1 is
  an erroneous command.
  
  I agree.  Doing rm -P on a file with multiple links suggests that
  the user is unaware that there are multiple links.  I don't think
  that just unlinking the file and issuing a warning is a good solution
  because it's then virtually impossible to locate the other copy(s)
  of the file, which remains viewable.  I believe this is a security
  hole.
  
  Consider: In FreeBSD, it is possible to create a hardlink to a file if
  you are not the owner, even if you can't read it.  Mallory may decide
  to create hardlinks to Alice's files, even if he can't read them today
  on the off-chance that he may be able to circumvent the protections at
  a later date.  Unless Alice notices that her file has a second link
  before she deletes it, when she issues rm -P, she will lose her link
  to the file (and her only way of uniquely identifying it) whilst
  leaving the remaining link to the file in Mallory's control.
 
 I think Peter is right here. I recently patched the -P option to error
 out if a file is unwritable. I think that is the correct behavior here
 too. If the file is not removed, then it is correct for rm to exit
 with an rc  0. Another poster mentioned the case of using rm in a
 script, or for a large directory where this kind of warning might get
 missed, which is one of the reasons I think it needs to exit with an
 error code.
 
 My suggestion would be to change warnx() to errx(), and drop the
 return(1); from that patch. If there are no objections I'll do it
 myself if no one gets to it first.
 
 In any case I think that this is a good addition to the code, and I'm
 glad that this issue was raised.
 
 Doug
 
 -- 
 
 This .signature sanitized for your protection
 ___
 freebsd-hackers@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
 To unsubscribe, send any mail to [EMAIL PROTECTED]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [patch] rm can have undesired side-effects

2006-10-30 Thread Bakul Shah
Doug Barton writes:
 Bakul Shah wrote:
  Sorry if I tuned in late:-)
  
  I vote for taking *out* -P.  It is an ill-designed feature.
  Or if you keep it, also add it to mv, cp -f  ln -f since
  these commands can also unlink a file and once unlinked in
  this matter you can't scrub it.  And also fix up the behavior
  for -P when multiple links.  And since mv can use rename(2),
  you will have to also dirty up the kernel interface somehow.
  Not to mention even editing such a sensitive file can leave
  stuff all over the disk that a bad guy can get at.  If you
  are truely paranoid (as opposed to paranoid only when on
  meds) you know how bad that is!
  
  If you are that concious about scrubbing why not add
  scrubbing as a mount option (suggested option: -o paranoid)
  then at least it will be handled consistently.
 
 The patches to implement your suggestions didn't make it through on
 this message. Please feel free to post them for review and send the
 URL to the list.

Writing code is the easy part, too easy in fact, which is
part of the problem.  Interface changes need to be discussed
and made carefully.  But since you asked, here's the patch to
remove -P from rm.

Index: rm.c
===
RCS file: /home/ncvs/src/bin/rm/rm.c,v
retrieving revision 1.54
diff -w -u -b -r1.54 rm.c
--- rm.c15 Apr 2006 09:26:23 -  1.54
+++ rm.c30 Oct 2006 19:43:40 -
@@ -57,7 +57,11 @@
 #include <sysexits.h>
 #include <unistd.h>
 
+#ifdef HALF_PARANOID
 int dflag, eval, fflag, iflag, Pflag, vflag, Wflag, stdin_ok;
+#else
+int dflag, eval, fflag, iflag, vflag, Wflag, stdin_ok;
+#endif
 int rflag, Iflag;
 uid_t uid;
 
@@ -66,7 +70,9 @@
 void   checkdot(char **);
 void   checkslash(char **);
 void   rm_file(char **);
+#ifdef HALF_PARANOID
 intrm_overwrite(char *, struct stat *);
+#endif
 void   rm_tree(char **);
 void   usage(void);
 
@@ -103,8 +109,13 @@
exit(eval);
}
 
+#ifdef HALF_PARANOID
Pflag = rflag = 0;
	while ((ch = getopt(argc, argv, "dfiIPRrvW")) != -1)
+#else
+   rflag = 0;
+	while ((ch = getopt(argc, argv, "dfiIRrvW")) != -1)
+#endif
switch(ch) {
case 'd':
dflag = 1;
@@ -120,9 +131,11 @@
case 'I':
Iflag = 1;
break;
+#ifdef HALF_PARANOID
case 'P':
Pflag = 1;
break;
+#endif
case 'R':
case 'r':   /* Compatibility. */
rflag = 1;
@@ -289,9 +302,11 @@
continue;
/* FALLTHROUGH */
default:
+#ifdef HALF_PARANOID
if (Pflag)
			if (!rm_overwrite(p->fts_accpath, NULL))
continue;
+#endif
			rval = unlink(p->fts_accpath);
			if (rval == 0 || (fflag && errno == ENOENT)) {
				if (rval == 0 && vflag)
@@ -357,9 +372,11 @@
else if (S_ISDIR(sb.st_mode))
rval = rmdir(f);
else {
+#ifdef HALF_PARANOID
if (Pflag)
			if (!rm_overwrite(f, &sb))
continue;
+#endif
rval = unlink(f);
}
}
@@ -372,6 +389,7 @@
}
 }
 
+#ifdef HALF_PARANOID
 /*
  * rm_overwrite --
  * Overwrite the file 3 times with varying bit patterns.
@@ -436,7 +454,7 @@
	warn("%s", file);
return (0);
 }
-
+#endif
 
 int
 check(char *path, char *name, struct stat *sp)
@@ -462,6 +480,7 @@
		strmode(sp->st_mode, modep);
		if ((flagsp = fflagstostr(sp->st_flags)) == NULL)
			err(1, "fflagstostr");
+#ifdef HALF_PARANOID
if (Pflag)
errx(1,
		    "%s: -P was specified, but file is not writable",
@@ -472,6 +491,7 @@
		    group_from_gid(sp->st_gid, 0),
		    *flagsp ? flagsp : "", *flagsp ? " " : "",
path);
+#endif
free(flagsp);
}
(void)fflush(stderr);
@@ -583,7 +603,11 @@
 {
 
	(void)fprintf(stderr, "%s\n%s\n",
+#ifdef HALF_PARANOID
	    "usage: rm [-f | -i] [-dIPRrvW] file ...",
+#else
+	    "usage: rm [-f | -i] [-dIRrvW] file ...",
+#endif
	    "       unlink file");
exit(EX_USAGE);
 }
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [PATCH] adding two new options to 'cp'

2006-08-02 Thread Bakul Shah
 As a general comment (not addressed to Tim):  There _is_ a downside
 to sparsifying files.  If you take a sparse file and start filling
 in the holes, the net result will be very badly fragmented and hence
 have very poor sequential I/O performance.  If you're never going to
 update a file then making it sparse makes sense, if you will be
 updating it, you will get better performance by making it non-sparse.

Except for database tables how common is this?  And for such
files how important is the sequential I/O performance?  For
database tables perhaps there is a size range where not
making them sparse helps but for really large tables you
wouldn't want to fill in the holes.  I suspect that making
"don't write blocks of zeroes" the default would actually help
overall performance.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [PATCH] adding two new options to 'cp'

2006-08-01 Thread Bakul Shah
 Eric Anderson wrote:
 
  It could possibly be bad if you have a real file (say a 10GB file, 
  partially filled with zeros - a disk image created with dd for 
  instance), and you use cp with something like -spR to recursively copy 
  all files.  Your destination disk image would then be a sparse file, so 
 
 Incidentally, this is exactly why I've needed it - I like to create disk 
 images for virtual machines as sparse files, when I know they won't be 
 much filled, but need the virtual space :)

[Basic idea from the plan 9 guys]
Rather than modify every tool for this, maybe the OS should
avoid writing a block of zeroes?  The idea is to check if the
first and last word are 0.  If so, check the whole block (you
can avoid the necessary bounds check by temporarily unzeroing
the last word).  If all zeroes and there is no existing
allocation for the range being written, just advance the
current offset (essentially lseek).  Else proceed as normal
write.

This test is very cheap in practice and if you can avoid one
write in ten thousand this way you will likely see overall
savings.  Of course, you still need rsync but it helps all
local copying, the common case.  This being an optimization
you don't need to implement a complete solution.  For
instance writing 1 zero byte in one call and then 4095 zero
bytes in another may defeat the optimization.
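
The check itself is only a few lines; this is my sketch of it (not
the plan 9 original), using the sentinel trick, and assuming len is
a whole number of longs such as a filesystem block:

#include <stddef.h>

static int
all_zeroes(long *p, size_t len)
{
	long *last = p + len / sizeof(*p) - 1;

	if (p[0] != 0 || *last != 0)
		return 0;
	*last = 1;		/* sentinel: scan loop needs no bounds check */
	while (*p == 0)
		p++;
	*last = 0;		/* restore; we already know it was zero */
	return p == last;
}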
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Bad block - file mapping

2006-02-18 Thread Bakul Shah
 I have a hard disk that's been in service a long time.  I recently
 installed the SMART monitoring tools.  On occasion, I get reports of
 LBAs it can't read.  I'd like to map the LBA to an actual file in the
 file system, if possible.  Does anybody have any tools that can help
 me with this?
 
 I know I need to get a new disk.  In the mean time, I need to cope
 with these errors in a sane manner...
 
 Warner
 
 P.S.  Here's a sample report:
 
 Num  Test_DescriptionStatus   LifeTime(hours)  LBA_of_first_error
 # 1  Extended offlineCompleted: read failure 8949 65818210
 # 2  Short offline   Completed without error 8948 -

Wouldn't bad block forwarding by the disk take care of this?
Generally you want the read of a bad block to return an error
but if you write the block the disk will automatically remap
this block to one of the spare blocks.
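
A throwaway program to force that rewrite might look like this (my
sketch, untested; the device name, 512-byte sectors and the lack of
sanity checks are all assumptions -- the old contents of the sector
are destroyed, so only aim it at a block you have given up on):

#include <sys/types.h>
#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* usage: rewrite <device> <lba>, e.g. rewrite /dev/ad2 65818210 */
int
main(int argc, char **argv)
{
	char buf[512];
	off_t lba;
	int fd;

	if (argc != 3)
		errx(1, "usage: rewrite device lba");
	lba = strtoll(argv[2], NULL, 10);
	if ((fd = open(argv[1], O_RDWR)) < 0)
		err(1, "%s", argv[1]);
	memset(buf, 0, sizeof buf);
	/* Writing the sector gives the drive its chance to remap it. */
	if (pwrite(fd, buf, sizeof buf, lba * 512) != sizeof buf)
		err(1, "pwrite");
	close(fd);
	return 0;
}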

What exactly are you trying to do by mapping a bad block to a
file?  Nevertheless may be fsdb will help?  You still need to
map LBA to the slice/partition offset.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Bad block - file mapping

2006-02-18 Thread Bakul Shah
 However, I'd kinda like to know
 which file that is.  If it is a boring file (foo.o, say), I'd dd the
 bad block with 0's and then remove it.  If it is a non-boring file,
 I'd try to recover it a couple of times, etc.

So you want a function that does this?

LBA -> slice/partition offset -> fs/inode -> list of file names

Logic for the second step should be in fsck.
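
The first arrow is just arithmetic once you know where the
slice/partition starts; a toy example with made-up geometry (take
the real numbers from bsdlabel/dumpfs):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t lba = 65818210;	/* LBA from the SMART report */
	uint64_t part_start = 63;	/* first sector of the partition */
	uint64_t secsize = 512, fs_bsize = 16384;
	uint64_t rel = lba - part_start;

	printf("partition-relative sector %ju, fs block %ju\n",
	    (uintmax_t)rel, (uintmax_t)(rel * secsize / fs_bsize));
	return 0;
}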

I haven't kept up to date on disk standards so likely I am talking
through my hat but in ST506 there used to be a diagnostic
read function that returned the bad block and its CRC.  That
allows at least a chance of a manual correction.

 Once I have the file in BAD, I'd planned on overwriting it with 0's
 and then removing it if I could read the block again.

Why do you care?

 Maybe there's a better way to cope, maybe not.  I don't know.  Hence
 my question :-).
 
 This is with an ata disk, btw.

My sympathies.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: RAID5 on athlon64 machines

2006-02-12 Thread Bakul Shah
  You compute max data rates by considering the most optimistic
  scenario, which is large sequetial writes.  For *this*
  situation write rate will be higher than a single disk's.
 
 How can the RAID5 write rate be higher for the whole array if not
 only it needs to write the data to all if its drives, but also
 compute and write a parity block?

You write to all the disks at the same time.  While the disks
are busy writing you compute parity for the next stripe.
In my case disk bw is 60MB/s.  Memory bw is, I think, 3GB/s.
There ought to be plenty of bw and cpu for xor computing.

 IMO, RAID does not protect against system crashes - all it does
 is provide performance increase and/or some protection against
 hardware failure (which will be detected with extremely high
 probability) enabling the admin to restore some data.

No it can't if you don't do the parity check on reads and a
previous write to the stripe was incomplete due to a system
crash.  You will happily deliver incorrect data to the user
and he only knows *something* is wrong when his system
crashes or program misbehaves or some binary data doesn't
quite feel right or some text is garbled or some secondary
bad effect.

Maybe you need to use the same principle in your learning?
Check your understanding by applying it and trying to extend
it.  Don't just believe what you read, cross-check it.
Question the (so called) authority!  The revolution will not
be televised.  Oops I think I have a scrambled brain block.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


RAID5 on athlon64 machines

2006-02-11 Thread Bakul Shah
I built an Asus A8N SLI Deluxe based system and installed
FreeBSD-6.1-BETA1 on it.  This works well enough.  Now I am
looking for a decent RAID5 solution.  This motherboard has
two SATA RAID controllers.  But one does only RAID1.  The
other supports RAID5 but seems to require s/w assistance from
a Windows driver.  The BIOS does let you designate a set of
disks as a RAID5 group but FreeBSD does not recognize it as a
group in any case.

I noticed that vinum is gone from -current and we have gvinum
now but it does not implement all of the vinum commands.  But
that is ok if it provides what I need.

I played with it a little bit.  Its sequential read
performance is ok (I am using 3 disks for RAID5 and the read
rate is twice the speed of one disk as expected).  But the
write rate is abysmal!  I get about 12.5MB/s or about 1/9 of
the read rate.  So what gives?  Are there some magic stripe
sizes for better performance?  I used a stripe size of 279k
as per vinum recommendation.

Theoretically the sequential write rate should be the same or
higher than the sequential read rate.  Given an N+1 disk
array, for an N-block read you XOR N + 1 blocks and compare
the result to 0, but for an N-block write you XOR only N blocks.
So there is less work for large writes.
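
In code the difference is just N versus N+1 XOR passes; a toy,
byte-at-a-time sketch (a real implementation would work a word or
a cache line at a time):

#include <stdint.h>
#include <string.h>

#define BLKSIZE 4096			/* illustrative block size */

/* Full-stripe write: parity is the XOR of the N data blocks. */
static void
compute_parity(uint8_t *parity, uint8_t *const data[], int n)
{
	int i, j;

	memset(parity, 0, BLKSIZE);
	for (i = 0; i < n; i++)
		for (j = 0; j < BLKSIZE; j++)
			parity[j] ^= data[i][j];
}

/* Checked read: XOR of all N+1 blocks (data plus parity) must be 0. */
static int
stripe_ok(uint8_t *const blk[], int nplus1)
{
	int i, j;

	for (j = 0; j < BLKSIZE; j++) {
		uint8_t x = 0;

		for (i = 0; i < nplus1; i++)
			x ^= blk[i][j];
		if (x != 0)
			return 0;
	}
	return 1;
}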

Which leads me to ask: is gvinum stable enough for real use
or should I just get a h/w RAID card?  If the latter, any
recommendations?

What I'd like:

Critical:
- RAID5
- good write performance
- orderly shutdown (I noticed vinum stop command is gone but
  maybe it is not needed?)
- quick recovery from a system crash.  It shouldn't have to
  rebuild the whole array.
- parity check on reads (a crash may have rendered a stripe
  inconsistent)
- must not correct bad parity by rewriting a stripe

Nice to have:
- ability to operate in degraded mode, where one of
  the disks is dead.
- ability to rebuild the array in background
- commands to take a disk offline, associate a spare with a particular disk
- use a spare drive effectively
- allow a bad parity stripe for future writes
- allow rewriting parity under user control.

Thanks!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: RAID5 on athlon64 machines

2006-02-11 Thread Bakul Shah
 
  Theoretically the sequential write rate should be same or
  higher than the sequential read rate.  Given an N+1 disk
 
 Seq write rate for the whole RAID5 array will always be lower
 than the write rate for it's single disk.

You compute max data rates by considering the most optimistic
scenario, which is large sequential writes.  For *this*
situation the write rate will be higher than a single disk's.

 The parity blocks are not read on data reads, since this would be
 unnecessary overhead and would diminish performance. The parity
 blocks are read, however, when a read of a data sector results
 in a cyclic redundancy check (CRC) error.

You can only do so if you know the array is consistent.  If
the system crashed there is no such guarantee.  So you either
have to rebuild the whole array to get to a consistent state
or do a parity check.  If you don't check parity and you have
an inconsistent array, you can have a silent error (the data
may be trashed but you don't know that).  But if you use RAM
without parity or ECC, you probably already don't care about
such errors.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: increasing dd disk to disk transfer rate

2006-01-12 Thread Bakul Shah
 In the last episode (Jan 12), Christoph Kukulies said:
  My notebooks' hard disk, a Hitachi Travelstar 80 GB starts to develop
  read errors. I have FreeBSD and Win XP on that disk. Although FreeBSD
  ist still working , the errors in the Windows partition are causing
  Windows do ask for a filesystem check nearly everytime I reboot the
  computer. One time the error was in the hibernate.sys file, which
  impedes powering up quickly after a hibernate.
  
  Anyway, I decided to buy a second identical hard disk and tried to
  block by block copy the old disk to the new one using
  
  dd if=/dev/ad2 of=/dev/ad3 conv=noerror
  
  The process is running now since yesterday evening and it is at 53 MB
  at a transfer rate of about 1.1 MB/s.
 
 Everybody has mentioned the first obvious fix: raise your blocksize
 from the default 512 bytes.  The second fix addresses the problem that
 with a single dd, you are either reading or writing.  If you pipe the
 first dd into a second one, it'll let you run at the max speed of the
 slowest device.
 
 dd if=/dev/ad2 conv=noerror,sync bs=64k | dd of=/dev/ad3 bs=64k

So now on the new disk he has files with random blocks of
zeroes and *no* error indication of which files are so
trashed.  This is asking for trouble.  Silent errors are
worse.

He ought to do a file level copy, not disk level copy on
unix.  That way he knows *which* files are trashed and can do
a better job of recovering.  Assuming he has backups.
Windows is pickier about things but I am sure there are
windows tools that will handle all that and allow more
retries.

dd is the *wrong* tool for what he wants to do.

If it were up to me, first I'd back up all the data I may need,
using multiple retries and all that, and then install FreeBSD
from scratch on the new *bigger* disk.  Perfect time for
house cleaning and removing all those ports you don't use any
more! 

As for Windows, I'd use the recovery disk and in effect
reinstall Windows from scratch and then reinstall all apps and
move over my data files.  [What I actually do is to run win2k
under qemu on my laptop.  Good enough for what I need it for]
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: increasing dd disk to disk transfer rate

2006-01-12 Thread Bakul Shah
 Bakul Shah wrote:
 In the last episode (Jan 12), Christoph Kukulies said:
 
 dd if=/dev/ad2 conv=noerror,sync bs=64k | dd of=/dev/ad3 bs=64k
  
  
  So now on the new disk he has files with random blocks of
  zeroes and *no* error indication of which files are so
  trashed.  This is asking for trouble.  Silent erros are
  worse.
  
  He ought to do a file level copy, not disk level copy on
  unix.  That way he knows *which* files are trashed and can do
 
 The problem is, FreeBSD panics when it encounters bad sectors in 
 filesystem metadata. I had the same situation ~a month ago and gave up, 
 restoring from old backups. It will also probably panic on corrupted or 
 zeroed metadata, but at least it's on a readable disk...

Good point.  Would fsdb help?  If not someone ought to extend it.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: increasing dd disk to disk transfer rate

2006-01-12 Thread Bakul Shah
 I think after the dd is done, fsck should be run against the affected 
 filesystems, which should take care of most of the issues.

For metadata yes, but not for normal file data.  He wouldn't even know
what got trashed.

 The OP's question was how to make dd faster, not really how to get the 
 data across safely. :)

Sometimes you have to answer the question they should've asked!
That is what a diagnostician has to do.  Fix the cause.  Not
the symptom.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Backup methodes

2005-11-08 Thread Bakul Shah
Thomas Hurst [EMAIL PROTECTED] writes:
 * Carlos Silva aka |Danger_Man| ([EMAIL PROTECTED]) wrote:
 
  what is the best method to backup network information and local disk
  information with another disk?
 
 dump/restore performs snapshotted incremental backups of complete
 filesystems.

I have been using venti from plan9ports (a set of plan9
programs ported to unix like OSes) for the past few months
now.  See http://swtch.com/plan9ports

Features:
- backup ufs1 and ufs2 over the net to a venti server

- Initial full backup seems faster than dump's level 0 backup
  (I get about 3 to 7 MBps to a USB2 disk).

- Saves only one copy of every distinct block no matter
  which file it belongs to or how many times you give it to
  venti = less filling, more nutritious!

- Every backup is a full backup but because of the above
  feature venti stores only changed or new blocks.  This
  incremental backup works at close to max disk speed.  (I
  can back up a 30GB filesystem in under 25 minutes to a USB2
  disk).  Speed of the disk being backed up is the bottleneck,
  so you can simultaneously back up multiple disks to utilize a
  venti server's full disk/net bandwidth.  This is fast enough
  that backing up everything every night actually works!

- each backup returns a single `score'.  This serves
  as a handle to grab the same backup later on.

- you can nfs mount the backups.  *every* snapshot is
  available.  For example, /dump/my-host/2005/1105/usr.

- you can ftp browse a specific backup by giving its score.

- You can recreate the image of a partition as per a
  specific backup.  For instance 'vcat score  /dev/da0s1e'
  will recreate a specific disk image.  You can then mount it
  just like a normal disk partition.  Though I'd much prefer
  it if md(4) did this as then it can fetch data on demand.
  mdconfig -a -t venti -v venti-server -s score

It still has some warts (for example its security model
doesn't quite work well for a full restore and you have to
resort to vcat) but overall it has been a vast improvement
over dump/restore for me.  Venti can also be used to back up file
trees like tar does.  Venti is close to a Ginsu knife of
archiving:-)

-- bakul
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: help regarding : To recieve and tranmit packet th' an interface

2005-10-18 Thread Bakul Shah
 we are writing a driver for HDLC-Controller We have coded upto some extent
 and actully we are able to transmit and recieve a char buff in loopback
 (from inside a driver).

 But we want to tranmit/Rx a real packet in (mbuf structure) and test our
 code .As it is a HDLC controller does'nt have std MAC ADDRRSS . How can i
 actually achieve a packet transmition and reception .Are there some drivers
 which does the same

Look at /sys/net/if_spppsubr.c or /sys/netgraph/ng_sppp.c.
One other option is to let your driver present a simple
serial IO interface and implement higher level logic in a
user level daemon that uses a tun device to plug into the
network layer (like /usr/sbin/ppp).  Also, be sure to read
RFC 1661!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: A smarter mergemaster

2005-10-01 Thread Bakul Shah
Here is an idea for the mergemaster hackers' consideration!

By keeping /etc files in a source repository one can archive
and document all local changes.  This is useful for some of
the same reasons for which we keep sources in a repo:
recovery from mistakes, reuse of old code, checking who did
what, more than one person can make changes, tracking
history, debugging etc.

If mergemaster handled or worked with a local cvs /etc repo
that'd be very nice!  The idea is to make changes and test
them in a temp workspace and commit them *only if they do the
right thing*!  I envision a workflow something like this
(using make for illustration purposes):

cd <etc workspace>
make etc-diff   # ensure your workspace reflects what is in /etc
if resync is needed, commit them to local repo

make import # import the latest /usr/src/etc into etc workspace
make diff   # look over the changes
make any local repairs
make install# install to /etc; do mkdb etc.
check out your changes

Finally:
make commit # commit changes to local repo
OR
make undo   # if things didn't quite work, restore /etc to old state.

Roughly, the current mergemaster does the work of make
import, make diff, repairs and install.

Comments?

-- bakul
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: A smarter mergemaster

2005-10-01 Thread Bakul Shah
 : cd etc workspace
 : make etc-diff   # ensure your workspace reflects what is in /etc
 : if resync is needed, commit them to local repo
 : 
 : make import # import the latest /usr/src/etc into etc workspace
 : make diff   # look over the changes
 : make any local repairs
 : make install# install to /etc; do mkdb etc.
 : check out your changes
 : 
 : Finally:
 : make commit # commit changes to local repo
 : OR
 : make undo   # if things didn't quite work, restore /etc to old state.
 : 
 : Roughly, the current mergemaster does the work of make
 : import, make diff, repairs and install.
 : 
 : Comments?
 
 I implemented something very similar to this for maintaining all the
 etc files at Timing Solutions.  We have a tree that gets installed
 over the base OS.

 However, it doesn't easily allow for a mergemaster step since it
 installs all the files with schg set, and doesn't have three way merge
 in potential.

mergemaster just has to do a merge in a temp workspace
(initially a copy of /etc).  The Makefile can do all the schg
magic when it installs to /etc.  But this can get messy
and I don't have a clean model.

One would have to keep FreeBSD's /usr/src/etc in a vendor
branch and do a checkout -j or something.

When there is no conflict, an update goes very fast.  In case
of conflicts perhaps one can use the interactive merge
feature from mergemaster.  For files of same name but with
entirely different content, merge with the vendor branch
needs to be avoided.

Basically anything we can do to make it easy to use this
best practice would be nice & even nicer if it covers
/usr/local/etc!
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: debugging with Qemu

2005-06-08 Thread Bakul Shah
Hmm... I've used qemu a bit to debug the kernel.  Even used
it to debug a loadable module.  Here is what I did:

# qemu -s img
# cd path to where the kernel was built on the host
# gdb kernel.debug
(gdb) target remote localhost:1234
...
(gdb) l kldload
739 /*
740  * MPSAFE
741  */
742 int
743 kldload(struct thread *td, struct kldload_args *uap)
744 {
745 char *kldname, *modname;
746 char *pathname = NULL;
747 linker_file_t lf;
748 int error = 0;
(gdb) b 743
(gdb) c
Continuing.

Breakpoint 3, kldload (td=0xc1419c00, uap=0xc8105d14)
at /usr/src/sys/kern/kern_linker.c:744
744 {
(gdb) c
Continuing.
...
^C
Program received signal 0, Signal 0.
cpu_idle_default () at /usr/src/sys/i386/i386/machdep.c:1113
1113}
(gdb) detach
Ending remote debugging.
(gdb) q

I am using kqemu and qemu built from May 2 snapshot if that
matters.  This was a stock 5.4-RELEASE compiled locally
with

makeoptionsDEBUG=-g

added to the kernel config file.  The host was also running 5.4
but that should not matter.

Maybe if you describe the exact symptoms...
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: [Qemu-devel] Re: debugging with Qemu

2005-06-08 Thread Bakul Shah
 I am using kqemu and qemu built from May 2 snapshot if that
 matters.  This was a stock 5.4-RELEASE complied locallly
 with 
 
 makeoptionsDEBUG=-g
 
 added the kernel config file.  The host was also running 5.4
 but that should not matter.

Ugh...  Should've done a diff with GENERIC since the
options are needed for debugging:

--- /sys/i386/conf/GENERIC  Tue Apr 12 12:50:23 2005
+++ /sys/i386/conf/DUMBLEDORE   Mon May  9 17:51:10 2005
@@ -58,6 +58,12 @@
# output.  Adds ~215k to driver.
 optionsADAPTIVE_GIANT  # Giant mutex is adaptive.
 
+optionsKDB
+optionsDDB
+optionsGDB
+makeoptionsDEBUG=-g#Build kernel with gdb(1) debug symbols
+
+
 device apic# I/O APIC
 
 # Bus support.  Do not remove isa, even if you have no isa slots
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: FS impl.

2005-05-06 Thread Bakul Shah
   I have been trying to write my own UFS-like filesystem
 implementation for fun. I had read somewhere that UFS was developed in
 user space (correct me if I'm wrong on that one) and then moved over
 to kernel-space. I was wondering if there are any existing facilities
 in the kernel source tree that would allow me to develop an fs in user
 space easily or with a little tweaking? As of right now, I have to
 develop, compile, panic, reboot, debug etc. which is frustrating and
 time consuming.

A stub FS that directs all vfs calls to userland would be a
handy thing.  Similarly a stub disk -- one should be able
to debug support for a petabyte-size disk without having to buy
one.

As for shortening the compile/debug/panic/reboot cycle, you
can use qemu.  Once a guest os is installed on a disk-image,
you can do this:

# qemu -s disk-image
# cd /usr/obj/usr/src/sys/KERNEL
# gdb kernel.debug
(gdb) target remote localhost:1234

That is it!  No need to set up serial console or anything.

I haven't tried this but I guess this should work: If you
make the FS module a kernel module, and use qemu's snapshot
feature, after a crash you can reload from your image right
before FS module loading and go from there.

Now with a kernel module `kqemu', qemu runs approx twice as
slow as real h/w for usercode (as opposed to about 25 times
slower without kqemu).
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: C-style expression processing...

2005-04-26 Thread Bakul Shah
 I am trying to add a new feature in Gridengine
 (free/opensource) to support ex-LSF users - there are
 more and more LSF users migrating to Gridengine), and
 some requested this one:
 
 In LSF, a user can specify from the command line the
 resource requirements of a batch job:
 
  (mem = 100 || pg  200.0)
 
 Where mem and pg are variables (they changes in time,
 and the master cluster scheduler has the most
 up-to-date information). And what I need is to find
 out whether the expression is true or not.
 
 My question is, is there an expression processing
 library that can handle complex equations easily?

See
http://www.bitblocks.com/src/expr

BSD style copyright.  README has some notes to help you turn
it into what you want.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: vmware reads disk on non-sector boundary

2002-10-03 Thread Bakul Shah

 It was desired, and was sort of promised.

I never understood why removal of block devices was allowed
in the first place.  phk's reasons don't seem strong enough
to any unix wizard I have talked to.  Did the majority of the
core really think the change was warranted?  Removing
compatibility when the change _doesn't_ bring a *substantial*
improvement doesn't seem right.

How hard would it be to bring back block devices without GEOM?

Is there a write up somewhere on what GEOM is and its
benefits?  I'd hate to see it become the default without
understanding it (and no, reading source code doesn't do it).

Thanks!

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: vmware reads disk on non-sector boundary

2002-10-03 Thread Bakul Shah

phk writes:
 You are welcome to peruse the mail-archives to find out such
 historically interesting decisions.

I am aware of the technical arguments discussed via -arch,
-current  -hackers.  I just don't agree with them (seems
like most hackers who are afraid to cross you).

 You are not welcome to build another bikeshed over it.

If block devices are trivial to build with geom, they
should not have been removed until geom was in place.  Oh well.
I am not going to argue about this over and over and over
again.  But I was hoping sanity would prevail (my hopes were
raised with perl-5's removal and Julian & Bruce piping up).

 Man 4 geom is a good place to start.

Thanks. More on this in a separate email.

 There will also be a tutorial friday afternoon about GEOM
 at BSDCONeuro2002 in amsterdam next month.

Too far to travel :-)

Julian writes:
 He had some backing, for example Kirk made a good argument for removing
 them. The arguments about not being able to do devfs and geom without
 removing them are of course specious as it can and was done before
 by others.

Hmm.. I don't recall Kirk McKusick's argument for removing a
buffered block device.

 One provides a stacking system for disk geometries and layouts
 where the upper interface is the same as that provided by the actual
 disk.

Thanks!

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: -fomit-frame-pointer for the world build

2002-08-02 Thread Bakul Shah

 I tried to build some binaries with -fomit..., then tried to debug it a
 bit, and gdb shows me both backtrace stack and arguments, so I was in
 doubt a bit -- so here is my question ;-)

I can answer that.  Consider the following two functions:

f(int n)
{
int x; int y;
... // no calls to other functions, no nested
// declaration of variables
}

g(int n)
{
int x;
int array[n];   // allowed in gcc
int y;
... // no calls to other functions, no nested
// declaration of variables
}

First assume stack grows toward higher addresses.  For the
other direction just swap - and + in computing local
var addresses.

Note that when using a frame ptr (also called a dynamic
link in compiler lingo), one generally needs to do something
like this:

on entry:
push frame ptr
frame ptr = stack ptr
stack ptr += framesize

just before returning:
stack ptr -= framesize
frame ptr = pop
return

Its caller then removes the arguments from the stack.

Now notice that the framesize of f() is a constant!  To
address a local variable (or any args), one can either use
frame ptr + offset1,
where offset1 = its distance from the frame ptr,
note that for arg n the distance is -ve.
or use
stack ptr - offset2,
where offset2 = its distance from the stack ptr.

Given that the framesize is constant, we can always compute
where the frame starts from the stack ptr and hence we don't
need the frame ptr.  Debugging should also work if the
framesize constant is made known to the debugger -- usually
by storing it just before the generated code for the
function.  Consequently you don't need to save and restore
the caller's frame ptr -- hence the time savings.

But if the framesize is not a constant (such as when a
variable sized array is declared as in g()), things get a bit
complicated.  If we have a frame ptr as well as a stack ptr,
you can address x as well as y with a constant offset.  If
you remove the frame pointer, you need to use the value of n
in computing the offset of x.  Further, the debugger may find
it very difficult to locate the framesize (but it can be
done) to unravel the stack.  So you may or may not see any
time savings.

Note that there are tricks to making the framesize constant
even when there are variables declared in nested blocks.
This is done by hoisting the vars to the function level.

My view is that -fomit-frame-pointer is almost always worth
it provided gdb is smart enough to locate all the frames.

If this is not clear enough, try drawing a picture of how
the frames are pushed and popped on the stack, and stare at
the generated code until you 'get it'!
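
An easy way to convince yourself is to compile a tiny function
both ways and diff the assembler output; something like this
(file name and flags are just an example, the exact output
depends on the gcc version):

$ cat frame.c
int f(int n) { int x = n + 1, y = n + 2; return x * y; }
$ cc -O -S frame.c -o frame-fp.s
$ cc -O -S -fomit-frame-pointer frame.c -o frame-nofp.s
$ diff frame-fp.s frame-nofp.s

The difference should essentially be the prologue/epilogue that
saves, sets up and restores the frame ptr.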

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: MFC status for retransmit timer min/slop

2002-07-21 Thread Bakul Shah

 Wow.  I'm flattered.  Everyone so far thinks 200ms will be ok!

I'd still prefer the default left at 1 sec until there is
enough real testing so that people not taking part in the
test don't get surprised.  That is, dampen any potential
future oscillations in this value.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Large variables on stack

2002-07-12 Thread Bakul Shah

 In general, it's bad to use stack if the only reason you are using
 it is to seperate context, which is the point I was trying to make.
 
 OpenSSL takes this one level worse, and uses stack to avoid the
 allocation and deallocation of context structures that are copies
 of context structures translated to parameters, and back (one could
 make a similar criticism of the FreeBSD VFS descriptor mechanism,
 but at least there was a valid design reason for that one 8-)).
 
 I guess I could offer the alternative argument that buffers that
 are allocated on the stack are subject to overflow in order to
 get malicious code to execute... and that avoiding such allocations
 makes such attacks much harder.
 
 The stack is really a necessary evil to handle the call graph;
 abusing it for other reasons makes my teeth itch.  8-).

Do a Google search on Cheney on the M.T.A..  It is a
technique by Henry Baker for using a stack as a heap.  The
idea is to keep allocating on the stack until the OS says you
can't any more and *then* do a copying garbage collection.
Live objects are then copied off of the stack into a proper
heap.  The stack is now free so we reset the stack pointer to
the bottom of the stack.  The stack can be reused.  Again and
again.  I am aware of at least one Scheme compiler that uses
this technique.
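
The idea fits in a few lines of C.  A toy sketch (this is just
to show the control flow, not anybody's real collector; the
stack-depth check and the 'heap' are deliberately crude):

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

#define STACK_BUDGET	(64 * 1024)	/* assumed limit before we "collect" */

static jmp_buf restart;
static char *stack_base;
static long saved_acc, saved_n;		/* live data copied off the stack */

static void
loop(long acc, long n)
{
	char here;

	if (n == 0) {
		printf("sum = %ld\n", acc);
		exit(0);
	}
	if (stack_base - &here > STACK_BUDGET) {	/* stack grows down here */
		saved_acc = acc;			/* copy out the live data */
		saved_n = n;
		longjmp(restart, 1);			/* throw the stack away */
	}
	loop(acc + n, n - 1);		/* every call is an "allocation" */
}

int
main(void)
{
	char base;

	stack_base = &base;
	if (setjmp(restart) == 0)
		loop(0, 50000);			/* first entry */
	else
		loop(saved_acc, saved_n);	/* resume after a "collection" */
	return 0;
}

Compile it without optimization (so the recursion really eats
stack) and it grinds through the whole computation while the
stack keeps getting reset.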

The point of this reference is that large objects on the
stack are not necessarily bad.  The real answer, as usual, is
it depends.  I'd avoid using threads before I'd avoid using
large objects on the stack (I am only talking about userland
programs).  Said another way, getting thread programming
right is far harder than managing stack usage (or dealing
with stack overflow).  Just because some fool somewhere will
misuse/abuse a technique is not reason enough to proscribe
it.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: offtopic: low level format of IDE drive.

2002-07-08 Thread Bakul Shah

 One of my FreeBSD development boxes had a hernia last week when it lost
 power while writing to disk. The drive wrote out garbage to a track.
 
 I want to reformat the drive, (low level) but the bios doesn't have any
 support to do this (In the past That is how I did this).
 The machiine has 1 CD drive and no floppy..
 
 anyone with any ideas as to how one can reformat a hard drive feel free to 
 lend me a clue..

Modern drives have low level formatting done at the factory
due to drives having multiple zones (different sectors/track
in each zone) and other horrible things done to sqeeze out
many more bits of storage.  They even retired the FORMAT
opcode from ATA standard!  I think your best bet may be to
see if you can find a windows program that will reinit the
disk.  Your disk's vendor may provide such a utility, usually
mislabelled DiscWizard or something for free.

I am ready for Millipede :-)

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



linker bug?

2002-06-12 Thread Bakul Shah

In order to measure call overhead on an Athlon XP system I
compiled and ran the following program and saw some curious
results!

$ cat foo.c
#include <stdlib.h>

void func() { }

void(*funp)() = 0;

int main(int argc, char **argv) {
int i, j;
if (argv[1][0] != '?')  /* defeat compile-time optimization */
funp = func;
i = atoi(argv[1]);
for (j = i; j > 0; --j)
(*funp) ();
}
$ cc -O -fomit-frame-pointer foo.c
$ time a.out 10
a.out 10  4.11s user 0.01s system 97% cpu 4.215 total

Then I did a static link and saw the time increase by 10 seconds!

$ cc -O -fomit-frame-pointer -static foo.c
$ time a.out 10
a.out 10  14.28s user 0.01s system 96% cpu 14.759 total

nm reveals the problem.

$ cc -O -fomit-frame-pointer foo.c && nm a.out | grep func
08048490 T func
$ cc -O -fomit-frame-pointer -static foo.c && nm a.out | grep func
080481c4 T func

Here is what
void func() {}
gets compiled to:

.p2align 2,0x90
.globl func
.typefunc,@function
func:
ret

This is on a 4.6-RC system with gcc-2.95.3.  The fact that
func is aligned on a 16 byte boundary in the -dynamic case is
likely coincidental.  gcc-3.1 seems to put it on an 8 byte
boundary with -dynamic and a 4 byte boundary with -static.

So the question is: does the linker ignore alignment
altogether or did I miss some magic flag?

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: tconv

2002-05-13 Thread Bakul Shah

 # XXX broken: tconv   has been broken since at 
 least 4.4...
 # XXX Use GNU versions: apropos bc dc diff grep ld man patch ptx uucp whatis
 # Moved to secure: bdes
 #
 
 Any idea when this will get fixed?  I need this program.

I ran into the same thing a while ago.

tconv fails because lib/libmytinfo has been retired.  Seems
like there are no plans to bring it back.  There is an
ncurses port in /usr/ports/devel/ncurses but it does not have
tconv.  Also, for FreeBSD version >= 4.0 it won't compile.

As a workaround you can bring libmytinfo back from the dead
and compile tconv as follows (do this as root).

cd /usr/src/lib
cvs up -d libmytinfo -D'23 November 1999'
cd libmytinfo
make install
cd /usr/src/usr.bin/tconv
make install

This assumes CVSROOT points to a proper CVS repository.
Seems to work on a FreeBSD-4.5 system but I have not done
much testing.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: organic documentation

2002-05-03 Thread Bakul Shah

Terry Lambert writes:
 JJ Behrens wrote:
  The online documentation for PHP allows users to post comments at the end of
  every page of the online documentation.  Often times, these comments serve to
  enlighten others about various quirks of the libraries.  Perhaps doing the same
  thing with the FreeBSD handbook pages (only online) might be a good idea.
 
 The problem with this (and the similar FAQ-o-matic) approach
 is that they are very deep.
 
 In other words, you have to go through a large set of branch
 points to get to the information.
 
 Aside from the classification problem (everyone has to classify
 the same way for them to be able to get the information out),
 the human factors argue that the depth should not exceed 3 on
 any set of choices, before you get to what you want (HCI studies
 at Bell Labs confirms this number).

It is interesting to note that the plan9 people from the same
Bell Labs are using a wiki for information pertinent to
installing, configuring, and using the operating system Plan
9 from Bell Labs!

http://plan9.bell-labs.com/wiki/plan9/plan_9_wiki/index.html

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: organic documentation

2002-05-03 Thread Bakul Shah

Terry Lambert writes:
 Bakul Shah wrote:
   Aside from the classification problem (everyone has to classify
   the same way for them to be able to get the information out),
   the human factors argue that the depth should not exceed 3 on
   any set of choices, before you get to what you want (HCI studies
   at Bell Labs confirms this number).
  
  It is interesting to note that the plan9 people from the same
  Bell Labs are using a wiki for information pertinent to
  installing, configuring, and using the operating system Plan
  9 from Bell Labs!
  
  http://plan9.bell-labs.com/wiki/plan9/plan_9_wiki/index.html
 
 This is a perfect example of everyone has to classify the
 same way.

Agreeing on common convention makes it easier to collectively
evolve a document.  True for most things done by a large and
disparate group.

Also note that a plan9 person seems to act as an editor and
he does correct/omit/note wrong/misleading entries.  Wiki
just happens to be a very easy medium to share your tidbit of
knowledge.

 It also demonstrates the other problem of hierarchical
 categorization, which is that it's impossible to get a single
 document with all the information on it so it can be linearly
 searched (e.g. via a browser find text).

I agree with you here.  Frequently I prefer downloading
archived emails when I subcribe to a new mailing list and
scan through it linearly.  But there is nothing that says you
can't provide a linear editing history of the wiki or
whatever.

 since what's an important keyword or key phrase to you is
 often not important to the indexing software (simple indexing
 fails to identify phrase matches at all, and you are stuck
 with a phrase being treated as unordered keywords).

This is an open problem.  Unless search engines start
digesting documents using something like frames (ala Marvin
Minsky) to create a structured representation, you don't have
anything better.  There is only so far you can go with just
numerology (counting words, counting links to a webpage and
so on).

 A good example of why simple indexing is bad is the search
 facility for the FreeBSD mailing list archives.  The facility
 that's there is better than nothing, but it's unfortunately
 less useful than google (for example) when looking up specific
 topics and issues (e.g. try and find the OpenVRRP FreeBSD VRRP
 implementation via the mailing list search -- it's in there:
 google found it, but the local search engine didn't).

A good search facility is always welcome regardless of how
information is organized.

As I see it, you first want to make it easy for people to
contribute knowledge while minimizing the organization they
have to know and follow.  Wiki seems to strike a good balance
but undoubtedly there will be better ways to do it.  When
there is a good enough collection, it does make sense to
reorganize it in a better format.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Missing PT_READ_U

2002-03-25 Thread Bakul Shah

 } As the culprit behind PT_READ_U's demise, I'm willing to dive in
 } and help here if needed.
 }Thanks but Julian sent me a patch for 4.5 that seems to work
 }with no changes in ups.  Would be nice if PT_READ_U is put
 }back in 4.x.

 As a followup to this old thread (and as the poster of the original
 question on the ups mailing list in late Feb) I note there has still
 been no change on the RELENG_4 branch to fix this. Could we have the
 patch posted here at least so other people can use ups again (with
 signals)? I'd just apply a reverse patch from kern/sys_process.c 1.51.2.2
 to 1.51.2.1 except that I don't know if other files (apart from sys/ptrace.h)
 have been affected.

Julian Elischer's diff as applied to the 4.5-RELEASE is included
below.  With this change ups-3.37-beta4 compiled unchanged.

But note that you still can't change any registers.  If
PT_WRITE_U is added back to the FreeBSD-4.x branch, no change
is necessary to ups.  So how about it, Peter Wemm?

The other alternative is to change ups to understand
PT_{SET,GET}{REGS,FPREGS} -- this would be needed for
FreeBSD-5 in any case.  But this is not a quick change as ups
uses PTRACE_{PEEK,POKE}USER for dealing with registers and
signals and these need to be replaced something more
discriminating.  I took a quick look at it but then got
distracted.  Also, not every arch. has separate FP regs and I
didn't look deep enough in ups to figure out how to add
machine dependent code like this.

-- bakul

Index: sys/ptrace.h
===
RCS file: /home/ncvs/src/sys/sys/ptrace.h,v
retrieving revision 1.10.2.1
diff -u -r1.10.2.1 ptrace.h
--- sys/ptrace.h3 Oct 2001 06:55:43 -   1.10.2.1
+++ sys/ptrace.h1 Mar 2002 21:52:57 -
@@ -40,7 +40,7 @@
 #definePT_TRACE_ME 0   /* child declares it's being traced */
 #definePT_READ_I   1   /* read word in child's I space */
 #definePT_READ_D   2   /* read word in child's D space */
-/* was PT_READ_U   3* read word in child's user structure */
+#definePT_READ_U   3   /* read word in child's user structure */
 #definePT_WRITE_I  4   /* write word in child's I space */
 #definePT_WRITE_D  5   /* write word in child's D space */
 /* was PT_WRITE_U  6* write word in child's user structure */
Index: kern/sys_process.c
===
RCS file: /home/ncvs/src/sys/kern/sys_process.c,v
retrieving revision 1.51.2.3
diff -u -r1.51.2.3 sys_process.c
--- kern/sys_process.c  22 Jan 2002 17:22:59 -  1.51.2.3
+++ kern/sys_process.c  1 Mar 2002 23:45:18 -
@@ -257,6 +257,7 @@
 
case PT_READ_I:
case PT_READ_D:
+   case PT_READ_U:
case PT_WRITE_I:
case PT_WRITE_D:
case PT_CONTINUE:
@@ -413,6 +417,33 @@
}
return (error);
 
+	case PT_READ_U:
+		if ((uintptr_t)uap->addr > UPAGES * PAGE_SIZE -
+		    sizeof(int)) {
+			return EFAULT;
+		}
+		if ((uintptr_t)uap->addr & (sizeof(int) - 1)) {
+			return EFAULT;
+		}
+		if (ptrace_read_u_check(p, (vm_offset_t)uap->addr,
+		    sizeof(int))) {
+			return EFAULT;
+		}
+		error = 0;
+		PHOLD(p);	/* user had damn well better be incore! */
+		if (p->p_flag & P_INMEM) {
+			p->p_addr->u_kproc.kp_proc = *p;
+			fill_eproc (p, &p->p_addr->u_kproc.kp_eproc);
+			curp->p_retval[0] = *(int *)
+			    ((uintptr_t)p->p_addr +
+			    (uintptr_t)uap->addr);
+		} else {
+			curp->p_retval[0] = 0;
+			error = EFAULT;
+		}
+		PRELE(p);
+		return error;
+
case PT_KILL:
uap-data = SIGKILL;
goto sendsig;   /* in PT_CONTINUE above */

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Missing PT_READ_U

2002-03-01 Thread Bakul Shah

 As the culprit behind PT_READ_U's demise, I'm willing to dive in
 and help here if needed.

Thanks but Julian sent me a patch for 4.5 that seems to work
with no changes in ups.  Would be nice if PT_READ_U is put
back in 4.x.

Now that I think about it, ups will need to be fixed up since
the ability to write registers is lost with PT_WRITE_U gone
(have to use PT_SETREGS).  If you want to put PT_WRITE_U back
in 4.5, I wouldn't complain;-)

 Incidently, PT_READ_U didn't actually work for the case where the
 signal handlers were shared between rfork()'ed processes.

Hmm... Probably neither does ups:-)

 Do you have any suggestions as to how PT_GET/SETSIGSTATE should look and
 feel?  UPS's requirements seem pretty trivial (ie: return the handler
 for a given signal number), but that feels a bit minimalistic given that
 we have flags and a mask per signal as well.  There is also the signal
 mask as well (masks are 128 bit).

I just copy struct sigacts in my code for this.  There is no
PT_SETSIGSTATE (that would require a whole bunch of checking
for very little gain).

 On the other hand, maybe we should just keep it simple for ptrace() since
 the API is so limited.

There is time to think through API changes for 5.x.  Reporting
signal state is a small part of this!  Some random thoughts:

- should be able to get at additional registers (SSE etc. on x86).
- I'd just merge access to all registers in one register
  space.  This allows you to access any special or additional
  registers intel/amd may throw at you (ditto for ppc)
  without having to add more request codes.  This is why
  READ_U/WRITE_U were so useful.
- would be nice if the old interface of just returning one
  word was put back even for registers.  Typically you access
  a very small number in a debugger (more typically never).
- May be for reading registers there is some value in a
  read-all register interface but hardly ever for writing.
- Need a way to find out what threads exist and may be in
  what state (if 5.x had a u-page, this would be part of
  it!).
- Need PT_{ATTACH,DETACH,CONTINUE}_THREAD to deal with kernel
  threads.  Some sort of thread-id would be handy for this.
  [But I don't know how you find a particular thread]
- On a breakpoint a number of threads may stop -- if you
  allow other threads to proceed while the first thread at a
  bkpt is stopped.  Need an ability to report this as well as
  continue/step any subset of these threads.
- Inserting debugging code that is run by a particular thread
  and no one else can be tricky [ability to insert code is
  one of the strengths of ups].
- All this gets somewhat trickier (or impossible) to
  implement if you allow threads to run on multiple
  processors!
- If all this is done, it should be not too hard to add
  support (in a debugger) for debugging multi-process apps.
- Need to look at how multi-threaded apps are debugged on
  other OSes and learn from that as well.
- Need to experiment before settling on an interface.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Kernel Debugging over the Ethernet?

2002-02-25 Thread Bakul Shah

 The value of network debugging to me is not that I can
 avoid buying a serial cable (big deal), it's that I can
 do the debugging remotely.

Agreed.

 If I'm going to ssh into a local machine and debug from
 there, then I can use a serial cable.

The serial cable solution does not scale too well when you
have many people simultaneously debugging multiple kernels.
If you use 8-port serial cards on a machine and connect every
other machine's serial console to it, that machine can become
the bottleneck; and even in a lab, keeping track of all the
serial cables etc. can be a pain.  For us the ability to log
in to any test machine and debug any other test machine was very
valuable.

 The other issue is that, doing remote debugging from a
 local machine, means I have to expose my source code on
 that machine.  If I tunnel in, insteaD, well, then I'm
 not exposing the source code.

There are other alternatives if this is an issue.  For
instance on a local machine you can put a little proxy  or a
tunnel endpoint.  But see below.

   For me the biggest reason for not using any IP was to
   minimize any perturbation due to the debugger.  The fact that
   we have to steal mbufs is bad enough.
  
  I agree, especially when we will have locking etc for the mbuf queues.
  It's a pitty we can't intercept the mbuf allocate routines..
  then we could keep a couple for ourself :-)
 
 IP is so you can make it through a cisco, etc. to another
 routable segment.

Oh I know that; but the cost of that convenience seems high.

For us, with a lab full of test machines (used for simulating
and testing various IP network clouds) a non-IP solution was
preferable.

But I can see that for other situations (such as debugging a
machine colocated at your ISP, or debugging kernels in the
field (ouch!)) our solution is far from ideal.  Still, adding
a separate tcp/ip stack just for debugging (as someone seemed
to suggest) seems excessive.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Kernel Debugging over the Ethernet?

2002-02-23 Thread Bakul Shah

 Without TCP, you have to implement your own version of
 retry and ack (equivalent to negotiating a window size
 of 1), and so you have to redo what's already there.

Would be nice to have a reliable channel but in our
experience not having this was not a big deal.  The gdb
serial protocol is fairly resilient.

 The other issue with TCP is that you can set up specific
 flows in the company firewall, and also permit SSLeay
 based tunnel encapsulation from outside via an intermediate
 machine.  This isn't really required for off-site debugging,
 but it gives another option.

You are better off ssh-ing into a machine on the same net and
running gdb there.

For me the biggest reason for not using any IP was to
minimize any perturbation due to the debugger.  The fact that
we have to steal mbufs is bad enough.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Kernel Debugging over the Ethernet?

2002-02-20 Thread Bakul Shah

 On Tue, 19 Feb 2002, Julian Elischer wrote:
 
  Hi George.
  
  There was someone recently that posted that they had some sort of 
  remote debugging working over an ethernet (or at least that they ALMOST
  had it working.). I remember thinking Cool. I have however had good
  success with the serial crossover cables needed for the current serial
  debugger. I know of course that it's not as convenient but
  the serial debugger can possibly work in network debugging situations
  where the ethernet debugger is too close to the action :-)
  
  I'll see if I can find the reference in the archives...
 
 I've looked but I can't find a reference..
 maybe I was dreaming

This is the way we did it:
- add a low level console device that sends  receives
  ethernet packets of special ether type.  Packets are sent
  to an ethernet multicast address (N).
- we enhanced if_ethersubr.c to deal with packets of this
  type (when addressed to the local machine)
- if a console packet is received, it is unpacked and chars
  from it are interpreted normally.
- Interrupts were disabled only while there were outstanding
  chars to send out or while received chars were being processed.
- at compile time we hardwired a particular ethernet driver
  to act as console.
- on the remote machine a program can be run that watches for
  enet dest.addr==N and src.addr==M (machine whose console we
  are interested in).  This program extracts chars from
  packets from the host M and displays them to /dev/fd/1 and
  packages up chars from /dev/fd/0 and sends them to enet
  dest addr==M.  This gives you in effect a remote console.
  You can exit out of the program using cu-like commands
  (. in first column).  Among other things this allowed
  you to use ddb remotely.
- To connect to a particular machine from the remote console
  program we either used its ethernet address directly or via
  /etc/ethers.
- the same program can optionally open a pty and start up
  gdb (or a debugger of your choice).  Basically it just
  fork-execed a specified program.

Use of ethernet multicast allowed us to access the console
from any directly connected machine.  By not using IP we
avoided dealing with a bunch of issues and depended on fewer
things that had to work right.  Of course, security is
compromised.  But this is a given if anyone can run gdb
remotely in any case.
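
For the curious, the userland end is not much more than a bpf
filter on the special ether type.  A rough sketch (the ether
type, interface name and device path are made up; error handling
and multi-packet reads are omitted):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/bpf.h>
#include <net/ethernet.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CONS_ETYPE	0x88b5		/* assumed "console" ether type */

int
main(int argc, char **argv)
{
	/* accept only frames whose type field is CONS_ETYPE */
	static struct bpf_insn insns[] = {
		BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),	/* load ether type */
		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, CONS_ETYPE, 0, 1),
		BPF_STMT(BPF_RET + BPF_K, (u_int)-1),	/* accept */
		BPF_STMT(BPF_RET + BPF_K, 0),		/* drop */
	};
	static struct bpf_program prog =
	    { sizeof(insns) / sizeof(insns[0]), insns };
	struct ifreq ifr;
	u_int imm = 1, buflen;
	char *buf;
	int fd;

	fd = open("/dev/bpf0", O_RDWR);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, argc > 1 ? argv[1] : "fxp0",
	    sizeof(ifr.ifr_name));
	ioctl(fd, BIOCSETIF, &ifr);		/* attach to the interface */
	ioctl(fd, BIOCIMMEDIATE, &imm);		/* hand packets over as they arrive */
	ioctl(fd, BIOCSETF, &prog);		/* install the filter */
	ioctl(fd, BIOCGBLEN, &buflen);
	buf = malloc(buflen);

	for (;;) {
		struct bpf_hdr *h;
		ssize_t n;

		n = read(fd, buf, buflen);	/* must read a full bpf buffer */
		if (n <= 0)
			break;
		h = (struct bpf_hdr *)buf;	/* only the first packet is used */
		if (h->bh_caplen > ETHER_HDR_LEN)
			write(1, buf + h->bh_hdrlen + ETHER_HDR_LEN,
			    h->bh_caplen - ETHER_HDR_LEN);
	}
	return 0;
}

Sending keystrokes back is the same thing in reverse: build an
ethernet frame with the multicast destination and the same ether
type and write() it to the bpf descriptor.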

I may have forgotten a few things but this is the gist of how
it worked.  Credit for all this work goes to someone else.
We had meant to give this back to the FreeBSD community but
didn't get around to it in time and now it is not possible.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Kernel Debugging over the Ethernet?

2002-02-20 Thread Bakul Shah

  We had meant to give this back to the FreeBSD community but
  didn't get around to it in time and now it is not possible.
 
 Why not? (curiosity, not disbelief)

The company got sold before we could sort all this out and a
bunch of the original people no longer work there.  Actually
anything is possible and I will try to rattle some cages but
don't hold your breath.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Kernel Debugging over the Ethernet?

2002-02-20 Thread Bakul Shah

Forgot to add: this is a pretty straightforward thing to do
and anyone can hack it together in a few days, especially when
you have a functional spec of sorts!

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Linking libc before libc_r into application causes weird problems

2002-02-08 Thread Bakul Shah

 As you can see from my log there was no library explicitly linked with
 libc and no -lc command line option, but resulting executable ended up
 with libc recorded right before libc_r. Any clues?

I don't get this ordering problem with your test.c file on a
very recent -current.  What do you get when you use the -v
flag (not for the test case but the program (ximian) that hangs)?

Using the -v flag as in

$ cc -v test.c -o test -lc -lc_r

reveals (after snipping uninteresting stuff)

/usr/libexec/elf/ld -m elf_i386 -dynamic-linker /usr/libexec/ld-elf.so.1 -o test 
/usr/lib/crt1.o /usr/lib/crti.o /usr/lib/crtbegin.o -L/usr/libexec/elf -L/usr/libexec 
-L/usr/lib /tmp/cc6O0HGi.o -lc -lc_r -lgcc -lc -lgcc /usr/lib/crtend.o /usr/lib/crtn.o

$ ldd test
test:
libc.so.5 => /usr/lib/libc.so.5 (0x18068000)
libc_r.so.5 => /usr/lib/libc_r.so.5 (0x1811b000)

Here ./test hangs.

But
$ cc -v test.c -o test -lc_r -lc

reveals

/usr/libexec/elf/ld -m elf_i386 -dynamic-linker /usr/libexec/ld-elf.so.1 -o test 
/usr/lib/crt1.o /usr/lib/crti.o /usr/lib/crtbegin.o -L/usr/libexec/elf -L/usr/libexec 
-L/usr/lib /tmp/ccrxO4nI.o -lc_r -lc -lgcc -lc -lgcc /usr/lib/crtend.o /usr/lib/crtn.o

$ ldd test
test:
libc_r.so.5 => /usr/lib/libc_r.so.5 (0x18068000)
libc.so.5 => /usr/lib/libc.so.5 (0x18086000)

Here ./test does not hang.

So it is clear that the cc frontend sticks on -lc at the end.
To prove it:

$ cc -v test.c -o test -lc_r

reveals

/usr/libexec/elf/ld -m elf_i386 -dynamic-linker /usr/libexec/ld-elf.so.1 -o test 
/usr/lib/crt1.o /usr/lib/crti.o /usr/lib/crtbegin.o -L/usr/libexec/elf -L/usr/libexec 
-L/usr/lib /tmp/ccxNFUZi.o -lc_r -lgcc -lc -lgcc /usr/lib/crtend.o /usr/lib/crtn.o

ldd test
test:
libc_r.so.5 => /usr/lib/libc_r.so.5 (0x18068000)
libc.so.5 => /usr/lib/libc.so.5 (0x18086000)

and ./test works.

Also note that on 4.5-RELEASE, ./test core dumps whereas it
hangs on -CURRENT, so there too it misbehaves, but differently.

Elsewhere you said

 I think that ld(1) should be smart enough to reorder libc/libc_r so that
 libc_r is always linked before libc.

I don't believe this would be wise.  ld should do exactly
what it is told and link against libraries in the order
specified.  It is the frontend's (cc's) responsibility to
specify libraries in the right order.  I do think that
explicitly specifying libc or libc_r to cc is asking for
trouble (though I understand your doing it in a test case to
illustrate the problem).

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: A question about timecounters

2002-02-05 Thread Bakul Shah

 Btw, regarding the volatile thing:
 
 If I do
   extern volatile struct timecounter *timecounter;
 
   microtime()
   {
   struct timecounter *tc;
 
   tc = timecounter;
 
  The compiler complains about losing the volatile thing.
 
 How do I tell it that it is the contents of the timecounter pointer which
  is volatile, but not what it points at ?  I don't want the tc pointer to
 be volatile because it obviously isn't.  Do I really need to cast it ?
 
   tc = (struct timecounter *)timecounter;

[I see that jdp has answered your question but] cdecl is your friend!

$ cdecl
Type `help' or `?' for help
cdecl explain volatile struct timecounter *timecounter
declare timecounter as pointer to volatile struct timecounter
cdecl declare timecounter as volatile pointer to struct timecounter
struct timecounter * volatile timecounter
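
Spelled out in C (names here are just for illustration):

struct timecounter;			/* opaque for this example */

volatile struct timecounter *tc_a;	/* what it points at is volatile */
struct timecounter * volatile tc_b;	/* the pointer itself is volatile,
					   which is what is wanted above */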

-- bakul

PS: Chances are most people don't have cdecl any more.  You
can get it like this:

mkdir cdecl;cd cdecl
fetch ftp://gatekeeper.dec.com/pub/usenet/comp.sources.unix/volume14/cdecl2/part0{1,2}.Z
gzcat part01.Z | gunshar
gzcat part02.Z | gunshar
patch <<'EOF'
diff -ru ../cdecl-orig/cdecl.c ./cdecl.c
--- ../cdecl-orig/cdecl.c   Tue Feb  5 14:24:23 2002
+++ ./cdecl.c   Tue Feb  5 12:12:30 2002
@@ -57,6 +57,9 @@
 # include <stddef.h>
 # include <string.h>
 # include <stdarg.h>
+#ifdef BSD
+#include <errno.h>
+#endif
 #else
 # ifndef NOVARARGS
 #  include <varargs.h>
@@ -110,6 +113,9 @@
   void docast(char*, char*, char*, char*);
   void dodexplain(char*, char*, char*, char*);
   void docexplain(char*, char*, char*, char*);
+#ifdef __FreeBSD__
+#define setprogname _bad_bad_bad_FreeBSD
+#endif
   void setprogname(char *);
   int dotmpfile(int, char**), dofileargs(int, char**);
 #else
diff -ru ../cdecl-orig/makefile ./makefile
--- ../cdecl-orig/makefile  Tue Feb  5 14:24:19 2002
+++ ./makefile  Tue Feb  5 12:10:10 2002
@@ -13,7 +13,7 @@
 # add -DdodebugTo compile in debugging trace statements.
 # add -Ddoyydebug  To compile in yacc trace statements.
 
-CFLAGS= -g -Ddodebug -Ddoyydebug
+CFLAGS= -g -Ddodebug -Ddoyydebug -DBSD
 CC= cc
 ALLFILES= makefile cdgram.y cdlex.l cdecl.c cdecl.1 testset testset++
 BIN= /usr/lubin
EOF
make
# as root:
make install BIN=/usr/local/bin

No idea if c++decl is valid any more!

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: A question about timecounters

2002-02-05 Thread Bakul Shah

 Is C a great language, or what? ;-)

Nah, just mediocre even when it comes to obfuscation!
Have you played with unlambda?!

 The way I always remember it is that you read the declaration
 inside-out: starting with the variable name and then heading toward
 the outside while obeying the precedence rules.  When you hit a *,
 you say pointer to; when you hit [], you say array of; and when
 you hit () you say function returning.  For example:

I remember something about switching declaration reading
direction when you hit a bracket; but why bother once you
have cdecl?

cdecl declare f as array of pointer to function returning pointer to function 
returning int  
int (*(*f[])())()

It is not clear to me how to apply your rule.  It doesn't
matter though, it has gotten to the point where I can only
store ptrs to ptrs to information in my ever shrinking brain!

To the people who pointed out the cdecl port, I did look in
/usr/ports/devel but missed cdecl somehow.  Sigh... :-)

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Unix Philosophers Please!

2001-11-01 Thread Bakul Shah

  Answer 2.  All the data goes into another dimension, and comes out of
  /dev/random.

 That would be so funny... I cat /dev/random, and I get your
 files, as you delete them.  8-).

Of course you do, it is just that the bytes are in random order.

But I see that you are thinking of /dev/null as a bitbucket
for files.  Hmm... that means we can get rid of the unlink()
given an atomic rename() syscall.

mv file1 file2 dir1 et cetera et cetera et cetera /dev/null

Neat!

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Unix Philosophers Please!

2001-10-31 Thread Bakul Shah

 Please specifically define where data goes that is sent to /dev/null

The same place where /dev/random gets its data from.  Unless
your computer is owned by gummint, in which case FBI gets it
as you have to keep a copy of all output.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: precise timing

2001-09-30 Thread Bakul Shah

   On a totally unrelated subject to my sio.c message, I have a second problem. 
  I've built a computer-controlled drill, that is controlled via the parallel 
 port.  This drill uses stepper motors, at 1/2 step.  My driver software 
 implements a maximum-acceleration control algorithm that ensures that at any 
 point in time, any axis will not experience more than X m/s/s of 
 acceleration.  This keeps the drill from self-destructing. :)  Unfortunately, 
 it means I need access to a very precise timing source to issue the step 
 instructions to the motor control board.

Are you controlling the rotation speed of the drill or the
x,y,z position?  I'd guess  the latter.  Don't you also need
guaranteed real time response (which FreeBSD won't provide
you)?  I suppose if you are controlling the position (and not
the velocity) RT response won't be too critical.  At any rate
you are better off writing a device driver which can run
timing critical code while blocking out all other interrupts.
Or else between the time you measure time and supply the next
pulse, a higher prio interrupt handler may sneak in.  As
was suggested you may want to consider a dedicated cpu based
controller.  There are a number of solutions for hobbyists
(such as the handyboard, see www.handyboard.com).

Is this a totally homebrew drill or something from a kit?
I'd be interested in details (probably better offline since
the intersection of freebsd s/w hackers & h/w hackers is
tiny).

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: precise timing

2001-09-30 Thread Bakul Shah

   Hrm, I was planning on investigating the RT capabilities of fbsd after I got 
 myself a decent timer mechanism.  I was hoping they would be enough to get 
 close to RT.  I have an SMP system I can use, so 1 CPU can be dedicated to 
 the task.

I doubt even an SMP system would help.

  you are better off writing a device driver which can run
  timing critical code while blocking out all other interrupts.

   Not an option.  It would stall the whole system during the (possibly 20 
 minute) drilling operation.  Maybe it'll be possible with SMPng, but not now.

I meant blocking other interrupts only during critical
periods.  For instance, when your s/w gets control, you find
out the current time and figure out what speed you want to
set.  Then you set a timeout for the next time you want to do
this and return.  Basically you are approximating a curve,
and while doing this at a regular interval is easier, you can
also approximate with an irregular interval (use Bresenham).

But this is just a generic suggestion; I do not know enough
details to do more than that.  One other thing you can do is
to increase the clock tick rate to 1000 Hz from the default 100
Hz.
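
To make that concrete, the rearm-a-timeout pattern on 4.x looks
roughly like this; a schematic sketch from memory (see timeout(9)
and spl(9)), not working driver code:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/callout.h>

static struct callout_handle step_handle;

static void
step_tick(void *arg)
{
	int s;

	s = splhigh();		/* block other interrupts only for the
				   time-critical part */
	/* read the clock, compute the next step interval, pulse the port */
	splx(s);

	step_handle = timeout(step_tick, arg, 1);	/* rearm; 1 tick = 1/hz sec */
}

And the tick rate itself is just a kernel config option:

options		HZ=1000		# clock interrupts at 1000 Hz instead of 100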

  was suggested you may want to consider a dedicated cpu based
  controller.  Thre are a number of solutions for hobbyists
  (such as the handyboard, see www.handyboard.com).
 
   Unfortunately, money is a big factor.  So that's not an option. :/

IIRC you can buy a kit (including a two sided PCB) for under
$100.  A few years ago I built the precursor to the Handy
Board (called Miniboard) from a kit for a lot less.  It had a
68hc11E2 (with a 2k EEPROM) + you can control up to 4 motors +
a bunch of sensors and digital output control pins.  Someone
may still be selling it.

What I was thinking of was not a completely dedicated
controller.  You interface to something like the miniboard
via a serial port and do all the fancy computation on your
freebsd system and let the controller do the PWM by feeding
it precomputed parameters (at time t0 velocity v0, at time t1
set it to v1 and so on).

   It's home brew, I'll forward you more details in personal email.

Thanks!

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: adding a new function to libc

2001-05-12 Thread Bakul Shah

 Any comments, suggestions, swears concerning adding a new function, 
 strndup(), to libc?

Many very good programmers I know carry around a library of
useful functions (and usually don't bother about inclusion in
libc).  So I would suggest first you should keep this
function in your own library for a few years and *only* then,
and only if from experience you truly think it to be
generally useful should you propose it for inclusion in a
standard library.  And *when* you do that you should
a) present a correct version of the function (not a buggy one
   as you did here)
and, more importantly
b) a clear explanation of its function including how boundary
   conditions are handled.
For your own library you don't have to jump through these
hoops; this is necessary only when you want to let loose one
of your favorite functions on unsuspecting libc users!  Then
the function behavior must be fully and carefully specified.

Even for your own use, as was suggested by Valentin Nechayev,
the strangely named function strnlen(str, max) is a better
lower level function since it guarantees str won't be
traversed beyond max chars and it is likely to be useful in
more situations.  As suggested by Terry Lambert, `asnprintf'
would be another alternative.

meta-comment
Though IMHO these string functions have sprouted like weeds,
for very little added functionality.  The fact str[a-z]* is
reserved namespace should tell you how bad the situation is.
What is needed is a decent unicode string library, derived
from the collective experience with perl scripts.  Unicode
strings should be counted instead of null terminated so we
are talking about a brand new set of functions.  May be such
a library can be standardized in a future standards effort
after some experience with it.
/meta comment

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-hackers in the body of the message



Re: Interesting article.

2001-04-15 Thread Bakul Shah

  From the top level page I read hotmail handles 550,000 change
  requests a day.  Later in the article they say they have a
  5000 server farm.  That translates to 110 change requests a
  day on average per server.  If the peak rate is 10 times the
  average, that is still only about 1100 requests/server/day, or
  about one request every 78 seconds on average.  This rate seems quite low even
  when you account for multiple web page servings per change
  request  Am I missing something obvious?
 
 You neglected to deduct the number of servers that are down/rebooting from
 the 5k. :)
 
 http://www.microsoft.com/backstage/column_T2_1.htm
 
 You just can't make this stuff up

It was a back-of-an-envelope kind of figuring to make some
sense of their numbers -- not that it helped.  But even if
50% of the machines are down (we don't have data to prove
that) at any given time, the request serving rate still seems
low.  Also, the above article talks about www.microsoft.com
servers not hotmail servers.

Thanks for the url, though.  Without that I would not have
seen this gem:

Not having a one-to-one ratio of VIPs to DIPs gives
us a mixture of fail-safe and ease of maintenance,

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Interesting article.

2001-04-15 Thread Bakul Shah

  Though, a lack of good Unicode support on FreeBSD seems like
  a legitimate enough reason for the move.
 
 Yes, it would, if it were true, see /usr/ports/devel/libunicode.

One port does not make good support.  For that FreeBSD has to
have native unicode support.

 In order to determine if they really made any savings or not -- I
 notice that they've increased the number of servers at Hotmail from
 3,400 to 5,000 - you'd also have to determine how much they could have
 improved the performance by merely writing their code as an Apache
 module.

If as they claim they doubled the performance, they saved a
few mil in not having to use 10,000 servers.  My point was
they didn't save *as much money as* they could've, had they
used various performance increasing tricks we are well aware
of.

 So, was that 18 month development project really necessary from a
 technical standpoint, or only justified as a marketing cost?  Nobody
 outside Microsoft management will ever really know.

Suspect the most likely cause of conversion can be summed up
in the phrase `eating your own dogfood'.

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Interesting article.

2001-04-10 Thread Bakul Shah

From the top level page I read hotmail handles 550,000 change
requests a day.  Later in the article they say they have a
5000 server farm.  That translates to 110 change requests a
day on average per server.  If the peak rate is 10 times the
average, that is still only about 1100 requests/server/day, or
about one request every 78 seconds on average.  This rate seems quite low even
when you account for multiple web page servings per change
request  Am I missing something obvious?

In
http://www.microsoft.com/technet/migration/hotmail/hotdepl.asp
they say this:

With these major changes and few other minor ones, the
number of requests per second that could be handled
nearly doubled what the live site was experiencing.

Compared to what?  FreeBSD servers running perl based cgi
scripts?  Or converted servers running c++ based scripts?
Even if the latter, a mere doubling of performance makes me
think very likely they did not look at *all* the hotspots
(meaning carefully look at where and how much time is spent
for each request on every machine taking part, bandwidth of
network and disk etc).  And you have to keep looking since
hotspots move as you speed things up.

Though, a lack of good Unicode support on FreeBSD seems like
a legitimate enough reason for the move.

Regardless, note that doubling of the performance meant they
saved anywhere from $10M to $20M (5000 servers x (price +
maintenance of each server) - development and testing costs).
Another doubling would still save them $5M or so!  I'd take
that challenge if I can get 50% of the savings!:-)

It would be interesting to see what Yahoo has done for Yahoo
mail.

-- bakul

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: [kernel patch] fcntl(...) to close many descriptors

2001-01-29 Thread Bakul Shah

This caught my eye:

 Besides, there is no such thing as a
 perfect hash ... at least not one that has a small enough index range
 to be useful in a table lookup.

If you can get to old CACMs see `Minimal Perfect Hash Functions Made Simple'
by Richard J. Cichelli, Comm. of ACM, Jan 1980.  AFAIK gperf uses some
variation of that algorithm and may have some details.  A minimal perfect hash
function is only worth it (IMHO) when the set of input keys is mostly fixed and
the hash function is used many many times (e.g. programming language keywords).
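
For the keyword case gperf makes this painless; a tiny made-up
example (see the gperf manual for the input format):

$ cat keys.gperf
%%
open
close
read
write
fcntl
$ gperf keys.gperf > phash.c	# emits hash() and in_word_set() for these keys

But note that is an offline, build-time step, which is exactly
what you do not get with dynamically loaded syscalls.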

As for a generic & extensible syscall, you'd have to pay the cost of finding
a new minimal perfect hash function every time you load a kernel module that
implements a new system call.  AFAIK the best algo. to generate a new perfect
hash function runs in O(n log n) time -- not too bad (kernel module loading
doesn't have to be lightning fast) but even for an experimental syscall the
time to reach syscall code should be minimized if it is to ever get past the
experimental stage.  Someone mentioned discovering the syscall name-index
mapping via sysctl but that opens up a window where the mapping can change.

For what its worth...

-- bakul


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: [kernel patch] fcntl(...) to close many descriptors

2001-01-29 Thread Bakul Shah

 If you can get to old CACMs see `Minimal Perfect Hash Functions Made Simple'
 by Richard J. Cichelli, Comm. of ACM, Jan 1980.  AFAIK gperf uses some
 variation of that algorithm and may have some details.  A minimal perfect hash
 function is only worth it (IMHO) when the set of input keys is mostly fixed and
 the hash function is used many many times (e.g. programming language keywords).
 
 And even then it's seldom worth it according to the people behind the LCC
 compiler...

I'd be interested in a reference if you have one [I don't doubt you, just
curious].

I used gperf to generate such a function in a verilog parser and came to
the same conclusion but it can't be generalized to the syscall (or for
that matter database) case.  The reason it doesn't buy much in a compiler
is because what is not a keyword is an identifier and you end up doing a symbol
table lookup on it.  So you may as well just do a symbol table search for
everything (also, typically there are more identifiers than keywords in a
program so it all comes out in a wash).  This is not the case for a simple
lookup (database or syscall) -- you don't do another lookup on the same
string if the first search fails.

I agree that for a very small number of syscalls a simple hash or even
a strcmp is good enough but that doesn't scale well.  Min. perfect hash
function is about as good as it gets *if* one wants to allow for a large
number of loadable syscalls *without* a priori assignment of numbers.  With
an IANA-like central number assigning authority, preassignment of syscall
numbers is far better.

-- bakul


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message


