Linux-Development-Sys Digest #288, Volume #6     Sat, 16 Jan 99 05:14:07 EST

Contents:
  Re: - deprecated - why? (Christopher B. Browne)
  Re: Registry for Linux - Bad idea ([EMAIL PROTECTED])
  Re: Linux should not support non-free modules (David Steuber)
  Re: Making reliable profilings under linux !!!! (Nitin Malik)
  Newbie needs help =)! (JP)
  Re: Registry - Already easily doable, in part. (Frank Sweetser)
  2.2.0pre7 problem with can't find map file (Frank Hale)
  Viper V550 X driver? (Gunnlaugur Thor Briem)
  Re: Open Configuration Storage - was Registry for Linux (George MacDonald)
  USR modem (Baha Karahan)
  Re: IPMasquerading / SSH (Daniel R. Grayson)
  Re: Open Configuration Storage - was Registry for Linux (George MacDonald)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Christopher B. Browne)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: - deprecated - why?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 16 Jan 1999 05:02:07 GMT

On Fri, 15 Jan 1999 15:28:33 +0100, Toon Moene <[EMAIL PROTECTED]>
posted:
>Christopher Browne wrote:
>
>> On Thu, 14 Jan 1999 19:44:10 GMT, Alan Curry <[EMAIL PROTECTED]> wrote:
>> >One question though: did the Linux community, or any of the *BSD people, have
>> >a vote in deciding which version of ps would be standardized in UNIX98? If
>> >not, then why should we bloat up procps with a bunch of options no one will
>> >ever use just so we can claim to follow this lousy standard which we had no
>> >opportunity to discuss before it was adopted?
>
>> No, there was no vote for these communities.  Much as these communities
>> do not have direct representation with X11.  And much as there has been
>> limited representation at IETF.  And not too much at W3C.
>
>Indeed, and we do not need to.
>
>We can just embrace and enhance these standards/protocols.
>
>:-) :-)

In the long run, if free software is *actually* the "wave of the future,"
this means that the free software community needs to be prepared to help
define the standards, rather than merely being followers.

I see the smileys; I get the joke.  But anyone who is actually serious about
the notion that free software might dominate proprietary software, yet
ignores the issue that "we" have to start setting the standards,
*and paying for the process,* is pretty naive.
-- 
Those who do not understand Unix are condemned to reinvent it, poorly.  
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
[EMAIL PROTECTED] - "What have you contributed to Linux today?..."

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Registry for Linux - Bad idea
Date: Fri, 15 Jan 1999 01:56:00 GMT

George MacDonald writes:
> Anyhow john expressed a desire that we not do the system related
> aspects. It was not clear why, and I would like to know what his concerns
> are. There is always something outside my view/perspective that is
> important, and I would like to hear it.

I have no objection at all to seeing the system related aspects done.  I
just think they should be kept distinct from the application aspects.  Use
separate clients, servers, and databases for system and application stuff.
It's fine if the system and application tools have identical UIs.

My concern is that you not fall into the trap of treating the system as
just another bunch of applications.  This could be very bad for stability
and security.
-- 
John Hasler
[EMAIL PROTECTED] (John Hasler)
Dancing Horse Hill
Elmwood, WI

------------------------------

From: David Steuber <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc,comp.os.linux.advocacy
Subject: Re: Linux should not support non-free modules
Date: 14 Jan 1999 19:27:42 -0500

I don't have the original post that started this topic.  But I do have 
a comment on the subject line itself.  If Linux ever supports UDI or
any other standards that make it possible for device driver writers to 
make binary only releases, and they choose to go that route, I'm not
going to object.  If it is the only way to get a piece of hardware I
want to use to work, then so be it.  I won't be happy about it.  But
something is always better than nothing (unless it's the flu).

It is always preferable to go open source.  But if it takes allowing
closed source modules to be distributed to get commercial support then 
I will allow it.

I believe the NeoMagic chipset drivers for the video display on my
laptop are binary-only releases.  Am I happy about that?  No.  But at
least I have KDE working.

-- 
David Steuber
http://www.david-steuber.com
s/trashcan/david/ to reply by mail

"Hackers penetrate and ravage delicate, private, and publicly owned
computer systems, infecting them with viruses and stealing materials
for their own ends.  These people, they're, they're  terrorists."

-- Secret Service Agent Richard Gill

------------------------------

Date: Sat, 16 Jan 1999 01:06:46 -0500
From: Nitin Malik <[EMAIL PROTECTED]>
Subject: Re: Making reliable profilings under linux !!!!

doesn't "readprofile" do this for u?? i mean doesn't it give u the CPU
ticks?? btw, i am having some problems using it!! if anyone can help me
out with it....

Thanks,

nitin

On Thu, 14 Jan 1999, Bryan Hackney wrote:

>I use a delta TSC function for this purpose. It's do-it-yourself
>profiling sort of.
>
>The "rdtsc" instruction returns CPU ticks (on a Pentium).
>
>inline unsigned long long drdtsc( ) {
>        static unsigned long long llLast;
>        unsigned long long llNow, llRet;
>        __asm__ __volatile__ ( "rdtsc" : "=&A" (llNow) : );
>        llRet = llNow - llLast;
>        llLast = llNow;
>        return llRet;
>}
>
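
For reference, a minimal sketch of how a delta-TSC function like the one
quoted above might be used to time a region of code (this assumes a 32-bit
x86 target and GCC, as in the original; the loop is just a placeholder):

#include <stdio.h>

/* delta-TSC helper from the quoted post: returns the number of CPU ticks
 * elapsed since the previous call (Pentium rdtsc, 32-bit x86, GCC asm). */
static inline unsigned long long drdtsc(void)
{
        static unsigned long long llLast;
        unsigned long long llNow, llRet;

        __asm__ __volatile__ ( "rdtsc" : "=&A" (llNow) : );
        llRet = llNow - llLast;
        llLast = llNow;
        return llRet;
}

int main(void)
{
        unsigned long long ticks;
        volatile double x = 0.0;
        long i;

        drdtsc();                        /* prime the static llLast value */
        for (i = 0; i < 1000000; i++)    /* the region being profiled */
                x += i * 0.5;
        ticks = drdtsc();                /* ticks spent in the loop */

        printf("the loop took %llu CPU ticks\n", ticks);
        return 0;
}

Note that rdtsc counts raw clock cycles, so the numbers depend on the CPU
clock rate and on whatever else the machine was doing at the time.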


------------------------------

From: JP <[EMAIL PROTECTED]>
Subject: Newbie needs help =)!
Date: Thu, 14 Jan 1999 20:38:49 -0600

I just installed RedHat 5.2, and I cannot get X to work. It says it
cannot support my video card. Is there some kind of driver I could
download that would enable me to use my STB Velocity 4400 AGP? Also, is
this the proper forum for me to ask this?
Thank you.

--
JP


------------------------------

From: Frank Sweetser <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Registry - Already easily doable, in part.
Date: 14 Jan 1999 23:54:50 -0500

Todd Knarr <[EMAIL PROTECTED]> writes:

> And often at a company they _will_ be forced from the top, and inevitably
> there will be situations where this will be inappropriate ( maybe because
> the company admins simply didn't ever think that this situation would come
> up ). If your configuration system doesn't allow the local machine to
> ignore mandatory top-down policies, you'll end up with a situation where
> the $1k/hour consultant can't print the report the CEO _must_ have for
> the board meeting in 15 minutes, and about 15 minutes after that your
> configuration system will no longer be in use no matter how good it is
> otherwise. This would be a Bad Thing.

Hmm... I see your point.  The "administrator" section does indeed equally
span both the global network and the local machine.

However, I'm hesitant to say that a special-case check should be put in to
force the local machine's config to override global configs.  If all values
are assigned a flexible priority value (i.e., an int), then it should simply
be a matter of the admin never defining a network-wide property to have the
max value.  IOW, is it worth the extra code to hold root's hand?
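
As a rough illustration of the priority idea (all names and structures below
are invented for the example, they aren't from any existing tool): each
candidate value carries an int priority and the highest one wins, so a local
override works without any special "local beats global" case, as long as the
admin never claims the maximum priority for a network-wide setting.

#include <stdio.h>

/* one candidate setting for a key, from one source */
struct cfg_candidate {
        const char *source;     /* e.g. "network", "host", "user"        */
        int         priority;   /* higher wins; admin keeps the max free */
        const char *value;
};

/* pick the highest-priority candidate */
static const char *resolve(const struct cfg_candidate *c, int n)
{
        const char *best = NULL;
        int best_prio = -1;
        int i;

        for (i = 0; i < n; i++) {
                if (c[i].priority > best_prio) {
                        best_prio = c[i].priority;
                        best      = c[i].value;
                }
        }
        return best;
}

int main(void)
{
        struct cfg_candidate printer[] = {
                { "network", 10, "lp-main"    },
                { "host",    50, "lp-visitor" },   /* the local override */
        };

        printf("printer = %s\n", resolve(printer, 2));
        return 0;
}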

-- 
Frank Sweetser rasmusin at wpi.edu fsweetser at blee.net  | PGP key available
paramount.ind.wpi.edu RedHat 5.2 kernel 2.2.0pre5ac1 i586 | at public servers
"All language designers are arrogant.  Goes with the territory..."
(By Larry Wall)

------------------------------

From: Frank Hale <[EMAIL PROTECTED]>
Subject: 2.2.0pre7 problem with can't find map file
Date: 15 Jan 1999 05:32:46 GMT


I have 2.2.0pre7 loaded on my RedHat 5.2 box and everything is good. I
upgraded my modutils package to modutils-2.1.121.

I get the following errors on boot 

Jan 15 00:29:16 FranksPC kernel: Cannot find map file.
Jan 15 00:29:16 FranksPC kernel: Error seeking in /dev/kmem 
Jan 15 00:29:16 FranksPC kernel: Error adding kernel module table entry. 


But the system is fine. All my modules load. PPP works, I can dial up to
my ISP, mount drives, use my sound card. No other error but that.

I copied the System.map to the /boot directory after I compiled
2.2.0-pre7 and I created the symlink. But I can't get rid of that error.

Anyone else have the same error messages?

-- 
From:      Frank Hale
Email:     [EMAIL PROTECTED]
ICQ:       7205161
Homepage:  http://members.xoom.com/frankhale/
Jade:      http://jade.netpedia.net/

Windows VirusScan 1.0 - "Windows found: Remove it? (Y/N)"

------------------------------

From: Gunnlaugur Thor Briem <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware,comp.os.linux.x
Subject: Viper V550 X driver?
Date: Sat, 16 Jan 1999 00:04:53 -0800

Hi folks,

is anyone working on an X driver for Diamond's new
Viper V550 card? Anyone know whether the V330
driver will grok the V550 card?

Thanks,

        - G.

------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: Fri, 15 Jan 1999 06:55:30 GMT

Phil Howard wrote:
> 
> On 09 Jan 1999 00:52:29 -0500 Frank Sweetser ([EMAIL PROTECTED]) wrote:
> 
> | > And will you be able to copy among the config sets as easily
> | > as you can with files to duplicate an existing setup before
> | > starting to make changes?
> |
> | no reason why not to.
> 
> With the usual tools, like cp, tar, or whatever, just like you do now
> with files?
> 

Yup.

> | you're thinking a bit too specifically here.  the idea is to have the
> | configuration values returned dependent upon some external value - whether
> | it's the hostname, ip, username, domain, etc.
> 
> Such as in a way like X resources?
> 

Yup, but with the ability to write them, mark values as final, and
have them spread out across the network so you can roam around and
still use the set of your own resources you want.

> | > There is something to be said for making read() the only system
> | > call needed to get values, but using /proc will make the concept
> | > OS-specific.  Has anyone considered the Netscape style of picking
> | > up a global configuration file via http?  Everyone with an ethernet
> | > has a web server somewhere these days.
> |
> | the application itself won't directly grab the information.  it'll just
> | call the opstore_get_config (or whatever) function, which in turn consults
> | the metadata information, and from there gets the information from a flat
> | text file, http, /proc, RDBMS, or whatever other module has been defined.
> | the app won't even know where the data is coming from, let alone have to
> | really care.
> 
> Will it be able to obtain that information asynchronously or in bulk so
> that network delays won't result in lengthy configuration acquisition?
> Imagine 1 second turnaround for 1000 configuration variables obtained
> one at a time.  I've seen programs get bogged down just on account of
> delays in DNS when a lot of names have to be looked up.  And that's
> even with DNS being quite fast.  Of course /etc/hosts would help to
> speed things up, but so would an asynchronous resolver.
> 

Cache! Like DNS, but perhaps better. I was thinking about also allowing
server-side pushes to clients, perhaps even with delayed activation.
This would allow PDA-style synchronization, plus something a bit new,
"coordinated changes", i.e. every device reconfiguring at a preset time.
We obviously would need a new protocol for those features.

As for performance, I have been thinking about timestamping each config
value, thus allowing smart caching and delta updates to the cache. Also,
each time a reference is made to a non-local store, a "related object"
metadata field could be used to pull in the related objects (i.e., if you
go to a network store for a company address, grab the company telephone
number and zip code ...). This would be advisory information which could
help improve performance. One strategy is simply to keep the client's
cache as up to date as possible, so one could constantly "trickle charge"
the local cache. This would be especially useful for laptops that can
unplug at any time. Of course the rate of cache recharge should be
configurable, if it's used at all.


I suppose we could also allow batching of requests in the application
by providing API calls to request a list of config values, like the
varbind arrays in SNMP gets/sets.
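
For example (all of the names below are invented; "opstore" is just the
working name that has been floating around this thread), the call could take
a varbind-like array and return an SNMP-style error index:

#include <stdio.h>
#include <string.h>

/* one entry per requested key; value is filled in by the library */
struct opstore_req {
        const char *key;
        const char *value;
};

/* stub backend standing in for a flat file, /proc, http, RDBMS, ... */
static const char *store[][2] = {
        { "mail/smtp_server", "smtp.example.org" },
        { "mail/pop_server",  "pop.example.org"  },
};
#define NSTORE (sizeof store / sizeof store[0])

/* Fill in values for all keys in one call.  Returns 0 if every key was
 * found, otherwise the 1-based index of the first key that failed;
 * entries after the failure are left untouched (SNMP-style error index). */
static int opstore_get_many(struct opstore_req *reqs, int nreqs)
{
        int i;
        unsigned j;

        for (i = 0; i < nreqs; i++) {
                reqs[i].value = NULL;
                for (j = 0; j < NSTORE; j++)
                        if (strcmp(store[j][0], reqs[i].key) == 0)
                                reqs[i].value = store[j][1];
                if (reqs[i].value == NULL)
                        return i + 1;
        }
        return 0;
}

int main(void)
{
        struct opstore_req reqs[2] = {
                { "mail/smtp_server", NULL },
                { "mail/pop_server",  NULL },
        };

        if (opstore_get_many(reqs, 2) == 0)
                printf("smtp=%s pop=%s\n", reqs[0].value, reqs[1].value);
        return 0;
}

The real thing would go over the wire and consult the metadata to decide
where each value lives, but the calling convention is the point here.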

> If you conceptually tree structure the information, would it be possible
> to simply ask for the information at some higher level and get a data
> object (perhaps as a stream) that represents everything at that pruned
> point in the tree?
> 

Hmm, interesting idea, yeah that would be nice: when an app
starts, one could just ask for all of the app's config values. That
sounds a bit like the X resource mechanism, where they merge
all the different resource files when an app starts, in
essence providing a resource cache localized to the application.

The tree is a given; it's just a question of what the nodes in the
tree are. Fields/attributes or objects? I tend to favour objects,
as that allows metadata to be captured for *both* the attributes
and the object. An app config file can then be said to be a subtree
container of application configuration object fragments.
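
To make the "give me the whole subtree" idea concrete, here is a small
self-contained sketch (the node layout and names are invented for the
example): an application's node is just an interior node, and fetching its
configuration is one walk over everything below it.

#include <stdio.h>

/* a node has a name, an optional value, and an array of children */
struct cfg_node {
        const char            *name;
        const char            *value;      /* NULL for interior nodes */
        const struct cfg_node *children;   /* NULL-name terminated array,
                                              or NULL for a leaf node */
};

/* print every value at or below 'node', building the key path as we go */
static void dump_subtree(const struct cfg_node *node, const char *path)
{
        char here[256];
        const struct cfg_node *c;

        snprintf(here, sizeof here, "%s/%s", path, node->name);
        if (node->value != NULL)
                printf("%s = %s\n", here, node->value);
        for (c = node->children; c != NULL && c->name != NULL; c++)
                dump_subtree(c, here);
}

int main(void)
{
        static const struct cfg_node mailer_children[] = {
                { "smtp_server", "smtp.example.org", NULL },
                { "pop_server",  "pop.example.org",  NULL },
                { NULL, NULL, NULL }
        };
        static const struct cfg_node mailer = { "mailer", NULL, mailer_children };

        /* "all the app's config values" is a single subtree request */
        dump_subtree(&mailer, "apps");
        return 0;
}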


-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------

From: [EMAIL PROTECTED] (Baha Karahan)
Subject: USR modem
Date: Sat, 16 Jan 1999 05:41:37 +0000

I would like to know how to get my USR Internet non-winmodem working in Linux (duh!).
It is set up properly in isapnp with a non-shared IRQ, but it still will not work.  As far
as I know, I have the right 0x value for ttyS2/3.

------------------------------

From: [EMAIL PROTECTED] (Daniel R. Grayson)
Subject: Re: IPMasquerading / SSH
Date: 16 Jan 1999 00:55:48 -0600

[EMAIL PROTECTED] (mumford) writes:

> A while ago, Daniel R. Grayson<[EMAIL PROTECTED]> begot:
> >Greg Boehnlein <[EMAIL PROTECTED]> writes:
> >
> >> Hello all,
> >>    I've got this particularly annoying problem when SSHing out
> >> through my 2.0.36 box w/ IP Masquerading. If I'm sitting behind the box
> >> and connecting to an outside server, the SSH connection eventually goes
> >> away. This only happens when I am idle for a period of time.
> >>    I'm running SSH 1.2.26-1us from ftp.replay.com.
> >> 
> >> Any suggestions? It's a minor annoyance right now, but enough to piss me
> >> off every couple of hours.
> >
> >This has nothing to do with ssh, but has to do with a time limit for
> >automatic expiration of any masquerading connection imposed by the kernel.
> >I'm using a 2.1 kernel, but it must be pretty similar, and I haven't figured
> >out how to increase the expiration time to anything other than the default 15
> >minutes.
> >
> >In linux/include/net/ip_masq.h one sees this line
> >
> >#define MASQUERADE_EXPIRE_TCP     15*60*HZ
> >
> >which seems to set the expiration time to 15 minutes.  But changing the
> >number here doesn't help.
> >
> >In the documentation to ipchains (yes, used only with 2.1 kernels) one sees
> >an option -S for setting these times to something else, but it doesn't work.
> 
> Perhaps a trip to ye ole' man pages are in order.  The manpage for ipfwadm
> clearly states that -s must be used with the -M option.  The manpage for
> ipchains similarly states that -S must be used with -M.
> 
> # ipfwadm -M -s 7200 0 0
> changes the TCP timeout to 7200 seconds (2 hours), and doesn't touch the 
> TCPFIN and UDP timeouts.  The equivalent IPCHAINS command is 
> # ipchains -M -S 7200 0 0

Thanks for the hint, but I did read the man page about the -S option.  Here's
what I tried:

        ipchains -M -S 3600 120 300

------------------------------

From: George MacDonald <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.setup
Subject: Re: Open Configuration Storage - was Registry for Linux
Date: Sat, 16 Jan 1999 09:10:12 GMT

Christopher Browne wrote:
> 
> On Fri, 15 Jan 1999 04:03:09 GMT, Phil Howard <[EMAIL PROTECTED]> wrote:
> >Will it be able to obtain that information asynchronously or in bulk so
> >that network delays won't result in lengthy configuration acquisition?
> >Imagine 1 second turnaround for 1000 configuration variables obtained
> >one at a time.  I've seen programs get bogged down just on account of
> >delays in DNS when a lot of names have to be looked up.  And that's
> >even with DNS being quite fast.  Of course /etc/hosts would help to
> >speed things up, but so would an asynchronous resolver.
> 
> *Very* good point.
> 
> I've been following (due to other interests :-)) the developments
> relating to the OMG Working Group that is building a "standard" General
> Ledger IDL.  (This is relevant, trust me...)
> 
> They provide both:
> a) Bindings that can retrieve a single accounting transaction, as well
> as
> b) Bindings that can retrieve a group of transactions based on some
> query criteria.
> 

Another example of how this is done in another context is the
SNMP get command. One can ask for one or more values (in SNMP terms,
objects) in one request. This is done by creating an array of
"varbinds", each varbind saying which value (object) to get and
its expected type.

What is returned is an array of varbinds with the values filled in, or
an error indicating the index position at which the error occurred. Any
values after the first error are invalid.

SNMP is connectionless and limits messages to 4k packets, so only
a limited number of values can be requested at once. But at
least it's better than one at a time. For really large data
transfers SNMP punts, and people typically use tftp.

It might be nice to allow one request to name a single value, a short
list, a subtree, and/or a pattern. Perhaps others.
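
Just to sketch what "a single value, a short list, a subtree, or a pattern"
could look like as a request structure (the types and names are invented
here; they are not from SNMP or any existing API):

#include <stdio.h>

enum cfg_req_kind { CFG_ONE, CFG_LIST, CFG_SUBTREE, CFG_PATTERN };

struct cfg_request {
        enum cfg_req_kind kind;
        union {
                const char *key;          /* CFG_ONE:     "mail/smtp_server" */
                struct {
                        const char **keys;
                        int          nkeys;
                } list;                   /* CFG_LIST:    varbind-style      */
                const char *prefix;       /* CFG_SUBTREE: "mail/"            */
                const char *pattern;      /* CFG_PATTERN: a glob over keys   */
        } u;
};

/* show which kind of fetch a request describes */
static void describe(const struct cfg_request *r)
{
        switch (r->kind) {
        case CFG_ONE:
                printf("get one key: %s\n", r->u.key);
                break;
        case CFG_LIST:
                printf("get %d keys in one round trip\n", r->u.list.nkeys);
                break;
        case CFG_SUBTREE:
                printf("get everything under %s\n", r->u.prefix);
                break;
        case CFG_PATTERN:
                printf("get keys matching %s\n", r->u.pattern);
                break;
        }
}

int main(void)
{
        struct cfg_request r;

        r.kind     = CFG_SUBTREE;
        r.u.prefix = "mail/";
        describe(&r);
        return 0;
}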

> If a process just wants to look at a single "transaction," whether that
> be details of corporate operations or what is the IP address
> corresponding to a host name, it is reasonable to use the "single
> retrieval."
> 
> It is equally important to make sure that there's a way of getting data
> across the "wire" in bulk, in case we're looking for a whole lot of
> information.
> 
> This allows us to cope with slow latency.

This is an interesting problem. It's faster and reasonable to use
connectionless protocols for short/small messages (less than one packet).
It's faster to use connection-based protocols for large amounts of
data. It might be interesting to build a mechanism that does
both and picks the correct one!

b.t.w. I have implemented both kinds, plus a reliable messaging
API using sockets. As you say, latency is the big thing to look
out for. Another subtle thing to watch out for is doing transactions
across a stream socket. If they are synchronous, the performance
drops like an anvil on Jupiter.

> 
> If it takes a second to get data "across the Great Divide," that is a
> pretty bad situation, regardless of how we handle it.  If that latency
> gets associated with *every key,* then that is *REALLY BAD.*

If we want to make it really slick, then the smart way is to use
a trickle cache(tm).

> 
> By having a message queue (ala MSMQ/TIB/OM3/...; I'm assuming *some*
> magic here; this corresponds to the "asynchronous resolver") and
> queueing requests, we *might* get some parallelism, but if that results
> in there being thousands of across-the-wire requests being queued up, I
> suspect that this just puts off the problem for a few minutes until the
> messaging system crashes because the queue got up to 150,000 queries,
> and the memory consumption from the promises made blew out swap space.
> Thus asynchronous resolution doesn't *necessarily* solve the problem.
> It certainly doesn't if "putting off resolution" consumes some memory,
> and we do it tens of thousands of times...

On 10 Mbit ethernet you can get at least 200 kbytes/sec on a stream
socket (application to application). Anything gained by parallelism
would most likely be lost in the reassembly of the messages. That might
be more useful on a Windblows box; I don't think you will get much
benefit on Unix/Linux (based on measurements).

Stream sockets also handle the buffering issue, i.e. the sender fills up
local buffers, then remote buffers, and then either blocks,
detects the full pipe, or gets a signal.

I'm not familiar with MSMQ/TIB/OM3. What do they give you over
normal sockets?

> 
> If, instead, we can pre-bundle up 20,000 of those key queries into a
> single request, and get 1 second latency plus a couple of seconds of
> "variable cost" associated with adding a whole whack of data to the
> request (going in both directions), the latency is still kind of ugly,
> but it's probably *workable.*
> 
> In the above bit, I'm assuming some hi-tech stuff like CORBA and
> message queueing; the issues should still hold true if we're talking
> about having some processes talking through sockets with less
> sophistication involved.

Actually, CORBA would limit your ability to pick the connection style,
and hence will increase your connection times for a single request.
Well, I should say IIOP; other ORB protocols could be implemented.

The real problem is that useful knowledge about efficiency lives
layers above the level where it can be used.

Now that we have objects, attributes, transactions ... we
can define relations between them. Then useful metadata within
the frame of reference and the particular context can be captured.
This would allow for greater flexibility in leveraging
transactional patterns.

It would be nice to build in a transaction engine that can do
throttling, priority handling ... A lot of this is defined
in CORBA Services and Facilities but is not yet freely 
available on Linux/Unix.

We could either build those services/facilities or code the
equivalent. I have some recent experience with these kinds
of services ;-), so I don't have to think too much about how
to do it.

I wouldn't mind building that stuff, in fact I have a few ideas
that I haven't seen elsewhere that I think would work out
quite well. Hmm, first things first.

> 
> A different example of the same thing:
> 
> My ISP has been having some trouble lately with INN due to some recent
> patches that make building the news feed more efficient, but
> unfortunately makes it hard for users to actually get connections.
> Similar sorts of stuff.
> 
Yeah, INN, NNTP are interesting examples of distributing info!

> The NNTP protocol includes commands to request articles, which generally
> is a one-by-one thing.  Commands to request information about newsgroups
> and articles tend to offer options to pull lots of "keys" at one shot to
> allow them to be "batch processed," which is a lot more efficient.
> 
> Overall points:
> 
> 1- Despite some adverse comments above, message queueing at some level is
> probably a useful idea.
> 
Are these "messages" persistent across system reboots?

> Microsoft has MSMQ, which definitely qualifies as a "heavyweight
> implementation;" Linux has no freely available equivalent.
> 
> (See <http://www.hex.net/~cbbrowne/tpmonitor.html> for some commercial
> options...  As near-coincidence, I talked today with one of the
> marketing guys at Level8, vendors of FalconMQ, a UNIX-based system that
> interoperates with MSMQ.  No Linux version yet...)
> 
> It would be nice to have some options of varying weights, between:
> 
> - "it's working with local data, so it's basically a simple set of
> arrays..." approach that works internally to a process, to
> 
> - something that uses sockets to let processes talk on a single host, to
> 
> - something heavier weight using CORBA that provides lots of
> functionality and is certainly able to cope with multiple hosts...
> 
> 2 - While there will certainly be differences in performance between
> lightweight implementations that sit within local processes, or, at the
> far opposite end, processes that marshall arguments to push data
> requests using CORBA through IIOP, there *is* a cost to doing single key
> requests, which mandates having some way of doing bulk data transfers.

Multi-modal messaging(tm) -> Linux Subspace relay station 3Qh4-I59 

b.t.w.

I have started discussions with the GNU team (Richard Stallman) regarding
the project. He has an interest in doing an "application configuration"
project for GNU.  His focus is a little different from what we have been
discussing, but there is also much in common. I'll keep you posted regarding
its progress.

Anyone object to doing a GNU project?  opStore -> Gnustore ?

-- 
We stand on the shoulders of those giants who coded before.
Build a good layer, stand strong, and prepare for the next wave.
Guide those who come after you, give them your shoulder, lend them your code.
Code well and live!   - [EMAIL PROTECTED] (7th Coding Battalion)

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
