So for business, you could set up request-for-tender systems; contracting companies would need to apply to be added to a tender approval list.

Then, from those early beginnings, a project could be estimated, quoted, prepared, executed and completed using separate company systems that interact with each other through standard service interfaces, sharing information and combining to produce dynamic, complex systems that can track and link all relevant information.

But that could be a pipe dream. Other applications might be distributed social networks, complete with graphics, video, textual chat and other interfaces. This sort of software could be developed and advanced rapidly, with no reliance on prior protocol deployment or on different protocol versions and upgrades; Java platform deployments are relatively long lived. The trick will be to make UIs simple and intuitive. Service UIs don't have to be dynamically downloaded for every service, and they don't need to originate from the underlying service: a service UI could be common to many globally distributed services with an identical Service API. An app service might simply provide downloads of service UIs (as signed jars) for other popular services. The service UI actually appears as the application to the user; it might consume a number of services, which could be dynamically discovered.
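A minimal sketch of that idea in plain Java (no River APIs; the interface and class names here are made up for illustration): the UI is written once against a shared service interface, so any discovered service implementing that interface can sit behind it.

```java
// Hypothetical shared Service API -- many independent services could implement it.
interface QuoteService {
    double quote(String item);
}

// The "service UI": coded only against the interface, never a concrete backend,
// so one UI can front any number of services with the identical Service API.
class QuoteUI {
    private final QuoteService service; // could be any dynamically discovered proxy

    QuoteUI(QuoteService service) {
        this.service = service;
    }

    String render(String item) {
        return item + ": $" + service.quote(item);
    }
}
```

Swapping in a different `QuoteService` implementation changes the backend without touching the UI code at all.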

Just some thoughts; I think the possibilities are limited only by imagination, time, ability and a few Java platform limitations.

Cheers,

Peter.

Peter Firmstone wrote:
Well, I think it's possible to do most things with web-based services, using style sheets, HTML and JavaScript on the client and Java or mod_perl on the server, usually with an SQL database behind that. What happens, though, is that as you develop this it becomes unmaintainable: the Java or Perl code gets tied to the database structure, the JavaScript might do some local client processing before submitting forms, and so on. Then you start looking for something to abstract the database, but everything becomes very difficult to manage and support across different browsers due to fragmentation. So you try to standardise your IT deployment environment, and a forklift upgrade cycle develops, with a system upgrade period where nothing works properly. It's the old three-tier server and client model, tied to HTML and TCP/IP.

But this is what people know, and it's majority rules.

I guess you could argue that Java has become fragmented too; if you consider Android a form of Java, you've got Java EE, SE, ME CDC and CLDC, Blu-ray, embedded SE and RTSJ, and all the various versions.

But then if you settle on Java SE, use JERI and CORBA for C/C++ back ends, and use the Surrogate architecture for the ME stuff, you're pretty well covered.

But that's not what this is about. By using interfaces, services can become mix-ins, or be dependency-injected at runtime, so it no longer matters to the client what the server does, where or how it stores its data, whether it's served by a legacy back-end system, what communication protocol it uses, and so on. It's not about what they can do versus what we can do; if that's what we focus on, we've already lost. It's about maintenance cost.
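As a toy illustration of that decoupling, in plain Java with hypothetical names: the client holds only an interface, and a `java.lang.reflect.Proxy` can stand in for any backend, so storage, protocol and location stay invisible to the client.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// The only thing the client ever sees.
interface Inventory {
    int stockLevel(String sku);
}

class InventoryClientDemo {
    // The handler behind the proxy could talk to SQL, a legacy system, or a
    // remote service -- the client can't tell, because it only holds the interface.
    static Inventory proxyFor(InvocationHandler handler) {
        return (Inventory) Proxy.newProxyInstance(
                Inventory.class.getClassLoader(),
                new Class<?>[] { Inventory.class },
                handler);
    }
}
```

Replacing the handler swaps the whole backend with no change to client code, which is the maintenance-cost argument in miniature.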

It costs more to maintain business software than it does to write it. Now, most programmers might tell me I'm talking crap; however, I'm not a professional programmer and have a different perspective, which might be a breath of fresh air, or just totally wrong. I'm a Mechanical Project Engineer / Project Manager, working with mega machinery (Big, Bigger, Biggest, the shows you see on the telly). Software must adapt to business, and business processes must be made as efficient as possible, with as little bureaucracy as possible.

But too often in business I see the tail wagging the dog. IT, HR, Safety, Legal and Accounting departments are support infrastructure; they're supposed to support the departments that produce or bring in income, be they manufacturing, maintenance, or customer sales and service. They are supposed to find ways of making innovation possible, but too often they unknowingly stand in its way. Too often the power lies with those who control infrastructure, and everyone else tries to work around documented policy and process, simply because the infrastructure doesn't fit the business. Typically, IT software and hardware companies deliberately introduce incompatibilities into their systems as differentiating features, especially if they've gained a large market share; it becomes part of their market strategy, along with patents and copyright laws: intellectual property. I once heard someone say: "Ideas are worth nothing if you don't have execution."

Workers used to worry about automation putting them out of work; today, however, complex government legislation has created more work than automation ever replaced. It's how rich countries keep their populations employed. Money that changes hands (payroll) is highly taxable, and money itself is partly an illusion: when lending is high, or there's an international trade surplus increasing the availability of money, we have booms, and when there's a shortage, busts. Countries trade in treasury certificates, IOUs, and it's bad (for both countries) when a country can't repay the interest on its IOUs. Money is created in production and destroyed in consumption. You'd think that government would legislate the amount of available credit based on international trade surplus/deficit and GDP, then tax credit creation at a fixed rate (requiring a referendum and vetting by a broad knowledge base to alter), abolishing all other taxes. It would stop the boom-bust cycle, you wouldn't need the complexity of current taxation systems, and interest rates would be based on competition between lenders, not monetary policy. Government would be forced to promote production and innovation to increase its tax base, rather than creating new taxes and legislative complexity as it does currently.

But I'm in danger of veering off topic into fantasy land, and maybe even talking crap.

Using services is a way to create systems that plug in, connect data silos together, join the old with the new, and adapt.

I see the hidden costs of software systems unable to adapt to business changes. It's tough in business; it is dog eat dog. To be successful requires good people skills and an understanding of your market, your customers and your competitors. You need your support departments fighting for you, not with you. IT is the support department that supports all other departments, and it can be a big impediment to progress or an enabler.

To answer the question about the internet: everything is connected. As a business grows, it branches out and opens up in new locations; the internet is a communication channel.

If River can't communicate across, and traverse, the internet, it becomes a legacy data silo: it can't expand with the business, and it can't fully service the business's needs.

River could be the glue that works with everything else, or it can remain in a niche and be consigned to history.

River still has some Java platform warts, but they're not as bad as the warts of the HTML, SQL and JavaScript worlds. Systems need to be efficient and clean; even the simplest tasks can become daunting when dealing with some of today's web pages. Computers can automate and simplify complex tasks, or they can make the simplest task a nightmare. It depends on the implementer.

That's why I contribute development time: to use River in my business, it needs to do some basic things it can't presently do. To do things no one else can: adapt quickly for a competitive edge.

Cheers,

Peter.

Gregg Wonderly wrote:
This is one of those places where Jini's power, using mobile code, creates enough "necessary" overhead that people familiar with other forms of "marshalling" start to wonder, "why would you do that, then?" I think it's important to look at what "external mechanisms" Jini is using now, and start looking at providing other forms of "marshalling" at the InvocationLayerFactory level.

Simple "document transfer" is what many people seem to feel is "tenable" for them in enterprise-level systems. I've long argued that the Jeri ILF is actually just like a "document transfer", in that the method arguments are sent, in a package, to the server, whose "invoke" action is passed this "document". The remote server then processes the document and returns it, potentially with a "hyperlink" in the form of a remote reference, or just the resultant "value". The type information available from the result is the "complete, self-documenting" description: it tells you what you have and what you can do with it.
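That "document" view can be made concrete with plain Java serialization (a sketch only, not the actual Jeri wire format): the method name and arguments are packaged into one serializable object, shipped as bytes, and unpacked on the other side.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A marshalled call: method name plus arguments, as one self-contained "document".
class CallDocument implements Serializable {
    final String method;
    final Object[] args;

    CallDocument(String method, Object... args) {
        this.method = method;
        this.args = args;
    }

    // Package the call into bytes, as if handing it to a transport.
    byte[] toBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(this);
        }
        return bos.toByteArray();
    }

    // The server side: unpack the document before processing it.
    static CallDocument fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (CallDocument) in.readObject();
        }
    }
}
```

The server's "invoke" step then amounts to reading the document, dispatching on `method`, and returning a result document the same way.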

It's this simple view of the "Jini transport" that would enable a lot of different possible mechanisms to be used at the ILF layer. Because I've never really had the need for anything else, I don't have anything in production different from the standard Jeri ILF. But I did, at one point, create an ILF that did MODBUS-over-TCP, as an exploration of what it would mean to move something I could do at the "service layer" via a "delegation model" into a lower-level interface.

What I found was that there wasn't a "distinct" advantage, so I threw that stuff away and just kept it at the service implementation level. This is one of the things I found important to understand about Jini: it works well as a layer of communications and unification of interface, enabling features that are not what the "service" is about, but rather what "distributed systems" are about. So, as a tool set, it works well for that specific task.

Things like Rio, JavaEE integration, real-time systems monitoring, etc., are the "domain"-targeting mechanisms that enable specific types of system "construction", which in turn can enable specific kinds of "features". Rio lets you build lots of "small" or "large" services and get all the dynamic, built-in life-cycle management features that a large enterprise environment needs. The Harvester and other such systems provide ways to use Jini features inside a JavaEE environment, to take advantage of "both" tool sets together. The dedicated-solutions world, which has plagued the Jini platform with no "demonstrable" users, is what we've always held up and waved around, saying, "people feel it's so valuable to them that they don't want their competition to 'see' or 'know' how they are using it."

So, there is the whole other side of the internet, on untrusted networks, where people are constantly using the "Web" for their transactional, data-transport systems and models. I'm not sure where Jini fits in that world, without some very specific, dedicated systems that do stuff the web can't do. Looking for some of that "lone" fruit to pick is what I'm not sure about.

What kind of transactional, leased or other data services could you imagine Jini being a key part of on the Internet?

Gregg Wonderly

On 1/27/2012 7:04 PM, Peter Firmstone wrote:
I've been thinking about the practicalities of a djinn running on untrusted networks (the internet). The first thing that springs to mind is that security is much simpler if people can get away with only "dumb" or reflective proxies.

I'd like to see the default security setup require DownloadPermission.

If we sign our download jars (a number of developers could do this, requiring at least this group of signers), a standard policy file template could include a certificate grant for DownloadPermission, allowing anyone to load classes from a standard River download proxy.
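A policy-file fragment along those lines might look like the sketch below. The keystore path, signer alias, and permission target name here are assumptions, to be checked against the net.jini.loader.DownloadPermission javadoc and the chosen signing setup:

```
// Hypothetical keystore holding the River signers' certificates.
keystore "river.keystore";

// Allow class download only from code signed by the trusted signers.
grant signedBy "river" {
    permission net.jini.loader.DownloadPermission "permit";
};
```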

This gets our smart proxies out of the way.

Then all developers need to worry about are Principals and MethodConstraints, allowing people to get started using River with reflective proxies over the internet.

Later, if people want to get into smart proxies, that power is still there; this change prevents unauthorised class loading.

Cheers,

Peter.
