On Sun, 2005-09-04 at 11:15 -0400, Kevin Toppenberg wrote:
> I am working on document imaging.  And more specifically, transferring
> images (i.e. binary data) to and from a Windows client, written in
> Delphi pascal.
> 
> I have become convinced that transferring the file through the RPC
> Broker is the way to go:
> 
> 1. It doesn't require the server to set up any sort of file server to
> the client (i.e. ftp server, or exposed file system)
> 2. It avoids potential security problems, such as users requesting
> images/files that they should not be accessing.  The server function
> can screen the client requests
> 3. It would use the already-established connection, even when the
> client is outside the server's usual network.  With the new xinetd
> connection method, it can no longer be assumed that the client
> already has access to the server's local network.
> 
> Having decided this, I therefore need to establish binary file
> routines on the server and with the RPCBroker.  To this end, I have
> written the following code:
> 
> 1. BFTG -- BINARY FILE TO GLOBAL.  It works just like the Kernel
> function FTG, except that it handles binary data.  And FYI, M globals
> appear to hold binary data with no problems.
> 2. GTBF -- GLOBAL TO BINARY FILE.  Again, just like Kernel GTF.
> 3. CPYBG -- a binary global resizer.  The Kernel OPEN function
> hard-codes a record length of 512 bytes.  This might not be the
> best size for passing through RPCBroker, so this function copies
> the binary global to a different global using a different record
> (line) length.
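
As an aside, the chunking and re-sizing those routines do can be
sketched like this (Python here purely for illustration -- the function
names are mine, not the actual BFTG/CPYBG M code):

```python
# Illustrative sketch (not the author's M code): split a binary file
# into fixed-length records, the way BFTG breaks a file into global
# nodes, and re-size those records, the way CPYBG does.

def file_to_records(data: bytes, record_len: int = 512) -> list[bytes]:
    """Split binary data into fixed-length records (last may be short)."""
    return [data[i:i + record_len] for i in range(0, len(data), record_len)]

def resize_records(records: list[bytes], new_len: int) -> list[bytes]:
    """Copy records into a new list with a different record length."""
    return file_to_records(b"".join(records), new_len)

data = bytes(range(256)) * 5          # 1280 bytes of arbitrary binary data
recs = file_to_records(data, 512)     # three records: 512, 512, 256 bytes
resized = resize_records(recs, 100)   # same bytes, 100-byte records
assert b"".join(resized) == data      # round-trip preserves the data
```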
> 
> I have then written an RPCBroker call that attempts to pass back the
> binary data.  I think I am writing my function call correctly, because
> when I pass ASCII data, it comes through, but when I pass binary data,
> it doesn't come through properly to the Delphi/Windows client.  I have
> yet to test exactly which part of the character set messes up
> RPCBroker.

It has nothing to do with the character set, and everything to do with
the Delphi RPCBroker client.

If you look at xwb/Trpcb.pas in the CPRS source, you will see on line
(in my copy) 808 the start of TRPCBroker.Call.

This code eventually uses pchCall in the same file, starting at line
973. The comments identify this as the lowest level possible RPC call w/
the RPC Broker.

It looks like anything above pchCall will definitely not work, as the
data is converted into a string or a string list; however, you should
be able to get it to work using pchCall itself, as that just returns a
PChar.
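
To see why routing binary through string handling is hazardous, here is
a quick illustration (Python standing in for the Delphi side; whether
this is the exact failure mode in Trpcb.pas is my assumption, not
something I have traced):

```python
# Illustrative sketch of why C-style string handling corrupts binary
# data: a NUL (0x00) byte terminates the string, so everything after
# it is silently lost.  This mirrors the general hazard of treating
# the broker result as text; the precise Delphi behaviour above the
# pchCall level is an assumption here.
import ctypes

payload = b"GIF89a\x00\x01trailing-bytes"          # binary with an embedded NUL
as_c_string = ctypes.create_string_buffer(payload) # copy into a C buffer
recovered = ctypes.string_at(as_c_string)          # read back as a C string
print(recovered)   # b'GIF89a' -- truncated at the first NUL byte
assert recovered != payload
```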

> 
> So to address this problem, I had decided to use some sort of ASCII
> armouring (encoding) of the binary data to get it to pass through
> successfully--such as simple hex encoding, or even uuencoding.
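
For what it's worth, both flavours of armouring are trivial to
demonstrate (Python shown only to illustrate the idea; base64 is the
modern stand-in for uuencoding):

```python
# Illustrative sketch of ASCII armouring: encode arbitrary bytes into a
# 7-bit-safe string before sending, decode on the other side.  Hex
# doubles the size; base64 adds only about 33% overhead.
import base64

payload = bytes(range(256))            # every possible byte value

hex_armor = payload.hex()              # e.g. '000102...ff'
assert bytes.fromhex(hex_armor) == payload

b64_armor = base64.b64encode(payload).decode("ascii")
assert base64.b64decode(b64_armor) == payload

print(len(payload), len(hex_armor), len(b64_armor))   # 256 512 344
```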
> 
> Maury pointed out that I should post my overall purpose here and see
> if I am working on the wrong problem.
> 
> So here are my questions:
> 1. Is there some good reason that I should not be passing binary data
> through RPCBroker?
> 2. Could RPCBroker be beefed up to handle binary data?

It already does. We use it very successfully to transfer progress notes
in languages that use a multi-byte character set (MBCS).

> 3. Any other thoughts?

Mostly, I would question the need/desire to store this data in VistA,
and access it via the RPC Broker.

I understand the three points you made above; however, it seems to me
that using some form of https-accessible webserver (with a password)
makes this far more manageable from the client end, and far more
scalable. Tasking VistA with transferring potentially large binary
files down the RPC Broker connection seems somewhat like using a
sledgehammer to tap in a push-pin. Not to mention it is not nearly
secure enough for use. It is great that you provide some 'security' by
deciding whether you will give a user a file, but you could provide the
same 'security' by deciding whether you will give a user a URL. And
that security goes right out the window the minute you transfer the
file over the unencrypted RPC connection. Of course, this is also a
more general issue facing the current RPC method, so it is not specific
to this case at all.

It would seem that using URLs + https + authentication is the better
way to handle this, as it also 'solves' all three points brought up,
but (imo) in a better, more secure, and far more scalable fashion.
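
The shape of that model is simple enough to sketch (Python here, and
everything in it -- the token, paths, and in-memory file store -- is
hypothetical, not part of VistA; a real deployment would sit behind
TLS and a real authentication scheme):

```python
# Illustrative sketch of the URL + authentication model: the server
# decides per-request whether this user may fetch this file, and the
# binary data passes through untouched, with no ASCII armouring needed.
from http.server import BaseHTTPRequestHandler, HTTPServer

AUTHORIZED = {"secret-token": {"/images/scan1.bin"}}   # token -> allowed paths
FILES = {"/images/scan1.bin": bytes(range(256))}       # fake binary store

class ImageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "")
        if self.path not in AUTHORIZED.get(token, set()):
            self.send_error(403)      # screen requests, as point 2 asks
            return
        body = FILES[self.path]
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)        # raw bytes, byte-for-byte
```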

--Todd



_______________________________________________
Hardhats-members mailing list
Hardhats-members@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/hardhats-members
