[Pipet Devel] summary for web

Brad Chapman chapmanb at arches.uga.edu
Sat Apr 1 12:58:16 EST 2000


[snip... my 8 step idea for node sharing between remote locations]

Jarl wrote:
> If you want to accomplish 1) you do a ReteiveInfo::uriStatus() on
> the BL.
> 5) will work by Representation::upload()

Agreed. I think that points 1 through 5 are well taken care of in the 
idl for dl->bl communication.


> What we'll need ain't a DL <-> DL communication line, but a BL -> DL
> one (as opposed to the DL -> BL communication we've described in
> vsh-pilot.idl).
>
> This way all authentication will remain at one layer; otherwise
> we'll get some serious synchronising problems.
>
> Please do doubt & discuss my opinion!

I've been thinking a lot about this and I like the idea; in fact, I
want to extend it a level further to the following (probably
controversial) proposal:

All communication between vsh implementations at different locations 
should occur in the definition layer.
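
Just to make that concrete, here is a very rough sketch of the split I
have in mind, written as Python stand-ins rather than idl. Every
interface and method name below is invented for illustration -- none
of it comes from vsh-pilot.idl or from any existing code:

from abc import ABC, abstractmethod

class RemoteDefinitionLayer(ABC):
    """The only thing a vsh at another location ever talks to."""

    @abstractmethod
    def authenticate(self, credentials: str) -> bool:
        """Authenticate the calling dl, once per remote session."""

    @abstractmethod
    def submit_workflow(self, workflow_xml: str) -> str:
        """Accept an XML workflow description and return a job id."""

    @abstractmethod
    def node_status(self, job_id: str) -> str:
        """Report the progress of the submitted nodes."""

    @abstractmethod
    def fetch_results(self, job_id: str) -> str:
        """Hand the finished results back to the calling dl."""

class LocalBrokeringLayer(ABC):
    """Purely local: scheduling and processing, no networking at all."""

    @abstractmethod
    def process(self, workflow_xml: str) -> str:
        """Run a workflow description and return its results."""

The only point of the sketch is that every network-facing operation
hangs off the definition layer; the brokering interface never sees a
hostname.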

First let me describe how my 8 step thingy would work under this new
idea and then mention what I think are the advantages of the approach.
(I'll give a code sketch of the steps right after the list.)

Okay, as before, the first 5 steps would occur, and the definition 
layer would pass an XML description of a workflow diagram containing a 
remote node (on Jeff's computer, in my example) to the processing 
layer.
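
As an aside, here is one purely hypothetical shape that XML
description could take; the element and attribute names are made up
for this message and are not a real schema. The snippet just shows
that the brokering layer only needs to be able to spot the remote
nodes:

import xml.etree.ElementTree as ET

# A made-up workflow: node 2 lives on Jeff's computer.
workflow_xml = """
<workflow name="example">
  <node id="1" type="local">
    <program>local_analysis</program>
  </node>
  <node id="2" type="remote" location="jeffs-computer.example.org">
    <program>expensive_analysis</program>
    <input from="1"/>
  </node>
</workflow>
"""

# Everything else about the network is the definition layer's problem.
root = ET.fromstring(workflow_xml)
remote_nodes = [n.get("id") for n in root.findall("node")
                if n.get("type") == "remote"]
print(remote_nodes)   # ['2']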

6. The brokering/processing layers would set up their processing path 
and start processing.

7. When the information from the remote node is needed, the brokering 
layer will send a message to the definition layer (through the 
as-yet-undefined bl->dl idl that Jarl proposed) asking it to send the 
XML to the remote node for processing.

8. The definition layer on my computer will contact the dl on Jeff's 
computer and authenticate again.

9. My dl will then send the XML to be processed to Jeff's dl.

10. Jeff's dl will pass the XML to Jeff's brokering/processing layer, 
which will do the computations and return the results.

11. My dl will be able to query Jeff's dl about the progress of the 
nodes, fetch the results when they are finished, and pass them back 
to my processing layer.

12. The results will be available on my computer through the dl->bl 
idl (vsh-pilot.idl) as if they had been executed on my computer 
instead of Jeff's.
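
To tie steps 6-12 together, here is the walkthrough as plain Python,
using the same invented names as the sketch above. RemoteDL is just a
stand-in for Jeff's dl; in the real system the calls would go through
the bl->dl idl and then dl-to-dl across the network:

import time

class RemoteDL:
    """Stand-in for the definition layer on Jeff's computer."""

    def authenticate(self, credentials):
        return credentials == "let-me-in"          # step 8

    def submit_workflow(self, workflow_xml):
        # steps 9-10: Jeff's dl hands the XML to his bl/pl and keeps
        # the results around for me to fetch later.
        self._result = "<results>...</results>"
        return "job-1"

    def node_status(self, job_id):
        return "finished"                          # step 11 (progress)

    def fetch_results(self, job_id):
        return self._result

def run_remote_node(credentials, workflow_xml):
    """What my dl does when my bl asks for a remote node (step 7)."""
    remote = RemoteDL()
    if not remote.authenticate(credentials):       # step 8
        raise RuntimeError("authentication failed")
    job = remote.submit_workflow(workflow_xml)     # step 9
    while remote.node_status(job) != "finished":   # step 11
        time.sleep(1)
    return remote.fetch_results(job)               # step 12: back to my bl

print(run_remote_node("let-me-in", "<workflow/>"))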

Okay dokee, that's my proposal. Let me tell you what I feel are the 
advantages of this:

a. All authentication between remote vsh implementations occurs in one 
layer, the definition layer (this is the point that Jarl made).

b. The processing and brokering layers don't have to worry about 
networking at all and can be optimized purely for processing speed 
(note: I am not talking here about "independent" GMS or Overflow 
programs, which can support any kind of network communication they 
want, but about the communication for vsh itself).

c. This will be an overall speed-up for communication. From what I've 
read, a major cost of communicating across a network is establishing 
and authenticating the connection. This way we only need to 
authenticate once and pass one big hunk of data to be processed 
(the XML description of the node processing to do). In the other 
scheme I proposed, individual processing nodes would need to 
transfer information after every process, which could get expensive 
really fast if you are executing a lot of processes (as, for instance, 
the Overflow guys do). I am assuming that a person would use a 
non-local node when the processing is expensive or time consuming and 
they want to: 1. execute expensive code on two or more computers 
simultaneously, or 2. execute the expensive code on a faster computer 
(or cluster of computers :-). 
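
To put a toy number on that: the model and constants below are pure
assumptions, not measurements, but they show why paying the
connection/authentication cost once per submission beats paying it for
every cross-network transfer:

SETUP = 2.0      # assumed seconds to connect and authenticate
TRANSFER = 0.1   # assumed seconds to move one node's worth of data

def per_process_scheme(n):
    # my earlier idea, assuming each transfer needs its own
    # authenticated connection
    return n * (SETUP + TRANSFER)

def batched_scheme(n):
    # this idea: authenticate once, ship one hunk of XML, fetch once
    # (cost is independent of n)
    return SETUP + 2 * TRANSFER

for n in (1, 10, 100):
    print(n, per_process_scheme(n), batched_scheme(n))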

Well, I think that's all of my ranting on this idea. Am I making any 
sense? Is this idea any good? 

Criticism is heartily encouraged.

Brad