OVERVIEW
This application has 2 main components: Provider and Viewer.
Provider runs on a system that can be light-minutes away from the
systems on which Viewer instances run. Viewer instances and Provider
communicate by DTN and operate asynchronously, though they use a common
time-stamp convention at the application level.
The purpose is to allow a human to see remote pictures, with notable
information, produced in heterogeneous ways: ranging from single
slides, to sequences of pictures with free-hand annotations, up to moving
pictures.
This application is certainly not original, since it addresses a fundamental
need. Its creativity lies in focusing on concepts like "best effort" and
"robustness by simplicity".
PROVIDER
This component maintains folders that contain "time-stamped" pictures,
following a policy to purge them and to allow indirect access to their
contents by means of remote requests.
Provider is aware of the "DTN bundle logic", so it produces pictures small
enough to be atomically delivered to a requesting Viewer. Indeed the
application tolerates delivery failures resulting in a moderate loss of whole
pictures.
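As an illustration only, a minimal sketch of such a folder policy could look
like the Python fragment below; the limits MAX_AGE_S and MAX_BUNDLE_BYTES and
the file-naming scheme are assumptions made for this sketch, not part of this
description.

  import os
  import time

  # Illustrative limits, assumed for this sketch only.
  MAX_AGE_S = 24 * 3600        # purge pictures older than one day
  MAX_BUNDLE_BYTES = 50_000    # keep each picture deliverable in one bundle

  def purge_folder(folder):
      """Delete pictures whose time-stamp, encoded in the file name
      (e.g. '1699999999.500.jpg'), is older than MAX_AGE_S seconds."""
      now = time.time()
      for name in os.listdir(folder):
          try:
              stamp = float(os.path.splitext(name)[0])
          except ValueError:
              continue                  # not a time-stamped picture
          if now - stamp > MAX_AGE_S:
              os.remove(os.path.join(folder, name))

  def fits_one_bundle(path):
      """True if the picture is small enough for atomic delivery."""
      return os.path.getsize(path) <= MAX_BUNDLE_BYTES
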
Provider integrates several tools to produce pictures.
A notable one allows a user to start and stop the production of pictures
that are screen-shots, taken at regular intervals, of what s/he sees on the
system display while interacting with other applications. This way s/he can, for
example, record a sequence of pictures representing a slide show in progress, or
the evolution of free-hand annotations on a given background.
Supposing 1000s of elapsed time between the start and stop user actions, and a
period of 900ms between screen-shots, the "recording" produces 1111 pictures,
in a sequence where the time-stamp of each one differs by 900ms from the next or
previous one. That can be very redundant in the case of a simple slide show on the
notice board, but this redundancy plays an important role in avoiding
round-trips while communicating with Viewer.
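A sketch of such a recording loop, again only as an illustration, could be the
Python fragment below; grab_screen is a hypothetical capture hook, since the
actual screen-shot mechanism depends on the platform.

  import os
  import time

  def record(folder, grab_screen, stop_flag, period_s=0.9):
      """Save a screen-shot every period_s seconds until stop_flag()
      becomes True, naming each file with its universal time-stamp.
      grab_screen() is a hypothetical hook returning picture bytes."""
      while not stop_flag():
          stamp = time.time()
          with open(os.path.join(folder, "%.3f.jpg" % stamp), "wb") as f:
              f.write(grab_screen())
          # Sleep only for what is left of the period after the capture.
          time.sleep(max(0.0, period_s - (time.time() - stamp)))
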
VIEWER
Basically this component sends messages to Provider so that Provider sends back
pictures: one atomic request message for one whole picture.
As written above the application tolerates delivery failures resulting in a
moderate loss of whole pictures; and this includes a moderate loss of request
messages.
Viewer integrates several tools to manage requests and received pictures.
A notable one allows a user to view pictures remotely produced after a
given universal time.
That can be achieved, for example, by sending a "request for picture" every 1100ms.
In the first request the time-stamp matches the wanted "beginning time", while
in the following ones it increases in 1100ms steps. This way, when Viewer receives
pictures, it tries to render them on a sort of virtual blackboard that a
human can look at. Rendering is based on position, size and time-stamp
parameters embedded in the received picture data.
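A sketch of such a request loop could be the following; send_request,
poll_picture and render are hypothetical hooks standing for the DTN and
rendering machinery, which this description does not fix.

  import time

  def request_loop(begin_time, send_request, poll_picture, render,
                   period_s=1.1):
      """Every period_s seconds, ask for the first picture produced after
      a time-stamp that starts at begin_time and grows by period_s at each
      step; render whatever has arrived in the meantime."""
      wanted = begin_time
      while True:
          send_request(wanted)          # one atomic request, one picture
          wanted += period_s
          picture = poll_picture()      # None if nothing has arrived yet
          if picture is not None:
              # Rendering uses the position, size and time-stamp carried
              # inside the picture data.
              render(picture)
          time.sleep(period_s)
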
LOGIC
If Provider receives a "request for picture" message from a Viewer
authorized to indirectly access a given folder, then it checks for the existence,
in that folder, of a picture with time-stamp greater than the one
specified in the request.
If such a picture exists Provider sends it to the requesting Viewer; otherwise
it waits for the production of a new picture for which the "requested
condition" is true.
The Provider implementation sets a suitable balance between the maximum number
of pending requests and the maximum time to wait before
discarding a pending request.
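The core of this logic can be sketched as follows; MAX_PENDING and MAX_WAIT_S
are placeholders for the balance mentioned above, and send_picture is a
hypothetical DTN send hook.

  import os
  import time
  from collections import deque

  # Placeholders for the balance mentioned above.
  MAX_PENDING = 100          # maximum number of pending requests kept
  MAX_WAIT_S = 3600          # maximum time a pending request may wait

  pending = deque()          # (viewer_id, requested_stamp, arrival_time)

  def first_picture_after(folder, stamp):
      """Return the path of the oldest picture whose time-stamp is greater
      than stamp, or None if no such picture exists yet."""
      best = None
      for name in os.listdir(folder):
          try:
              s = float(os.path.splitext(name)[0])
          except ValueError:
              continue
          if s > stamp and (best is None or s < best[0]):
              best = (s, name)
      return None if best is None else os.path.join(folder, best[1])

  def handle_request(folder, viewer_id, stamp, send_picture):
      """Serve an authorized "request for picture", or keep it pending."""
      path = first_picture_after(folder, stamp)
      if path is not None:
          send_picture(viewer_id, path)
      elif len(pending) < MAX_PENDING:
          pending.append((viewer_id, stamp, time.time()))

  def on_new_picture(folder, send_picture):
      """Called after each new picture: satisfy or expire pending requests."""
      now = time.time()
      for _ in range(len(pending)):
          viewer_id, stamp, arrived = pending.popleft()
          path = first_picture_after(folder, stamp)
          if path is not None:
              send_picture(viewer_id, path)
          elif now - arrived <= MAX_WAIT_S:
              pending.append((viewer_id, stamp, arrived))
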
To better understand the implications of all this we can now combine the example
in the PROVIDER section of this document with the example in the VIEWER
section...
Provider produces the first picture of the sequence at universal time
X+10s. The following ones at times X+10.9s, X+11.8s, X+12.7s, X+13.6s,
X+14.5s, X+15.4s, X+16.3s, X+17.2s, X+18.1s, X+19s, X+19.9s, X+20.8s and so
on...
Viewer sends the first request at universal time X+46s. The following
ones at times X+47.1s, X+48.2s, X+49.3s, X+50.4s, X+51.5s, X+52.6s, X+53.7s,
X+54.8s, X+55.9s, X+57s, X+58.1s and so on...
The time-stamps specified by Viewer in each request are respectively X, X+1.1s,
X+2.2s, X+3.3s, X+4.4s, X+5.5s, X+6.6s, X+7.7s, X+8.8s, X+9.9s, X+11s, X+12.1s,
and so on...
For an end-to-end delay of 300s Viewer could receive the first picture (with
time-stamp X+10s) at time X+646s. (But now we imagine that Viewer, to
provide some time buffering, renders it at time X+700s.)
At time X+647.1s Viewer could receive the picture with time-stamp X+10s again.
Indeed Provider does not receive any request for its second picture (the one with
time-stamp X+10.9s) and sends its third picture (the one with time-stamp
X+11.8s) in response to the eleventh Viewer request (the one with time-stamp X+11s).
So at time X+657s Viewer could receive the picture with time-stamp
X+11.8s.
At time X+658.1s Viewer could receive the picture with time-stamp
X+12.7s.
In other words, Viewer has a chance to render the 1000s recording maintained on
the "remote notice board" though, due to the asynchronous interplay of the two
periods, it skips 202 or more of the 1111 remote pictures in the available sequence.
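The figure of 202 skipped pictures can be checked with a short simulation of
the two periods; the fixed one-way delay only shifts arrival times and does not
change which pictures get selected.

  # Picture time-stamps: X+10s, then one every 0.9s, 1111 pictures in all.
  pictures = [10.0 + 0.9 * i for i in range(1111)]

  # Requested time-stamps: X, then one every 1.1s, continuing well past
  # the end of the recording.
  requests = [1.1 * j for j in range(1000)]

  received = set()
  for wanted in requests:
      # Provider answers with the first picture strictly newer than 'wanted'.
      for stamp in pictures:
          if stamp > wanted:
              received.add(stamp)
              break

  print(len(pictures) - len(received))   # prints 202: the skipped pictures
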
Also, with this logic Viewer can start rendering the remote recording while the
recording is still in progress.
Acceptance of a moderate loss of pictures, and of some level of redundancy,
reduces the nominal delay from "production to fruition" to one round-trip time
(600s in the example).
But the delay can be further reduced if Viewer can predict the start of the
recording... If, in the example above, Viewer knows that the recording will start at
universal time X+10s, then it can send its first request (with time-stamp X+10s)
at time X-290s or before, so that Provider has a chance to send the newly
produced picture "immediately", which could potentially be rendered at time X+310s.
VARIANT
Provider can be split into 2 components: Producer and Archivist.
Producer runs on a system that can be light-minutes away from the system
on which Archivist runs. Archivist and Producer communicate by DTN
and operate asynchronously, though they use a common time-stamp convention at
the application level.
Producer produces pictures as described in the PROVIDER section of this
document.
Archivist serves Producer, storing the pictures coming from it. It maintains
them as described in the PROVIDER section of this document; also, in the same
way, it serves Viewer instances.
Archivist acts similarly to a web server, so it can be "mirrored" to
improve Viewer access to the "notice board" of interest.