[Pipet Devel] GST, widgets & figure building

J.W. Bizzaro bizzaro at bc.edu
Mon Jun 21 19:20:52 EDT 1999

"Alan J. Williams" wrote:
> I like your point which reinforces the idea that the editor should simply
> be used to define features, not how they are displayed.

Me too.

> I would agree with you, in that the fig builder should provide basic
> drawing/rendering capabilities that the locus could use to render the
> data.  This reduces redundancy and improves consistency. So if we go with
> the idea of a rendering widget derived from gnome-canvas-item, the
> rendering widget could use calls to the standard figure builder
> canvas-item sets for rendering. So the basic set of widgets would be:
>    Main Figure Builder Widget: Uses Gnome Canvas
>    Standard Set of Builder Widget Items: Each Uses Gnome Canvas Items
>       (ie box, arrow, star, line, circle, ... we can assume that any loci
>        installation will know about these)
>    Non-Standard Set of Builder Widget Items: Each Uses Gnome Canvas Items
>       (These are items which Figure Builder Will manage, however they are
>        essentially extensions that we cannot assume that an installation
>        has)
>    Specific Data Rendering Widget Items: Each Uses Gnome Canvas Item and
>       is associated with a certain Loci Data Type

Hmmm.  I think we have basically the same idea.  This is the way I see it:

    GnomeCanvas is a GtkLayout widget.
    It contains GnomeCanvasItems, which are GtkObjects.
    The items can be put into a GnomeCanvasGroup, which is also an item.
    The items are group, image, line, polygon, rectangle, ellipse, text,
        icon_text, and widget.

    Viewer is a GnomeCanvas (wraps GnomeCanvas) (what you're calling "Figure Builder"?).
    It inherits all the GnomeCanvas items, etc.
    It defines new high-level (predrawn) bioinformatics items.
    The items are derived from the standard GnomeCanvas items.
    The items are 1D nucleotide sequence, chromosome map, protein motif, etc.
    (the items are what you're calling "rendering widget"?)
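In Python terms the containment idea looks something like this (ignoring the real Gnome bindings entirely; every class name below is made up just to model "a canvas holds items, a group is itself an item, and high-level bioinformatics items are built from the standard primitives"):

```python
# Hypothetical sketch of the item hierarchy -- none of these names come
# from GNOME; they only model the containment idea described above.

class CanvasItem:
    """Stands in for GnomeCanvasItem."""
    def primitives(self):
        return [self]

class Rect(CanvasItem): pass
class Text(CanvasItem): pass

class Group(CanvasItem):
    """A group is also an item, so groups can nest."""
    def __init__(self, *children):
        self.children = list(children)
    def primitives(self):
        out = []
        for child in self.children:
            out.extend(child.primitives())
        return out

class NucleotideSequenceItem(Group):
    """High-level (predrawn) item: one box plus one label per residue."""
    def __init__(self, seq):
        Group.__init__(self, *[Group(Rect(), Text()) for _ in seq])

item = NucleotideSequenceItem("ACGT")
print(len(item.primitives()))  # 8 primitives: 4 boxes + 4 labels
```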

BTW, "traditionally", anything that displays publication-quality images from
data is of the class "viewer".  Anything that alters data is of the class
"modifier".  And data are of the class "document" or maybe just "data".  I think
we can also define subclasses for each of these (like your "rendering widget" is
a subclass of "Figure Builder"?).  "Figure Builder" referred to an actual
drawing program (like Adobe Illustrator ;-) that also uses a viewer widget.

> So I could implement a MultiSeqRenderWidget(MSRW) which is derived from
> gnome-canvas-item. This widget must handle output and may optionally
> handle input.  So the MSRW at the bare minimum must take data from a Multi
> Sequence Data Object (ie maybe an XML SAX interface) and render that data.

I think the viewer should be a rather dumb widget that only draws what it is
told to (draw a circle, for example).  There should be one or more components
in between the modifier and the viewer ("translators" maybe) that translate
biodata into canvas items.  That keeps things smaller, more modular, flexible
and extensible (following the Loci paradigm).

I believe a translator would be part of what you are calling "Figure Builder". 
Translators in general take the XML/DOM objects and call Gtk and derived Loci
widgets.  They are typically sandwiched between data and viewers/modifiers.
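Here is a toy sketch of what I mean (all names invented): the viewer is dumb and only records the primitive drawing calls it is told to make, and a translator walks the biodata and issues those calls.

```python
# The viewer is a dumb widget: it only executes primitive draw calls.
class DumbViewer:
    def __init__(self):
        self.calls = []
    def draw_rect(self, x, y, w, h):
        self.calls.append(("rect", x, y, w, h))
    def draw_text(self, x, y, s):
        self.calls.append(("text", x, y, s))

# The translator turns biodata (here just a nucleotide string) into
# primitive canvas calls; it sits between the data and the viewer.
def translate_sequence(seq, viewer, cell=10):
    for i, base in enumerate(seq):
        viewer.draw_rect(i * cell, 0, cell, cell)
        viewer.draw_text(i * cell + 2, 2, base)

v = DumbViewer()
translate_sequence("ACG", v)
print(len(v.calls))  # 6 calls: a rect and a text per base
```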

> Optionally, the MSRW may handle input from the user such as:
>    Don't display this feature
>    Display this feature with a yellow box
>    Add a comment to the feature just below the sequence
>    Display labels on the first line only
>    Don't display a ruler
>    Add an arrow to this residue

Yes, connecting events to handlers.  They will also have to be high-level.
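Those options could be wired up with a simple dispatch table (just a sketch; the option and field names below are paraphrased/invented, not anything we've agreed on):

```python
# A dispatch table mapping user display options to handler functions
# that mutate a display-state dict.

def hide_feature(state, fid):
    state.setdefault("hidden", set()).add(fid)

def box_feature(state, fid, color="yellow"):
    state.setdefault("boxes", {})[fid] = color

def show_ruler(state, on):
    state["ruler"] = on

handlers = {
    "hide-feature": hide_feature,
    "box-feature": box_feature,
    "show-ruler": show_ruler,
}

state = {}
handlers["hide-feature"](state, "f1")
handlers["box-feature"](state, "f1")
handlers["show-ruler"](state, False)
print(state["boxes"])  # {'f1': 'yellow'}
```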

> All of these display characteristics could be stored in XML form in the
> Multi Sequence Data Object or maybe better yet, there should be a markup
> XML file that has a reference to the actual data.  This would allow one to
> generate multiple "displays" or markup for the same data. So:
>    Sequence Data Object:
>       Sequences
>       Sequence Labels
>       List of Features and their location
>       List of Markup Data Objects that Reference this object


>    Markup Data Object: (or actually a Figure Builder Data Object)
>       XLink to the Sequence Data Object and potentially other Data Objects
>       List of characteristics on how to display each "feature" from each
>         Object (The Figure Builder doesn't have to understand this data,
>         it just passes the data off to the appropriate rendering widget)
>       List of other markup to display (3 types)
>         - Items unknown to Figure Builder:
>            * Call the appropriate rendering widget and pass along any of
>              the stored rendering data for the item which is stored here
>         - Items known to Figure Builder:
>            * Independent markup which isn't tied to one of the data
>              objects
>            * Dependent markup which is tied to one of the data objects
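So the markup object could be serialized roughly like this (element and attribute names invented, parsed here with Python's standard DOM module just to show the shape):

```python
from xml.dom import minidom

# A hypothetical Markup Data Object: an xlink back to the sequence data,
# display characteristics per feature, plus independent/dependent items.
doc = minidom.parseString("""
<markupdata xlink="seq1.xml">
  <display feature="f1" style="yellow-box"/>
  <item kind="independent" shape="circle"/>
  <item kind="dependent" ref="f1" shape="arrow"/>
</markupdata>
""")

items = doc.getElementsByTagName("item")
print(doc.documentElement.getAttribute("xlink"))  # prints seq1.xml
print([i.getAttribute("kind") for i in items])
```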

Besides the terminology, I think we see it the same way.  And I guess the viewer
items should be separate modules from the viewer.

>  > Not anymore. We could also allow the fig builder to call back to the multiseq
>  > editor, so if we click on a base and add an arrow in the fig builder, the
>  > multiseq editor gets called to add a new feature to the multisequence, this is
>  > harder, but the right way to do it.

Will we then be directly mixing graphical markup with biodata markup?

> Another way (with the rendering widget idea) is that each rendering widget
> can register whether or not it can handle dependent markup. If it can,
> then when the user selects that object (let's say a multi sequence
> alignment) and adds an arrow, the figure builder will generate an id for
> the arrow and pass all the position info for the arrow to the rendering
> widget.  The rendering widget will store the characteristics of
> the arrow in the markup object.  The rendering widget will also create a
> new feature in the sequence data object using the position and id info in
> addition it will prompt the user for a label as well as a residue to
> anchor the arrow to.
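In code the registration idea might look like this sketch (all class and field names invented): the widget declares whether it handles dependent markup; the figure builder generates the id, and the widget records the arrow in the markup object and a matching feature in the sequence data object.

```python
import itertools

class SequenceData:
    def __init__(self):
        self.features = {}

class MarkupData:
    def __init__(self):
        self.items = {}

class MultiSeqRenderWidget:
    handles_dependent_markup = True  # registered capability
    def __init__(self, seqdata, markup):
        self.seqdata, self.markup = seqdata, markup
    def add_arrow(self, item_id, residue, label):
        # store display characteristics in the markup object...
        self.markup.items[item_id] = {"type": "arrow", "residue": residue}
        # ...and a new feature, under the same id, in the sequence data
        self.seqdata.features[item_id] = {"label": label, "position": residue}

ids = itertools.count(1)
seqdata, markup = SequenceData(), MarkupData()
widget = MultiSeqRenderWidget(seqdata, markup)
if widget.handles_dependent_markup:
    item_id = "arrow-%d" % next(ids)   # id generated by the figure builder
    widget.add_arrow(item_id, residue=17, label="active site")
print(sorted(markup.items) == sorted(seqdata.features))  # True: same id in both
```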

How will the graphical feature be marked in the sequence XML?


> Now when an old Markup Data Object is to be rendered (or a Figure Builder
> Data Object), figure builder will go through the list of markup rendering
> each one.  When it gets to an unknown item type, it will create the
> appropriate rendering widget to handle the display.

I'm a little foggy on how the viewer ("figure builder") will create a viewer
item ("rendering widget") it knows nothing about.

> When it gets to
> independent items, it will render them directly. When it gets to a
> dependent item, it will pass the item to the rendering widget that the
> item is dependent upon.  The rendering widget can then look up any info it
> needs from the data object (ie for the arrow, what is the position of the
> residue it is anchored to) modify any of the rendering data if necessary,
> and then pass the data back to the figure builder to be rendered.
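That rendering pass reads like a dispatch loop over the markup list; something like this sketch (names invented): unknown items get a freshly created rendering widget, independent items are drawn directly, and dependent items are passed to the widget they depend on.

```python
def render_markup(items, widgets):
    log = []
    for item in items:
        kind = item["kind"]
        if kind == "unknown":
            widgets[item["widget"]] = "created"   # instantiate the handler
            log.append(("delegate", item["widget"]))
        elif kind == "independent":
            log.append(("draw", item["shape"]))   # figure builder renders it
        elif kind == "dependent":
            log.append(("pass-to", item["depends_on"]))
    return log

items = [
    {"kind": "unknown", "widget": "MSRW"},
    {"kind": "independent", "shape": "circle"},
    {"kind": "dependent", "depends_on": "MSRW"},
]
widgets = {}
print(render_markup(items, widgets))
```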


>  > Another option is that the figbuilder simply calls the multiseq editor with a
>  > pointer to the canvas region where it should draw its output. The multiseq
>  > editor will own that piece of the canvas, and register callbacks for the
>  > canvas items it draws. All of the multiseq editor's display (draft and final
>  > mode) can be done to the gnome canvas.
> I guess this was basically my idea with the rendering widget.

I agree.

>  > How do the gnome and KDE office suites handle embedding objects into other
>  > documents (like a graph in a paper)? This seems to be exactly the same kind of
>  > situation.

I don't THINK that anything like XML is used for doing this in Gnome.  I know I
have seen a screenshot of a graph embedded in a spreadsheet:


It has something to do with "Bonobo", which may be an object model.

There is A LOT of new functionality coming to Gnome and even Mozilla (SVG for
example) that is very much like what we need for Loci.  A point I would like
to make, though, is that these features are so general-purpose that it would
be as difficult to use them to embed a component into Loci as to write a new
stand-alone program (a reason for not using Loci).  In addition, they are made
with C/C++, which would have to be wrapped in Python.  It would be much simpler
to use only Python.  (The same argument though can be used against CORBA and in favor of

J.W. Bizzaro                  mailto:bizzaro at bc.edu
Boston College Chemistry      http://www.uml.edu/Dept/Chem/Bizzaro/
