HyperTextopia and the Docuverse Chronology

One of the purposes of this Book is to enumerate and collect literary ideas. We use a system named HyperTextopia here to help identify some evolutionary difficulties of pure non-linear hypertextual systems. This is not a critique of the idea.

Essentially, the HyperTextopia system, as far as I understand it, has titles and authors as global entry points. From there one can add links to text (to a sequence of 1 to N words).

http://www.hypertextopia.com/

(Note to self: make a snapshot of the expanded "hot" words.)

HyperTextopia: What's wrong and what's nice

It's a nice way to typify links: for example, are you in discordance with what is expressed, do you agree, are you adding a better argument, or giving a personal example of the abstract idea?

Although it's cool to see micro-fragment graphs showing the overall linkage, it's not a robust way to organize both the material and the content-generating activity. The exceptions are perhaps poetry, which thrives on ambiguity of meaning and semantic slips, and the simpler problem of formal programming languages (with a small set of reserved keywords forming the language per se).

Visualizing RDF (W3C metadata) looks cool but is rarely useful, except perhaps at a very local scale:

http://www.mkbergman.com/wp-content/themes/ai3/images/2008Posts/080128_mkbergmanweb.png

Info-Mapping versus Navigation

The main point of what follows is essentially that the way we connect things is different outside of the Book than inside. It's a bit like a geo-mapping system where you have maps which slowly evolve (e.g. a new street is built) and individual requests to see how you get from point A to point B, where there might be different alternative ways to get there. For example, if you are planning a road trip, you are not necessarily looking for the shortest path, or to do the exact same amount of driving every day; there are other factors affecting the decision process. One could argue that rating eventually feeds back into the road system, and that popularity eventually changes population patterns, yet I think it's clear that the price and availability of hotels is a different issue than the driving road network. That said, the actual travel path might be transient (once done, it is done) yet can always be commoditized as an information object in a travel log.

You can only link to, not within

Here's what we say. Micro-content should have a single IN point. You cannot link to a point within a micro-content fragment (a logical Paragraph). That IN point is located in a logical Page; the logical Paragraph assumes the logical Page as context. Now, should OUT points (any form of links) be inside the stream ("hot points")? I don't think so. I think the logical Paragraph can have multiple OUT connectors (for example a bibliographic reference), where in the logical Page context the first one is the next logical Paragraph in the Page. This guarantees a much clearer nodal model.
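
As a minimal sketch of this nodal model (the class and field names below are illustrative, not part of any actual HyperTextopia API), a logical Paragraph could carry a single identifier serving as its only IN point, plus an ordered list of OUT connectors whose first entry is the next Paragraph in the Page:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Paragraph:
    """A micro-content node: one IN point, ordered OUT connectors."""
    paragraph_id: str  # the single IN point others may link to
    text: str
    out_links: List[str] = field(default_factory=list)  # OUT connectors; [0] is the next Paragraph

@dataclass
class Page:
    page_id: str
    paragraphs: List[Paragraph] = field(default_factory=list)

    def append(self, paragraph: Paragraph) -> None:
        # Wire the previous Paragraph's first OUT connector to the newcomer,
        # preserving the "first OUT = next in Page" invariant.
        if self.paragraphs:
            self.paragraphs[-1].out_links.insert(0, paragraph.paragraph_id)
        self.paragraphs.append(paragraph)
```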

Once that is stated as the backbone model, then yes, we can create a meta-literary layer which is essentially a conversational / annotational layer. This layer is much more fragile and transient. The simplest example would be fixing a typo: once the typo is fixed, the annotation is not relevant anymore. Such an annotation can be carried (attributed) in the creation history but is never resident in the content per se. Another reason why this is a very transient layer is that logical Paragraphs are much more subject to merge, join and delete operations.

Paragraph Unique Identifier and Time-Stamped Version

Essentially, in this Book we have a logical Page made of logical Paragraphs. The indexing of the collection of paragraphs might be: A.0.0, A.0.1, A.0.2, A.1.0, A.2.0, B.0.0, B.1.0, B.2.0, B.2.1 and so on. The info-mapping might look like PageId.version-ParagraphId.version, and external links use that locator. Each time a Paragraph is modified (including straight deletion), its version, and thus its ID, gets modified. This also increments the PageId.version. Likewise, moving a logical Paragraph (a micro-content node) somewhere else on the Page, since it changes the graph layout, also triggers a PageId.version increment, but not a ParagraphId.version increment. Such a system has the following navigation issues.
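
A hypothetical encoding of such a locator (the helper functions below are assumptions for illustration; the text only specifies the PageId.version-ParagraphId.version shape):

```python
from typing import NamedTuple

class Locator(NamedTuple):
    page_id: str
    page_version: int
    paragraph_id: str
    paragraph_version: int

def format_locator(loc: Locator) -> str:
    # PageId.version-ParagraphId.version, as described above.
    return f"{loc.page_id}.{loc.page_version}-{loc.paragraph_id}.{loc.paragraph_version}"

def parse_locator(s: str) -> Locator:
    page_part, paragraph_part = s.split("-")
    page_id, page_version = page_part.rsplit(".", 1)
    paragraph_id, paragraph_version = paragraph_part.rsplit(".", 1)
    return Locator(page_id, int(page_version), paragraph_id, int(paragraph_version))

assert parse_locator("B.7-B2.3") == Locator("B", 7, "B2", 3)
```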

Link To Page

1) Your intention as a linker (link-to) is to link to a Page, not to a place on the Page. Then you want to arrive at that PageId.version and be provided with the option of advancing towards the most current version. On a meta level above the Page, the Volume version technically gets sub-versioned each time a PageId.version changes. However, when you land on a Page you typically want to advance through versions of that Page, so you globally jump to the next relevant VolumeId.version where that Page was modified, hence simultaneously updating all the other Pages in the Book, referenced or not.

Link to Paragraph

2) Your intention as a linker is to map to a particular logical Paragraph. Then the expectation is similar. You arrive at the initial ParagraphId.version and you want to evolve towards current time by jumping to the next document history event that modified the ParagraphId.version. Any ParagraphId.version event modifies the PageId.version, which modifies the VolumeId.version, which modifies the BookId.version, which modifies the Docuverse.version (thus creating a tick in the docuverse chronology).
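
One way to sketch this version cascade (the class and variable names are hypothetical; the text only specifies that each Paragraph event ticks every enclosing level up to the Docuverse):

```python
class Versioned:
    """One node in the containment chain; any event cascades upward."""
    def __init__(self, parent=None):
        self.version = 0
        self.parent = parent

    def tick(self):
        # An edit at this level bumps this version and every enclosing one:
        # Paragraph -> Page -> Volume -> Book -> Docuverse.
        self.version += 1
        if self.parent is not None:
            self.parent.tick()

docuverse = Versioned()
book = Versioned(docuverse)
volume = Versioned(book)
page = Versioned(volume)
paragraph = Versioned(page)

paragraph.tick()               # any Paragraph event...
assert docuverse.version == 1  # ...creates a tick in the docuverse chronology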

The Docuverse Time Tick

I am not quite sure how many bits are needed to create a unique time tick for any new logical paragraph created or edited. Just consider year.day.hour.second subdivided many, many times. Since our aim is multimedia, imagine that each video camera running simultaneously would have a unique time code in the docuverse for each frame ever recorded. Consider as well that this unique chronological tick must work anywhere in the universe. For our purpose this is essentially saying we need a universal time-origin reference for the whole universe, much like we needed the Greenwich reference at some point for train scheduling.
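
As a back-of-the-envelope check (my numbers, not the author's): counting nanosecond ticks over 10,000 years already fits in 69 bits, so a fixed-width tick is plausible even at extreme resolutions:

```python
import math

NS_PER_YEAR = 365.25 * 24 * 3600 * 10**9   # nanoseconds in a Julian year
ticks = 10_000 * NS_PER_YEAR               # nanosecond ticks over 10,000 years
bits = math.ceil(math.log2(ticks))
print(bits)  # 69 -> a 128-bit locator leaves ample room for source/position info
```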

All we know is that the Docuverse can be abstracted as a unique indexing system which locates a particular piece of content somewhere at some chronological time. Simply said, each time a new piece of content is created or edited it receives a time-stamped unique locator. Because this theory assumes the chronology protection conjecture, and because hypertextual links can create time-dependent relevance (or, if you prefer, because dead links are not an option), we probably end up having to be even more precise about our evolutionary model. On the Docuverse level it's never ambiguous. There is a literary event that happens at each time tick (a logical paragraph has been added or modified), so no two events ever happen at the same exact historical time. As Stephen Hawking says, this "makes the universe safe for historians".
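
A minimal sketch of such a locator allocator, assuming a single authority hands out ticks (the lock and counter are my additions; the text only requires strictly ordered, never-colliding ticks):

```python
import threading

class DocuverseClock:
    """Hands out strictly increasing ticks: no two literary events share one."""
    def __init__(self):
        self._tick = 0
        self._lock = threading.Lock()

    def next_tick(self) -> int:
        with self._lock:
            self._tick += 1
            return self._tick

clock = DocuverseClock()
a = clock.next_tick()
b = clock.next_tick()
assert a < b  # every event is strictly before or after every other
```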

Reference: Matt Visser, "The quantum physics of chronology protection", Washington University in Saint Louis, 2002.

How to avoid wormholes with transclusions

Essentially, what hypertextual transclusions introduce in literary systems theory is the idea that a Book might not be made of parts that all exist at the same time. A transclusion here is an inclusion-by-hyperlink (from here to there, in the parent document). Since it's possible for A to point out that B says this here, and B can then change what he said there, such links can introduce time entanglements which must be resolved. In essence, what we are saying is that if each event in the Docuverse is time-ordered then there cannot be "closed timelike curves". Essentially, the concept of a time machine only makes sense if you view the docuverse as something where each literary event is archived and a particular snapshot of the Docuverse can only be performed at some time t. The main time t that matters is now, and temporal ambiguities caused by transclusions can simply be resolved by converting the transclusion to an inclusion at a fixed time (simply a particular ParagraphId.Version).
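
A sketch of that resolution step, under the assumption that the docuverse can be queried for a paragraph's text at a pinned version (the types and the toy `docuverse` lookup are hypothetical):

```python
from dataclasses import dataclass

# A toy docuverse: (paragraph_id, version) -> text at that version.
docuverse = {
    ("B2", 3): "B says this.",
    ("B2", 4): "B now says something else.",
}

@dataclass
class Transclusion:
    """A live window onto another document, pinned when created."""
    paragraph_id: str
    pinned_version: int

    def current_text(self, latest_version: int) -> str:
        return docuverse[(self.paragraph_id, latest_version)]

    def to_inclusion(self) -> str:
        # Freeze the window: copy the text as it was at the pinned version,
        # resolving the temporal ambiguity once and for all.
        return docuverse[(self.paragraph_id, self.pinned_version)]

t = Transclusion("B2", pinned_version=3)
assert t.to_inclusion() == "B says this."      # time-resolved copy
assert t.current_text(4) != t.to_inclusion()   # the live view has moved on
```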

= All literary events somehow happen in sequence

At the first degree, what this means is that one cannot review another publication before it is published, or worse, that an effect cannot be its own cause. Cause precedes effect. At a second degree, this is saying there is a need for a global space-time indexing locator constraint. The most compressed way to set a unique indexing locator for each literary event creation / edition is to simply add +1 each time an event happens. This has the side effect, in the case of an information explosion curve, of appearing as if we are converging towards a singularity. An alternative to linearizing all events sequentially would be to localize each event in a sparse grid and then loosen the chronology constraint to: two events cannot occur at the same time at the same position locator. Then our time tick can be much smaller, but we would need to encode spatial coordinates as well. Ignoring the particular topology of the Docuverse, what matters is that by design we assume there is some global constraint that allows us to clearly identify that something happened before something else.
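
A sketch of the sparse-grid alternative (the per-position counters are my assumption; the text only requires that no two events share both a tick and a position locator):

```python
from collections import defaultdict

class SparseGridClock:
    """Per-position ticks: events may share a tick, never a (position, tick)."""
    def __init__(self):
        self._ticks = defaultdict(int)

    def next_event(self, position: str) -> tuple:
        self._ticks[position] += 1
        return (position, self._ticks[position])

grid = SparseGridClock()
e1 = grid.next_event("paris")
e2 = grid.next_event("tokyo")
assert e1 != e2             # same small tick value is fine...
assert e1[1] == e2[1] == 1  # ...because the position disambiguates
```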

One thing that separates a Docuverse from a Universe is that a docuverse is a document with a complete history of creation, while the Universe is an accumulated state from which you might be able to infer history. In practice the total archivability issue is simplified: only a save (update) action creates an archivable event, and local save actions can be grouped into a larger publishing revision update.

= Resolving transclusion chronological aberrations

So the idea is to have both document closure and openness to the evolution of ideas, where revisionism does not contradict evolution. What we promote here is that transclusion can only happen at the logical Paragraph level. This is a large constraint on what a transclusion can be, since in theory a transclusion could be anything between two anchor points in a URL. The first simplification we make, therefore, is to say that a logical Page is made of N logical Paragraphs and only these Paragraphs can be transcluded. This eliminates the need to maintain dual anchors to define a transclusion window; now we only have a single # anchor. Without such a constraint, in theory any document in the docuverse could create arbitrary double-anchor locators, and this, from an archival standpoint, would require an external system to differentiate any two different anchor-set references. Such a limit on what a transclusion is also provides a clear "chronology horizon". There are then only two points-of-view in the Docuverse that must be resolved: locally, where the transclusion is made, and remotely, what is being transcluded.
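
A small sketch of that single-anchor constraint (the URL shapes are illustrative assumptions):

```python
def is_valid_transclusion_target(url: str) -> bool:
    # Exactly one # anchor, naming a whole logical Paragraph; dual-anchor
    # "windows" (...#start#end) are rejected by design.
    return url.count("#") == 1

assert is_valid_transclusion_target("docuverse://book/B.7#B2.3")
assert not is_valid_transclusion_target("docuverse://book/B.7#B2.3#B4.1")
assert not is_valid_transclusion_target("docuverse://book/B.7")  # a Page, not a Paragraph
```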

= User Interface for Transclusion Ambiguity Resolution

Imagine we had a 3-color coding system to visualize the state of a transcluded logical Paragraph. Red: transcluded content has changed; Yellow: transcluded context has changed; Green: nothing changed. Consider first the "Red" case: if the previous or next logical Paragraph (or any other on the Page) references the transcluded content, and the change in the transcluded Paragraph changes the relevance of another logical Paragraph on our Page, then the Author has two options: 1) update the other Paragraphs so they remain relevant to the revised content, or 2) convert the Transclusion to an Inclusion, that is, stop making the linked window on the other document live and instead copy the content as it was when you created the transclusion.
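
In terms of the locators above, the three states could be classified like this (a sketch; reading "content changed" as a Paragraph version bump and "context changed" as a Page-only version bump is my interpretation):

```python
from enum import Enum

class TransclusionState(Enum):
    RED = "transcluded content has changed"
    YELLOW = "transcluded context has changed"
    GREEN = "nothing changed"

def classify(pinned_paragraph_v: int, current_paragraph_v: int,
             pinned_page_v: int, current_page_v: int) -> TransclusionState:
    if current_paragraph_v != pinned_paragraph_v:
        return TransclusionState.RED     # the Paragraph itself was edited
    if current_page_v != pinned_page_v:
        return TransclusionState.YELLOW  # only its surrounding Page moved on
    return TransclusionState.GREEN

assert classify(3, 4, 7, 9) is TransclusionState.RED
assert classify(3, 3, 7, 9) is TransclusionState.YELLOW
assert classify(3, 3, 7, 7) is TransclusionState.GREEN
```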

= Multiple Levels of Transclusion Nesting

Now, what can happen? If you choose to update the Transclusion to the current latest indexing-locator version of that logical Paragraph, it is now "Green". That process has created a new tick in the Docuverse, where someone else in the Docuverse (another "Read-Write" point-of-view) might now get a different color-coded state if they transclude you. If instead you decide to convert the Transclusion to an Inclusion, you can first check the last PageId.Version before the ParagraphId.Version changed, and fast-forward your initial transclusion to that point in chronological time. That possible time difference is the scope of the potential chronology horizon, where there is a great chance that nothing has affected even the context of your initial transclusion in a manner that would change the local context of what you are saying. That is, even if the content has not changed, the source-origin context of that same content might have sufficiently changed to make your context around that Paragraph less relevant. Finally, since one could have transcluded a transclusion, one would have an additional option, which is to remove a level of transclusion indirection. One reason people use indirect transclusion is that they use the other Author as a filter on what changed about a particular subject of interest.
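
Removing a level of indirection could be sketched as rebinding to whatever your source was itself transcluding (the hop table and one-level rule are my assumptions):

```python
# Hypothetical: which transcluded paragraph windows onto which source.
transcludes = {
    ("C1", 2): ("B2", 3),  # C transcluded B...
    ("B2", 3): ("A7", 5),  # ...and B had itself transcluded A.
}

def remove_one_level(locator):
    # Drop one level of indirection: rebind to what your source transcludes,
    # bypassing that Author's filtering role.
    return transcludes.get(locator, locator)

assert remove_one_level(("C1", 2)) == ("B2", 3)
```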

Again, we have to be careful about building too many analogies between the physical universe and the digital docuverse. The physical universe only exists now and must account, for example, for every photon, while the digital docuverse has a complete creation history through which, read-wise, we can travel anywhere. You can travel from any point to any point in the docuverse at the same cost, while in the physical universe distance matters, and the vehicle used is part of the universe. Also, the theory of the Book can in part be implemented with binders and loose leaves, a photocopier, a telephone and the physical mail system. It can also, to a larger part, be implemented on the internet. The internet is not the docuverse.

Looking at it as a graph

Once we have made logical Paragraphs unique segments in the docuverse, we can simply visualize the Page as a list of nodes, a stream as the core structure, where each node has as its first output the next logical Paragraph in the Page and a set of optional additional links connecting to somewhere else in the docuverse. The connection (linking) mechanism should not, and does not really, differentiate between "as discussed in [link]" and "more on this here [link]": we come from here, or go there. From the standpoint of the Book all we have is a set of links emanating out.
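
Rendering the Page as that stream of nodes might look like this (a standalone sketch; the node shapes and URLs are illustrative):

```python
# A Page as a stream of nodes: each node's first output is the next logical
# Paragraph; the rest are optional links out into the docuverse.
page = [
    {"id": "A.0", "out": ["A.1", "docuverse://other-book/C.2#C1.0"]},
    {"id": "A.1", "out": ["A.2"]},
    {"id": "A.2", "out": []},  # end of the stream
]

for node in page:
    nxt = node["out"][0] if node["out"] else None
    extras = node["out"][1:]
    print(f'{node["id"]} -> next: {nxt}, extra links: {extras}')
```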

The text might have traditional hyperlinks in its body, but think of these as try / catch. Essentially, pressing the link is equivalent to throwing the message "go to this URL". All this is orthogonal to the Book: the content we are linking to might not be in the Docuverse but simply in the traditional internet web cloud. For visualizing purposes, imagine the logical Paragraph in isolation: it has a link coming into it (the previous one) and one going out (the next one), and it can have, on the "right", additional goto connectors, yet these might push you out of the Book.
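
The try / catch reading could be sketched like this (the exception type and handler are hypothetical):

```python
class GoTo(Exception):
    """Pressing a link 'throws' a navigation request, orthogonal to the Book."""
    def __init__(self, url: str):
        self.url = url

def press_link(url: str) -> None:
    raise GoTo(url)

try:
    press_link("http://example.com/outside-the-docuverse")
except GoTo as jump:
    # The reader's environment, not the Book, decides how to follow it.
    print("navigating to", jump.url)
```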

This is somehow left as an exercise to the reader, or the writer. Essentially, the writer can contextualize the reference to someone else in a preceding node, or just mention it within the same node, in which case the option to jump there would simply follow the logical Paragraph referencing it. This way, a paragraph A referencing a paragraph B that references back paragraph A would still resolve as a directed graph in the docuverse. It would not if A could have an output connected to B as input and B an output connected to A as input.