Terry Jones is founder and CTO at Fluidinfo, where he is building a distributed storage architecture for a new representation of information, and creating a variety of applications that will use the underlying architecture. You can find Terry on Twitter at: @terrycojones.
Historically, the ability for readers to contribute to published information has been sharply limited. Publication was considered, and very often still is, almost exclusively a one-way process in which a publisher produced information and readers consumed it. But readers have never been passive. While reading, they generate information of their own, storing it largely in their heads, in marginalia, or in separate notes, and sharing it verbally. The subsequent publishing of reader-written letters to the editor, book reviews, errata and corrected editions, and the practice of putting laudatory quotes on the back of book covers are all examples of reader contributions to published information. The library cards in books borrowed from libraries sometimes carry annotations—occasionally social—left by other readers. Even the simple practice of inserting a bookmark into a book or dog-earing a page to remember where one is up to is a form of user-contributed metadata.
The rise of digital systems—computers and networks—has allowed us to dream of and then implement systems that allow normal users to contribute and share information. This thinking can be charted over the last century via the landmark works of Paul Otlet, Vannevar Bush, Doug Engelbart, Ted Nelson, Tim Berners-Lee and others. For an overview of this history, see Glut: Mastering Information Through The Ages.
The rise of user-generated online social information systems, triggered by Delicious, sparked a “there’s gold in them thar hills”-like recognition of potential value and numerous attempts have been made to monetize that value.
User-generated content, such as tags for URLs, is valuable in two main ways. First, allowing normal users to add their own information to things can provide for new and personal forms of search and organization. This value accrues to the user but also potentially to the publisher, supposing they have access to the data. Why bother with the complexities of semantics or natural-language understanding when, if you simply let them, users will happily tell you what they’re interested in and what web pages are about? Second, if user-generated data can be shared, additional value is created as the data itself becomes, in a sense, social. Shared data creates value because it makes it possible to know what specific others (e.g., friends) are doing. It also allows non-specific discovery, creates network effects, and allows unanticipated combinations of independent heterogeneous information about the same things.
The recognition of this value—to end users, to content creators, to advertisers—is pushing technology towards giving users a voice. To date that has been done in specific vertical ways in the context of applications. Horizontally, we have the example of Wikipedia which offers a writable web page for anything (modulo editorial control). While no attempt has (yet) been made to monetize Wikipedia, its value is immense. In my opinion, Wikipedia, including the computational infrastructure supporting it, is the most impressive of human artifacts.
When Does Publication Begin? Does It End?
Brian O’Leary has written a compelling wake-up call to publishers, warning that the traditional publishing model, driven primarily by content “containers”—both physical and, more recently, digital—is outdated. Brian urges a focus instead on “context” and argues that the form of the final information container should be merely a circumstantial by-product. He argues that publishers must focus on context, i.e., on the rich ball of information that surrounds the content that has traditionally been produced in the act of publishing. When focus is a priori placed on the container rather than the context, valuable information that doesn’t fit the specific targeted container will necessarily wind up on the cutting room floor. From this point of view, the context—information surrounding and contemporaneous with the primary content—should be considered a potential part of the eventual publication. The context might, for example, include metadata about a book that is created and disseminated before the traditional content.
Some questions: Given that readers are generating and accumulating information about published information, even if they are often forced to store it elsewhere (in their minds, in marginalia, in separate notebooks, etc.), might it also make sense to take the position that this activity is also a form of publication? If the information from readers is “about” the same thing, does it not conceptually form part of the same publication? If, with the rise of digital, the audience is increasingly able to contribute to the content, at what point can we consider that the act of publication is over, if ever?
These questions point to a change in how we look at the act of publishing. In the traditional model, a moment of publication was reached. Contextual information that did not fit the publishing container was left behind. “Read-only” content was delivered into the hands of the audience, who went away individually to consume it while publishers moved on to work on their next publications.
Contrast that model with one in the digital world. Free from the constraints of more rigid containers and with mechanisms that allow the audience to contribute content, it makes more sense to regard publication as something that starts much earlier and doesn’t finish at the moment of initial dissemination. In fact I think it’s defensible to argue that the act of publication never finishes—even after the last physical copy of a book goes out of existence. The memory of the book, the fact that it did exist, comments on the book, other metadata—in other words “context”—continue to exist, and also to accrue.
Book Publishing Goes Digital and Online
Ebooks and ebook readers have taken the world by storm in the last several years. Amazon has announced that it now sells more ebooks than certain categories of print books. As extraordinary as this is, a more profound change will take place, because ebook readers are also online.
When an ebook device is online, it can make dynamic API calls across the network to send or receive information. The device and its content are not static. Additional content (context!) can be pulled from remote servers. For example, the Google ebook Web Reader supports several operations on individual words. New content (user-generated, or otherwise) can be uploaded.
This means that the online ebook device maps perfectly onto a broader definition of publishing—one that starts earlier and doesn’t really stop. It means that the container in which content is delivered has in some sense become irrelevant. There are many ways to look at this, but the bottom line is that information is being communicated between the external networked world and the user by means of an infinitely flexible general-purpose computer. The form of the “container” that the top-level software ends up displaying to the user isn’t particularly relevant (ignoring for now the controversial issues about which data formats are supported). If the device can pull arbitrary information from the network and assemble it as needed, to the extent that a container is even necessary, we arrive at a form of just-in-time container, where, as O’Leary puts it, the container is just “an output of digital workflows.”
Just as browsing has to a large extent moved from delivery of static pre-built HTML containers to a model of looking at a page that is constructed and updated on demand, the ebook world can be expected to move from delivery of static pre-built ebook containers to something similarly dynamic. The ebook experience will degrade when network connectivity is lost, just as it does when browsing (perhaps falling back to just showing the static non-interactive content). When the formal container shrinks to nothing more than a few HTML tags to hold the “content”—a program—then the container is more like a part of the handshaking protocol between device and server. What goes on computationally behind the user experience is pure application. It operates internally much more like a game than something with static discrete “pages,” like a traditional website or book. Indeed, the intermediate step of the static ebook will in some cases be skipped entirely, as we will see below.
Openly Writable Storage
When we are given the chance, we choose to store information in places that make it most useful, and therefore most valuable. This is illustrated in many small everyday acts. Consider the name tags we unthinkingly put around our necks at conferences. We put bookmarks into books or dog-ear pages to recall where we stopped. We are adept at using Post-it notes, putting them into places where the information they carry will be most useful. These are all examples of the same thing, of how we naturally tend to put information into context (there’s that word again) when given the opportunity. In the digital world, Wikipedia gives people an always-writable location to store information about almost anything, possibly subject to the whims of Wikipedia editors.
Today, the digital default is that we can’t always write. Most of the time we’re in a read-only world, and when it is in theory possible to contribute information we can only do so in ways that have been anticipated, and if we have permission. It’s a radically different environment for working with information. All too often when we have additional information about something we are forced to put it elsewhere, making that information less valuable. While Wikipedia works well for humans contributing shared information about things, it is not a suitable framework for applications to do a similar thing.
At Fluidinfo we are building what we regard as a core piece of missing architecture—shared storage. Fluidinfo is a single shared online storage platform. Like a wiki, Fluidinfo has a writable location for everything. Unlike a wiki, Fluidinfo has a permissions system that content owners can use to control read and write access for other users and applications. It also supports typed values as well as arbitrary data with a MIME type (image, audio, PDF, etc.), and it has a query language for data retrieval.
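The model just described—a writable object for everything, tags namespaced by user, and a query language over those tags—can be sketched in a few lines. This is a toy illustration only, not Fluidinfo's actual implementation or API; the about-strings, tag names, and query forms below are invented for the example.

```python
# Toy model of shared writable storage: every "thing" (identified here
# by an about-string) has one object, and any user or application can
# attach namespaced tags to it. Illustration only, not Fluidinfo's API.

class Store:
    def __init__(self):
        self.objects = {}  # about-string -> {"user/tag": value}

    def tag(self, about, tag, value):
        """Attach user/tag = value to the object representing `about`."""
        self.objects.setdefault(about, {})[tag] = value

    def query(self, tag, predicate=lambda v: True):
        """Return about-strings whose objects carry `tag` matching predicate."""
        return [about for about, tags in self.objects.items()
                if tag in tags and predicate(tags[tag])]

store = Store()
# A publisher and a reader independently tag the *same* object, so their
# information accumulates in one place.
store.tag("book:life-keith-richards", "publisher/isbn", "978-0316034388")
store.tag("book:life-keith-richards", "alice/rating", 9)
store.tag("book:exile-on-main-st", "alice/rating", 6)

# "Which books has alice rated above 8?"
print(store.query("alice/rating", lambda v: v > 8))
# → ['book:life-keith-richards']
```

The key design point is that neither party needed the other's permission or a pre-agreed schema to add their tags; the shared object is simply the place where information about the book naturally accumulates.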
The advantage of an openly writable storage platform like Fluidinfo is that it makes the world I described above possible. Given an openly writable object representing a book, a publisher has a place to store arbitrary information (context, metadata). Applications can use the identical place to store arbitrary user-generated information. All this information is stored in the place where it is most useful: with the thing it relates to. The Fluidinfo platform is built to provide flexible storage that allows applications and their users to work with information digitally in the same way they work with information in the non-computational world—by putting it into context, where it is most useful and valuable.
While some of these changes will be pushed by the value created by social user-generated data, as described above, there are additional economic incentives. Revenue models are a persistent problem in publishing static, monolithic chunks of information (as books, as HTML, in ebook formats). A move to a world in which devices run programs that pull small pieces of information from the network and assemble that information into a user experience provides not just richer and more dynamic content, but also a more viable and much more interesting revenue model.
For example, platforms like Fluidinfo make it possible to monetize the additional context information that a publisher holds but is forced to discard in a world of formal containers. Separating this context from the content it describes sharply limits what can be shown to users. If this information is stored in context, on the same objects that hold other metadata about the book, then so much the better. It is easy to imagine a publisher selling digital copies of Keith Richards’ book Life and also making it possible to (later) pay a little more to have his footnotes or author’s commentary embedded or provided alongside. This rich and valuable contextual information might take many forms.
Alternate models can also be tried. These include razor-and-blades freebie marketing, giving away partial content and charging for the full version, or even wild-ass ideas like giving away the consonants and selling the vowels. Anything is possible.
The combination of atomized content, networked devices that can make API calls and a move away from static monolithic information formats takes us to a world of new revenue opportunities. If this model is backed by general underlying shared writable storage, it is also easy to imagine end users who are adding valuable information and charging for it. Reader contributions might be valuable to other readers in the form of reviews or annotations. Such annotations might be valuable to publishers in the form of errata, ratings, recommendations, and much more. Third-party contributions to the underlying data could also result in a richer experience and revenue to those third parties, as discussed below. In such a world, the distinction between writer and reader, between publisher and consumer, becomes extremely blurry.
An amusing and provocative demonstration of these kinds of ideas was given during a talk entitled “Hacking THE book” at Book Hackday in London. Nicholas Tollervey demonstrated a small program that dynamically queries Fluidinfo to locate objects about verses in a given book of the King James Bible. It retrieves tags on those objects that hold information from the LOLCAT Bible and The Brick Testament (warning: NSFW!), and automatically assembles the result into an EPUB document that can be read on a wide range of ebook devices.
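The shape of that kind of dynamic assembly is easy to sketch: pull per-verse values from a shared store and merge them into a single document fragment. The tag names (lolcat/text, brick/image) and the in-memory data below are stand-ins invented for illustration, not the real tags or a real Fluidinfo query.

```python
# Assemble a minimal "chapter" by merging per-verse tag values into one
# XHTML fragment -- the shape of Tollervey's demo, with invented tag
# names and in-memory data in place of live Fluidinfo queries.

verses = {
    "genesis:1:1": {"lolcat/text": "Oh hai. In teh beginnin ...",
                    "brick/image": "http://example.com/gen-1-1.jpg"},
    "genesis:1:2": {"lolcat/text": "Da Urfs no had shapez ..."},
}

def assemble(verses):
    parts = []
    for ref in sorted(verses):
        tags = verses[ref]
        parts.append("<h2>%s</h2>" % ref)
        if "brick/image" in tags:           # illustration, when one exists
            parts.append('<img src="%s"/>' % tags["brick/image"])
        parts.append("<p>%s</p>" % tags["lolcat/text"])
    return "\n".join(parts)

html = assemble(verses)
```

Since an EPUB is essentially zipped XHTML plus a manifest, a fragment like this is most of the way to a readable ebook, rebuilt on demand whenever the underlying tags change.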
Although a version of the Bible written in LOLCAT dialect with illustrations from the Brick Testament is obviously not going to be of interest to a wide audience, it illustrates how customized and personalized these things can be. Such a program could easily request and display opinions, ratings, annotations and page numbers your friends are up to. It could provide definitions, translations, footnotes, extra images, links, and the like. Additional information can be independently tagged onto the same underlying objects by other applications, with the “book” being rebuilt or updated dynamically as needed.
Skipping an Intermediate Step
If static ebook containers represent an intermediate stage, a step on the way from paper books to “books” that are more like applications, it should be possible for that stage to be skipped in some cases. We are already beginning to see this. For example, the Pearson subsidiary Dorling Kindersley announced an API Developer Initiative around their Eyewitness Travel guides. “In the US, sales of international guidebooks fell 20 percent between 2007 and 2009” as described in this Sydney Morning Herald article. The travel information is still valuable, but people are rapidly realizing that it is no longer necessary to travel with a book when you are already carrying a mobile phone that has the considerable added advantage of built-in GPS.
While one reaction might be to modernize your valuable content by moving it to a digital ebook format, it likely makes more sense to instead jump straight to building travel “book” applications on a device that can make API calls when needed, in order to pull (or push) small pieces of individual content or to run searches. For example, “find all the moderately priced and nearby Indian restaurants that are already (or still) open.” Dorling Kindersley has already done the work of collecting, creating, and curating all the necessary information and getting it into book form. The steps of extracting and atomizing it, making it available via an API and building a user interface are relatively cheap.
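That restaurant search is, at bottom, a filter over small pieces of atomized content, combined with the phone's location. A toy version follows; the field names, coordinates, and data are invented for illustration, and the distance calculation is a standard haversine formula rather than anything from a real guidebook API.

```python
# "Find all the moderately priced and nearby Indian restaurants that are
# already (or still) open" as a filter over atomized records.
from math import radians, sin, cos, asin, sqrt

def km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

restaurants = [  # invented sample data
    {"name": "Taj", "cuisine": "indian", "price": 2, "open": True,
     "lat": 51.5155, "lon": -0.1420},
    {"name": "Bombay Hall", "cuisine": "indian", "price": 4, "open": True,
     "lat": 51.5200, "lon": -0.1000},
    {"name": "Spice Hut", "cuisine": "indian", "price": 2, "open": False,
     "lat": 51.5150, "lon": -0.1400},
]

def nearby_open_moderate(here_lat, here_lon, places, max_km=2.0, max_price=3):
    return [p["name"] for p in places
            if p["cuisine"] == "indian" and p["open"]
            and p["price"] <= max_price
            and km(here_lat, here_lon, p["lat"], p["lon"]) <= max_km]

print(nearby_open_moderate(51.5145, -0.1425, restaurants))
# → ['Taj']
```

In the model sketched in the article, a device would run this kind of query server-side against the atomized guide content, pulling back only the handful of matching records it needs to display.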
This approach makes good sense as a way to ensure published content is accessible in modern mobile contexts and hence retains or even increases its value. An API opens the possibility of licensed programmatic access to published content, making it possible for third-party developers to build applications. In many cases, the most cost-effective user interfaces will be mobile web browsers. This reality explains the strong interest in Books in Browsers, sponsored by the Internet Archive.
To take the example a step further and to link it back to shared underlying storage, consider the extraordinary careers of Bob Arno and Bambi Vincent. They run a blog called Thiefhunters in Paradise. They’ve spent decades visiting hundreds of cities, filming and interviewing criminals and collecting unique and valuable data on street crime. If some of that data were stored in the same location as Pearson information, a travel application could additionally license it from Bob and Bambi for display. Display could take the form of mundane additional textual content, but could also be presented in the form of augmented reality. You could hold up your phone and “see” city crime contours, showing you dangerous neighborhoods. You might receive a vibrating alert if the phone detects that you’re entering a bad part of town. The possibilities are endless. While this might seem like science fiction, it is not. Applications of this type are already in the marketplace.
Tying These Threads Together
I’ve mentioned two areas in which valuable content around books is not being realized: contextual information that publishers accumulate but have nowhere to put, and user-generated content that appears following publication. The monolithic, read-only nature of books, including ebook formats, does not offer a natural place for this additional content to be stored, used, combined, augmented, and monetized.
The non-digital world illustrates well how we very often make information more useful and valuable by storing related information together. So the first point is that publishing can be looked at as a process that is more spread out in time, beginning earlier and perhaps never ending. If we had a guaranteed openly writable information store, including a permissions system, we would always have a place to store and act on metadata around any digital book. The value in this additional data pushes the system in that direction, and away from static read-only content (containers).
If users and other applications are to be allowed to add information, there needs to be a mechanism for ebook devices to get at this information and to add to it. APIs and network calls provide that mechanism. Once information is present in APIs, devices will be able to pull it on demand and in small chunks. Given a permissions system that operates at the level of the pieces of information instead of on entire ebooks, there is a natural transition to a world in which access to that information is not free, in which application developers can license content and in turn charge for it by selling applications.
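A permissions system at the level of individual pieces of information can be sketched as a lookup from tag name to read policy. This is not Fluidinfo's actual permission model; the tag names, users, and policy shapes below are invented to show the idea of per-tag rather than per-ebook access control.

```python
# Per-tag access control: some pieces of information on a book's object
# are world-readable, others only readable by licensed users. Names and
# policy format are invented for illustration.

permissions = {
    "publisher/footnotes": {"read": {"alice"}},  # paid/licensed readers only
    "publisher/blurb": {"read": None},           # None = world-readable
}

def can_read(user, tag):
    """True if `user` may read `tag`; untagged policies default to open."""
    policy = permissions.get(tag, {"read": None})
    readers = policy["read"]
    return readers is None or user in readers

print(can_read("bob", "publisher/blurb"))      # → True
print(can_read("bob", "publisher/footnotes"))  # → False
```

Under a scheme like this, charging for the footnotes while giving away the blurb is just a difference in two policy entries, which is what makes fine-grained licensing of content to applications straightforward.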
In summary, I believe the digital book world, and the digital world in general, is on the verge of becoming a lot more interesting as we move inexorably towards a common aspect of the visions of Paul Otlet and his successors. We are moving towards a world of applications, including ebook readers, that are based in part on shared writable storage with its attendant benefits.
Give the author feedback & add your comments about this chapter on the web: http://book.pressbooks.com/chapter/books-databases-terry-jones