[ba-ohs-talk] p2p as a solution to the loose connectivity of the mobile Internet
Greetings -- here's a different take on the mobility issue: (01)
Regarding the mobile use of OHS, I'd like to address the issue of
disconnected and loosely connected devices. There are other -- perhaps
more fascinating -- discussions around mobile and ubiquitous computing,
such as contextuality and innovative new user interfaces, arising from
the differences between mobile/ubiquitous and stationary use situations.
Here the focus is on a more "earthly" issue: missing or nonfunctional
data transfer service. (02)
PDAs were long used mostly disconnected, so this is by no means a new
issue. Stationary PC users with modems also often work off-line, e.g.
reading previously retrieved posts/articles, writing new ones or even
viewing web pages from the browser cache (browsers can be set to work
off-line). Today this is often the case when using laptops while
travelling or visiting places that don't provide seamless connectivity.
For PDAs and laptops there are quite advanced syncing solutions for
calendar data etc. and for subscribing to Web publications (e.g.
AvantGo). (03)
Now the buzz is about mobile connectivity: WLAN/WiFi, GPRS, UMTS etc. In
an ideal situation a mobile device is "always on", so that e.g. a mobile
phone can ask the server for bus timetables at any time via GPRS --
although there is no constant connection, the fact that one can be opened
automatically at any moment makes it feel to the user almost like a fixed
line. These advances may seem to render the issue of connectionless use
irrelevant. (04)
However, long-range mobile connections are always relatively slow and can
be expensive to use. Also, despite advances in wireless technology, there
will probably always be places and situations with poor or no coverage.
This is the first reason why the issue is still relevant. (05)
Another problem is the unreliability of the Internet in general. Even if
the mobile connection is working fine, or the user has a stable wired
connection, any server can stop responding, and connections to even large
parts of the network can fail. (06)
So there will be situations where the user is not connected to the network
at all, or cannot reach the server or the parts of the network where the
needed service is provided. (07)
But often in these situations the needed resources could be available on
the local machine or on some other machines that are within reach.
Therefore it is promising to look for solutions in architectures where
the location of a resource does not matter -- is not fixed, in a sense --
so that it can be retrieved from wherever is feasible. (08)
Already with the current Web, and before that with ftp repositories etc.,
mirroring is a common practice to share server load and to ensure that
critical resources are not behind a single point of failure. Perhaps the
most advanced of the traditional Internet service architectures is the
news protocol (NNTP), which indeed does not address content by location
(as URLs do) but by article-ids, which can be used to retrieve an article
from any server. But in these cases the number of servers is still low,
and the mirroring etc. have to be set up manually. With the Web, usually
only large corporations and institutions can set up large-scale mirroring
systems, and even there URL addressing makes it tricky to avoid a single
point of failure. (09)
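A small illustration of the NNTP point (a sketch only, with made-up
server names, using the nntplib module of recent Python 3 versions): the
same article-id can be asked from whichever server happens to respond.

    import nntplib

    SERVERS = ["news.server-a.example", "news.server-b.example"]  # hypothetical hosts

    def fetch_article(message_id):
        # The article-id is the same everywhere, so just try the servers in order.
        for host in SERVERS:
            try:
                server = nntplib.NNTP(host)
                response, info = server.article(message_id)
                server.quit()
                return info.lines  # the article text, wherever it came from
            except (nntplib.NNTPError, OSError):
                continue  # this server is down or does not carry the article
        return None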
Peer-to-peer networks take a totally different approach: any node in the
network can provide services, not only a few select servers. Retrieval is
not necessarily tied to the location or even the provider of the service
at all; the service is provided from wherever it is best available. If
e.g. the part of the network where the original provider resides is not
reachable at the moment, the user may not even notice, as he/she is
probably served from somewhere nearer anyway. And in the extreme
situation where there is no connection at all, if the needed resources
happen to be already on the user's device, they are found there. Besides
normal caching of bits that the user has requested him/herself, this may
include bits that have passed through the particular node while it was
connected -- e.g. if it was acting as a gateway to some other, perhaps
more poorly connected device at the time. Also, ubicomp visions often
assume a p2p solution, so that all the devices can connect to others
nearby without having to access a network backbone. Yet another case is
when two or more travellers with WLAN/Bluetooth-equipped laptops meet,
for example on a train without network connectivity -- with a p2p OHS
they could collaborate directly with each other. (010)
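To make that retrieval order concrete, here is a rough sketch in Python
(with hypothetical cache and peer interfaces, not any particular p2p
protocol) of how a node might resolve a location-independent id:

    def resolve(guid, local_cache, nearby_peers):
        # 1. A local copy works even when fully disconnected.
        if guid in local_cache:
            return local_cache[guid]
        # 2. Ask whatever peers are reachable right now, e.g. other laptops
        #    on the same train; peer.get() is a hypothetical interface.
        for peer in nearby_peers:
            data = peer.get(guid)
            if data is not None:
                local_cache[guid] = data  # cache opportunistically for later off-line use
                return data
        # 3. The origin being unreachable only matters if no copy was found anywhere.
        return None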
There is a lot of research going on in p2p networks, and I'm by no means
an expert. But there are situations in my everyday life where I wish these
systems had already replaced the traditional filesystems, servers etc. (011)
A project that is aiming at an implementation that would allow the
scenarios above is gzz, http://gzz.info/ -- the project that started as a
GNU implementation of Ted Nelson's ZigZag (and Xanadu). The component
relevant here is Storm, the storage module, which (like most other parts
of the project) is independent of the zzstructure (so the patent issue
does not touch it), but does, to my understanding, implement parts of the
Xanadu model (no patents there, or?). The idea is that every entity is
assigned (cryptographically) a globally unique id (GUID) at creation. The
GUID is the way to refer to the data, and it contains no information
about the location (e.g. no server name or anything like that). In the
vision this goes for all data -- personal notes, e-mails like this,
publications etc. The gzz project has registered an informal URN
namespace for this purpose, urn-5:
http://www.iana.org/assignments/urn-informal/urn-5 (012)
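To illustrate what location-free naming buys us (only as an illustration
-- I'm not claiming this is the exact Storm or urn-5 scheme), an id can
e.g. be derived from the content itself with a cryptographic hash, so a
copy received from any machine can be verified against its id:

    import hashlib

    def make_guid(data):
        # "urn:x-example:" is a made-up namespace prefix, not real urn-5 syntax.
        return "urn:x-example:" + hashlib.sha1(data).hexdigest()

    def verify(guid, data):
        # A copy obtained from any peer can be checked locally against its id.
        return make_guid(data) == guid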
One famous project that is partly similar, but with major differences, is
Freenet, http://freenetproject.org/ . As most of you probably know, the
aim there is to ensure freedom of speech globally on the Internet,
effectively so that anything published there can neither be removed nor
traced back to whoever published it (or to where it is stored). It is
also a p2p network with GUIDs attached to the contents. A major
difference to GZZ/Storm, however, is that Freenet does not guarantee that
data is never accidentally lost (it may e.g. disappear if no one reads
it), and that GZZ/Storm does not require absolute anonymity -- it might
even be the other way around, considering copyright protection and
Xanadu-style transpublishing with micropayments etc. (013)
There are numerous other projects innovating and experimenting with p2p
networks, and, as far as I know, numerous unresolved issues as well, e.g.
the efficiency of searches and protecting the infrastructure from attacks
(e.g. malicious users who set their hosts to tell others that all
resources are best available from one single source, which consequently
suffers a denial-of-service attack, i.e. is flooded with requests). Like
I said, I'm really no expert; I've mostly just been following the
research done in the gzz project (specifically a master's thesis
comparing the different approaches, which is in
Documentation/misc/hemppah-progradu in the gzz CVS hosted at Savannah,
http://savannah.nongnu.org/cvs/?group=gzz ). (014)
Besides connectivity issues, p2p offers a different view of hypermedia in
general -- especially different from the Web, but perhaps closer to the original
hypertext and Internet ideas? I did not attend the hypertext conference
last year, but the notes from the p2p session there seem to cover the
range of issues quite well: (015)
Conference on Hypertext and Hypermedia
Proceedings of the thirteenth conference on Hypertext and hypermedia 2002
Peer-to-peer Hypertext (panel)
http://portal.acm.org/citation.cfm?id=513338.513339 (016)
A year earlier there was quite a fascinating approach presented in: (017)
Mark K. Thompson and David C. De Roure
Hypermedia by coincidence
http://www.ht01.org/presentations/Session3b/thompson.pdf (018)
To conclude this lengthy post, I suggest that p2p architectures be
evaluated as a potential basis for OHS in the future. Besides providing a
potentially clean and elegant solution to the practical problems of
unreliable mobile connectivity and of the Internet in general, they may
also facilitate sharing and enable more straightforward collaboration
tools than the server-oriented approach that dominates the Web. As there
are open problems and little experience with "real" (e.g. business-like)
use of p2p backends in systems that are used to do work, this is
definitely more a potential long-term goal than an easy solution today.
Experimenting with the existing systems could, however, begin straight
away, and the approach may already prove feasible at least for local
networks (i.e. not replacing the whole Web, but perhaps something for a
project group). (019)
~Toni (020)