Tuesday, November 18, 2008

Web 2.0 Sensorium

This is in response to Tim O'Reilly's blog post, "Daddy, Where's Your Phone?", in which he says that he thinks the web is still primarily a PC experience with mobile as an add-on. I disagree with that characterization, rather than with his perception, and that is what this post is about.

The defining characteristic of Web 2.0 for me is Tim's "harnessing collective intelligence" meme. I've been interested in "intelligence" ever since I saw HAL 9000 when I was 14. In the early 80s I was avidly reading Hofstadter's "Gödel, Escher, Bach: An Eternal Golden Braid". More recently I have been interested in swarm intelligence, and now E. O. Wilson has written "The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies", which puts ants firmly in the group intelligence sphere and revives the previously discredited idea of evolution at the group level.

So what has this to do with accessing the web? If Web 2.0 is about collective intelligence, or how the actions of all the participants generate meta-information, we can see the parallels between humans participating in a group mind and ants as part of a colony mind. But whereas ants cannot see the big picture as they mindlessly toil, we humans can do so, or at least see fragments of it reflected back to us. A good example is Amazon's book suggestions, which offer me a lot of information on the quality of a book from readers' purchases and feedback. In effect, Amazon offers me access to the group mind's view of the book, and effectively enhances my book-buying cognitive processes. Wikipedia, and other data look-up services, are like an extended memory.

And here's where we get to the point. I don't want my extended memory and cognitive processes turned off when I leave my PC. My brain is mobile by definition, and if Web 2.0 is going to increasingly become part of my mind, then my access to it has to travel with me. Today that access has reduced bandwidth and resolution when I use my phone. It's like looking at the world through a rolled-up newspaper. But since my brain is mobile, my access must be too, however limited in scope. It is arguably even more important to be connected to the group mind when I am away from a PC, and as that mind expands, that will become ever more important.

The cell phone has become the ubiquitous personal communication device. It is highly portable. Coverage is global. And "smart phones" like the iPhone are bridging the gap between PCs and basic phones. But clearly they will be used differently, much as calculators were not used like mainframe computers. Phones will become the connection between my brain and the group brain, and thus will need to use their limited bandwidth efficiently to get me the salient information from that group mind - "What are the good restaurants nearby? Is this restaurant good? How much will it cost me for a meal? What are the popular dishes and combinations? Has anyone I know eaten here recently? Has the health inspector shut down the kitchen in the last year?" - and of course adding my information to the mind, much like an ant adds a drop of pheromone to the sugar trail.

It's early days of course. I would like my phone to have a screen that can be made larger, more like a paperback book, and definitely faster data transfer. The interface for using the device could be a lot better too. But I think the trend is obvious to an observer: the mobile web is transitioning to become the dominant paradigm, leaving the richer, PC-based access to different usage patterns. This seems to me another of the "good enough" devices that will undermine the use of PCs for web access in anything but the more specialist roles that need their power.


Monday, October 27, 2008

SETI - Day of Science

On Saturday I attended the SETI Institute's "Day of Science", a public presentation on topics of interest to those of us intrigued by the Fermi Paradox.

It started off well, with a stimulating lecture about using information theory to describe the complexity of animal languages as a test of what to look for in alien signals. The technique was applied to dolphins and humpback whales, showing that the sounds they made, and their usage, appeared to indicate that they were languages. Particularly interesting was the observation that baby dolphins babble like human infants before acquiring the adult sound patterns. The problem for the detection of alien signals is that if they are encrypted to maximize transmission efficiency, all this pattern is lost as the signal entropy is maximized. So we have to hope that the signals are unencrypted.
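As an aside, here is a tiny, purely illustrative Java sketch of my own (not from the lecture) of the underlying idea: a crude first-order Shannon entropy estimate over a symbol stream. A structured, repetitive "language" scores well below the maximum bits per symbol, while a random or well-encrypted stream approaches the maximum and so looks pattern-free. The actual analysis presented used far more sophisticated statistics than this.

    import java.util.HashMap;
    import java.util.Map;

    public class SignalEntropy {

        // Crude first-order Shannon entropy estimate, in bits per symbol.
        static double entropyBitsPerSymbol(String symbols) {
            Map<Character, Integer> counts = new HashMap<Character, Integer>();
            for (int i = 0; i < symbols.length(); i++) {
                char c = symbols.charAt(i);
                Integer prev = counts.get(c);
                counts.put(c, prev == null ? 1 : prev + 1);
            }
            double h = 0.0;
            for (Integer count : counts.values()) {
                double p = count.doubleValue() / symbols.length();
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            // A repetitive, structured "language" uses few bits per symbol...
            System.out.println(entropyBitsPerSymbol("abababababababab")); // ~1.0
            // ...while a stream with no repeated symbols hits the maximum for its alphabet.
            System.out.println(entropyBitsPerSymbol("qwertyuiopasdfgh")); // ~4.0
        }
    }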

This was followed by a rather tedious lecture about the precautions needed to prevent contamination of other planets by spacecraft and humans. While we all want to preserve any living organisms on other planets, I couldn't help feeling that this was the ultimate stalling tactic, an environmental-impact-report approach designed to stymie any serious exploration of the planets where life might occur. Very noble, but if Columbus had had to comply, he wouldn't have bothered to make the trip.

We then received a trio of lectures: one on the Kepler space telescope, which may be able to detect Earth-sized planets around other stars; one on the early plans for a Europa mission; and a session on the philosophy of what we should do if we actually receive unambiguous alien signals. The Kepler mission will probably be getting data for its first discoveries in a couple of years, the Europa mission probably won't happen until the 2020s, if at all, and I seriously doubt we will receive signals from aliens in my lifetime.

The last part of the program brought on the SETI rock stars. Jill Tarter, who has just won the 2009 TED prize, described the new Allen Telescope Array in northern California. I was pleased to hear that it is primarily for doing high-resolution radio telescope work, and only secondarily for alien signal detection. Most fascinating was the effect of Moore's law on the electronics, which affects the size of each radio dish. Another few years and they might be small enough and cheap enough to buy at Radio Shack. Then Seth Shostak took the stage to regale us with a very humorous presentation on the reasoning behind the SETI strategy. Definitely a fun talk, but it was clear that SETI makes some huge assumptions about the aliens, and thus about the nature of the signals SETI hopes to listen for, namely very short, high-strength beacon signals that repeat over longer intervals, perhaps a week, perhaps a year.

IMO, the SETI approach feels very similar to the 19th-century idea of burning huge forest fires in a geometric shape as a way to signal to the Martians. SETI assumes that the aliens will use radio waves (or at least the electromagnetic spectrum) and that they are only located around their home star, and therefore likely to be far away. This just strikes me as incredibly conservative. Aliens could easily determine likely life-bearing planets, as we are just about to do today, maybe even pinpoint actual planets bearing life. Then they could send small probes to those planets to monitor them. If they wanted to communicate, a local signal could be generated, much like Clarke's monolith. And if they can send small probes, and they might be very small, then maybe they can communicate at FTL speeds, perhaps using quantum entanglement. In other words, we are assuming aliens will be using a level of technology extrapolated from ours, rather than what an advanced race might really use.

After the program I spoke to Shostak about his assumptions, and he agreed that all bets were off if the aliens were not remote but were indeed scattered through the galaxy.
But I accept that we have to start somewhere, so we might as well look, especially as the new ATA will give us that capability almost for free, piggy-backing on mainstream astronomy.




Monday, October 13, 2008

GWT version blues

I've been working with GWT (Google Web Toolkit) for about 18 months now, ever since I made the decision to code Skollar using this technology. I really appreciate the power to build cross-platform JavaScript applications using Java, a language I can almost code in my sleep.

But sometimes the boys and girls at Google can be very annoying.

Last week, after returning to make some changes to Skollar, I noticed that some core pieces of my application had stopped working. I suspected the GWT libraries, as a version that I had on Amazon's EC2 was still working fine, whilst a development version on a local server using the same codebase but a newer GWT library was not. I tracked the problem down to a change in the Element class API. Previously, when I needed an outerHTML String, I used the toString() method on the Element instance. But as of v1.5, toString() no longer has this behavior; it has been replaced by getString(). Is this rather important API change mentioned in the release notes? If it is, it is obscure, and I cannot find a direct reference to it. Now, if I were part of the GWT development team, I would have asked for this change to be made prominent in the documentation, and I would have named the method getOuterHTML() instead of getString() to make its purpose more obvious to the developer.
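For anyone bitten by the same change, here is a minimal sketch of the before and after as I understand it; the class, method and variable names around the two calls are just illustrative, not from my actual code.

    import com.google.gwt.dom.client.Document;
    import com.google.gwt.dom.client.DivElement;

    public class OuterHtmlSnippet {

        public static String outerHtmlOf() {
            DivElement div = Document.get().createDivElement();
            div.setInnerText("Hello from Skollar");

            // Pre-1.5: toString() on an Element returned its outer HTML.
            // String html = div.toString();

            // GWT 1.5 and later: getString() returns the outer HTML instead.
            String html = div.getString();
            return html;
        }
    }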

Fortunately the code changes needed to fix this were simple this time, but I wish the Google folks would think a little more before they release the next beta.