magpiebrain

The site of Sam Newman, a consultant at ThoughtWorks

Posts from the ‘web 2.0’ category

I’ll be in Brighton tomorrow at “d.construct”:http://2006.dconstruct.org/ – generally mingling and causing a nuisance. I’ll be the one with the big hair raving about Javascript and “Selenium(Selenium – in browser testing)”:http://www.openqa.org/selenium/.


I was asked again today what my opinion was of Web 2.0 – what was it about all the buzzwords that was actually important? I probably end up answering this question a couple of times a week, and thought that today’s response was as worth blogging as any of my previous answers.

Web 2.0, being a grab-bag of new technologies that emerged at the same time, can mean many things to many people. For me, there are two primary trends which I think are important – the availability of cheap hardware and infrastructure software, and the ability to create compelling user interfaces on the web.

We’ve seen the emergence of a robust stack of free software on which Web 2.0 applications can be built. Where previously companies’ seed funding would be spent on buying Oracle and application servers, nowadays people are turning to cheap if not free alternatives such as the LAMP stack. Couple this with the low cost of hosting and bandwidth, and companies can focus their money on delivering what is most important – software that differentiates them from their competitors.

The fact that people are creating more impressive, more usable interfaces on the web is due to one simple fact – the browsers got better. The lowest common denominator browsers have a good enough standard implementation of CSS, (X)HTML and JavaScript that we are freed from much of the work required to make our code cross-browser compatible. This has been followed by a number of third parties creating Web APIs (such as the Yahoo Toolkit) which further isolate us from browser quirks, and help us concentrate on delivering valuable software.

If any of you are interested in Web 2.0 (or beer) and are in London (or want to travel for beer), then feel free to pop along to the next “London 2.0 meet-up(magpiebrain.com – Posts on London 2.0)”:http://www.magpiebrain.com/blog/category/web-20/london-20-meet-ups/.

Amazon web evangelist Jeff Barr will be “giving a talk(Unixdaemon – Jeff Barr in London)”:http://blog.unixdaemon.net/cgi-bin/blosxom.pl/events/aws_jeffreybarr.html on their webservice offerings at Westminster University on the 15th of May. Unfortunately I’ll be unable to attend – I’d have liked to pick Jeff’s brains about “S3”:http://www.amazon.com/gp/browse.html/002-7839271-9683254?node=16427261 off the back of my “recent post(magpiebrain – Is Amazon S3 the first tier 0 Internet service?)”:http://www.magpiebrain.com/blog/2006/03/21/is-amazon-s3-the-first-tier-0-internet-service/ – but I’m sure it will be an interesting event.

If you want to attend make sure you contact organiser Dean Wilson (email: dwilson at unixdaemon.net) so he can let you know of any changes. I’ve added the event to the London 2.0 calendar too.

Thanks to everyone who attended London 2.0 RC 4 last night (in rather cramped conditions). The “usual(Simon Willison’s weblog)”:http://simon.incutio.com/ “suspects”:http://www.brunningonline.net/simon/blog/ were in attendance, as well as Adrian Holovaty who was in for a few days.

“Phil Dawes”:http://www.phildawes.net/blog/ demonstrated both “Bicycle Repair Man(Bicycle Repair Man – Python Refactoring Tool)”:http://bicyclerepair.sourceforge.net/ and the Python testing tool “Protest”:http://www.phildawes.net/blog/2006/03/17/protest-rocks-generate-documentation-from-tests/ (we need screencasts of both!), and I pointed numerous people towards the impressive “DabbleDB”:http://dabbledb.com/utr/ screencast. Demo of the night probably had to be Remi Delon showing off “Python Hosting”:http://www.python-hosting.com/. They offer a very slick, one-click install of a variety of different webapp frameworks (everything from TurboGears to Zope to Django) for a very “reasonable price(Python Hosting shared hosting plans)”:http://www.python-hosting.com/shared_hosting. His screencast is out soon – but put it this way, TextDrive’s TextPanel is going to have a lot to live up to, not to mention their forthcoming Rails-dedicated hosting (note: I’m a TextDrive user myself). Despite their name, Python-Hosting do also provide Rails hosting, however it’s not yet integrated with their slick user interface.

Details on next month’s London 2.0 RC5 coming up soon.

“Tom Coates'(Podcast of Tom’s Presentation)”:http://www.webuser.co.uk/carsonworkshops/TomCoates.mp3 recent presentation at the “Future of Web Apps summit(Carson Workshops’ The Future of Web Apps Summit)”:http://www.carsonworkshops.com/summit/ did a very good job of expressing what I believe to be the most potentially interesting aspect of the current crop of Internet applications. Early on, applications such as Flickr and del.icio.us started exposing their APIs, which had previously been used solely internally. Initially the rationale was that people would use rich (desktop) applications to display and manipulate data; it also made people feel safe in providing their data, as they could always export it afterwards.

h3. Mashups – Tier Two Applications

What happened of course, was that people started to integrate these APIs into their own internet applications. In the same vein as probably the most famous mashup of all time – DJ Danger Mouse’s “Grey Album(Illegal Art – Summary of the Grey Album Legal Battle)”:http://www.illegal-art.org/audio/grey.html – application mashups took data from multiple sources and created something new. The idea that emerged is that the original sources themselves had some value (as did Jay-Z’s Black Album and The Beatles’ White Album), but by combining them you created something new, which itself had value (although no doubt the lawyers of record companies might disagree). One of the poster children for the application mashup is the “Chicago Crime Database”:http://www.chicagocrime.org/. By using “Google’s map API”:http://www.google.com/apis/maps/ and the publicly accessible “Citizen ICAM Web site”:http://12.17.79.6/, Adrian Holovaty’s Django-powered application managed to create something which didn’t exist before, did a better job of displaying the ICAM data than ICAM itself, and yet was entirely dependent on data and services supplied by external parties.
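The tier-two pattern described above – take raw records from one public source, reshape them for another service’s display layer – can be sketched in a few lines. This is a hypothetical illustration of the idea, not Chicago Crime’s actual code; the record fields and marker structure are invented for the example.

```python
# Sketch of the tier-two mashup pattern: records from one external data
# source are recombined into markers suitable for a mapping service.
# The mashup adds no new data of its own - its value is purely in the
# new presentation of other services' output.

def to_map_markers(crime_records):
    """Convert raw crime records into marker dicts for a map overlay."""
    markers = []
    for record in crime_records:
        markers.append({
            "lat": record["latitude"],
            "lng": record["longitude"],
            "label": f"{record['type']} on {record['block']}",
        })
    return markers

# Sample records, standing in for a feed pulled from a public site.
records = [
    {"latitude": 41.88, "longitude": -87.63,
     "type": "Theft", "block": "100 N State St"},
    {"latitude": 41.90, "longitude": -87.62,
     "type": "Burglary", "block": "800 N Michigan Ave"},
]

markers = to_map_markers(records)
print(markers[0]["label"])  # Theft on 100 N State St
```

The whole application is this thin: everything of substance – the data, the map tiles, the hosting of both – belongs to someone else.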

Many websites use Internet application APIs to enrich their content – pulling in news from other sites via RSS for example, or showing a gallery of photographs from Flickr – however Chicago Crime and its contemporaries could not exist or operate without the data and APIs of the services they depend on. Applications such as Flickr, Basecamp, del.icio.us etc are increasingly finding themselves the building blocks of other people’s applications. Topographically we can imagine mashup apps like Chicago Crime, Flickr game “Fastr”:http://randomchaos.com/games/fastr/, or the London Traffic Cam site “gmaptrack(gmaptrack – displays London traffic cam feeds onto a Google Map of London)”:http://www.gmaptrack.com/map/locations/24/44 operating in a tier above the services on which they are dependent – with tier one applications such as Flickr, Google Maps, del.icio.us et al supporting the creation of newer tier two mashup applications.

Theoretically, as long as tier one and two applications continue to deliver value reliably, there is no reason why the trend cannot continue. One wonders when the first tier three application – a mashup of mashups – will appear. Of course the growth of such applications has some major potential stumbling blocks to overcome – key among them the lack of any proper service level agreements, the possibility of tier one suppliers monetizing their APIs, and copyright concerns when user data from one application makes its way into another. However at least one major player is convinced that the trend is here to stay.

h3. Enter Amazon – stage left.

With little fanfare, “Amazon’s S3”:http://www.amazon.com/gp/browse.html/ref=sc_fe_c_1_3435361_1/102-1195441-8120129?%5Fencoding=UTF8&node=16427261&no=3435361&me=A36L942TSJ2AJA webservice was “announced(The Amazon Web Services blog)”:http://aws.typepad.com/aws/2006/03/amazon_s3.html on the 14th of March. The numbers looked good – 15 cents per GB for one month’s storage, and 20 cents per GB of bandwidth use. On the face of it, online storage solutions such as personal favourite “StrongSpace”:http://www.strongspace.com/ or Carson Systems’ “DropSend”:http://www.dropsend.com/ looked hugely overpriced. However once you look past the headline-grabbing figures, you realise that something is missing from the typical Web 2.0-era announcement. No, I don’t mean the lack of a private, invite-only beta. Neither do I mean the lack of a Web UI with gradient fills and rounded corners. Amazon’s S3 has no user interface at all, because it isn’t an application.
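At those headline rates, a rough monthly bill is easy to estimate. A back-of-the-envelope sketch, using only the two prices quoted above (real S3 billing has more dimensions than this, which are ignored here):

```python
# Rough S3 cost estimate using only the launch prices quoted above:
# $0.15 per GB-month of storage, $0.20 per GB of bandwidth.
# Any other billing dimensions are deliberately ignored.

STORAGE_PER_GB_MONTH = 0.15
BANDWIDTH_PER_GB = 0.20

def monthly_cost(storage_gb, transfer_gb):
    """Estimated monthly bill in dollars for stored and transferred GB."""
    return storage_gb * STORAGE_PER_GB_MONTH + transfer_gb * BANDWIDTH_PER_GB

# e.g. 10 GB stored and 5 GB served in a month:
print(f"${monthly_cost(10, 5):.2f}")  # $2.50
```

A couple of dollars a month for a backed-up, pay-as-you-go store is the kind of number that makes flat-rate storage plans look expensive.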

When you get down to it, Amazon S3 is simply a large, distributed hash map with an API. Unless people build applications on top of it, it’s useless. Amazon clearly expects this will happen – and what’s more, they expect people will pay for it too.
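The “distributed hash map” framing is easy to make concrete. Here is a toy in-memory model of the S3 data model – opaque values addressed by (bucket, key), and nothing else. The class and method names are my own invention for illustration; this models the concept, not Amazon’s actual API, and the real service layers distribution, durability and HTTP on top.

```python
# Toy in-memory model of the S3 data model: a hash map of hash maps,
# addressing opaque byte values by (bucket, key). Purely illustrative -
# the real service adds distribution, durability, access control and a
# REST interface over HTTP.

class ToyS3:
    def __init__(self):
        self._buckets = {}  # bucket name -> {key -> bytes}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put(self, bucket, key, value: bytes):
        # Overwrites silently, like a plain dict assignment.
        self._buckets[bucket][key] = value

    def get(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

store = ToyS3()
store.create_bucket("photos")
store.put("photos", "2006/cat.jpg", b"\xff\xd8...")
print(store.get("photos", "2006/cat.jpg"))
```

Strip away the scale and that dict-of-dicts really is the whole programming model – which is exactly why it only becomes useful once applications sit on top of it.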

Amazon’s clout, and the fact that paying for a service tends to engender feelings of security, mean that people are far more likely to trust S3 with their data than some new startup. And with costs that low, launching with S3 as a backing store rather than going to the expense of writing and hosting your own is certainly attractive.

If Flickr and its ilk can be considered tier one applications, then surely Amazon S3 must be considered a tier zero service. Where Chicago Crime exists only because of other services, Amazon S3 exists only _for_ other services and applications. Only time will tell if it’s successful – I’ve seen attempts at persistence services fail within big business – but if Amazon are willing to try it out, others may follow.