How Google is putting us back into the Stone Age

Yeah, I know – what a linkbait title. If that’s what it takes these days to get visitors and Diggs, then so be it. Also, just to forewarn: as you read this, you might find that a better title for this post would have been “How Web 2.0 is putting us back into the Stone Age”, since many of these thoughts generalize to Web 2.0 companies as a whole. I used Google in the title mainly because they are the big daddy of the web world, the model many Web 2.0 companies strive to be like, the one to beat. Plus, the title just looks and sounds cooler with ‘Google’ in it.

Here’s the main problem I have with web applications coming from companies like Google: About 2 years ago I bought a pretty good box – fairly standard and cheap these days – 2 gigs of RAM, dual core AMD-64 3400+’s, a 250 gig hard drive, an nVidia 6600 GT PCI Express, etc. It’s a beast. However, because I don’t play games, its potential isn’t being utilized – not even close. Most of the applications I use are web-based, mainly because the web provides a medium which is cross-platform (all machines have a web browser), synchronized (since the data is stored server-side, I can access it from anywhere – the library, a friend’s computer, my laptop) and light on my machine (no need to install anything, waste disk space, or risk security issues). The web UI experience for the most part isn’t too bad either – in fact, I find that the browser’s restrictions force many UIs to be far simpler and easier to use. To me, the benefits mentioned above clearly compensate for any UI deficiencies. Unfortunately, this doesn’t mean that Web 2.0 is innovating the user’s experience. Visualizing data – search results, semantic networks, social networks, Excel data sheets – is still very primitive, and a lot could be done to improve this experience by taking advantage of the user’s hardware.

My machine, and most likely yours, is very powerful and underutilized. For instance, my graphics card has tons of cores. We live in an age where GPUs like mine can sort terabytes of data faster than a top-of-the-line Xeon-based workstation (refer to Jim Gray’s GPUTeraSort paper). For sorting, which is typically the bottleneck in database query plans and MapReduce jobs, it’s all about I/O – or in this case, how fast you can move memory around (for example, a bitonic sort network iteratively swaps the lows and the highs). Say you call memcpy in your C program on a $6,000 Xeon machine. The memory bandwidth is about 4 GB/s. Do the equivalent on a $200 graphics co-processor and you get about 50 GB/s. Holy smokes!

I know I’m getting off-topic here, but why is it so much faster on a GPU? Well, in CPU world, memory access can be quite slow. You have these almost random jumps in memory, which can result in expensive TLB/cache misses, page faults, etc. You also have context switching for multi-processing. Lots of overhead going on there. Now compare this with a GPU, which streams memory almost directly to tons of cores. The cores on a GPU are fairly cheap, dumb processing units compared to the cores found in a CPU. But the GPU uses hundreds of these cores, in parallel, to drastically speed up the overall processing. This, coupled with its specialized memory architecture, results in amazing bandwidth. Also, interestingly, since these cores are so cheap and simple, there’s a lot of room for improvement: at the current rate, GPU advancements are occurring 3-4x faster than Moore’s law for CPUs. Additionally, the graphical experience is near real-life quality – current APIs let developers render 3D triangles directly on the video card! This is some amazing hardware, folks. GPUs – and more generally this whole notion of co-processing to optimize for operations that lag on CPUs (memory bandwidth, I/O) – promise to make future computers faster than ever.
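
To make the memcpy comparison concrete, here’s a minimal sketch in plain C of the kind of host-side bandwidth measurement those numbers refer to. The buffer size, iteration count, and timing approach are arbitrary choices of mine for illustration – they aren’t taken from the GPUTeraSort paper – and the GPU-side equivalent needs vendor-specific APIs, so it isn’t shown.

```c
/* Minimal sketch: measure host memcpy bandwidth as a rough point of
 * comparison with the GPU figures cited above. Sizes and iteration
 * counts are arbitrary illustration values, not from the paper. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t size = 256 * 1024 * 1024;   /* 256 MB per copy */
    const int iters = 10;
    char *src = malloc(size);
    char *dst = malloc(size);
    if (!src || !dst) return 1;

    memset(src, 1, size);                     /* touch pages before timing */
    memset(dst, 0, size);

    clock_t start = clock();                  /* ballpark timing is enough here */
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, size);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    double gb_copied = (double)size * iters / (1024.0 * 1024.0 * 1024.0);
    printf("memcpy bandwidth: %.2f GB/s\n", gb_copied / secs);

    free(src);
    free(dst);
    return 0;
}
```

On a Xeon box like the one described above, a loop like this is what lands you in the single-digit GB/s range the post mentions; the GPU’s much higher number comes from its wider, specialized memory hardware.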

OK, so the basic story here is that our computers are really powerful machines. The web world doesn’t take advantage of this, and considering how much time we spend there, it’s an unfortunate waste of computing potential. Because of this, I feel we are losing an appreciation for our computers’ capabilities. For example, when my friend first started using Gmail, he was non-stop clicking on the ‘Invite a friend’ drop-down. He couldn’t believe how the page could change without a browser refresh. Although this is quite an extreme example, I’ve seen the same phenomenon with many users on other websites. IMHO, this is completely pathetic, especially when considering how powerful client-end applications can be in comparison.

Again, I’m not against web-based applications. I love Gmail, Google Maps, Reader, etc. However, there are applications which I do not think should be web-based. An example of this is YouOS, an OS accessible through the web browser. I mean, there’s some potential here, but the way it’s currently implemented is very limiting and unnecessary.

To me, people are developing web services with the mindset ‘can it hurt?’, when I think a better mantra is ‘will it advance computing and communication?’. Here’s the big Web 2.0 problem: just because you can make something Web 2.0-ish doesn’t mean you should. I think of this along the lines of Turing completeness, a notion in computer science for determining whether a system can express any computation. Basically, as long as you can process an input, store state, and return an output (i.e. a potentially stateful function), you can perform any computation. Now, web pages provide an input form, perform calculations server-side, and can generate output pages – enough to do anything according to this paradigm, but with extreme limitations on visualization and performance (as with games). AJAX makes web views richer, but not only is it a terribly hacked-up programming model, it also seems to compel developers to convert previously successful client-end applications into web-based services. Sometimes this makes sense from an end-user perspective, but it often ends up dumbing down the user experience.
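
To illustrate the ‘input, state, output’ point, here’s a minimal sketch in C of the request cycle a web page boils down to. The session_state struct and handle_request function are hypothetical names introduced purely for illustration – they don’t correspond to any real framework – but they show the shape: each page load feeds an input into a stateful function and gets an output “page” back, which is enough for any computation yet says nothing about how richly the result is presented.

```c
/* Minimal sketch of the "input -> state -> output" cycle described above.
 * session_state and handle_request are hypothetical illustration names. */
#include <stdio.h>

typedef struct {
    long total;          /* server-side state that persists across requests */
} session_state;

/* Process one "form submission": read the input, update state, return output. */
static long handle_request(session_state *s, long input) {
    s->total += input;   /* any computation could happen here */
    return s->total;     /* the rendered "output page" */
}

int main(void) {
    session_state s = { 0 };
    long inputs[] = { 3, 7, 12 };              /* three successive page loads */
    for (int i = 0; i < 3; i++)
        printf("request %d -> output %ld\n", i + 1,
               handle_request(&s, inputs[i]));
    return 0;
}
```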

We have amazing hardware that’s not being leveraged in web-based services. Browsers provide an emulation of a real application. However, given the proliferation of AJAX Web 2.0 services, we’re starting to see applications appear only in the browser and not on the client. I think this current architectural view is unfortunate, because what I see in a browser is typically static content – something whose essence I could capture with a camera shot. In some sense, Web 2.0 is a surreal hack on what the real online experience should be.

I feel we really deserve truly rich applications that deliver ‘Minority Report’ style interfaces and utilize the client’s hardware. Movies made before the 1970s predicted so much more for the user experience we’d have by now. It’s up to us, the end-consumers, to encourage innovation in this space. It’s up to us, the developers, to build killer applications that require tapping into a computer’s powerful hardware. The more we hype up Web 2.0 and dumbed-down webpage experiences, the more website-based services we get – and consequently, less innovation in hardware-driven UIs.

But there’s hope. I think there exists a fair compromise between client-end applications and server-side web services. The internet is getting faster, and the browser plus Flash are being fine-tuned to make better use of a computer’s resources. Soon, the internet will be well-suited for thin-client computing. A great example of this already exists today, and I’m sure many of you have used it: Google Earth. It’s a client-end application – taking advantage of the computer’s graphics and processing power to make the user feel like he/she is traveling in and out of space – while also being a server-side service, since it gathers updated geographical data from the web. The only problem is there’s no preexisting, cross-platform layer for building applications like this. How do we make these services without forcing the user through an intrusive, slow installation? How do we make them run across different platforms? Personally, I think Microsoft completely missed the boat here with .NET. If MS had recognized the web phenomenon early on, they could have built this layer into Vista to encourage developers to create these rich thin-client applications, while also promoting Vista. I have no reason to change my OS – this could have been my reason! Even if the layer were cross-platform, better performance on Vista would still be a reason to prefer it (providing some business case). Instead, they treated .NET as a Java-like replacement for MFC, thereby forcing developers to resort to building their cross-platform, no-installation-required services through AJAX and Flash.
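
As a rough sketch of the Google Earth style split described above – the server supplies fresh data while the client burns its own graphics and processing power on presentation – here’s the shape of such a thin-client loop in C. fetch_region and render_region are hypothetical stand-ins, stubbed out so the sketch compiles; they are not any real Google Earth or rendering API.

```c
/* Schematic sketch of the thin-client split: network supplies data,
 * the local machine does the heavy rendering. All names are hypothetical. */
#include <stdio.h>

typedef struct { double lat, lon; int zoom; } view;
typedef struct { int bytes; } region_data;

/* Server side of the split: stream updated geographic data on demand. */
static region_data fetch_region(view v) {
    region_data d = { v.zoom * 1024 };    /* pretend payload from the server */
    return d;
}

/* Client side of the split: spend local CPU/GPU cycles on presentation. */
static void render_region(view v, region_data d) {
    printf("render %.2f,%.2f @ zoom %d (%d bytes, drawn locally)\n",
           v.lat, v.lon, v.zoom, d.bytes);
}

int main(void) {
    view v = { 37.42, -122.08, 1 };
    for (; v.zoom <= 3; v.zoom++)         /* the user "flies in" */
        render_region(v, fetch_region(v));
    return 0;
}
```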

Now, even if this layer existed, enabling developers to build and instantly deploy Google Earth style applications in a cross-platform manner, there would be security concerns. I mean, one could make the case that ActiveX attempted to do this – allowing developers to run arbitrary code on the client’s machine. Unfortunately, this led to numerous viruses. Security violations and spyware scare(d) all of us – so much so that we now do traditionally client-end functions through a dumbed-down web browser interface. But I think we’ve made some serious inroads in security since then. The fact that we even recognize security in current development makes us readily prepared to support such a platform. I am confident that the potential security issues can be tackled.

To make a final point, I think we all really need higher expectations on the user experience front. We need to develop killer applications that push the limits of our hardware – to promote innovation and progress. We’re currently at a standstill, in my opinion. This isn’t how the internet should be. This is not how I envisioned the future five years ago. We can do better. We can build richer applications. But to do this, we as consumers must demand it, so that companies have a business case to pursue it further. We need developers to come up with innovative ways of using hardware to visualize the large amounts of data being generated – thereby delivering long-awaited killer applications for our idle computers. Let’s take our futuristic dreams and finally translate them into our present reality.


7 thoughts on “How Google is putting us back into the Stone Age”

  1. I agree. I think the emerging solution to this might lie in Adobe Apollo and Microsoft WPF/E (Silverlight), which encourage creating web services in a desktop application context. I’m not sure, however, how effective they are at utilizing the full resources of your computer.

  2. Talking about industry standards, my computer has 256 MB of RAM and I bought it 3 months ago – well, it probably has everything to do with me being a non-consumerist (lol) and having had to buy some labware etc.
    There may be lots of considerations like that.
    Paraphrasing Google’s mission statement – making all applications universally available is what should be pushed forward IMO.
    I’m not even sure about fancy interfaces – there have been periods of time when I had to rely on a cell phone browser for my web needs, and frankly the idea that I can access the sum of human knowledge virtually everywhere, provided I manage to interpret distorted text-only pages correctly, seems very futuristic to me.
    IMO accessibility is a major consideration.
    The Google Earth model is a nice compromise, although it’s a wee bit tremulous on my computer lol.
    On the other hand I think Joost could force me to upgrade something.
    btw, you can install Folding@home to make use of idle computing resources for scientific purposes – GPUs like that contribute greatly to protein folding.

  3. I liked this post. I’ve been asking “do we want to build the Google Maps or the Google Earth for this space?” for a while now, as I think that analogy is a great illustration of the points you make here.

  4. So this whole post comes down to: “How the web fails to use your hardware”

    That’s a pretty misleading title and even the title you suggest afterwards is misleading. I think you should have made a post about the possibilities of using every resource the computer has to improve web applications.

  5. This post title’s appropriateness, which could be made the subject of a completely separate discussion, is quite irrelevant, simply because “never judge a book by its cover” applies on the Net as well. The title was catchy, and whether or not you think it suits the content – you (all of you) clicked the link and read the text – so shut up.

    Now, moving on to the real point made by the author. I completely agree with Vik in that we’ve spent the last 10 years ramping up our CPUs, GPUs (and other components) in a bid to facilitate demanding applications. The reality is that the general consumer does not typically need 3 GHz or 256 MB of video RAM.

    Sure, we have a few savvy consumers (let’s exclude the techies because, well, we can all find a way to use up 100% CPU or RAM if the challenge arises, even from Bash or Cmd) who not only manage to rotate photos but also encode videos and watch DVDs, thus successfully managing to utilise a PC like a VCR.

    Most applications today are still plain old windows with static buttons and drop-down menus, and their expected output is not much more spectacular than a well crafted colourful Excel spreadsheet filled with sorted grids and meaningful numbers.

    Operating system developers are of course pushing the development of visually rich environments, but this is happening way too slowly and probably for the wrong reasons (flashy ads are not why one should develop a good-looking OS). Try convincing a small company (say 10 PCs) or even a savvy home user to upgrade to Vista. With XP SP2, the free Windows Media Encoder and a bunch of other tools that they’ve learnt how to use, the users today have all they need (or think they need). XP has had more of its bugs fixed by now, Vista has over 100 patches waiting to be zipped into SP1 – and users feel the grown stability of their existing OS. In contrast, an OS change means painful settings backups, document backups, reset application preferences, reinstalling of applications, and most of all the fear of having to learn new ways of doing things.

    What we need is a near-stateless operating system, and near-stateless applications. That’s right, the OS should be an interface between the program you want to run and the hardware that you have. What kind of an interface is it, if it’s irreplaceable? If you say this is technically impossible, I say – look at VMware, Xen, Virtual PC. An OS, as far as the user is concerned, is the location of his menus and the colour of the login screen – not much else. We should be able to write applications today that run on any PC, full stop.

    From a technical system administration standpoint this does present new challenges, but also new solutions to existing problems. Let’s not go into this field at this time, not because it’s not worthy, but because this post is not about this. I will be happy to discuss the implications of a stateless OS on large-scale system administration in a separate blog/discussion.

    Coming back to the original point, yes, these applications will need to take care of visual presentation and other bells and whistles themselves. But so they should – each application should try to be more accessible, friendlier, and more intuitive. Why should the application developer stop at the thought that “the OS will take care of all that stuff”?

    Only then will we have programs that obey our speech, follow our hand/glove (Minority Report reference), and present results (whatever they are) the best possible way we would want them to be presented, considering the available hardware.

    Too difficult? Well, it is happening on the Web. This is where ‘semantic web design’ originated. What does it mean? It means “present the data in the basic form, but if the environment supports JS – use it; if it supports animations – use them; if it supports asynchronous data calls – use them; if it supports fancy visuals driven by Flash – use them!” But even here we feel the lack of interaction and compatibility – why is it that we still need to use Flash to let the user see a pretty font; why is it that web developers have to write hacks for compatibility with the browser?

    It is time. Time for Microsoft to include the Square Root button in the Scientific mode of Windows Calculator. Time for Linux, an environment with so much freedom and potential, to have more menus and options than the Windows Control Panel. Finally, it is time for Apple, the now permanent runner-up to Windows, to contribute to the development of a stateless OS for just one reason – so more people can plug MacOS into their PCs (DVD, memory stick, a wireless download – who cares how – make it happen – invent).

  6. As a developer/designer I like the state of web applications as they are now. I like that the web is “lightweight”, because as things stand today, web applications and services are too primitive (as you pointed out) to utilize the power of my computer. But I think what you haven’t covered is that while many people don’t utilize much of their processing power, some of us do, and because of that, a sizable increase in client-side processing for web apps and services would essentially push the envelope of high-end computing up (what I have now, plus extra power to accommodate the new processing requirements on top of my existing ones).

    That aside, the web itself is not ready for the next stage of development, where we move into more stateless applications. Google Docs/Spreadsheets is a good start, and AJAX is a great tool, but JavaScript was never designed for pushing this kind of information, and current browsers aren’t really suited to data-intensive AJAX functions either. Before we can build advanced web applications and services that treat the client as more than a smart terminal and actually utilize its data-processing power, the frameworks used to deliver all this over the web need to be updated with smart data delivery in mind.

    Presently, I’m in favour of using client-side applications that handle external data themselves, in whatever proprietary way the developers felt was best suited, because that is currently the best and most efficient approach, and there is no universal framework that can cater to everyone’s needs. I’m sure one will come about in time, but it’s some way off yet.

  7. I’m late to the party, but look into WPF. They’re starting to get to where you (and I) want UI to be, but they’re nowhere near there yet. It is, however, a step in the right direction. Building a WPF interface to Flickr would make a LOT of jaws hit the floor….
