Wednesday, June 22, 2005

Search: Depth, breadth and trust

This post - Yahoo Search vs. Google and Technorati: Link Counts and Analysis (by Jeremy Zawodny) - prompted some thoughts about how search engines are moving from a nice-to-have to a critical part of our business and personal lives. With that comes a host of issues, and it makes choosing a search engine based on the number of pages it indexes and how fast it re-indexes them a secondary concern.

The most important metrics are going to become whom I can trust most and who is least prone to manipulation, either by inside advertisers or through external manipulation by sophisticated (and some not so sophisticated) linking schemes. How trust is propagated through a search engine is going to become the key differentiator (IMHO) over the next several years, not the index size or refresh rate.

Tuesday, June 21, 2005

Tools and Artisans

This from Doug Tidwell: "They're afraid that if they open up their business processes, their customers will realize just how little value they add" via Mark Baker.

My feeling is that many organizations confuse having cool tools/software with having people who know how to use the output effectively. The goal of business processes is to deliver information to decision makers. The difference between a great artisan and the rest of us is not the tools they use; it is their native and acquired skills. The same is true in any organization - great people succeed no matter what; good tools merely help make their lives easier.

And to Doug's other point - no, The Specials were the coolest ;-)

Monday, June 20, 2005

Interesting technique for predicting trends

This from Tim O'Reilly via Tim Bray's ongoing - The Java + Open Source Sweet Spot. When books were the primary vehicle for knowledge transfer I used to measure success/adoption by shelf inches at Stacy's (a great technical bookstore in San Francisco). Tim O'Reilly now has a more scientific approach - and please increase the graph size, I need a magnifying glass to read the data.

However I am not sure this tells the full story now. I have noticed a substantial drop in my book buying habit, much to the relief of my wife, who has had to navigate the piles around the house. It is not that I read less, but rather that the internet, not books, has become my source of up-to-date information. I have mixed feelings about this, as I enjoy reading from a book, especially sitting out on a warm sunny day; laptops just do not cut it for reading outside or on the train. Meanwhile the information junkie in me loves the immediacy of the internet and the ability to research any problem quickly and easily. Hopefully tablet machines will get to the point where they are readable in all environments.

Wednesday, June 15, 2005

Podtech

I did a podcast last week with John Furrier of PodTech - a fun way of having a live discussion: Podtech.net InfoTalk: A Podcasting Radio Show - "Fresh voices of Silicon Valley, Technology, and Media". He is looking for other CTOs for his series - so drop him a line.

Tuesday, June 14, 2005

Loosely coupled the simpler the better

Matthew Adams makes a clear case for not prescribing the end point technology. I would take this further: keep the definition of the end point to a minimum. Not prescribing the end point keeps the service loosely coupled, which is the objective, yes?

Monday, June 13, 2005

Peer to peer or client server

Reading a little on Ruby over the weekend I was struck by how artificial we have made the boundary between clients and servers (note: I have very limited knowledge of Ruby, so excuse any inaccuracies). Ruby ships with a built-in HTTP server as part of its standard distribution. This seemed very sensible - most other language environments make HTTP servers/interfaces a separate part of the system.

This blurring of the lines between client and server seems to be the right direction. I would include AJAX in this: both the client and the server can send and receive events, and the only real difference is who is in control (machine or human) - does there need to be any other difference? It should be all about breaking down the barriers!
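
The same idea can be sketched in Java rather than Ruby: one ordinary program that is simultaneously an HTTP server and an HTTP client. This is only an illustrative sketch using the lightweight HTTP server bundled with later JDKs (com.sun.net.httpserver); the class name, port and path are invented.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.InetSocketAddress;
    import java.net.URL;

    public class PeerNode {
        public static void main(String[] args) throws IOException {
            // Act as a server: listen on a port and answer HTTP requests.
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            server.createContext("/ping", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    byte[] body = "<pong/>".getBytes("UTF-8");
                    exchange.getResponseHeaders().set("Content-Type", "text/xml; charset=UTF-8");
                    exchange.sendResponseHeaders(200, body.length);
                    OutputStream out = exchange.getResponseBody();
                    out.write(body);
                    out.close();
                }
            });
            server.start();

            // Act as a client from the same process: call our own endpoint.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8000/ping").openConnection();
            System.out.println("Response code: " + conn.getResponseCode());
            server.stop(0);
        }
    }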

Monday, June 06, 2005

The power of the web

The simple power of the web was brought home to me this week by my six-year-old. He had figured out Google Image search with a friend and proceeded to announce that Google was his new favorite toy. We spent all weekend helping him spell words to type into Google - he even helped his younger sister find Barbie pictures. It is so easy a six-year-old can use it - I wish all technology were that easy!

The amount of raw power in the hands of a six-year-old is something to consider. How will their brains develop when they have access to limitless information, and what new skills will they need to develop to process it and determine fact from fiction?

Thursday, June 02, 2005

Economics for inside and outside

The drivers for using services, and whether they should be inside or outside the enterprise, are partially rooted in the economics of fixed and variable costs. The fixed costs we all have are for our workstations; this cost has dropped dramatically from when it was a significant portion of an employee's annual salary to the point where it is less than some monthly health insurance payments. Every employee now needs a workstation, an operating system, office software, and a communications suite - these are fixed costs (this also includes support and training). Whether you use Linux or Windows, for large companies the costs are about the same, as the biggest variable is usually support and training. A hard-working CFO/CIO can shave maybe a few percent off these costs, but in general they follow a fairly fixed price model.

Enterprise software and services, on the other hand, have the potential both for significant cost and for significant competitive advantage. The ability to turn these from fixed capital costs to a variable utility model changes the way a company spends money. It shifts the capital expenses to the service provider and reduces large multi-year projects to simple connections. This has been touched on by Stefan Tilkov in The Same for Both Worlds and by Tim Ewald looking at why inside and outside are different. I do not think they should be different; the fact that they largely are today is because the inside has not caught up with the fast evolution of the outside. The internet is much more loosely coupled than the enterprise and hence evolves faster. The goal of CIOs should be to move the internet model in house so that the enterprise becomes truly loosely coupled and can take advantage of services.

The other point is that the wall between the enterprise and the internet is becoming increasingly porous and increasingly irrelevant as services are delivered faster and better than enterprise software. It is no longer a competitive advantage to have enterprise software installed and managed internally; it is both an unnecessary capital cost and an impediment to flexibility and to moving the enterprise architecture to a variable cost model that tracks with corporate performance.

Wednesday, June 01, 2005

Desktop software and services

In Dictatorship of the Minorities Ulrich Drepper argues the point about the distractions caused by porting to minority platforms. In some ways this mirrors my argument in Open Source - Software or Services. However, after thinking about the issue for a while, I see there being two primary classes of software: software that runs on my desktop and services that run elsewhere. The elsewhere is becoming the major change agent, as I do not (and should not) need to be concerned about where the services run, providing they have an acceptable SLA (this includes security, latency, scalability, availability, etc.) for a reasonable cost.

While I am not sure the "dictatorship of the minorities" should or will be solved on the desktop, it can certainly be solved by services, as they only need to run on a single platform. Putting it another way: the value of desktop software (and I include server software in this group) is measured by the breadth of platform support and deployed seats, and by how much it costs the owner of these seats to manage and support them, while the value of services is measured by their SLA - and it is someone else's problem to manage and support them.

Open source software has tremendous opportunity in the service model, as there is no "Dictatorship of the Minorities" and the goal of writing great software can be supported by an economic model based on SLAs rather than on supporting software. In a recent article, The Open Source Heretic, Larry McVoy has a great quote:

"One problem with the services model is that it is based on the idea that you are giving customers crap--because if you give them software that works, what is the point of service?"

Infrastructure is necessary but does not differentiate a company; whether they run Linux or Windows will not materially impact their profitability, as they need a certain number of desktops and servers to run their business either way.

However, subscribing to and using innovative services wisely can transform a company in terms of significant cost reductions, dramatic increases in agility, and the ability to quickly adopt innovative technologies that will differentiate it.

Friday, May 27, 2005

there.is.only.xul

I have spent the last few days hacking a Firefox extension. While the platform is pretty cool and extensible, the documentation is sparse to say the least. Even searching the web, the amount of information is very thin. This is probably an indication of the platform's maturity.
However the number of extensions is growing (a very good sign), so learning by reading code demonstrates a key to the success of open source - building on the shoulders of giants.
With a little effort I was able to hack the webdeveloper extension and add a few new features that some of our design team needed. Many thanks to Chris Pederick for all his hard work and for making his tools available.

It is really nice to be able to build on a full HTML/CSS rendering engine. I look forward to seeing a lot of great applications come from the platform. Once full E4X support arrives with 1.1 it should be even better.

Saturday, May 21, 2005

Open Source - Software or Services

The adoption and acceptance of open source by the mainstream is accelerating; see Investors to commercialize open source and IBM buys open-source Java outfit Gluecode. These are major chasm-crossing events and should be celebrated - congratulations to all involved. However, I question whether this is the best economic model for all open source projects. In fact I will argue that if an open source application project is a technical success, it will inherently undermine the current economic model, which is based around paying for support.

The current thesis is that users of open source software will pay for support, bug fixes, training, consulting and other soft services. The fallacy of this model is that open source software has moved into the mainstream because a vacuum exists: existing software is not meeting the needs of the users of software. I am not an economist, but my instincts tell me that the need for open source software such as Linux, Apache, MySQL, JBoss, etc. is caused by economic forces, not by any altruism within corporations. Successful open source projects succeed because they meet the market needs, have higher quality, and are easier to use and maintain than closed solutions.

I would argue strongly that the most efficient companies in the next decade will be those that minimize internal IT resources by maximizing the use of external services. This is a demonstrable trend - we have moved from internal data centers to shared data centers to managed applications within the last 8 years. The economics of shared resources are just too powerful to ignore, except for the largest companies. Why does anyone want to hire a set of DBAs to manage a customer database when they can get it done for a few hundred dollars a month - and it scales linearly with usage? Why does anyone want to buy, install and manage a set of servers when they can have someone do it for a monthly fee - adding, changing and deleting based not on educated guesses but on actual usage? In other words, the utility model is here and it is the economic model for the future enterprise - the most efficient enterprises are the ones that will manage it best.

The key differentiation that I make is between open source solutions and infrastructure. Infrastructure is operating systems, web servers, etc. Solutions can be anything from an SFA application to an XSLT transformation engine. The models for the two are orthogonal - infrastructure has to be everywhere to power the grid (like Red Hat and Apache), but applications like openCRX or XOOPS, to pick a couple from SourceForge, should not require deployment - they should just be usable. A recent posting from Tim Bray points out that half of IBM's income comes from consulting - the rest is from sales of hardware and software, with software being only about 1/6 of the total. To put it another way, roughly 25% of the cost is license fees and the bulk of the cost, 75%, is from installing it and getting it to work. No wonder there is so much shelfware around - a lot cheaper to leave it on the shelf. (Note: this is a somewhat simplistic argument, but experience has shown me it is essentially correct.)

Returning to open source - the drive for open source projects (IMHO) is to create software solutions that, due to their open nature, benefit from continuous peer review and a Darwinian evolutionary process. The current approach to deriving economic benefit is inherently the antithesis of open source. The drive in an open source project is to make it more accessible and, through peer review, high quality - when these goals are achieved it becomes widely adopted. This inherently (and correctly) diminishes the value of and need for support services. Putting it another way, the most successful open source projects are those that are most accessible and have the fewest bugs, and hence the least need for support. The current support model means that for open source application projects to succeed economically, they must deliver, and continue to deliver, software that requires support.

There is another way, however, that combines the best of both worlds: it provides companies with a clear economic model for paying for services and enhances the ability of the open source community to deliver high quality software and, if appropriate, reap an economic benefit from their labors. For the majority of open source projects the model should be services, not software. By services I mean applications that run on a managed grid and can be accessed by anyone. The applications are accessible as individual services or collections of services; this attacks the largest area of costs that enterprises face.

This approach provides a few interesting benefits:
  1. Drives developers to solve business problems rather than build point solutions (how many POP services are listed on SourceForge, and why do we need so many? Some of this is the Darwinian process at work, but the accessibility of all these projects is an issue).
  2. Provides an economic model for developers (and potentially investors) to share in the benefits the work is delivering, and motivates the correct behavior - easily accessible, bug-free software has higher value for both sides: the consumer (I will pay more for an easy to use, robust service) and the open source developers, who want to deliver accessible, highly secure and robust services.
  3. The open source model still works - I can look at the code, submit fixes and accelerate the development as needed.
  4. There is a reason to pay ongoing fees for the same high quality solution - the model is now about usage and the service level agreement (SLA). The higher the demands on the SLA, the higher the potential fee. I believe this aligns the development and user communities better than any other model.
  5. The development effort is now focused on a single platform and not on how many platforms the software can be made to work on. It is focused on the solution, not on how to deliver the solution onto multiple platforms.
This alignment of the economic model with the drives of the open source community is critical for the long-term success of this model. Improving the quality and accessibility of applications while removing costs from the IT infrastructure is going to make enterprises more efficient and hence more profitable. Without this alignment the current wave of investments in open source projects will just be more VC roadkill, as they are attacking the small (25%) problem, not the big (75%) problem. Instead of changing the model they are thinly disguised systems integrators that have experience in a specific set of technologies.

Friday, May 20, 2005

Dynamic Languages

In Phil Windley's Technometria | The Continuing March of Dynamic Languages, Phil suggests that Scheme is a great language for dynamic web systems. I have a fond spot for Scheme, as way back I worked on a system that used Scheme as a dataflow language for creating analysis tools. It was based on some cool research a bunch of guys at MIT had done. It was extremely flexible and we could do anything with it. Coupled with XML I think it is a very interesting avenue to explore. I look forward to hearing more about Phil's adventures.

Thursday, May 19, 2005

A small understatement from Stefan

I just laughed out loud when I read footnote [5] in Stefan's posting RPC-style Web Services. The only thing that compares to (and exceeds) the complexity of CORBA is OLE2.

Wednesday, May 18, 2005

WSDL 2.0 - First impressions

After reading Dave Orchard's Blog: WITW WSDL 2.0 HTTP Binding I downloaded some of the WSDL 2.0 specs to read on the train. I am pleasantly surprised: compared to WSDL 1.1 it appears much more accessible. Whether my positive impression is due to too much reading of WSDL 1.1, I am not sure. Mark Baker as always has a very pithy comment; that aside, all the people involved should be congratulated for their hard work.
Congratulations aside, I question the idea of a "single SDL to bind them all". So I ask the question: why do we need a single SDL, and when would I need to use both? It is certainly a nice idea, but as technology evolves, different areas typically move at different rates and evolve in different ways. Linking them usually tends to be a bad idea.

Monday, May 16, 2005

Simplicity - Just ask why

One of the most powerful questions I ask myself (especially) and others in reviews is why something is in a project. As engineers we have a tendency to like shiny new things, elegant abstractions, etc.

As is pointed out in Quoderat » Blog Archive » Burden of Proof, injecting simplicity is sometimes about just asking why something should be added, why there is a need for a layer of abstraction, or why the solution needs to be completely generic. In many cases there are very good reasons to add abstraction, etc.

We need to always challenge ourselves to see if we are solving a real problem - so just ask why.

Thursday, May 12, 2005

46,213,000,000.00 things to fix

This is classic: ongoing · $46,213,000,000.00. Several years ago, evaluating J2EE servers, I found this to be so true: downloading, installing and running a test program took under 10 minutes for JBoss, while for WebSphere the download was 14 times as big and getting it running took days (i.e. call IGS). Is WebSphere so much more capable than JBoss? I doubt it, and given IBM's recent purchase of Gluecode they might have seen some of the light.

Tuesday, May 10, 2005

Dissolving boundaries - inside and outside

Phil Wainewright comments on dissolving boundaries in Loosely Coupled | Dissolving boundaries | May 6th 2005 2:12pm. I think we need to make a clear differentiation between inside and outside boundaries.

Inside the enterprise the CIO has some degree of control over the framework between the silos and can manage the necessary change to define the fabric to achieve a loosely coupled services architecture.

Outside the enterprise the CIO has little control over the fabric (though he or she has economic power). Here, though, is where there are compelling economic benefits for a CIO to use external services (see: Why Integration as a service?) - or consider the economics of Salesforce versus Siebel. The long-term winners here are going to be the providers of business services that destroy the previous economic models. The tools vendors need to change their POV from being the center of the universe (or the single religion) and become enablers of wide-scale service integration that provides dramatic economic benefits to the enterprise.

Resedel - Updated

John Cowan's entry into the service definition language space: Recycled Knowledge: Resedel. Looks good, I think, as I am better with XML Schema than RELAX NG. (Update - thanks to trang, here is an XML Schema version of Resedel: resedel.xsd)

My biggest question is why translate HTTP GET et al. explicitly into CRUD - expose the HTTP verb; another layer of abstraction can be confusing.

Monday, May 09, 2005

Service Descriptions - Just the docs?

Couple of interesting posts: Dare Obasanjo aka Carnage4Life - On Replacing WSDL with Something Simpler and Mark Baker- ongoing - SMEX-D.

Both make the point that an interface language for services should not do very much. From my POV I primarily want a means of getting accurate documentation on the interface (and also of providing it to consumers of services). For all its warts, Javadoc was a great step forward as it provided a way for developers to produce documentation with minimal effort. Until Javadoc, the best examples of standard docs that I had experienced were man pages and some Lisp environments.

For services to be propagated widely there is a need to provide documentation that is easily created and kept up to date. Perhaps this is the starting point for a service description?
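
To illustrate the Javadoc point, a service whose primary artifact is its documentation might be described as below. This is only a sketch; the interface, operation and schema names are invented for illustration.

    /**
     * Order lookup service.
     *
     * Transport: HTTP GET over HTTPS.
     * Request:   /orders?id={orderId}  (orderId is a non-empty string)
     * Response:  an XML document in UTF-8 conforming to order.xsd
     * Errors:    HTTP 404 if the order does not exist; HTTP 500 with a fault document otherwise.
     */
    public interface OrderLookup {

        /**
         * Returns the XML representation of the order with the given id.
         *
         * @param orderId the caller-supplied order identifier, never null
         * @return the order document as a UTF-8 XML string
         * @throws java.io.IOException if the order cannot be retrieved
         */
        String findOrder(String orderId) throws java.io.IOException;
    }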

Thursday, May 05, 2005

the simplest thing that can possibly work

Quoderat » Problem-first design - great POV and quote

Service Descriptions - There does not need to be just one

There has been some really good analysis and thinking about service description languages recently by several smart people (Tim Bray: SMEX-D, Norm Walsh: NSDL, Mark Nottingham). Every one of them makes some excellent suggestions and brings a unique POV. One assumption everyone is making is that there needs to be only one service description language. I challenge this assumption and suggest that more than one is actually preferable.
There are a few reasons for wishing there to be only one service description language:

  • We have a mindset framed by the Web publish-find-bind triangle, where we assume that magic happens by everyone automatically discovering services and magically binding to them to solve complex business problems. This implies a many-to-many relationship between services - all services need to (and can) talk to all other services. In the real world this is more like one-to-a-few or few-to-a-few - a slight over-design.

  • There is a burning desire for protocol independence, yet between companies there is unlikely to be a replacement for HTTP/SSL anytime in the future. As the enterprise architecture is slowly decomposed into services provided by outside organizations, I would suggest having a single transport is more important than having transport/protocol independence. If the transport information is cleanly separated from the message there should be no issue in replacing it with a different transport - a good example is EDI and AS2.
  • The complexity of integration is not in the communication of messages; it is in the integration of the semantics after the message has been received. Currently we are doing several integrations a month with a wide range of systems. The message delivery is not the problem; it is the semantic meaning of the messages - and this is in a single, well defined vertical.

  • Tool development has always been a good reason for having a single set of standards that all vendors build to. The tools are complex because the service description is complex, and a lot of complexity is introduced by RPC. If we focus on the message (where the real integration occurs) there are really good tools available: XMLBeans for Java does a truly great job of providing a binding between XML and Java, as does xsd for .NET, and I am sure there are many others. These tools work on basic schemas and are generic tools - rather than specific, more complex ones.

One of the really good aspects of the service description languages described by Tim Bray and Norm Walsh is that they use the same vocabulary as the transport, i.e. HTTP - this makes them a lot easier to understand as they do not add another layer of verbal indirection. This reduces the learning curve and hopefully the ambiguity. Having several service description languages is not an issue: if they are all in XML they can be transformed from one to another easily. As the transformation is a design-time dependency and not a runtime one, there is no performance penalty. The major issue may be loss of meaning, but there are only a limited set of MEPs so this should not be a long-term issue.
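
As a sketch of that design-time step, converting one XML service description into another is a single XSLT pass using the standard JAXP API; the stylesheet and file names below are hypothetical.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class DescriptionConverter {
        public static void main(String[] args) throws Exception {
            // Hypothetical stylesheet mapping one description vocabulary to another.
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("smexd-to-nsdl.xsl")));
            // Run once at design time; the converted description is a build artifact,
            // so there is no runtime performance penalty.
            t.transform(new StreamSource(new File("service-description.xml")),
                        new StreamResult(new File("service-description-converted.xml")));
        }
    }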

The tendency today for web services is to add more features, and hence the complexity is increasing. Shift the focus to how simple they can be and reduce complexity - let's see how easy it can really be.

The focus must be on the communication of information and not on abstracting the service descriptions to the point where everyone just gives up and writes Word documents.

Tuesday, May 03, 2005

Building knowledge from parts

Google has done a great job of decomposing knowledge and making it accessible, but is it possible to build knowledge from parts in an automated fashion?

The first two questions to ask are: 1) Is it possible to build useful knowledge from components in an automated way? (Rooter, a machine-generated paper, would tend to suggest otherwise :-)) 2) Why would we want to do this?

For me the why is the information overload caused in large part by change. A tool to categorize and filter information intelligently would be a better Google. Navigating down to page 15 of a Google search to find a nugget is not very productive, though compared to the alternative it is a major step forward. Being able to compare and contrast becomes even more interesting; an intelligent Google Fight (thanks to Mark Baker for the pointer) may be an interesting starting point. Rather than distill knowledge, provide the ability to contrast and compare POVs - something our political process badly needs.

How about parallel news tickers showing pros and cons of a viewpoint to help decide on a course of action? How about an uncertainty measure - we all know the uncomfortable feeling of not having enough information to make a decision - would it help to quantify it? We live in an imprecise world; is it possible to measure/estimate the degree of ambiguity?

So search is great, but it is just the start - we have the parts; now what are we going to build to create new knowledge and understanding?

Tuesday, April 26, 2005

Telescopes and views

Jon Udell's essay "The wrong end of the telescope ?" raises some interesting issues and summarizes some thoughts from several sources.

They all illustrate how difficult it is to put a simple application into a browser page. Part of the problem is that the browser/HTTP/web server combination was never designed to support the rich applications we are trying to shoehorn into it. Therefore the issue is not really different ends of the telescope but rather different telescopes. Programming the browser is very different from programming the server, and they do not connect very well.

The problem lies in the marked impedance mismatch between the browser model and the server programming model. Server-side programming is relatively easy - the tools are there and it is up to the practitioners to use them correctly.

Matching the server-side interfaces with the client is hard once you go beyond a simple form. Entering the world of multi-page applications becomes challenging, to say the least. We have invented all sorts of interesting workarounds/add-ons - cookies, frames, iframes, DHTML, etc. - to try to make it possible to deliver rich applications. Some people have created neat frameworks to help, such as this one from cross-browser.com, but in the end the complexity remains in connecting the back to the front.

A good indication of the overall complexity is the number of attempts to build easier-to-use frameworks - everyone has made an attempt at creating either a new scripting language, a new template framework, a new application stack, an application builder, and now rich clients. The proliferation of these tools is a symptom of the problem - sophisticated web applications are too hard to build. The reason, I believe, is that there is an impedance mismatch between the browser and the server - HTTP is not the issue.

Sometimes it feels like we are all using different telescopes to hunt for the most elusive thing of them all - an easy model for building, debugging and maintaining sophisticated and usable internet applications.

Tuesday, April 19, 2005

EDI - Simple enough?

A comment by Mark Baker around EDI got me thinking about the current amount of infrastructure we are building on top of the transport. Today millions of business messages flow formatted as EDI messages, and thousands of business processes are built using the MEPs defined by EDI documents.

So what is wrong with just using EDI? Probably a couple of things. At one point it was the cost of the initial transport hardware and communications infrastructure, but this has been addressed by the draft AS2 standard, which is essentially EDI over HTTP(S) and has reduced the costs dramatically - so much so that Walmart has mandated it for all communications.

The amount of work that has gone into the MEPs and the definition of the document elements is huge and extremely valuable IP. The only remaining piece is the actual format itself, which is very “concise” as it was designed when bandwidth was expensive. The format is hard to work with compared with XML, due mainly to the lack of tools. If every language had an open source EDI parser (or two) and a transformation tool like XSLT, would everyone be using EDI today?
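
As a hedged sketch of how little magic there is in the format itself: a classic X12-style segment is just delimited text. The segment below and the field positions are illustrative only, not a complete or validated implementation.

    public class EdiSketch {
        public static void main(String[] args) {
            // Illustrative X12-style purchase order header segment:
            // elements separated by '*', segment terminated by '~'.
            String segment = "BEG*00*SA*PO12345**20050419~";

            String[] elements = segment.replace("~", "").split("\\*");
            System.out.println("Segment id: " + elements[0]); // BEG
            System.out.println("PO number:  " + elements[3]); // PO12345
            System.out.println("Order date: " + elements[5]); // 20050419

            // The hard part is not the parsing; it is the agreed meaning of each
            // position, which is exactly the IP embedded in the EDI standards.
        }
    }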

The mechanics of EDI at the MEP level provide a fairly complete set of business interactions, but getting into the details of the messages and extending them is very complex and requires very specific knowledge that is only applicable to EDI - no one has ever used EDI as a format for config files or build scripts. XML, on the other hand, has become the universal data language because it is so easy to manipulate and mold.

In some ways the discipline that EDI imposed has resulted in a loss of simplicity: instead of a set of well defined MEPs we now have a large number of standards that try to do the same but in fact do not focus on the MEPs, instead making the communication more complex. Where is WS-850, rather than the generic WS-Metadata? Are we missing the point because it is too easy to avoid it?

Monday, April 18, 2005

Information overload: volume or change

While there has been a dramatic increase in information volume, the actual problem for me is the rate of information change. Or maybe I am unable to manage the fundamentals of the information flow: how do I pick out the key pieces of information and relate them back to my existing knowledge framework?

When the information sources and formats change so rapidly, we are unable to categorize them before we move on to the next piece. The whole idea of refereed technical papers was to try to place the information in a broader context and relate it to other similar information. Today's hyperlinks may be technically better, but the information schema is not well designed, so they do not perform the necessary context creation - or at best it is ad hoc.

Of course blogs (like this one) add to the problem, as we publish more of a stream of consciousness (speaking only for myself) rather than a well considered view with information to back it up. Blogs are thoughts in progress and do contain many nuggets, but they need to be forged into coherent thoughts and placed in a larger context.

Is the next great Google type company not something that finds the atoms but something that creates the contextual fabric of knowledge?

Thursday, April 14, 2005

Open Source to Open Services

Open source has shown that simplicity can be created in the infrastructure, but it still misses its potential, as code still has to be designed to be installed and maintained on various platforms that are changing at uneven rates. It should rather be offered as an open service that can be combined with other services to create new services that live in the network. I do not want software; I want solutions to problems. (I still like software and looking at code, but I really do not need it - figuring out how Google Maps works is fun and interesting, but I do not need to know how it works to use it. To integrate with it I should need to know very little, yet integration is still too hard.)

Integration is a problem because most software is not designed to be integrated - just as, until the advent of open source, most enterprise software was not designed to be easily maintained, built and deployed. A huge effort went into (and still goes into) designing software to be deployed on a large number of target platforms. This effort provides zero value to the consumer of the solution.

If the same effort went into creating simple open service interfaces as smart people have put into creating simple maintenance and deployment for open source, we would have a very rich ecology of shared services and a very different economic model. Open source is trying (and in many cases succeeding) to move the economic model to a maintenance model - well, take the next logical step and move to a complete utility model that measures economic value by ease of integration and SLA (Service Level Agreement).

The economics are on the side of shared infrastructure and software as a service. Moving to open services would dramatically increase the impact of the open source community and move the value to the SLA of the service not the portability of the service.

More simplicity

After my last post on simplicity I received an email from an old Teknekron/Tibco colleague, Steve Wilhem, who made a couple of great points that should be shared. I have paraphrased his email, so any errors in interpretation are mine.

The first was around refactoring and how to achieve simplicity (specifically in a framework): the need to constantly refactor - do not expect to get it right the first time, but keep evolving it. The other was more subtle and, I thought, most important: do not try to solve all possible problems - leave the system open enough that the last ten percent or so can be done by the user of the system. I think this is key, as this is typically where we add complexity - trying to solve all the weird edge conditions we can possibly dream up. Let's assume our users are smart and give them the hooks to solve the problem in the way they see fit. This moves any complexity to specific applications rather than the infrastructure, where it does not belong. Thanks Steve - great suggestions.

Wednesday, April 13, 2005

Simplicity Rules!

Sam Ruby recently posted a great presentation on, amongst other things, simplicity. Adam Bosworth posted a long article a while ago in a similar vein. It seems that the importance of simplicity is being recognized.

I am not sure that it is simplicity of implementation, as in being unsophisticated, that is being suggested; rather it is elegance, or simplicity of the solution. The original Mac was a complex piece of engineering, as was the original Palm, but both had simplicity of use.

Open source drives simplicity by rubbing off the parts of a design that have unnecessary complexity. Open source projects that have community value tend to get simpler - as in easier to build, test and deploy - because no one wants to go through complex steps again and again, using up their most valuable commodity: personal time. To take an example from Sam's talk - try installing and running WebSphere versus JBoss. Last time I tried, I gave up on WebSphere after several hours, while I had JBoss downloaded and running in less than 30 minutes. (This is not a claim that programming J2EE is a model of simplicity ;-)).

Rapid evolution towards simplicity is usually the result of several smart people driving a solution to a shared problem. Open source by its nature attracts more people and as it is a meritocracy it gets rid of over complex solutions pretty quickly. Having development teams organized in a similar fashion grinds out clean solutions faster too.

Making simplicity a goal in integration is key to success. It can be an aspirational goal - integration may never be point and click - but unless the goal is simplicity rather than solving many imaginary problems, the possibility of asymptotically approaching it will never exist. So simplicity of solution is always the right goal.

Monday, April 11, 2005

Deploying hosted applications - boxes are bad

I have used hosting providers for many years now, and before that worked with rooms full of computers to serve various customer applications. This is one of the first areas where I saw the huge value in outsourcing and the value that shared infrastructure brought to customers, but I want a lot more than is being provided today.

We have come a long way in managing the infrastructure for hosting applications. However, the hosting model is still tied to the model of boxed CPUs. Blades are great, and racking and stacking has been a great way to create cheap scalable architectures, but we are still a long way from the ideal case. The ideal case is where I agree the high-level architecture with the hosting provider, then push software to the relevant part of the system architecture and pay for an SLA and usage. Both SLA and usage are complex numbers to work out, but the degrees of freedom they introduce should make it easy to develop innovative and fair pricing models.

We are a long way from this, as the software tools we use to develop are also locked into the box model. This is not surprising, as commercial tools are priced by the box; open source tools, however, do not have this limitation. Another way to look at this is from the virtual machine's perspective - why should the JVM or CLR be tied to a box? Should it not manage resources for the application and request more resources as demand increases?

Global computing projects have been leveraging this for years, but each application has been custom designed. Is it not time for this to become a standard part of the network?

Tools and Intermediaries

When I think of intermediaries I typically think of additional tools that I have in my tool box and can use. They are something I can use within my processes but are run and managed by someone else - in other words shared infrastructure.

A simple example of an intermediary would be a data transformation service. This service would accept any message in a specific wrapper (SOAP or REST) and a URL pointing to a definition of the transformation instructions and then return the result. Other services could be shipping calculators, tax calculators etc.
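
A minimal sketch of such a transformation intermediary, written as a Java servlet, might look like the following. The parameter name and URL layout are assumptions for illustration, not a proposed specification.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    // POST the XML message as the request body and pass the stylesheet location as a
    // query parameter, e.g. /transform?xslt=http://example.com/po-to-invoice.xsl
    public class TransformIntermediary extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            String xsltUrl = request.getParameter("xslt");
            try {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xsltUrl));
                response.setContentType("text/xml; charset=UTF-8");
                // Transform the incoming message and return the result to the sender.
                t.transform(new StreamSource(request.getInputStream()),
                            new StreamResult(response.getOutputStream()));
            } catch (Exception e) {
                response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
            }
        }
    }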

Whether the service returns the message to the sender or forwards it to another destination seems to determine whether the service is defined as an intermediary or an end point. However, the service should not really care where the message goes next; that should be a capability of the service (does it support forwarding?) and a decision made by the application developer - where do I want it to go?

Discovery of these shared services is a challenge today, as I typically need a complete ecology working together before they are truly useful. The beginning of an ecology does seem to require a common set of semantics and a way of transforming the semantics. This implies a common schema for the ecology and transformation services to move in and out of the common semantics. For the ecology to exist there must be sufficient agreement between all parties on common semantics. A good example of common semantics is in the retail industry, where ARTS has defined a set of XML Schemas under the heading of IXRetail. This provides the common semantics that enable parts of the ecology to communicate.

Once the semantics are defined and available to the community, shared services can be created. But how individual services interact and work together is still an issue, as there is no really good way to build collaboration between service providers in the loosely coupled framework that is needed to foster collaboration and independent development.

To make interacting intermediaries a viable source of tools, the ability to bootstrap an ecology needs to be available - anyone got ideas?

Thursday, March 31, 2005

Grids and Minis

There are two opposing forces separating the hardware aspect of the computer industry. On one hand we have the Mac Mini showing that we do not need to have (multiple) large ugly beige box(es) around the house. At the other end we have the ubiquitous computing platform from which all we ask is storage and compute cycles.

In the household the ease of use, no wires, and seamless connectivity with all other appliances is going to be the driver for consumer purchases. Macintoshes are just so much easier to use and integrate into a digital lifestyle and for techies we can get to the internals of Unix with no problems - think remote management of Unix versus Windows. How long before we have remote management services for the home network?

On the corporate front I have gone from managing large internal data centers, to leasing rackspace, power and bandwidth to totally managed systems, where I request boxes on a demand basis and then install application software. Here we are still dealing with unique boxes and leasing boxes as a representation of computer resources. It is cheaper in most cases to lease boxes on demand rather than own them, but the real advantage will come when we break free of the computer box and have a true grid.

It is a question of degrees of freedom - with a box (even in a 1U form factor) the degrees of freedom are limited and the amount of management required for load balancing etc is too much. Installing software on multiple boxes and then configuring load balancing etc. should not be required. This is true whether we are in a home setting or using corporate resources - our thinking is constrained by physical boxes. The ideal situation is being able to inject software into a domain and have it managed on demand.

Today we have a one-to-one correspondence between boxes and resources. This dependency needs to be broken to create the next wave of innovation and ease of use. The question is whether the innovation will come from the hardware vendors or whether some smart software will abstract the hardware into a virtual grid. Realistically both need to happen; however, the software will probably come first, as the hardware makers are too entrenched in shipping boxes to change.

Once this happens - and it will - the question will be how to access resources in the local and global grid. This is when things will get fun, as there are no longer compute or storage boundaries - we are no longer navigating static information but information that is changing as we interrogate it and follow meaning provided by the compute resources. A bit far-fetched? Not really; I believe we are only 5-7 years away from this in terms of software. The hardware will follow the software, as the need to own hardware is going to go the way of owning power generators and telephone switches. It is perhaps time to start thinking about the software foundations of this new platform.

Friday, March 25, 2005

Is it the message or the transport

To make integration work, the discussion seems to focus on the transport rather than the message. We are trying to convey meaning; the transport is largely irrelevant providing both sides speak it, and HTTP is the lowest common denominator.

Phil Windley as usual makes good points in his post On the death of SOAP | Between the Lines | ZDNet.com, but I think the issue is in the white space between the transport (HTTP) and the message (XML).

If I may make a bold statement: between disparate systems (even similar systems) the transport protocol is HTTP and the message format is XML, end of story. There are other use cases and approaches, but they are edge cases, very domain specific, with proprietary end points such as AS2. For the foreseeable future the common transport is HTTP(S); protocol independence is a futile exercise and (IMHO) should be abandoned.

The real issue for most applications is which end is going to do the transformation into the format necessary to process the message. Whether the message is formatted as a SOAP or REST message is largely irrelevant to the end point that needs to do the transformation; the issues that need to be addressed are:

  • Is the message understandable?
  • Is there sufficient fidelity for understanding?
  • Can the message be converted into the receiving format?
  • Can the response be returned in an understandable format to the sender?
I am not sure REST or SOAP addresses these issues. A well-documented schema and a well-written HTML page probably contribute more to message integration than any WSDL or REST URI.

This is because any standard needs to both exist for at least five years and be actively deployed for five years before it makes any impact on the industry as a whole. In my current environment we are integrating systems from Windows 3.1 to the present day. This is real life, and it will always be this way: legacy systems will always be in the majority. Interestingly, however, the information flows are pretty constant; how they are described and formatted has not changed much - they have become more formal and faster, but the core information is the same.

The point, therefore, is that the transport of the information needs to be the lowest common denominator, while the focus must be the ability to convey meaning, not the protocol.

Thursday, March 24, 2005

Impedance Reduction

One of the usual starting points for me in a design is the flow of information. That usually leads me to creating a set of XML Schemas for the information flow. This works well until I need to start developing code. In the past it has been a little tiresome to create (and recreate as the design evolves) the bridge between XML and the application.

This problem seems to be rapidly going away in both the Java and .NET environments. In Java I am using XMLBeans, which, once you get it installed and working, provides a slick and easy way to map between schema and Java. In the .NET world I have been using a tool called xsd that is part of the Microsoft .NET toolkit.

Of the two approaches XMLBeans seems to be the more polished and functional, but it is great to have similar functionality in both environments to improve productivity and reduce the drudgery.
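
For anyone who has not tried it, the XMLBeans workflow is roughly: compile the schema with scomp, then work with the generated strongly typed classes. The sketch below is hedged - the PurchaseOrderDocument type and its accessors are hypothetical names that scomp would generate from a hypothetical purchase-order schema.

    import java.io.File;
    // Hypothetical package produced by scomp from purchase-order.xsd.
    import com.example.po.PurchaseOrderDocument;

    public class ImpedanceExample {
        public static void main(String[] args) throws Exception {
            // Parse an instance document straight into the generated type.
            PurchaseOrderDocument doc = PurchaseOrderDocument.Factory.parse(new File("po.xml"));

            // Schema-typed accessors replace hand-written XML plumbing.
            System.out.println(doc.getPurchaseOrder().getOrderId());

            // Changes are written back out as schema-valid XML.
            doc.getPurchaseOrder().setStatus("SHIPPED");
            doc.save(new File("po-updated.xml"));
        }
    }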

Tuesday, March 22, 2005

REST and SOAP: Fitting them into your toolbox

Two appropriately cautionary posts about SOAP and REST raise some good issues,
Jon Udell: Don't throw out the SOAP with the bathwater and
Steve Vinoski: SOAP vs REST: stop the madness.

I have always had the metaphor of a toolbox where I keep the set of tools I need to do my job. Taking it a little further, I have seen the standards process (both informal and formal) as the series of events that shape and separate the tools in the toolbox. This shaping and separation ensures that the tools can work together and overlap enough to solve most problems. The struggle I am having is: where do these tools fit in my toolbox? Jon Udell asks the question:

Why must we make everything into a rivalry? Why can't we just enjoy the best of all worlds?

Today I use both for different purposes, but do I want to support both for all customers? This becomes an economic issue that significantly reduces the enjoyment. If I support one over the other, what do I say to a CIO about why one versus the other should be in our mutual toolbox? Is it clear when to use SOAP and when to use REST? If it is clear then we have the best of both worlds - until then we need to keep working on shaping our tools.

Saturday, March 19, 2005

RESTful coding standards

In AJAX Considered Harmful Sam Ruby starts to lay out a set of useful coding standards for REST. He addresses encoding and idempotency; however, going a little further has helped me. I have tried to apply the KISS principle and be a minimalist, i.e. only add layers of complexity when you need them.


Most use cases I have come across can be reduced to a CRUD interface (and yes there are exceptions, and exceptions prove the rule). The starting point is some standard naming conventions to create some degree of consistency for users for example:



  • Create POST to http://xml.domain.com/resource/Create

  • Read GET from http://xml.domain.com/resource?QueryString

  • Update POST to http://xml.domain.com/resource/Update

  • Delete POST to http://xml.domain.com/resource/Delete


The other issue is how to pass arguments and results back and forth. Some decisions are obvious: GET always passes a query string and the return data is always XML, both using UTF-8. I always have issues with POST, though; for invocations I recommend always using XML, as it is hard on a user to have to switch between POST form parameters and XML for different methods. If XML is always used, everything stays consistent. However, when a delete method only requires a single key it is very tempting to just pass a form parameter or even just do a GET with a query string. So far I have stayed with consistency - I would be interested in other opinions.
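
As a sketch, the client side of the Create convention above might look like the following with plain HttpURLConnection; the host name and payload are invented for illustration.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CreateClient {
        public static void main(String[] args) throws Exception {
            String xml = "<customer><name>Acme Corp</name></customer>"; // invented payload

            URL url = new URL("http://xml.domain.com/customer/Create");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");

            // Always send XML in the POST body, always UTF-8.
            OutputStream out = conn.getOutputStream();
            out.write(xml.getBytes("UTF-8"));
            out.close();

            // 200 means success; 500 carries a fault document (see the error section below).
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }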


The other area of standardization that helps is error returns; SOAP Faults are a good idea that REST can easily adopt. With a CRUD interface there is a simple set of standard errors that can be defined, returned with an HTTP 500 status code:


  • Create: Item Already Exists; User has no permissions; ...

  • Read: Item Does not Exist; User has no permissions; ...

  • Update: Item Does not Exist; Item Locked; User has no permissions; ...

  • Delete: Item Does not Exist; Item Locked; User has no permissions; ...


A simple REST fault schema can be used so the same format is always returned, allowing common utility code to handle errors in a uniform way. For errors to report permissions problems there needs to be a notion of identity, which requires authentication.
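
To make the idea concrete, here is a sketch of reading such a fault on the client side; the fault, code and message element names are purely illustrative, not a proposed standard.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class FaultReader {
        // Hypothetical fault body:
        //   <fault><code>ItemDoesNotExist</code><message>No such customer: 42</message></fault>
        static void checkForFault(HttpURLConnection conn) throws Exception {
            if (conn.getResponseCode() == 500) {
                InputStream err = conn.getErrorStream();
                Document fault = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(err);
                String code = fault.getElementsByTagName("code").item(0).getTextContent();
                String message = fault.getElementsByTagName("message").item(0).getTextContent();
                // Common utility code can turn every fault into the same exception type.
                throw new RuntimeException(code + ": " + message);
            }
        }
    }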


For authentication, the lowest common denominator seems to work most of the time - I would be interested to hear about real cases where it does not. If the request is over HTTPS and the authentication is HTTP Auth, then we have pretty good security. One step further would be to PGP the message, but that would only be needed where the message has to be highly secure and travels over unsecured pipes before and/or beyond the HTTPS stream.
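
On the client side, HTTP Basic auth over HTTPS needs very little code. One way - a sketch, with invented credentials and host - is to register a default java.net.Authenticator so the JDK answers 401 challenges automatically:

    import java.net.Authenticator;
    import java.net.HttpURLConnection;
    import java.net.PasswordAuthentication;
    import java.net.URL;

    public class AuthExample {
        public static void main(String[] args) throws Exception {
            // The JDK will answer HTTP 401 challenges with these credentials.
            Authenticator.setDefault(new Authenticator() {
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication("svc-user", "secret".toCharArray());
                }
            });

            // HTTPS provides transport security; HTTP Auth provides identity.
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://xml.domain.com/customer?id=42").openConnection();
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }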


I would be interested in feedback on and extensions to this, as it would be nice to have a consistent pattern for programming REST interfaces. A consistent pattern would enable us all to build interfaces faster and to learn to use the work of others faster and more easily.

Friday, March 18, 2005

Laszlo has released 3.0b2

This release is a big milestone for OpenLaszlo (IMHO) as it provides for serverless deployment of the UI components. Obviously a server is still required if you need to interact with data, but not for rendering. After working with systems such as Dreamfactory, the importance of not needing a server for rendering became very clear, both for simplicity of deployment and for scalability. Congratulations to the OpenLaszlo team for taking things in this direction.

Wednesday, March 16, 2005

Nelson on actual implementation of SOAP and REST

Phil Windley posts from ETech: Phil Windley's Technometria | Nelson Minar at the Google AdWords API. This is a good summary by Nelson Minar of the issues and differences around implementing services in SOAP and REST. I particularly like the Low REST and High REST definitions - a good characterization of something I have been struggling to explain and rationalize in an implementation for the last few weeks.


For REST to come into its own there needs to be some more formalism along the lines that Nelson suggests, as does David Orchard.

Friday, March 11, 2005

A good series of RESTful articles

I have not had time to read them all (but I intend to): Carlos Perez has a great series of articles on REST - Manageability - Brainstorming a ReSTful Toolkit. If the RESTful toolkit can be understood in less than a day it could be a winner.

Thursday, March 10, 2005

It Ain't Easy to be Simple - but it is important to understand and communicate

I took the liberty of expanding on the title of this article, It Ain't Easy to be Simple, by Mike Champion. The article surprised me: it was on the front page of MSDN yesterday, indicating that the REST vs. WS debate is moving into the center of attention.


I am not sure I would argue that HTTP/REST is elegant, but rather that it is a good approach to working with what we have. Many years ago, circa 1997, someone said to me that HTTP is the last protocol that will ever be invented - the comment stuck, but it was not till many years later that I realized the truth of it. Essentially, the weight of the installed base of plumbing at both desktop and server virtually requires that HTTP exist everywhere for at least the next few decades. I think XML is moving into the same position as a data representation language. Therefore it is only natural that they come together to communicate meaning in machine-machine and machine-human conversations. The essence of the debate is: do we take a minimalist approach to defining the conversation (REST), or do we take the solve-every-problem approach, which is the WS-*&%$# approach?

Tuesday, March 08, 2005

More voices seeing the erosion of simple in web services

In this article on Jonathan Schwartz's Weblog he notes that corporate IT architects and developers are getting worried about the erosion of simple in web services.

One reaction to this is to build complex development environments that mask the complexity and make it appear simple, but this is usually the wrong approach as good architects and developers need to understand their tools and how they work. I have never been able to make a poor developer become a great developer by giving them a tool - it takes training, monitoring and patience.

Web services (and by web services I mean the broad sense - services that communicate over the web) are important and are not going away, so we do need to solve the problem. More and more companies are moving to shared infrastructure and shared services, and they need to be able to communicate simply, transparently, reliably and securely. Simple often means solving only 80% of the problem and just getting something working in the real world; the WS-x!*&@ stack tries to solve 100% of the problem before the real world has defined the problem. Time will tell what the outcome will be, but my bet is on simple to win.

Monday, March 07, 2005

Linkedin and Craigslist

Over the last several months we have been hiring in several areas. So far the best service has been CraigsList in terms of quality of applicants. However, I have just started using Linkedin's new service and it is starting to show what a social network can do. The user interface is dramatically easier than CraigsList (which has completely awful search capabilities - try searching for C# or .Net) and it makes use of your social network to find well-matched candidates.



Is it perfect? No, it has a long way to go, but it is the first feature in a social network that does something unique to social networking and makes business sense. It is too early to tell whether the quality of applicants matches CraigsList, but Linkedin has the right demographic for our type of business: technically sophisticated early adopters willing to push the envelope. So I imagine that it should evolve into a very good service if they can keep the signal-to-noise ratio high - I notice a lot of recruiters in there already. While recruiters are a necessary part of our business, they are an intermediary that might not work in a social network the same way they work today. It is an area Linkedin should think about to keep the service useful.

Thursday, March 03, 2005

Leaning towards REST but.....

Living a little more in the real world these days - i.e. working at the application layer, solving customer problems - I have had the chance to use web services (SOAP and REST) in various application scenarios. From that, I am leaning more towards REST-style architectures, because in the real world I inhabit it is unreasonable to expect all end points to have the latest technology and to be able to navigate the complexities of SOAP.


The biggest problem I see with the WS-*%$#? mess is that it is quickly becoming the next CORBA and is failing one of my simple heuristics: if a single reasonably talented engineer cannot pick up the concepts and start being productive in a week or so, then there is an issue. In this case the S has been dropped from SOAP and we just have an Object Access Protocol, which is another way of saying RPC and leads back to CORBA. This version is better, as it is not tightly locked into a synchronous RPC view of the world, but the complexity still exists and is growing.


On the other hand, there are things I still like about the web services stack that are missing from REST. One is the degree of formalism in defining interfaces, either RPC or document style. Last week I needed to access several web services, and being able to query them for their WSDL was great and made my life a lot easier. Having to go through each one and look for the documentation (we all document everything all the time, right?!) would have been a major pain. This is the subject of a good article by Hugh Winkler that Mark Baker pointed to. It makes a good case for having a formal interface description for REST.


So while I lean towards REST for its simplicity and the ability to talk to any end point providing it has HTTP and XML, the lack of a standard machine-readable description is a big gap. I would almost be happy with a Javadoc or NDoc type of tool, but for REST to take over the Simple part of web services I think this is required.

Wednesday, March 02, 2005

Retail becoming more tech centric

I attended an event hosted by Sun Microsystems last week around retail technology. Part of the program was a presentation from Jeff Roster of Gartner. A significant part of his talk was devoted to how the retail industry is moving from techno-phobic to techno-centric, and he had a lot of information to back it up. A major piece of data showed how more IT-centric retailers tend to dominate in terms of market growth.


While I agree with the essence of his presentation, and I have seen the same, I think it needs to be framed in terms of results rather than just bits and bytes. What is happening is that the smart retailers are starting to use technology to give them accurate numbers to run their business. It started with Walmart and the supply chain, and it is now moving to the demand (shopper-facing) side of the business. While IT is great, the key is the Information, not the Technology, part of IT. Retailers who empower IT to deliver accurate information that the business side can act on are the ones that will succeed.


Actionable information is a critical measure of success; there is no point in knowing that store abc is losing shoppers or that average basket size is declining if there are no processes in place to act on the data. Part of any information project is ensuring that there are levers to pull and knobs to turn to change the metrics that are measured.