Deploying hosted applications - boxes are bad
I have used hosting providers for many years, and before that worked with rooms full of computers serving various customer applications. This was one of the first places where I saw the huge value in outsourcing and the value that shared infrastructure brought to customers, but I want a lot more than is being provided today.
We have come a long way in managing the infrastructure for hosting applications. However, the hosting model is still tied to the model of boxed CPUs. Blades are great, and racking and stacking has been a great way to create cheap, scalable architectures. We are, however, a long way from the ideal case: I agree the high-level architecture with the hosting provider, then push software to the relevant part of the system architecture and pay for an SLA and usage. Both SLA and usage are complex figures to work out, but the degrees of freedom they introduce should make it easy to develop innovative and fair pricing models.
We are a long way from this because the software tools we use to develop are also locked into the box model. This is not surprising, as commercial tools are priced by the box; open source tools, however, do not have this limitation. Another way to look at this is from the virtual machine's perspective: why should the JVM or CLR be tied to a box? Should it not manage resources for the application and request more as demand increases?
Global computing projects have been leveraging this for years, but each application has been custom designed. Is it not time for this to become a standard part of the network?
Tools and Intermediaries
When I think of intermediaries I typically think of additional tools that I have in my toolbox and can use. They are something I can use within my processes but that are run and managed by someone else - in other words, shared infrastructure.
A simple example of an intermediary would be a data transformation service. This service would accept any message in a specific wrapper (SOAP or REST), along with a URL pointing to a definition of the transformation instructions, and return the result. Other services could include shipping calculators, tax calculators and so on.
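As a rough sketch of the idea, here is what such a transformation service might look like. The names and the instruction URL are invented for illustration; a real service would fetch the instructions (an XSLT sheet, say) from the URL, where here a local registry stands in for that fetch.

```python
# Hypothetical shared data-transformation intermediary (illustrative only).
# The caller supplies a wrapped message plus a URL identifying the
# transformation instructions; a local registry stands in for fetching
# the instructions from that URL.

TRANSFORMS = {
    # invented instruction URL -> transformation function
    "http://example.com/transforms/upper-case": lambda text: text.upper(),
}

def transform_service(message: dict) -> dict:
    """Accept a wrapped message and return the transformed result."""
    instructions_url = message["transform"]   # pointer to the rules
    body = message["body"]                    # payload to transform
    apply = TRANSFORMS[instructions_url]      # stand-in for fetching the URL
    return {"transform": instructions_url, "body": apply(body)}

result = transform_service(
    {"transform": "http://example.com/transforms/upper-case", "body": "hello"}
)
print(result["body"])  # HELLO
```

The point is the shape of the contract - message in, pointer to instructions, result out - not the particular wrapper or transformation language.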
Whether the service returns the message to the sender or forwards it to another destination seems to determine whether it is defined as an intermediate service or an end point. However, the service should not really care where the message goes next: whether forwarding is supported is a capability of the service, and where the message goes is a decision made by the application developer.
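That separation - forwarding as a capability of the service, routing as a decision of the caller - can be sketched as follows. The "forward_to" field and the in-memory outbox are my own illustrative stand-ins, not part of any real protocol.

```python
# Sketch: one service that behaves as an end point or an intermediary
# depending on the caller's routing decision, not its own.

SENT = []  # stand-in for an outbound POST to the next destination

def forward(destination, message):
    """A real service would POST onward here; we just record the send."""
    SENT.append((destination, message))

def intermediary(message):
    result = {"body": message["body"].upper()}  # placeholder transformation
    next_hop = message.get("forward_to")        # caller's routing decision
    if next_hop:
        forward(next_hop, result)               # capability: forwarding
        return None                             # nothing back to the sender
    return result                               # behave as an end point

# End-point style: the result comes back to the caller.
print(intermediary({"body": "abc"}))            # {'body': 'ABC'}

# Intermediary style: the same service, but the caller routes it onward.
intermediary({"body": "abc", "forward_to": "http://example.com/next"})
print(SENT[0][0])                               # http://example.com/next
```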
Discovery of these shared services is a challenge today, as I typically need a complete ecology working together before they are truly useful. The beginning of an ecology does seem to require a common set of semantics and a way of transforming into and out of them. This implies a common schema for the ecology, plus transformation services to move in and out of the common semantics. For the ecology to exist there must be sufficient agreement between all parties on the common semantics. A good example of common semantics is in the retail industry, where ARTS has defined a set of XML Schemas under the heading of IXRetail. These provide the common semantics that enable parts of the ecology to communicate.
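To make "moving in and out of the common semantics" concrete, here is a minimal sketch of translating a vendor message into a shared vocabulary and back. The field names are invented; a real ecology would map to an agreed schema such as ARTS IXRetail rather than to these dicts.

```python
# Illustrative field-name mappings; not real IXRetail element names.
TO_COMMON = {"cust_no": "CustomerID", "amt": "TransactionAmount"}
FROM_COMMON = {v: k for k, v in TO_COMMON.items()}  # inverse mapping

def translate(message, mapping):
    """Rename fields according to the mapping, leaving values untouched."""
    return {mapping[field]: value for field, value in message.items()}

vendor_msg = {"cust_no": "1042", "amt": "19.99"}
common_msg = translate(vendor_msg, TO_COMMON)    # into the shared semantics
round_trip = translate(common_msg, FROM_COMMON)  # and back out again

print(common_msg)  # {'CustomerID': '1042', 'TransactionAmount': '19.99'}
```

Each participant only has to agree the middle vocabulary and maintain its own mapping, which is what lets parts of the ecology communicate without agreeing with every other party individually.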
Once the semantics are defined and available to the community, shared services can be created. How individual services interact and work together is still an issue, though, as there is no really good way to build collaboration between service providers in the loosely coupled framework that is needed to foster collaboration and independent development.
To make interacting intermediaries a viable source of tools, the ability to bootstrap an ecology needs to be available - anyone got ideas?