Tuesday, November 16, 2004

The competition of free

Currently the Mozilla Foundation has 12 full-time developers while the IE team has around 100. These figures probably do not include support staff, marketing, etc., and in Mozilla's case those additions are probably small.

All these smart people are chasing the opportunity to dominate a market that is defined by standards, where products are rated on how well they adhere to those standards, and where consumers expect the product to be free. Making rough estimates of salaries etc., Microsoft is spending around $15M/yr to compete in a market that brings in no revenue and is a constant public relations problem. Its competition is a not-for-profit foundation that is releasing products faster and that, to the majority of users, are at least comparable if not better.

Solution: Microsoft gives $5M/yr to Mozilla and ships Firefox branded as IE. It pockets the other $10M and basks in the glow of supporting Open Source, with the added ability to point fingers at someone else for browser security problems. Consumers win by having the best standards-based browser; the trade press loses, though, as it can no longer write about the "Browser Wars".

Of course there is a small problem of Avalon.....


Thursday, November 11, 2004

db4o goes open source

Browsing Freshmeat this morning I noticed that db4o has gone open source. I have used it in a couple of small projects and it has been a pleasure to work with. I have also had the pleasure of meeting the creator of db4o, Carl Rosenberger, who is not only very smart but also a really nice guy.

The product info is here: db4objects - native Java and .NET open source object database engine - and it is well worth a look. It is nice to see another powerful tool being added to the open source world.

Friday, November 05, 2004

The Rise of the Platforms

A small announcement in my email inbox from Amazon that they have included queuing in their web service APIs is starting to make things very interesting. It was also noted by Phil Windley.

From a simplistic view, Gmail is just another queuing system that uses mail protocols and text rather than SOAP/REST and XML, so it would not be a great stretch for the Google platform to start offering a gigabyte of message queues to developers.


Amazon is first out of the gate, but the other service platforms (Google, eBay, SalesForce, etc.) will follow, and suddenly there is a new distributed platform infrastructure to create applications in, one totally removed from the platform wars of the last century.


The ability to build data-centric, loosely coupled applications that live in the network is where the next wave of innovation is going to come from, so think open services rather than open source.

Thursday, October 21, 2004

Slashdot | Software Piracy Due to Expensive Hardware, Says Ballmer

Just had to comment: Orwell would have been so proud. What is next, the RIAA suggesting that we need to reduce the price of CD players to eliminate file sharing?

Monday, October 18, 2004

Making distributed systems easier

A major source of complexity in implementing distributed systems today is the persistence of data. The core of any system is the ability to process information, and hence the need to persist information is key.

Unfortunately the majority of mechanisms to store and process data assume that they are the center of the universe and that moving data in and out is unimportant and hence can be done in bulk.

Even modern solutions such as XML databases fall short (at least the ones I have worked with require reading all the data into an in-memory object and then storing it). Many years ago I worked on a data analysis system that was based on the idea of infinite object streams; this made everyone think about what the minimum object was and design distributed processing objects naturally.

Short of writing a new stream-based XML store myself, anyone got any ideas?
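For what it is worth, the streaming idea can be approximated today at the parsing layer. A minimal Python sketch using the standard library's iterparse (the record format is made up for illustration) processes one object at a time and never holds the whole document in memory:

```python
# Stream over an arbitrarily large XML document one <record> at a time,
# discarding each element after use so memory stays roughly constant.
import io
import xml.etree.ElementTree as ET

def stream_records(xml_source):
    """Yield (id, value) pairs without ever building the full tree."""
    for event, elem in ET.iterparse(xml_source, events=("end",)):
        if elem.tag == "record":
            yield elem.attrib["id"], elem.findtext("value")
            elem.clear()  # free the subtree we just processed

# A small stand-in for what could be a multi-gigabyte feed.
sample = io.StringIO(
    "<feed>"
    "<record id='1'><value>alpha</value></record>"
    "<record id='2'><value>beta</value></record>"
    "</feed>"
)
results = list(stream_records(sample))
```

This gives the "minimum object" discipline on the read path; a true stream-based store would of course need the same property on the write path as well.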


Friday, October 15, 2004

The value of a service network

To follow up on my previous post about information services it is interesting to look at what creates value in a network based offering.

The value comes from four characteristics: producers of the data, consumers of the data, value of the data, and depth of processing.

The value of the users (producers and consumers) is dramatically increased if they are registered users and there is some form of tight coupling between them (think Google versus eBay). The value of the data lies in how unique it is and how hard it is to reproduce or migrate to another service. Lastly there is the value of processing the data; companies such as SalesForce have built significant expertise in processing data for CRM which is hard to reproduce.

Applying this to a few real-world examples: Google has a very low number of registered consumers, so it has low value there, and the data it uses is freely available; each individual piece has low value but they make it up in volume. Their processing of the data has very high value as they have built deep expertise and infrastructure around search. eBay, by contrast, has very strong ties to both consumers and producers, which creates significant value. The value of its data is also very high as it is hard to reproduce (and they defend it from scraping etc.), while the value of the processing is somewhat lower.

I really believe in the service model of information processing, but as I have tried to show it needs to be applied to the correct problems to create lasting value. Companies such as Google, eBay, and SalesForce are creating lasting value by emphasizing different parts of the model.


Thursday, October 14, 2004

Open source rich client

David Temkin, CTO at Laszlo, has made the gutsy move to Open Source. This is the first rich client platform that has a chance of widespread adoption, and it can change the way we deliver client applications.


It has two significant features going for it (or soon will have, apart from being Open Source) that will enable its success. One is the upcoming serverless model, which is critical for scalability and ease of deployment. The second is the use of a plugin to deliver high quality rendering and features that are just not possible with other approaches such as DHTML. Using the Flash 6.0 plugin ensures that Laszlo applications will run in any browser and can take advantage of platform-independent features that DHTML applications just cannot.


The last piece of the puzzle is making it Open Source: I can now recommend that we take full advantage of the platform, knowing that we do not have any major risk of the technology going away. I fully expect a strong and vibrant community to build around this new client delivery platform. I wish them every success.

Web Things, by Mark Baker

Web Things, by Mark Baker responds to my comment about interfaces. While I agree with his next post about document interfaces, I feel strongly that the formalism brought by WSDL allows the specification of public interfaces and requires little else. Public interfaces can be document or RPC style, and both have their place, but a standard human- and machine-readable interface description is necessary.


I do however share his and others' concerns that WS-xxxx is heading into the same pit of complexity that CORBA descended into. At the time of CORBA's brief heyday I was at Tibco (then Teknekron Software Systems) building loosely coupled, self-describing, message-based systems. The lack of interoperability and describable interfaces was a drawback, so we built our own dynamic object language, our own security, and so on. Standards are useful to ensure we all use compatible tools, but the tools (standards) should not be over-designed: the art is in the wielding of the tool, not the tool itself.

Servicing information not software

While I have been a strong advocate of software as a service, I have recently begun to modify my views. It is not the software that is the driver; it is the information processed on behalf of the customer that is critical. Software is just a tool: if we processed the information faster, for less cost, and more reliably using a slide rule, no one would really care.


To do the job better as a service you need to build deep domain knowledge and continuously develop best practices based on customer involvement. This, not providing software, is the economic model that makes a service model work.


As we move into this world, software is going to become a non-issue as it becomes essentially free (or close to it as a percentage of overall costs); it is now a well-defined tool. The number of options for tools and platforms is large and the differences are shades of grey.


The real value of a service-based model is going to come from the companies that build services to process information faster, cheaper, and more reliably by using the tools more effectively, not from the builders of tools and platforms.


Wednesday, October 13, 2004

Public versus Private Interfaces

For the past several weeks I have had my head down, coming up to speed on .Net and getting new customers up and running. The most interesting thing I have noticed while being heavily involved in real-world, large-scale, complex integration is how naturally we approach interfaces.

For most private interfaces we stay with XML Schemas and essentially REST-style interfaces for simplicity and ease of development. For customer-facing interfaces we always go with SOAP-style web services for formal definition and to reduce the impedance. If you communicate that you are exposing web services that comply with the WS-I Basic Profile it eliminates a lot of discussion. Similarly, if we consume services that are defined the same way it reduces discussion.

Perhaps REST versus WS is really about private and public interfaces.
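As a concrete sketch of the distinction (the element names and the messageId header below are invented for illustration, not any real WSDL), the same document can travel bare, REST style, or wrapped in a SOAP envelope whose headers a WSDL can formally describe:

```python
# Contrast a private REST-style payload (the document is the whole message)
# with a public SOAP-style message (the same document, enveloped with headers
# that a WSDL can strongly type). All names here are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def bare_document(order_id):
    """REST style: a schema-validated document sent as-is over HTTP."""
    order = ET.Element("order")
    ET.SubElement(order, "id").text = order_id
    return order

def soap_envelope(order_id):
    """SOAP style: the same document inside an envelope with typed headers."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    ET.SubElement(header, "messageId").text = "msg-001"  # hypothetical header
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(bare_document(order_id))
    return env

rest_msg = ET.tostring(bare_document("42"), encoding="unicode")
soap_msg = ET.tostring(soap_envelope("42"), encoding="unicode")
```

The payload is identical either way; what the envelope buys you is a place to hang formally described metadata for partners you never talk to directly.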

Wednesday, September 08, 2004

Looking for a few good engineers...

The new venture I am part of, InStorecard, is growing and looking for a few more great engineers; the job descriptions are posted here. In general what I look for is smart, articulate, and motivated people who know done is binary.


If you are interested feel free to contact me directly at either my work or personal email.

Thursday, August 19, 2004

It is still about identity

Even though I have shifted industries (slightly), a lot of the big issues are still around identity. Now I am dealing with consumer identity rather than service interface or corporate identity. The issue now is really about tying multiple identities together to create a whole person.


As a consumer I have several identities that are not necessarily connected: I have a physical identity, I have a web identity, and in a store I have another identity, which can be specific if I use a credit card or anonymous if I use cash.


The trick we are solving at InStorecard is bringing all these identities together into a single usable solution for retail loyalty and CRM.

Tuesday, August 17, 2004

What is a good loyalty program

A key part of our business is delivering programs (as a service) to merchants that enable them to provide better service to consumers and hence encourage consumers to use the merchants' services more. Thinking through what makes a good experience for the consumer is critical. Too many merchants treat personalization as pasting a user's name on a standard piece of collateral. This is still mass merchandising; reaching the goal of speaking to each consumer as a unique individual with unique needs requires a lot more than that.


I draw a lot from my formative years when I worked in my family's retail stores. There the staff knew almost every customer by name, their shopping history, and something about them, and worked to make the shopping experience positive. Translating this to the modern world, there are three things a good program does for the consumer:


  1. Speaks to individual users differently and in a way that resonates with them.

  2. Makes the users life easier in some way.

  3. Provides clear reciprocal value for the personal information delivered.

Sunday, August 15, 2004

New Venture

A few weeks ago I started at a new venture, InStorecard, as CTO. It is a pretty recent startup in a very interesting space (retail). One of the big things that attracted me (apart from a great team and a founder I have known for 10 years) was that they were building a valuable business offering delivered to customers as a service.


There is nothing like building something for real customers to help decide what is really necessary and what is good only from a computer science point of view. The good news is that the retail industry is making big strides towards XML, though fewer towards web services, even though they are aware and interested. In general they are very pragmatic; I should know, as my family has been in retail for a long time - my grandfather started our first store just after WW I!


The pragmatists in the retail industry are quickly seeing the advantages of software as a service and XML, as they need to get things done quickly and efficiently and margins are critical. This is a great place to be and I am looking forward to the next several years.


Saturday, July 03, 2004

The wisdom of crowds

Having taken some time off, I have been able to read a lot more and actually think about what I have been reading. I have just finished reading and contemplating (though I think I need to read it again) "The Wisdom of Crowds" by James Surowiecki. The essential hypothesis is that many are smarter than one. In our society (especially in Silicon Valley) this is not a widely held belief. There are a couple of caveats: 1) diversity is important in creating smart groups, and 2) the ability to process and synthesize the collective wisdom is key. This is an oversimplification, but it is close enough for the connections I want to make.


The current wave of efforts to create something with social software could learn from and build on the thesis of this book, especially point 2 above. Most of the case studies presented in the book were derived from simple voting systems or from organizations with very talented leadership. It takes real talent to really listen and empower the team while balancing the need for diversity of thought. It is easy to listen if everyone is saying the same thing and agreeing with you, but that does not create the best environment for success.


Social software is very good at collecting information and providing a forum for diverse discussions, but could it provide the foundation for creating a better decision making format? For it to succeed I think it needs to move in this direction.

Thursday, June 17, 2004

The evolution of loosely coupled systems

As always, Joel Spolsky makes very good points in Joel on Software - How Microsoft Lost the API War, but I think he missed a couple of critical issues in the "war" between the two forces within Microsoft (the Raymond Chen Camp (Win32) and the MSDN Magazine Camp (DLL Hell)).


While I admire and respect the efforts the Win32 camp goes to in making everything backwards compatible, we as users of current systems are cursed with an overly complex system that is hard to evolve because its roots go back 20+ years. The MSDN Camp, on the other hand, believes that evolution can happen through shared libraries and that applications should share code and evolve through DLLs. As DLLs were not implemented well for evolution (VMS and UNIX did a much better job even before Windows), we are stuck with an incompatible mess. From these two approaches one can see the evolutionary branching of tightly and loosely coupled architectures. The tightly coupled Win32 approach is made to work through sheer hard work; as Joel notes in his article, the Win32 team goes to enormous lengths for backwards compatibility, and any other company would go bankrupt from this level of effort. On the other hand, because the MSDN Camp does not have strong versioning and hence cannot provide developers a stable platform, we have a loosely coupled mess.


As Joel goes on to point out, this battle is really over and most of us have moved on. Whether you use Java, Python, or .Net (including Mono), we are getting further from the operating system; when I move Java code from Windows to Linux it typically runs unchanged provided I have set the properties files correctly and a few other details, and the same is true for most other high-level languages. The next challenge, as we move beyond loose coupling through DLLs and shared libraries to services defined in the network, is to not repeat DLL Hell and to ensure that we can version and reuse services through shared infrastructure. As more shared services run in the network, the platform they run on will be all but irrelevant to the users of the services. What will be relevant is availability, functionality, and responsiveness.


Also relevant will be the client technology. Currently the state of the art for delivering web interfaces is DHTML, but it is not progressing and is too complex. Other, better solutions are starting to appear that leap-frog the current browser technologies; they still use the browser as an HTTP pipe, but that is all. They live in the desktop but require a minimum from it, and are easy to develop and deploy. Solutions like XUL, DreamFactory, Flex, and Avalon are where the next war for control is going to be fought. As these solutions are completely decoupled from the backend services, there will be a completely new loosely coupled development approach, but we should remember the lessons of DLL Hell and consider how to version and evolve service interfaces.

Wednesday, June 16, 2004

New look

It seemed time for a new look and to use Bloglines for my blogroll. It is interesting that one of the most significant impacts of blogs is the ability to create web sites with fresh content without a huge investment in infrastructure and people. Truly putting the web in the hands of the people.

Thursday, June 03, 2004

Service Ecology

Part of the issue around bootstrapping SOA and web services is the need for an ecology of services to exist.

Without the ecology in place it is hard to create solutions. Some markets are starting to get critical mass and they will take off first.

Friday, May 28, 2004

Change is good

I do not usually post about personal events on my blog, but there are exceptions. I have decided to leave Grand Central and take some time off. I am still going to be on the advisory board, I still believe strongly in the vision of the company, and I am very bullish on its prospects.


Hopefully I can get back to blogging regularly, work on some personal projects, and think about what is next. I think we are in a new period of innovation and creation and I am looking forward to building on the new world of software as a service.

Tuesday, May 18, 2004

Strongly typed message properties and weakly typed messages

Having used both REST-style programming and SOAP, I see that the difference is in how the semantics of the message properties (metadata?) are specified. The ability to specify strongly typed message properties in a well-defined, standard format (WSDL) is what separates web services from REST for me. If I am working in a closed community with a high degree of communication and a low rate of change in messaging semantics, then REST is fine. However, if you are working in a network whose participants change independently and where communication is poor (i.e. people in different organizations who do not even know each other), then web services provide the common framework to express your messaging semantics.


As an example, this is a link to a POST WSDL for Grand Central that I created. Looking at the WSDL, it can be seen that the headers are strongly typed while the message body is a simple xsd:any. This allows a user to post any message to Grand Central and route it to its destination using nothing more than the message properties. Adding a strongly typed message is optional.


This approach allows significant extension by adding more message headers; the second example is a STORE WSDL. This example takes an arbitrary message and either stores it in an XML Store or queries the XML Store with XPath and gets the results. The only major changes to the WSDL are a new header addressed to the XMLStore service and a new destination. While I freely admit that the WSDL needs some work and some better comments, the intent is fairly clear, and with some TLC it could be very clear and obvious. As in both cases the message is an xsd:any, there is a high degree of loose coupling: neither the recipient nor the sender needs a priori knowledge (though they can have it if they want), yet the semantics of the message delivery are still well defined. The Grand Central POST header is an early version of WS-Addressing and WS-ReliableMessaging, and WS-Security is also supported, so the message properties are very strongly typed and defined in a way that everyone can read and, more importantly, understand. This is not possible in REST systems.
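The pattern both WSDLs share can be sketched as a schema fragment. This is a hypothetical illustration (the element and field names are mine, not the actual Grand Central definitions): the routing fields are strongly typed, while the body is left open with xsd:any.

```xml
<!-- Hypothetical sketch of the pattern: strongly typed header fields,
     with the body left open via xsd:any so any payload can be routed. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="DeliveryHeader">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="MessageId" type="xs:string"/>
        <xs:element name="Destination" type="xs:anyURI"/>
        <xs:element name="ReplyTo" type="xs:anyURI" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="Payload">
    <xs:complexType>
      <xs:sequence>
        <!-- the loosely coupled part: any well-formed XML is accepted -->
        <xs:any processContents="lax"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Extending the system then means adding new typed header elements, exactly as the STORE example does, without ever constraining what travels in the body.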


To create loosely coupled systems we need to be able to interact at the messaging level and specify the shared semantics as tightly as possible while keeping the business interactions as loosely coupled as possible. Used properly, web services do this.

Wednesday, March 31, 2004

Programming and visual metaphors

Programming well is a complex task and requires deep understanding of the concepts being manipulated. By way of Sean McGrath and Eric Newcomer comes this thread that is well worth discussing: Programming ain't pictures - yet.


The big gap is between those who believe that programming can be done wholly through a visual metaphor with minimal understanding of the underlying mechanics, and those who believe that it is a tool that can help experienced developers be more productive. I am more in the latter camp, for two reasons: 1) having spent a lot of time with craftsmen who use tools to work metal and wood, I know that the result depends on the craftsman, not the tool (no matter how good the tool, I still have two left thumbs); 2) as Sean points out, everything today is based on imperative and procedural logic, and until computers are based on a new set of concepts, visual programming is always going to assist experienced developers rather than replace them.

Thursday, March 25, 2004

Embedding Visualization into your own page

One of the interesting attributes of creating well-defined services is how easy it is to distribute the display code and keep it completely separate from the business logic. For the blog mapping software, the HTML code below will create a new display widget; the blog displayed on startup is the URL in bold. This example is for Jon Udell's weblog. It does take a little time to load as he has a lot of links, but it is fun to be able to navigate all his links in a visual manner.





This is an example of the HTML code for a page to create a blog map

<HTML>
<HEAD>
<title>DreamFactory 6.34 Project: Viewer</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</HEAD>
<BODY bgcolor="#FFFFFF" topmargin="0" leftmargin="0" marginwidth="0" marginheight="0">
<object
classid="clsid:226906C8-B911-11D5-82A3-0000F81A655B"
codebase="http://www.dreamfactory.com/codebase/dfacactx.cab#Version=6,34,0,1"
id="dfacinst"
border=0
width=100%
height=100%>
<param
name="openfile"
value="http://www.mcdowall.com/dfProject/viewer.dfac?root=http://weblog.infoworld.com/udell/">
<embed
type="application/dreamfactory"
pluginspage="http://www.dreamfactory.com/codebase/winplug.exe"
width=100%
height=100%
openfile="http://www.mcdowall.com/dfProject/viewer.dfac">
</embed>
</object>
</BODY>
</HTML>



Anyone can add this code to their blog and provide a new metaphor for navigating their neighbourhood. As I add more features they will be delivered on demand, i.e. there is no need to update software; everything is delivered out of the network.

Tuesday, March 23, 2004

Update to visualization tool

With help from Bill Appleton at DreamFactory I have upgraded the visualization tool. It is now full screen and much more useful. If you want to play with the source, click on "Author". There are still some initialization problems that I am working through.


To use it, just click on display and you will get a graph of the blogs I link to. Then double-click on any block and its home page will appear in a new browser window. The URL will also be inserted into the URL field. If you now click on display, a view of the blogs around that blog will be displayed.


The next step is to do some more cleanup of the initialization code. This will allow anyone to insert this widget on their blog and specify their own blog as the root node - cool, everyone can have their own personal mapping server courtesy of DreamFactory with no server code. It demonstrates the simple value of software as a service.

Monday, March 22, 2004

Rich Clients and Services

I have been messing around with displaying graphs of blogs again. This time I have designed the system as a service (the blog crawler) that can be accessed at http://www.mcdowall.com/servlet/crawl?root="blog url". This returns an XML file of all the related nodes this blog refers to. To see a visual representation I have created a simple DreamFactory client here (Windows and Mac plugin only). If you view the source code on the page you can embed this in any page of your own and have a graphical view of your local blog neighborhood.


The ability to "loosely couple" the service and the display is a key advantage of the new world of network clients. DreamFactory is a great example as it does not require a server to be deployed, so the service and the display are truly loosely coupled. All the network client and the service know of each other is the interface. As I evolve both, they will be able to move independently.
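Because the interface is just XML over HTTP, any client could consume the crawler, not only DreamFactory. A minimal Python sketch, assuming a hypothetical response format of a <nodes> document containing <node url="..."/> entries (the real format may differ):

```python
# Parse the (assumed) XML returned by a blog-crawl service into a list of
# linked-blog URLs -- the data a visual client would lay out as a graph.
import xml.etree.ElementTree as ET

def linked_blogs(crawl_xml):
    """Return the URLs of all blogs the crawled root links to."""
    root = ET.fromstring(crawl_xml)
    return [node.get("url") for node in root.iter("node")]

# Hypothetical response for a root blog with two outbound links.
sample_response = (
    '<nodes root="http://example.org/blog">'
    '<node url="http://example.org/friend1"/>'
    '<node url="http://example.org/friend2"/>'
    "</nodes>"
)
neighbours = linked_blogs(sample_response)
```

Any number of different displays could be layered on the same feed, which is the point: the contract is the XML, not the client.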


Compared to a classical web approach, with significant amounts of display logic embedded in the server, this is a dramatic leap forward. We are still at the early stage of this shift, but having been through everything from character screens to the web I see this as a truly tectonic shift.


Note: software still buggy and the UI is not very elegant

Friday, March 12, 2004

Why Integration as a service?

Companies must become more agile and work together to remove costs from the value chain and reduce friction. This will improve the efficiency of the overall value chain, which can either increase margins or reduce prices to the consumer. It is a growth strategy rather than a cost reduction strategy. Industry leaders such as FedEx, Wal-Mart, Dell, and Cisco have demonstrated the effectiveness of integrating their value chains to expand their businesses. Integration is a strategic need in the industry and it is going to happen; the question is what is the fastest and most cost-effective approach to cross-organizational integration.



The value chain is composed of heterogeneous systems that need to be integrated, and it is and will remain heterogeneous. This is due to three powerful forces at work. One of the key arguments for the value chain becoming homogeneous is that standards will, at some point, deliver a homogeneous world. This will only happen if you believe that the objective of standards is to create a single standard. Rather, the goal of standards is to create a number of interoperable standards that information workers can use as a toolbox to solve problems: a well-defined set of tools, not a single tool to solve every problem. As a simple example, take file transfer: today I can use SMTP, HTTP, and FTP to transfer files between two systems. These protocols are all standards and are useful for solving different problems. Standards will not cause two of the three to disappear, nor any of the other host of means of transferring files.



The number of platforms used to serve enterprise applications is continually changing as new platforms arise; e.g. Amazon, eBay, and SalesForce are becoming increasingly important platforms for enterprises, while PeopleSoft and JDEdwards have merged. This evolution will continue, and as such the range of connections required will continue to change and evolve.



The final argument for a heterogeneous world is the evolution of companies from small organizations with limited IT resources and sophistication to large companies with sophisticated systems. In the real world they are all part of the value chain, and companies do not select customers and partners solely by evaluating the sophistication of their IT infrastructure. While company size may be a factor in deciding marketing strategy, it does not, in the final days of a quarter, determine sales.



These arguments clearly demonstrate that the value chain is and will remain heterogeneous. It may become a little more homogeneous as we improve standards and reduce the technology gap between large and small organizations, but it will remain heterogeneous. Therefore, to deliver on the vision of an agile value chain that reduces costs for everyone, we need to consider how to bridge the technology gap between heterogeneous systems.



There are three possible approaches to bridging the gap:

  1. Enforce homogeneity by dictating that all parties in the value chain must use the same software, messaging protocols, security standards and levels of service.

  2. Support heterogeneity at the edge by having each party connect point to point and implement a broad set of technologies that are required by all their partners.

  3. Utilize shared infrastructure to broker the conversations and allow everyone to share the costs of that infrastructure. Use the loosely coupled infrastructure to mediate the differences in connections, security, and levels of service.


Enforcing homogeneity only works for large organizations that are the "gorilla" in their value chain. This approach does not improve the cost structure of the company at the other end of the dictate. Unless the gorilla is their only customer, they probably need to support multiple systems, and the range of interactions the gorilla imposes on them is probably a limited subset of the overall interactions they have with other organizations. The costs and the limited range of interactions delivered by this approach make it a short-term fix to gain market dominance by a gorilla at the expense of its suppliers and other companies with similar value chains.



The approach of leaving it up to every organization to implement all the necessary infrastructure is probably the most cost-prohibitive, and the one that delivers the least agility to the enterprise. The amount of cost inserted into the value chain is significant. As a simple example, consider the case of supporting MIME and DIME, two attachment formats used to attach non-XML data to SOAP messages. DIME is supported only by .Net, while the majority of other toolkits support MIME. For any company to support attachments in their external web service infrastructure, they need a means to support both formats, to determine which partners are using which, and to ensure that everything works seamlessly. This seems like a small task, and it is, but the specification, design, coding, management, testing, and deployment should not account for more than 6 man-months. That translates roughly to $50K; if there are 50 partners/customers in the value chain, this equates to $2.5 million in direct costs added to the value chain just for this small feature. There is no resulting business value except that conversations can now happen seamlessly over SOAP. What happens when a partner insists on SOAP 1.2 support, or REST rather than SOAP, or FTP? The costs of supporting this and all the other technologies quickly become a significant burden inserted into the value chain that provides no economic value in return.



The solution to this problem, and to that of forcing homogeneity on the value chain, is to leverage shared infrastructure. In the example above, the MIME–DIME conversion is done in the shared infrastructure as a service available to all participants, so the cost of supporting MIME and DIME is fixed at $50K no matter how many participants use the shared infrastructure. The same is true for every service provided: mediation lets each participant connect using their own technologies while the infrastructure takes care of translating between parties. Because mediation is a shared service, it is extremely cost effective for all participants and reduces costs in the value chain dramatically compared to the other approaches.
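The cost argument above reduces to a simple comparison: per-company implementation scales linearly with the number of participants, while a shared service is a one-time cost. A minimal sketch, using the article's own estimates (~$50K per implementation, 50 partners); the function names are mine:

```python
# Rough sketch of the MIME/DIME cost math from the text. All figures are
# the article's estimates, not real data.

FEATURE_COST = 50_000  # one implementation of dual MIME/DIME support
PARTNERS = 50          # participants in the value chain

def everyone_builds_it(partners: int, cost: int) -> int:
    """Each organization implements the feature itself: cost scales linearly."""
    return partners * cost

def shared_infrastructure(partners: int, cost: int) -> int:
    """The feature is built once in the shared service: cost is fixed."""
    return cost

print(everyone_builds_it(PARTNERS, FEATURE_COST))    # 2500000 ($2.5M in the chain)
print(shared_infrastructure(PARTNERS, FEATURE_COST)) # 50000 (built once)
```

The gap widens with every new format or protocol a partner insists on, since each one multiplies the per-company number but only adds a fixed increment to the shared one.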



History has shown that the solution to many-to-many problems is shared infrastructure. The value chain is a many-to-many problem and is going to stay that way, so there is a need for shared infrastructure that makes integration as simple as plugging in a phone jack.
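The many-to-many point can be made concrete with the classic connection count: direct point-to-point integration needs a link for every pair of participants, while a shared hub needs one link per participant. A small sketch of that arithmetic (function names are mine):

```python
def point_to_point_links(n: int) -> int:
    """Every participant integrates directly with every other: n(n-1)/2 links."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Every participant connects once to the shared infrastructure: n links."""
    return n

for n in (10, 50, 100):
    print(n, point_to_point_links(n), hub_links(n))
# 10 participants:  45 direct links vs 10 hub links
# 50 participants:  1225 vs 50
# 100 participants: 4950 vs 100
```

This is the same economics that made telephone exchanges win over direct wiring, which is why the phone-jack analogy fits.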


Sunday, February 29, 2004

Flatland

The web is an incredibly rich environment for information sources, but as a medium for a rich client experience it is woefully lacking. The current set of technologies falls into two extremes: HTML (fast, easy to program, and runs everywhere) and Java (complex to download, complex to integrate into the web, and complex to configure to run everywhere).


The rate of information growth is approaching exponential, and the information about information is growing too, yet search engines still deliver information as single-dimension lists! The Semantic Web will be stillborn if we do not break free of HTML and deliver a common rich client environment.


The lack of a rich client environment that runs everywhere, is fast, and is easy to program is holding back the next wave of innovation and, more importantly, the delivery of understanding to the users of information. There are a great many solutions out there, but they all fall short on some combination of ubiquity, ease of use, and speed. This is the next big challenge for the web, and the Semantic Web is DOA without it - who is up to the challenge?

Thursday, February 19, 2004

Linkedin - activity picking up..

I started with Linkedin as an experiment, and I have actually seen some benefit from it. I have forced myself to use Linkedin even when I could have used other avenues, just to see the effect and whether it actually kick-starts something.


The benefits have not been enough to put a dollar value on, as the links I have made could have been made other ways. There may be a critical mass effect at work here: once the network reaches a certain size and you have a certain number of contacts, the value kicks in. I would be interested in hearing other users' experiences.


Tuesday, February 10, 2004

Off to Emerging Technology...

I am off to ETech for the rest of the week. I am speaking on Wednesday at 5:15, "Services not Software or why are you still moving mountains". Fun topic; I hope to have a lively discussion with everyone.

Monday, February 09, 2004

Shared infrastructure why?

This is one of the questions I get asked, and one of the easiest ways to explain it has been to use a simple use case.


Currently there are two attachment formats in use for web services, MIME and DIME, each backed by large companies. This caused issues for several of our customers, as their own customers were split between the two formats. They could either a) get everyone to support the same format, b) support both formats for all the services they built, or c) tell their customers "this is the format we support" and hope those customers could cope. None of these options is particularly economic or customer friendly.


To solve the problem, Grand Central implemented a configuration option that lets users specify the format they want their attachments delivered in. The code was extensively optimized to ensure that the actual impact on performance is only a few milliseconds. The total investment in design, implementation, QA, and installation was about four man-months. Now every customer can use the feature and is freed from worrying about what attachment format everyone else is using; if a partner changes its format, there is nothing to worry about either.
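The shape of that feature is a simple mediation dispatch: the hub knows each recipient's configured format and converts inbound attachments before delivery. A minimal sketch, with everything hypothetical: the partner names, the prefix-based "formats", and the converter functions stand in for real MIME/DIME processing, which is far more involved:

```python
# Illustrative sketch of per-recipient attachment-format mediation in a hub.
# "MIME:"/"DIME:" prefixes are a toy stand-in for the real wire formats.
from typing import Callable, Dict, Tuple

def mime_to_dime(payload: bytes) -> bytes:
    # Hypothetical converter; real DIME encoding is binary record framing.
    return b"DIME:" + payload.removeprefix(b"MIME:")

def dime_to_mime(payload: bytes) -> bytes:
    return b"MIME:" + payload.removeprefix(b"DIME:")

CONVERTERS: Dict[Tuple[str, str], Callable[[bytes], bytes]] = {
    ("MIME", "DIME"): mime_to_dime,
    ("DIME", "MIME"): dime_to_mime,
}

# Each partner configures the format they want attachments delivered in.
PREFERRED = {"acme": "DIME", "globex": "MIME"}  # hypothetical partners

def deliver(recipient: str, source_format: str, payload: bytes) -> bytes:
    """Convert the attachment to the recipient's configured format."""
    target = PREFERRED[recipient]
    if source_format == target:
        return payload
    return CONVERTERS[(source_format, target)](payload)

print(deliver("acme", "MIME", b"MIME:invoice.pdf"))  # b'DIME:invoice.pdf'
```

The point of the design is that senders and receivers never see each other's format choice: each side talks to the hub in whatever it supports, and the conversion table lives in one place.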


The engineering behind this is fairly simple, and many will cry "I could do that". The answer is yes, you could, but why? What is the value to an organization in building this and similar features? If 50 organizations each need to build it, the cost is 200 man-months; at $10,000 per man-month we are talking about $2,000,000. And this is a simple example; more complex features make the numbers increase dramatically.


While not everything should be in the shared infrastructure, this simple example clearly shows the value of shared infrastructure in removing direct costs from the enterprise and reducing friction between organizations.

Tuesday, February 03, 2004

Social networks a peer to peer problem?

There have been several (many) posts - apophenia: why Orkut makes people insecure - bashing Orkut for many reasons. The majority of the criticisms are valid, but I do not see that the problem is unique to Orkut.


Trying to create a general set of semantic information to bring together large numbers of people is an almost unsolvable problem the way Orkut and most of the other networks address it. The meaning I assign to any of the fields I select in Orkut is probably different from the meaning others assign. This is the classic problem with shared semantics - how do we arrive at them in the first place? Having a centralized authority specify the semantics (and in the case of Orkut a limited set) is the wrong approach (IMHO). What needs to happen is that the infrastructure should enable peer groups to form around common semantics (or Topic Maps).


There needs to be shared infrastructure to enable social networks, but the actual semantics need to be created in peer-to-peer interaction between individuals. Is something like Easy News Topics a good start? Perhaps, but we need to figure out how to make it easy to use while delivering benefits to users.

Thursday, January 29, 2004

Social networks - the New Tower of Babel

This posting from Dennis Yang - dennisyang.com: wanted: personal social network coordinator (the original link is here) - says it all. Social networks are springing up all over the place, and I am sure most bloggers are members of at least 2-3; some with more time on their hands are members of more.


Social networks fall into three broad categories: business networking, personal interests, and dating. There is no clear distinction between the categories, but the intent is usually to be good at one or two of them. As I am coming up on my 10th wedding anniversary I cannot comment on the last category, but the first two are freely mingled in my life. I recently had an e-mail conversation with the CEO of a social networking company who declined to join one of my networks because his company had a similar feature. This is just one indication of the artificial barriers the networks are placing on my life. Each network can have a different purpose, but they all should work with my life, not segment it into incompatible communities.


Are we doomed to a proliferation of identities and social contacts even though we each have only one life to manage? Social networks are supposed to help the interactions between individuals through the internet, but if they just create artificial segmentation of our lives they are not achieving their full potential.


I believe strongly in the value of shared infrastructure and keeping the complexity in the network, but this is one case where we have too much in the network and not enough in the hands of the user. The better networks bootstrap themselves from my address book, but they need to go further. I need to own my metadata and have it work with any network that I choose to join. The role of the network is not to keep my metadata; it is to enhance my metadata through unique IP and to provide connections and a social fabric.


I predict that all the current social networks will fail unless they start to interoperate and provide distinct, obvious value beyond collecting and rearranging my metadata. The work done by people like Marc Canter with FOAF is a step in the right direction.

Monday, January 19, 2004

Impact of standards on Business Processes with BPEL

Sometimes the impact of standards in creating value and moving the industry forward is overlooked. In recent months I have been involved in many discussions around business process, and BPEL specifically. There are issues with BPEL, but they are being publicly discussed and worked on by many people. Proprietary solutions are only worked on by the internal developers and customers of a single organization; their flaws are usually closely guarded and do not get the benefit of other smart people trying to solve them.


Publishing the BPEL specification has spawned several efforts to implement it (amongst them our own BPEL4e). Everyone working from the same specification creates synergy, and anything that gets a bunch of smart people working on a problem is a good thing IMHO.

Friday, January 02, 2004

It is the year of software as a service!

From InfoWorld: Best products of 2004

"In what may be a harbinger of things to come, both awards in the Enterprise Applications category went to hosted solutions -- one from Salesforce.com (for its CRM system) and one from Grand Central Communications (for what amounts to a Net-based, Web services switchboard)."



Nice to get early recognition that software as a service is a significant trend.


Thursday, January 01, 2004

Suggestions for Jeff's Top 10 Technologies of 2003

In Top 10 Technologies of 2003 Jeff Schneider asks for suggestions for a 2004 list. So here are a few suggestions:



  • Software as a service propels SOA into a critical part of any enterprise architecture.

  • Rich clients become the delivery vehicle for virtual applications

  • XML Architect becomes a job title as the service bus becomes the XML Service bus


Happy New Year to you and yours!