RFP response documents are not well liked on either side of the fence: they are painful to write and even worse to read.  Their greatest failing, though, is their ineffectiveness at helping people decide which solution to buy.  In the end it is people who buy solutions, not companies.

Like pulling teeth

What is it that makes RFP responses so universally despised?  Where to begin.  Firstly, they are painfully unwieldy: full of many hundreds of mind-numbing and often irrelevant questions that are generically reused across many different RFPs.  Secondly, there is a general suspicion that every vendor will claim to do everything and fully comply with all of the requirements, so it becomes more an exercise in whose RFP response weighs the most than in who has the best solution.  All of which results in a sense of frustration at the utter futility of the whole exercise: people just go through the motions because no one is brave enough to stop the madness and the excessive waste of time and resources.

A better way

Why do people continue to follow this broken, time-consuming, resource-hungry process?  It's hard to say.  Probably the involvement of so many people and the mountain of documentation provides some kind of reassurance that due care and consideration have been given before spending company money on a significant purchase.  But in nearly all cases this is a false sense of security: most of the real value comes from the interactive discussions with the vendors rather than from the forest-killing tomes of response documents.  In these enlightened times a new, more agile approach to selecting software is demanded.

It’s easy to see that there is much wrong with the RFP process, and in particular with the associated documentation.  The question is: what would be a better approach?  To that end I would like to propose the following software selection manifesto, a call to arms to everyone who has ever had to endure writing RFP responses or lost the will to live reading them.

Software selection manifesto

It favours the following:

  • Interactive dialogue over documentation
  • People over company background
  • Realistic prototypes over slideware and demos
  • Reference data over marketecture

Interactive dialogue over documentation

People much prefer talking to people over reading huge documents.  If there is to be some documentation it should be minimal and serve the purpose of laying out a framework for the face-to-face discussion to take place.  Interactive discussions allow key areas to be explored in a way that no document can address.

People over company background

People buy from people, not companies.  Cultural compatibility and confidence in the people who will both manage and help deliver your solution are worth more than all the company marketing blurb you could care to consume.  Which is not to say due diligence shouldn't be done on a company's financial and commercial viability; this is of course necessary.  However, all things being equal, the people make the difference between the success and failure of most software projects.

Realistic prototypes over slideware and demos

Prototypes or proofs of concept allow a particular use case to be explored with relevant data.  With some well-chosen use cases it is possible to get a reasonable understanding of how a solution may help meet the requirements.  This is often far more valuable in understanding a solution's fit than trying to mentally map slideware and canned demos to your specific needs.  The reason for the 'realistic' qualifier is that people often lay out too many complex use cases that simply can't be delivered effectively in a reasonable timescale.  For prototypes it is a case of quality over quantity.

Reference data over marketecture

Is it more useful that I draw you a diagram showing how the architecture will scale, or that I share data with you about a customer who is running the solution with similar traffic volumes and users, equivalent data sets and a given hardware footprint?  No amount of architecture discussion or drawing will replace the reality of real data proof points.  Architecture diagrams have their place, but real data should be the most persuasive evidence in making a decision.

In the end these are guiding principles, not a recipe or process to be followed.  Their objective is to provide a framework that is more pragmatic and efficient in the use of both time and resources when assessing whether a solution really fits your business needs and whether the people from the vendor will make it successful.

The trouble with software architecture is that it keeps getting reinvented: new acronyms appear, followed by a slew of large unreadable books explaining why this new architecture is going to change everything.  This is a widespread phenomenon in the software industry, where many emerging approaches, solutions, tools, languages, frameworks, patterns and protocols compete and adoption rules supreme, resulting in a form of natural selection.  It is perhaps inherent in the nature of software that such flexibility results in so many solutions to the same problem.  A good guide through this maze is a pragmatically tuned intuition that tells you when something is too complex to be effective.  Keeping things simple means more people will adopt, use, discuss and improve it.  A good example is RESTful services, which are gaining adoption due to their simple, clear approach to exposing services through HTTP.

What is AOA?

With all of the above taken into consideration, I want to introduce yet another architectural meme, namely Assembly Oriented Architecture (AOA).  This is more of an approach with some guidelines and doesn't require any standards or reference documentation in order to understand and apply it.  It has evolved from real practical experience and is actively used on all projects that Optaros works on, so it is well proven in the field.

At Optaros we focus on assembling open source solutions, which are often very strong on supporting the open standards that lend themselves to assembly.  However, proprietary solutions can also be assessed in terms of their ability to be part of an assembled solution.

Guiding principles for selecting AOA solutions

  • Lightweight, standards-based interfaces covering key functionality and data access.  For web-based solutions these interfaces should be web oriented, such as RESTful services, and support returning different formats such as XML, JSON and HTML (a minimal sketch follows this list)
  • Supports open standards such as OpenID, OAuth, RDF, CMIS etc.
  • Can the solution be easily disassembled, i.e. can the built-in search or authentication mechanism easily be switched for another?
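
To make the first principle concrete, here is a minimal sketch in Python of a single resource exposed over HTTP that honours the Accept header.  It uses only the standard library; the /items/1 resource and its data are invented for illustration, not part of any real product.

    # Minimal sketch of an "assembly friendly" interface: one resource
    # exposed over HTTP that honours the Accept header. The /items/1
    # resource and its data are hypothetical.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ITEM = {"id": 1, "name": "Example item"}

    class ItemHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/items/1":
                self.send_error(404)
                return
            accept = self.headers.get("Accept", "application/json")
            if "application/xml" in accept:
                body = "<item><id>1</id><name>Example item</name></item>"
                ctype = "application/xml"
            elif "text/html" in accept:
                body = "<h1>Example item</h1>"
                ctype = "text/html"
            else:  # default to JSON
                body = json.dumps(ITEM)
                ctype = "application/json"
            data = body.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ItemHandler).serve_forever()

The same resource then serves XML, JSON or HTML depending on what the consuming component asks for, which is exactly what makes it easy to assemble with others.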

Why use AOA?

AOA is about the fast delivery of robust, flexible architectures.  It is inherently pragmatic, accepting that most real-world solutions are largely built by combining disparate applications rather than nicely packaged services.  More explicitly, the benefits of using it are as follows:

  • Results in a clean, standards-based architecture without getting locked into a particular solution
  • Less coding – gaps are plugged by identifying suitable applications or components that meet the need and are assembly friendly
  • Faster to deploy than a custom build
  • The best components for the job, and ease of changing them out when something better comes along
  • Lower cost of ownership compared to either a custom build or customising an off-the-shelf application, due to the above benefits

How does it differ from SOA?

Enterprise architects at this point might be thinking that surely this is what SOA is intended to provide: a clean architecture that allows different systems to be replaced as needed without breaking any of the interfaces.  I should say first that AOA is not an alternative to SOA; they are completely compatible architectural approaches, and I would go further and suggest that both should be adopted to ensure a clean, flexible and robust architecture.  SOA differs from AOA in a number of areas, namely:

  • SOA is concerned with defining clean services independent of any specific application, whereas AOA is about selecting applications that are assembly friendly
  • AOA looks for applications that can themselves be disassembled and easily configured to use external components for some areas of functionality, such as workflow, rules or search, whereas SOA would define services for key capabilities and invoke the relevant application interface
  • SOA is more about providing a layer of abstraction on top of applications, whereas AOA is about effectively combining applications to deliver a solution
  • Although not directly tied to SOA, there is the whole area of Web Services and associated specifications – AOA doesn't go to the level of detailed specifications but relies on guiding principles

In my experience SOA can be taken too far, with a lot of time spent agreeing every possible service to cover the combined functionality of all of the main applications.  It can turn into a time and money pit with no clear business value.  SOA seems to work best when common services that will be called by many systems are developed, rather than trying to boil the functional ocean.  The other area where a lot of time can be lost is in the dark depths of the many WS-* standards that exist; again, that pragmatic intuition should steer you clear of distractions from the task at hand when developing useful services.

AOA patterns

A number of patterns are starting to emerge for different types of assembly architecture – the following is a list of the common ones.

  • Plug-in Platform – assemble a solution around a central component that covers the core functionality and, thanks to its extensible architecture, acts as the integration platform for assembling the missing parts.
  • Container Assembly – assemble a solution around a central container that provides no business functionality but focuses on cross-cutting concerns (security, logging, access to resources, …).  This container should be a standard (or de facto standard) supported by the other components you want to assemble.
  • Service Oriented Assembly – assemble a solution using a SOA approach.  Each component to be assembled should provide a public interface that can be used for integration (a small sketch follows this list).
  • Mash-up Assembly – assemble a solution using the web browser as the rendering layer and as an integration platform, combining different applications through JavaScript, DOM manipulation, REST APIs and iframes.  Each component to be assembled should provide a RESTful API.
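
As a small illustration of assembling components through their public interfaces, the following Python sketch combines two RESTful services into one result.  The two service URLs and the JSON fields are hypothetical placeholders, standing in for, say, a CMS component and a search component.

    # Sketch of assembling two independent RESTful components into one
    # result. The two service URLs are hypothetical placeholders.
    import json
    from urllib.request import urlopen

    CONTENT_API = "http://cms.example.com/api/pages/home"      # hypothetical
    SEARCH_API = "http://search.example.com/api/popular.json"  # hypothetical

    def fetch_json(url):
        """Fetch a RESTful resource and decode its JSON body."""
        with urlopen(url) as response:
            return json.load(response)

    def assemble_home_page():
        # Each component keeps its own responsibility; the assembly
        # layer only combines their public interfaces.
        page = fetch_json(CONTENT_API)
        popular = fetch_json(SEARCH_API)
        return {"title": page["title"],
                "body": page["body"],
                "popular_searches": popular["terms"]}

    if __name__ == "__main__":
        print(assemble_home_page())

Note how neither component knows about the other; swapping the search component for a better one later only means pointing the assembly layer at a new interface.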

In the end pragmatism wins: technologies continue to change, and no matter what is done to allow for that in an architecture, effort will ultimately be needed to accommodate those changes.  Given that reality check, it should be clear that spending months developing intricate service definitions for everything is probably not good for anyone.  AOA offers good guidelines and actually helps deliver solutions faster, while allowing applications and components to be changed in the future as required.

I first came across the concept of Vendor Relationship Management (VRM) in a new chapter by Doc Searls for the 10th Anniversary Edition of the Cluetrain Manifesto.  Meeting Doc Searls recently and then attending the London VRMHub meetup has given me a better idea of what is happening in the VRM space.  Having worked in the CRM space for many years, the idea of VRM seemed very radical, but I knew it made sense.  The essence of VRM is that individuals control their own personal data, their relationships with organisations and how they interact with them.  Today each company a person interacts with maintains its own separate information, which is often hard to access externally.  Moving house highlights the problem: just how many companies do you need to tell to change their data about you?  With VRM an individual should be able to maintain their own personal data store (e.g. address, contact details, wish lists) and decide with whom, and how much of it, they share.  It also includes the idea of people being able to issue a personal RFP for what they want (e.g. a digital camera with 12 megapixels, supporting RAW, for a budget of $300) and then allowing companies to respond with their best offers, reversing the current model of having to hunt down what you want from sellers.  A sketch of what such a personal RFP might look like follows.
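
There is no standard format for a personal RFP yet, so the following Python sketch is purely illustrative: a hypothetical structure for the camera request above that an individual could publish and vendors could respond to.  All field names are invented.

    # A hypothetical "personal RFP" as structured data; no standard
    # format exists yet, so the field names here are illustrative only.
    import json

    personal_rfp = {
        "want": "digital camera",
        "requirements": {
            "min_megapixels": 12,
            "raw_support": True,
        },
        "budget": {"amount": 300, "currency": "USD"},
        "share_with": ["trusted-retailers"],  # who may see this request
        "respond_by": "2009-12-31",
    }

    # The individual publishes the RFP; vendors respond with offers,
    # reversing the usual model of hunting down sellers.
    print(json.dumps(personal_rfp, indent=2))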

This is disruptive as it shifts power from the sell side to the demand side.  It’s a kind of revolution waiting to happen.

This raises a number of questions:

Why would companies be interested in getting involved?

If done correctly, VRM should benefit both the demand and sell sides.  Allowing people to share information with organisations they trust could give organisations a more complete picture of a person, not just one filtered through their own narrow view.  Sharing a wish list or a history of items already purchased elsewhere should allow companies to present offers that are more aligned to the person's interests and needs.

The other answer is that people want control of their own data; this demand will only increase, and ultimately organisations will need to respond and respect how their customers want to interact with them.

Those companies who embrace this first have an opportunity to gain great PR and a competitive advantage.

What is happening in this space right now?

VRM is still quite embryonic.  A few different projects are underway in different areas.  A good one-page overview of what VRM is can be found at http://www.vrmhub.net/vrm-in-a-nutshell/.  The main project site is maintained at the Berkman Center for Internet & Society at Harvard under the guidance of Doc Searls.

A few of the projects underway include:

  • The MINE project – tools to allow individuals to manage, and share through personalised feeds, their personal data, both identity-based data and anything the user authors, such as photos, blogs and videos
  • The MINT project – focussing on how to get transactional information from organisations shared using standards such as JSON, XML, CSV and Atom
  • MyDex – storage of personal data, with the ability to specify which data is shared with which organisations and notification of changes to certain data
  • PAOGAperson – a secure safety deposit box for personal identification data that also enables data to be verified/certified
  • MySortingOffice – relationship-specific email addresses, with the ability to embed selected personal data for sharing with specific organisations or people
  • EmanciPay – a new model for the media marketplace allowing consumers to choose how and how much they pay, on their own terms, for content they consume

The driving force behind these initiatives varies from those approaching it from the individual perspective of managing your own data (e.g. MINE) to those focussed on the relationship between an individual and an organisation (e.g. MyDex).

These projects and approaches also highlight the many different forms of “personal data” that exist.  At a high level it includes all of the following:

  • Identity-based data (e.g. name, address, email, telephone, NI number, passport number) – this type of data is fairly static in nature and can often be subject to validation and verification
  • Transactional data (e.g. purchases, usage-based data such as mobile or utility data, banking transactions like direct debits) – data held by organisations that provide services that can be bought or consumed (e.g. Amazon, HSBC)
  • Records-based data (e.g. medical records, HR, credit history, electoral records, tax records, student records) – stored by organisations
  • Personally authored data (e.g. blogs, photos, wish lists, videos, favourite links, documents) – often stored in a variety of online tools such as WordPress and Flickr

How can you get involved?

Firstly, through education.  There is an opportunity to discuss and engage with organisations about this new way of doing business and help them understand the opportunity.  Disruptive messages can make a difference.  Here is a good slide deck from Adriana Lukas, one of the VRM evangelists in the UK, who organises the monthly VRMHub in London.

Secondly, by helping to think through and contribute to the projects out there.  VRM is still evolving and there are a number of initiatives in progress, from open source to commercial solutions, addressing different areas of the VRM space.  One area that is still relatively unexplored is applications that enable users to manipulate and get value from their own data once it is under their control.  This could be tools that help with visualising data, trend analysis, reporting or sharing of data; this could well be where a killer app emerges that helps drive adoption.

VRM is certainly a disruptive concept and highlights how much of our personal data is out of our control.  With new online tools and services emerging all the time, this problem will only increase.  It is a worthy effort that deserves support and has wide-ranging implications for how data may be managed in the future.

For many organisations the biggest concern is losing control if they enter the social media jungle.  Drilling deeper, these concerns include the following:

  1. Fear of opening the floodgates to customer views in public
  2. Ability (and therefore associated cost) to respond and engage with the volume of discussions being generated
  3. Damage to their image/values caused by inappropriate or offensive content posted to any of their online assets
  4. Concern about what employees might say about the business to customers or prospects
  5. Cost in terms of resources, infrastructure and time to successfully implement a social media strategy and solution

Fear of opening the floodgates to customer views in public

The reality is that no matter how good your service or product is, you will have some unhappy customers.  Many see social media as simply giving a platform to disgruntled customers to air their views.  That said, customers both happy and unhappy are already sharing their views on companies through Twitter, Facebook and other channels.  Organisations can choose to ignore those discussions or to engage with them.

The following are good, well-known examples of companies engaging with customers through social media:

  • Comcast Cares – Comcast famously turned around bad feedback by engaging and responding on Twitter, initially through the work of one man, Frank Eliason
  • Hotels on TripAdvisor – hotels receive both good and bad feedback on TripAdvisor; the smart ones engage and respond to these comments, which shows that they listen and creates a positive impression, while those that stay silent achieve the opposite
  • Lego – eventually caught on and embraced the ever-growing community and sites that their customers had created – How Lego caught the Cluetrain
  • Starbucks – a great example of how to use customer ideas to change and evolve your product – My Starbucks Idea

Ability (and therefore associated cost) to respond and engage with the volume of discussions being generated

The sheer volume of traffic and discussion taking place on social platforms can seem overwhelming, and it could appear at first that a large, dedicated 24x7 team is needed simply to keep up with it and respond.

However, much can be achieved with existing resources and a relatively small investment of time.  A good example, as mentioned previously, is Comcast, which had a team of seven people managing social media interactions to support a customer base of 24 million.  Approaches that help include:

  • Tools to help manage and filter the discussions.  Look first at the many free tools available before deciding whether you need a commercial solution such as Radian6 or Scout Labs.  These tools help focus time and energy on replying to the most relevant questions and concerns raised by customers (a minimal sketch of this kind of keyword filtering follows this list)
  • Encouraging employees to participate and share the load of responding.  Engaging with social media does not require suddenly creating a whole new team or retraining your entire call centre; a few individuals with the right tools can use a percentage of their time to provide good coverage
  • A community manager – a role that listens to and engages with the community and provides feedback to the internal organisation
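
As a rough illustration of what these monitoring tools do under the hood, here is a minimal Python sketch that scores incoming messages against a set of watched keywords so a small team can focus on the most relevant ones.  The keywords and sample messages are invented.

    # Minimal sketch of the keyword filtering these monitoring tools
    # perform: score messages against brand-relevant terms so a small
    # team can focus on the most relevant ones. Sample data is invented.
    KEYWORDS = {"acme", "billing", "outage"}

    def relevance(message):
        """Count how many watched keywords appear in a message."""
        text = message.lower()
        return sum(1 for kw in KEYWORDS if kw in text)

    messages = [
        "Loving the weather today!",
        "Anyone else seeing an ACME outage right now?",
        "ACME billing charged me twice, who can help?",
    ]

    # Surface only messages mentioning watched terms, most relevant first.
    relevant = sorted((m for m in messages if relevance(m) > 0),
                      key=relevance, reverse=True)
    for m in relevant:
        print(m)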

Damage to their image/values caused by inappropriate or offensive content posted to any of their online assets

Getting the moderation balance right, minimising inappropriate content without destroying the dynamics of the community, is key.  One guiding principle is that trust is cheaper than control: manually moderating all user-generated content would require vast resources and would never scale.  Approaches to achieving this balance include:

  • Automated spam detection services such as Mollom, which provide a good first line of defence
  • Community moderation – allow community members to flag inappropriate content and give the most active, respected members the ability to remove it (a simple flag-threshold sketch follows this list)
  • The community manager role discussed above, who can also review user-generated content and encourage the right kinds of behaviour on the site
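
Here is a minimal Python sketch of the community moderation idea: content is removed once enough members flag it, with flags from trusted (active, respected) members carrying more weight.  The threshold and weights are invented for illustration.

    # Sketch of community moderation: content is hidden once enough
    # members flag it, with flags from trusted members carrying more
    # weight. Thresholds and weights are invented.
    REMOVE_THRESHOLD = 5
    TRUSTED_WEIGHT = 3  # a trusted member's flag counts as three

    def should_remove(flags):
        """flags is a list of (member, is_trusted) tuples for one post."""
        score = sum(TRUSTED_WEIGHT if trusted else 1
                    for _member, trusted in flags)
        return score >= REMOVE_THRESHOLD

    flags_on_post = [("alice", True), ("bob", False), ("carol", False)]
    print(should_remove(flags_on_post))  # 3 + 1 + 1 = 5 -> True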

Concern about what employees might say about the business to customers or prospects

Customers are not listening to the corporate-speak most companies use to talk to them; they tune out because the words are unnatural and tend to be the same at every company.  People prefer genuine, natural language and conversation, and employees are already having those conversations with friends about their company.  Employees who want to engage with customers through social media should be encouraged to do so.  Allowing customers direct access to the best asset a company has, its employees, will have better results than any carefully crafted marketing or scripted call centre dialogue.

Cost in terms of resources, infrastructure and time to successfully implement a social media strategy and solution

Social media usually fails when approached from a technology direction; there are expensive social media platforms that the big software companies will happily sell you, but embracing social media doesn't require them.  It starts with a philosophy of having conversations with customers wherever they might be on the internet, and then understanding how to change your existing online assets to enable the kinds of interactions your customers want.  These changes should be organic and be about assembling the right technology components in a way that allows for change in the future.  The guidelines should be as follows:

  • Think big, start small, move fast and keep moving
  • Avoid the mindset and associated process of selecting a technology platform
  • Assemble the solutions you need from proven open source technologies and standards

Catching the Cluetrain

Being successful with social media is within the reach of all companies, but it will require a change of mindset and a more agile approach.  The proof is out there in the companies that have achieved great success, though it has often required the maverick spirit of a few individuals to make it happen.  That spirit is captured well in the Cluetrain Manifesto, which, while 10 years old, is more relevant than ever today; the updated 10th Anniversary Edition adds recent examples of the success of such an approach to engaging in conversations.

Web Content Management (WCM) seems to mean different things to different people, which of course can lead to confusion.  The term has been around for a while, since the mid-1990s, but two key things have changed since it was first adopted: the web itself and the type of content available over it.

The web has become a much richer visual experience in recent years, with digital content such as video, Flash and images becoming far more prevalent on all sites.  It has also become much more interactive, with users generating their own content, from comments, reviews, blogs and wikis to images, presentations, music, profile pages, videos and applications.  The web has evolved from a fairly static publishing tool into a dynamic social media platform.

The technical infrastructure underpinning websites has also evolved significantly since WCM was born.  We have moved far from the early days of HTML pages and CGI scripts adding dynamic content, often from a single database, to platforms providing presentation templating and layout plus content creation and editing tools, with content aggregated from multiple sources, both text and digital media.  Expectations have changed as well, with content creation and management now readily available to non-technical users.

Given these changes it is no surprise that WCM has changed and evolved to adapt to the ever-changing landscape.  Broadly, the approaches being taken in the WCM space can be divided into those that are coupled and those that are decoupled.

Coupled WCM (content repository + presentation combined)

A coupled WCM solution combines the presentation and navigation of the site with the management of the content available for inclusion in pages.  These types of solutions typically rely on a database to manage and store content and presentation details, with files for templating/layout and styling.

Examples of coupled WCMs include Drupal, Liferay, Joomla and Plone.

Strengths

  • A rich, easy-to-use editorial process allowing content to be easily combined and seen as it will be displayed on the site
  • Easy to associate and combine user-generated content with published content
  • Often many additional modules available supporting authentication, rich media, ecommerce etc., which all work off the content model
  • Requires fewer technical skills to manage and maintain the site
  • Usually a strong multi-site model allowing content and templates to be reused across different sites
  • Built-in authentication to control users' access rights to content

Weaknesses

  • Not so strong for managing file-based assets, including versioning, grouping, transformation and workflow
  • Poor API support for exposing content externally
  • The design of the site needs to be aligned to the templating model of the solution
  • Challenges in distributing development, due to configuration being stored in the database
  • Poor support for managing deployment and versions of a site

Decoupled WCM (separate repository(ies) and presentation layer)

The decoupled approach focusses on managing content independently of any presentation of that content.  Content is managed in a repository providing versioning, metadata and workflow, while presentation is managed in a front-end platform that allows pages and navigation to be easily managed and often provides user management.  Some decoupled repository-based solutions also offer features such as giving users their own sandboxed version of a site so they can preview just their own changes before those updates are deployed to the main site.  However, this type of approach assumes that changes are being made to files rather than to configuration or content in a database through a social media front-end such as Drupal.

Examples of content repositories: Alfresco, Nuxeo

Examples of front-end presentation layers: web frameworks such as Django, Symfony and Ruby on Rails; coupled WCMs like Drupal, where only user-generated content (UGC) is stored in the front-end and all other content is retrieved from one or more repositories; and portals such as JBoss and Liferay.  A minimal sketch of the decoupled fetch-and-render flow follows.
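
The following Python sketch illustrates the decoupled flow: the front-end pulls content from a repository over HTTP and merges it into its own template.  The repository URL and the JSON shape of the content are hypothetical.

    # Sketch of the decoupled flow: the front-end pulls content from a
    # repository over HTTP and merges it into its own template. The
    # repository URL and JSON shape are hypothetical.
    import json
    from string import Template
    from urllib.request import urlopen

    REPOSITORY_URL = "http://repo.example.com/api/content/welcome"  # hypothetical

    PAGE_TEMPLATE = Template("""
    <html><head><title>$title</title></head>
    <body><h1>$title</h1><div>$body</div></body></html>
    """)

    def render_page(content_url):
        with urlopen(content_url) as response:
            content = json.load(response)  # e.g. {"title": ..., "body": ...}
        return PAGE_TEMPLATE.substitute(title=content["title"],
                                        body=content["body"])

    if __name__ == "__main__":
        print(render_page(REPOSITORY_URL))

The repository remains the single master of the content, while the front-end owns only layout and navigation, which is what makes either side replaceable.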

Strengths

  • Clean separation between content and presentation, allowing different tools to be used that best suit the solution, and enabling the use of new tools and technologies as they emerge
  • Strong API access to content within the repository
  • The ability to have several repositories that focus on certain content types, such as documents or digital assets, and to leverage the specialised functionality of these tools
  • It is possible to use a coupled WCM for the front-end and gain the benefits that provides, while reducing its limitations, by accessing content from a backend repository

Weaknesses

  • The challenge of providing an easy-to-use front-end for managing composite pages that combine content from multiple repositories
  • Requires integration between the front-end presentation platform and backend repositories
  • Content creation might require different UIs if each backend repository provides its own
  • The need to define a clear separation of responsibilities between front-end and backend, such as where taxonomies are mastered, how search is managed across both UGC and content in backend repositories, and where access control to content is managed

In cases where there is a lot of disparate content, potentially from many sources, the decoupled approach makes the most sense: combining content from multiple sources and presenting it using one or more front-end platforms.  Developments such as CMIS will help facilitate accessing content from various sources from the front-end platform.  The greater challenge is providing easy-to-use editorial screens to manage composite pages that combine content from several sources.  Utilising a rich social media platform such as Drupal for the front-end will help ease this process, but there is still work to be done to make it even slicker.  There is already a CMIS connector for Drupal, currently tested against the Alfresco implementation of CMIS.  For good coverage of some of the future trends being discussed in content management, see What is the Future of Content Management?

If anything is sure, it is that WCM will need to continue to evolve, regardless of whether the acronym itself remains or is replaced by a broader content consolidation and publishing meme.  Understanding the current state and trade-offs will help ensure an informed decision is made as to the right approach for any particular enterprise strategy for exposing content over the web.

Cloud computing is most often associated with scalability (see Amazon CTO Werner Vogels' definition of scalability).  One commonly held view is that you can simply move an application onto cloud-based infrastructure and it will then “magically” scale on demand.  The reality is that there is no free lunch: simply throwing additional CPU cycles or storage at an application is not going to deliver linear scalability unless the application was designed to scale in such a manner.

The cloud era heralds the development of new enterprise application platforms available on demand, as well as new social platforms.  However, this isn't as simple as taking the current crop of relational-database-centric solutions and deploying them on Amazon EC2.  Of course, this isn't stopping vendors from taking that approach and offering on-demand versions of their products.  The challenge is that these applications are not designed to scale dynamically and in a distributed manner, so as traffic and usage grow there will be a continual cycle of monitoring and patches to try to keep the application performing at an acceptable level.  While monitoring and improving will always be necessary, there are lessons to be learnt from some of the largest concurrent multi-user sites that can help reduce the pain.

Consideration of cloud-based scaling clearly depends on the nature of the application and the anticipated volume of usage.  If, for example, the application is very read-heavy and low on write transactions, then replicated databases with good caching could well be sufficient (a minimal read-through cache sketch follows).  However, solutions that require massively concurrent, write-heavy access to the database need to be architected for scalability.
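
For the read-heavy case, here is a minimal read-through cache sketch in Python: repeated reads are served from memory, and the database is only hit on a miss or after an entry expires.  The load_from_db function is a stand-in for a real query against a read replica.

    # Minimal read-through cache sketch for a read-heavy application:
    # serve repeated reads from memory and only hit the database on a
    # miss or after the entry expires.
    import time

    CACHE_TTL_SECONDS = 60
    _cache = {}  # key -> (value, expiry_time)

    def load_from_db(key):
        # Placeholder for a real database read against a read replica.
        return f"value-for-{key}"

    def get(key):
        entry = _cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]            # cache hit
        value = load_from_db(key)      # cache miss: read through
        _cache[key] = (value, time.time() + CACHE_TTL_SECONDS)
        return value

    print(get("homepage"))  # misses, loads from the database
    print(get("homepage"))  # served from the cache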

Distributed database versus relational database

Relational databases are primarily designed for managing updates and transactions on a single instance.  This is a problem when you need massively concurrent access with millions of users initiating write transactions.  The usual approaches to addressing this are clustering or sharding, but these really patch over the problem rather than addressing it head on.  That said, there are many large-scale examples using a relational database and applying these approaches.

Given a clean sheet and current developments, what approaches can be used to address massively concurrent, write-heavy applications?  A number of distributed database solutions have emerged in the last few years, based on some form of key-value distributed hash table (DHT), column-oriented store or document-centric model.  They are often built to address precisely the issue of scaling for write-heavy applications.  However, they should not be considered a direct replacement for a relational database: they often lack support for complex joins and foreign keys, as well as reporting and aggregation, although some of these areas are beginning to be addressed.  There is also currently no SQL or object mapping such as ActiveRecord to access them cleanly and transparently from code, so extra development effort is required.  Nevertheless, they should certainly be considered as part of an overall architecture and leveraged to reduce write-heavy bottlenecks in the solution.  A simple sharding sketch follows to illustrate how keys can be spread across nodes.
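
To illustrate the key-distribution idea behind sharding and DHT-based stores, here is a simple Python sketch that hashes each key to pick the node that owns it, spreading writes across machines.  The node names are invented, and real systems use consistent hashing so that adding a node does not remap every key.

    # Sketch of the sharding idea behind these distributed stores: hash
    # each key to pick which node owns it, so writes spread across
    # machines instead of hitting one database. Node names are invented.
    import hashlib

    NODES = ["db-node-1", "db-node-2", "db-node-3", "db-node-4"]

    def node_for_key(key):
        """Deterministically map a key to one of the nodes."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    # Writes for different users land on different nodes.
    for user in ["alice", "bob", "carol", "dave"]:
        print(user, "->", node_for_key(user))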

Amazon SimpleDB – a simple key-value DHT, based on the Dynamo solution created by Amazon

Apache CouchDB – a document-centric approach built using Erlang

Cassandra – a DHT variant that supports a rich data model, originally born at Facebook, now an Apache Incubator project

HBase – a column-oriented store similar to Google's BigTable, using Hadoop as a distributed map/reduce file system

Here is a great blog post from the co-founder of Last.fm on the multitude of alternatives to a traditional RDBMS for write-heavy distributed applications: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/

Another blog post worth reading on distributed key stores is http://randomfoo.net/2009/04/20/some-notes-on-distributed-key-stores.

Stateless immutable services

One of the guiding principles for linear scalability is to have lightweight, independent, stateless operations that can be executed anywhere, running on newly deployed threads, processes, cores or machines transparently as needed in order to service an increasing number of requests.  These services should share nothing with any other services; they simply process asynchronous messages.  This type of async message passing has been proven to scale in languages such as Erlang.  One paradigm closely aligned to this approach is the Actor model, which is all about passing immutable messages and a share-nothing philosophy.  A lightweight stateless protocol such as REST is well suited to allowing these services to be accessed across the internet through HTTP.  A minimal actor-style sketch follows.
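
Here is a minimal actor-style sketch in Python: a share-nothing worker that simply processes immutable messages from its mailbox.  This is only a toy; real actor runtimes such as Erlang's add location transparency, supervision and far lighter processes.

    # Minimal actor-style sketch: a stateless worker that shares
    # nothing and simply processes immutable messages from its mailbox.
    import threading
    import queue
    from collections import namedtuple

    # Immutable message type: once created it cannot be modified.
    Message = namedtuple("Message", ["reply_to", "payload"])

    def actor_loop(mailbox):
        while True:
            msg = mailbox.get()  # block until a message arrives
            if msg is None:      # sentinel to shut the actor down
                break
            # Stateless processing: the result depends only on the message.
            msg.reply_to.put(msg.payload.upper())

    mailbox, replies = queue.Queue(), queue.Queue()
    threading.Thread(target=actor_loop, args=(mailbox,), daemon=True).start()

    mailbox.put(Message(reply_to=replies, payload="hello"))
    print(replies.get())  # -> HELLO
    mailbox.put(None)

Because the worker holds no shared state, more of them can be started on new threads or machines and fed from the same mailbox without coordination.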

Speaking the language of scalability

As always, the choice of programming language can end up being more an emotional than a necessary decision, but it is true that it helps to pick the right tool for the job at hand.  Some languages have better support for developing highly concurrent, distributed and scalable applications.  The characteristics to look for are languages that encourage immutable data structures and referentially transparent methods, typically functional in nature and supporting asynchronous message passing.  Two popular languages receiving a lot of attention are Scala and Erlang.  Scala runs on the JVM and was famously used to provide scalability for Twitter by implementing a message queuing solution.  Erlang has its roots in embedded systems and so was optimised to run on minimal resources; it uses processes that are much lighter and faster than even OS threads, transparently supporting both multiple cores and multiple machines.  Both Scala and Erlang have good support for the Actor model, again encouraging scalable, independent, async message-driven design.

In the end there is still more learning and maturing to be done in developing the next generation of cloud-based solutions, and not all will need to scale to high volumes.  It will be an interesting time, and there is much that can be learnt from others who are already dipping their toes in this pool.  A good site for keeping track of what others are doing in the whole space of scalability is http://highscalability.com/.  Being aware of these changes is especially important when embarking on new projects where scale and the use of cloud infrastructure are factors.

As Twitter evolves and potential applications of the platform emerge, it is already clear that it provides a powerful channel to build and contribute to a community of like-minded individuals and organisations.  Its openness creates opportunities for establishing far broader communities than invite-only social network platforms.

The philosophy behind engaging with a community through a social network is really about how you can contribute to the discussions and ideas of a group of people with similar interests to yourself or your business.  By giving to and engaging with that community you will get far more back than if you try to use Twitter as a pure marketing and selling channel.

Establishing an identity

Your identity on Twitter is determined primarily by what you choose to discuss and, to a lesser extent, by what is in your bio.  Many use Twitter simply to share their day-to-day experiences as a kind of lifestream with family and friends.

However, if you wish to establish a community around a particular domain or area of interest, be that a technology, a business area or a personal interest, it is important to focus the majority of your tweets and links on that domain.

If you regularly post on a particular topic, others who share that interest will find you through search and follow you.

Building a community

The people you follow, or who follow you, on Twitter form the community you will interact with, so care and consideration should be given both to whom you choose to follow and to whom you allow to follow you by not blocking them.

Avoid mass following; this is analogous to the blackhat SEO technique of generating as many inbound links to a website as possible.  It will completely dilute your community and is likely in the future to hurt reputation scoring, which could assess the relevance of the people you follow and who follow you.

There are a number of tools that can be used to help find people who share your interest in a particular topic or space; examples include:

  • Many Twitter clients allow keyword searches to be saved and will update in real time, showing the tweets containing those keywords – one example is TweetDeck
  • TweepSearch – performs a keyword search through the bios and profiles of Twitter users
  • MrTweet – recommends people based on who the people you follow are following or interacting with

Suggestions for Twitter Post Optimization

Twitter posts are restricted to 140 characters, so it is important to think about how best to use that limited number of characters to share something of value with your intended audience.

The goals of a Twitter post intended to reach those with similar interests should be:

  1. The post is found by as many of your target audience as possible
  2. The post is retweeted a lot
  3. The post generates interest in your other posts, in you and in your company, and leads to new and loyal followers

Given these goals, the following 10 recommendations can help achieve them:

  1. Use keywords that are relevant to your intended community and that also score highly in Twitter keyword searches.  The following tools can help identify popular keywords (helps meet goal 1):
    • Twopular – great for seeing popular keywords over time
    • Twitscoop – good for trending and search
    • Twendz – helps identify related keywords that are popular given a root keyword
  2. Power of the headline – the post needs to grab attention and interest (helps meet goals 1 & 2)
  3. Consider the time of the post, taking into account the time zone of the community you are trying to reach (helps meet goal 1)
  4. Share something of value and informative, not just blatant advertising (helps meet goal 2)
  5. The post should contain a link to further information (helps meet goals 2 & 3)
  6. Try to ensure that a high percentage of your posts address the topics/themes relevant to your intended community (helps meet goal 3)
  7. Give credit to others – retweet or reference other people when sharing information they have posted (helps meet goals 2 & 3)
  8. Break news (helps meet all 3 goals)
  9. Answer questions and respond to other users who are discussing topics relevant to your business domain/speciality (helps meet goals 2 & 3)
  10. Post links to your Twitter posts on other channels such as blogs, websites and social networks (helps meet goal 1)

The above also depends on how Twitter search evolves; for a discussion of how Twitter Search could evolve, see Ideas for Improving Twitter Search.  Twitter Post Optimization (TPO) is the new SEO and is still evolving; it will be interesting to see how TPO changes as the platform and tools expand.

Even as the tools evolve, the practice of engaging, sharing and contributing to a community remains the same, and it will be important for individuals and businesses to embrace it no matter which channel they use.
