
Posts Tagged ‘Optaros’

The trouble with software architecture is that it keeps getting re-invented: new acronyms appear, followed by a slew of large, unreadable books explaining why this new architecture will change everything. This is a widespread phenomenon in the software industry, where emerging approaches, solutions, tools, languages, frameworks, patterns and protocols compete, adoption rules supreme, and a form of natural selection results. It is perhaps inherent in the nature of software that such flexibility produces so many solutions to the same problem. A good guide through this maze is a pragmatically tuned intuition that tells you when something is too complex to be effective. Keeping things simple means more people will adopt, use, discuss and improve it. A good example is RESTful services, which are gaining adoption due to their simple, clear approach to exposing services over HTTP.

What is AOA?

With all of the above taken into consideration, I want to introduce yet another architectural meme, namely Assembly Oriented Architecture (AOA). It is more of an approach with some guidelines, and requires no standards or reference documentation to understand or apply. It has evolved from real practical experience and is actively used on all projects that Optaros works on, so it is well proven in the field.

At Optaros we focus on assembling open source solutions, which are often strong supporters of the open standards that lend themselves to assembly. However, proprietary solutions can also be assessed in terms of their ability to be part of an assembled solution.

Guiding principles for selecting AOA solutions

  • Lightweight, standards-based interfaces covering key functionality and data access. For web-based solutions these interfaces should be web oriented, such as RESTful services, and support returning different formats such as XML, JSON and HTML.
  • Support for open standards such as OpenID, OAuth, RDF and CMIS
  • Ease of disassembly – i.e. can the built-in search or authentication mechanism be easily switched for another?
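The third principle, ease of disassembly, can be shown with a short sketch. This is not from any real product; the class and method names are invented purely to illustrate how depending on a small interface lets one search implementation be swapped for another:

```python
# Hypothetical sketch: the application depends on a tiny search
# interface, so the built-in engine can be switched for another.

class BuiltInSearch:
    def query(self, text):
        return [f"builtin:{text}"]

class ExternalSearch:  # e.g. a dedicated engine wired in later
    def query(self, text):
        return [f"external:{text}"]

class Application:
    def __init__(self, search):
        self.search = search  # injected, never instantiated internally

    def find(self, text):
        return self.search.query(text)

app = Application(BuiltInSearch())
app.search = ExternalSearch()  # swapped without touching Application
```

The same shape applies to authentication or any other built-in capability: if the application only ever talks to the interface, the component behind it can be replaced without rework.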

Why use AOA?

Using AOA is about the fast delivery of robust, flexible architectures. It is inherently pragmatic, accepting that most real-world solutions are largely built by combining disparate applications rather than nicely packaged services. More explicitly, the benefits of using it are as follows:

  • Results in a clean, standards-based architecture without locking you in to a particular solution
  • Less coding – gaps are plugged by identifying suitable applications or components that meet the need and are assembly friendly
  • Faster to deploy than a custom build
  • Best components for the job, and ease of swapping them out when something better comes along
  • Lower cost of ownership compared to either a custom build or customising an off-the-shelf application, due to the above benefits

How does it differ from SOA?

Enterprise architects at this point might be thinking that surely this is what SOA is intended to provide: a clean architecture that allows different systems to be replaced as needed without breaking any of the interfaces. I should first say that AOA is not an alternative to SOA – they are completely compatible architectural approaches – and I would go further and suggest that both should be adopted to ensure a clean, flexible and robust architecture. SOA differs from AOA in a number of areas, namely:

  • SOA is concerned with defining clean services independent of any specific application, whereas AOA is about selecting applications that are assembly friendly
  • AOA looks for applications that can themselves be disassembled and easily configured to use external components for some areas of functionality, such as workflow, rules or search, whereas SOA would define services for key capabilities and invoke the relevant application interface
  • SOA is more about providing a layer of abstraction on top of applications, whereas AOA is about effectively combining applications to deliver a solution
  • Although not directly tied to SOA, there is the whole area of Web Services and its associated specifications – AOA doesn't go to the level of detailed specifications but relies on guiding principles

In my experience SOA can be taken too far, with a lot of time spent agreeing every possible service to cover all of the combined functionality of all the main applications. It can turn into a time and money pit with no clear business value. SOA seems to work best when common services that will be called by many systems are developed, rather than trying to boil the functional ocean. The other place a lot of time can be lost is in the dark depths of the many WS-* standards – again, that pragmatic intuition should steer you clear of distractions from the task at hand when developing useful services.

AOA patterns

A number of patterns are starting to emerge for different types of assembly architecture – the following is a list of the common ones.

  • Plug-in Platform - Assemble a solution around a central component covering the core functionality and acting as the integration platform for assembling the missing parts, thanks to its extensible architecture.
  • Container Assembly - Assemble a solution around a central container not providing any business functionality but focusing on cross cutting concerns (security, logging, access to resources, …). This framework should be a standard (or de-facto standard) of the other components you want to assemble.
  • Service Oriented Assembly - Assemble a solution using a SOA approach. Each component to be assembled should provide a public interface that would be used for integration.
  • Mash-up Assembly - Assemble a solution using the web browser as both a rendering layer and an integration platform, combining different applications through JavaScript, DOM manipulation, REST APIs and iframes. Each component to be assembled should provide a RESTful API.
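As a rough illustration of the Plug-in Platform pattern, here is a minimal sketch (all names hypothetical) of a core component acting as the integration point, with missing functionality assembled in as plug-ins:

```python
# Hypothetical sketch of the Plug-in Platform pattern: a core component
# exposes an extension point, and gaps are filled by registered plug-ins.

class Core:
    def __init__(self):
        self.plugins = {}

    def register(self, name, handler):
        self.plugins[name] = handler

    def handle(self, name, payload):
        if name in self.plugins:
            return self.plugins[name](payload)
        raise KeyError(f"no plug-in registered for {name}")

core = Core()
core.register("workflow", lambda p: f"routed:{p}")  # assembled-in workflow
core.register("search", lambda p: [p.upper()])      # assembled-in search
```

The core provides no business logic of its own for these areas; swapping a plug-in is a one-line change, which is exactly the "disassembly" property the guiding principles ask for.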

In the end pragmatism wins: technologies continue to change, and no matter what is done to allow for that in an architecture, effort will ultimately be needed to accommodate those changes. Given that reality check, it should be clear that spending months developing intricate service definitions for everything is probably not good for anyone. AOA offers good guidelines and actually helps deliver solutions faster, while allowing applications and components to be changed in the future as required.


I first came across the concept of Vendor Relationship Management (VRM) in a new chapter by Doc Searls for the 10th Anniversary Edition of the Cluetrain Manifesto. Meeting Doc Searls recently and then attending the London VRMHub meetup has given me a better idea of what is happening in the VRM space. Having worked in the CRM space for many years, the idea of VRM seemed very radical, but I knew it made sense. The essence of VRM is individuals having control of their own personal data and their relationships with organisations, and how they interact with them. Today each company a person interacts with maintains its own separate information that is often hard to access externally. Moving house highlights the problem: just how many companies do you need to tell to change their data about you? So with VRM an individual should be able to maintain their own personal data store (address, contact details, wish lists etc.) and decide with whom, and how much of it, they share. It also includes the idea of people being able to issue a personal RFP for what they want (e.g. a digital camera with 12 megapixels, supporting RAW, for a budget of $300) and then allowing companies to respond with their best offers, reversing the current model of having to hunt down what you want from sellers.
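The personal RFP idea can be sketched in a few lines. The field names and the matching rule below are invented purely for illustration:

```python
# Hypothetical sketch of a personal RFP: the individual states what they
# want, and sellers' offers are filtered against it.

rfp = {"item": "digital camera", "min_megapixels": 12,
       "requires": {"RAW"}, "budget": 300}

offers = [
    {"seller": "A", "megapixels": 14, "formats": {"RAW", "JPEG"}, "price": 280},
    {"seller": "B", "megapixels": 10, "formats": {"JPEG"}, "price": 150},
    {"seller": "C", "megapixels": 12, "formats": {"RAW"}, "price": 320},
]

def matches(offer, rfp):
    return (offer["megapixels"] >= rfp["min_megapixels"]
            and rfp["requires"] <= offer["formats"]   # required formats supported
            and offer["price"] <= rfp["budget"])

shortlist = [o["seller"] for o in offers if matches(o, rfp)]
```

The point is the reversal of direction: the buyer publishes the criteria once, and sellers compete to satisfy them, rather than the buyer trawling every seller's catalogue.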

This is disruptive as it shifts power from the sell side to the demand side.  It’s a kind of revolution waiting to happen.

So it raises a number of questions:

Why would companies be interested in getting involved?

Well, VRM done correctly should benefit both the demand and sell sides. Allowing people to share information with organisations they trust could give those organisations a more complete picture of a person, not just one filtered through their own narrow view. Sharing a wish list, or a history of items already purchased elsewhere, should allow companies to present offers more aligned to the person's interests and needs.

The other answer is that people want control of their own data; this demand will only increase, and ultimately organisations will need to respond and respect how their customers want to interact with them.

Those companies that embrace this first have an opportunity to gain great PR and a competitive advantage.

What is happening in this space right now?

So VRM is still quite embryonic. A few projects are underway in different areas. A good one-page overview of what VRM is can be found at http://www.vrmhub.net/vrm-in-a-nutshell/. The main project site is maintained at the Berkman Center for Internet & Society at Harvard under the guidance of Doc Searls.

A few of the projects underway include:

  • The MINE project – tools to allow individuals to manage and share, through personalised feeds, their personal data, both identity-based data and anything the user authors such as photos, blogs and videos
  • The MINT project – focussing on how transactional information held by organisations can be shared using standards such as JSON, XML, CSV and Atom
  • MyDex – storage of personal data, with the ability to specify which data is shared with which organisations, plus notification of changes to certain data
  • PAOGAperson – a secured safety deposit box for personal identification data that also enables data to be verified/certified
  • MySortingOffice – relationship-specific email addresses with the ability to embed selected personal data for sharing with specific organisations or people
  • EmanciPay project – a new model for the media marketplace allowing consumers to choose how, and how much, they pay on their own terms for the content they consume.
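As a sketch of the MINT idea – transactional data shared in open formats – the following exports the same (invented) records as both JSON and CSV using only standard-library tools:

```python
# Illustrative only: the record layout is invented, not from the MINT project.
import csv
import io
import json

transactions = [
    {"date": "2009-06-01", "merchant": "Grocer", "amount": 42.50},
    {"date": "2009-06-03", "merchant": "Cafe", "amount": 3.20},
]

# The same data, serialised to two of the open formats MINT mentions.
as_json = json.dumps(transactions)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "merchant", "amount"])
writer.writeheader()
writer.writerows(transactions)
as_csv = buf.getvalue()
```

Once organisations expose transactions in formats like these, the individual's own tools can aggregate them, which is precisely the control VRM argues for.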

    The driving force behind these initiatives varies, from those approaching it from the individual perspective of managing your own data (e.g. MINE) to those focused on the relationship between an individual and an organisation (e.g. MyDex).

    These different projects and approaches also highlight the many forms of "personal data" that exist. At a high level it would include all of the following:

    • Identity-based data (e.g. name, address, email, telephone, NI number, passport number) – this type of data is fairly static and can often be subject to validation and verification
    • Transactional data (e.g. purchases, usage data such as mobile or utility records, banking transactions like direct debits) – data held by organisations that provide services that can be bought or consumed (e.g. Amazon, HSBC)
    • Records-based data (e.g. medical records, HR, credit history, electoral records, tax records, student records) – stored by organisations
    • Personally authored data (e.g. blogs, photos, wish lists, videos, favourite links, documents) – often stored in a variety of online tools such as WordPress and Flickr
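Whatever the category, the core mechanic of a personal data store is selective disclosure: one record, with the individual deciding which fields each organisation may see. A minimal sketch, with hypothetical field and organisation names:

```python
# Hypothetical sketch of a MyDex-style personal data store with
# per-organisation sharing rules.

personal_data = {
    "name": "A. Person",
    "address": "1 High Street",
    "email": "a@example.org",
    "wish_list": ["camera", "book"],
}

# The individual's sharing policy: which fields each organisation may see.
sharing_policy = {
    "retailer": {"name", "wish_list"},
    "utility": {"name", "address"},
}

def view_for(org):
    allowed = sharing_policy.get(org, set())  # default: share nothing
    return {k: v for k, v in personal_data.items() if k in allowed}
```

Changing an address once in `personal_data` would then propagate to every organisation permitted to see it, which is the moving-house problem described above.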

    How can you get involved?

    Firstly, through education. There is an opportunity to discuss and engage with organisations about this new way of doing business and help them understand the opportunity. Disruptive messages can make a difference. Here is a good slide deck from Adriana Lukas, one of the VRM evangelists in the UK, who organises the monthly VRMHub in London.

    Secondly, by helping think through and contribute to the projects out there. VRM is still evolving, and there are a number of initiatives in progress, from open source to commercial solutions, addressing different areas of the VRM space. One relatively unexplored area is applications that enable users to manipulate and get value from their own data once it is under their control. This could be tools for visualising data, trend analysis, reporting, or sharing of data – this could well be where a killer app emerges that helps drive adoption.

    VRM is certainly a disruptive concept and highlights how much of our personal data is out of our control.  With new online tools and services emerging all the time this problem will only increase.  It is certainly a worthy effort that deserves support and has wide ranging implications for how data may be managed in the future.


    For many organisations the biggest concern is losing control if they enter the social media jungle. Drilling deeper, these concerns include the following:

    1. Fear of opening the floodgates to customer views in public
    2. Ability (and therefore associated cost) to respond and engage with the volume of discussions being generated
    3. Damage to their image/values caused by inappropriate or offensive content posted to any of their online assets
    4. Concern about what employees might say about the business to customers or prospects
    5. Cost in terms of resources, infrastructure and time to successfully implement a social media strategy and solution

    Fear of opening the floodgates to customer views in public

    The reality is that no matter how good your service or product is, you will have some unhappy customers. Many see social media as simply giving disgruntled customers a platform to air their views. That said, customers both happy and unhappy are already free to share – and indeed are sharing – their views on companies through Twitter, Facebook and other channels. Organisations can choose to ignore or engage with such discussions.

    The following are good, well-known examples of companies engaging with customers through social media:

    • Comcast Cares – Comcast famously turned around bad feedback by engaging and responding on Twitter, initially through the work of one man, Frank Eliason.
    • Hotels on TripAdvisor – there is feedback, both good and bad, on TripAdvisor for hotels; the smart hotels engage and respond to these comments, which shows that they listen and creates a positive impression – and the opposite for those that stay silent.
    • Lego – eventually caught on and embraced the ever-growing community and sites that their customers had created – How Lego caught the Cluetrain
    • Starbucks – a great example of how to use customer ideas to change and evolve your product – My Starbucks Idea

    Ability (and therefore associated cost) to respond and engage with the volume of discussions being generated

    The sheer volume of traffic and discussions taking place on social platforms can seem overwhelming, and it could appear at first as though a large, dedicated 24/7 team is needed simply to keep up with it and respond.

    However, much can be achieved with existing resources and a relatively small investment of time. A good example, as mentioned previously, is Comcast, which had a team of seven people managing social media interactions to support a customer base of 24 million. Approaches that help include:

    • Tools to help manage and filter the discussions. Look first at the many free tools available before deciding whether you need a commercial solution such as Radian6 or Scout Labs. These tools help focus time and energy on replying to the most relevant questions and concerns raised by customers.
    • Encourage employees to participate and share the load of responding. Engaging with social media does not require suddenly creating a whole new team or retraining your entire call centre – a few individuals with the right tools can use a percentage of their time to provide good coverage.
    • A community manager – a role that listens to and engages with the community and provides feedback to the internal organisation.

    Damage to their image/values caused by inappropriate or offensive content posted to any of their online assets

    The key is balancing moderation to minimise inappropriate content without destroying the dynamics of the community. One guiding principle is that trust is cheaper than control: manually moderating all user-generated content would require vast resources and would never scale. Approaches to achieving this balance include:

    • Automated spam detection services such as Mollom provide a good first line of defence.
    • Community moderation – allow community members to flag inappropriate content, and give the most active, respected members the ability to remove it.
    • The Community manager role discussed above can also review user generated content and encourage the right kind of behaviours on the site.
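The community moderation bullet can be sketched as a simple flag-and-threshold rule; the threshold and the trust model here are illustrative assumptions, not a recommendation:

```python
# Hypothetical sketch of community moderation: content is hidden
# automatically once enough distinct members flag it.

FLAG_THRESHOLD = 3  # illustrative value

class Post:
    def __init__(self, text):
        self.text = text
        self.flags = set()      # member ids who flagged this post
        self.hidden = False

    def flag(self, member_id):
        self.flags.add(member_id)  # a set, so repeat flags don't count twice
        if len(self.flags) >= FLAG_THRESHOLD:
            self.hidden = True     # now queued for community-manager review

post = Post("spammy link")
for member in ("m1", "m2", "m3"):
    post.flag(member)
```

A refinement in the spirit of "trust is cheaper than control" would be to weight flags by a member's standing, so respected members can act faster than newcomers.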

    Concern about what employees might say about the business to customers or prospects

    Customers are not listening to the corporate-speak most companies use to talk to them – they tune it out because the words are unnatural and tend to be the same across all companies. People prefer genuine, natural language and conversation. Employees are already having those conversations with friends about their company. Employees who want to engage with customers through social media should be encouraged to do so. Allowing customers direct access to the best asset a company has, its employees, will produce better results than any carefully crafted marketing or scripted call centre dialogue.

    Cost in terms of resources, infrastructure and time to successfully implement a social media strategy and solution

    Social media usually fails when approached from a technology direction – there are expensive social media platforms that the big software companies will happily sell you, but embracing social media doesn't require big, expensive platforms. It starts with a philosophy of having conversations with customers wherever they might be on the internet, and then understanding how to change your existing online assets to enable the kinds of interactions your customers want. These changes should be organic, and should be about assembling the right technology components to allow for change in the future. The guidelines are as follows:

    • Think big, start small, move fast and keep moving.
    • Avoid the mindset and associated process of selecting a technology platform
    • Assemble the solutions you need from proven open source technologies and standards

    Catching the cluetrain

    Being successful with social media is within the reach of all companies, but it requires a change of mindset and a more agile approach. The proof is out there in the companies that have realised great success, though it has often taken the maverick spirit of a few individuals to make it happen. That spirit is captured well in the Cluetrain Manifesto, which, while 10 years old, is more relevant than ever today – the updated 10th Anniversary edition adds recent examples of the success of this approach to engaging in conversations.


    Web Content Management (WCM) seems to mean different things to different people, which of course can lead to confusion. The term has been around since the mid-1990s, but two key things have changed since it was first adopted: the web itself, and the type of content available over it.

    The web has become a much richer visual experience in recent years, with digital content such as video, Flash and images far more prevalent on all sites. It has also become much more interactive, with users generating their own content, from comments, reviews, blogs and wikis to images, presentations, music, profile pages, videos and applications. The web has evolved from a fairly static publishing tool into a dynamic social media platform.

    The technical infrastructure underpinning websites has also evolved significantly since WCM was born. We have moved far from the early days of HTML pages and CGI scripts adding dynamic content, often from a single database, to platforms providing presentation templating and layout plus content creation and editing tools, with content aggregated from multiple sources, both text and digital media. Expectations have changed as well, with content creation and management now readily available to non-technical users.

    Given these changes it is no surprise that WCM has evolved to adapt to the ever-changing landscape. Broadly, you can divide the approaches being taken in the WCM space into those that are coupled and those that are decoupled.

    Coupled WCM (content repository + presentation combined)

    A coupled WCM solution combines the presentation and navigation of the site with managing the content available to be included in pages. These types of solutions typically rely on a database to manage and store content and presentation details, with files for templating/layout and styling.

    Examples of coupled WCMs include Drupal, Liferay, Joomla and Plone.

    Strengths

    • Rich and easy to use editorial process allowing content to be easily combined and seen as it will be displayed on the site
    • Easy to associate and combine user generated content to published content
    • Often many additional modules available supporting authentication, rich media, ecommerce etc which all work off the content model
    • Requires fewer technical skills to manage and maintain the site
    • Usually has a strong multi-site model allowing content and templates to be reused across different sites
    • Built in authentication to control access rights of users to content

    Weaknesses

    • Not so strong at managing file-based assets, including versioning, grouping, transformation and workflow
    • Poor API support to expose content externally
    • Design of site needs to be aligned to templating model of solution
    • Challenges of distributing development due to configuration being stored in the database
    • Poor support for managing deployment and versions of a site

    Decoupled WCM (separate repository(ies) and presentation layer)

    The decoupled approach focuses on managing content independently of any presentation of that content. Content is managed in a repository providing versioning, metadata and workflow, while presentation is managed in a front-end platform that allows pages and navigation to be easily managed and often provides user management. Some decoupled, repository-based solutions also let users preview their own sandboxed version of a site before their updates are deployed to the main site. However, this approach does assume that changes are being made to files, rather than to config/content in a database through a social media front end such as Drupal.

    Examples of content repositories: Alfresco, Nuxeo

    Examples of front-end presentation layers: web frameworks such as Django, Symfony and Ruby on Rails; coupled WCMs like Drupal, where only user-generated content (UGC) is stored in the front end and all other content is retrieved from one or more repositories; and portals such as JBoss and Liferay.

    Strengths

    • Clean separation between content and presentation allowing different tools to be used that best suit the solution or enable use of new tools/technologies as they emerge
    • Strong API access to content within the repository
    • Ability to have several repositories that focus on certain content types such as documents or digital assets and leverage the specialised  functionality of these tools
    • It is possible to use a coupled WCM for the front end and gain the benefits it provides, while reducing its limitations by accessing content from a backend repository

    Weaknesses

    • Challenge of providing an easy to use front end for managing composite pages which combine content from multiple repositories
    • Requires integration between front-end presentation platform and backend repositories
    • Content creation might require different UIs if this is provided by each backend repository
    • A clear separation of responsibilities between front end and backend needs to be defined: where taxonomies are mastered, how search is managed across both UGC and content in backend repositories, and where access control to content is managed

    In cases where there is a lot of disparate content, potentially from many sources, the decoupled approach makes the most sense – combining content from multiple sources and presenting it using one or more front-end platforms. Developments such as CMIS will help facilitate accessing content from various sources from the front-end platform. The greater challenge is providing easy-to-use editorial screens to manage composite pages that combine content from several sources. Using a rich social media platform such as Drupal for the front end will help ease this process, but there is still work to be done to make it even slicker. There is already a CMIS connector for Drupal, currently tested against the Alfresco implementation of CMIS. For good coverage of some of the future trends being discussed in content management, see What is the Future of Content Management?
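The decoupled split described above can be reduced to a small sketch: the repository returns structured content, and the front end deals only with presentation. The fetch below is a stub standing in for a real repository call (e.g. over CMIS or a RESTful API); the URL shape and field names are assumptions:

```python
# Hypothetical sketch of a decoupled WCM: content retrieval and
# presentation are separate concerns with a narrow interface between them.

def fetch_from_repository(content_id):
    # Stand-in for a repository call such as GET /api/content/{id};
    # a real system would issue an HTTP request here.
    return {"id": content_id, "title": "Hello", "body": "Managed content"}

def render_page(content):
    # Presentation lives entirely in the front-end layer, so either side
    # can be swapped out independently.
    return f"<h1>{content['title']}</h1><p>{content['body']}</p>"

page = render_page(fetch_from_repository("node-1"))
```

Because the front end only depends on the structured payload, switching the repository (Alfresco to Nuxeo, say) or the presentation layer (Django to Drupal) touches just one side of this boundary.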

    If anything is sure, it is that WCM will need to continue to evolve, regardless of whether the acronym itself remains or is replaced by a broader content consolidation and publishing meme. Understanding the current state and the trade-offs will help ensure an informed decision about the right approach for any particular enterprise strategy for exposing content over the web.


    Cloud computing is most often associated with scalability (see Amazon CTO Werner Vogels' definition of scalability). One commonly held view is that you can simply move an application onto cloud-based infrastructure and it will then "magically" scale on demand. The reality is that there is no free lunch. Simply throwing additional CPU cycles or storage at an application will not deliver linear scalability unless the application was designed to scale in that manner.

    The cloud era heralds the development of new enterprise application platforms available on demand, as well as new social platforms. However, this isn't as simple as taking the current crop of relational-database-centric solutions and deploying them on Amazon EC2. Of course, this isn't stopping vendors from taking that approach and offering on-demand versions of their products. The challenge is that these applications are not designed to scale dynamically and in a distributed manner. The result is that as traffic and usage grow, there will be a continual cycle of monitoring and patches to try to keep the application performing at an acceptable level. While monitoring and improving will always be necessary, there are lessons to be learnt from some of the largest concurrent, multi-user sites that can help reduce the pain.

    Consideration of cloud-based scaling clearly depends on the nature of the application and the anticipated volume of usage. If the application is, for example, very read-heavy and low on write transactions, then replicating databases with good caching could well be sufficient. For solutions that require massively concurrent, write-heavy access to the database, however, the architecture itself needs to be designed for scalability.

    Distributed database versus relational database

    Relational databases are primarily designed for managing updates and transactions on a single instance. This is a problem when you need massively concurrent access, with millions of users initiating write transactions. The usual approaches are clustering or sharding, but these really attempt to patch over the problem rather than address it head on. That said, there are many large-scale examples that use a relational database and apply these approaches.
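Sharding can be sketched in a few lines: writes are spread across database instances by hashing the key, so no single instance takes every write. The shard names are placeholders; note the weakness alluded to above, in that adding a shard remaps keys, which is one reason this patches the problem rather than solving it:

```python
# Illustrative sketch of key-based sharding across database instances.
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]  # placeholder instance names

def shard_for(key):
    # A stable hash keeps a given key on the same shard between calls
    # (Python's built-in hash() is randomised per process, so md5 is
    # used here purely for determinism, not security).
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping depends on `len(SHARDS)`, growing the cluster changes which shard owns most keys, forcing data migration; consistent hashing schemes exist to soften exactly this problem.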

    Given a clean sheet and current developments, what approaches can be used to address massively concurrent, write-heavy applications? A number of distributed database solutions have emerged in the last few years, based on some form of key-value distributed hash table (DHT), a column-oriented store, or a document-centric model. They are often built to address precisely the issue of scaling write-heavy applications. However, they should not be considered a direct replacement for a relational database. They often lack support for complex joins and foreign keys, as well as reporting and aggregation – although some of these areas are beginning to be addressed. There is also currently no SQL or object mapping, such as Active Record, to access them cleanly and transparently from code, so extra development effort is required. Nevertheless, they should certainly be considered as part of an overall architecture and leveraged to reduce write-heavy bottlenecks in the solution.
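One concrete consequence of losing joins: related records must be stitched together in application code with multiple lookups. A sketch over an in-memory dictionary standing in for a key-value store (the data layout is invented for illustration):

```python
# Illustrative sketch: what an RDBMS would do with one JOIN, a key-value
# store forces into several lookups stitched together by the application.

store = {
    "user:1": {"name": "Ana", "order_ids": ["order:1", "order:2"]},
    "order:1": {"item": "book", "total": 12},
    "order:2": {"item": "lamp", "total": 30},
}

def user_with_orders(user_key):
    user = store[user_key]                          # one lookup
    orders = [store[o] for o in user["order_ids"]]  # N further lookups
    return {"name": user["name"], "orders": orders}

result = user_with_orders("user:1")
```

The pay-off for this extra application-side work is that each key lives independently, so writes can be spread across many nodes without cross-record coordination.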

    • Amazon SimpleDB – simple key-value DHT, based on the Dynamo solution created by Amazon
    • Apache CouchDB – document-centric approach built using Erlang
    • Cassandra – DHT variant that supports a rich data model; originally born at Facebook, now an Apache Incubator project
    • HBase – column-oriented store similar to Google's BigTable; uses Hadoop as a distributed map/reduce file system

    Here is a great blog post from the co-founder of Last.fm on the multitude of alternatives to a traditional RDBMS for write-heavy distributed applications: http://www.metabrew.com/article/anti-rdbms-a-list-of-distributed-key-value-stores/

    Another blog worth reading on distributed key stores is http://randomfoo.net/2009/04/20/some-notes-on-distributed-key-stores.

    Stateless immutable services

    One of the guiding principles for linear scalability is to have lightweight, independent, stateless operations that can be executed anywhere and run on newly deployed threads/processes/cores/machines transparently, as needed, to service an increasing number of requests. These types of services should share nothing with other services; they simply process asynchronous messages. This kind of async message passing has been proven to scale in languages such as Erlang. One paradigm closely aligned with this approach is the Actor model, which is all about passing immutable messages and a share-nothing philosophy. A lightweight stateless protocol such as REST is well suited to allowing these services to be accessed across the internet over HTTP.
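The Actor model mentioned above can be sketched without any framework: an actor owns a private mailbox, shares no state, and processes immutable messages one at a time. A real runtime (Erlang, or Scala's actors) would schedule many such actors concurrently; here the loop runs synchronously to keep the sketch self-contained:

```python
# Minimal, illustrative actor: private mailbox, private state,
# immutable messages, processed strictly one at a time.
from collections import deque

class CounterActor:
    def __init__(self):
        self.mailbox = deque()
        self.count = 0            # private state, never shared

    def send(self, message):
        # Messages are immutable tuples; the sender keeps no reference
        # into the actor's state.
        self.mailbox.append(message)

    def run(self):
        # Process the mailbox serially, so no locks are ever needed
        # around self.count.
        while self.mailbox:
            kind, amount = self.mailbox.popleft()
            if kind == "add":
                self.count += amount

actor = CounterActor()
actor.send(("add", 2))
actor.send(("add", 3))
actor.run()
```

Because the only way in is `send`, the runtime is free to place actors on any thread, core or machine, which is exactly the transparency the stateless-services principle asks for.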

    Speaking the language of scalability

    As always, the choice of programming language can end up being more an emotional than a necessary decision, but it is true that it helps to pick the right tool for the job at hand. Some languages have better support for developing highly concurrent, distributed and scalable applications. The characteristics to look for are languages that encourage immutable data structures and referentially transparent methods, typically being functional in nature and supporting asynchronous message passing. Two languages receiving a lot of attention are Scala and Erlang. Scala runs on the JVM and was famously used to provide scalability for Twitter by implementing a message queuing solution. Erlang has its roots in embedded systems and so was optimised to run on minimal resources. It uses processes that are much lighter and faster than even O/S threads, supporting both multiple cores and multiple machines transparently. Both Scala and Erlang have good support for the Actor model, again encouraging scalable, independent, async message-driven design.

    In the end there is still more learning and maturing to be done in developing the next generation of cloud based solutions, and not all will need to scale to high volumes.  It will be an interesting time, and there is much that can be learnt from others who are already dipping their feet in this pool.  A good site for keeping track of what others are doing in the whole space of scalability is http://highscalability.com/.  Being aware of these changes is especially important when embarking on new projects where scale and the use of cloud infrastructure are factors.


    As Twitter evolves and potential applications of the platform emerge, it is already clear that it provides a powerful channel to build and contribute to a community of like-minded individuals and organisations.  Its openness creates opportunities for establishing far broader communities than other invite-only social network platforms.

    The philosophy behind engaging with a community through a social network is really around how you can contribute to the discussions and ideas of a group of people with similar interests to yourself or your business.  By giving and engaging with that community you will get far more than if you try to use Twitter as a pure marketing / selling channel.

    Establishing an identity

    Your identity on Twitter is determined primarily by what you choose to discuss and to a lesser extent what is in your bio.  Many use Twitter to simply share their day to day experiences as some kind of life stream with family and friends.

    However, if you wish to establish a community around a particular domain or area of interest – be that some kind of technology, business area or personal interest – then it is important to focus the majority of your tweets and links on those domains and areas of interest.

    If you regularly post on a particular topic then others who share that interest will find you through search and follow you.

    Building a community

    The people you follow, or who follow you on Twitter, form the community that you will interact with, so care and consideration should be given both to who you choose to follow and to those you allow to follow you by not blocking them.

    Avoid mass following – this is analogous to black-hat SEO techniques of generating as many inbound links to a website as possible.  It will completely dilute your community and is likely in the future to impact reputation scoring, which could assess the relevance of the people you follow and who follow you.

    There are a number of tools that can help you find people who share your interest in a particular topic or space; examples include:

    • Many Twitter clients allow keyword searches to be saved and will update real time showing the tweets containing those keywords – one example is Tweetdeck
    • TweepSearch – performs a keyword search through the bios and profiles of Twitter users
    • MrTweet – recommendations of people based on who the people you follow are following or interacting with

    Suggestions for Twitter Post Optimization

    Twitter posts are restricted to 140 characters, so it is important to think about how best to use that limited number of characters to share something of value with your intended audience.

    The goals of a Twitter post intended to reach those with similar interests should be:

    1. Post is found by as many of your target audience as possible
    2. Post is retweeted a lot
    3. Post results in interest in your other posts, in you, your company and leads to new and loyal followers

    Given these goals, the following 10 recommendations can help achieve them:

    1. Use keywords that are relevant to your intended community and also score high in Twitter keyword searches.  The following tools can help identify the popular keywords (helps meet goal 1)
      • Twopular – great for seeing popular keywords over time
      • Twitscoop – good for trending and search
      • Twendz – helps identify related keywords that are popular given a root keyword
    2. Power of the headline – post needs to grab attention and interest (helps meet goal 1 & 2)
    3. Consider time of post – taking into account time zone of the community you are trying to reach (helps meet goal 1)
    4. Share something of value and informative not just blatant advertising (helps meet goal 2)
    5. Post should contain a link to further information (helps meet goal 2 & 3)
    6. Try to ensure that a high percentage of your posts address the topics/themes that are relevant to your intended community (helps meet goal 3)
    7. Give credit to others – so ReTweet or reference other people when sharing information they have posted (helps meet goal 2 & 3)
    8. Break news (helps meet all 3 goals)
    9. Answer questions and respond to other users who are discussing topics relevant to your business domain/speciality (helps meet goal 2 & 3)
    10. Post links to your Twitter posts on other channels such as on blogs, websites, social networks etc (helps meet goal 1)

    The above also depends on how Twitter search evolves.  For a discussion of ideas on how Twitter Search could evolve, see Ideas for Improving Twitter Search.  Twitter Post Optimization (TPO) is the new SEO and is still evolving – it will be interesting to see how TPO changes as the platform and tools expand.

    Even as the tools evolve, the practice of engaging, sharing and contributing to a community remains the same and will be important for individuals and businesses to embrace no matter which channel they use.


    I am often asked what to look for in technical candidates and what skills people need to develop in order to progress in a technical career path.  Typically, long lists of technologies, languages and certificates on a CV/resume don’t do much for me.  If the list is long, it is likely that the candidate doesn’t have deep knowledge of any of them.  Equally, technology changes so rapidly that knowledge of particular languages or platforms quickly becomes redundant, replaced by something new and shiny.  For all of those reasons I have always considered the following to be the most important areas:

    1. Ability to rapidly learn new technologies
    2. Problem solving ability
    3. Communication (with people)

    Of these, the third is the easiest to explore in an interview: ask the candidate to explain some area of technology or a project they have worked on.  If they can’t explain clearly, or use too much jargon, you have an issue – no developer/architect is an island.  Ask them to assume you have no technical background and to explain in clear English something technical they worked on.

    The area of problem solving can be tackled in a couple of ways, either by written tests or by posing problems during the interview.  A good approach is to give some realistic real-world problem and ask the candidate how they would solve it.  Provide them with paper, whiteboards etc.  Good candidates will first ask questions before drawing data models and architecture.  The main point is to understand the process they would go through in breaking down the problem, and of course it is a bonus if they come up with innovative approaches on the fly.  Depending on time you can probe areas such as performance, security and integration of the suggested solution.  For a more rigorous approach to interviewing developers, check out Joel Spolsky’s Guerrilla Guide to Interviewing

    The hardest area to measure is someone’s ability to rapidly learn a new technology.  You could, for example, ask someone to go learn and present on some technology with a demo, although this is rather artificial.  Another area to consider is which technologies the candidate has recently learned and applied.  Another strong indicator is the passion they have for technology both inside and outside of work – if they start talking about modules they contributed to Apache in their spare time then you have struck gold.  It is also important to recognise how we learn new things, which is mainly through comparison to existing knowledge.  This type of pattern matching of course requires some foundations to draw on.  Below then are some of the foundations that will help someone rapidly learn and apply new technologies.

    For each of the following topics it is important that a candidate has at least an understanding of the concepts and ideally practical experience using at least one of the languages/tools.  These are transferable skills that are not constrained to particular applications or domains and should continue to be a valuable foundation for any new technologies / approaches going forward.

    • Design Patterns – The bible for developer design patterns is still the classic ‘Gang of Four’ book.  These patterns are now a part of the developer vocabulary and have been adopted by most modern languages and frameworks.  The other essential reference is Martin Fowler’s Patterns of Enterprise Application Architecture.
    • OOP – Object Oriented principles continue to permeate all modern languages and architectures.  Experience of Java or C# should ensure a good exposure to the concepts of encapsulation, interfaces, inheritance, polymorphism etc
    • AOP and IoC – understanding of these paradigms and experience of using them with particular frameworks
    • Frameworks such as Spring, Hibernate, Struts, Rails, Symfony, Django – just knowing a language is not enough anymore as many applications now rely on frameworks, and these kinds of frameworks, when used correctly, should result in less effort and less code to be written.  These frameworks typically utilise several core patterns such as MVC, ORM, DI etc
    • Web based development – with understanding of HTTP, REST, SOAP, XML, JSON and JavaScript libraries such as jQuery, Prototype, SproutCore
    • Dynamic interpreted languages such as Python and Ruby – expressive languages that require few external libraries and can result in very compact, readable code
    • Build tools and continuous build/integration, e.g. Ant, Maven, CruiseControl
    • Source code control, e.g. CVS, SVN, Git
    • Testing – unit tests, TDD. Here are some examples of Java testing tools
    • IDEs – familiarity with a mainstream IDE such as Eclipse or NetBeans and how to use/configure tools within it such as debugging, version control, build control, etc
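    The testing bullet above is easy to make concrete.  A minimal illustrative sketch in Python: in TDD the test class would be written first, and the function implemented until the tests pass (the `slugify` function here is a made-up example, not from any particular project):

```python
import unittest

# Code under test: in TDD these tests would be written first and the
# implementation added until they pass.
def slugify(title: str) -> str:
    """Turn a post title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Agile Software Development"),
                         "agile-software-development")

    def test_single_word_unchanged(self):
        self.assertEqual(slugify("Twitter"), "twitter")

if __name__ == "__main__":
    unittest.main(exit=False)   # run the tests when executed as a script
```

    Candidates who can discuss why the tests come first, and what makes a test good, usually have real hands-on experience rather than just the acronym on their CV.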

    A book that I often recommend is The Pragmatic Programmer which encourages developers to embrace many of the areas above as part of a continuing investment in themselves and their career.

    A good refresher and selection of technical interview type questions can be found in this blog by Michael Knopf.

    Software is changing at an ever-growing pace, with areas such as cloud computing and social networking expanding rapidly, so keeping on top of these developments will continue to be a challenge for developers and architects.  However, a solid foundation as outlined above will enable such new developments to be quickly absorbed and understood.


    There is a lot of discussion around how Twitter search can be improved, including the following recent post on Mashable.  The following then are my thoughts on how Twitter Search could evolve.

    In some ways the evolution of Twitter is not dissimilar to that of the World Wide Web.  That evolution firstly involved an ever increasing number of sites and then the tools and technology evolved and indeed continue to evolve today.  Twitter has grown rapidly and tools are emerging.  One of the key steps for the Web was providing a powerful search mechanism to access the wealth of information contained in those websites.  A similar challenge is now facing Twitter – how to search effectively to find those nuggets of information and trusted sources. Google’s approach of PageRank for websites revolutionized searching for information across websites as the results returned were deemed to come from more trusted and reliable sources.

    So the question is: what is the equivalent of Google PageRank for Twitter?  How does a user qualify as a trusted and reliable source?

    A possible TwitterRank algorithm for indexing Twitter users, enabling more powerful search, could comprise the following:

    • Parsing and extraction of high frequency keywords/tags (eg open source, CMS, CMIS) of recent posts by a user (ie last 200 posts) – this approach could use one of the many Information Retrieval algorithms and leverage stemming and synonyms
    • Analysis of content in links could also contribute to keyword/tags for the user
    • Frequency and age of posts
    • Ratio of high-scoring keywords to number of posts (e.g. 1 in 4 posts contain high-scoring keywords)
    • Number of followers with similar high-scoring keywords – potentially the ratio of these followers to overall followers; though penalizing for having non-relevant followers might be unfair, it could help combat the mass-follower practice
    • Content of bio and bio link also contributing to keyword score
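    A toy sketch of a few of these signals in Python – the weights, field names and formula are purely illustrative assumptions, not Twitter’s API or any published algorithm – might combine keyword frequency, the ratio of relevant posts, and the ratio of followers who share the same interests:

```python
from collections import Counter

# Hypothetical TwitterRank sketch combining three of the signals listed
# above.  All names and weights are illustrative assumptions.
def twitter_rank(posts, followers_keywords, query_keywords):
    words = Counter(w.lower() for p in posts for w in p.split())
    # Signal 1: how often the query terms appear across recent posts
    keyword_hits = sum(words[k.lower()] for k in query_keywords)
    # Signal 2: ratio of posts that mention the query terms at all
    matching_posts = sum(
        any(k.lower() in p.lower() for k in query_keywords) for p in posts
    )
    relevance_ratio = matching_posts / len(posts) if posts else 0.0
    # Signal 3: fraction of followers whose own keywords overlap the query
    relevant = sum(
        any(k.lower() in kws for k in query_keywords) for kws in followers_keywords
    )
    follower_ratio = relevant / len(followers_keywords) if followers_keywords else 0.0
    return keyword_hits * relevance_ratio * (1 + follower_ratio)

posts = ["Open source CMS news", "CMIS and open standards", "lunch!"]
followers = [{"cms", "open"}, {"football"}]   # each follower's interest keywords
print(twitter_rank(posts, followers, ["CMS"]))
```

    A real implementation would of course add stemming, synonyms, link-content analysis and post recency, as listed above.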

    This TwitterRank would then be used in sorting the search results for particular keywords.  The goal would be that people who score highly on the keywords being searched for have their recent posts returned higher than others, as the scoring would indicate they are more active and contribute a lot on this topic, as well as having a high number of equally relevant followers.


    Agile software development and project management have become much more mainstream in recent years.  This is reflected in the increasing number of blogs and books available on the subject.  But for someone approaching the subject fresh it can seem somewhat overwhelming and unclear where to start.  It can also look like you have to read 20 books just to understand it, which is unfortunate, as underpinning the whole approach are some very clear and simple edicts from the Agile Manifesto:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan

    Agile then, in its purest, simplest form, is about removing layers of complexity and unnecessary paperwork and processes in software development.  It is really about working closely with customers and end users and iteratively evolving the software to meet their needs.  Understanding these principles is important and can serve as a guide to navigating the many agile techniques and tools.

    Agile projects still contain the same kind of ingredients as any software development project, namely:

    • gathering and documenting requirements
    • planning and estimating
    • development cycles
    • testing
    • managing change

    Where Agile often differs from other software development approaches is in the quantity and order of these ingredients in the recipe, by doing just enough requirements gathering and iterating quickly through build/feedback loops.  Estimating is still important to see what areas can be addressed during those iterations but often actual team velocity as measured through initial iterations is a better guide to future progress.  You don’t plan too far ahead so the next iteration can be determined based on emerging requirements or understanding.
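    The velocity idea above is simple arithmetic: average the story points actually completed in the first few iterations and use that, rather than up-front estimates, to forecast the remaining backlog.  The numbers below are illustrative:

```python
import math

# Story points actually completed in iterations 1-3 (measured, not estimated)
completed_points = [18, 22, 20]
velocity = sum(completed_points) / len(completed_points)   # average per iteration

# Forecast: how many more iterations the remaining backlog will take
backlog_points = 130
iterations_left = math.ceil(backlog_points / velocity)

print(f"velocity={velocity}, iterations remaining={iterations_left}")
```

    Because velocity is re-measured every iteration, the forecast self-corrects as the team and the requirements evolve, which is exactly why it beats a long-range up-front plan.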

    The tools of an Agile team

    Often associated with Agile projects, these are really just pragmatic best-practice techniques rather than anything inherent in an Agile approach:

    • User centric stories to describe requirements leading to Agile goal of less documentation
    • Wireframing visual design – much quicker and easier to change than developing UI using software
    • Relative estimation and team based estimating
    • Time boxed iterations
    • Test driven development
    • Retrospective review cycles
    • Automated and continuous build/integration

    However, which of these you choose to adopt very much depends on the needs of the project and the situation at hand.  Keeping it simple is key, as is adapting based on the experience of both the team and the client.  Trying to get a team to adopt new approaches completely alien to them will often result in failure.  In the end Agile should be about people interacting, which should be very natural; processes and procedures should be minimal and not interfere with progress.

    In the end you need to adopt the approaches, techniques and tools that are appropriate to the project, team and client and this will often vary.  Agile techniques can be very effective but the key is embracing the underlying principles as outlined in the manifesto and not getting lost in acronyms and debates about methodologies.

    Useful resources on Agile approaches

    Books


    The increasing number of social networks and platforms being used is offering different channels for businesses to engage and interact with their customers throughout the customer lifecycle.  There are also geographic considerations to take into account: for example, Orkut is extremely popular in South America, so it is the right network to use when reaching clients in that region.

    The typical lifecycle, from the customer’s perspective, is as follows:

    • Awareness / research
    • Purchasing
    • Using product/service
    • Purchase add ons or enhancements (perhaps goes back to first step)
    • Getting help/support

    The awareness/research, purchasing and support phases are typically where people like to interact with other people to see what decisions and answers they might have – the wisdom of the crowds.  People often steer clear of marketing, adverts or call centres in these cycles.

    Awareness / research

    People prefer to research by discussing with friends, browsing comparison sites and reading consumer reviews.  The following are good approaches to engaging in this cycle in a non-invasive manner.

    • Provide rich content on community sites focused on relevant subjects/products.  This is effectively content/application syndication.  Providing content and interactive tools (e.g. questionnaires, calculation tools) is much better received than simply pushing adverts that few people click on.  Optaros has developed a cloud based solution for syndicating this kind of content called OView.
    • A number of businesses are now appointing a community manager role in their organisation to go out to the forums and sites that people are interacting on and join the discussions.  A good example of this is hotels that interact on sites like TripAdvisor by responding to concerns raised and sharing information.  This can also include being active on microblogging services such as Twitter.
    • Equally, if people find a product on a site during their research, then enabling them to customise/share it with their friends/family on social networks such as Facebook can help spread the word through viral means.  Optaros Labs have done some research in this area with a product called FANS.

    Purchasing

    Once people have done all their research and made their decision to purchase, they often want to share what they have bought – particularly if it is fashion or style related, or heavily customised.  This is a good opportunity to enable sharing by allowing info about the purchased item to be posted to social networks directly.  Some good examples of this approach are as follows:

    • Mydeco – allows people to configure and design rooms and then save their configuration, tag and share it
    • Nike – allows custom design of shoes, saving into your own locker and sharing with friends

    Getting help/support

    More and more people want a quick response that solves their issue, so Google and social networking sites end up taking precedence over call centres.  This means businesses have the opportunity to participate and provide dynamic support through channels such as Twitter and Facebook.  Related to this is having rich media freely available, such as webinars, podcasts, videocasts and slides, tagged and available on sites such as SlideShare, the iTunes store, YouTube etc.  Equally important is having knowledgeable employees active on the various community sites and contributing through blogs.  Salesforce.com has recently offered Twitter integration into its customer service product to help capture and flag relevant discussions taking place.  This can also be taken further with semantic analysis of the discussions taking place to try and route them through to relevant people in an organisation.  Other good examples of using Twitter for customer service can be found in this blog posting from last year.

    The real key with all these approaches is recognising the need to go where customers are having conversations, and not expecting them always to come to your own site.  Being more open and interacting where people are spending time increases the chance to offer value to customers and a more immediate and responsive customer experience.


