Architecture as a Context for Agility

Agility requires doing focussed things rapidly. The more you know going in, the better the decisions you can make quickly. The more you document what you learn, the more knowledge is available for future efforts. Good agile work fills in more of the picture, thereby enabling all teams.

The more of the picture is filled in, the more we can avoid wasted effort, align our efforts and deliver with less risk. You can’t create the full picture quickly, which is why many agilists avoid architecture.
But you can start with a “paint by numbers” reference model/ontology. It gives you a framework into which to rapidly record your growing knowledge, indexes where to look for information for your next effort, and shows what touches the squares you want to colour, so you know how to stay informed and compatible.

Every project (agile or otherwise) should:

  • Be informed by our knowledge of current architecture assets and challenges

  • Contribute to an improvement in assets, condition, effectiveness and future readiness

  • Improve the architecture of the portfolio

  • Deliver business value

  • Fill in more of the architecture “big picture” to inform future projects

The environment should:

  • Have a coherent integrative meta model/shared concepts

  • Encourage good work through well-conceived principles

  • Have standards for how things get recorded (artefacts) so they are meaningful and sharable

  • Provide a collaborative repository that holds things and makes them findable

#agile #project #architecture #context

If data is the lifeblood, how’s your heart?

Organisations are paying more attention to data management, often driven by compliance, privacy or cyber security concerns. But simply holding data doesn’t generate value.

We need a thorough understanding of the relationships between data (numbers, text, pictures, audio, video, facts), information (data meaningful to humans: salary, sales, order, invoice, fingerprint etc), knowledge (richly connected data: contextual data, trend data, inference) and wisdom (deep insights, experience shaped). Value increases as we move up this hierarchy. Alongside that, if we are to understand what we have, manage it properly, secure it, use it, integrate it etc., we need meta data: data about data. Where is it from? How is it structured? Who owns it? How much can we trust it? How is it derived? What format is it in? Where do we keep it? How long should we hold it? What are the constraints on its use…
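As a minimal sketch (all field names and values here are invented for illustration), metadata can itself be held as structured data, answering the questions above and supporting simple governance checks:

```python
# A hypothetical metadata record: data *about* a dataset, answering
# the provenance, ownership, trust, format and retention questions above.
customer_orders_meta = {
    "source": "order-entry system, nightly extract",   # where is it from?
    "owner": "Sales Operations",                       # who owns it?
    "format": "CSV, one row per order line",           # what format is it in?
    "derived_from": ["orders", "order_lines"],         # how is it derived?
    "trust": "high",                                   # how much can we trust it?
    "retention_years": 7,                              # how long do we hold it?
    "constraints": ["internal analytics only"],        # constraints on its use
}

def may_use(meta, purpose):
    """Crude usability check: the purpose must satisfy the recorded constraints."""
    constraints = meta.get("constraints", [])
    return not constraints or any(purpose in c for c in constraints)

print(may_use(customer_orders_meta, "internal analytics"))  # True
print(may_use(customer_orders_meta, "external sale"))       # False
```

Real metadata management needs far richer models and tooling, of course; the point is that the same disciplines we apply to data apply to the data about it.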

All of the above are complicated by the explosion of data brought on by new forms of data; by technology capabilities to capture, store, manipulate, communicate, generate, represent and analyse data; and by innovative applications. Virtualisation of products and services compounds the problem, as more of what we offer and sell to customers is information rather than physical.

There are more opportunities than ever before to profit from data, information, knowledge and its proper use. But there are also more challenges associated with doing it properly, successfully, reliably and securely. All of these rely upon skills and capabilities. Specifically, we need high skills to understand, analyse, model, design and implement data related products and services. This is the realm of Data and Information Architecture.

Architects also need to understand business requirements, facilitate communication and build consensus, define vision, bridge gaps and scope initiatives. They need to guide projects and solution designers. Crucially, they need to connect the business/conceptual view of data with the logical (application) and physical (database and technology) views. They need to devise, apply and encourage use of good principles to evolve the data and information landscape in positive ways.

Data management is ultimately a business responsibility, but can be assisted by many technical skills, including: maturity assessment, modelling, meta data management, technology architecture, risk analysis, integration design and considerations of security and privacy.
A comprehensive data/information architecture and data management capability is vital to deliver business benefits as well as ensure security, privacy and acceptable risk.

These are all topics covered in depth in our Techniques and Deliverables of Information Architecture intensive online live course from 7-11 November. See details here.

#dataarchitecture #informationarchitecture #digitaltransformation #bigdata #businessintelligence #bi #datamodelling

What comprises a “Solution Architecture”?

A solution is a combination of components in a configuration that solves a problem or exploits an opportunity in a way that meets our goals. Hopefully it is also effective, efficient, sustainable, ethical and relatively risk free.

It is not just a software system, but rather a combination of software, process, people skills, data and technology that meets business, human, technical and legal requirements.

Considering the provided rich picture:

The items outside the circle represent the context in which a solution is developed. We ignore these at our peril. If we do not know the Business Goals for a solution, we can only meet them by extraordinary luck. If we do not know the Legal constraints, we will fall foul of the law. If we do not understand the Customer and the Stakeholders, we are unlikely to provide something they are happy with. If the service is not delivered via the required Channels or compatible with the Brand strategy, we may miss the mark entirely. In short, many of these should inform our Requirements. Alan Kay famously remarked: “Context is worth 80 IQ points.”

Our requirements should also certainly include the Functions that must be performed, the Business Objects (Data) that are used, the Technology we need, the Process to deliver Value, the Application Services we may use or provide, the User Interfaces, the Events we need to respond to or generate and the Locations where we need to be available.

Non-Functional requirements will also play a major role in the viability and success of a solution. These include aspects such as security, reliability, performance, cost, maintainability, flexibility, ease of use, compatibility and many other factors.

Customer Experience is crucial to ensure wide and willing acceptance and delivery of business value. Staff experience is also key, as is that of other professionals who will deal with the solution, including Operations, Support and Maintenance staff.

The Solution Architecture should follow some important principles, including: Modularity, loose coupling, message-based communication and open standards. It’s also good to have tests built in and automatically repeatable, and an affinity for DevOps or Continuous Deployment. User interfaces that are intuitive, with built-in tutorial aids, are really important too.

A cost effective system might be composed of off the shelf components, reusable library elements, configurable components and custom developed elements. The solution includes these and the other elements of human skill and capability, supporting technology, infrastructure, documentation etc.

We may also need to contemplate the development/ implementation dependencies and partition the solution into an initial Minimum Viable Product plus one or more incremental delivery releases to get us to the full capability required.

Solution Architecture is a challenging but very rewarding role.

#SolutionArchitecture #Architecture #Requirements

Just one API?

GraphQL provides a single endpoint for both queries and updates

Jargon buster at the bottom of post.

First came RPC, to call a function across a network. But it was language specific and lacked standard facilities. So DCE was created to address common requirements, such as directory services, time, authentication and remote files. But it was not object oriented, and Smalltalk, C++, Java et al. had arrived. So Microsoft devised DCOM to provide distributed services for Microsoft languages, while others backed CORBA, which provided cross-platform and cross-language services. Both required agreement on message formats ahead of time.

Enter Web Services, leveraging XML to serialise data, WSDL to describe services, UDDI to publish, find and bind to them, and SOAP to message remote objects. Great! We could now find, bind to and invoke services without prior design agreement. But, it was not very efficient and required a lot of plumbing on each end, and quite a bit of knowledge from developers.

So, Roy Fielding devised REST, exploiting HTTP to provide a simple way of working with remote Resources. REST allows us to simply access remote servers and retrieve something (GET), inform about something (POST), store something (PUT), update something (PATCH) or delete something (DELETE). This is achieved by creating simple headers and a request line including the URL and parameters. POST also has a body.

REST is very lightweight and does not need much infrastructure. Combining it with JSON made it very easy to use from within web pages and mobile applications, and it quickly took off.
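The verb-to-operation mapping above can be sketched as follows (the `/customers` resource is hypothetical; this just shows the conventions, not a real client):

```python
# Sketch: how REST maps intents onto HTTP methods and resource URLs.
def rest_request(intent, resource, resource_id=None):
    """Return the (method, path) pair that REST conventions suggest."""
    verbs = {
        "retrieve": "GET",      # fetch a representation of a resource
        "create":   "POST",     # inform the server about something new
        "store":    "PUT",      # store/replace at a known URL
        "update":   "PATCH",    # partial update
        "delete":   "DELETE",   # remove the resource
    }
    path = f"/{resource}" if resource_id is None else f"/{resource}/{resource_id}"
    return verbs[intent], path

print(rest_request("retrieve", "customers", 42))  # ('GET', '/customers/42')
print(rest_request("create", "customers"))        # ('POST', '/customers')
```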

But there was a problem. Each REST request would get a specific thing from the server. If there is a rich database or knowledge graph on the server, we can create many REST APIs: at least one for each kind of domain object (e.g. Customer, Product, Account, Invoice etc.); often more than one to cater for different application requirements (partial records, related records etc.). Plus we will have different APIs to query, to store, to update etc. So a server with a database managing a score of domain concepts could quickly require hundreds of APIs. Ew, that’s a lot of development, testing, deployment, documentation, maintenance…

Facebook ran into this problem at scale. Their solution was a query language that would live in the server as a single entry point and receive a query request as a parameter. This is not dissimilar to the way a relational database receives dynamic SQL requests. Now the tailoring of a response can happen in the server (more efficient) and we have only one API endpoint to maintain. Voila. So that solved the problem for Facebook… Fortunately, they published it as GraphQL which allows writing query and update (mutate) statements and having these fulfilled by a suitable GraphQL processor / application / database on the server. Initially, these were discrete, but they are starting to be embedded in database systems, especially Graph Databases. One good example is DGraph.
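To make the contrast concrete, here is a sketch of a GraphQL request (the schema fields — customer, orders, total — are invented for illustration). The client describes exactly the shape of data it wants, and every request, whatever the domain object, goes to the same single endpoint:

```python
import json

# The client states the shape it wants; the server tailors the response.
query = """
{
  customer(id: "42") {
    name
    orders(last: 3) {
      total
    }
  }
}
"""

# GraphQL requests are typically sent as a JSON payload via HTTP POST
# to one endpoint, versus one URL per domain object/variant in REST.
payload = json.dumps({"query": query})
endpoint = "/graphql"   # the single entry point

print(endpoint, len(payload), "bytes")
```

The server-side GraphQL processor parses the query, walks the schema and returns only the requested fields, which is why one endpoint can replace hundreds of bespoke REST APIs.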

JARGON BUSTER:

You can also find good explanations of most of these topics on Wikipedia

  • RPC - Remote Procedure Calls

  • DCE - Distributed Computing Environment

  • API - Application Programming Interface. A way of requesting a service or function contained in another piece of software. Most commonly used today to refer to a REST API

  • COM+ - Microsoft Component Object Model. An architecture that allowed sharing of objects between Microsoft languages.

  • DCOM - Microsoft distributed COM. Similar to COM+, but allowing objects to be remote

  • .Net - Microsoft Component model and framework that succeeded COM and DCOM

  • CORBA - Common Object Request Broker Architecture. An architecture for distributed object messaging across languages and technologies.

  • Web Services - A set of standards, leveraging XML, that allows requesting services across the Internet. Includes WSDL, UDDI, SOAP.

  • XML - eXtensible Markup Language. A standard for encoding data onto text with specific tagging of the meaning of the values.

  • WSDL - Web Services Description Language. An XML document describing a Web Service.

  • SOAP - Simple Object Access Protocol. A way to invoke a (remote) service in the Web Services approach. Effectively an XML message requesting a given service and expecting an XML response message.

  • UDDI - Universal Description Discovery and Integration. A protocol for publishing Web Service Descriptions and for finding these.

  • HTTP - Hypertext Transfer Protocol. The protocol of the web, which allows hyper-linking.

  • REST - REpresentational State Transfer. An architectural style that leverages the HTTP intrinsic functions to support requesting services across the Internet with minimal other infrastructure.

  • JSON - JavaScript Object Notation. A way of encoding JavaScript data structures as text for transmission or sharing. Similar purpose to XML, but lighter weight.

  • GraphQL - A query language used on a client and interpreted in a server which allows easy retrieval of data using graph concepts (Nodes, properties and relationships).

  • RDF - Resource Description Framework. A standard for defining facts and knowledge using simple statements with a Subject, Predicate, Object format. Part of Semantic Web standards.

  • DGraph - a Property Graph database that supports graph schemas, RDF, JSON and GraphQL natively at web scale. Also does ACID transactions.

  • ACID - Atomic, Consistent, Isolated, Durable. Desirable attributes of transactions in a database.

#API #Services #REST #WebServices #SolutionArchitecture #Design #GraphQL

Lasting Impact of the Little Language that Could: Smalltalk turns 50

In the late 60’s and early 70’s, Xerox was a major player in the office automation space. Innovative work on user interfaces was happening at RAND Corporation (JOSS, tablets, GRAIL), Stanford Research Institute (Doug Engelbart, personal interactive computing, the mouse etc.) and MIT/Lincoln Laboratories (Ivan Sutherland / Sketchpad). Xerox gave Alan Kay and his team at their Palo Alto Research Centre (PARC) free rein to explore human-computer interaction. Alan had worked on ARPANET and did a PhD on the FLEX machine, a precursor to a truly personal computer. He conceived the “Dynabook”, which conceptually defined a tablet (think iPad, but easier for the user to program and tailor), in 1968!

Amazing things came out of PARC, including:

  • Object oriented programming for UI and general purposes

  • Smalltalk (still one of the best, purest, easiest to learn and productive general purpose languages available today)

  • Keyword syntax facilitating domain / application specific languages

  • Just in Time Compilation (JIT) and Virtual machine execution of bytecodes allowing systems to be ported easily across hardware

  • Integrated Development Environment (IDE ) with introspection

  • Bitmapped displays with graphics and fonts

  • Image storing state of system allowing easy and instant persist/restore and continuation of work

  • Model View Controller (MVC) paradigm for separation of domain model, business logic and user interface

  • Windows, Icons, Menus and Pointer (WIMP) paradigm with overlapping, resizeable windows and the whole Graphical User Interface

  • Text, Image and Document editing with What you See is What you Get (WYSIWYG)

  • Laser printing

  • Ethernet

We owe these pioneers a major debt of gratitude! Subsequent developments include:

  • GUIs at Apple (licensed from Xerox), then Microsoft Windows (imitated)

  • Objective C, the major systems language at Apple (Smalltalk ideas and class libraries on top of C) - the precursor of Swift

  • Object oriented databases

  • Office suites - Charles Simonyi did Bravo, the first WYSIWYG document editing system, at PARC on the Alto. He later spent 20+ years at Microsoft and created Word and Excel

  • OO in general, Smalltalk being a major influence on Java, Javascript, Ruby, Eiffel, Dart and many other languages. It is a direct ancestor of Squeak, Pharo, Amber and Newspeak

  • eXtreme & Pair programming (Kent Beck, Ward Cunningham) and aspects of Agile Development

  • Live programming/ debugging

  • Test Driven Development (SUnit, Kent Beck)

  • Agile Visualisation (Roassal)

  • Moldable Tools (Tudor Girba, GTools)

  • EToys and Scratch visual programming for children

I saw Smalltalk ideas in the 1981 Byte article, got hands-on and seduced in 1991, and we have used it ever since in our products and tools. Capers Jones’ 2017 research confirms Smalltalk still offers a 2-3x productivity improvement over mainstream languages. Vive la difference!

Application Portfolio - Deriving Value from the Asset

The application portfolio in mature organisations represents a very significant investment over an extended period. Expenditure can easily run into hundreds of millions or even billions. This can be a major asset to leverage to produce value, or a major problem that consumes resources and funds.

Managers, architects and analysts often don’t know where to begin or where to focus to improve value delivered and the contribution of the portfolio to strategic goals. Two things that can help are Taxonomy and a Landscape Health Assessment.

Taxonomy is a common architecture technique where we use a set of categories (usually capability or functional) to organise the baseline applications so that we can detect redundancy (multiple things doing the same job), gaps (where we do not have something or the current solution is not adequate) and opportunities (where there is something useful but it is not widely deployed; or there is an easy “off the shelf” solution for a gap). A good starting Taxonomy can often be obtained as an industry or domain reference model.
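The mechanics of that detection step can be sketched very simply (the applications and capability categories below are hypothetical): group the baseline portfolio by taxonomy category, and redundancy and gaps fall out immediately.

```python
# Hypothetical portfolio: each application tagged with the capability
# it supports, drawn from a reference taxonomy.
portfolio = {
    "LegacyCRM":   "Customer Management",
    "SalesForceX": "Customer Management",   # redundancy: two apps, one job
    "PayRollPro":  "Payroll",
}
taxonomy = ["Customer Management", "Payroll", "Risk Analysis"]

# Organise the baseline applications under the taxonomy categories.
by_capability = {cap: [] for cap in taxonomy}
for app, cap in portfolio.items():
    by_capability[cap].append(app)

# Multiple apps in one category = redundancy; empty category = gap.
redundancies = {c: apps for c, apps in by_capability.items() if len(apps) > 1}
gaps = [c for c, apps in by_capability.items() if not apps]

print(redundancies)  # {'Customer Management': ['LegacyCRM', 'SalesForceX']}
print(gaps)          # ['Risk Analysis']
```

Real assessments add many dimensions (deployment reach, fitness, cost), but even this trivial grouping makes overlaps and holes visible at a glance.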

A Maturity Model is less common. In fact, several years ago we were doing a consulting assignment and looked in vain for a “maturity model” or “portfolio review model” for the application landscape. In the end we created one which we have used since. I recently revised this to include guidance based upon findings (as we have for other models including our Data Management Maturity Model) and we have automated it on our Maturity Model Platform, in turn based upon our Enterprise Value Architect tooling.

This provides a quick, guided, automated way to move from little knowledge to a robust view on the application portfolio status; scores on several important health dimensions; and recommendations of actions to improve the health of the portfolio and value delivery to the business. The instrument also looks at strategic issues and leveraging technology. For a limited time, you can try it free. You can save results, retrieve them in future and compare them over time or scenarios.

Take the Application Landscape Assessment

We welcome feedback and further questions.

If you are keen to build Application and Solution Architecture Skills, consider our Techniques and Deliverables of Application and Solution Architecture online live 5 day course (31 Oct - 4 Nov).

You can read the full details or enrol here.

Zero CRUD* with Domain Models and Patterns

Many organisations still have armies of developers grinding away writing Create, Read, Update, Delete (*CRUD), enquire and report user interfaces and business logic - even in “Agile” projects! With a complex domain model, it can easily consume more than 60% of project effort. Indeed, some administrative systems are nearly 100% CRUD!

One solution is to resort to low-code or no-code tools, but organisations resist them for a variety of reasons, including lack of standards, limited skill availability, strategic risk and the fact that many of these only work well in narrow use cases.

It is really important for stakeholder-facing applications and websites to have really good user interfaces, so what to do? One solution we have practised for over two decades, and adapted for the web and mobile, is to have a domain-driven user interface which exploits patterns for the commonly required functions.

Patterns consist of UI code templates which provide layout, controls and common business logic. They have placeholders where the specifics of the entity / concept to be captured, edited, updated or deleted etc. are plugged in. The domain specifics (Concepts, Relationships, Properties) are provided to the application as models which are accessible at generation or run time.

For statically bound environments (e.g. C++, Java, C# etc.) the customisation of a pattern with domain specifics can be done at compile time. In effect many interfaces conforming to the pattern are generated into the production application. For dynamically bound environments (e.g. Smalltalk, Python, JavaScript) the generation can even be done at runtime, effectively serving the code to the client where it will execute through interpretation.
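A minimal sketch of the idea (the domain model and templates are invented for illustration): a UI pattern is a template with placeholders, and the domain model supplies the specifics at generation time.

```python
# A UI "pattern": a template with placeholders for domain specifics.
EDIT_FORM_PATTERN = """<form action="/{entity}/save">
{fields}
  <button>Save {entity}</button>
</form>"""

FIELD_PATTERN = '  <label>{label}<input name="{name}"></label>'

# A fragment of a domain model: concepts and their properties.
domain_model = {
    "customer": ["name", "email"],
    "invoice":  ["number", "amount"],
}

def generate_edit_form(entity):
    """Instantiate the edit pattern for one domain concept."""
    fields = "\n".join(
        FIELD_PATTERN.format(label=prop.title(), name=prop)
        for prop in domain_model[entity]
    )
    return EDIT_FORM_PATTERN.format(entity=entity, fields=fields)

print(generate_edit_form("customer"))
```

Add a concept or property to the model and every conforming interface picks it up, with no hand-written CRUD screen; in a late-bound environment the same generation can happen at runtime.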

This approach has numerous advantages, including:

  • Major reductions in code base size and hence effort in development and maintenance

  • Consistency of user interface and logic produced across developers and applications

  • Higher quality since the patterns can be developed by expert developers and thoroughly tested

  • Reduced time to market and greater agility. In the case of late bound environments it is even possible to extend the domain model at runtime and have the interface adapt immediately without any deployment

  • Massive reductions in effort when user interface refreshes are required (e.g. the latest / greatest JavaScript framework makes a new style “required”)

If you would like to see a talk and demonstration of this in action, you can watch the video here.

If you would like to skill up on the latest in Application and Solution Architecture thinking, take a look at our upcoming course: Techniques and Deliverables of Application and Solution Architecture.

#agile #SolutionArchitecture #DomainDriven #SoftwareDesign #Patterns

Context is Worth 80 IQ Points

This quote is attributed to one of our favourite great thinkers, Alan Kay, founder of Object Orientation, Graphical User Interfaces and other major innovations. We believe it applies to Business Architecture in a major way.

Most Business Architecture methods/approaches/languages (TOGAF®, BizBok, Archimate®...) originated from an Information Systems/Technology perspective. This is reflected in their view of concepts relevant to the business, which typically include: Organisation (Actor, Role), Process, Service, Capability, Motivation (Driver, Goals, Objective), Metric, Function and Contract. In their latest incarnations they may also recognise Value Chains and Course of Action. The former does relate to the position of the organisation in its industry, while the latter relates to high level change/initiatives.

What is typically missing, in our view, is consideration of the context in which we operate in a serious way. The context includes: Competitors, Product and Service Offerings, Legal and Compliance Environment, Technology, Political Climate, Society, Various Stakeholder Groups (Customers, Agents, Suppliers, Channels, Business Partners, Unions, Regulators, Industry Bodies…) and the Value we exchange with them, the Economy, Resources, Ecology, etc. all of which can provide Opportunities and Threats.

A thorough approach should contemplate these issues. There is no point designing a great new internal combustion engine vehicle in a world where legislation and public sentiment will prevent us selling it. There is no point devising a strategy where there are no resources to realise it. There is no point launching a physical record company into a media space that has gone fully digital (unless we want to be a niche player).

Being fully cognisant of our context helps us be much more intelligent about our strategy, our architecture and the resultant initiatives.

TOGAF® and Archimate® are Registered Trademarks of The Open Group. BizBok is a publication of the Business Architecture Guild.

How to measure business capability improvement objectively after transformational projects have been implemented?

I recently had a delegate on our Business Architecture Mastery Programme ask the above. It is a deep question with a number of dimensions. We have some ideas and a bit of experience...

Capability

First, we have a different take on what a Capability means. Most methods define it a bit fuzzily as "something a business can do" or similar. That is not too different from a Business Function. They use Capability so people don't get confused with Functions used in Organisation Design, which might refer to “HR” or “Finance”.

We think a Capability implies quite a bit more. It implies delivering something of value to a client (potentially internal). There is much more detail in an earlier post. See comments for link.

Motivation

When we want to start discussing improvement, we then also need to think about motivation. What do we want to change (improve), why and how? These motivations could come from a number of sources and will be different for various stakeholders. We need to find a way to merge them and reach consensus on what "improvement" means. Please see the blog entry for a view on merging motivations (link in comments)

Metrics

Once we understand what we value and how it should be improved, we then need suitable metrics. These will depend upon the agreed goals. One stakeholder group (eg investors via the stock exchange) may only be interested in financial performance this quarter. Another stakeholder group (say employees) will be interested in job security. Another group (the community) will want to be assured that we have improved our environmental footprint, etc.

There are a variety of ways we can measure a business, its performance and its health, including traditional financial measures, balanced scorecards, customer satisfaction, sustainability metrics, strategic health checks, maturity models and industry metrics frameworks, etc.

Baseline

Next we need to know the baseline we are improving from. That involves gathering the facts for the chosen metrics via a variety of methods including accounting, scorecards, business intelligence, survey, workshop, etc.

Run the Initiatives

Finally, we implement our changes and then measure again.

Tracking

For tracking, we like to use a hierarchical model proceeding through layers of mission/goals/objectives, which we then colour code based upon current performance (red for poor through green for excellent). This gives a visual "heat map" of where to focus attention. Put it on a wall! Then measure again in about 3-6 months, depending upon the volatility of your industry. Using the new measures, update the colour-coded model. Of course, also update the model itself with new concerns, insights and learning.
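The colour-coding step can be sketched as follows (the thresholds and objectives are purely illustrative; in practice these are agreed with stakeholders per metric):

```python
# Map a 0-100 performance score onto a red/amber/green heat-map colour.
# Thresholds here are illustrative, not prescriptive.
def heat_colour(score):
    if score < 50:
        return "red"     # poor: needs attention
    if score < 75:
        return "amber"   # adequate: watch and improve
    return "green"       # excellent

objectives = {"Reduce churn": 35, "Grow revenue": 72, "Improve NPS": 85}
heat_map = {name: heat_colour(score) for name, score in objectives.items()}
print(heat_map)
# {'Reduce churn': 'red', 'Grow revenue': 'amber', 'Improve NPS': 'green'}
```

Re-running this over fresh measures each quarter makes the direction of travel visible, which is the point of the heat map.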

#capability #improvement #metrics #businessarchitecture

Product Generator (3 of 3)

In the earlier posts we covered what a product/service/business model innovation might look like and how to generate ideas. Here we summarise general guidelines we can leverage when contemplating product and service options:

  • Does it provide a recognisable and desired transformation?

  • Does it offer exceptional client value?

  • Is it easy and convenient to find, evaluate, acquire, migrate to, use, integrate, upgrade?

  • Does it generate an emotional response in the client?

  • Is it a blue ocean play that will encounter minimum competition and still attract a premium price?

  • Can it be sustainably and profitably offered at scale?

  • Have we used all options to reduce capital dependency, to minimise physical components, increase intelligence/utility and to streamline production, duplication, delivery and servicing?

  • Is it a win for the customer, us, suppliers and society at large?

  • Have we contemplated constraints and risks and ameliorated these as much as possible?

  • Have we created unique benefits which will be difficult to replicate?

  • Is there a “unique selling proposition” / “purple cow” - i.e. will it stand out as something different and worth consideration?

  • Have we formulated metrics to track performance and improve benefits through the lifecycle?

Getting there may not be a linear path. We may have to ideate, evaluate, prototype, iterate, pivot, etc. until we get it right.

Summary: Provide more value to customers, more conveniently, quickly, cheaply, sustainably and repeatedly (while ensuring we can sustain delivery, margin and ameliorate risks). The associated canvas may help to consider all the dimensions.

#Product #Service #Strategy #Innovation #BusinessArchitecture #EnterpriseArchitecture #Canvas