
Thursday, 10 April 2014

Three research challenges: my presentation at the NSF Workshop on service innovation

As a participant at the NSF workshop, I presented what I consider to be the three main theoretical challenges for service research. Note that these are not the applied challenges - those are different entirely and would be a different conversation regarding my work at the institute with industry. Rather, I have focused on theoretical and fundamental challenges because this workshop, sponsored by the US National Science Foundation and the National Academy of Sciences, is about guiding fundamental research.

I begin by saying that my approach doesn't assume any sacred cows of knowledge. Instead, I propose that most of our disciplinary knowledge exhibits historical path dependencies, and many of the assumptions from that history have since changed. In other words, if you want to build a house for 21st-century living, you might need to go back and evaluate the nature of your bricks, wood, mortar and cement, and their ability to come together as that modern house, when those materials were made to build a very different house 100 years ago.

So what are the challenges?

The customer as endogenous in the system
From a service-dominant logic perspective, service is co-creation (Vargo, Maglio & Akaka, 2008). That means the customer is part of the system, not outside it. Current methodologies, from Big Data to many systems approaches, do not often treat the customer as endogenous in the system. We need to develop methodologies that treat the customer as an entity within the system - as a human sensor, a human intelligence, a creator of meanings and contexts; in other words, a resource-integrating and contributing entity. We talk about customer behaviours, but not about the customer as an endogenous entity within the system (co-creation by another name, for scientists and engineers). For example, when we talk about material technologies, we can speak of their resistive, absorptive and other properties. Why do we not talk about customers and their abilities within the system - their capability to absorb variety (a big capability for scaling systems), or their resilience? Because we lack the methodology and the science to understand them.

Second, there are two ways to research an aquarium as a system: as a viewer looking into the aquarium, or as a fish within it. In the former, the research is for the benefit of the manager, policy maker or owner of the aquarium. In the latter, the research is for the benefit of the fish. We need to question the position, mindset and perspective of the researcher when constructing systems methodologies and interpreting the findings from the research. This is becoming increasingly important as the customer resources used to co-create value evolve into more structured resources, e.g. personal data. The customer, as a more formalised entity increasingly empowered through technologies, is a driver of future economic opportunities as both a consumer and a producer.
The application of personal data in co-creating value with a product or service could have a massive multiplier effect on the future personal data economy and the national economies of the future.

The incomplete product
The boundary between a service and a material product is increasingly obscured. As material technologies evolve, a physical product can be designed to be more dynamically reconfigurable in order to fit the diverse and dynamic interactions of actors in their contexts. Dynamic reconfigurability as a concept has been widely used in system design, enabling systems to ‘have the capability to modify their functionalities, adding or removing components and modify interconnections between them’ (Rana, Santambrogio and Sciuto 2007). With the development of pervasive digital technology, dynamic reconfigurability becomes possible in future products because products could have a ‘reprogrammable nature’. This means a product could gain new capabilities even after it has been designed, manufactured and sold (Yoo, Boland and Lyytinen 2012, p.1399). Thus, products may not need to be ‘finished’ to be transferred to the customer, but could be designed such that contexts of use are incorporated into a modular product design and ‘finished’ through customer resources (e.g. personal data) brought into consumption through pervasive digital technologies. This ‘incompleteness’, resulting in open and flexible product boundaries, allows offerings to materialise multiple affordances and to dynamically alter those affordances with changing contexts. Products evolve to become platforms for service that can provide increasing returns to scale through standardisation even while being deeply and uniquely personalised. For example, the iPhone is fully standardised and enjoys economies of scale, yet can be fully personalised, because of the boundary between the digital ‘app’ layer and the material ‘phone’ layer.
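The idea of an ‘incomplete’ product can be sketched in code. The sketch below is purely illustrative (all names are hypothetical, not any real product's API): a fixed material layer ships from the factory, while a reprogrammable digital layer lets capabilities be added, removed and invoked after sale, so the product is ‘finished’ in the customer's context of use.

```python
# Hypothetical sketch: an "incomplete" product as a reprogrammable platform.
# The hardware layer is fixed at manufacture; the capability layer is
# dynamically reconfigurable, like apps on a standardised phone.

class IncompleteProduct:
    def __init__(self, hardware_id):
        self.hardware_id = hardware_id   # standardised material layer, fixed
        self.capabilities = {}           # digital layer, reconfigurable post-sale

    def install(self, name, behaviour):
        """Add a new capability after the product has been sold."""
        self.capabilities[name] = behaviour

    def remove(self, name):
        """Dynamic reconfigurability also means removing components."""
        self.capabilities.pop(name, None)

    def use(self, name, *args):
        """Invoke a capability in the customer's context of use."""
        return self.capabilities[name](*args)

# The same standardised unit, personalised by its owner after purchase.
phone = IncompleteProduct("unit-001")
phone.install("greet", lambda who: f"Hello, {who}!")
print(phone.use("greet", "Alice"))   # behaviour the factory never shipped
```

The design point is that economies of scale live in the constructor, while personalisation lives entirely in `install` - the two layers meet only at a thin, standardised boundary.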

New Transaction Boundaries, Economic and Business Models
An economic model is the model of an ecosystem (such as a market) that distributes rents (or revenues), either through the pricing mechanism or through regulation, according to what an entity (such as a firm) does to stay within the ecosystem. New economic models, often arising from new business models and/or new entrants, redistribute rents within the ecosystem, occasionally resulting in the exit of existing entities (disruption). With the blurring of boundaries between material and digital, firm and customer, product and service, there is a need to understand new ways of obtaining revenues and the nature of transactions in the future digital service economy. A transaction is defined as ‘mutually agreed-upon transfers with compensation within the task network’ that ‘serves to divide one set of tasks and others’ (Baldwin, 2008, p.156). Baldwin's (2008) conceptualisation of transactions is developed from a ‘systems of production’ perspective, which enables us to analyse the dependencies between agents (i.e., consumers and producers). The value-creating context - the unit of analysis for service, jointly co-created by the customer and the producer - creates an interesting challenge for modularity and product/service architecture in new innovations. Modularisation creates new thin crossing points where transaction costs are low (p.156), and with them opportunities for new boundaries where new transactions and new business models can be created.

The above challenges are not merely research and innovation challenges; they affect education and skills as well, since there are increasingly greater overlaps in domain knowledge, particularly between engineering and computer science, and the current reductionist curriculum is not helping to develop future engineers, technologists and managers.

Thursday, 6 March 2014

The HAT

We live in a world today where data belongs to those who collect it. So even though it's data about me - for example, my purchases at a supermarket, my searches online, or the spend on my credit card - that data is owned by the supermarket, by Google or by the bank, because they own the technology that made the data collection possible. Without the technology, this data wouldn't even exist. But since we don't own it, and often don't even have access to it, we can't really benefit from integrating it to make our own lives better. In fact, even if the data is returned to us, we don't really know what to do with it, because these data are vertically siloed - the format and presentation were all designed to help the institution that collected it, not to help us.

So we now find ourselves in an increasingly digital and connected world where much of our lives can be captured digitally - very diverse types of data on transactions, interactions, and the movement of people and objects - what we often term BIG DATA. And as things become connected through the Internet-of-Things, even more data is being generated.

But again, all this data sits somewhere else owned by different institutions.

And then, as individuals, we become increasingly worried about privacy, confidentiality, security and trust.

Some of us may get so worried that we start to withdraw from being too digitally visible: we cancel our Facebook accounts, we stop using Google, we don't want our data stored anywhere, because we worry about who has what data about us. Government then takes up this privacy and security issue and could start to regulate, thereby increasing costs. In addition, data starts becoming 'noisy', i.e. it's not true (much like the way I use Google search to look for answers to my crossword puzzle so that they won't know whether it's a genuine search). This means the quality of data goes down. With increasing regulation and decreasing data quality, institutions could become reluctant to invest in innovation and make cool stuff, we don't get more advanced technologies, and it all ends badly for everyone. We get into a downward spiral: fewer business opportunities, less innovation, fewer jobs.

How do we reverse this and help the digital economy spiral upwards?

Introducing the HAT project. It's a Research Councils UK Digital Economy £1.2m funded project with 6 universities, around 20 researchers and a whole host of companies like GlaxoSmithKline, Dyson, DCS Europe…

The HAT takes on three challenges, and we'd like to think we can solve them all - but they need to be solved simultaneously to create an upward spiralling effect.

First, about privacy and confidentiality, and the 'shrinking supply' and 'quality' of data. We are building a human database where the data is owned by individuals - by us. A bit like your email, your HAT should contain all the data you would like to have to make your life better: a place to hold Internet-of-Things data from your home, your personal data from social media, your health data, and so on. If we own our data, we can use it, which solves all the sharing issues that vertical industries have; and if we keep it secure in a trusted environment, like we give our money to our bank, it hopefully solves the security and privacy issue. If we own our data and treasure it as a digital asset, and it is valuable and useful to us in the way we lead our lives, we will want to generate more of it - basically, to become more digitally visible - but we'll only do that if the data is ours and does not belong to someone else. And since we are using the data for ourselves, we will make sure it is as accurate as possible, solving the quality issue.

Second, about the 'worth' and 'value' of the data. Remember I said that this is still all vertical data, and often the data scientists looking at big data out there are trying to predict us by putting the data on the inside and the individual on the outside. But making sense of aggregated vertical data is a bit like making sense of snow drift by analysing snowfall. They're related, but not the same. Vertical data needs to be reorganised and transformed in a 'horizontal' way so that human beings can make better decisions from it. And data can never tell the whole story. It really shouldn't, because human beings interact with our data and we also like to be in control, so the human person isn't passive; we are more like an intelligent and adaptive sensor. The human person can actually perform a service on the data, contextualising it to make it meaningful to ourselves so that we can use it. We don't just want smart things; we want 'smart us'.

So through a service-dominant logic we develop a special kind of database: a human schematic database that organises vertical data according to the way we create value with goods and services and use information to live our lives. And we let individuals co-create that database with their own sense-making and intelligence. For example, you can have data about the temperature in your home from a smart home system, the temperature in your car from the car company, temperature data from your office building, and the weather data outside - all from different sources and institutions. But what you really want to know is: 'What is the lowest temperature I will encounter today, so that I know what to wear?' To answer that, you need to acquire all these data into the HAT and then transform them into something useful for your decisions, which is what the HAT can do. The HAT takes vertical-type data and transforms it into horizontal-type data.
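The temperature example can be sketched as code. This is a toy illustration, not the HAT's actual schema or API - the source formats and field names below are invented to show the idea: each vertical silo reports temperature in its own shape, and the transformation maps them onto one person-centred schema that can answer the person's question.

```python
# Hypothetical sketch of a vertical-to-horizontal transformation.
# Each "vertical" source holds temperature in its own institutional format.
smart_home = {"sensor": "living-room", "temp_c": 21.0}
car = {"cabinTemperature": 18.5}
office = {"zone": "3F", "celsius": 22.5}
weather = {"forecast_low_c": 7.0}

def to_horizontal(sources):
    """Map each silo's format onto one person-centred schema."""
    return [
        {"place": "home",    "temp_c": sources["home"]["temp_c"]},
        {"place": "car",     "temp_c": sources["car"]["cabinTemperature"]},
        {"place": "office",  "temp_c": sources["office"]["celsius"]},
        {"place": "outside", "temp_c": sources["weather"]["forecast_low_c"]},
    ]

def lowest_temperature_today(horizontal):
    """The person's actual question: what's the coldest I'll encounter?"""
    return min(row["temp_c"] for row in horizontal)

day = to_horizontal({"home": smart_home, "car": car,
                     "office": office, "weather": weather})
print(lowest_temperature_today(day))   # 7.0 - wear a warm coat
```

No single institution could answer the question from its own silo; the value appears only once the data is reorganised around the person's day.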

What happens next is the fun bit.

When data is meaningful to us, it is not just of VALUE to us - it is now WORTH something.

So the third challenge for the HAT is about creating a market for all this meaningful data. Having all this data to ourselves isn't going to be useful if we can't trade or exchange it and have it surface in the economy, so that GDP, wealth, businesses and jobs would grow - what economists call a multiplier effect. Having all this data is like having money hidden under your mattress: it does no one any good. This is where the HAT is also a market platform. Platforms are meeting places where exchanges can happen: a singles bar is a platform for single men and single women, and a bazaar is where buyers and sellers meet to trade. The HAT is not just a database but also a multi-sided market platform for us as individuals to exchange some of our data, so that we can, say, buy services like advice on our health, or get personalised grocery bundles based on our diet data. Doing this will create a market for personal data, which is important for the future growth of the digital economy - but doing it in a way that fits our lives better, is more democratic about how data is owned and accessed, and in general helps institutions tailor what they offer in a way that is scalable.
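The exchange mechanics can be sketched in a few lines. Again, this is a hypothetical illustration (the class and method names are invented, not the HAT's real platform API): the individual offers a slice of their data, and a firm returns a personalised service in exchange - the platform's only job is to match the two sides.

```python
# Hypothetical sketch of a multi-sided exchange: individuals offer data
# slices, firms return services in exchange. Names are illustrative only.

class DataExchange:
    def __init__(self):
        self.offers = {}   # person -> data slice they are willing to trade

    def offer(self, person, data_slice):
        """An individual lists a slice of their data for exchange."""
        self.offers[person] = data_slice

    def match(self, person, firm_service):
        """A firm provides a service in return for the offered slice."""
        data = self.offers.pop(person, None)
        if data is None:
            return None    # nothing offered, so no exchange happens
        return firm_service(data)

market = DataExchange()
market.offer("alice", {"diet": ["low sugar", "vegetarian"]})
bundle = market.match("alice",
                      lambda d: f"grocery bundle for {', '.join(d['diet'])}")
print(bundle)
```

The key property is that the individual controls what enters `offer`; the firm never sees the raw silo, only the slice the person chose to trade.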

This is of course not an easy project. We need ethnographers who research how we use data in our lives; behavioural economists who look at how our behaviours change; market economists who understand the incentives on a platform, so that both individuals and firms come together to exchange data, products and services; business model, marketing and operations specialists; computer scientists; database programmers; and designers for user experiences. The HAT team has all of that capability. The best bit is that we are working for both sides: for institutions, so that they can give us good advice and personalised products in a way that is scalable, and for us, so that we own a platform to use data better in the way we make decisions.

In summary, the HAT lets you as an individual acquire data and build your own repository of horizontal, meaningful data that is useful and can help you make decisions (i.e. contextualisation), and then lets you decide whether you want to trade or exchange it with firms for discounts or other cool products and services. And when we create a horizontal platform that fits human lives, we create the next stage of the internet - that of people and things - and an epic collision of all the vertical industries of manufacturing, service and internet companies. New horizontal-type business and economic models that are human-centric will emerge, not just the old ways of doing business. That would be just awesome.

Best of all, we think we can bring TRUST back into the digital economy. And we do that by making all of us, who have largely disappeared into words like 'citizens', 'segments' and 'big data', unique again - paradoxically, by making each one of us a 'server' (standardisation) and yet unique with our own data (personalisation). By doing so, we hope to make the use of data more democratic than it is today.

We think that everyone should have a HAT of our own data, like the way we have email or bank accounts. The HAT will be ready in 2015, and we expect it to be free, although you can choose your own HAT trusted provider, who could differentiate themselves by giving you additional services, like the way your email service or your bank does. We want to start a revolution to own, control and use our own data, for the good of the economy! So we hope you will follow our blog and be part of that revolution!

PS: if you are a developer, or an institution interested in integrating the HAT into your offerings or developing applications on the HAT, please sign up on our blogsite as well! A software toolkit and APIs will be released from July 2014 in a trade launch, and October 2014 will see a HATFest: a week of 'show and tell' sessions of interesting applications around the HAT platform! Consultants helping vertical industries evolve new business models in the horizontal or IoT domain are also welcome!