Improving User Intelligence with the ELK Stack at SCA

SCA is a leading global hygiene and forest products company, employing around 44,000 people worldwide. The Group (all companies within SCA) develops and produces sustainable personal care, tissue and forest products. Sales are conducted in about 100 countries under many strong brands. Each brand has its own website and its own search.

At SCA we use Elasticsearch, Logstash, and Kibana to record searches, clicks on result documents and user feedback, on both the intranet and external sites. We also collect qualitative metrics by asking our public users a question after showing search results: “Did you find what you were looking for?” The user has the option to give a thumbs up or down and also write a comment.

What is logged?

All search parameters and result information are recorded for each search event: the query string, paging, sorting, facets, the number of hits, search response time, the date and time of the search, etc. Clicking a result document also records a multitude of information: the position of the document in the result list, the time it took from search to click, and various document metadata (such as URL, source, format, last modified, author, and more). A click event is also connected with the search event that generated it, as is each feedback event.

Each event is written to a log file that is monitored by Logstash, which then creates a document from each event and pushes it to Elasticsearch, where the data is visualized in Kibana.
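
To make this concrete, here is a minimal sketch (in Python) of the shape such a search event document might have once it reaches Elasticsearch – indexed directly with the Python client here purely for illustration. The index name, field names and client version are assumptions, not our production schema:

    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # assumed local test cluster

    # Illustrative shape of one search event; all field names are hypothetical.
    search_event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "search",
        "query_string": "annual report 2014",
        "page": 1,
        "sort": "relevance",
        "facets": {"source": "intranet", "format": "pdf"},
        "num_hits": 87,
        "response_time_ms": 42,
        "session_id": "b1f3a9",  # lets click and feedback events link back to this search
    }

    es.index(index="search-events", body=search_event)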


Due to the extent of information that is indexed, we can answer questions from the very simple, such as “What are the ten most frequent queries during the past week?” and “What do users who click on document X search for?”, to the more complex, like “What is the distribution of clicked documents’ last modified dates, coming from source S, on Wednesdays?” The possibilities are almost endless!
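
As a hedged illustration, the first of those questions maps to a single terms aggregation over the indexed events (reusing the assumed client, index and field names from the sketch above; query_string must be mapped as a non-analyzed/keyword field for the aggregation to group cleanly):

    # Ten most frequent queries during the past week.
    response = es.search(
        index="search-events",
        body={
            "size": 0,  # we only want the aggregation, not the individual hits
            "query": {"range": {"@timestamp": {"gte": "now-7d/d"}}},
            "aggs": {
                "top_queries": {"terms": {"field": "query_string", "size": 10}}
            },
        },
    )
    for bucket in response["aggregations"]["top_queries"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])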

The answers to these questions allow us to tune the search to better meet the needs of the users and deliver even greater value. Today, we use this analysis for everything from adjusting the relevance model and adding or removing facets, to changing the layout of the search and result pages.

Experienced value – more than “just” logs

Recording search and click events is common practice, but at SCA we have extended this to include user feedback, as mentioned above. This increases the value of the statistics even more. It allows an administrator to follow up on negative feedback in detail, e.g. by recreating the scenario. It also enables implicitly evaluated trial periods for change requests. If a statistically significant increase in the share of positive feedback is observed, then that change made it easier for users to find what they were looking for. We can also find the answer to new questions, such as “What’s the feedback from the users who experience zero hits?” and “Are users more likely to find what they are looking for if they use facets?”
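
To judge whether such an increase is statistically significant, one standard option is a two-proportion z-test on the share of positive feedback before and after a change. A minimal sketch with purely illustrative numbers (not our real figures):

    from math import sqrt
    from scipy.stats import norm

    def feedback_significance(pos_before, n_before, pos_after, n_after):
        """One-sided two-proportion z-test: did the positive share increase?"""
        p1, p2 = pos_before / n_before, pos_after / n_after
        pooled = (pos_before + pos_after) / (n_before + n_after)
        se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
        z = (p2 - p1) / se
        return p2 - p1, norm.sf(z)  # increase in share, one-sided p-value

    diff, p = feedback_significance(240, 400, 300, 450)
    print(f"share increased by {diff:.1%}, one-sided p = {p:.3f}")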

And server monitoring as well!

This setup is not only used to record information about user behavior; we also monitor the health of our servers. Every few seconds we index information about each server’s CPU, memory and disk usage. The most obvious gain is the historic aspect. Not only can we see the resource usage at a specific point in time, we can also see trends that would not be noticeable if we only had access to data from right now. This can of course be correlated with the user statistics, e.g. a rise in CPU usage may correlate with an increase in query volume.
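
A hedged sketch of such a collector in Python, using psutil; the index name and the ten-second interval are illustrative choices, not our actual configuration:

    import socket
    import time
    from datetime import datetime, timezone

    import psutil
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    while True:
        es.index(
            index="server-metrics",
            body={
                "@timestamp": datetime.now(timezone.utc).isoformat(),
                "host": socket.gethostname(),
                "cpu_percent": psutil.cpu_percent(interval=1),
                "memory_percent": psutil.virtual_memory().percent,
                "disk_percent": psutil.disk_usage("/").percent,
            },
        )
        time.sleep(10)  # "every few seconds"; tune to taste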

Benefits of the ELK Stack

What this means for SCA is a search that is ever improving. We, the developers and administrators of the search system, are no longer in the dark about which changes actually change things for the better. The direct feedback loop between the users and the administrators of the system creates a sense of community, especially when users see that their grievances are being tended to. Users find what they are looking for to a greater and greater extent, saving them time and frustration.


We rely on Elasticsearch, Logstash and Kibana as the core of our search capability, and for the insight to continually improve. We’re excited to see what the 2.0 versions bring. The challenge is to know what information you are after and create a model that will meet those needs. Getting the ELK platform up and running at SCA was the part of the project that took the least amount of our time, once the logs started streaming out of our systems.

A Health Care Information Commons Vision: from frozen assets to liquid gold

This is the second post in a series (1) unpacking interoperability in the healthcare system. This post focuses on semantic and technical interoperability, taking a systemic overview.

The future of health care relies on the improved flow of captured patient health information across the whole care continuum. This means a shared information system linking systems and devices from participating health care organisations while maintaining patient privacy and security standards. Such a realization would not only enhance the clinician and patient experience but also enable faster treatment and better care coordination for patients.

Information Commons is an information system, …, that exists to produce, conserve, and preserve information for current and future generations.

A seamless and secure, heavily linked hub, providing point-of-care access to critical patient data and care decision support information for the delivery of timely care, reducing the duplication of tests and procedures.

All in all, this has to be built upon a participatory community paradigm, where clinicians, policy makers and leaders, and patients share a vision to create an interoperable information space – one that is sustainable regardless of previous lock-in mechanisms set by different technical and semantic standards, vendors, and process and policy making.

Healthcare Information Commons

How do we create an interoperability climate?

The chances for interoperability lie in the development of new pilots with strong collaboration. They are generally more successful where they are based on patient or illness groups, and are value-oriented, open and scalable. Post requirements phase, iteration based on early adopters’ feedback can identify the need for improvements and enhancements around the relevancy, format and visual display of data and information and the usability of the solution, and provide insight into workflow impact. The Information Commons is also a good arena for clinicians to share positive anecdotes from their experiences, upon which scalable pilots can be expanded.

Such developed infrastructure and services can also support or be leveraged by other national or regional health initiatives.

Technical Layers of interoperability

Interoperability can cover many layers but at its basis would be an interoperable access layer that integrates and securely shares clinical data from multiple sources giving one point of access. The user interface (GUI) could then provide and display data and information based on stakeholder users and medical/situational context.

Such a layer would have to accommodate and support various data from the distributed system of actors, aligning both to open standards while at the same time being plastic enough in design and instantiation.

Interoperability not only covers the sharing of information but also its usage. This may include added functionality by the EHR vendor themselves or the creation of further value-adding knowledge layers that can take advantage of both structured and (the untapped wealth of) unstructured data within EHRs.

Findwise in its EU funded KConnect project is doing just that. It is currently collecting use case studies from Jönköping (RJI/Qulturum) in order to create a pilot solution for clinicians to take advantage of ‘hidden’ textual data.

Questions of interoperability also lie in the physical user experience of the systems themselves. Should the basic layer provided by EHR vendors be open to value-added software from other parties? Should such software be embedded, or given its own GUI? Which option is ultimately best for the clinician workflow, and for the agility of software solutions in supporting new value-based outcomes and reiteration for improvements in efficiency and effectiveness?

Semantic Transformer

The annotations made in healthcare systems across different domains all have a very similar outset, but lack a coherent interoperability mechanism to work smoothly outside the local context. On international, national and regional levels there should be services that act like the electric grid that provides society with energy to be used in many contexts: a semantic grid that hosts controlled vocabularies within the domain, but also shares practices and processes. With the use of open standards, these could bridge across organisational boundaries and help clean up the currently messy healthcare information space.

The healthcare information commons does not per se have to be one system, but rather an interoperable set of services/systems that share standards to be able to exchange information and data – very similar to the way the Internet and linked data work today, not restricted by walled gardens. The governance of the commons should be a matter of public service, with sustainable resources and an open governance agenda that can invite participation and engagement. No single actor in the network, be it a large hospital, private caretaker or regional public governing body, will be able to take care of this single-handedly. It should be a true “commons” undertaking!

The infusion of the Information Commons into everyday healthcare provisioning use cases with semantic transformer applications could be in several modalities: finding and acting upon information or contributing in the local context.

At the data entry or capture point, there will be options to add semantic layers and attributes to the type of content and data provisioned. An easy way to illustrate this is the emerging use of templated entities and properties for the MedicalTypes, MedicalConditions, Drugs, Guidelines and Codes from controlled vocabularies like SNOMED CT, MeSH, ICD-10 and the like.
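
As a hedged illustration of what such a templated entity could look like: a schema.org-flavoured annotation, held here as JSON-LD in a Python dict, linking one condition to its SNOMED CT and ICD-10 codes. The exact properties your vocabulary services expose may of course differ, and the codes are shown for illustration:

    import json

    # A templated MedicalCondition entity following schema.org's medical types.
    diabetes_annotation = {
        "@context": "https://schema.org",
        "@type": "MedicalCondition",
        "name": "Diabetes mellitus type 2",
        "code": [
            {"@type": "MedicalCode", "codeValue": "44054006", "codingSystem": "SNOMED CT"},
            {"@type": "MedicalCode", "codeValue": "E11", "codingSystem": "ICD-10"},
        ],
    }

    print(json.dumps(diabetes_annotation, indent=2))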

Analogously, using digital cameras in smartphones or other devices means that the user might add “some” metadata or tags about the picture. Devices and sensors add more layers of granularity, with attributes that most end-users never see or bother about. These extra resource descriptions interplay with cloud-based services such as Google Photos, where different algorithms reformat and package the content into new forms, such as contextual albums, scenes and so forth.

A set of semantic transformer application layers should be intertwingled with the Healthcare Information Commons: firstly to make easy linkages between data sets – as the Web of Data scenarios and Linked Data propose – but also to provide smarter integration points in back-end supporting processes in the healthcare systems, where more private and locked-in data sets exist about patient conditions, treatments, drugs etc.

The semantic transformer applications could be open APIs developed by the community for the commons, but also commercial applications provided by line-of-business specialist software vendors – as long as all of these layers are compliant with the open standards!

For legacy systems such as EHRs, and off-the-shelf healthcare and business applications that are semantically impaired, these semantic transformer applications could work as a repair kit for old, broken systems. Consequently there would be no need to overhaul all legacy software within the caretaker’s organisation – a smoother migration path to interoperability.

There also exists the need for semantic interoperability between the contextual patient information within the EHR and the provision of clinical decision support information. This could be in the form of internal medical guidelines and best practices, or from external resources such as medical journals or clinical trial reports.

The KConnect project is providing semantic annotation and semantic search services in different languages for clinicians and researchers to access the very latest medical literature. This is achievable by semantically annotating the required medical information (EHRs, guidelines, journals etc.) and having the semantic search engine take full advantage of known key medical entities/concepts and their relationships.

Through the indexing of new information about drug usage, best practices, guidelines, new clinical trials and journals, clinicians can then access up-to-date, relevant information whenever they need it.

In the near future to maximise both clinician and patient user engagement with EHRs, different uses and views of the EHR will have to be driven by suitable context and stakeholder semantics.

Shared Decision making

When moving into value-based health care and outcome measurement (as presented here by Sveus), it is critical that all actors participate on a connected, level playing field, so that communication between healthcare practitioners, patients and their social networks works. This includes the need for shared norms and definitions, as well as systems to support decision making – and, obviously, a harmonised set of metrics to measure outcomes.

As presented by Peter Ubel in his talks and recent book Critical Decisions, it is key that the clinician and the patient are able to share a common view. All practitioners share jargon that does not always communicate well to the receiver. Hence there are plenty of communication breakdowns recorded in everyday practice, leading in the worst cases to “malpractice” for the patient. In the last couple of decades, there has been a shift in power relations between healthcare professionals and patients and their families. Patient empowerment is a good thing, but if things get lost in translation, there is the risk that critical decisions are not fully supported.

With a Healthcare Information Commons pool of resources, there lie opportunities to guide patients and practitioners in their critical decision making, but also to strengthen learning and innovation within the communities of practice, with open feedback loops to the pool.

Privacy & Security upfront

Just as data interoperability can be seen as the sharing of data, data security can be seen as the sharing of data in the right way and data privacy seen as the sharing of data with the right person in the right way. We are naturally concerned as to who may be using our data and want to be able to control its use.

The boundary between citizens’ App data and their medical data is blurring rapidly as App developments and sensors continue to provide new and different data that the individual, health care and clinical research can capitalise on in the effort to move towards better wellbeing and more value-based healthcare.

While data privacy and security have become the headline darlings of the media, they can often be distractors from innovation, masking the true benefits of the flow of information. Just as with physical assets, there are best practices for data misuse prevention, protection and policing. The majority of misuse or abuse of personal data is caused by human error and misjudgement rather than by the failure of technology.

Data interoperability can be better supported when services have clear guidelines to inform citizens as to who, when and how their data is shared, for what purpose and the available steps to alter said process. A better informed public would then see more free data resources being used for clinical research e.g. the Million Hearts initiative in the US where citizen data is being used to lower heart attacks and strokes.

Open regulations, collaboration and co-ordination, along with risk assessment and protection practices such as encryption, anonymisation and de-identification, can all go a long way towards allowing secure data interoperability, be it for personal or aggregated data. IT also has the potential for rule-based access and forensic data-access reports. No system can be made fool-proof, but precautions and a well-designed data breach response plan are achievable.
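
As one small, hedged example of the de-identification step: patient identifiers can be replaced with salted, keyed one-way hashes before records enter an aggregated commons. This sketch is illustrative only, not a complete privacy solution:

    import hashlib
    import hmac

    SECRET_SALT = b"rotate-me-and-keep-me-in-a-key-vault"  # assumed managed out of band

    def pseudonymise(patient_id: str) -> str:
        """Keyed one-way hash (HMAC-SHA256) of a patient identifier."""
        return hmac.new(SECRET_SALT, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"patient_id": "19121212-1212", "hba1c_mmol_mol": 52}
    record["patient_id"] = pseudonymise(record["patient_id"])
    print(record)  # the identifier is no longer directly re-identifiable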

Obviously we do not want all our healthcare records to be out in the open for anybody to use or read, any more than we want our financial records to be in the open. Privacy is really key! This means the Information Commons should work with aggregated data, not the singular set of records for one patient.

Patient safety drives the need for a freer flow of data between actor systems. The medical conditions and contexts set the standards for sharing, where extracts or segments should be possible to share in line with privacy policies.

Future real-life experience exposé

With a recent Swedish report on diabetes care and outcome measurement in mind, it makes sense to illustrate the case of a diabetes patient living and acting in Göteborg, in the west of Sweden. Their medical condition is a lifelong journey with an endocrine system out of order. This has a great impact on the patient’s everyday life, including diabetes-related complications. With a good life balance of training, exercise and eating habits, it is possible to keep glucose patterns in such a way that life expectancy equals anybody else’s.

The use of personal choices to trigger improved behavior gives the person options to choose selected wellbeing (e.g. Weight Watchers), fitness (e.g. Runkeeper) and health monitoring applications. In most cases these are closed ecosystems, e.g. the Health app included in iOS, with options to share progress on social media (in terms of eating well or improving your personal training). Many life science corporations are developing health monitoring applications specific to a medical condition, disease area or treatment (e.g. FreeStyle Libre from Abbott for improved glucose monitoring) that clinicians recommend during patient consultations.

For clinical researchers there are ecosystem-specific toolkits, like the open-sourced Apple ResearchKit. The existence of a closed ecosystem naturally makes it more problematic to share and exchange data. In this space an Information Commons based on open standards makes sense too – where semantic translators could improve the transmission of data from one closed ecosystem to another without privacy infringement.

A Personal Health Record (PHR) is a health record where health data and information related to the care of a patient is maintained by the patient.

In a future, more seamlessly interoperable world, the citizen/patient should be provided with one secure access point to his or her health account, e.g. in Sweden 1177, Mina Vårdkontakter and Hälsa för mig.

The outstanding question: how do we get interoperability between PHRs and wellbeing, fitness and health apps, where it is easy to share vital data in a sound manner?

In this scene, open standards should be applied to create a make-do semantic transformation.

Lastly – interoperability within the Professional Clinician Workplace?

The statements and real-life stories from the trenches of any clinical workplace show a mess of supporting information systems: EHRs that by no means either cooperate or interoperate. Many clinicians realise that they have to provision data into a handful of systems, with significant double manual workload. This comes with risks, given the stressful environment, and many “malpractice” incidents can arise from this workplace disorder.

Each system supports its part of the process. While some software suites try to close down into a ‘one system to rule them all’ paradigm, they still barely lean upon any open standards, and they lack semantic and structured ways to use data and information outside the supporting system’s narrow scope.

A diabetes nurse (post patient consultation) has to enter data into more than 10 different areas, including quality assurance and measurement systems, e.g. NDR in Sweden. In some cases integrated point-to-point solutions have been put in place, but mostly this is not the case, and so unnecessary frustration is created.

In every intervention where clinicians and patients communicate, regardless of whether it is online, remote or on-site, there should be opportunities to tap into the Healthcare Information Commons space, with the potential to find recent new medical treatments, emerging standards/guidelines and breaking news for clinicians, as well as patient-oriented and formatted communications. In the best of worlds, semantic translator applications will bridge between ecosystems inside the personal health space as well as into the workplace environment for clinicians – helping, guiding and improving all dimensions of interoperability.

Concluding remarks

Having the value-based healthcare and outcome measurement domain as a specific health care change driver will push the use of standards on all levels to the limit. In the following blog post in this series, the ambition is to unpack information governance, since data ownership and trust also have to be ironed out. As stated by Prof Michael E. Porter, the capture of data to do proper outcome measurement is one of the major roadblocks ahead. The orchestration of all resources and governance still has to be unfolded. Happily, some building blocks of the Healthcare Information Commons have emerged, so we do not need to reinvent the wheel:

  • The Wikimedia “commons” realm, with all its entries of semantically useful data
  • Standard sets for medical conditions from international collaboration at ICHOM, and in Sweden Sveus; standards from HL7 FHIR, W3C and the Web of Data / Semantic Web. The Swedish National Board of Health and Welfare has an embryonic information structure (not in semantic, machine-readable RDF format). Information intermediaries like Google have settled for simple schemas for health and medicine.
  • Open innovation, and the “open” paradigm, will change evidence-based medicine, Bad Pharma and science on a societal level, as stated by Ben Goldacre (TED), where we as patients, together with clinicians, are able to question treatments based on open data and improve the quality of the Healthcare Information Commons.
  • The technology stack of smarter devices, sensors and things, along with Internet anywhere, cognitive computing and computational knowledge on top of the commons, will bring forward semantic translators.
  • New leaps in collaborative work and development with the use of the notebook theme, in language- and platform-agnostic ways.

Making sense of, and defrosting, health data into liquid gold, improving healthcare for all.

For more information on Findwise research, please visit KConnect and Orios (Open Standards)

Fredric Landqvist research blog
Peter Voisey

Interoperability in Healthcare using Open Standards

The emerging major overhaul of the healthcare system, aided by value-based healthcare and outcome measurement, is inevitable – and that is good!

The outstanding question is how do we infuse Sensemaking in the future Healthcare realm?

The cues for a better interoperable worldview are nothing new. The main obstacles and roadblocks can be narrowed down to the following: closed-down data and information silos, with no governance and policy making that apply the open innovation paradigm. This is the first post in a series (2) unpacking interoperability in the healthcare system.

Open Standards – the remedy for the healthcare system’s incurable prognosis?

The use of open standards to reach interoperability on all levels should be the main driver for all policy making in the healthcare system, regardless of country, region, hospital or clinic. Moving into patient engagement, health monitoring and consumer-centric applications and services, this becomes even more obvious.

In a recent thesis, “Standardization of interoperability in health care information systems” (exec brief presentation), the different levels of interoperability were presented, using the value-based healthcare change in Sweden as background.

Interop Map

The results showed that without a good “interoperability climate”, determined by sustainable resources and clear governance, the other interoperability levels will be problematic. With healthcare provisioning in Sweden as the bedrock, this could unfold into a better orchestrated interoperability practice – from the Government, to the National Board of Health and Welfare, to local regional healthcare providers, hospitals and private clinics, as well as citizen-centric health services and consumer health and wellbeing apps on any platform. For policy makers, this implies that new policies should stress and enforce the use of open standards as a way to unleash the closed-down data silos and practices.

In future blog posts we will discuss semantic interoperability and technical interoperability, given that Findwise works in the EC-funded project KConnect. The final blog post will relate to information governance models, and why the use of open standards makes sense in the organisational interoperability domain.

This is a brief conversation with the students presenting their thesis. The first introduction is in Swedish (5-10 min). The walk-through of the thesis is in English.


Finding business values in the emerging digital workplace

How does one experience the promised business rewards of the emerging digital workplace (a.k.a. the intranet)?

A group of renowned intranet professionals have taken on this question and offer sound practical advice on how to achieve real business value in their new book “Intranets that create business value” (in Swedish, “Intranät som skapar värde”).


Today, in fact most days, end-users feel bewildered when using the intranet. It is to some extent impossible to navigate. There exists a hodgepodge of mixed user experiences, given that the intranet often serves as the access point to several tools. And findability too is low! With a coherent, smooth and interoperable workplace, users should be able to find information and data, and the peers and colleagues they need to solve their everyday tasks, in an efficient way… anywhere, on any device and anytime.

The authors’ narrative describes how the intranet can best be used to produce beneficial business transformation, with detailed chapters on strategy, content & information architecture, search/findability, governance and stakeholder management, and end-user engagement and adaptation. Measures and metrics are also included to qualify the sought-after business values.

Findwise have contributed to the sections relating to organising principles. Put simply, it should be easy for a user to know where and how to contribute information and content in a good manner, so that others are able to find and co-act on such codified knowledge.

Without sound and sustainable organising principles there will be no findability: shit in = shit out! This holds regardless of the technology platform employed for search or the intranet.

Buy the e-book today, in advance of the published printed version in May!

First Meteor Meetup this year

Yesterday we arranged a really successful meetup to start this year’s Meteor activities.

This year’s first Meteor Meetup in Gothenburg: 18 people showed up – more than expected, and more than had registered beforehand, which is a first! So I can tell you that it was a success right from the beginning.

Pizza and beverages were served at the Findwise Gothenburg office. Big thanks to Sebastian Ilves and Benjamin Lilland from Devkittens for helping out with the arrangements.

We started with a short presentation about what has happened in the community and in Gothenburg since the last meetup last year. Here is the presentation, with some newly added facts.

Then we carried on with demonstrations of some apps we have built at Findwise with Meteor, to mention a few:

  • Burnout – A search-driven problem finder for websites, using crawl techniques and a search engine backend to deliver diagnostics to the Meteor front-end application, which handles user sessions and user-specific data.
  • Keybox – A brilliant and quick app, which sprang out of a problem with key access across many different environments. It helps distribute access to servers, much like you manage your keys on GitHub, but for servers.
  • Signatures – Also a search-driven app, which helps Findwise staff generate their email signatures by themselves, using search to gather data from the company Active Directory.

Between a couple of in-house app demonstrations, we invited people from the community to demo what they have built or are working on.

Patrik Göthe first demoed an iPhone app built with Meteor for painting vectors with SVG, almost like painting with the pen tool in Photoshop; you can also change the background hue of the artboard you’re painting on. The original idea was to enable people to get nicely coloured background images for their phone, with a hue one could control just by the touch drag event.

Patrik demoed another application, for people who do live coding. The app was reconfigured from being a Meteor app in the browser to being a desktop app. You can open code files and divide the code into chunks; you can then use the app in the background to paste each chunk with a short command on the Mac while you are presenting and live coding a piece of software.

Andreas Rolén from GBG Startup Hack was there and demoed an app for hackathons or competitions. You can log in to the app and adjust the teams, scores etc. from your phone, and the updates are then visible live on the website and on screens mounted in the hackathon location.

In total I think we got to see 10 apps demoed. As if that wasn’t enough, Robin Lindh Nilsson and Johan Carlberg caught everyone’s attention when we all suddenly were playing their game on our own laptops and live on the TV. This was fun and exciting, to say the least. Here’s a link to the game:

Robin also demoed his conversion app, which has been live for a while now – you can convert anything with this app:

Finally, Mickaël Delaunay held a presentation about the best way to publish your app on your own server – very professional and well done.

So thanks for a great meetup – I hope we see more like this again soon!

How it all began: a brief history of Intranet Search

According to sources, the intranet was born somewhere around 1994–1996 – true prehistory from an IT systems point of view. Intranet history is bound up with the development of the Internet, the global network. The idea of the WWW, proposed in 1989 by Tim Berners-Lee and others, whose aim was to enable connection and access to many various sources, became the prototype for the first internal networks. The goal of the intranet was to increase employee productivity through easier access to documents, faster document circulation and more effective communication. Although access to information was always the crucial matter, the intranet in fact offered many more functionalities, e.g. e-mail, group work support, audio-video communication, and search across texts or personal data.

Overload of information

Over the course of the years, the content placed on WWW servers became more important than other intranet components. First, managing increasingly complicated software and the required hardware led to the development of new specializations. Second, paradoxically, the ease of publishing information became a source of serious problems. There was too much information; documents were partly outdated, duplicated, and lacked homogeneous structure or hierarchy. Difficulties in content management, and the lack of people responsible for this process, led to a situation where the final user was not able to reach the desired piece of information, or doing so required too much effort.

Google to the rescue

As early as 1998, Gartner produced a document describing this state of the Internet as a “Wild West”. On the Internet, the problem was addressed by Yahoo and Google, the latter becoming the global leader in information searching. In internal networks, it had to be improved through rules for information publishing and through CMS and enterprise search software. In many organizations the struggle for easier access to information is still ongoing; in others, it has just begun.


And then Search arrived

It was the search engine that most affected how the intranet was perceived. On one side, the search engine is directly responsible for realizing the basic assumptions of knowledge management in the company. On the other, it is the main source of complaints and frustration among internal network users. There are many reasons for this status quo: wrong or unreadable search results, missing documents, security problems and poor access to some resources. What are the consequences of such a situation? First and foremost, they can be observed in high work costs (duplication of tasks, diminished quality, wasted time, less efficient cooperation) as well as in lost business opportunities. It must not be forgotten that search engine problems often overshadow the intranet as a whole.

How to measure efficiency?

In 2002, Nielsen Norman Group consultants estimated that the productivity difference between employees using the best and the worst corporate networks is about 43%. Meanwhile, the annual Enterprise Search and Findability Survey report shows that while almost 60% of companies underline the high importance of information searching for their business, nearly 45% of employees have problems finding the information they need.

Leaving aside comfort and the level of employee satisfaction, the natural effect of implementing and improving enterprise search solutions is financial benefit. Contrary to popular belief, the profits and savings from reaching information faster are entirely countable, although preparing such calculations is not easy. The first step is to estimate the time employees spend searching for information, what percentage of quests end in a fiasco, and how long it takes to perform a task without the necessary materials (a back-of-the-envelope sketch follows below). It should be pointed out that findings from companies such as IDC and AIIM show that office workers set aside at least 15-35% of their working hours for searching for necessary information.

Problems with search are rarely technical issues. The search engines currently on the market are mature products, regardless of technology type (commercial or open-source). Usually, it is a matter of a default installation, leaving the system untouched just after taking it “out of the box”. Each search engine deployment is also different, because it deals with a different collection of documents. Moreover, user expectations and business requirements change continually. In conclusion, ensuring good-quality search is an unremitting process.
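
A back-of-the-envelope version of that first estimation step, where every figure is a pure assumption for illustration:

    # Illustrative cost-of-searching estimate; all numbers are assumptions.
    employees = 500
    hourly_cost = 40                # fully loaded cost per hour, EUR
    hours_per_year = 1600
    share_spent_searching = 0.20    # within the 15-35% range cited by IDC/AIIM
    share_failed_searches = 0.45    # searches that end in a fiasco

    search_cost = employees * hours_per_year * share_spent_searching * hourly_cost
    wasted_cost = search_cost * share_failed_searches

    print(f"Annual cost of searching:  EUR {search_cost:,.0f}")   # EUR 6,400,000
    print(f"Of which failed searches:  EUR {wasted_cost:,.0f}")   # EUR 2,880,000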

Knowledge workers’ main tool?

The intranet has become a comprehensive tool for accomplishing company goals. It supports employee commitment and effectiveness, internal communication and knowledge sharing. However, its main task is to find information, which is often hidden in stacks of documents or dispersed among various data sources. Equipped with a search engine, the intranet has become an invaluable working tool in practically all sectors, especially in departments such as customer service or administration.

So, how is your company’s access to information?

This text is an introduction to a series of articles dedicated to intranet search. Subsequent articles will deal with: the search engine’s function in an organization, the benefits of using enterprise search, the requirements of a search information system, the most frequent errors and obstacles in implementations, and systems architecture.

Using search technologies to create apps that even leave Apple impressed

At Findwise we love to see how we can use the power of search technologies in ways that go beyond the typical search box application.

One thing that has exploded in the last few years is of course apps for smartphones and tablets. It’s no longer enough to store your knowledge in databases kept behind locked doors. Professionals of today want instant access to knowledge and information right where they are, whether working on the factory floor or showcasing new products for customers.

When you think of enterprise search today, you should consider it a central hub of knowledge rather than just a classical search page on the intranet. Because once an enterprise search solution is in place, and information from different places has been normalized and indexed in one place, there really are no limits to what you can do with the information.

By building this central hub of knowledge, it’s simple to make that knowledge available to other applications and services within or outside the organization. Smartphone and tablet applications are one great example.

Integrating mobile apps with search engine technologies works really well for four reasons (a sketch follows the list):

  • It’s fast. Search engines can find the right information using advanced queries or filtering options in a very short time, almost regardless of how big the index is.
  • It’s lightweight. The information handled by the device should only be what is needed by the device, no more, no less.
  • It’s easy to work with. Most search engine technologies provide a simple REST interface that’s easy to integrate with.
  • A unified interface for any content. If the content already is indexed by the enterprise search solution, then you use the same interface to access any kind of information.
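
A minimal sketch of such an integration in Python: a mobile backend fetching documents from an enterprise search index over REST. The endpoint, index and field names are hypothetical, and the query syntax shown is Elasticsearch-style; adjust it to your search platform:

    import requests

    SEARCH_URL = "https://search.example.com/products/_search"  # hypothetical endpoint

    def find_products(term: str, max_hits: int = 10):
        query = {
            "query": {"match": {"title": term}},  # full-text match on a title field
            "size": max_hits,                     # keep the payload lightweight for mobile
        }
        response = requests.post(SEARCH_URL, json=query, timeout=5)
        response.raise_for_status()
        return [hit["_source"] for hit in response.json()["hits"]["hits"]]

    for product in find_products("spherical roller bearing"):
        print(product.get("title"))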

We work a lot together with SKF, a company that has transformed itself from a traditional industry company into a knowledge engineering company over the last few years. I think it’s safe to say that Findwise have been a big part of that journey by helping them create their enterprise search solution.

And of course, since we love new challenges, we have also helped SKF create a few mobile apps. In particular there are two different apps that we have helped out with:

  • The shelf app, which is a portable product brochures archive. The main use case is quick and easy access to product information for sales reps when visiting customers.
  • The mobile product landing page, which is a mobile web app that you reach by scanning QR codes printed on the packaging of SKF kits.

You can read more about the apps in our reference cases.

And this is something that hasn’t gone unnoticed. In a speech by Tom Johnstone, then CEO of SKF, he mentions a “12% increase in productivity” for their sales force thanks to those smartphone and tablet apps.

And even more recently, the tech giant Apple noticed how the apps make the day-to-day work of SKF employees easier, and turned the apps created at SKF into major business reference cases for both the iPhone and iPad.

Spring cleaning and moving boxes for the cloud

This is the seventh post in a series (1, 2, 3, 4, 5, 6) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud.  We will help you to consider the options and guide you on the steps you need to take.

Starting from our first post we have covered different aspects you need to consider as you take each step, including information structure and how it is managed, using Office 365 and SharePoint as a technology example.  This post covers planning for the migration itself.

Moving Boxes

Do not even think about moving into the cloud apartment without a proper cleaning of the content buckets. Moving from an architected household to a rented place takes a structured audit. Clean out all redundant, outdated and trivial matter (ROT) – the very same habit you have of cleaning up the attic when moving out of your old house.

It is also a good idea to decorate and add any features to your new cloud apartment before the content furniture is there.  This means the content will fit with any new design and adapt to any extra functionality, with new features like windows and doors.  This can be done by reviewing and updating your publishing templates at the same time, which will save time in the future.

Leaning upon the information governance standards, it should be easy for all content owners who have been appointed to a set of collections or habitats to address the cleaning before moving. Most organisations could use a content vacuum cleaner – or rather, use the search facilities and metrics to deliver up-to-date reports on the following (a sketch of such a report query follows the list):

  1. Active / inactive habitats
  2. Habitats with no clear ownership, or where the owner has left the building
  3. Metadata and link quality for content and collections to be moved across to the cloud apartments
  4. Publishing templates to review, with features or design to update for use in the cloud
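
A hedged sketch of one such report, asking a search engine’s REST API for content that looks stale or ownerless; the endpoint, field names and two-year threshold are all hypothetical:

    import requests

    SEARCH_URL = "https://search.example.com/content-index/_search"  # hypothetical

    audit_query = {
        "size": 0,
        "query": {"range": {"last_modified": {"lt": "now-2y"}}},  # inactive for two years
        "aggs": {
            "stale_by_site": {"terms": {"field": "site", "size": 50}},
            "no_owner": {"missing": {"field": "owner"}},  # content with no owner at all
        },
    }

    report = requests.post(SEARCH_URL, json=audit_query, timeout=30).json()
    for bucket in report["aggregations"]["stale_by_site"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])
    print("ownerless documents:", report["aggregations"]["no_owner"]["doc_count"])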

When all active habitats and qualified content buckets have been revisited by their curators and information owners, the moving boxes can be prepared and used.

All moving boxes need proper tagging, so that any moving company will be able to sort out where the stuff should be placed in the new house or building. For collections and habitats, this means using the very same set of questions stated for adding a new habitat or collection to the cloud apartment house – who, why, where and so forth – through the use of a structured workflow and form. When these first cleaning steps have been addressed, there should be automatic metadata enhancement, aligned with the information management processes to be used in the new cloud.

With decent resource descriptions, and content cleaned up through the audit (ROT), this last step will auto-tag content based upon the business rules applied to the collection or habitat (a minimal sketch of such a tagger follows below). The content can then be loaded onto the content moving truck, or loading dock, ready to be added to the cloud.
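
A minimal sketch of such a rule-based tagger; the rules and metadata fields are invented for illustration:

    # Invented business rules mapping a document's properties to migration tags.
    RULES = [
        (lambda doc: doc.get("owner") is None, "needs-owner"),
        (lambda doc: doc["last_modified"] < "2013-01-01", "archive-candidate"),
        (lambda doc: doc.get("site_type") == "project" and doc.get("status") == "closed",
         "read-only-after-move"),
    ]

    def auto_tag(doc: dict) -> dict:
        """Apply the tag of every matching rule before the box goes on the truck."""
        doc.setdefault("tags", [])
        for predicate, tag in RULES:
            if predicate(doc):
                doc["tags"].append(tag)
        return doc

    print(auto_tag({"title": "Old project site", "last_modified": "2012-06-01",
                    "site_type": "project", "status": "closed"}))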

All content that either lacks properly assigned information ownership, or is in such a shape that migration cannot be done, should persist on the estate or be archived or purged. This means that all metadata and links to any content bucket or habitat that won’t be moved in the first instance should at least have correct and unique URIs addressing that content. And where a bucket or habitat has been run down by a demolition firm, i.e. purged, all inter-linkage to that piece of content or collection has to be changed.

This is typically a perfect quality report for the information owners and content editors to work through prior to actually loading the content on the content dock.

Rubbish and Weed

Finally, when all rotten data, deserted habitats and unmanageable buckets have been weeded out, it is time to prepare the moving truck and send the content to its new destination.

Our final thread will cover how the organisation and its inhabitants will be able to find content in this mix of clouds and things left behind on the old estate. Cloud search and enterprise search: seamless, or a nightmare?

Please join our Live Stream on YouTube on the 20th of November, 8.30AM – 10AM Central European Time.
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

Placemaking, wayfinding and game rules in the Clouds

This is the sixth post in a series (1, 2, 3, 4, 5, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud.  We will help you to consider the options and guide you on the steps you need to take.

Starting from our first post we have covered different aspects you need to consider as you take each step including information structure and how it is managed using Office 365 and SharePoint as a technology example.  We will cover more about SharePoint in this post, and placemaking in the cloud.
Funky Village
In SharePoint there is a set of logical chunks. One can decompose the digital workplace into intranet sites, as departmental and organisational buckets; team sites, where groups collaborate; and lastly your personal domain, the My Site collection. Navigating between these is a mix of traditional information architecture and search-driven content.  When inside a habitat such as a team site, it is not always obvious how to cross-link or navigate to other domains within the digital workplace hosted in SharePoint.

One way to overcome this is to render different forms of portals based upon dynamic navigation. These intersections and aggregates help users move around the maze of buckets and collections of content. SharePoint has very good features and options for creating search-based content delivery mechanisms, as the sketch below illustrates.
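
As a hedged illustration, SharePoint’s search REST endpoint (/_api/search/query) can drive such a dynamic portal block. The site URL, managed-property names and content type here are invented, authentication is omitted, and the exact response shape varies with the OData mode, so treat the parsing as a sketch:

    import requests

    # Query SharePoint search for HR team site landing pages, to render a
    # dynamic "related sites" portal block.
    resp = requests.get(
        "https://intranet.example.com/_api/search/query",
        params={
            "querytext": "'ContentType:TeamSiteHome owstaxIdDepartment:HR'",
            "rowlimit": "10",
        },
        headers={"Accept": "application/json;odata=nometadata"},
    )
    resp.raise_for_status()
    rows = resp.json()["PrimaryQueryResult"]["RelevantResults"]["Table"]["Rows"]
    for row in rows:
        cells = {c["Key"]: c["Value"] for c in row["Cells"]}
        print(cells.get("Title"), cells.get("Path"))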

A metadata- and search-based content model gives us cues for the future design of the digital workplace, with connected habitats and a sustainable information architecture, where people don’t get lost and have the wayfinding means to survive everyday work practices.

This is where how you manage the content in SharePoint and Office 365 becomes critical.  As we said in our first post, it is important you have a good information architecture combined with a good governance framework that helps you to transform your buckets of content from the estate into the cloud.  We have covered information architecture, so we now move towards how governance completes the picture for you.

There are three approaches to the governance your organisation needs to have with SharePoint and Office 365.  You don’t have to use just one.  You can combine some of each to find the right blend for your organisation.  What works best for you will depend on a number of different factors.  Among them:

  • Restricting use – stopping some features from being used e.g. SharePoint Designer
  • Encouraging best practice – guidance and training available
  • Preventing problems – checking content before it is published

Each of these approaches can support your governance strategy.  The key is to understand what you need to use.

Restricting use

You need to be clear why your organisation is using SharePoint and Office 365 and the benefits expected.  This will shape how tight or loose your governance needs to be.

Once you are clear on this, you then need to consider the strategic benefits and drawbacks of features such as SharePoint Designer and site collection administration rights.

Benefits:
  • You control what is being used.
  • You decide who uses a feature e.g. SharePoint Designer.
  • You manage the level of autonomy each site owner has.
  • You find out why someone needs to use a feature.
  • You monitor costs for licences, users, servers, etc.
  • You measure who is using what and why for reporting.

Drawbacks:
  • You stifle innovation by not allowing people to test out ideas.
  • You stop legitimate use by asking for permission to use features.
  • You prevent people being able to share knowledge how they wish to.
  • You may be unable to realise the maximum potential of SharePoint.
  • You create unnecessary administration.
  • You risk adding costs without any value to offset them with.

You need to get the balance right, with governance that gives you maximum value for the effort needed to manage SharePoint and Office 365.

Encourage best practice

The goal of implementing SharePoint and Office 365 is an environment that enables employees to publish, share, find and use information easily to help with their work.  They are confident the information is reliable and appropriate, whatever their need for it is.  People also feel comfortable using these tools, rather than alternative methods like calling helpdesks or emailing other employees for help.

Encouraging best practice by giving employees the opportunity to test tools to meet their needs is one approach to achieving this.  There are factors you need to consider that can help or hinder the success of this approach.

Benefits:
  • You inform employees of all the benefits to be gained.
  • You train people to use the right tools.
  • You design a registration process to direct people to the right tools.
  • You point employees to guidance on how to follow best practice.
  • You encourage innovation by giving everyone freedom of use.

Drawbacks:
  • You can’t prevent people using different tools to those you recommend.
  • You risk confusing employees using content unsure of its integrity.
  • You can’t prevent everyone ignoring best practice when publishing.
  • You may make it difficult for people to share knowledge effectively.
  • Your governance model may be ineffective and need improving.

Getting the balance right between encouraging best practice and the level of governance to deter behaviour which can destroy the value from using SharePoint and Office 365 is critical.

Preventing problems

As well as encouraging best practice, preventing problems helps to reduce the time and costs wasted on sorting out unnecessary issues.  While that is the aim of most organisations, the practical realities of a roll-out can divert plans from achieving this.

You need to get the right level of governance in place to prevent problems.  Is it encouraging innovation and keeping governance light touch?  Is it a heavier touch to prevent the ‘wrong’ behaviour and minimise risk of your brand and reputation being damaged?  How much do you want to spend preventing problems?  What does your cost/benefit analysis show?

Benefits:
  • People using SharePoint and Office 365 have a great experience (especially the first time they use it).
  • Everyone is confident they can use it for what they need it for without experiencing problems.
  • Employees don’t waste time calling the helpdesk because many problems have been prevented.
  • Effective governance encourages early adoption and increased knowledge sharing.
  • Costs spent preventing problems are justified by increased productivity and reduced risk of errors.

Drawbacks:
  • People find registering difficult and lengthy because of extra steps taken to prevent problems and don’t bother.
  • People find it too restrictive for their needs and it stifles innovation.
  • People turn to other tools (maybe not approved) to meet their needs and ask other people for help to use them.
  • Too restrictive governance prevents the most beneficial use by raising the barrier too high for people.
  • Costs of preventing problems are higher than benefits to be gained and not justified.

You need to consider the potential benefits and drawbacks before deciding on the level of governance that is right for your organisation.

Remember, it is possible and probably desirable to have different levels of governance for each feature.  It may be lighter for personal views and opinions expressed in MyProfile and MySite but tighter for policies and formal news items in TeamSites.

That is the challenge!  You have so much flexibility to configure the tools to meet your organisation’s needs.  Don’t be afraid to test out on part of your intranet to see what effect it has and involve employees to feed back on their experience before launching it.

The way forward is to create a sustainable information architecture that supports an information environment available on any platform, everywhere, anytime and on any device.  A governance framework can show roles and responsibilities, and how they fit with a strategy and plan, with publishing standards as the foundation of a consistently good user experience.

Combining a governance framework and information architecture with the same scope avoids any gaps in your buckets of content being managed or not being found.  It helps you transform from your estate to the cloud successfully.

In our concluding posts we will dive into more design-oriented topics, with a helping hand from findability experts and developers, and add migration thoughts in the next post. But first, navigating the social graph – being people-centric – leaves some outstanding questions: how will the graph interoperate if your business runs several clouds and still has buckets of content elsewhere?

Please join our Live Stream on YouTube on the 20th of November, 8.30AM – 10AM Central European Time.
Fredric Landqvist research blog
Mark Morrell intranet-pioneer

Content Governance – life cycle and reach

This is the fifth post in a series (1, 2, 3, 4, 6, 7) on the challenges organisations face as they move from having online content and tools hosted firmly on their estate to renting space in the cloud.  We will help you to consider the options and guide you on the steps you need to take.

 Starting from our first post we have covered different aspects you need to consider as you take each step including information structure and how it is managed using Office 365 and SharePoint as a technology example.  We will cover governance and how content should be managed in the cloud in this post.

content buckets

Content created within a context, such as a departmental site or team habitat, usually has reach and bearing only for the local context of fellow members of staff within that unit. Other pieces of content have coverage that stretches across all parts of the business. One simple example is the bucket of content that makes up the management system, with governing principles, strategies, policies and guidelines that describe the core processes, activities, roles and so forth within an organisation.

Yet other content, such as the outcome of a project, will build a bucket of content that either lives on in a new context, improves an existing bucket of content, or feeds into yet another project.

From an information management perspective, it is vital that you have organising principles for all your content that cover all these layers: both the reach and the life cycle of each set of content.

You need a governance framework that reaches out to every bucket of content.  This covers what is still on your estate as well as the growing amount in the cloud.  All content needs to be managed to remove risks of leakage of sensitive information and prevent people having an inconsistent user experience as they move from one bucket of content in the cloud to another content bucket still on the estate.

You need to make sure people do not see the difference between buckets of content on the estate and content buckets in the cloud.  People using your content to help with their work don’t need to know where the content is kept.  They need to find it as easily as before, preferably even easier!  Content in the cloud should feel the same and be a natural extension of the digital environment people are already used to.  Manage it with a governance framework that covers every bucket of content, making it easier to adopt quickly and use more often without caution or delay.

Part of your governance needs to cover publishing standards based on business needs, so content is easy to access from any device, e.g. laptops, tablets and smartphones, and to view without unnecessary authentication levels.  This helps to create the consistently good user experience that encourages people to use your content, whether the bucket is in the cloud or not.

A professional team from group HR might work in their local team site, with on-going conversations, work-in-progress documents and so forth. Pieces of their content production lead to governing policies that have a global reach within the organisation and need to be linked from the corporate intranet spaces, with versioning and good-quality resource descriptions (metadata). This practice and professional network of HR people also share content on a departmental site, with links and resources that have a direct impact on their internal processes. The group of people have outreaching triggers and in-bound conversations, and have to balance these two states.

When it comes to temporal content buckets, like a project team site, there are several considerations to capture. First, where will the outcome and result be stored when the project is finished, and to which context will these content pieces contribute? Second, what should be captured from all the on-going conversations (social elements), work in progress and drafts developed during the project’s lifecycle? Should a project habitat be searchable after closing down? Or does the habitat change status, so that all documentation stays within the collection but the overarching state of the habitat changes? Within SharePoint, these temporal states, versions, workflows and properties all sum up the organising principles.

If these principles haven’t been ironed out, described and decided, inevitably there will be emerging ghost towns of dead habitats and lost collections of content, with no governance or ownership whatsoever. All this becomes a digital landfill.

We will cover more about SharePoint in our next post in this series. Please visit Michael Sampson‘s recent slides where he takes you through strategy, planning, governance and user adoption for collaboration!
Please join our Live Stream on YouTube on the 20th of November, 8.30AM – 10AM Central European Time.
Fredric Landqvist research blog
Mark Morrell intranet-pioneer