The financial crash of 2008 inspired the first cryptocurrency: Bitcoin, developed by the pseudonymous Satoshi Nakamoto. In simple terms, Bitcoin works as an open-source, public, distributed ledger, meaning the data, in this instance a digital currency, is decentralised. Bitcoin was developed to offer an alternative to centralised, government-controlled currencies. But how does Bitcoin work in a safe, transparent and immutable way? By running transactions through a system called blockchain.

Why Blockchain?

Trust and security are central to much of civilisation, especially in finance but also in property ownership, supply chains, contracts and identity records, healthcare, government-citizen interaction and even democratic elections. Since the Sumerians first inscribed a clay tablet with cuneiform (and probably before), trusted records of reference have been our best means of proving, in a quantifiable way, what is and what is not, and of resolving disputes of opinion. For the Sumerians, keeping those clay records safe was probably the job of a king and his army or some similar organisation; similarly, banks emerged to store and record the monetary assets, like gold, that a person legitimately owned, and this evolved into the issuing of paper money (effectively IOUs) instead of people carrying around the actual gold.

With Blockchain, all of that changes.

With a complex network of electronic records holding many, many duplicate copies for comparison and validation of a single fact – like whether or not you can pay for something – the shared knowledge of whether something is true becomes the trust factor. If someone attempts to change or hack the network, they must effectively change every single duplicate record to make the change stick – the power of blockchain lies in the consensus of the many. The more the merrier.

Encryption on top of blockchain provides additional security that makes tampering extremely difficult, leading to an efficient and trusted method of recording our most precious aspects of life.

What is Blockchain?

A blockchain is a ledger of digital information in which that information is aggregated into data “blocks”. These blocks are made up of binary code that can represent anything from a value to an image or even a simple sentence. The blocks are then “chained” together through encryption so that every block connects to the next. As such, a blockchain is built as an increasingly long and ordered string of digital information that has been verified and validated by the previous blocks in the chain. It’s a bit like a highly encrypted and verified shared Google Sheet, one in which each entry depends on a logical relationship to all its predecessors. In short, blockchain technology offers a way to securely and efficiently create a tamper-proof log of sensitive activity, anything from international money transfers to shareholder records.

How does it work?

As a distributed database, blockchain storage is not limited to a single server. Rather, the totality of the information in the blockchain is simultaneously stored and updated by nodes, defined as any computer connected to the blockchain that processes the information exchanged. A public-private key encryption system ensures that the owner and recipient of any piece of transferred information are recognised through use of the corresponding private keys.
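
In practice, this “double encryption” is public-key cryptography: a transaction is signed with the sender’s private key, and any node can check the signature with the matching public key. Below is a minimal sketch, assuming the third-party Python cryptography package is installed; the transaction text is purely illustrative.

```python
# Minimal sketch of 'ownership via private keys' using public-key signatures.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The owner generates a key pair; the private key never leaves their wallet.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A purely illustrative transaction, signed with the private key.
transaction = b"Alice pays Bob 0.5 BTC"
signature = private_key.sign(transaction)

# Any node holding the public key can check that the owner authorised it.
try:
    public_key.verify(signature, transaction)
    print("Signature valid: the holder of the private key authorised this.")
except InvalidSignature:
    print("Signature invalid: reject the transaction.")

# Tampering with the transaction content invalidates the signature.
try:
    public_key.verify(signature, b"Alice pays Bob 500 BTC")
except InvalidSignature:
    print("Tampered transaction detected and rejected.")
```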

The digital information submitted to the blockchain is processed through a cryptographic hash function. This means that all the digital information inputted is transformed into a fixed-length output that is unique to each input: even if two inputs differ by only a single byte, a completely different output is produced. The nodes that interact with the blockchain then verify these fixed-length outputs independently by way of an algorithmic challenge. Whichever computer cracks the code first and verifies that the encrypted information can be validated then updates the other nodes working on the problem, which confirm the validation on their end. Once a majority of nodes have verified the information, the block is consolidated, added to the blockchain and immediately updated across the board. The result is that different batches of data can be consolidated through the cryptographic hash function into a single fixed-length output that can be traced back to previous blocks in the chain; the resulting structure is called a Merkle tree.
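
To make the hashing step concrete, here is a minimal sketch using Python’s standard hashlib module: it shows how a one-byte change in the input produces a completely different fixed-length output, and how transaction hashes can be combined pairwise into a single Merkle root. The transactions themselves are purely illustrative.

```python
# Minimal sketch of fixed-length hashing and a Merkle root, using only
# Python's standard library. The transactions are purely illustrative.
import hashlib

def sha256(data: bytes) -> str:
    """Return the fixed-length (256-bit) hash of any input, as hex."""
    return hashlib.sha256(data).hexdigest()

# A one-byte difference in the input yields a completely different output.
print(sha256(b"Alice pays Bob 10"))
print(sha256(b"Alice pays Bob 11"))

def merkle_root(transactions: list) -> str:
    """Hash transactions, then combine the hashes pairwise until one root remains."""
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last hash if the count is odd
            level.append(level[-1])
        level = [sha256((a + b).encode()) for a, b in zip(level[0::2], level[1::2])]
    return level[0]

print("Merkle root:", merkle_root([b"tx1", b"tx2", b"tx3", b"tx4"]))
# Changing any single transaction changes the root, so tampering is visible.
print("Merkle root:", merkle_root([b"tx1", b"tx2", b"tx3", b"tx5"]))
```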

Secure and transparent

With every new block being validated independently by different nodes through algorithms and verified against earlier blocks, security is tight and the risk of fraud minimal. If a malicious user wanted to change even a single byte of information in a block, the validation system would spot the change as the fixed length output would come out differently, making it almost impossible to falsify information.

Furthermore, as the information is immediately uploaded to all nodes once validated, a blockchain is a very transparent system. As the chain grows longer with each block, the information held in each remains available to blockchain users with the right access key. A blockchain keeps on growing as a result, with previous blocks remaining an integral part of the chain that cannot be amended. This makes it easy to implement procedures such as audits or digital paper trails in an automated way.

One word of warning: while blockchain technology has so far proven resistant to tampering, this does not automatically mean full data protection. To avoid data being manipulated, blockchain still needs an application layer where data security questions are dealt with; you therefore remain dependent on how well this layer is designed and on the purposes for which you want to use blockchain.

No intermediaries and the concept of smart contracts

As a blockchain is a decentralised system, no intermediaries are required to relay information from point A to point B. In the case of Bitcoin, for example, this means that money transfers no longer need the banking system and a state-mandated currency to finalise a transfer. In fact, the only delay between instruction and release is the time needed for the network of nodes to solve the algorithm and validate the block, which for Bitcoin is about 10 minutes. Said simply, there is no need to trust that intermediaries will do their job, as the process followed by a blockchain is software-based. Nodes work together to transfer, document and safeguard digital information as perfect strangers, with no strings attached.

Through this intermediary-free, peer-to-peer interaction backed by software, it is possible to implement computer protocols known as ‘smart contracts’. These smart contracts are governed by the same cryptographic rules to ensure clauses of a contract are implemented and enforced in a timely and automated manner.
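
As an illustration of the idea only – real smart contracts are deployed on-chain, for example in Solidity on Ethereum – the logic of a simple escrow clause can be sketched in a few lines of Python: funds are released automatically once an agreed condition is recorded, with no intermediary deciding. The parties and amounts below are hypothetical.

```python
# Illustrative sketch of the *logic* of a smart contract: an escrow that
# releases payment automatically once delivery is confirmed. Real smart
# contracts run on a blockchain (e.g. written in Solidity); the parties
# and amounts here are hypothetical.
from dataclasses import dataclass

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    delivery_confirmed: bool = False
    released: bool = False

    def confirm_delivery(self) -> None:
        """Record the agreed condition (e.g. a validated delivery event)."""
        self.delivery_confirmed = True
        self.execute()

    def execute(self) -> None:
        """Clause enforced automatically, with no intermediary deciding."""
        if self.delivery_confirmed and not self.released:
            self.released = True
            print(f"Releasing {self.amount} to {self.seller}")

contract = EscrowContract(buyer="Alice", seller="Bob", amount=0.5)
contract.confirm_delivery()   # -> Releasing 0.5 to Bob
```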

Blockchain applications

The Blockchain system, though first popularised by Bitcoin, has many other real-world applications. Information can be shared much more securely over the Internet than in the traditional way (Excel files, EDI, etc.). This is due to blockchain cryptography, which requires anyone who wants to access the data to have the correct token.

Already large transport companies, like Maersk, have started using blockchain solutions to simplify their shipping procedures. But there are many other fields that could benefit from adopting Blockchain based data-management solutions.

  • Governments could solve many problems relating to the silo approach that different departments still adhere to. For example, automatically linking and validating citizen data using blockchain technology could simplify different civic services through a unified, decentralised database. Using a blockchain, it would also be possible to better control government expenditure in real time and combat corruption, as the myriad of intermediaries involved in budgeting would be rendered obsolete.
  • Healthcare could benefit by regulating access to sensitive data, using keys to let different industry bodies, from pharmaceutical companies to hospitals and insurance companies, manage patient information. Say a patient’s blood results showed a deficiency: uploading that data could automatically alert an insurance or medical service provider and start a procedure to get the appropriate treatment to the patient.
  • Blockchains and smart contracts could help ensure that compliance protocols are adhered to and that any transfer of information or goods is done in accordance with the governing legislation, again in a decentralised and impartial way.

As the world’s computing power continues to increase, and thus the calculating capabilities of nodes evolve, more and more data will be stored in a decentralised public ledger of sorts, increasingly ensuring immutability of data, transparency and impartiality along the way.


Joana Schmitz, Patrick Esselinckx and Arnaud Briol of dFakto were recently interviewed by Arnaud Martin, an independent journalist for the newspaper Le Soir, Références, and Trends-Tendances.

In this interview, they share their secrets for success as well as the difficulties SMEs have in finding their place next to large companies, especially when it comes to recruiting new employees.

It’s not all bad news however, as SMEs offer plenty of advantages to attract new candidates (like Arnaud Briol for example).

Read the entire interview published on Saturday October 7, in the newspaper supplement “Le Soir”.

Also consider visiting our Jobs & Careers page if, like Arnaud, you choose to grow with a company rather than in a company: https://www.dfakto.com/jobs/


You have reviewed the latest reports and the numbers are close, but something has changed. You are tempted to review the data modeling to try and correct the perceived error, but you know that it is going to take time to trace, with no guarantee of finding an answer. Perhaps someone has changed something in the model, somewhere, and it has rippled out? Maybe it’s just a small change? Who knows?

This is not an uncommon scenario. The fragility of old data modeling approaches, when treated as data stores, has often resulted in pesky errors being introduced into systems.

This is usually not the fault of the people managing the data but an inherent problem with the way that the old methods were designed.

They were not intended for the rapid change and transformation that we experience in today’s business environments. If you change something as innocuous as a field name or add anything like a new field to a dimension, you will have to cope with anomalies possibly appearing. Changes that one department thinks are small can, for another department, mean the kind of impact on their figures that they really don’t need to be worrying about this quarter.

It’s a moving feast and sometimes, you can’t please anyone!

Times are changing; data modeling, too.

In yesteryear, very clever computer scientists designed methodologies for data capture that were the leading technology of their day. Those days were typically slower-paced and less prone to change. The expectation was that change would take time, and time wasn’t as commoditised as it is now. If a system took a few months to get the data modelling right before it was useful, well then, that extra time was taken.

This is unthinkable these days, yet the same models are being used!

These old methods are still useful, but underneath we now need something more. They serve well as ‘Data Marts’ in our new world, repurposed for what they are good at: capturing a snapshot view of data. However, the data itself needs to be held in structures that are more suited to, and built for, change!

Yet still, you will see many attempts to persist with this old way of thinking. Star Schemas and 3NF architectures are at breaking point in today’s business environments: they are pushed to analyse history and are constrained by regulations like GDPR. They have their place, but it is not in high-performing data warehouses. These older data architectures are suited to stability and consistency rather than to evolving history and change capture!

Decompose structure to master change in data modeling

HUB

  • Data Vault separates the structure of data from the myriad of data sources that are attached to it.
  • The model is fixed and never changes. So is the data that you attach to it.
  • Once added, data cannot be removed. Initially this concept sounds restrictive; however, the intention of the Data Vault is to ‘capture’ data and hold it in a fixed state, and the trade-offs are profound.
  • It pulls data from multiple sources around a single reconcilable set of identifiers called a ‘Hub’ (e.g., a business entity, like a customer or product).
  • You can attach as many sources as you like, because the ‘Hub’ is a central point of management.
  • This becomes ideal if you are looking to understand discrepancies in your data while keeping a system of record. Master Data is also a possibility, where you can compare and contrast each source into a derived ‘golden record’ further down the line.

LINKS

  • ‘Links’ form the second part of the core structure of a Data Vault, and these are where the flexibility and agility come into play.
  • You can have different teams working on different ‘Hubs’ that are unaware of each other if need be.
  • They may be working on data cleansing or master data, or whatever.
  • You can keep them separate by design or by schedule and still hold everything together by building the links separately.
  • The Links are effectively ‘many-to-many’ tables, so the relationship scenarios are whatever you choose to make them. There are no constraints as long as the business entities are well thought out. A minimal sketch of hubs and a link follows this list.
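
To make the hub-and-link structure concrete, here is a minimal sketch in Python using the built-in sqlite3 module. The table and column names (hub_customer, link_purchase, and so on) are purely illustrative, not a prescribed dFakto or Data Vault schema.

```python
# Minimal sketch of Data Vault hubs and a link, using Python's built-in
# sqlite3 module. Table and column names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
-- Hubs hold nothing but the business key plus load metadata:
-- one row per customer, one row per product, whatever the source.
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,   -- hash of the business key
    customer_id   TEXT NOT NULL,      -- the business key itself
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE hub_product (
    product_hk    TEXT PRIMARY KEY,
    product_id    TEXT NOT NULL,
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
-- A link is a many-to-many table between hubs: here, 'customer bought product'.
CREATE TABLE link_purchase (
    purchase_hk   TEXT PRIMARY KEY,
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    product_hk    TEXT NOT NULL REFERENCES hub_product(product_hk),
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
""")
```

Descriptive attributes (names, prices, statuses) deliberately go into neither the hub nor the link; they are attached later as ‘Satellites’, which is what keeps this core structure stable as sources change.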

BUSINESS VAULT

What if you want to clean and enrich data, derive data, build business rules and data quality, or even build out your ‘golden record’ Master Data?

That happens in another stage called the ‘Business Vault’; the Data Vault itself remains a single source of unchanging truth, warts and all.

There are benefits to this approach:

  • You know that what is in your data warehouse is truly a historical record;
  • It is an auditable trail of consistency in your business;
  • You can derive an unlimited number of Data Marts from it that will be absolutely consistent over time;
  • If you build sympathetic business rules, they will also be consistent with each other;
  • The reports and analyses that you conduct on this data will remain consistent over time, even if you add more data, as nothing is EVER deleted from a Data Vault, unless it is specifically designed to do so under regulatory constraint.

In conclusion, the Data Vault is built from the ground up to manage growth, while maintaining consistency.

The magic happens because of ‘separation of concerns’.

Find out more from our friendly team of business and technical experts at info@dfakto.com or +32(0)2.290.63.90.


So you actually bought into that new technology stack that was really going to improve your analytics and help you make better decisions? The truth is that while it may well help, reconciling flexible enterprise change with accurate historical reporting will always be a discipline rather than a technology when supporting a successful modern business. Sadly, no technology is the holy grail. After all, none of them has yet managed to fully put you at ease, has it?

Just as you cannot expect a project to run itself even though you have the latest project management software, it’s the same with data warehouse management: it is the methodology that counts. And just like modern project management methodologies, Data Vault is agile, compartmentalising change to encourage flexibility.

Pitfalls and inadequacies of steady state

In the best of cases it probably takes months to organise changes within your data warehouse. And the crazy thing is that – even for simple changes – it can be incomprehensible why ‘just adding a few fields’ (or updating one, heaven forbid!) takes such a long time.

The main problem is the methodology of storing, not the technology. The fact is that old Star Schemas, 3NF (Third Normal Form) systems and Snowflake schemas just aren’t cut out for change, as they were (and still are) designed for analysis of consistent data rather than data capture. So while Star Schemas and Snowflakes are especially good at some analytical tasks, and 3NF is great for enforcing cascading point-in-time structure, none of these methods copes well with change. And none is made to accommodate (and reconcile) the data structure from five years ago with the data you have today and the requirements of tomorrow … they are simply too prescriptive and only designed for a single, current way of doing things. As a consequence, if the business changes, you essentially have to (carefully) throw out the old and start again, hence all that (expensive) time.

The best answer is to accept change, and embrace it.

Enter Data Vaulting. (Applause). It is able to capture data from anywhere, and extract virtualised views as moving snapshots when required.

‘But our data is always changing.’

It always seems to be about ‘new’ and ‘the next big thing’, but glancing backward at the ‘history’ or considering ‘change’ is often an afterthought, and best left to ‘others’ to reconcile. How are we to understand how well the business is doing over time without some consistency amidst the technology transformation? Of course it is interesting and important to adapt so that you can be interoperable to the best standards, but how can we understand (or allow!) real growth and change while still accurately tracking history?

The older methods like Kimball and Inmon (Star Schema, 3NF and Snowflake, anyone?) were created a long time ago, in times when change wasn’t so rapid and we didn’t have such volumes of data. Back then, you were designing a data model for a specific single solution and could afford the time to throw out the old data model and start again. Their continued appeal is that lots of people use them – but they are trying to use them for the wrong purpose! They are effectively trying to use buckets to grab rivers of information, attempting to read snapshots of a data stream as if it were a fixed data model rather than a continuum of change. They try to put the snapshots together into some sort of continuous historical view of business information – and it all begins to look a bit odd! Data Vault changes the game from these old-school disciplines and bends to the flow of the river, making the underlying technology largely irrelevant.

Distinct purpose: capture raw history, flexibly

Emerging from these older disciplines, Data Vault is a hybrid evolution, specifically designed from the ground up for change management and for organising diverse sources. Where older methods attempted to shoehorn data streams into their fixed models to make data analytically usable, Data Vault is all about embracing change across the entire enterprise. The old methodologies are still very useful – they are good at transforming data into insights – but they are not good as raw systems of record. Data Vault’s sole purpose is to structure heavy workloads of changing, disparate data sources, handle them in an agile way, and promote onward data quality and processing. Its purpose is primarily as its name suggests: to take raw ‘Data’ (rather than information) and put it in a ‘Vault’ (captured and safe) that stores everything in its purest original state – an immovable, yet flexibly structured, system of record.

By borrowing modern social data concepts like ‘relationships’ and ‘entities’ and combining them with the older methodologies, the structure of your business concepts is separated out from the sources and forms the skeleton onto which the data is linked. It is possible, and recommended, for a knowledgeable business user to design the intuitive core data concepts behind the Data Vault on a whiteboard before involving the data professionals to fill in the details. Data Vault is a ‘business first’ methodology, focusing data around the business ideas, rather than conforming the business ideas to the data model.

Business may change superficially, but the concepts that underpin it, aspects such as customer, product, etc., do not. It is these that form the backbone of the Data Vault.

You want to learn more about Data Vault?

Find out more from our friendly team of business and technical experts at: info@dfakto.com or +32(0)2.290.63.90.


dFakto asked Guido Dauchy, former Head of Transformation at BNP Paribas Fortis, to share his experiences of working with dFakto on the transformation plans at BNP Paribas Fortis. BNP Paribas Fortis is a long-term client of dFakto, and we have been fortunate to work with them since before the transformation plans of 2012 were announced. In short, dFakto has been instrumental in keeping senior executives abreast of what was really going on with the programme, on an almost daily basis. For, as Guido explains, every company needs as much help as it can get.

Starting in June 2012, the “Simple & Efficient” initiative launched with two objectives:

  • Simplify the Group, particularly after several successful integrations and several years of growth
  • Improve efficiency, in light of accelerating regulatory pressure and an uncertain economic environment

The reality is that most transformation projects are not successful; indeed, the probability of delivering as expected is low (~30%). When you look at the numbers in detail, about 70% of failures are due to poor execution rather than poor strategy.

In Guido’s opinion, two aspects contribute to this: the first is a lack of commitment to deliver on the part of management, and the second is the fact that plans are outlined at a level of detail that is too high and too strategic, and therefore not actionable. He recommends ensuring that governance meetings are fed with “valuable, fresh, frequent & qualitative insights” from the field, and that actuals and forecasts are monitored frequently. This ensures an automatic measurement of strategic gaps ‘to date’ and ‘at completion’ and, as a consequence, an ability to follow the evolution of these gaps and close them.

In 2012, BNP Paribas launched the Simple & Efficient transformation programme, with the aim of generating some 2 billion € of savings by 2016. Now, five years later, the programme has not only reached its initial objective but exceeded it, creating savings of 3.3 billion € – an increase of 65% over the initial objective. How did this happen?

with Guido Dauchy, former Head of Transformation at BNP Paribas Fortis, 20th April 2017

Four key success factors

Four key success factors were identified: an evolving governance, a centralised budget, a simple transversal methodology and a dedicated resources team.

It is important to have governance that can evolve. At BNP Paribas, the governance changed according to the phase and stage of the programme. Initially, the executive committee oversaw the planning of the work, while the heads of the different business units worked on the plan. The former gave support to the latter, who then had the responsibility of monitoring what actually happened in the field. This ensured that top management had the right information in the right format as soon as possible, and as directly as possible.

The fact is, however, that it’s not always easy in big organisations to have the right information at the right time, which is why an incentive is needed. This is where the centralised budget came to play an essential role: by funding specific projects, you trigger the feedback you need for those projects. We speak here of a kind of mutual agreement between the steering committee and the heads of business units.

Note, though, that top management should not be made to play the role of the police; their role is to make sure that whatever projects have been promised are also executed. Furthermore, it was important that reporting be as simple as possible, with only the details that are relevant, including whether or not the business unit was confident of reaching the objectives of the overall programme.

A third condition to ensure success in any transformation is using the right methodology. The reality is that in big organisations, you don’t have the time to convince people to be part of something. You therefore have to create a dynamic to make sure that the organisation starts to work within the thinking of the programme from the very beginning. In the case of BNP Paribas, this methodology was based on 3 phases. Initially, of course, the overall goal had to be defined. Once this was agreed, the 1st phase was a top-down overview plan that included all the potential savings for each business, function and territory. The goal of this phase was to decide what kind of initiatives could be taken to reach the overall objectives – in other words, to pinpoint the savings that could be delivered and at what cost. Ideally this phase should last only about 6 to 8 weeks; the recommendation is not to take too much time. Make it easy for those doing the job and, because transformation is not their normal job, give them the support they need. The 2nd phase consisted of implementing the ideas from phase 1. More people are involved in this phase; indeed, it’s where you bring in people who are capable of turning the initiatives into projects.

At BNP Paribas, it took 12 weeks to translate everything into project programmes. The reality is that you have to know what you’re doing, how you’re doing it and when you’re doing it, otherwise the initiatives simply won’t move forward. Also, you shouldn’t allow initiatives outside the scope of the plan. Finally, the last phase is the implementation phase. This phase is longer than phases 1 & 2, and it’s in phase 3 that you’ll make the changes in the governance.

The last key success factor we identified in the BNP Paribas case is the necessity of having a dedicated team. The beating heart of the programme, they are the people who bring everything together: managing the programme, giving information to the governance, and presenting possible solutions. They were also the people that made sure that decisions were acted upon on the ground.

These 4 factors helped BNP Paribas exceed the initial objective of 2 billion € of savings and reach 3.3 billion €. Projects that exceed their initial objectives represent only a small percentage compared to those that merely reach their objectives (and an even smaller one compared to those that fail). So how did the programme exceed its objectives?

3 ways to exceed objectives

The first is the extension capability. It refers to the dynamics created once phases 1 & 2 of the methodology have started. During these phases, you see lots of questions being raised and many people starting to communicate with each other; indeed, everyone begins to be more and more convinced of the real purpose of the programme. The result is that they begin to think in another way: their mindset changes to be in line with the dynamics of the programme. At that moment it’s important to start a new wave of phases 1 & 2 to generate (more) new ideas and solutions. As a result, people will come up with better and more valuable ideas.

The second is the compensation capability. If a project fails to deliver for whatever reason, the loss needs to be compensated by a new project or extra effort in another project in order to respect the initial commitment. When we talk about compensation we mean launching another project or improving an existing one. This compensation capability implies that all parties are fully transparent about the progress of their projects, identifying potential problems at the earliest possible moment so that they can respond quickly and limit the risk of not achieving the objectives. With transparency and non-judgemental communication, it’s possible to admit that a project will deliver less than expected. And experience showed us that people always find compensation elsewhere, because they don’t want to show failure before the executive committee!

The third is continuous adaptation. A transformation doesn’t happen in a vacuum; it occurs in a world that is changing and disruptive. This is why a dedicated team is essential: it ensures continuous transformation within the company, because if you don’t adapt, you’ll disappear.

At dFakto, the way we see transformation involves different things. First, we encourage proactive management. We believe that if you can identify issues before they have an impact, it’s easier to resolve them (hence the real value of supplying valuable, fresh, frequent & qualitative insights). When people realise they won’t succeed in their part of the programme, they will report it, and by doing so avoid negative figures in the results. Furthermore, if you can identify what’s wrong as early as possible, there is a strong incentive to find ways to solve the problems and deliver.

In short, as Guido says,

“We have a more organic approach to transformation. We see it as a living organism. It’s something reactive: you can do something, or you can do something different. We used to plan, execute, and then give feedback. Today, we don’t have this ‘only’ linear approach to transformation. We see it, thanks to dFakto data, as something much more cyclical, and that includes a higher reporting frequency and, consequently, a higher planning frequency.”


Data Vaults are agile, but what does that mean in simple terms?

In simple terms, it means you don’t have to “eat the whole whale” in one go; small bites are more effective for achieving data visibility. You can even share the meal among a collection of people all at once, and the pace can increase without errors. It’s because the Data Vault is highly structured that you can split it up into pieces and (without any difficulty) join them back together again. It means you can do it all at once, in small increments, or something in between, at your own pace and according to your own resource constraints.

Just like Lego (snap it together, build out)

Your business decides the core structure of business concepts, and the data snaps onto these concepts rather than being forced into more traditional styles of data modelling. You can start right now with as little or as much data as you have, and add to it as you figure out your future requirements instead of designing a full data model first. And because Data Vault is atomically structured, you can use the tiniest piece in a dashboard from the moment you have that smallest bit of data available. If your data grows, it won’t affect what you have already measured.

You might want to run a small-scale trial to evaluate the benefits before expanding to the entire enterprise? No problem: Data Vault won’t waste your time, because whatever you build now will not need to change, allowing you to grow and expand into an enterprise data warehouse once and only once. You may choose to scale quickly by setting up parallel teams, each building a separate agile data store. With Data Vault, these can all be easily synchronised through the separation of the core structure from the data, and merged together afterwards. If teams agree on a simple core structure of business concepts and relationships, they can each develop on top of the shared construct. It isn’t a model, it’s an agreed way of connecting business entities – something you can achieve on a whiteboard!

  • Start anywhere, evolve elsewhere, bring it all together anytime: integrate business domains driven by real business priorities. Relations between different business domains can be established at any later point in time (they don’t need to be thought of upfront).
  • Start small, grow any size: It doesn’t matter whether you’re building a small data warehouse for some Master Data or a full enterprise data warehouse. The advantage of Data Vault projects is that they bring results very early after you start the project, and because they are based on business concepts more than data, they are highly flexible as they grow.
  • Automate from the start for fast iterations: The key to Data Vault is deconstructed activities which are rapidly repeatable. Data Vaults and automation go hand in hand, indeed the simplicity and consistency of the approach encourages automation. Standardise your business concepts (on a whiteboard), generate the structure, automate the integration routines and then start growing your agile data warehouse.

Deconstructed data models improve automation

Once the key components of the business and their relationships are understood (keys or IDs, like the customer reference number on an invoice), you simply hang data off them as you find it. It means that a disjointed approach to data gathering, if that’s all that is possible, doesn’t impact the final Data Vault, because that’s how the method is meant to work! Fast, slow, big, small – it doesn’t matter. Work on different areas independently and then bring it all together, or start by attaching all of your existing data marts as sources and keep going from there, but with agility. It’s a very intuitive and flexible process for the business, who can follow the data modelling with no requirement other than understanding how the business processes work and being able to operate a whiteboard.

Breaking down and segregating the work in this way makes it a very repeatable process which scales well with automation. In fact, automation is highly recommended, as the core structure of the Data Vault is simply meant to extract and load data ‘Satellites’ onto the lattice-work of ‘Hubs’ and ‘Links’. By segregating the methodology into a network of data tables, the system almost requires automation to fulfil its ultimate goal of near-real-time business data for analytics. You’ll be pleased to hear that dFakto has already developed the key automation features you need to build your first Data Vault in just a few weeks!
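
Because every satellite load follows the same pattern – hash the business key, stamp the load date, append the new row – the routine is easy to generate rather than hand-write. Below is a minimal, self-contained sketch of such a repeatable load step using Python’s built-in sqlite3 and hashlib; the table, column and function names are illustrative and not part of any dFakto tooling.

```python
# Minimal, self-contained sketch of a repeatable 'hang a satellite off a hub'
# load routine, using Python's built-in sqlite3 and hashlib.
# All table, column and function names are illustrative only.
import hashlib
import sqlite3
from datetime import datetime, timezone

def hash_key(business_key: str) -> str:
    """Deterministic hash key derived from the business key."""
    return hashlib.sha256(business_key.encode()).hexdigest()

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,
    customer_id   TEXT NOT NULL,
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_crm (
    customer_hk   TEXT NOT NULL,
    load_date     TEXT NOT NULL,
    name          TEXT,
    email         TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hk, load_date)
);
""")

def load_customer(customer_id: str, name: str, email: str, source: str) -> None:
    """One repeatable load step: ensure the hub row exists, append a satellite row."""
    hk = hash_key(customer_id)
    now = datetime.now(timezone.utc).isoformat()
    db.execute("INSERT OR IGNORE INTO hub_customer VALUES (?, ?, ?, ?)",
               (hk, customer_id, now, source))
    # Satellite rows are only ever appended, never updated: history is kept.
    db.execute("INSERT INTO sat_customer_crm VALUES (?, ?, ?, ?, ?)",
               (hk, now, name, email, source))

load_customer("C-001", "Alice Dupont", "alice@example.com", source="CRM")
load_customer("C-001", "Alice Dupont", "alice.d@example.com", source="CRM")  # new version, old one kept
```

Because the satellite is append-only, re-running the load with changed attribute values simply adds a new version while the full history is preserved.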

You want to learn more about Data Vault?

Find out more from our friendly team of business and technical experts at: info@dfakto.com or +32(0)2.290.63.90.


dFakto explains how you can grow profits while lowering costs by responding more rapidly to your customers’ needs

Many companies are under constant pressure, and this is felt right across the company. While boards and CEOs continue to insist on increased revenues and lower costs, the need to delight customers, multiply customer growth and keep them at the centre of your preoccupations is becoming increasingly critical to the well-being of the business.

The problem is that while companies are trying hard to become better, the results often fall short. dFakto believes that to be successful in building value and providing compelling customer experiences at a lower cost, companies need to use the power of digital and data to transform themselves into a more agile and more effective organisation devoted to customer growth.


Boosting customer growth – It’s not about big data, it’s about the right data

Sadly, a company’s internal systems are too often disconnected from the reality of the customer. This is because most of these processes grew from internal requirements and a traditional focus on siloed products and services. This inside-out focus is at odds with winning over the customers of today, who are increasingly demanding and tech-savvy.

Even those companies that have taken the decision to build a ‘live’ picture of their customers have run into difficulties. There is a natural desire to link legacy ERP systems with a new CRM approach, yet the results have been patchy to say the least. This kind of decision often means years of development and decisions on technology that are quickly superseded. In short, there is quite a distance between the internal systems and processes (with legacy systems and rigid IT infrastructure) and the demands of the marketing department and ‘customer experience’ management.

Customer growth thanks to a fast, evolutive and tailor-made data management platform

A rapid and efficient solution to this problem is a dynamic data warehouse from dFakto. It sources the required data, no matter what type of system it comes from, and puts it in a unique “data factory”. There the data is enriched as necessary, quality-controlled and converted into advanced reports that offer genuine customer insights.

The result is near real-time data on your customers and far greater marketing agility, letting you make decisions and respond to changing customer expectations rapidly. The system offers insights based on real, customer-centric data and is highly secure.

The payback for you is operational excellence and a far more nimble organisation: dFakto can develop a proposal in days and deliver first data insights within a matter of months. We’ve done it before for the civil engineering, aerospace, finance and automotive sectors, and we can do it for you.

Ask us for a confidential conversation and see how we can help you generate growth beyond your expectations: Thibaut Ladry – tla@dfakto.com, m: +32 473 514 730.


Is your company’s data warehouse working the way you want it to? Are you happy with your dashboards? Do all the sources of data you produce feed into it? If you’re using the ‘old methods’, the answer is probably not. Sadly, this is not going to change anytime soon if your business changes at any pace. Indeed, you likely have a series of overlapping change requests and a reluctance to introduce more until you can have assurance of some progress, or even some semblance of currency.
Rising above the change requests, a more pertinent question is: ‘Does your data warehouse reflect the current reality of your business?’ Certainly not if you are still waiting on those pending changes. In fact, I’ll bet you have probably resigned yourself to all of this lag and labelled it ‘business as usual’. You might, if you are proactive, be looking out for technology solutions, but it need not be like that.

Old dogs, new tricks?

Using just the ‘tried and tested’ approaches, any time you change one thing in your data warehouse there are usually knock-on effects on dependent systems. As a result, your data warehouse must usually be altered (and tested!) iteratively until its data model can cope. This happens a lot, and many have resigned themselves to accepting it as the price of change in ‘modern’ data warehouse technology. We know it’s not true.
New regulations, compliance requirements, connected external systems, business mergers, process changes or simply expanding requirements all introduce a need to modify how you capture, store and analyse your data. In fact, the most significant challenge of any data warehouse is to capture change while maintaining continuity with historical data, so that reporting is consistent and accurate. Sounds impossible? Not really.
Making changes using traditional methods will nearly always require a review of data models, impact assessments and even complete rewrites when new requirements are implemented – it’s all very slow, tedious and sometimes painful. And when you are done, the model changes must be tested to ensure nothing has been broken before moving from staging to live analytics. Weeks turn into months, and it involves a huge amount of time and effort.

A better approach: Data Vault

It’s complicated keeping up with today’s pace, which is why some clever people came up with the Data Vault discipline. It’s a method of separating the core data structure from the data content. The result is that you can quickly and accurately adapt to change without suffering the lag times and inaccuracies of traditional data warehouse methodologies. You build the structure ONCE and then hang your data off it in ‘satellites’. Need to change? No problem, just add additional structure without breaking what was already there. If you need to include more data, you simply hang more off the core structure (‘hubs’ and ‘links’).
Data Vault is a methodology, not a technology. It is a way of thinking about data, rather than a shiny new trend. It involves separating the structure of your data from the changes. It simplifies and stabilises your data model, so that it becomes a fixed entity into which your raw data is locked and never changes. It is a vault of truth about your business – warts and all.

You want to learn more about Data Vault?

Find out more from our friendly team of business and technical experts at: info@dfakto.com or +32(0)2.290.63.90.


Regulation often drives evolution in organisations, but never before has regulation come so close to shaping every organisation’s approach to data. The General Data Protection Regulation (GDPR), the EU’s answer to the increasingly fast evolution of data collection and processing in our modern world, intends to govern the collection and usage of personal data through a global approach. All public and private organisations must prepare for the scheduled start date of 25 May 2018.

A data protection regulation? What exactly does it protect?

The GDPR intends to protect privacy by applying rules to the collection, safekeeping, processing and usage of personal data, in doing so substantially altering the terms of previous European and national regulations on the topic. Personal data is defined as information permitting direct or indirect identification of individual persons. This encompasses the obvious names and identification numbers, but also covers location data and specific characteristics of persons such as physical traits, health information or socio-economic data. Anything that links to a person is considered personal data, whatever the context and whatever the reason the information was collected in the first place.

This means that information on the private or professional context of a person is still personal data. As a consequence, not only is client and prospective-client data concerned, but also data on human resources, providers and partners, whether on a commercial or non-profit basis.

Processing of personal data

As is now commonplace, the regulation encourages organisations to enforce privacy management through a risk-based approach, meaning that the level of control and protection must be commensurate with the sensitivity of data.

The key objective of the data controller is to ensure that data subjects’ rights are protected on an ongoing basis, as well as through specific actions and participation in environment-altering projects for assessment and safeguarding purposes. Even though the GDPR aims at protecting privacy, the regulation remains mindful of the importance of data analysis in today’s connected world, allowing some leeway through pseudonymisation, encryption, management of client consent and privacy-by-design concepts. The data controller is responsible for safekeeping personal data as well as for demonstrating compliance upon request from a data subject or regulator.

Where relevant, i.e. in public institutions or certain organisations handling sensitive personal data, a data protection officer must be appointed. The data protection officer has an overarching responsibility for privacy protection and should be considered, for all intents and purposes, the GDPR compliance officer. The role encompasses the ongoing training of, and advice to, the concerned functions across the organisation, as well as a front-line role towards the relevant regulatory bodies.

Roadmap to compliance

Achieving regulatory compliance usually starts with a gap analysis. This will help identify the key items to address and pave the way for remediation implementation. In the case of GDPR, the data-centric approach to privacy protection means that any future data source or repository underlying existing or new infrastructure will in turn be subject to the same regulatory requirements.

Building a future-proof infrastructure today seems like a vain promise, and it probably is, due to the increasing pace of regulation review, the speed of technological evolution and the relative complexity for organisations to marshal their forces into projects. dFakto offers an easily updated, easily connected solution to the existing infrastructure with the sole purpose of cataloguing and analysing all data in the light of the regulation, shifting the challenges from the entire organisation to just maintaining a single, dedicated application.

Platform-based analytics

Maintenance of compliance standards in an environment where both the infrastructure and compliance rules may rapidly evolve constitutes probably the greatest challenge of GDPR for any private or public organisation.

Maintaining compliance in a traditional project steering environment implies privacy-centric functions in each and every project, substantially altering the momentum of project roll-out through additional governance, compliance gap analyses and potential re-engineering. Even though these constraints are common to project management across most sectors, they do not constitute a sound base to grow a future-proof privacy-minded business and technological environment.

Experience tells us all that both the infrastructure and regulation will evolve over time, probably even faster than we would expect. With that in mind, it seems reasonable to assume that the most relevant action today is to implement a platform, the role of which is limited to connecting all data sources across the organisation to detect personal data, store an audit trail and issue actionable reporting to data controllers.

Today’s technology and process expertise can help us move beyond the limitations of the past: we can build a living catalogue of data able to:

  • connect easily to all existing and future data sources,
  • apply rules categorising data depending on their compliance risk level,
  • store encrypted or pseudonymised data,
  • report to data controllers their list of required actions based on compliance rules,
  • and structure the follow-up to ensure that actions are executed in due time (a minimal sketch of the categorisation and pseudonymisation steps follows this list).
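
As a very small illustration of the categorisation and pseudonymisation steps, the sketch below classifies incoming fields against simple, hypothetical compliance rules and replaces direct identifiers with a keyed hash. The field names, rules and salt handling are assumptions for the example only; a real deployment would rely on properly managed secrets and a legal review of the rules.

```python
# Illustrative sketch: rule-based categorisation plus keyed-hash
# pseudonymisation of personal data fields. The rules, field names and
# salt handling are hypothetical; a real deployment needs managed secrets.
import hashlib
import hmac

SALT = b"replace-with-a-managed-secret"

# Hypothetical compliance rules: which fields count as personal data,
# and how sensitive they are considered to be.
FIELD_RISK = {
    "name": "direct identifier",
    "national_id": "direct identifier",
    "email": "direct identifier",
    "postcode": "indirect identifier",
    "order_total": "not personal",
}

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a keyed hash."""
    return hmac.new(SALT, value.encode(), hashlib.sha256).hexdigest()[:16]

def process_record(record: dict) -> tuple:
    """Return the record as it would be stored, plus actions for the data controller."""
    stored, actions = {}, []
    for field, value in record.items():
        risk = FIELD_RISK.get(field, "unclassified")
        stored[field] = pseudonymise(str(value)) if risk == "direct identifier" else value
        if risk == "unclassified":
            actions.append(f"Review field '{field}': no compliance rule defined")
    return stored, actions

record = {"name": "Alice Dupont", "email": "alice@example.com",
          "postcode": "1000", "order_total": 42.0, "loyalty_tier": "gold"}
print(process_record(record))
```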

In this way, we can move from a state of on-going monitoring, regular reporting and one-off analyses to a dedicated continuous process, limiting the operational and project impacts on the whole organisation while enforcing a privacy-centric approach to data management.

Conclusion: the case for platform-based personal data management

Connecting any tool to a new source will always represent some IT work to integrate data. But that work is limited in scope and cost, and above all does not hinder the progress of new implementation projects and the maintenance of the existing infrastructure.

Likewise, updating the detection and the compliance rules defining actions to be taken always represents analysis work, sometimes even requiring dedicated regulation specialists. But that work is also limited in scope and time, and can be performed in a yearly review process meaning that it does not need to be started for each new project in the organisation.

Shifting the burden of compliance implementation and maintenance from a large project team to a small taskforce of dedicated individuals makes sense not only from an organisational point of view but also from a customer-centric one, as the cost of regulatory compliance usually ends up being borne by the client.

All these elements build the case for a platform-based answer to the issue of regulatory requirements on data management – a solution rooted in data management to answer the challenges of privacy protection.

You want to learn more about GDPR?

dFakto is organising a workshop with the goal of reviewing and discussing the main issues that need to be tackled when it comes to GDPR. For more info please contact Joana Schmitz at jsc@dfakto.com or +32(0)2.290.63.90.

More about the author: LinkedIn – Dorian de Klerk


Transformation has become a necessity in business rather than a nice-to-have; indeed, those businesses not thinking about it in one form or another are really just sticking their heads in the sand.

Research by the consulting company McKinsey confirms that only 26% of executives say their companies’ transformations have been very or completely successful both at improving performance and at equipping the organisation to sustain improvements over time. According to the research, no single action explained the difference in success rate; however, one thing was clear: the more change actions an organisation undertook, the higher the likelihood that its transformation would succeed. The research also suggested that practices such as communicating effectively, leading actively, empowering employees and creating an environment of continuous improvement were contributing factors.

16 years of transformation experience

Over the last 16 years, dFakto has gained a lot of experience of what makes a successful transformation and is happy to share some insights. A good plan is obviously a good place to start, and our business consultants, who’ve seen a few transformation initiatives in their time, are well placed to give an opinion on the quality of the plan.

It’s usually pretty easy to spot the companies and organisations that are the most likely to succeed, as they all tend to have a clearer view on their future. Realistic answers to a couple of pertinent questions quickly tend to sort out the men from the boys, as it were.

  • What is the objective of your transformation? How long have you given for it to happen?
  • Have you got a team dedicated to the project, or are they combining other functions? Are they experienced?
  • Who is the sponsor of this transformation? What level are they at? Board of Directors? Management? Head of department?
  • What funds have been devoted? Is this Capex or Opex?
  • How will progress be monitored?
  • How will management be informed of the progress?

It is on these last 2 questions that dFakto is particularly able to advise and help. Too often the focus is just on the ‘actual’ expenditure, thereby only looking backwards, when an eye on both current and future investments is required. It’s a tough nut to crack, especially as the ‘target’ is a moving one… companies tend to start out thinking they need to be at place A, but quickly have to plan ahead for how they can arrive at places B and C. There is no doubt that the issue is even tougher when the company in question has no idea of its progression and how it is doing.

Numbers are not enough

Experience has taught us at dFakto that the numbers are not quite enough, and that a combination of qualitative and quantitative data is required to know where you are, how you are progressing and how much still has to be completed. Of course, financial expenditure and planned investment information is collected and verified, but so too is how people feel about how they are progressing.

The raw data to be collected can be made available in many different forms, including project management software, text files, information on websites, SAP databases, etc. Once sourced, it is checked and presented in such a way that the CEO and/or the board can quickly and easily get answers to their questions. This dashboard can be updated in just a few clicks, as the numbers are automatically and directly sourced from the client’s systems. Consolidation is done quickly, meaning that it is easy to play the ‘what if’ game – as in, what would be the impact of adding this or that plan, for example.

The naked truth

dFakto gives management the reassurance that the data they are looking at is reliable and has been cross-checked, and that the data being presented is the same as that being used all around the company… in other words, “one single version of the truth” with no trickery, no reworking or manipulation of the figures.

The automation of this data collection across the organisation means that the PMO and team can spend more time thinking about the issues and much less time simply organising the collection and verification of data. It means more time can be invested in anticipating the next moves in the plan, and in the change management required.

A bonus is that there is no agenda behind the figures: they are what they are, and accepted as such – facts and figures. The happy by-product is that any discussions about change are non-confrontational, making it easier to identify, discuss and rectify the bottlenecks in the transformation.

Change and transformation are painful by definition, but they are a lot easier than having to make big decisions in a hurry. All the more reason why it’s nice to know where you’re at, where you are going and when you should get there.

Obviously, this doesn’t mean that your plan should be as static as the figures can be. Linearity can be the reason for failure. When implementing a transformation plan, special attention should be paid to its “non-linearity”.

Any transformation plan must allow some flexibility so that it can be quickly modified in order to limit the damage if a bad idea is being implemented. On the other hand, if the plan over-delivers, it should be flexible enough to exceed initial expectations. This “non-linear” approach has many consequences: ideas can be tested out and resources can be reallocated much faster if needed.

In conclusion

With continuous improvement now becoming a core competency of managers in business today, it is obviously important that they stay on top of what is happening in their organisation. With dFakto’s help they can understand what’s happening and make the necessary transformations, thereby ensuring their business is sustainable and fighting fit for the future.

You want to learn more about Transformation?

dFakto is organising a workshop with the goal of reviewing and discussing the main issues that need to be tackled when it comes to Transformation. For more info please contact Joana Schmitz at jsc@dfakto.com or +32(0)2.290.63.90.