The goal is to turn data into information, and information into insight.
Carly Fiorina, Former CEO of HP
Many different factors contribute to dFakto’s ability to deliver insights and results quickly and accurately. The main one is its proprietary “DataFactory”, which uses a “Data Vaulting” process for inputting data.
Here is what dFakto has put in place:
dFakto’s understanding of the problem
When dFakto sets out to solve a client’s data analysis problem, it starts from the business problem itself, not from the amount of data available within the company.
First, dFakto’s Business Analysts identify the precise answers needed to make a decision.
Second, they look for the data that can provide those insights.
Consequently, they source only the data needed to solve the problem at hand.
dFakto’s systematic methodology
The “DataFactory” is built around a model of how the client does its business. In this way, it mirrors the critical information needed to solve the client’s problem. Furthermore, the incoming data is broken down into its most elemental parts and then archived.
This means that if new data fields or new sources are ever added, there is no need to reconfigure the database architecture.
dFakto creates a “DataFactory” with the data it receives from the client, using a “Data Vaulting” process for inputting data. Everything is stored, regardless of the operating system it comes from. This rigorous, systematic way of storing data preserves the history of every change, so an auditor can trace values back to their original source, and the Project Manager can see who updated a particular data field, and when.
The objective is easy traceability of every change that appears, which in turn creates insights for the client.
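To make the idea concrete, the append-only storage described above can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions, not dFakto’s actual implementation: the `Vault` and `VaultRecord` names and their fields are hypothetical, chosen only to show how an insert-only store keeps the full change history along with who loaded each value, when, and from which source.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class VaultRecord:
    """One immutable entry in an append-only store (illustrative only)."""
    entity_key: str      # business key, e.g. a project identifier
    attribute: str       # which field this value belongs to
    value: str           # the raw value exactly as received
    source_system: str   # which system the value came from
    loaded_by: str       # who performed the load
    loaded_at: datetime  # when the load happened


class Vault:
    """Append-only store: records are added, never updated or deleted."""

    def __init__(self):
        self._records: list[VaultRecord] = []

    def load(self, entity_key, attribute, value, source_system, loaded_by):
        """Record a new raw value without touching earlier versions."""
        rec = VaultRecord(entity_key, attribute, value, source_system,
                          loaded_by, datetime.now(timezone.utc))
        self._records.append(rec)
        return rec

    def history(self, entity_key, attribute):
        """Full change history for one field, oldest first."""
        return [r for r in self._records
                if r.entity_key == entity_key and r.attribute == attribute]

    def current(self, entity_key, attribute):
        """Latest value; every earlier version remains traceable."""
        hist = self.history(entity_key, attribute)
        return hist[-1] if hist else None
```

Because nothing is ever overwritten, an auditor can replay the `history` of any field, and the latest value returned by `current` always carries its own provenance (source, loader, timestamp).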
The fundamental principle of the “DataFactory” is that, at this stage, no distinction is made between good and bad data. Quality is only considered and worked on after the data has first been stored.
There is only “a single version of the facts”.
The advantages of such a system are numerous. The input is always raw data, free of any prior manipulation. In this way, it is possible to track exactly:
- The origin of the data;
- The accuracy of the data;
- The party responsible for the data; and
- The location of the data.
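The “store first, assess later” principle above can be sketched as follows. This is an illustrative example under our own assumptions, not dFakto’s actual code: the record layout and the `quality_flags` check are hypothetical. The point is that even a “bad” value is stored as received, and quality checks run downstream without altering the raw input.

```python
# Everything is stored as received, including values a cleanup
# step would normally reject (illustrative example only).
raw_loads = [
    {"source": "ERP", "field": "budget", "value": "120000"},
    {"source": "CRM", "field": "budget", "value": "not available"},
]


def quality_flags(record):
    """Downstream quality check: flag non-numeric values.

    The raw record itself is left untouched; only a report is produced.
    """
    flags = []
    if not record["value"].replace(".", "", 1).isdigit():
        flags.append("non-numeric value")
    return flags


# A quality report over the raw store; the store is never modified.
report = [(r["source"], quality_flags(r)) for r in raw_loads]
```

Separating storage from assessment is what preserves “a single version of the facts”: every consumer, including the quality layer, works from the same unaltered input.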
Better still, it doesn’t matter how complex the model becomes or how many new data sources are added.
In this way, there is never any need to go back to the beginning and restart the whole process. If another problem arises that requires the same data, everyone can access the same ‘input’ at any time.
In conclusion, dFakto doesn’t necessarily do “big data”, though it can and does. Instead, it works with “the right data”. The company uses its business-analysis expertise and experience to unpack a client’s problem and see precisely what information is needed to answer a specific question. This reduces the time spent collecting and checking data, and increases the time spent understanding and interpreting what the results mean. Better still, clients enjoy peace of mind, knowing that the answers and insights generated are based on the most recent available data.
Find out more from our friendly team of business and technical experts at email@example.com or +32(0)2.290.63.90.