Many different factors contribute to dFakto’s ability to deliver insights and results quickly and accurately. One of them is its proprietary “dataFactory”, which uses a “data vaulting” process to ingest data.
dFakto creates a “dataFactory” with the data it receives from the client. The data is entered into the factory using a “data vaulting” process that catalogues everything, regardless of the system it comes from. This is a rigorous and systematic way of storing data: once data has been entered into the system, no change can be made without being recorded. Not only does this enable an auditor to trace values back to their original source, it also lets the project manager see who has updated a particular data field, and when. Every entry is accompanied by record source and load date attributes for easy traceability.
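The append-only storage described above can be sketched in a few lines of Python. This is a minimal illustration, not dFakto’s actual implementation: all class and field names are hypothetical, chosen only to show how record source and load date attributes make every value traceable.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass(frozen=True)
class VaultedRecord:
    """One immutable entry: the raw value plus its traceability attributes."""
    business_key: str     # identifier of the entity the value belongs to
    attribute: str        # which data field was loaded
    value: Any            # the raw value, stored exactly as received
    record_source: str    # where the value came from (file, system, user)
    load_date: datetime   # when the value entered the vault

class DataVault:
    """Append-only store: an update adds a new record, nothing is overwritten."""
    def __init__(self):
        self._records: list[VaultedRecord] = []

    def load(self, business_key, attribute, value, record_source):
        self._records.append(VaultedRecord(
            business_key, attribute, value, record_source,
            datetime.now(timezone.utc)))

    def history(self, business_key, attribute):
        """Full audit trail for one field: every value ever loaded, with source and date."""
        return [r for r in self._records
                if r.business_key == business_key and r.attribute == attribute]

    def current(self, business_key, attribute):
        """The latest value wins, but every earlier version remains traceable."""
        hist = self.history(business_key, attribute)
        return hist[-1] if hist else None

vault = DataVault()
vault.load("project-42", "budget", 100000, "erp_export.csv")
vault.load("project-42", "budget", 120000, "manual_correction")
# current() returns the corrected value; history() still shows both entries,
# each with who/where it came from and when it was loaded.
```

Because the store is append-only, a “change” is simply a newer record; the original value, its source, and its load date are never lost.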
A fundamental principle of the dataFactory is that no distinction is made at this stage between good and bad data; data quality is assessed and worked on only after the data has been stored. This preserves “a single version of the facts”.
A systematic methodology
The dataFactory is built around a model of how the client does business, so it mirrors the critical information needed to solve the client’s problem. Incoming data is broken down into its most elemental parts and then archived. As a result, if new data fields or new sources are added later, there is no need to reconfigure the database architecture.
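The idea of breaking records down into elemental parts can be illustrated with a short sketch. This is an assumption-laden simplification, not dFakto’s schema: the function and field names are hypothetical. The point it shows is that when every field becomes a (key, attribute, value, source) row, a new source or a new field is just more rows, never a schema change.

```python
def decompose(record: dict, business_key_field: str, record_source: str):
    """Break one raw record into its most elemental parts:
    one (key, attribute, value, source) tuple per data field."""
    key = record[business_key_field]
    return [(key, attr, value, record_source)
            for attr, value in record.items()
            if attr != business_key_field]

# A record from an existing source:
rows = decompose({"project_id": "P1", "budget": 100}, "project_id", "erp")

# Later, a new source contributes an extra field ("risk"). No database
# reconfiguration is needed: the new attribute simply becomes more rows.
rows += decompose({"project_id": "P1", "budget": 110, "risk": "low"},
                  "project_id", "planning_tool")
```

In a real data vault these rows would land in hub and satellite tables, but the principle is the same: the structure absorbs new fields and sources without being redesigned.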
dFakto starts by understanding the problem
When dFakto sets out to solve a client’s data analysis problem, it starts from the business problem, not from the mountain of data available within the company. dFakto business analysts ask themselves what precise answers are needed to make a decision, and what data can provide those answers. They therefore source only the data needed to solve the problem at hand.
Not necessarily ‘Big Data’ but the ‘Right data’
The advantages of such a system are numerous. The input is always raw data with no prior manipulation, so it is possible to trace exactly where each value came from, who entered it, and whether it is correct. Better still, it doesn’t matter how complex the model becomes or how many new data sources are added: there is never any need to go back to the beginning and start the whole process again. And if another problem arises that requires the same data, everybody can work from the same ‘input’.
In conclusion, dFakto doesn’t necessarily do “big data”, though it can and does; instead, it works with the “right data”. The company uses its business analysis expertise and experience to unpack a client’s problem and identify precisely what information is needed to answer a specific question. The result is less time spent collecting and checking data, and far more time spent understanding and interpreting what the results mean. Better still, clients enjoy peace of mind, knowing that the answers and insights generated are based on the most recent available data.