Data quality should always be the top priority of your information strategy: getting accurate and timely information to the right people. This includes validating data from source systems, ensuring that the data flows through those systems correctly, and setting up security and access appropriately. It also means proactively reaching out to users when there is a problem; open communication is paramount when addressing issues. If users do not believe they have access to accurate and timely information, they will quickly find other ways to answer their questions, even if those ways fall outside the tools and systems provided to them.
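As a minimal sketch of what source-system validation might look like, the check below flags records that are incomplete, malformed, or stale. The field names, the one-day freshness threshold, and the rules themselves are illustrative assumptions, not a prescribed standard.

```python
from datetime import datetime, timedelta

# Hypothetical field names and thresholds -- adjust to your own sources.
REQUIRED_FIELDS = {"customer_id", "amount", "updated_at"}
MAX_AGE = timedelta(days=1)  # illustrative definition of "timely"

def validate_record(record, now=None):
    """Return a list of data-quality issues found in one source record."""
    now = now or datetime.utcnow()
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        issues.append("amount is not numeric")
    if "updated_at" in record and now - record["updated_at"] > MAX_AGE:
        issues.append("record is stale")
    return issues

record = {"customer_id": "C1", "amount": "12.5",
          "updated_at": datetime.utcnow() - timedelta(days=2)}
print(validate_record(record))  # flags non-numeric amount and staleness
```

Checks like these are most useful when their results are surfaced to users proactively, rather than discovered after a report has already lost credibility.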
Data quality can be subjective. With Master Data, financial, human resources, or operational data, quality standards are generally easy to set with the data owner. When you start working with semi-structured, unstructured, or “big” data, however, the definition of “quality” may differ depending on who is using the data. One person may want to clean up the “noise” in the data, while a data scientist may be looking for that very “noise” to gain insights into how the data was acquired. This is why having different approaches for “raw” data and “curated” or “governed” data is critical.
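One way to support both audiences is to keep the raw data untouched and derive the curated view from it, rather than cleaning in place. The sketch below assumes a simple list-of-records shape and made-up quality rules; the point is only that the “noise” stays available in the raw copy.

```python
# Raw data is preserved as-is; "noise" (bad values) remains for anyone
# who wants to study it. Record shapes and rules are illustrative.
raw_events = [
    {"user": "a", "value": "42"},
    {"user": "b", "value": None},    # noise a data scientist may care about
    {"user": "a", "value": "oops"},  # noise a report consumer wants gone
]

def curate(events):
    """Curated view: keep only records that pass basic quality rules."""
    curated = []
    for e in events:
        try:
            curated.append({"user": e["user"], "value": float(e["value"])})
        except (TypeError, ValueError):
            continue  # excluded from curated, but still present in raw
    return curated

curated_events = curate(raw_events)
print(len(raw_events), len(curated_events))  # raw keeps all 3; curated keeps 1
```

Because curation is a derivation rather than a mutation, the governed dataset can enforce strict standards while the raw zone still answers questions about the acquisition process itself.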