Data cleansing
Range Constraints
Range Constraints: typically, numbers or dates should fall within a certain range. That is, they have minimum and/or maximum permissible values.
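As a minimal sketch of such a check (the field name "age" and the bounds are illustrative assumptions, not from the source), a range constraint on a numeric column might be validated like this:

```python
# Hedged sketch: flag values outside a plausible numeric range.
# The bounds (0-120) and the "ages" column are illustrative assumptions.
def in_range(value, minimum=0, maximum=120):
    """Return True if value lies within [minimum, maximum]."""
    return minimum <= value <= maximum

ages = [34, 150, -2, 67]
violations = [a for a in ages if not in_range(a)]
```

Records whose values land in `violations` would then be corrected or flagged for review rather than silently accepted.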
Accuracy is very hard to achieve through data-cleansing in the general case, because it requires accessing an external source of data that contains the true value: such "gold standard" data is often unavailable.
Accuracy has been achieved in some cleansing contexts, notably customer contact data, by using external databases that match up zip codes to geographical locations (city and state), and also help verify that street addresses within these zip codes actually exist.
Regular expression patterns
Regular expression patterns: Occasionally, text fields must be validated against a regular expression pattern. For example, phone numbers may be required to match the pattern (999) 999-9999.
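A minimal sketch of that phone-number check using Python's standard `re` module (the exact pattern enforced is the illustrative (999) 999-9999 format from the text):

```python
import re

# Enforce the illustrative (999) 999-9999 phone format:
# three digits in parentheses, a space, three digits, a hyphen, four digits.
PHONE_PATTERN = re.compile(r"^\(\d{3}\) \d{3}-\d{4}$")

def is_valid_phone(text):
    """Return True if text matches the required phone-number pattern."""
    return bool(PHONE_PATTERN.match(text))
```

Values that fail the pattern would be rejected or routed to a correction step.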
Data constraints fall into the following categories:
Data-Type Constraints, Range Constraints, Mandatory Constraints, Unique Constraints, Set-Membership constraints, Foreign-key constraints, Regular expression patterns, and Cross-field validation.
Data-Type Constraints
Data-Type Constraints: values in a particular column must be of a particular data type, e.g., Boolean, numeric (integer or real), or date.
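A minimal sketch of a data-type check (the column names and expected types are illustrative assumptions):

```python
from datetime import date

# Illustrative column specification: expected Python type per column.
COLUMN_TYPES = {"is_active": bool, "quantity": int, "created": date}

def type_violations(row):
    """Return the names of columns whose values have the wrong type."""
    return [col for col, expected in COLUMN_TYPES.items()
            if not isinstance(row.get(col), expected)]
```

A cleansing pass would report or coerce the columns this function returns for each row.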
In datasets pooled from different locales, weight may be recorded either in pounds or kilos, and must be converted to a single measure using an arithmetic transformation.
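A minimal sketch of that arithmetic transformation (the record layout with "value" and "unit" keys is an assumption; the pound-to-kilogram factor is the standard definition):

```python
# Hedged sketch: normalize pooled weight records to kilograms.
# The record layout ({"value": ..., "unit": ...}) is an illustrative assumption.
LB_TO_KG = 0.45359237  # exact definition of the international pound

def to_kilograms(record):
    """Convert a weight record to kilograms if it is recorded in pounds."""
    if record["unit"] == "lb":
        return round(record["value"] * LB_TO_KG, 2)
    return record["value"]
```

After the pass, every weight in the pooled dataset is expressed in a single unit.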
Incompleteness is almost impossible to fix with data cleansing methodology: one cannot infer facts that were not captured when the data in question was initially recorded.
In some contexts, e.g., interview data, it may be possible to fix incompleteness by going back to the original source of data, i.e., re-interviewing the subject,
but even this does not guarantee success because of problems of recall - e.g., in an interview to gather data on food consumption, no one is likely to remember exactly what they ate six months ago.
In the case of systems that insist certain columns should not be empty, one may work around the problem by designating a value that indicates "unknown" or "missing", but supplying default values does not imply that the data has been made complete.
Mandatory Constraints
Mandatory Constraints: Certain columns cannot be empty.
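A minimal sketch of a mandatory-field check (the required field names are illustrative assumptions):

```python
# Illustrative mandatory-field check; the column names are assumptions.
REQUIRED = ("customer_id", "email")

def missing_fields(row):
    """Return required fields whose values are absent or empty/falsy."""
    return [f for f in REQUIRED if not row.get(f)]
```

Rows for which this returns a non-empty list violate the mandatory constraint and would be rejected or completed.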
The validation may be strict, such as rejecting any address that does not have a valid postal code, or fuzzy, such as correcting records that partially match existing, known records.
Some data-cleansing solutions will clean data by cross-checking it against a validated data set. A common data cleansing practice is data enhancement, where data is made more complete by adding related information.
Data cleansing, data cleaning, or data scrubbing is the process of detecting and correcting (or removing) corrupt or inaccurate records from a record set, table, or database and refers to identifying incomplete, incorrect, inaccurate or irrelevant parts of the data
and then replacing, modifying, or deleting the dirty or coarse data. Data cleansing may be performed interactively with data wrangling tools, or as batch processing through scripting.
Cross-field validation
Cross-field validation: Certain conditions that utilize multiple fields must hold.
Inconsistency occurs when two data items in the data set contradict each other: e.g., a customer is recorded in two different systems as having two different current addresses, and only one of them can be correct.
Fixing inconsistency is not always possible: it requires a variety of strategies.
The term integrity encompasses accuracy, consistency and some aspects of validation but is rarely used by itself in data-cleansing contexts because it is insufficiently specific.
For example, "referential integrity" is a term used to refer to the enforcement of foreign-key constraints above.
For example, appending addresses with any phone numbers related to that address. Data cleansing may also involve activities such as harmonization of data and standardization of data.
For example, harmonization of short codes (st, rd, etc.) to actual words (street, road, etcetera). Standardization of data is a means of changing a reference data set to a new standard, e.g., use of standard codes.
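A minimal sketch of short-code harmonization (the mapping table is an illustrative assumption, and casing of expanded words is kept simple):

```python
# Hedged sketch: expand known address short codes to full words.
# The mapping is illustrative; a real system would use a fuller table.
EXPANSIONS = {"st": "street", "rd": "road", "ave": "avenue"}

def harmonize(address):
    """Expand short codes word by word; unknown words pass through unchanged."""
    return " ".join(EXPANSIONS.get(word.lower().rstrip("."), word)
                    for word in address.split())
```

After the pass, variants like "St." and "st" all collapse to the single harmonized form "street".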
Unique Constraints
Unique Constraints: A field, or a combination of fields, must be unique across a dataset. For example, no two persons can have the same social security number.
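A minimal sketch of a uniqueness check over a single column (the input here is a plain list of values, e.g., the SSN column):

```python
from collections import Counter

# Hedged sketch: find values that violate a unique constraint.
def duplicated_values(values):
    """Return values occurring more than once, preserving first-seen order."""
    counts = Counter(values)
    seen = []
    for v in values:
        if counts[v] > 1 and v not in seen:
            seen.append(v)
    return seen
```

Any value this function returns identifies a set of records that must be merged or corrected.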
Set-Membership constraints
Set-Membership constraints: The values for a column come from a set of discrete values or codes. For example, a person's gender may be Female, Male or Unknown (not recorded).
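A minimal sketch of a set-membership check, using the gender codes from the example above as the allowed set:

```python
# Set-membership check: values must come from a fixed set of codes.
ALLOWED_GENDERS = {"Female", "Male", "Unknown"}

def invalid_codes(values, allowed=ALLOWED_GENDERS):
    """Return the values that are not members of the allowed set."""
    return [v for v in values if v not in allowed]
```

Values it returns (e.g., a stray "M" or "F") would be mapped onto the canonical codes or flagged.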
Data quality
High-quality data needs to pass a set of quality criteria. Those include:
Validity, Accuracy, Completeness, Consistency, and Uniformity.
The set of values in a column is defined in a column of another table that contains unique values.
For example, in a US taxpayer database, the "state" column is required to belong to one of the US's defined states or territories: the set of permissible states/territories is recorded in a separate States table. The term foreign key is borrowed from relational database terminology.
Foreign-key constraints
Foreign-key constraints: This is the more general case of set membership.
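A minimal sketch of a foreign-key check along the lines of the taxpayer example (the reference table is represented here as a small illustrative set, not the full list of states and territories):

```python
# Hedged sketch of a foreign-key check. STATES stands in for the
# separate States reference table; the three codes are illustrative.
STATES = {"CA", "NY", "TX"}

def orphan_rows(rows, key="state", reference=STATES):
    """Return rows whose foreign-key value is absent from the reference set."""
    return [r for r in rows if r.get(key) not in reference]
```

Rows this returns reference a non-existent state and violate referential integrity.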
For example, in laboratory medicine, the sum of the components of the differential white blood cell count must be equal to 100 (since they are all percentages).
In a hospital database, a patient's date of discharge from hospital cannot be earlier than the date of admission.
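Both cross-field conditions above can be sketched as simple predicates (the field names "admitted"/"discharged" and the tolerance on the percentage sum are illustrative assumptions):

```python
from datetime import date

# Cross-field validation sketches; field names are illustrative assumptions.
def valid_stay(record):
    """Discharge date must not precede the admission date."""
    return record["discharged"] >= record["admitted"]

def valid_differential(percentages, tolerance=0.01):
    """Differential white-cell components should sum to 100 (percent)."""
    return abs(sum(percentages) - 100.0) <= tolerance
```

Records failing either predicate contain mutually contradictory fields and need review.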
Data cleansing differs from data validation in that validation almost invariably means data is rejected from the system at entry and is performed at the time of entry, rather than on batches of data.
The actual process of data cleansing may involve removing typographical errors or validating and correcting values against a known list of entities.
Accuracy
The degree of conformity of a measure to a standard or a true value.
Uniformity
The degree to which a set of data measures is specified using the same units of measure in all systems.
Consistency
The degree to which a set of measures is equivalent across systems.
Completeness
The degree to which all required measures are known.
Validity
The degree to which the measures conform to defined business rules or constraints.
After cleansing, a data set should be consistent with other similar data sets in the system.
The inconsistencies detected or removed may have been originally caused by user entry errors, by corruption in transmission or storage, or by different data dictionary definitions of similar entities in different stores.
When modern database technology is used to design data-capture systems, validity is fairly easy to ensure:
invalid data arises mainly in legacy contexts where constraints were not implemented in software, or where inappropriate data-capture technology was used (e.g., spreadsheets, where it is very hard to limit what a user chooses to enter into a cell).
e.g., deciding which data were recorded more recently, or which data source is likely to be most reliable (the latter knowledge may be specific to a given organization),
or simply trying to find the truth by testing both data items (e.g., calling up the customer).