
Announcing Prologika Match

Data is your biggest asset and today’s currency, but data can be messy. You know it and we’ve seen it. The messier the data, the bigger the headache. At Prologika, we believe that data quality issues should be addressed as early as possible and that validation should start in the source systems. You need a solution that detects possible duplicate entries without ETL, custom validation code, or exhaustive hard-coded rules. That’s why I’m excited to announce today the availability of Prologika Match!

Data is “dirty” because of misspellings, truncations, missing or inserted tokens, null fields, unexpected abbreviations, and other irregularities. Prologika Match uses approximate string matching (fuzzy lookup) to detect potential duplicate entries and help you improve data quality at the point of data entry. When the user attempts to save an entry that is similar to what’s already in the database, Prologika Match detects the issue with a certain level of confidence, and the source system can then notify the operator about the potential duplication. As a result, the data is cleaned at the point of entry and duplicate records are avoided.
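Prologika Match’s internals aren’t public, but the general idea behind fuzzy lookup can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the product’s actual algorithm; it uses Python’s standard-library difflib to score an incoming entry against existing records, and the threshold and sample data are made up:

    from difflib import SequenceMatcher

    # Illustrative cutoff: entries scoring at or above it are flagged as likely duplicates.
    SIMILARITY_THRESHOLD = 0.8

    def similarity(a: str, b: str) -> float:
        """Return a 0..1 similarity score between two strings (case-insensitive)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def find_likely_duplicates(new_entry, existing_entries):
        """Score the new entry against every existing entry; return the likely duplicates."""
        scored = [(entry, similarity(new_entry, entry)) for entry in existing_entries]
        return [(entry, score) for entry, score in scored if score >= SIMILARITY_THRESHOLD]

    # A misspelled customer name is caught before the save completes.
    existing = ["John Smith", "Jane Doe", "Acme Corporation"]
    print(find_likely_duplicates("Jon Smiht", existing))  # flags 'John Smith' (score ~ 0.84)

A real matcher would also normalize tokens (abbreviations, punctuation, casing) before scoring, which is how the irregularities listed above are typically handled.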

Duplicate data is a serious issue in many industries, such as healthcare and insurance. Suppose that your company sells products to individual customers and that there is no easy way to identify a customer. When a customer contacts your company, the operator responsible for data entry enters the customer’s information in the system. If the operator misspells the customer’s name or address and the issue goes undetected, the system creates a new record for an existing customer. Prologika Match can detect such duplicate entries by predicting their similarity to existing records and notifying the operator, as in the sketch below.
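Continuing the earlier sketch for this scenario, a record-level score can combine per-field scores for the name and address. The field names and weights below are illustrative assumptions, not product settings:

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Weighted per-field comparison; the fields and weights are illustrative assumptions.
    def record_similarity(new, on_file, weights={"name": 0.6, "address": 0.4}):
        """Combine per-field similarity scores into one record-level score."""
        return sum(w * similarity(new[f], on_file[f]) for f, w in weights.items())

    candidate = {"name": "Jon Smiht", "address": "123 Main Stret"}
    existing  = {"name": "John Smith", "address": "123 Main Street"}
    print(f"record similarity: {record_similarity(candidate, existing):.2f}")  # ~0.89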

Sounds interesting? Give Prologika Match a try now!

Announcing Prologika Validator

Data is your biggest asset and today’s currency, but data can be messy. You know it, and we’ve seen it many times. The messier the data, the bigger the headache. At Prologika, we believe that data quality issues should be addressed as early as possible and that validation should start in the source systems. You need a solution that detects data entry issues without ETL, custom validation code, or exhaustive hard-coded rules. That’s why I’m excited to announce today the availability of Prologika Validator!

Available as a cloud service and as an on-premises offering, Prologika Validator uses predictive analytics to find data anomalies and help you improve data quality. It analyzes historical data to discover common data patterns and relationships. When the user attempts to save a data entry that is outside the norm, Prologika Validator can detect the outlier, and the source system can then notify the operator about a potential data entry error. As a result, the data is cleaned at the source and wrong data doesn’t end up in downstream systems, such as a data warehouse.

Suppose your company buys products from vendors. The operator responsible for order entry creates an order in an ERP system, such as Oracle Financials or Microsoft Dynamics. The ERP system might not be capable of detecting a wrong field entry or a wrong combination of fields, such as a wrong unit of measure for the purchased product. Prologika Validator, however, can detect such outliers by predicting the likelihood that the order entry is outside the norm. The validation process is very fast, and the operator is notified immediately.
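The model behind Prologika Validator isn’t documented here, but a toy sketch conveys the idea for the unit-of-measure example. The frequency-based approach, field names, and cutoff below are my own assumptions, not the product’s implementation:

    from collections import Counter

    RARITY_CUTOFF = 0.05  # combinations seen in under 5% of a product's orders are suspect

    def train(history):
        """Count unit-of-measure frequencies per product from historical orders."""
        counts = {}
        for order in history:
            counts.setdefault(order["product"], Counter())[order["uom"]] += 1
        return counts

    def is_outlier(order, counts):
        """Flag the order if its unit of measure is rare (or unseen) for this product."""
        per_product = counts.get(order["product"])
        if not per_product:
            return True  # no history for this product at all
        share = per_product[order["uom"]] / sum(per_product.values())
        return share < RARITY_CUTOFF

    history = ([{"product": "Cable", "uom": "meter"}] * 98
               + [{"product": "Cable", "uom": "each"}] * 2)
    model = train(history)
    print(is_outlier({"product": "Cable", "uom": "each"}, model))   # True: only 2% of orders
    print(is_outlier({"product": "Cable", "uom": "meter"}, model))  # False: the norm

A production validator would score many fields jointly rather than one pair at a time, but the flow is the same: learn the norm from history, then score each new entry against it.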

Sounds interesting? Visit the Prologika Validator page and give it a try now. If you want to know how Prologika Validator can help you improve data quality, contact us to discuss your requirements and get a pricing proposal.