
Unstructured Data = Letters in a Bucket?

August 9, 2011

Unstructured Data – The Myth

There’s a lot of noise about “structured” and “unstructured” data. But what is this “unstructured data” beast we keep hearing about?

It’s almost impossible to see a presentation from a search, ECM, or eDiscovery vendor that doesn’t talk about how “80% of a corporation’s data is unstructured.” Aside from the fact that much of this data is garbage (see my earlier postings), is it really fair to call it unstructured?

Personally, I hate the descriptor “unstructured content.” When I hear it, all I can think of is letters spilling out of a bucket – no rhyme or reason to the flow, no sense to be made (unless you’re Edward Lorenz).

What we so often call unstructured content actually has plenty of structure that can be leveraged for many kinds of text analytics.

Explicit Metadata

Every file system object has metadata attributes associated with it. This is explicit metadata, which search vendors have used for years to improve content findability. Attributes like “name,” “date,” and “filetype” are examples of explicit metadata.

Since the dawn of computing, explicit metadata has helped computers store information and users find it.
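To make explicit metadata concrete, here’s a minimal Python sketch of my own (not from any vendor tool) that reads a few file system attributes; the field names and the example filename are purely illustrative.

from datetime import datetime, timezone
from pathlib import Path

def explicit_metadata(path: str) -> dict:
    """Read basic file system attributes for a single object."""
    p = Path(path)
    st = p.stat()
    return {
        "name": p.name,                    # e.g. "quarterly_report.docx"
        "filetype": p.suffix.lstrip("."),  # e.g. "docx"
        "size_bytes": st.st_size,
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
    }

Attributes like these are exactly the “name,” “date,” and “filetype” fields a search engine can index without ever opening the file.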

Implied Metadata

This is metadata that can be defined or extracted from the content of objects in file systems. Office documents have a Properties sheet with information such as “last accessed date,” “author,” and “word count.” Music files often store album information and song length.

Even deeper, within many content objects we can identify and extract things like credit card numbers, phone numbers, dates, or names. This type of entity identification and extraction enables a rich metadata view into content – something only possible because there’s _structure_, not a random jumble of letters.
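As an illustration of what that identification and extraction can look like, here’s a minimal sketch using regular expressions; the patterns are deliberately naive and the sample text is invented.

import re

# Illustrative patterns only; real entity extraction is far more robust.
PHONE = re.compile(r"\+?1?[-. ]?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")
CARD_LIKE = re.compile(r"\b\d(?:[ -]?\d){14,15}\b")  # naive 15- or 16-digit shape

def implied_metadata(text: str) -> dict:
    """Pull simple entities out of body text: the structure hiding in 'unstructured' content."""
    return {
        "phone_numbers": PHONE.findall(text),
        "possible_card_numbers": CARD_LIKE.findall(text),
    }

sample = "Call me at 647.285.2630 before card 4111 1111 1111 1111 expires."
print(implied_metadata(sample))
# {'phone_numbers': ['647.285.2630'], 'possible_card_numbers': ['4111 1111 1111 1111']}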

So What?

It’s unfair to call an email or a Word document “unstructured.” I prefer the generic term “content,” because when you really look, there’s rich structure that a business can exploit. Companies can use “last accessed date” to support records management (RM) retention and disposition activities, or credit card number identification to ensure adherence to their PII strategy.

Thankfully, there’s no such thing as unstructured data. Otherwise the companies I work with wouldn’t be able to use explicit metadata, implied metadata, and the rich object structure of content in their content analytics strategies.

Data Quality and Unstructured Content

February 6, 2011

Data quality matters to unstructured content. Just as quality is a critical requirement in structured data integration, it is intrinsic to an effective content strategy.

Unstructured content is rife with quality issues. Spelling errors, nearly random formatting of key attributes like phone numbers, and acronyms (and their variants) are just some of the quality issues confronting unstructured content – just as with structured data.

There’s no such thing as an irrelevant data quality issue.  To quote Ted Friedman:

“If you look at…any business function in your company, you’re going to find some direct cost there attributed to poor data quality.” (http://www.gartner.com/it/products/podcasting/asset_145611_2575.jsp)

The quality of your data directly impacts the business’s ability to support effective content analytics, search, and content integration.

If you’re going to leverage content, you must be able to trust it – and that means executing quality processes as part of the semantic enrichment before analysis, search or content integration can be successful.

Data quality processes are key to an effective BI strategy. So too are they key to content analytics, search, and content integration (ETL).

Smarter content means trusted content.  You can’t trust your content unless there’s a quality process around it.

Data Quality for Unstructured Content

While not intended to be comprehensive, here are some core quality activities one must undertake to create Trusted Business Content:

Standardization – This comes in the form of spelling correction, acronym management, and format normalization (a minimal sketch follows this list).
a) Spelling: Correcting the spellings of product, vendor, or supplier names. Product names are frequently misspelled in documents and web pages, and it’s simple yet critical to recognize those errors and correct them.
b) Acronyms: You must recognize and standardize acronym usage. For example, recognizing “I.B.M.”, “ibm”, and “International Business Machines” and standardizing them all to “IBM.”
c) Formats: Recognize strings such as 647.285.2630 and +1-640-285-2630 and normalize them into a consistent phone number format.
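Here’s a minimal sketch of what that standardization can look like; the acronym table and the target phone format are my own assumptions for the example, not a prescribed rule set.

import re

# Illustrative lookup table; in practice the acronym dictionary would be curated and managed.
ACRONYMS = {"i.b.m": "IBM", "international business machines": "IBM", "ibm": "IBM"}

def standardize_acronyms(text: str) -> str:
    for variant, standard in ACRONYMS.items():
        text = re.sub(rf"\b{re.escape(variant)}\b", standard, text, flags=re.IGNORECASE)
    return text

def standardize_phone(text: str) -> str:
    """Rewrite North American phone numbers into one consistent +1-NNN-NNN-NNNN form."""
    pattern = re.compile(r"\+?1?[-. ]?(\d{3})[-. ](\d{3})[-. ](\d{4})")
    return pattern.sub(r"+1-\1-\2-\3", text)

print(standardize_phone(standardize_acronyms("Call ibm at 647.285.2630")))
# Call IBM at +1-647-285-2630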

Verification – This comes in the form of validating strings and recognizing their semantic meaning. In conjunction with standardization, verification means capabilities such as recognizing that a string matching “AA######” is a potentially valid Canadian passport number, or identifying a 15-digit string starting with “34” or “37” and determining it’s likely an Amex credit card number. This verification of the semantic meaning of extracted entities enables the business both to standardize content and to assess its unstructured content assets (see the sketch below).
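As a sketch of what verification might look like in code, the snippet below pairs that “AA######” passport shape with a Luhn checksum test for card-number-shaped strings; the passport value in the example is invented.

import re

PASSPORT_CA = re.compile(r"\b[A-Z]{2}\d{6}\b")  # the "AA######" shape described above

def luhn_valid(number: str) -> bool:
    """Luhn checksum: a quick plausibility test for card-number-shaped strings."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(bool(PASSPORT_CA.search("Passport GA302922 on file")))  # True (shape only, not a real check)
print(luhn_valid("4111 1111 1111 1111"))                      # True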

Enriching – This means recognizing that a document contains any of the above and enriching the surrounding metadata. It enables more effective search and deeper, richer analytics, and it supports content ETL processes as well. For example, identifying that a document contains a credit card number is of business value; being able to enrich the document’s metadata attributes with a flag indicating that a credit card number exists is critical to effective, functional assessment and empowers a richer, more effective search experience (a minimal sketch follows).
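To show the enrichment step, here’s a minimal, self-contained sketch that adds a flag attribute to a document’s metadata; the metadata fields and the naive card pattern are illustrative assumptions, not any particular product’s schema.

import re

CARD_LIKE = re.compile(r"\b\d(?:[ -]?\d){14,15}\b")  # naive 15- or 16-digit shape

def enrich_metadata(doc_text: str, metadata: dict) -> dict:
    """Return a copy of the document's metadata with a risk flag added."""
    enriched = dict(metadata)
    enriched["contains_credit_card_number"] = bool(CARD_LIKE.search(doc_text))
    return enriched

doc_meta = {"name": "vendor_invoice.docx", "author": "A. Smith"}
print(enrich_metadata("Bill to card 4111 1111 1111 1111", doc_meta))
# {'name': 'vendor_invoice.docx', 'author': 'A. Smith', 'contains_credit_card_number': True}

A search index or assessment report can then filter on that flag directly, which is the richer, more effective search experience described above.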

Data quality on unstructured content matters. Quality makes search better, content analytics effective, and integration usable. Semantic enrichment of quality-free content is akin to “lipstick on a pig.”

It’s a business imperative to make quality a component of your content strategy.