Applying Syntactic Similarity Algorithms for Enterprise Information Management
published: Sept. 14, 2009, recorded: June 2009, views: 3100
To implement content management solutions and enable new applications in data retention, regulatory compliance, and litigation, enterprises need advanced analytics that uncover relationships among documents, e.g., content similarity, provenance, and clustering. In this paper, we evaluate the performance of four syntactic similarity algorithms. Three are based on Broder's "shingling" technique, while the fourth employs a more recent approach, "content-based chunking". For our experiments, we use a specially designed corpus of documents that includes a set of "similar" documents with a controlled number of modifications. Our performance study reveals that the similarity metric of all four algorithms is highly sensitive to the algorithms' parameter settings: the sliding window size and the fingerprint sampling frequency. We identify a useful range of these parameters for achieving good practical results, and compare the performance of the four algorithms in a controlled environment. We validate our results by applying the algorithms to find near-duplicates in two large collections of HP technical support documents.
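The core idea behind the shingling family of algorithms can be illustrated with a small sketch: each document is reduced to its set of overlapping word w-grams ("shingles"), and the resemblance of two documents is the Jaccard similarity of those sets. This is a minimal illustration of Broder's basic technique, not the evaluated implementations; the function names, the word-level tokenization, and the window size `w=4` are assumptions chosen for clarity (production systems typically hash the shingles and sample fingerprints rather than compare full sets).

```python
def shingles(text, w=4):
    """Return the set of overlapping word w-grams (shingles) of a document.

    Tokenization by whitespace and w=4 are illustrative choices, not the
    parameters used in the paper.
    """
    words = text.split()
    # For very short documents, fall back to a single (shorter) shingle.
    return {tuple(words[i:i + w]) for i in range(max(len(words) - w + 1, 1))}


def resemblance(a, b, w=4):
    """Jaccard similarity of two documents' shingle sets:
    |S(a) ∩ S(b)| / |S(a) ∪ S(b)|."""
    sa, sb = shingles(a, w), shingles(b, w)
    return len(sa & sb) / len(sa | sb)
```

A single word substitution perturbs up to w shingles, which is why the resemblance score (and hence the algorithms above) is sensitive to the sliding window size: larger w makes the metric react more strongly to small edits.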