An unsupervised machine learning approach to segmentation of clinician-entered free text (Academic Article)

Overview

MeSH

  • Area Under Curve
  • Artificial Intelligence
  • ROC Curve

MeSH Major

  • Algorithms
  • Medical Records
  • Natural Language Processing

Abstract

  • Natural language processing, an important tool in biomedicine, fails without successful segmentation of words and sentences. Tokenization is a form of segmentation that identifies boundaries separating semantic units, for example words, dates, numbers, and symbols, within a text. We sought to construct a highly generalizable tokenization algorithm with no prior knowledge of characters or their function, based solely on the inherent statistical properties of token and sentence boundaries. Tokenizing clinician-entered free text, we achieved precision and recall of 92% and 93%, respectively, compared to a whitespace token boundary detection algorithm. We classified over 80% of punctuation characters correctly, based on manual disambiguation with high inter-rater agreement (kappa = 0.916). Our algorithm effectively discovered properties of whitespace and punctuation in the corpus without prior knowledge of either. Given the dynamic nature of biomedical language and the variety of distinct sublanguages in use, the effectiveness and generalizability of our novel tokenization algorithm make it a valuable tool.
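
The abstract summarizes the approach only at a high level, and the published algorithm is not reproduced in this record. As a rough illustration of the core idea, inducing token boundaries purely from corpus statistics with no built-in notion of whitespace or punctuation, the Python sketch below scores each inter-character gap by the conditional probability of the observed character transition and splits where that probability is low. The function names, scoring rule, and threshold are illustrative assumptions, not the authors' published method.

    from collections import Counter

    def boundary_scores(text):
        # Count character unigrams and adjacent-character bigrams from the
        # corpus itself; no knowledge of whitespace or punctuation is assumed.
        unigrams = Counter(text[:-1])
        bigrams = Counter(zip(text, text[1:]))
        # Score the gap after each character: a low conditional probability
        # P(next char | current char) suggests a token boundary.
        return [1.0 - bigrams[(a, b)] / unigrams[a]
                for a, b in zip(text, text[1:])]

    def tokenize(text, threshold=0.8):
        # The threshold is an arbitrary illustrative cut-off, not a value
        # taken from the paper.
        tokens, start = [], 0
        for i, score in enumerate(boundary_scores(text), start=1):
            if score > threshold:
                tokens.append(text[start:i])
                start = i
        tokens.append(text[start:])
        return [t for t in tokens if t]

On a sufficiently large corpus, transitions out of space characters are spread across many different successors, so gaps after spaces score high and are discovered as boundaries; this mirrors, in simplified form, the abstract's claim that the method learns the roles of whitespace and punctuation without prior knowledge of either.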

Publication Date

  • 2007

Has Subject Area

  • Algorithms
  • Area Under Curve
  • Artificial Intelligence
  • Medical Records
  • Natural Language Processing
  • ROC Curve

Research

Keywords

  • Evaluation Studies
  • Journal Article

Identity

Language

  • eng

PubMed Central ID

  • PMC2655800

PubMed ID

  • 18693949

Additional Document Info

Start Page

  • 811

End Page

  • 815