Medical and Veterinarian Diagnostic Services

[et_pb_section][et_pb_row][et_pb_column type="1_2"][et_pb_text admin_label="Text" background_layout="light" text_orientation="left"]


The new product, a derivative of the current system, will provide a relatively straightforward classification of “good” versus “bad” images of animal companions, giving medical personnel a quick and inexpensive way to determine the next diagnostic step.

Purpose

Using state-of-the-art image analysis and machine learning methods, this system provides an informative diagnostic tool to aid medical personnel when analysing MRI, CT, and ultrasound scans.

Introduction

Diagnosing malignant nodules in radiological scans is a complicated task, requiring training and expertise that can take years to obtain. Radiologists with less experience may have difficulty extracting nodule features with sufficient accuracy [1]. An automatic diagnosis method therefore offers an objective source of information for use when interpreting scans. The basic premise of this project is that a veterinarian or radiologist takes a scan in their surgery and outlines any suspected malignant nodules, and the proposed system informs them of each nodule’s probability of malignancy. The system extracts objective features from the scan, selected to best discriminate malignant nodules. Research shows that training such systems on objective features, rather than on subjective features rated by radiologists, can offer greater success in malignancy detection [1].

Proof of Concept

The initial stage of the work consisted of creating a proof of concept using publicly available datasets (LIDC and INbreast), built upon existing tools.

Data (LIDC)

The Cancer Imaging Archive (TCIA) provides a database of CT scans of over 1,000 patients (resulting in over 244,000 images) [2]; two examples can be seen in Figure 1. These scans have manual markings of nodule boundaries associated with them (see, for example, the red areas in Figure 1), together with a number of subjective features rated by radiologists. In addition to these features (which are outlined in the following section), the dataset contains a rating of malignancy for each nodule (again subjectively reported by radiologists), and for 123 patients pathological diagnoses also exist, giving true ground truth against which to compare the methods.

Features

The following subjective features, associated with each marked nodule, are contained within the TCIA dataset:

- Subtlety
- Internal Structure
- Calcification
- Sphericity
- Margin
- Lobulation
- Spiculation
- Texture

A total of four experienced radiologists have inspected each scan, so there are four opinions as to nodule boundaries and their subjective feature ratings; the mean of these is used to verify the model. The nodule classification toolbox enables users to extract the ground truth images from the dataset, as these are stored in an unconventional (for image processing applications) XML format. Basing the work on this toolbox allowed the above subjective features and boundaries to be easily extracted. Moving beyond this, the boundary information is then used to calculate the following objective features:

- Degree of Circularity
- Degree of Ellipticity
- Degree of Irregularity
- Root-Mean-Squared Variation
- First Moment of Power Spectrum
- Tangential Gradient Index
- Radial Gradient Index
- Line Enhancement Index
- Mean Gradient
- Mean Pixel Value
- Standard Deviation of Pixel Value

These features are chosen to correlate with the information extracted by the radiologists when rating the subjective features but, being computed automatically and mathematically defined, offer an objective measure of the nodule. The initial study’s aim, however, was to develop a system that can approximate the diagnosis of an experienced radiologist.
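
As an illustration of how such objective features can be computed from the marked boundary alone, the sketch below calculates the degree of circularity, 4πA/P², of a closed nodule outline. This is a minimal, self-contained example using hypothetical NumPy point arrays, not the actual toolbox code:

```python
import numpy as np

def degree_of_circularity(boundary):
    """Degree of circularity 4*pi*A / P**2 of a closed boundary.

    `boundary` is an (N, 2) array of (x, y) vertices outlining the
    nodule, as a radiologist might mark it. A perfect circle scores 1;
    irregular (potentially spiculated) shapes score lower.
    """
    x, y = boundary[:, 0], boundary[:, 1]
    # Shoelace formula for the enclosed area.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter as the sum of edge lengths (closing the contour).
    edges = np.diff(np.vstack([boundary, boundary[:1]]), axis=0)
    perimeter = np.sum(np.hypot(edges[:, 0], edges[:, 1]))
    return 4.0 * np.pi * area / perimeter ** 2

# A regular 360-gon approximates a circle, so the score is close to 1.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(round(degree_of_circularity(circle), 3))  # → 1.0
```

The remaining boundary-based features (ellipticity, irregularity, and so on) follow the same pattern: each is a deterministic function of the marked contour and the underlying pixel values, so the same nodule always yields the same feature vector.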

Data (INbreast)

In addition, the experiment was repeated with the INbreast breast cancer patient database (containing 412 scans) using the same methodology, except that this dataset does not contain subjective ratings, so that part was omitted. The nodule outlines are extracted from the XML files, and the same objective features as described above are extracted from the DICOM images and used for classification.
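
The outline-extraction step amounts to recovering each region of interest as a list of (x, y) points from an annotation file. The sketch below illustrates the idea against a simplified, hypothetical XML layout; the real INbreast annotation format differs in detail:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified annotation file for illustration only; the
# actual INbreast XML annotations are structured differently.
ANNOTATION = """
<annotation>
  <roi name="mass">
    <point x="10.5" y="20.0"/>
    <point x="12.0" y="21.5"/>
    <point x="11.0" y="23.0"/>
  </roi>
</annotation>
"""

def extract_outlines(xml_text):
    """Return each ROI's outline as a list of (x, y) tuples, keyed by name."""
    root = ET.fromstring(xml_text)
    outlines = {}
    for roi in root.iter("roi"):
        points = [(float(p.get("x")), float(p.get("y")))
                  for p in roi.iter("point")]
        outlines[roi.get("name")] = points
    return outlines

outlines = extract_outlines(ANNOTATION)
print(outlines["mass"])  # → [(10.5, 20.0), (12.0, 21.5), (11.0, 23.0)]
```

Once the outline is recovered, it is paired with the pixel data from the corresponding DICOM image and fed to the same feature-extraction routines used for the LIDC data.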

Classifier

Neural networks have been successfully applied to this problem and have been shown to exceed the performance of manual markers [1, 3, 4]. Architectures such as the multilayer perceptron (MLP) allow highly non-linear decision boundaries in the data to be modelled and are therefore ideal for complex, high-dimensional problems such as that tackled here. In addition, the SVM (which gives similar performance to the neural network but offers more reliable training) and KNN (which offers faster training and application) have been successfully applied. Feature reduction methods are used to increase classification performance and to reduce training and execution times. GPU processing is also exploited, when an appropriate graphics card is present and the neural network classifier is used, to further reduce training and execution times.
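
To make the classification step concrete, the sketch below implements the simplest of the three classifiers mentioned above, KNN, operating on objective feature vectors. The feature values and cluster positions are hypothetical illustrations, not real nodule data:

```python
import numpy as np

def knn_classify(train_features, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training nodules in objective-feature space."""
    distances = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(distances)[:k]
    # Majority vote: 1 = malignant, 0 = benign.
    return int(np.round(train_labels[nearest].mean()))

# Hypothetical 2-D feature vectors (e.g. circularity, mean gradient):
# benign nodules cluster near (0.9, 0.1), malignant near (0.1, 0.9).
features = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
labels = np.array([0, 0, 1, 1])
print(knn_classify(features, labels, np.array([0.15, 0.85])))  # → 1
```

The MLP and SVM classifiers plug into the same place in the pipeline, trading the KNN's fast training against better modelling of non-linear decision boundaries; averaging the vote fraction rather than rounding it would yield the probability-of-malignancy output described in the introduction.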

References

[1] K. Nakamura, H. Yoshida, R. Engelmann, H. MacMahon, S. Katsuragawa, T. Ishida, K. Ashizawa, and K. Doi. Computerized analysis of the likelihood of malignancy in solitary pulmonary nodules with use of artificial neural networks. Radiology 214 (3): 823-830, 2000.

[2] S. Armato, G. McLennan, L. Bidaut, M. McNitt-Gray, C. Meyer, A. Reeves, B. Zhao, D. Aberle, C. Henschke, E. Hoffman, E. Kazerooni, H. MacMahon, E. van Beek, D. Yankelevitz, et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans. Medical Physics 38 (2): 915-931, 2011.

[3] Y. Wu, M. Giger, K. Doi, C. Vyborny, R. Schmidt, and C. Metz. Artificial neural networks in mammography: application to decision making in the diagnosis of breast cancer. Radiology 187 (1): 81-87, 1993.

[4] Y. Wu, K. Doi, M. Giger, and R. Nishikawa. Computerized detection of clustered microcalcifications in digital mammograms: applications of artificial neural networks. Medical Physics 19 (3): 555-560, 1992.

[/et_pb_text][/et_pb_column][et_pb_column type="1_2"][et_pb_text admin_label="Text" background_layout="light" text_orientation="left"]

[/et_pb_text][/et_pb_column][/et_pb_row][/et_pb_section]