Please use this identifier to cite or link to this item:
https://idr.l2.nitk.ac.in/jspui/handle/123456789/7608
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Narendra Rao, T.J. | - |
dc.contributor.author | Girish, G.N. | - |
dc.contributor.author | Kothari, A.R. | - |
dc.contributor.author | Rajan, J. | - |
dc.date.accessioned | 2020-03-30T10:02:33Z | - |
dc.date.available | 2020-03-30T10:02:33Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, 2019, pp. 978-981 | en_US |
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/7608 | - |
dc.description.abstract | Development of an automated sub-retinal fluid segmentation technique for optical coherence tomography (OCT) scans faces challenges such as noise and motion artifacts in OCT images, and variation in the size, shape, and location of fluid pockets within the retina. The ability of a fully convolutional neural network to automatically learn significant low-level features that differentiate subtle spatial variations makes it suitable for the retinal fluid segmentation task. Hence, a fully convolutional neural network has been proposed in this work for the automatic segmentation of sub-retinal fluid in OCT scans of central serous chorioretinopathy (CSC) pathology. The proposed method has been evaluated on a dataset of 15 OCT volumes, and average Dice coefficient, precision, and recall values of 0.91, 0.93, and 0.89, respectively, have been achieved over the test set. © 2019 IEEE. | en_US |
dc.title | Deep Learning Based Sub-Retinal Fluid Segmentation in Central Serous Chorioretinopathy Optical Coherence Tomography Scans | en_US |
dc.type | Conference Paper | en_US |
Appears in Collections: | 2. Conference Papers |
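
The abstract above describes a fully convolutional network that makes per-pixel fluid/background predictions on OCT B-scans. The record does not include the paper's actual architecture, so the following is a minimal illustrative sketch only: the class name `SimpleFCN`, the layer sizes, and the use of PyTorch are all assumptions, not details from the paper.

```python
# Minimal sketch (assumed, not the paper's architecture): a generic
# fully convolutional encoder-decoder for binary fluid segmentation.
import torch
import torch.nn as nn

class SimpleFCN(nn.Module):
    def __init__(self, in_channels=1, num_classes=1):
        super().__init__()
        # Encoder: strided convolutions learn low-level features
        # while reducing spatial resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the input resolution
        # so every pixel receives a fluid/background score.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # logits, same H x W as input

# Example: one grayscale 256 x 256 OCT B-scan (random stand-in data).
scan = torch.randn(1, 1, 256, 256)
logits = SimpleFCN()(scan)
mask = torch.sigmoid(logits) > 0.5  # binary sub-retinal fluid mask
```

Because the network is fully convolutional, the same weights apply to scans of any resolution divisible by the downsampling factor, which suits OCT volumes whose fluid pockets vary in size and location.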
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
8.Deep Learning Based.pdf | | 1.88 MB | Adobe PDF | View/Open |
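
For reference, the Dice, precision, and recall figures quoted in the abstract are standard overlap metrics computed between predicted and ground-truth binary masks. A small NumPy sketch follows; the function name `dice_precision_recall` and the dummy masks are illustrative assumptions, not from the paper.

```python
# Sketch of the reported metrics on binary masks (assumed convention):
# Dice = 2*TP / (2*TP + FP + FN), precision = TP / (TP + FP),
# recall = TP / (TP + FN).
import numpy as np

def dice_precision_recall(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, precision, recall

# Usage with dummy 8x8 masks; the paper reports averages of
# 0.91 / 0.93 / 0.89 over its 15-volume test set.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(dice_precision_recall(pred, truth))
```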
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.