IPAD: Iterative Pruning with Activation Deviation for Sclera Biometrics
Matej Vitek, Matic Bizjak, Peter Peer, and Vitomir Štruc: IPAD: Iterative Pruning with Activation Deviation for Sclera Biometrics. Journal of King Saud University – Computer and Information Sciences (JKSU-CIS), Elsevier 2023. Vol. 35, No. 8, 101630.
The sclera has recently been gaining attention as a biometric modality due to its various desirable characteristics. A key step in any type of ocular biometric recognition, including sclera recognition, is the segmentation of the relevant part(s) of the eye. However, the high computational complexity of the (deep) segmentation models used in this task can limit their applicability on resource-constrained devices such as smartphones or head-mounted displays. As these devices are a common desired target for such biometric systems, lightweight solutions for ocular segmentation are critically needed. To address this issue, this paper introduces IPAD (Iterative Pruning with Activation Deviation), a novel method for developing lightweight convolutional networks that is based on model pruning. IPAD uses a novel filter-activation-based criterion (ADC) to determine low-importance filters and employs an iterative model pruning procedure to derive the final lightweight model. To evaluate the proposed pruning procedure, we conduct extensive experiments with two diverse segmentation models over four publicly available datasets (SBVPI, SLD, SMD and MOBIUS), in four distinct problem configurations and in comparison to state-of-the-art methods from the literature. The results of the experiments show that the proposed filter-importance criterion outperforms the standard L1 and L2 approaches from the literature. Furthermore, the results also suggest that: (i) the pruned models are able to retain (or even improve on) the performance of the unpruned originals, as long as they are not over-pruned, with RITnet and U-Net at 50% of their original FLOPs reaching up to 4% and 7% higher IoU values than their unpruned versions, respectively, (ii) smaller models require more careful pruning, as the pruning process can hurt the model's generalization capabilities, and (iii) the novel criterion most convincingly outperforms the classic approaches when sufficient training data is available, implying that the abundance of data leads to more robust activation-based importance computation.
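To make the core idea more tangible, the sketch below shows one way an activation-deviation-style filter-importance score and an iterative pruning loop could be implemented in PyTorch. It is a simplified illustration under our own assumptions (the scoring rule, the pruning ratio and the fine-tuning placeholder are ours), not the exact IPAD procedure from the paper.

import torch
import torch.nn as nn

def filter_importance(conv: nn.Conv2d, inputs: torch.Tensor) -> torch.Tensor:
    # Score each output filter by how much its activations deviate across a
    # calibration batch; filters with nearly constant responses are treated as
    # redundant. This is one plausible reading of "activation deviation", not
    # necessarily the exact criterion used in the paper.
    with torch.no_grad():
        acts = conv(inputs)                                    # (N, C_out, H, W)
        per_filter = acts.transpose(0, 1).reshape(acts.size(1), -1)
        return per_filter.std(dim=1)                           # higher = more important

def prune_step(conv: nn.Conv2d, inputs: torch.Tensor, ratio: float = 0.1) -> nn.Conv2d:
    # Drop the `ratio` lowest-scoring filters from a single convolutional layer.
    # In a full pipeline the next layer's input channels must be reduced accordingly.
    scores = filter_importance(conv, inputs)
    n_keep = max(1, int(conv.out_channels * (1.0 - ratio)))
    keep = torch.argsort(scores, descending=True)[:n_keep]
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Iterative pruning alternates pruning and fine-tuning until a FLOP budget is met:
#   for _ in range(num_iterations):
#       conv = prune_step(conv, calibration_batch, ratio=0.1)
#       fine_tune(model)   # hypothetical fine-tuning routine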
@article{vitek2023ipad,
title={IPAD: Iterative Pruning with Activation Deviation for Sclera Biometrics},
author={Vitek, Matej and Bizjak, Matic and Peer, Peter and \v{S}truc, Vitomir},
journal={Journal of King Saud University -- Computer and Information Sciences},
volume={35},
number={8},
pages={101630},
year={2023},
publisher={Elsevier},
doi="10.1016/J.JKSUCI.2023.101630"
}
Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach
Matej Vitek, Abhijit Das, Diego Rafael Lucio, Luiz Antonio Zanlorensi, David Menotti, Jalil Nourmohammadi Khiarak, Mohsen Akbari Shahpar, Meysam Asgari-Chenaghlu, Farhang Jaryani, Juan E. Tapia, Andres Valenzuela, Caiyong Wang, Yunlong Wang, Zhaofeng He, Zhenan Sun, Fadi Boutros, Naser Damer, Jonas Henry Grebe, Arjan Kuijper, Kiran Raja, Gourav Gupta, Georgios Zampoukis, Lazaros Tsochatzidis, Ioannis Pratikakis, S. V. Aruna Kumar, B. S. Harish, Umapada Pal, Peter Peer, and Vitomir Štruc: Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach. IEEE Transactions on Information Forensics and Security (TIFS), IEEE 2023. Vol. 18, 190–205.
Bias and fairness of biometric algorithms have been key topics of research in recent years, mainly due to the societal, legal and ethical implications of potentially unfair decisions made by automated decision-making models. A considerable amount of work has been done on this topic across different biometric modalities, aiming at better understanding the main sources of algorithmic bias or devising mitigation measures. In this work, we contribute to these efforts and present the first study investigating bias and fairness of sclera segmentation models. Although sclera segmentation techniques represent a key component of sclera-based biometric systems with a considerable impact on the overall recognition performance, the presence of different types of biases in sclera segmentation methods is still underexplored. To address this limitation, we describe the results of a group evaluation effort (involving seven research groups), organized to explore the performance of recent sclera segmentation models within a common experimental framework and study performance differences (and bias), originating from various demographic as well as environmental factors. Using five diverse datasets, we analyze seven independently developed sclera segmentation models in different experimental configurations. The results of our experiments suggest that there are significant differences in the overall segmentation performance across the seven models and that among the considered factors, ethnicity appears to be the biggest cause of bias. Additionally, we observe that training with representative and balanced data does not necessarily lead to less biased results. Finally, we find that in general there appears to be a negative correlation between the amount of bias observed (due to eye color, ethnicity and acquisition device) and the overall segmentation performance, suggesting that advances in the field of semantic segmentation may also help with mitigating bias.
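As an illustration of the group evaluation idea, the following Python sketch computes per-group IoU scores and a simple disparity measure across demographic groups. The attribute names and the bias definition (the spread of per-group means) are illustrative assumptions rather than the exact protocol used in the paper.

import numpy as np
from collections import defaultdict

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Intersection-over-Union between a predicted and a ground-truth sclera mask.
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 1.0

def group_evaluation(samples, attribute="ethnicity"):
    # samples: iterable of dicts with 'pred' and 'gt' masks plus demographic labels.
    per_group = defaultdict(list)
    for s in samples:
        per_group[s[attribute]].append(iou(s["pred"], s["gt"]))
    group_means = {g: float(np.mean(v)) for g, v in per_group.items()}
    overall = float(np.mean([x for v in per_group.values() for x in v]))
    disparity = float(np.std(list(group_means.values())))   # simple bias measure
    return overall, group_means, disparity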
@article{vitek2023exploring,
title={Exploring Bias in Sclera Segmentation Models: A Group Evaluation Approach},
author={Vitek, Matej and Das, Abhijit and Lucio, Diego Rafael and Zanlorensi, Luiz Antonio and Menotti, David and Khiarak, Jalil Nourmohammadi and Shahpar, Mohsen Akbari and Asgari-Chenaghlu, Meysam and Jaryani, Farhang and Tapia, Juan E. and Valenzuela, Andres and Wang, Caiyong and Wang, Yunlong and He, Zhaofeng and Sun, Zhenan and Boutros, Fadi and Damer, Naser and Grebe, Jonas Henry and Kuijper, Arjan and Raja, Kiran and Gupta, Gourav and Zampoukis, Georgios and Tsochatzidis, Lazaros and Pratikakis, Ioannis and Aruna Kumar, S.V. and Harish, B.S. and Pal, Umapada and Peer, Peter and \v{S}truc, Vitomir},
journal={IEEE Transactions on Information Forensics and Security (TIFS)},
volume={18},
pages={190--205},
year={2023},
doi="10.1109/TIFS.2022.3216468"
}
A Comprehensive Investigation into Sclera Biometrics: A Novel Dataset and Performance Study
Matej Vitek, Peter Rot, Vitomir Štruc, and Peter Peer: A Comprehensive Investigation into Sclera Biometrics: A Novel Dataset and Performance Study. Neural Computing & Applications (NCAA), Springer 2020. Vol. 32, 17941–17955.
The area of ocular biometrics is among the most popular branches of biometric recognition technology. This area has long been dominated by iris recognition research, while other ocular modalities such as the periocular region or the vasculature of the sclera have received significantly less attention in the literature. Consequently, ocular modalities beyond the iris are not well studied and their characteristics are still not as well understood. While recent needs for more secure authentication schemes have considerably increased the interest in competing ocular modalities, progress in these areas is still held back by the lack of publicly available datasets that would allow for more targeted research into specific ocular characteristics next to the iris. In this paper, we aim to bridge this gap for the case of sclera biometrics and introduce a novel dataset designed for research into ocular biometrics and most importantly for research into the vasculature of the sclera. Our dataset, called Sclera Blood Vessels, Periocular and Iris (SBVPI), is, to the best of our knowledge, the first publicly available dataset designed specifically with research in sclera biometrics in mind. The dataset contains high-quality RGB ocular images, captured in the visible spectrum, belonging to 55 subjects. Unlike competing datasets, it comes with manual markups of various eye regions, such as the iris, pupil, canthus or eyelashes, and a detailed pixel-wise annotation of the complete sclera vasculature for a subset of the images. Additionally, the dataset ships with gender and age labels. The unique characteristics of the dataset allow us to study aspects of sclera biometrics technology that have not been studied before in the literature (e.g. vasculature segmentation techniques) as well as issues that are of key importance for practical recognition systems. Thus, next to the SBVPI dataset, we also present in this paper a comprehensive investigation into sclera biometrics and the main covariates that affect the performance of sclera segmentation and recognition techniques, such as gender, age, gaze direction or image resolution. Our experiments not only demonstrate the usefulness of the newly introduced dataset, but also contribute to a better understanding of sclera biometrics in general.
@article{vitek2020comprehensive,
title={A Comprehensive Investigation into Sclera Biometrics: A Novel Dataset and Performance Study},
author={Vitek, Matej and Rot, Peter and \v{S}truc, Vitomir and Peer, Peter},
journal={Neural Computing \& Applications (NCAA)},
volume={32},
pages={17941--17955},
year={2020},
publisher={Springer},
doi="10.1007/s00521-020-04782-1"
}
Conferences
Sclera Segmentation and Joint Recognition Benchmarking Competition: SSRBC 2023
Abhijit Das, Saurabh Atreya, Aritra Mukherjee, Matej Vitek, Haiqing Li, Caiyong Wang, Guangzhe Zhao, Fadi Boutros, Patrick Siebke, Jan Niklas Kolf, Naser Damer, Sun Ye, Lu Hexin, Fan Aobo, You Sheng, Sabari Nathan, R. Suganya, R. S. Rampriya, Geetanjali Sharma, P. Priyanka, Aditya Nigam, Peter Peer, Umapada Pal, and Vitomir Štruc: Sclera Segmentation and Joint Recognition Benchmarking Competition: SSRBC 2023. IEEE International Joint Conference on Biometrics (IJCB), 2023. 1–10.
This paper presents the summary of the Sclera Segmentation and Joint Recognition Benchmarking Competition (SSRBC 2023) held in conjunction with IEEE International Joint Conference on Biometrics (IJCB 2023). Different from the previous editions of the competition, SSRBC 2023 not only explored the performance of the latest and most advanced sclera segmentation models, but also studied the impact of segmentation quality on recognition performance. Five groups took part in SSRBC 2023 and submitted a total of six segmentation models and one recognition technique for scoring. The submitted solutions included a wide variety of conceptually diverse deep-learning models and were rigorously tested on three publicly available datasets, i.e., MASD, SBVPI and MOBIUS. Most of the segmentation models achieved encouraging segmentation and recognition performance. Most importantly, we observed that better segmentation results always translate into better verification performance.
@inproceedings{ssbc2023,
title={Sclera Segmentation and Joint Recognition Benchmarking Competition: {SSRBC} 2023},
author={Das, Abhijit and Atreya, Saurabh and Mukherjee, Aritra and Vitek, Matej and Li, Haiqing and Wang, Caiyong and Zhao, Guangzhe and Boutros, Fadi and Siebke, Patrick and Kolf, Jan Niklas and Damer, Naser and Ye, Sun and Hexin, Lu and Aobo, Fan and Sheng, You and Nathan, Sabari and Suganya, R. and Rampriya, R. S. and Sharma, Geetanjali and Priyanka, P. and Nigam, Aditya and Peer, Peter and Pal, Umapada and \v{S}truc, Vitomir},
booktitle={IEEE International Joint Conference on Biometrics (IJCB)},
pages={1--10},
year={2023},
doi="10.1109/IJCB57857.2023.10448601"
}
SSBC 2020: Sclera Segmentation Benchmarking Competition in the Mobile Environment
Matej Vitek, Abhijit Das, Yann Pourcenoux, Alexandre Missler, Calvin Paumier, Sumanta Das, Ishita De Ghosh, Diego R. Lucio, Luiz A. Zanlorensi Jr., David Menotti, Fadi Boutros, Naser Damer, Jonas Henry Grebe, Arjan Kuijper, Junxing Hu, Yong He, Caiyong Wang, Hongda Liu, Yunlong Wang, Zhenan Sun, Daile Osorio-Roig, Christian Rathgeb, Christoph Busch, Juan Tapia Farias, Andres Valenzuela, Georgios Zampoukis, Lazaros Tsochatzidis, Ioannis Pratikakis, Sabari Nathan, R Suganya, Vineet Mehta, Abhinav Dhall, Kiran Raja, Gourav Gupta, Jalil Nourmohammadi Khiarak, Mohsen Akbari-Shahper, Farhang Jaryani, Meysam Asgari-Chenaghlu, Ritesh Vyas, Sristi Dakshit, Sagnik Dakshit, Peter Peer, Umapada Pal, and Vitomir Štruc: SSBC 2020: Sclera Segmentation Benchmarking Competition in the Mobile Environment. IEEE International Joint Conference on Biometrics (IJCB), 2020. 1–10.
The paper presents a summary of the 2020 Sclera Segmentation Benchmarking Competition (SSBC), the 7th in the series of group benchmarking efforts centred around the problem of sclera segmentation. Different from previous editions, the goal of SSBC 2020 was to evaluate the performance of sclera-segmentation models on images captured with mobile devices. The competition was used as a platform to assess the sensitivity of existing models to i) differences in mobile devices used for image capture and ii) changes in the ambient acquisition conditions. A total of 26 research groups registered for SSBC 2020, out of which 13 groups took part in the final round and submitted a total of 16 segmentation models for scoring. These included a wide variety of deep-learning solutions as well as one approach based on standard image processing techniques. Experiments were conducted with three recent datasets. Most of the segmentation models achieved relatively consistent performance across images captured with different mobile devices (with only slight differences between devices), but struggled most with low-quality images captured in challenging ambient conditions, i.e., in an indoor environment and with poor lighting.
@inproceedings{ssbc2020,
title={{SSBC} 2020: Sclera Segmentation Benchmarking Competition in the Mobile Environment},
author={Vitek, Matej and Das, Abhijit and Pourcenoux, Yann and Missler, Alexandre and Paumier, Calvin and Das, Sumanta and De Ghosh, Ishita and Lucio, Diego R. and Zanlorensi Jr., Luiz A. and Menotti, David and Boutros, Fadi and Damer, Naser and Grebe, Jonas Henry and Kuijper, Arjan and Hu, Junxing and He, Yong and Wang, Caiyong and Liu, Hongda and Wang, Yunlong and Sun, Zhenan and Osorio-Roig, Daile and Rathgeb, Christian and Busch, Christoph and Tapia Farias, Juan and Valenzuela, Andres and Zampoukis, Georgios and Tsochatzidis, Lazaros and Pratikakis, Ioannis and Nathan, Sabari and Suganya, R and Mehta, Vineet and Dhall, Abhinav and Raja, Kiran and Gupta, Gourav and Khiarak, Jalil Nourmohammadi and Akbari-Shahper, Mohsen and Jaryani, Farhang and Asgari-Chenaghlu, Meysam and Vyas, Ritesh and Dakshit, Sristi and Dakshit, Sagnik and Peer, Peter and Pal, Umapada and \v{S}truc, Vitomir},
booktitle={IEEE International Joint Conference on Biometrics (IJCB)},
pages={1--10},
year={2020},
month={10},
doi="10.1109/IJCB48548.2020.9304881"
}
Deep Multi-class Eye Segmentation for Ocular Biometrics
Peter Rot, Žiga Emeršič, Vitomir Štruc, and Peter Peer: Deep Multi-class Eye Segmentation for Ocular Biometrics. IEEE International Work Conference on Bioinspired Intelligence (IWOBI), 2018. 1–8.
Segmentation techniques for ocular biometrics typically focus on finding a single eye region in the input image at a time. Only limited work has been done on multi-class eye segmentation despite a number of obvious advantages. In this paper we address this gap and present a deep multi-class eye segmentation model built around the SegNet architecture. We train the model on a small dataset of 120 eye images and observe that it generalizes well to unseen images and produces highly accurate segmentation results. We evaluate the model on the Multi-Angle Sclera Database (MASD) dataset and describe comprehensive experiments focusing on: i) segmentation performance, ii) error analysis, iii) the sensitivity of the model to changes in view direction, and iv) comparisons with competing single-class techniques. Our results show that the proposed model is a viable solution for multi-class eye segmentation suitable for recognition (multi-biometric) pipelines based on ocular characteristics.
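For context, the sketch below shows a minimal multi-class segmentation training step with per-pixel cross-entropy over an illustrative set of eye classes. The class list and the tiny stand-in network are assumptions for illustration only; they do not reproduce the SegNet-based architecture used in the paper.

import torch
import torch.nn as nn

CLASSES = ["background", "sclera", "iris", "pupil"]        # illustrative class set

model = nn.Sequential(                                      # stand-in for SegNet
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, len(CLASSES), 1),
)
criterion = nn.CrossEntropyLoss()                           # per-pixel, multi-class
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    # images: (N, 3, H, W) floats; masks: (N, H, W) integer class indices (long).
    optimizer.zero_grad()
    logits = model(images)                                  # (N, num_classes, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()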
@inproceedings{rot2018deep,
title={Deep Multi-class Eye Segmentation for Ocular Biometrics},
author={Rot, Peter and Emer\v{s}i\v{c}, \v{Z}iga and \v{S}truc, Vitomir and Peer, Peter},
booktitle={IEEE International Work Conference on Bioinspired Intelligence (IWOBI)},
year={2018},
month={07},
pages={1--8},
doi="10.1109/IWOBI.2018.8464133"
}
Deep Periocular Recognition: A Case Study
Peter Rot, Matej Vitek, Blaž Meden, Žiga Emeršič, and Peter Peer: Deep Periocular Recognition: A Case Study. IEEE International Work Conference on Bioinspired Intelligence (IWOBI), 2019. 21–26.
The periocular region of a face can be used as a stand-alone modality in a biometric recognition system. We evaluate two different deep learning pipelines, one with an explicit segmentation step and one without it, and discuss the strengths and weaknesses of each. The results obtained on the newly introduced public SBVPI dataset show that the periocular region offers enough distinguishing information for successful identity recognition.
@inproceedings{rot2019deep,
title={Deep Periocular Recognition: A Case Study},
author={Rot, Peter and Vitek, Matej and Meden, Bla\v{z} and Emer\v{s}i\v{c}, \v{Z}iga and Peer, Peter},
booktitle={IEEE International Work Conference on Bioinspired Intelligence (IWOBI)},
pages={21--26},
year={2019},
doi="10.1109/IWOBI47054.2019.9114509"
}
Semi-automated correction of MOBIUS eye region annotations
Ožbej Golob, Peter Peer, and Matej Vitek: Semi-automated correction of MOBIUS eye region annotations. International Electrotechnical and Computer Science Conference (ERK), 2020. 344–347.
MOBIUS is a publicly available dataset of ocular images with manually created annotations of the various eye regions. However, manual markups are prone to human error, and in this work we describe the semi-automatic approach we used to correct flaws in the MOBIUS annotations. This improves the dataset's usability for training and evaluating deep segmentation models. The program we wrote removes regions that lie outside the intended annotation area (outliers), regions enclosed within other regions (inliers), and missing or blurred edges from the annotations. The result is a set of corrected annotations and, in turn, a better deep learning model. Evaluation was performed on a model trained on the new annotations and a model trained on the original annotations from the MOBIUS dataset. The model trained on the corrected annotations achieved significantly better results on all metrics than the one trained on the original data. Thus, to obtain a better segmentation model, the distortions in the original annotations should be removed.
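The sketch below illustrates the kind of clean-up steps described above: binarising blurred edges, keeping the main annotated region and filling interior holes. The threshold and the choice of operations are assumptions for illustration; the exact rules applied to the MOBIUS annotations are described in the paper.

import numpy as np
from scipy import ndimage

def clean_annotation(mask: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # mask: 2-D array with values in [0, 1]; returns a cleaned binary mask.
    binary = mask >= threshold                              # sharpen blurred edges
    labels, n = ndimage.label(binary)                       # connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))    # size of each component
    largest = labels == (int(np.argmax(sizes)) + 1)         # drop outlier regions
    return ndimage.binary_fill_holes(largest)               # remove interior holes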
@inproceedings{golob2020semiautomated,
title={Semi-automated correction of {MOBIUS} eye region annotations},
author={Golob, O\v{z}bej and Peer, Peter and Vitek, Matej},
booktitle={IEEE International Electrotechnical and Computer Science Conference (ERK)},
pages={344--347},
year={2020}
}
Book Chapters
Deep Sclera Segmentation and Recognition
Peter Rot, Matej Vitek, Klemen Grm, Žiga Emeršič, Peter Peer, and Vitomir Štruc: Deep Sclera Segmentation and Recognition. In: Andreas Uhl, Christoph Busch, Sébastien Marcel, Raymond Veldhuis (Eds.), Handbook of Vascular Biometrics (HVB), Springer 2020. 395–432.
In this chapter, we address the problem of biometric identity recognition from the vasculature of the human sclera. Specifically, we focus on the challenging task of multi-view sclera recognition, where the visible part of the sclera vasculature changes from image to image due to varying gaze (or view) directions. We propose a complete solution for this task built around Convolutional Neural Networks (CNNs) and make several contributions that result in state-of-the-art recognition performance, i.e.: (i) we develop a cascaded CNN assembly that is able to robustly segment the sclera vasculature from the input images regardless of gaze direction, and (ii) we present ScleraNET, a CNN model trained in a multi-task manner (combining losses pertaining to identity and view-direction recognition) that allows for the extraction of discriminative vasculature descriptors that can be used for identity inference. To evaluate the proposed contributions, we also introduce a new dataset of ocular images, called the Sclera Blood Vessels, Periocular and Iris (SBVPI) dataset, which represents one of the few publicly available datasets suitable for research in multi-view sclera segmentation and recognition. The dataset comes with a rich set of annotations, such as a per-pixel markup of various eye parts (including the sclera vasculature), identity, gaze-direction and gender labels. We conduct rigorous experiments on SBVPI with competing techniques from the literature and show that the combination of the proposed segmentation and descriptor-computation models results in highly competitive recognition performance.
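To illustrate the multi-task idea, the sketch below pairs a shared feature extractor with separate identity and view-direction heads and exposes the shared features as a descriptor at test time. The backbone, head sizes and loss weighting are placeholders and do not reproduce the actual ScleraNET architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, num_ids: int, num_views: int, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(                      # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.id_head = nn.Linear(feat_dim, num_ids)         # identity logits
        self.view_head = nn.Linear(feat_dim, num_views)     # gaze-direction logits

    def forward(self, x):
        feat = self.backbone(x)                             # reused as the descriptor at test time
        return feat, self.id_head(feat), self.view_head(feat)

def multitask_loss(id_logits, view_logits, id_labels, view_labels, alpha=0.5):
    # Joint objective combining identity and view-direction classification.
    return F.cross_entropy(id_logits, id_labels) + alpha * F.cross_entropy(view_logits, view_labels)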
@incollection{rot2020deep,
title={Deep Sclera Segmentation and Recognition},
author={Rot, Peter and Vitek, Matej and Grm, Klemen and Emer\v{s}i\v{c}, \v{Z}iga and Peer, Peter and \v{S}truc, Vitomir},
booktitle={Handbook of Vascular Biometrics (HVB)},
editor={Uhl, Andreas and Busch, Christoph and Marcel, S\'{e}bastien and Veldhuis, Raymond N. J.},
pages={395--432},
year={2020},
publisher={Springer},
doi="10.1007/978-3-030-27731-4_13"
}