Ing. Klára Janoušková
All publications
Single Image Test-Time Adaptation for Segmentation
- Authors: Ing. Klára Janoušková, Shor, T., Baskin, Ch., prof. Ing. Jiří Matas, Ph.D.
- Publication: Transactions on Machine Learning Research. 2024, 2024(5), ISSN 2835-8856.
- Year: 2024
- Department: Visual Recognition Group
Annotation:
Test-Time Adaptation (TTA) methods improve the robustness of deep neural networks to domain shift. We explore the adaptation of segmentation models to a single unlabelled image with no other data available at test time. This setting allows per-sample performance analysis while excluding orthogonal factors such as weight-restart strategies. We propose two new segmentation TTA methods and compare them to established baselines and recent state-of-the-art approaches. The methods are first validated on synthetic domain shifts and then tested on real-world datasets. The analysis highlights that simple modifications, such as the choice of the loss function, can greatly improve the performance of standard baselines, and that different methods and hyper-parameters are optimal for different kinds of domain shift. This hinders the development of fully general methods applicable when no prior knowledge about the domain shift is assumed.
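To make the single-image setting concrete, the sketch below shows a common entropy-minimisation TTA baseline: the segmentation model is briefly optimised on the test image itself so that its per-pixel predictions become more confident. This is a generic baseline for illustration, not either of the two methods proposed in the paper; the optimiser, learning rate, number of steps and the choice to update all parameters are assumptions.

```python
# Minimal sketch of a standard entropy-minimisation TTA baseline on a single
# unlabelled image; hyper-parameters and the parameter subset are illustrative.
import torch
import torch.nn.functional as F

def adapt_to_single_image(model, image, steps=10, lr=1e-4):
    """Adapt a segmentation model to one unlabelled image at test time."""
    model.train()  # e.g. let normalisation layers use test-image statistics
    # All parameters are adapted here; restricting updates to normalisation
    # layers is another common choice in the TTA literature.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(image)                       # (1, C, H, W) class logits
        probs = F.softmax(logits, dim=1)
        # Per-pixel prediction entropy, averaged over the whole image.
        entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        return model(image).argmax(dim=1)           # final segmentation map
```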
Model-Assisted Labeling via Explainability for Visual Inspection of Civil Infrastructures
- Authors: Ing. Klára Janoušková, Rigotti, M., Giurgiu, I., Malossi, C.
- Publication: Computer Vision – ECCV 2022 Workshops, Part III. Cham: Springer, 2023. p. 244-257. LNCS. vol. 13803. ISSN 0302-9743. ISBN 978-3-031-25065-1.
- Year: 2023
- DOI: 10.1007/978-3-031-25082-8_16
- Link: https://doi.org/10.1007/978-3-031-25082-8_16
- Department: Visual Recognition Group
Annotation:
Labeling images for visual segmentation is a time-consuming task which can be costly, particularly in application domains where labels have to be provided by specialized expert annotators, such as civil engineering. In this paper, we propose to use attribution methods to harness the valuable interactions between expert annotators and the data to be annotated in the case of defect segmentation for visual inspection of civil infrastructures. Concretely, a classifier is trained to detect defects and coupled with an attribution-based method and adversarial climbing to generate and refine segmentation masks corresponding to the classification outputs. These are used within an assisted labeling framework where the annotators can interact with them as proposal segmentation masks, deciding to accept, reject or modify them; the interactions are logged as weak labels to further refine the classifier. Applied to a real-world dataset resulting from the automated visual inspection of bridges, our proposed method is able to save more than 50% of annotators' time when compared to manual annotation of defects.
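A rough idea of how a classifier's attributions can be turned into proposal masks is sketched below in the style of Grad-CAM: gradients of the class score weight the feature maps, and the resulting attribution map is upsampled and thresholded. The layer handle, threshold value and the paper's adversarial-climbing refinement are not reproduced here, so all names and parameters should be read as assumptions rather than the authors' exact pipeline.

```python
# Minimal sketch: Grad-CAM-style attribution turned into a binary proposal mask.
import torch
import torch.nn.functional as F

def attribution_mask(model, feature_layer, image, class_idx, threshold=0.5):
    """Return a binary proposal mask for `class_idx` from a trained classifier."""
    features, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: features.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                         # (1, num_classes)
    model.zero_grad()
    logits[0, class_idx].backward()               # gradients w.r.t. the chosen class
    h1.remove(); h2.remove()

    fmap, grad = features[0], grads[0]            # both (1, C, h, w)
    weights = grad.mean(dim=(2, 3), keepdim=True) # channel importance weights
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam > threshold).squeeze()            # proposal mask for the annotator
```

In an assisted-labeling loop, such a mask would be shown to the annotator as a proposal to accept, reject or edit, with the decision fed back as a weak label.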
Text Recognition - Real World Data and Where to Find Them
- Authors: Ing. Klára Janoušková, prof. Ing. Jiří Matas, Ph.D., Gomez, L., Karatzas, D.
- Publication: 2020 25th International Conference on Pattern Recognition (ICPR). Los Alamitos: IEEE Computer Society, 2021. p. 4489-4496. ISSN 1051-4651. ISBN 978-1-7281-8808-9.
- Year: 2021
- DOI: 10.1109/ICPR48806.2021.9412868
- Link: https://doi.org/10.1109/ICPR48806.2021.9412868
- Department: Visual Recognition Group
Annotation:
We present a method for exploiting weakly annotated images to improve text extraction pipelines. The approach uses an arbitrary end-to-end text recognition system to obtain text region proposals and their, possibly erroneous, transcriptions. The method includes matching of imprecise transcriptions to weak annotations and an edit distance guided neighbourhood search. It produces nearly error-free, localised instances of scene text, which we treat as "pseudo ground truth" (PGT).
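The matching step can be illustrated by the hedged sketch below: a plain Levenshtein distance pairs an end-to-end system's transcription with the closest weak annotation, and the pair is accepted as pseudo ground truth only if the distance is small relative to the annotation length. The relative-distance budget and the omission of the edit-distance-guided neighbourhood search are simplifications, not the paper's exact procedure.

```python
# Minimal sketch of matching possibly-erroneous transcriptions to weak
# (image-level) text annotations by edit distance; threshold is an assumption.
def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def match_to_weak_labels(transcription, weak_labels, max_relative_distance=0.2):
    """Return the closest weak label if it is within the distance budget."""
    best = min(weak_labels, key=lambda w: edit_distance(transcription, w))
    d = edit_distance(transcription, best)
    if d <= max_relative_distance * max(len(best), 1):
        return best, d   # accepted: transcription treated as pseudo ground truth
    return None, d       # rejected: no sufficiently close weak annotation
```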