Publications
Jahn, V., Spangenberg, H., Ohlendorf, D., Föste-Eggers, D., Niebuhr, J., Vietgen, S., & Euler, T. (2025). DZHW-Studienberechtigtenpanel 2012. Daten- und Methodenbericht zur 3. Befragungswelle des Studienberechtigtenjahrgangs 2012. Hannover: DZHW.
Abstract: The DZHW Panel Study of School Leavers 2012 is part of the DZHW Panel Study of School Leavers survey series, which uses repeated standardized surveys to collect information on the post-school careers of school leavers holding a (school-based) higher education entrance qualification. For each cohort of qualification holders, several survey waves are typically conducted at different points before and after the qualification is obtained, yielding a combined cohort-panel design. The 2012 panel is the 19th cohort of the series and currently comprises three waves. Full abstract: https://doi.org/10.21249/DZHW:gsl2012:3.0.0
Shahania, S., Spiliopoulou, M., & Broneske, D. (2025). SurveyBot: A new era of web survey pretesting. In I. Maglogiannis, L. Iliadis, A. Andreou, & A. Papaleonidas (Eds.), Artificial Intelligence Applications and Innovations. AIAI 2025. IFIP Advances in Information and Communication Technology. Cham: Springer. https://doi.org/10.1007/978-3-031-96235-6_29
Hinrichs, R., Steffen, H., Avetisyan, H., Broneske, D., & Ostermann, J. (2025). Towards automatic bias analysis in multimedia journalism. Discover Artificial Intelligence, 2025(5/112), 1-28. https://doi.org/10.1007/s44163-025-00362-1
Karim, S., Wünsche, F., Broneske, D., Kuhn, M., & Saake, G. (2025). Embracing NVM: Optimizing Bε-tree structures and data compression in storage engines. In C. Binnig et al. (Eds.), Datenbanksysteme für Business, Technologie und Web - Workshopband (BTW 2025) (pp. 329-333). Bonn: Gesellschaft für Informatik. https://doi.org/10.18420/BTW2025-137
Wenzig, K., Daniel, A., Hansen, D., Koberg, T., & Tudose, M. (2025). Publishing fine-grained standardized metadata – Lessons learned from three research data centers. 2025 (12). Berlin: Konsortium für die Sozial-, Verhaltens-, Bildungs- und Wirtschaftswissenschaften (KonsortSWD).
Abbas, M. N., Broneske, D., & Saake, G. (2025). A multi-objective evolutionary algorithm for detecting protein complexes in PPI networks using gene ontology. Scientific Reports, 15. https://doi.org/10.1038/s41598-025-01667-y
Safikhani, P., & Broneske, D. (2025). AutoML meets Hugging Face: Domain-aware pretrained model selection for text classification. In A. Ebrahimi, S. Haider, E. Liu, M. L. Pacheco, & S. Wein (Eds.), Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop). Albuquerque, USA: Association for Computational Linguistics.
Abstract: The effectiveness of embedding methods is crucial for optimizing text classification performance in Automated Machine Learning (AutoML). However, selecting the most suitable pre-trained model for a given task remains challenging. This study introduces the Corpus-Driven Domain Mapping (CDDM) pipeline, which utilizes a domain-annotated corpus of pre-fine-tuned models from the Hugging Face Model Hub to improve model selection. Integrating these models into AutoML systems significantly boosts classification performance across multiple datasets compared to baseline methods. Despite some domain recognition inaccuracies, results demonstrate CDDM’s potential to enhance model selection, streamline AutoML workflows, and reduce computational costs.
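To illustrate the kind of corpus-driven model selection described in this abstract, the short Python sketch below queries the Hugging Face Model Hub for candidate text-classification models matching a detected domain. It assumes only the huggingface_hub client library; the domain-detection step and the keyword mapping are hypothetical placeholders and not the authors' CDDM implementation.

    from huggingface_hub import HfApi

    # Hypothetical mapping from a detected corpus domain to Hub search keywords.
    DOMAIN_KEYWORDS = {
        "reviews": "sentiment reviews",
        "biomedical": "biomedical text classification",
        "news": "news topic classification",
    }

    def detect_domain(corpus):
        # Placeholder: CDDM derives the domain from a domain-annotated corpus;
        # here a fixed label is returned purely for demonstration.
        return "reviews"

    def select_pretrained_model(corpus, limit=5):
        domain = detect_domain(corpus)
        api = HfApi()
        # Query the Hub for text-classification models matching the domain
        # keywords, ranked by download count.
        candidates = api.list_models(
            search=DOMAIN_KEYWORDS[domain],
            filter="text-classification",
            sort="downloads",
            limit=limit,
        )
        # Hand the top candidate to the AutoML system as its embedding backbone.
        return next(iter(candidates)).id

    print(select_pretrained_model(["great product, would buy again"]))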
Karim, S., Wünsche, J., Kuhn, M., Saake, G., & Broneske, D. (2025). NVM in data storage: A post-Optane future. ACM Transactions on Storage, 2025. https://doi.org/10.1145/3731454
Vierus, P., Elis, J., Goerres, A., & Höhne, J. K. (2025). Following political science students through their methods training: Statistics anxiety, student satisfaction, and final grades in the COVID year 2021/22. Politische Vierteljahresschrift (online first). https://doi.org/10.1007/s11615-025-00613-x
Avetisyan, H., & Broneske, D. (2025). VerbCraft: Morphologically-aware Armenian text generation using LLMs in low-resource settings. In Š. A. Holdt, N. Ilinykh, B. Scalvini, M. Bruton, I. N. Debess, & C. M. Tudor (Eds.), Proceedings of the Third Workshop on Resources and Representations for Under-Resourced Languages and Domains (RESOURCEFUL-2025) (pp. 111-119). Tallinn: University of Tartu Library, Estonia.
Hartstein, J., Blümel, C., & Klein, D. (2025). Trendumfrage Forschungsdateninfrastrukturen 2024. Daten- und Methodenbericht. Hannover: DZHW.
Abstract: The Trend Survey Research Data Infrastructures 2024 is part of the research accompanying the Basic Services for the National Research Data Infrastructure (Base4NFDI). The trend survey captures the perception, use, and evaluation of established and new data infrastructures and services in the German research landscape. The focus is on the perspective of (potential) users.
Klein, D. (2025). Stata tip 160: Drop capture program drop from ado-files. The Stata Journal, 2025(1), 252-253. https://doi.org/10.1177/1536867X251322974
Abstract: I explain that -capture program drop- is useless in ado-files. While it prevents errors in do-files when redefining programs in memory, in ado-files it either is not executed or results in an error. Moreover, in ado-files with local subroutines, -capture program drop- can mistakenly remove unrelated programs from memory.
Broneske, D., Italiya, N., & Mierisch, F. (2025). Tell me more! Using multiple features for binary text classification with a zero-shot model. In Institute of Electrical and Electronics Engineers (IEEE) (Ed.), 2024 International Conference on Machine Learning and Applications (ICMLA) (pp. 1613-1620). Jacksonville, Florida, USA: IEEE Xplore. https://doi.org/10.1109/ICMLA61862.2024.00249
Netscher, S., Kaluza, H., Mauer, R., Mozygemba, K., & Stephan, K. (2025). The standardized data management plan for educational research, an approach to foster tailored data management. International Journal of Digital Curation, 19(1). https://doi.org/10.2218/ijdc.v19i1.910
Broneske, D., Burtsev, V., Drewes, A., Gurumurthy, B., Pionteck, T., & Saake, G. (2025). ADAMANT: Hardware-accelerated query processing made easy. In K.-U. Sattler, A. Kemper, T. Neumann, & J. Teubner (Eds.), Scalable Data Management for Future Hardware (pp. 1-38). Cham: Springer. https://doi.org/10.1007/978-3-031-74097-8