Panels

How to improve human trust in intelligent technologies

Description
Should we trust intelligent technologies? Is our conviction that we know better justified? AI shows its superiority in almost all benchmark tests, so should it be trusted? How can we improve transparency and explain AI decisions, obtain reliable, unbiased results, protect our privacy, and control AI systems to ensure their compliance with human preferences? So far, large language models fall short in all of these respects, and although they may make fewer errors than humans, it is not clear whether this technology can ever be completely safe. In this panel, we shall discuss the dangers and the advantages of applying intelligent technologies.

Moderator – prof. Włodzisław Duch

Prof. Włodzisław Duch | Photo: private archive

Bio
Włodzisław Duch is the head of the Neurocognitive Laboratory in the Center of Modern Interdisciplinary Technologies, and of the Neuroinformatics and Artificial Intelligence group in the University Centre of Excellence "Dynamics, Mathematical Analysis and Artificial Intelligence" at the Nicolaus Copernicus University, Toruń, Poland. He received his PhD in theoretical physics/quantum chemistry (1980), was a postdoc at the University of Southern California, Los Angeles (1980-82), and earned a D.Sc. in applied mathematics (1987). He was President of the European Neural Networks Society executive committee (2006-2011) and is a Fellow of the International Neural Network Society (2013), the Asia-Pacific Artificial Intelligence Association (2022), and the International Artificial Intelligence Industry Alliance (2024). He is an expert for European Union science programs and a member of the high-level expert group of the European Institute of Innovation & Technology (EIT). He worked in the School of Computer Engineering at Nanyang Technological University (2003-07, and 2010-12 as the Nanyang Visiting Professor), and has been a visiting professor at the University of Florida; the Max Planck Institute in Munich, Germany; the Kyushu Institute of Technology, Meiji University, and Rikkyo University in Japan; and several other institutions. He serves or has served on the editorial boards of IEEE TNN, CPC, NIP-LR, Cognitive Neurodynamics, the Journal of Mind and Behavior, and 16 other journals; he has published over 380 peer-reviewed scientific papers, written or co-authored 6 books, co-edited 21 books, and published about 300 conference abstracts and popular articles on diverse subjects. In 2014-15 he served as a deputy minister for science and higher education in Poland. In 1990 his company DuchSoft created the GhostMiner data mining software package, for many years marketed by Fujitsu.

With a wide background in many branches of science and an understanding of different cultures, he bridges many scientific communities. To unwind, he plays electronic wind instruments and dives with whale sharks.

https://is.umk.pl/~duch/cv/cv.html

Panelists

Prof. Adrian Horzyk – AGH University of Krakow
Prof. Przemysław Kazienko – Wroclaw University of Technology
Prof. Krzysztof Krawiec – Poznan University of Technology
Dr Alina Powała – QED Software
Prof. Leszek Rutkowski – Systems Research Institute of the Polish Academy of Sciences

From a good idea to a good business – transferring AI innovations from academia to companies

Description
Artificial Intelligence research is thriving in academic institutions, but how can breakthrough innovations successfully transition from research labs to real-world business applications? This panel brings together experts from academia and industry to discuss the key challenges and opportunities in commercializing innovative AI solutions.
Panelists will explore topics such as securing funding, navigating intellectual property, collaborating with industry partners, and overcoming technical and ethical barriers in AI deployment. Join us for an insightful discussion on turning cutting-edge AI ideas into impactful and scalable businesses.

Moderator – prof. Przemysław Biecek

Prof. Przemysław Biecek | Photo: private archive

Bio
Przemysław Biecek is a professor of explainable artificial intelligence (XAI) with a background in mathematical statistics and software engineering. His research focuses on the interpretability, explainability, safety, and security of AI models. He leads the MI2.AI research group, which explores techniques for making machine learning models more transparent and trustworthy through new research methods as well as the development of tools for model analysis.

Prof. Biecek is the author of numerous scientific publications and widely used open-source tools, including DALEX, a framework for model explainability. He has led multiple research projects on AI explainability, safety, and security with industrial implementations, particularly in the healthcare and space industries, where AI-driven solutions require interpretability and reliability. A strong advocate for open science, he actively promotes ethical and transparent AI applications in both academia and practice.

https://pbiecek.github.io