Social Acceptance of Artificial Intelligence: The CanikFest Case Study



DOI: https://doi.org/10.5281/zenodo.18859924

Keywords:

Artificial Intelligence, Trust, Explainability, Social Acceptance

Abstract

The social acceptance of artificial intelligence (AI) systems depends not only on technological adequacy but also on socio-technical conditions such as trust, transparency, and ethical governance. Multinational public opinion surveys show that attitudes toward AI fluctuate between perceptions of "benefit" and "risk," while confidence levels remain cautious in most countries (KPMG, 2023; Poushter, Fagan, & Corichi, 2023; Vogels, 2023). This study examines how the Information Path strategy (interventions aimed at raising AI literacy) builds trust in AI, and how this effect operates through two critical mechanisms: (i) reducing uncertainty through explainable artificial intelligence (XAI) (Berger & Calabrese, 1975; Gunning et al., 2019; Lundberg & Lee, 2017; Ribeiro, Singh, & Guestrin, 2016) and (ii) the formation of cognitive trust in the competence of AI engineers (Mayer, Davis, & Schoorman, 1995). The study analyzes the relationships between the dimensions of AI literacy (awareness, usage, evaluation, and ethics) and the variables of trust/acceptance through hypotheses based on survey data (N=713) collected from participants attending the CanikFest Artificial Intelligence themed event.
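The uncertainty-reduction mechanism the abstract attributes to XAI can be illustrated with a minimal, stdlib-only sketch in the spirit of LIME (Ribeiro, Singh, & Guestrin, 2016): sample perturbations around an input, weight them by proximity, and fit a weighted linear surrogate whose slope serves as a local explanation. The `black_box` function and all parameter values below are hypothetical stand-ins, not part of the study.

```python
import math
import random

# Hypothetical black-box model standing in for an opaque AI system.
def black_box(x):
    return math.tanh(2.0 * x) + 0.1 * x

def lime_style_slope(f, x0, n_samples=500, width=0.5, seed=0):
    """Estimate a local linear explanation (slope) of f around x0:
    perturb x0, weight samples by a Gaussian proximity kernel, and
    fit a weighted least-squares line y ~ a + b * (x - x0)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n_samples)]
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    ys = [f(x) for x in xs]
    sw = sum(ws)
    # Weighted means of the centered input and the output.
    mx = sum(w * (x - x0) for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    # Closed-form weighted least-squares slope.
    num = sum(w * ((x - x0) - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * ((x - x0) - mx) ** 2 for w, x in zip(ws, xs))
    return num / den

slope = lime_style_slope(black_box, x0=0.0)
```

A positive slope tells a lay user "increasing this input raises the prediction locally", which is exactly the kind of interpretable signal the surveyed XAI literature links to reduced uncertainty.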

References

Acemoglu, D., & Restrepo, P. (2020). Robots and jobs: Evidence from US labor markets. Journal of Political Economy, 128 (6), 2188–2244. doi: 10.1086/705716

Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29 (3), 3–30. doi: 10.1257/jep.29.3.3

Berger, C. R., & Calabrese, R. J. (1975). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human Communication Research, 1 (2), 99–112. doi: 10.1111/j.1468-2958.1975.tb00258.x

Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107 (6), 1753–1819.

Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. doi: 10.1016/j.techfore.2016.08.019

Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI-Explainable artificial intelligence. Science Robotics, 4 (37), eaay7120. doi: 10.1126/scirobotics.aay7120

KPMG. (2023). Trust in artificial intelligence: Global insights 2023. Retrieved 2026-01-31, from https://kpmg.com/xx/en/home/insights/2023/11/trust-in-artificial-intelligence.html

Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20 (3), 709–734. doi: 10.2307/258792

Poushter, J., Fagan, M., & Corichi, M. (2023). How people around the world view artificial intelligence. Retrieved 2026-01-31, from https://www.pewresearch.org/global/2023/08/28/how-people-around-the-world-view-artificial-intelligence/

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16) (pp. 1135–1144). doi: 10.1145/2939672.2939778

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6 (1). doi: 10.1177/2056305120903408

Vogels, E. A. (2023). What the data says about Americans' views of artificial intelligence. Retrieved 2026-01-31, from https://www.pewresearch.org/short-reads/2023/08/28/what-the-data-says-about-americans-views-of-artificial-intelligence/

Published

2026-03-04

How to Cite

KILIÇ, B., GÜMÜŞ, İrfan, Esra, GÜMÜŞ, N., DEMİRCAN, E., ŞENTÜRK, A., KAYA, Z., ÜNAL, Özge, TURNA, Özge, AYDIN, C., BÜYÜK, G., YILMAZ, F. B., & ORUÇ ÇOBAN, H. (2026). Social Acceptance of Artificial Intelligence: The CanikFest Case Study. PEARSON JOURNAL, 8(35), 658–668. https://doi.org/10.5281/zenodo.18859924

Section

Articles
