Research Article

AI anchors from a uses and gratifications perspective: An exploratory study of past, present, and future trends

Yongbo Niu 1,*, Sharifah Sofiah Syed Zainudin 1, Syafila Binti Kamarudin 1

1 Faculty of Modern Languages and Communication, Universiti Putra Malaysia, Selangor, MALAYSIA
* Corresponding Author
Online Journal of Communication and Media Technologies, 16(2), April 2026, e202624, https://doi.org/10.30935/ojcmt/18478
Published: 26 April 2026

ABSTRACT

Artificial intelligence (AI) anchors are increasingly deployed in journalism, entertainment, and education, reshaping how audiences consume media content. Early systems such as Ananova demonstrated the feasibility of virtual presenters but lacked natural prosody and expressiveness. Recent innovations, exemplified by Xinhua’s AI anchor and Microsoft Xiaoice, leverage deep learning-based speech synthesis and multimodal design to achieve greater naturalness, personalization, and interactivity. Drawing on the uses and gratifications (U&G) framework, this study investigates how AI anchors meet cognitive, affective, and trust-related needs in the Chinese context. It combines acoustic analyses of pitch, formants, and intensity with 29 semi-structured interviews, providing complementary insights at both the technical and interpretive levels. Results show that technological refinements enhance clarity and efficiency, but trust is the key mediator that enables functional gratifications to translate into emotional engagement. Participants valued efficiency and multilingual accessibility while expressing concerns about authenticity, credibility, and ethics. This research contributes theoretically by extending U&G to AI-mediated communication and empirically by combining technical and qualitative evidence to analyze user perceptions. Practically, the findings highlight opportunities for deploying AI anchors in routine news, education, and commercial contexts, while underscoring the continued role of human anchors in politically sensitive or emotionally rich communication. These insights add to recent debates on automated journalism and AI-mediated interaction (Jang et al., 2022; Wölker & Powell, 2021). Future research could extend these findings through cross-cultural comparisons, experimental tests of the mediating role of trust, and examination of how emerging acoustic and multimodal technologies further shape audience acceptance.
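To give a concrete sense of the kinds of acoustic measures the abstract mentions, the sketch below estimates fundamental frequency (pitch) via autocorrelation and relative intensity via RMS level on a synthetic tone. This is a minimal, generic illustration in NumPy, not the authors' actual Praat-based workflow; the function name, search range, and test signal are all assumptions chosen for the example (formant estimation, which typically requires LPC analysis, is omitted).

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) by finding the autocorrelation
    peak within a plausible pitch-period range (illustrative, not Praat)."""
    sig = signal - signal.mean()                       # remove DC offset
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags 0..N-1
    lag_min = int(sr / fmax)                           # shortest period
    lag_max = int(sr / fmin)                           # longest period
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / best_lag

# Synthetic 220 Hz tone standing in for a one-second voiced segment.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)

f0 = estimate_f0(tone, sr)                             # ~220 Hz
rms_db = 20 * np.log10(np.sqrt(np.mean(tone ** 2)))    # relative level in dB
```

In practice, analyses like those cited in the abstract are done with dedicated tools such as Praat (Boersma & Weenink, 2023), which handle voicing decisions, octave errors, and formant tracking far more robustly than this toy estimator.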

CITATION (APA)

Niu, Y., Zainudin, S. S. S., & Kamarudin, S. B. (2026). AI anchors from a uses and gratifications perspective: An exploratory study of past, present, and future trends. Online Journal of Communication and Media Technologies, 16(2), e202624. https://doi.org/10.30935/ojcmt/18478

REFERENCES

  1. American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. APA. https://www.apa.org/ethics/code
  2. Barakat, H., Turk, O., & Demiroglu, C. (2024). Deep learning-based expressive speech synthesis: A systematic review of approaches, challenges, and resources. EURASIP Journal on Audio, Speech, and Music Processing, 2024(1), Article 29. https://doi.org/10.1186/s13636-024-00329-7
  3. Boersma, P., & Weenink, D. (2023). Praat: Doing phonetics by computer [Computer program]. http://www.praat.org/
  4. Bohacek, M., & Farid, H. (2024). Human action CLIPs: Detecting AI-generated human motion. arXiv, Article 2412.00526. https://doi.org/10.48550/arXiv.2412.00526
  5. Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa
  6. British Educational Research Association. (2018). Ethical guidelines for educational research (4th ed.). BERA. https://www.bera.ac.uk/publication/ethical-guidelines-for-educational-research-2018
  7. Carlson, M. (2023). Automating the news: How algorithms are rewriting the media. Columbia University Press.
  8. Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336-349. https://doi.org/10.1177/1527476418796632
  9. Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE.
  10. Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. Springer. https://doi.org/10.1007/978-1-4899-2271-7
  11. Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE handbook of qualitative research (4th ed.). SAGE.
  12. Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press. https://doi.org/10.4159/9780674239302
  13. Feng, Z., & Shu, Y. (2024). AI anchor vs. real anchor: An experimental study based on audience perception. China News Review, 5(3), 16-25. https://doi.org/10.35534/cnr.0503002
  14. Fetters, M. D., Curry, L. A., & Creswell, J. W. (2013). Achieving integration in mixed methods designs: Principles and practices. Health Services Research, 48(6), 2134-2156. https://doi.org/10.1111/1475-6773.12117
  15. Getchell, K. M., Carradini, S., Cardon, P. W., Fleischmann, C., Ma, H., Aritz, J., & Stapp, J. (2022). AI in business communication: The changing landscape of research and teaching. Business and Professional Communication Quarterly, 85(1), 7-33. https://doi.org/10.1177/23294906221074311
  16. Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18(1), 59-82. https://doi.org/10.1177/1525822X05279903
  17. Hennink, M. M., Kaiser, B. N., & Weber, M. (2019). What influences saturation? Estimating sample sizes in focus group research. Qualitative Health Research, 29(1), 148-161. https://doi.org/10.1177/1049732318821692
  18. Hillenbrand, J., Getty, L. A., Clark, M. J., & Wheeler, K. (1995). Acoustic characteristics of American English vowels. The Journal of the Acoustical Society of America, 97(5), 3099-3111. https://doi.org/10.1121/1.411872
  19. Hohenstein, J., Kizilcec, R. F., DiFranzo, D., Aghajari, Z., Mieczkowski, H., Levy, K., Naaman, M., Hancock, J., & Jung, M. (2023). Artificial intelligence in communication impacts language and social relationships. Scientific Reports, 13, Article 5487. https://doi.org/10.1038/s41598-023-30938-9
  20. Huang, Y., & Yu, Z. (2023). Understanding the continuance intention for artificial intelligence news anchor: Based on the expectation confirmation theory. Systems, 11(9), Article 438. https://doi.org/10.3390/systems11090438
  21. Jang, W., Kwak, D. H., & Bucy, E. P. (2022). Knowledge of automated journalism moderates evaluations of algorithmically generated news. New Media & Society, 24(12), 2852-2870. https://doi.org/10.1177/14614448221142534
  22. Kane, J., Johnstone, M. N., & Szewczyk, P. (2024). Voice synthesis improvement by machine learning of natural prosody. Sensors, 24(5), Article 1624. https://doi.org/10.3390/s24051624
  23. Katz, E., Blumler, J. G., & Gurevitch, M. (1973). Uses and gratifications research. Public Opinion Quarterly, 37(4), 509-523. https://doi.org/10.1086/268109
  24. Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80. https://doi.org/10.1518/hfes.46.1.50_30392
  25. Lim, J. S., Shin, D., Zhang, J., Masiclat, S., Luttrell, R., & Kinsey, D. (2023). News audiences in the age of artificial intelligence: Perceptions and behaviors of optimizers, mainstreamers, and skeptics. Journal of Broadcasting & Electronic Media, 67(3), 353-375. https://doi.org/10.1080/08838151.2022.2162901
  26. Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. SAGE.
  27. Lombard, M., & Ditton, T. (1997). At the heart of it all: The concept of presence. Journal of Computer-Mediated Communication, 3(2), Article JCMC321. https://doi.org/10.1111/j.1083-6101.1997.tb00072.x
  28. Lyu, X., Ramasamy, S. S., & Ying, F. (2023). The role of AI digital anchors in enhancing the news broadcasting user experience: An analysis of the interaction of AI anchors with the audience in live news programs. EAI Endorsed Transactions on Ambient Systems, 23(11), 2343240. https://doi.org/10.4108/eai.23-11-2023.2343240
  29. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709-734. https://doi.org/10.2307/258792
  30. McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce. Information Systems Research, 13(3), 334-359. https://doi.org/10.1287/isre.13.3.334.81
  31. Napoli, P. M. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press. https://doi.org/10.7312/napo18454
  32. Nass, C., & Brave, S. (2006). Wired for speech: How voice activates and advances the human-computer relationship. Computational Linguistics, 32(3), 451–452. https://doi.org/10.1162/coli.2006.32.3.451
  33. Nga, L. P. (2026). Digital transformation, artificial intelligence application, and service quality as determinants of user satisfaction in mobile money services. Ianna Journal of Interdisciplinary Studies, 8(1), 1032-1045. https://iannajournalofinterdisciplinarystudies.com/index.php/1/article/view/1425
  34. Niu, Y. (2025). From human anchors to AI anchors: A review of technology, ethics, and audience response in audiovisual media transformation. International Theory and Practice in Humanities and Social Sciences, 2(2), 361-372. https://doi.org/10.70693/itphss.v2i2.166
  35. Nowak, K. L., & Biocca, F. (2003). The effect of the agency and anthropomorphism on users’ sense of telepresence, copresence, and social presence in virtual environments. Presence: Teleoperators and Virtual Environments, 12(5), 481-494. https://doi.org/10.1162/105474603322761289
  36. OECD. (2023). Ensuring trustworthy artificial intelligence in the workplace. OECD Publishing. https://doi.org/10.1787/08785bba-en
  37. Olijo, I. I. (2025). Gender disparities in research return rates: The moderating influence of AI self-efficacy and methodological design. Verlumun Journal of AI, Gender and Cultural Studies, 1(1), 90-100. https://doi.org/10.5281/zenodo.18242833
  38. Partnership on AI. (2025). Improving labor transparency in AI through worker inclusion. Partnership on AI. https://partnershiponai.org/improving-labor-transparency-in-ai-through-worker-inclusion/
  39. Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. Journal of the Acoustical Society of America, 24, 175-184. https://doi.org/10.1121/1.1906875
  40. Reuters Institute. (2024). Digital news report 2024. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2024
  41. Rubin, A. M. (1983). Television uses and gratifications: The interactions of viewing patterns and motivations. Journal of Broadcasting, 27(1), 37-51. https://doi.org/10.1080/08838158309386471
  42. Rubin, A. M. (2002). The uses-and-gratifications perspective of media effects. In J. Bryant, & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 525-548). Routledge.
  43. Scherer, K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40(1-2), 227-256. https://doi.org/10.1016/S0167-6393(02)00084-5
  44. Schreibelmayr, S., & Mara, M. (2022). Robot voices in daily life: Vocal human-likeness and application context as determinants of user acceptance. Frontiers in Psychology, 13, Article 787499. https://doi.org/10.3389/fpsyg.2022.787499
  45. Schwartz, R. (2022). Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270
  46. Sharples, M. (2023). Towards social generative AI for education: Theory, practices and ethics. Learning: Research and Practice, 9(2), 159-167. https://doi.org/10.1080/23735082.2023.2261131
  47. Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2018). Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4779-4783). IEEE. https://doi.org/10.1109/ICASSP.2018.8461368
  48. Song, P. (2025). Integrating AI virtual humans into broadcasting and hosting education: Pedagogical, ethical, and industry implications. UKR Journal of Arts, Humanities and Social Sciences, 1(6), 300-307. https://doi.org/10.5281/zenodo.16881288
  49. Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of human-AI interaction. Journal of Computer-Mediated Communication, 25(1), 74-88. https://doi.org/10.1093/jcmc/zmz026
  50. Sundar, S. S., & Limperos, A. M. (2013). Uses and grats 2.0: New gratifications for new media. Journal of Broadcasting & Electronic Media, 57(4), 504-525. https://doi.org/10.1080/08838151.2013.845827
  51. Tan, L., Zheng, H., & Liu, X. (2021). Advances in emotional speech synthesis for AI anchors. Journal of Speech Technology, 19(4), 512-523.
  52. Titze, I. R. (1994). Principles of voice production. Prentice Hall.
  53. Tuan, H. C., & Tung, N. T. (2026). Determinants of AI adoption and how the adoption affects the business performance of small and medium-scale enterprises. Ianna Journal of Interdisciplinary Studies, 8(1), 25-37. https://doi.org/10.5281/zenodo.17689249
  54. Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., & Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. arXiv. https://doi.org/10.48550/arXiv.1609.03499
  55. Vasileiou, K., Barnett, J., Thorpe, S., & Young, T. (2018). Characterising and justifying sample size sufficiency in interview-based studies: Systematic analysis of qualitative health research over a 15-year period. BMC Medical Research Methodology, 18, Article 148. https://doi.org/10.1186/s12874-018-0594-7
  56. Wang, Y., Skerry-Ryan, R., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., Yang, Z., Xiao, Y., Chen, Z., Bengio, S., Le, Q., Agiomyrgiannakis, Y., Clark, R., & Saurous, R. A. (2017). Tacotron: Towards end-to-end speech synthesis. In Proceedings of the INTERSPEECH 2017 (pp. 4006-4010). https://doi.org/10.21437/Interspeech.2017-1452
  57. Wang, S., Bao, T., & Huang, Y. (2025). A study on the factors influencing audience choices of AI anchors in the era of digital intelligence. In Proceedings of the 2025 2nd International Conference on Innovation Management and Information System (pp. 439-444). ACM. https://doi.org/10.1145/3745676.3745743
  58. Wiederhold, B. K. (2019). Animated news anchors: Where to next? Cyberpsychology, Behavior, and Social Networking, 22(11), 675-676. https://doi.org/10.1089/cyber.2019.29167.bkw
  59. Wölker, A., & Powell, T. E. (2021). Algorithms in the newsroom? News readers’ perceived credibility and selection of automated journalism. Journalism, 22(1), 86-103. https://doi.org/10.1177/1464884918757072
  60. Xie, Y., Zhu, K., Zhou, P., & Liang, L. (2023). How does anthropomorphism improve human-AI interaction satisfaction: A dual-path model. Computers in Human Behavior, 148, Article 107878. https://doi.org/10.1016/j.chb.2023.107878
  61. Xinhua News Agency, & Sogou. (2018). Xinhua-Sogou AI news anchor. https://news.cgtn.com/news/3d3d514d3055444e30457a6333566d54/index.html
  62. Xue, K., Li, Y., & Jin, H. (2022). What do you think of AI? Research on the influence of AI news anchor image on watching intention. Behavioral Sciences, 12(11), Article 465. https://doi.org/10.3390/bs12110465
  63. Zhang, Y. (2024). Analysis of the application of artificial intelligence in information courses in primary and secondary schools. Lecture Notes in Education Psychology and Public Media, 59, 241-250. https://doi.org/10.54254/2753-7048/59/20241783
  64. Zhang, Y., Wang, X., & Zhao, X. (2025). Supervising or assisting? The influence of virtual anchors driven by AI-human collaboration on customer engagement in live-streaming e-commerce. Electronic Commerce Research, 25(4), 3047-3070. https://doi.org/10.1007/s10660-023-09783-5
  65. Zhu, Y.-P., Xin, L., Wang, H., & Park, H.-W. (2025). Effects of AI virtual anchors on brand image and loyalty: Insights from perceived value theory and SEM-ANN analysis. Systems, 13(2), Article 79. https://doi.org/10.3390/systems13020079