Publication
Collective cooperation is fundamental to individual survival and social development, and exploring the mechanisms behind its emergence is of great significance. However, most existing studies of evolutionary dynamics on higher-order networks assume that all agents in a population follow the same strategy-updating rule. This assumption is an oversimplification that does not align with reality.
To this end, we propose a higher-order network game framework featuring a hybrid strategy-updating rule. Specifically, we use scale-free random hypergraphs (SRHs) to characterize the underlying network topology of the population.
Then, drawing on social learning and behaviorism theories, we categorize agents into two types: imitation learners and autonomous learners. For imitation learners, we apply the Fermi rule to characterize their probabilistic imitation behavior, while for autonomous learners, we adopt a reinforcement learning method to capture their decision-making features.
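The two update rules have standard textbook forms, which can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Fermi rule gives the probability that an imitation learner copies a neighbor's strategy as a logistic function of the payoff difference (with noise parameter K), and the autonomous learner is sketched here as a generic tabular Q-learning update; the function names, parameter values, and state/action encoding are all illustrative assumptions.

```python
import math

def fermi_prob(payoff_self, payoff_neighbor, K=0.1):
    """Probability that an imitation learner copies a neighbor's strategy.

    Standard Fermi rule: a higher neighbor payoff raises the copy
    probability; K is the selection-noise (temperature) parameter.
    """
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / K))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for an autonomous learner (illustrative).

    q maps (state, action) -> value; actions here are 'C' (cooperate)
    and 'D' (defect). Returns the updated value.
    """
    best_next = max(q.get((next_state, a), 0.0) for a in ('C', 'D'))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

With equal payoffs the Fermi rule yields a copy probability of exactly 0.5, and small K makes imitation nearly deterministic toward the better-performing neighbor.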
Through a series of simulation experiments and theoretical analyses, we find that autonomous learners have a dual impact on cooperation in groups: they inhibit cooperation at low dilemma intensities but promote cooperation at high dilemma intensities. In addition, we show that smaller group sizes are more conducive to cooperation.
Our findings provide valuable insights for better understanding the impact of hybrid updating mechanisms on the evolutionary dynamics of collective cooperation in higher-order networks.
Y. Xu, D. Zhao, T.P. Benko, C. Xia, M. Perc, Reinforcement Learning Can Be a Double-Edged Sword for Cooperation on Higher-Order Networks, IEEE Transactions on Systems, Man, and Cybernetics: Systems (99) (2025) 1-14.