Graph Neural Networks (GNNs) have rapidly emerged as powerful tools for modeling complex data structures, particularly in telecommunications, chemistry, and social networks. Explainability is essential for GNNs because it lets stakeholders understand the inner workings of these complex models, fostering trust and transparency in decision-making. In this publication, we address the problem of determining how important each neighbor is to a GNN's classification of a node, and how to measure performance on this specific task. To this end, we reformulate several known explainability methods to compute neighbor importance and introduce four new metrics that help assess an explainability method's reliability. Our results show that gradient-based techniques produce nearly indistinguishable explanations in the GNN domain, in contrast to other areas of deep learning where differentiating such techniques is an active area of research; efforts in this direction may therefore be less promising for GNNs. In addition, many explainability techniques failed to identify important neighbors when GNNs without self-loops are used. (The code to replicate our findings will be available at https://github.com/EricssonResearch/gnn-neighbors-xai.)
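To make the notion of gradient-based neighbor importance concrete, the following is a minimal NumPy sketch (not the paper's implementation; all names, the toy graph, and the one-layer linear GCN are illustrative assumptions). For a linear layer Z = Â X W, the gradient of node v's class-c logit with respect to neighbor u's features is Â[v, u] · W[:, c], so a gradient-times-input saliency score per neighbor can be computed in closed form.

```python
import numpy as np

def neighbor_importance(A_hat, X, W, v, c):
    """Gradient-times-input saliency score of each neighbor u for
    node v's class-c logit in a one-layer linear GCN Z = A_hat @ X @ W.

    A_hat: (n, n) normalized adjacency (self-loop included if present)
    X:     (n, f) node features
    W:     (f, k) layer weights
    """
    # d z[v, c] / d X[u, f] = A_hat[v, u] * W[f, c]  (closed form)
    grads = np.outer(A_hat[v], W[:, c])
    # Aggregate |gradient * input| over the feature dimension
    return np.abs(grads * X).sum(axis=1)

# Toy graph: node 0 with neighbors 1 and 2, self-loops on the diagonal.
A_hat = np.array([[0.4, 0.3, 0.3],
                  [0.3, 0.7, 0.0],
                  [0.3, 0.0, 0.7]])
X = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])

scores = neighbor_importance(A_hat, X, W, v=0, c=0)
print(scores)  # one importance score per neighbor (including node 0 itself)
```

Note that with a zero diagonal in Â (no self-loops), the score of node v itself vanishes, which is one reason gradient-style scores behave differently on GNNs without self-loops.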