Latest AI Papers: November 09, 2025


Hey everyone! Here's a rundown of the latest and greatest AI papers, covering everything from graph pre-training to graph neural networks. I've compiled a list of the most exciting research from November 9, 2025, so you can stay ahead of the curve. Check out the GitHub page for a better reading experience and more papers. Let's dive in!

Graph Pre-training: Laying the Foundation

Graph pre-training is all about preparing graph neural networks for success. Models are first trained on large collections of graphs so they learn general patterns and structures, which lets them perform well on a variety of downstream tasks even when labeled data is limited. It's like giving your model a strong foundation before building the house. The benefits are significant: better performance, faster training, and stronger generalization to unseen data. That makes pre-training especially valuable for complex datasets or situations where labels are scarce, and it's an area with lots of innovation aimed at building more robust, adaptable models that can do more with less data and time.
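To make the recipe concrete, here's a minimal sketch of pre-train-then-fine-tune in plain PyTorch. Everything in it (the `GNNEncoder`, the masked-feature reconstruction objective, the dimensions) is an illustrative assumption, not the method of any particular paper below:

```python
# Hypothetical pre-training sketch: mask node features, train a GNN to
# reconstruct them, then reuse the encoder for downstream tasks.
import torch
import torch.nn as nn

class GNNEncoder(nn.Module):
    """One round of mean-neighbor message passing, then a linear map."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees
        h = adj @ x / deg                                 # average neighbor features
        return torch.relu(self.lin(h))

def pretrain_step(encoder, decoder, x, adj, mask_rate=0.3):
    """Self-supervised step: zero out some node features, reconstruct them."""
    mask = torch.rand(x.size(0)) < mask_rate
    x_masked = x.clone()
    x_masked[mask] = 0.0
    z = encoder(x_masked, adj)
    return ((decoder(z)[mask] - x[mask]) ** 2).mean()     # MSE on masked nodes

encoder, decoder = GNNEncoder(16, 32), nn.Linear(32, 16)
x = torch.randn(10, 16)                                   # toy node features
adj = (torch.rand(10, 10) > 0.7).float()                  # toy adjacency matrix
pretrain_step(encoder, decoder, x, adj).backward()
```

The key split: the encoder learns from unlabeled graphs, and only a small task head has to be trained later on the scarce labeled data.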

Here are the papers related to graph pre-training:

| Title | Date | Comment |
| --- | --- | --- |
| Text-Free Multi-domain Graph Pre-training: Toward Graph Foundation Models | 2024-09-22 | Under review |
| Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks | 2023-11-21 | |

Graph Foundation Model: The Future of AI

Graph Foundation Models aim to do for graphs what transformer-based foundation models did for NLP: serve as one versatile model that handles a wide range of graph-related tasks. Think of them as the Swiss Army knives of the graph world! They're trained on massive datasets and can be adapted to applications from social network analysis to drug discovery, and the potential is huge because they can learn complex relationships and patterns within graph data. Current research focuses on innovative architectures and training methods that improve their capability, efficiency, and transferability, so the same model can be applied across diverse domains with impressive results. It's an exciting time to watch these graph foundation models evolve and reshape the landscape of AI.

Here are the papers related to graph foundation models:

| Title | Date | Comment |
| --- | --- | --- |
| GMoPE: A Prompt-Expert Mixture Framework for Graph Foundation Models | 2025-11-05 | |
| GraphKeeper: Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation | 2025-10-30 | Accepted by the Main Track of NeurIPS 2025 |
| Equivariance Everywhere All At Once: A Recipe for Graph Foundation Models | 2025-10-28 | |
| The Underappreciated Power of Vision Models for Graph Structural Understanding | 2025-10-27 | NeurIPS 2025 |
| Learning Noise-Resilient and Transferable Graph-Text Alignment via Dynamic Quality Assessment | 2025-10-22 | |
| LLM as GNN: Graph Vocabulary Learning for Text-Attributed Graph Foundation Models | 2025-10-20 | |
| Deeper with Riemannian Geometry: Overcoming Oversmoothing and Oversquashing for Graph Foundation Models | 2025-10-20 | Accepted by NeurIPS 2025 |
| GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation | 2025-10-20 | Accepted by NeurIPS 2025 |
| GraphLand: Evaluating Graph Machine Learning Models on Diverse Industrial Data | 2025-10-16 | |
| Stealthy Dual-Trigger Backdoors: Attacking Prompt Tuning in LM-Empowered Graph Foundation Models | 2025-10-16 | |

Graph Prompt: Guiding AI with Precision

Graph prompting is a smart way of guiding the behavior of graph neural networks by using prompts: instructions or hints that tell the model what to focus on, much like giving a student a specific question to guide their thinking. Prompts improve a model's accuracy, efficiency, and adaptability, and they make models far more flexible, since the same pretrained model can be reused for different tasks simply by changing the prompt. That simplifies the design and training of new models. Think of it as a lightweight way to fine-tune AI models for specific tasks; it's a powerful technique for unlocking their full potential while keeping them versatile and user-friendly.
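As a (hedged) illustration of one common flavor, here's a sketch where the pretrained encoder stays frozen and only a small learnable prompt vector, added to every node's input features, is tuned per task. It reuses the illustrative `GNNEncoder` from the pre-training sketch above; all names and dimensions are assumptions:

```python
# Hypothetical feature-level graph prompting: freeze the pretrained GNN,
# learn only a prompt vector and a light task head.
import torch
import torch.nn as nn

class PromptedModel(nn.Module):
    def __init__(self, encoder, in_dim, hid_dim, num_classes):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                      # freeze pretrained GNN
        self.prompt = nn.Parameter(torch.zeros(in_dim))  # task-specific prompt
        self.head = nn.Linear(hid_dim, num_classes)      # light task head

    def forward(self, x, adj):
        z = self.encoder(x + self.prompt, adj)           # prompt shifts the inputs
        return self.head(z)

model = PromptedModel(GNNEncoder(16, 32), in_dim=16, hid_dim=32, num_classes=3)
# During fine-tuning, only model.prompt and model.head receive gradients,
# so swapping tasks means swapping a handful of parameters, not the model.
```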

Here are the papers related to graph prompt:

| Title | Date | Comment |
| --- | --- | --- |
| FoGE: Fock Space inspired encoding for graph prompting | 2025-10-27 | |
| Adaptive Dual Prompting: Hierarchical Debiasing for Fairness-aware Graph Neural Networks | 2025-10-27 | |
| Cross-Paradigm Graph Backdoor Attacks with Promptable Subgraph Triggers | 2025-10-26 | |
| GraphTOP: Graph Topology-Oriented Prompting for Graph Neural Networks | 2025-10-25 | Accepted by the 39th Annual Conference on Neural Information Processing Systems (NeurIPS 2025) |
| One Prompt Fits All: Universal Graph Adaptation for Pretrained Models | 2025-10-14 | Accepted by NeurIPS 2025 main conference |
| Event-Aware Prompt Learning for Dynamic Graphs | 2025-10-13 | Under review |
| Graph Your Own Prompt | 2025-10-12 | Accepted at the 39th Conference on Neural Information Processing Systems (NeurIPS 2025) |
| MSCPT: Few-shot Whole Slide Image Classification with Multi-scale and Context-focused Prompt Tuning | 2025-09-09 | This work has been submitted to the IEEE TMI for possible publication |
| SGPT: Few-Shot Prompt Tuning for Signed Graphs | 2025-08-17 | CIKM'25 |
| MX-AI: Agentic Observability and Control Platform for Open and AI-RAN | 2025-08-08 | This work has been submitted to the IEEE for possible publication |

Graph Contrastive Learning: Learning by Comparison

Graph Contrastive Learning is a powerful technique where the model learns by contrasting different views of the same graph. It works by creating multiple slightly different versions (augmentations) of a graph and training the model to recognize that they represent the same underlying structure, which forces it to learn robust, generalizable features. It's like teaching a child to recognize a cat by showing it pictures from different angles. By contrasting viewpoints, the model gets better at ignoring noise and identifying the essential features of the graph, which is especially useful when data is noisy or incomplete. That makes it a key ingredient for building models that are robust and adaptable enough to work well in the real world.
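Here's what that looks like in a minimal, GraphCL-flavored sketch in plain PyTorch (augmentation rates and names are made-up assumptions): two random views of the same graph are encoded, and an NT-Xent loss treats each node's two views as positives and all other pairs as negatives:

```python
# Hypothetical graph contrastive learning sketch: augment twice, then
# pull matching node embeddings together across the two views.
import torch
import torch.nn.functional as F

def augment(x, adj, drop_edge=0.2, mask_feat=0.2):
    """One random 'view': drop some edges, mask some features."""
    adj_v = adj * (torch.rand_like(adj) > drop_edge).float()
    x_v = x * (torch.rand_like(x) > mask_feat).float()
    return x_v, adj_v

def nt_xent(z1, z2, tau=0.5):
    """Node i in view 1 should match node i in view 2, not other nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = (z1 @ z2.t()) / tau                 # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(sim, targets)

# With any encoder(x, adj) -> embeddings (e.g., the sketch further up):
# z1 = encoder(*augment(x, adj))
# z2 = encoder(*augment(x, adj))
# loss = nt_xent(z1, z2)
```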

Here are the papers related to graph contrastive learning:

| Title | Date | Comment |
| --- | --- | --- |
| Cross-Paradigm Graph Backdoor Attacks with Promptable Subgraph Triggers | 2025-10-26 | |
| Toward General Digraph Contrastive Learning: A Dual Spatial Perspective | 2025-10-18 | |
| Transferable Parasitic Estimation via Graph Contrastive Learning and Label Rebalancing in AMS Circuits | 2025-10-10 | Final version accepted by the International Conference on Computer-Aided Design (ICCAD) 2025. First two authors have equal contributions |
| Generative Data Augmentation in Graph Contrastive Learning for Recommendation | 2025-10-10 | The 34th ACM International Conference on Information and Knowledge Management |
| From Moments to Models: Graphon Mixture-Aware Mixup and Contrastive Learning | 2025-10-09 | |
| Quantum Rationale-Aware Graph Contrastive Learning for Jet Discrimination | 2025-10-08 | |
| LLM-CoT Enhanced Graph Neural Recommendation with Harmonized Group Policy Optimization | 2025-10-05 | |
| Are LLMs Better GNN Helpers? Rethinking Robust Graph Learning under Deficiencies with Iterative Refinement | 2025-10-02 | 14 pages |
| Less is More: Towards Simple Graph Contrastive Learning | 2025-09-30 | Submitted to ICLR 2026 |
| Fractal Graph Contrastive Learning | 2025-09-25 | |

Graph Neural Networks: Connecting the Dots

Graph Neural Networks (GNNs) are designed specifically for graph-structured data, analyzing complex relationships and patterns within networks. They're becoming increasingly popular because they can handle diverse real-world data like social networks, biological networks, and even traffic patterns. The core idea behind GNNs is to let nodes in a graph exchange information with their neighbors, which allows the model to capture the complex relationships between data points. Researchers are constantly developing new architectures to improve GNNs' efficiency and accuracy, and the applications are broad, from predicting the spread of diseases to optimizing traffic flow. That makes GNNs an area of active research and a powerful tool for understanding and making predictions about complex systems; they're changing how we process and understand information.
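To see "nodes exchanging information with their neighbors" in code, here's a single GCN-style propagation step, H' = ReLU(Â H W) with Â = D^(-1/2)(A + I)D^(-1/2), in plain PyTorch. The tiny graph and dimensions are made up purely for illustration:

```python
# One hypothetical message-passing step on a 4-node toy graph.
import torch

A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 1.],
                  [0., 1., 0., 0.],
                  [0., 1., 0., 0.]])          # star-ish toy graph
H = torch.randn(4, 8)                          # node features
W = torch.randn(8, 8)                          # weight matrix (random here)

A_hat = A + torch.eye(4)                       # add self-loops
d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)        # D^(-1/2) from node degrees
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
H_next = torch.relu(A_norm @ H @ W)            # each node mixes neighbor features

# Stacking k such layers lets information travel k hops across the graph,
# which is how GNNs capture relationships beyond immediate neighbors.
```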

Here are the papers related to graph neural networks:

| Title | Date | Comment |
| --- | --- | --- |
| SolarCrossFormer: Improving day-ahead Solar Irradiance Forecasting by Integrating Satellite Imagery and Ground Sensors | 2025-11-06 | 14 pages, 18 figures, accepted for publication in IEEE Transactions on Sustainable Energy |
| WaveGuard: Robust Deepfake Detection and Source Tracing via Dual-Tree Complex Wavelet and Graph Neural Networks | 2025-11-06 | 14 pages, 6 figures, 7 tables |
| Causal Graph Neural Networks for Healthcare | 2025-11-06 | |
| Graph Neural Networks for User Satisfaction Classification in Human-Computer Interaction | 2025-11-06 | |
| ScaleDL: Towards Scalable and Efficient Runtime Prediction for Distributed Deep Learning Workloads | 2025-11-06 | |
| DeNoise: Learning Robust Graph Representations for Unsupervised Graph-Level Anomaly Detection | 2025-11-06 | |
| Plan of Knowledge: Retrieval-Augmented Large Language Models for Temporal Knowledge Graph Question Answering | 2025-11-06 | Submitted to the IEEE for possible publication |
| GNN-MoE: Context-Aware Patch Routing using GNNs for Parameter-Efficient Domain Generalization | 2025-11-06 | 6 pages, 3 figures |
| Local Fragments, Global Gains: Subgraph Counting using Graph Neural Networks | 2025-11-05 | |
| Sketch-Augmented Features Improve Learning Long-Range Dependencies in Graph Neural Networks | 2025-11-05 | To appear at NeurIPS 2025 |

That's all for now, folks! I hope you found this overview helpful. Be sure to check back for more updates. If you have any questions or want to discuss a specific paper, feel free to reach out. Keep exploring and happy researching! Catch you in the next one! Cheers!