2025-07-10 | Meshless projection model-order reduction via reference spaces for smoothed-particle hydrodynamics | Steven N. Rodriguez, Steven L. Brunton, Liam K. Magargal et.al. | 2507.07830 | | | Abstract (click to expand)This work proposes a model-order reduction framework for the meshless weakly compressible smoothed particle hydrodynamics (SPH) method. The proposed framework introduces the concept of modal reference spaces to overcome the challenges of discovering low-dimensional subspaces from the unstructured, dynamic, and mixing numerical topology that is often seen in SPH simulations. The proposed modal reference spaces enable a low-dimensional representation of the SPH field equations while maintaining their inherent meshless qualities. Modal reference spaces are constructed by projecting SPH snapshot data onto a reference space where low-dimensionality of field quantities can be discovered via traditional modal decomposition techniques (e.g., the proper orthogonal decomposition (POD)). Modal quantities are mapped back to the meshless SPH space via scattered data interpolation during the online predictive stage. The proposed model-order reduction framework is cast into the *meshless* Galerkin POD (GPOD) and the Adjoint Petrov--Galerkin (APG) projection model-order reduction (PMOR) formulations. The PMORs are tested on three numerical experiments: 1) the Taylor--Green vortex; 2) lid-driven cavity; and 3) flow past an open cavity. Results show good agreement in reconstructed and predictive velocity fields, showcasing the ability of the proposed framework to evolve the unstructured, dynamic, and mixing SPH field equations in a low-dimensional subspace. Results also show that the pressure field is sensitive to the projection error due to the stiff weakly compressible assumption made in the current SPH framework, but that this sensitivity can be alleviated through nonlinear approximations, such as the APG approach. Ultimately, the presented meshless model-order reduction framework marks a step toward enabling drastic cost savings of SPH simulations. |
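For readers unfamiliar with the two generic ingredients named in this abstract, the sketch below shows snapshot-based POD via the SVD and a scattered-data interpolation back to particle positions. All data, sizes, and the 99% energy criterion are illustrative assumptions; the paper's actual modal-reference-space construction is more involved.

```python
import numpy as np
from scipy.interpolate import griddata

# Snapshot matrix: each column is one field sampled on a fixed reference
# space (n_points x n_snapshots). Hypothetical stand-in data for illustration.
rng = np.random.default_rng(0)
ref_xy = rng.uniform(0.0, 1.0, size=(500, 2))          # reference-space points
snapshots = rng.standard_normal((500, 40))             # stand-in for SPH data

# POD via thin SVD of the mean-centered snapshots: columns of U are the
# POD modes, ordered by energy; keep enough modes for 99% of the energy.
mean = snapshots.mean(axis=1)
U, s, _ = np.linalg.svd(snapshots - mean[:, None], full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
modes = U[:, :r]                                       # low-dimensional basis

# Reduced coordinates of one field, then reconstruction in reference space.
q = modes.T @ (snapshots[:, 0] - mean)
field_ref = mean + modes @ q

# Map the modal field back to scattered SPH particle positions
# (linear interpolation; NaN outside the convex hull of the reference points).
particle_xy = rng.uniform(0.0, 1.0, size=(800, 2))
field_sph = griddata(ref_xy, field_ref, particle_xy, method="linear")
```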
2025-07-10 | Computationally Efficient Information-Driven Optical Design with Interchanging Optimization | Eric Markley, Henry Pinkard, Leyla Kabuli et.al. | 2507.07789 | | | Abstract (click to expand)Recent work has demonstrated that imaging systems can be evaluated through the information content of their measurements alone, enabling application-agnostic optical design that avoids computational decoding challenges. Information-Driven Encoder Analysis Learning (IDEAL) was proposed to automate this process through gradient-based optimization. In this work, we study IDEAL across diverse imaging systems and find that it suffers from high memory usage, long runtimes, and a potentially mismatched objective function due to end-to-end differentiability requirements. We introduce IDEAL with Interchanging Optimization (IDEAL-IO), a method that decouples density estimation from optical parameter optimization by alternating between fitting density models to current measurements and updating optical parameters using those fixed models for information estimation. This approach reduces runtime and memory usage by up to 6x while enabling more expressive density models that guide optimization toward superior designs. We validate our method on diffractive optics, lensless imaging, and snapshot 3D microscopy applications, establishing information-theoretic optimization as a practical, scalable strategy for real-world imaging system design. |
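A minimal sketch of the alternating scheme described above, with toy stand-ins: `optics` for the differentiable optical model and `density` for the probabilistic model of measurements. Both networks, the data, and the scalar "information" objective are placeholders, not the paper's actual estimators.

```python
import torch

# Hypothetical stand-ins: `optics` maps scene -> measurement (the parameters
# being designed); `density` is a model of measurements whose score proxies
# the information estimate. Both are assumptions for this sketch.
optics = torch.nn.Linear(64, 64)
density = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 1))
opt_optics = torch.optim.Adam(optics.parameters(), lr=1e-3)
opt_density = torch.optim.Adam(density.parameters(), lr=1e-3)

scenes = torch.randn(256, 64)  # toy dataset

for outer in range(10):
    # Phase 1: fit the density model to measurements from the *fixed* optics.
    with torch.no_grad():
        meas = optics(scenes)
    for _ in range(50):
        nll = -density(meas).mean()          # stand-in for a likelihood fit
        opt_density.zero_grad(); nll.backward(); opt_density.step()
    # Phase 2: update optical parameters against the *frozen* density model.
    for p in density.parameters():
        p.requires_grad_(False)
    info = density(optics(scenes)).mean()    # stand-in information estimate
    opt_optics.zero_grad(); (-info).backward(); opt_optics.step()
    for p in density.parameters():
        p.requires_grad_(True)
```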
2025-07-10 | The Pandora's Box Problem with Sequential Inspections | Ali Aouad, Jingwei Ji, Yaron Shaposhnik et.al. | 2507.07508 | | | Abstract (click to expand)The Pandora's box problem (Weitzman 1979) is a core model in economic theory that captures an agent's (Pandora's) search for the best alternative (box). We study an important generalization of the problem where the agent can either fully open boxes for a certain fee to reveal their exact values or partially open them at a reduced cost. This introduces a new tradeoff between information acquisition and cost efficiency. We establish a hardness result and employ an array of techniques in stochastic optimization to provide a comprehensive analysis of this model. This includes (1) the identification of structural properties of the optimal policy that provide insights about optimal decisions; (2) the derivation of problem relaxations and provably near-optimal solutions; (3) the characterization of the optimal policy in special yet non-trivial cases; and (4) an extensive numerical study that compares the performance of various policies, and which provides additional insights about the optimal policy. Throughout, we show that intuitive threshold-based policies that extend the Pandora's box optimal solution can effectively guide search decisions. |
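For context, the classical fully-open case that this paper generalizes admits a closed characterization: each box gets a reservation value \(z\) solving \(c = \mathbb{E}[(V - z)^+]\), and boxes are opened in decreasing order of \(z\), stopping once the best observed value exceeds every unopened index. A numerical sketch under toy Gaussian value distributions (the partial-inspection generalization studied in the paper is not implemented here):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def reservation_value(dist, cost):
    """Weitzman (1979) index: z solves cost = E[(V - z)^+] for box value V."""
    def excess(z):
        # E[(V - z)^+] for a scipy continuous distribution, by quadrature.
        grid = np.linspace(z, dist.ppf(0.9999), 2000)
        return np.trapz((grid - z) * dist.pdf(grid), grid) - cost
    return brentq(excess, dist.ppf(1e-6), dist.ppf(1 - 1e-6))

# Three boxes with different value distributions and full-opening fees.
boxes = [(norm(10, 2), 0.5), (norm(12, 4), 1.0), (norm(9, 1), 0.2)]
indices = [reservation_value(d, c) for d, c in boxes]
# Optimal (full-inspection) policy: open boxes in decreasing index order,
# stopping once the best observed value exceeds every unopened index.
print(sorted(indices, reverse=True))
```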
2025-07-09 | Bridging the Plausibility-Validity Gap by Fine-Tuning a Reasoning-Enhanced LLM for Chemical Synthesis and Discovery | Malikussaid, Hilal Hudan Nuha et.al. | 2507.07328 | | 42 pages, 8 figures, 1 equation, 2 algorithms, 31 tables, to be published in ISPACS Conference 2025, unabridged version | Abstract (click to expand)Large Language Models (LLMs) often generate scientifically plausible but factually invalid information, a challenge we term the "plausibility-validity gap," particularly in specialized domains like chemistry. This paper presents a systematic methodology to bridge this gap by developing a specialized scientific assistant. We utilized the Magistral Small model, noted for its integrated reasoning capabilities, and fine-tuned it using Low-Rank Adaptation (LoRA). A key component of our approach was the creation of a "dual-domain dataset," a comprehensive corpus curated from various sources encompassing both molecular properties and chemical reactions, which was standardized to ensure quality. Our evaluation demonstrates that the fine-tuned model achieves significant improvements over the baseline model in format adherence, chemical validity of generated molecules, and the feasibility of proposed synthesis routes. The results indicate a hierarchical learning pattern, where syntactic correctness is learned more readily than chemical possibility and synthesis feasibility. While a comparative analysis with human experts revealed competitive performance in areas like chemical creativity and reasoning, it also highlighted key limitations, including persistent errors in stereochemistry, a static knowledge cutoff, and occasional reference hallucination. This work establishes a viable framework for adapting generalist LLMs into reliable, specialized tools for chemical research, while also delineating critical areas for future improvement. |
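As a concrete illustration of the fine-tuning setup named above, here is a minimal LoRA configuration using the Hugging Face `peft` library. The checkpoint id, rank, alpha, and target modules are placeholder assumptions; the paper fine-tunes Magistral Small with its own hyperparameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; the paper's exact Magistral Small id is not given here.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Rank/alpha/dropout/target modules are illustrative, not the paper's values.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```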
2025-07-09 | Scalable ADER-DG Transport Method with Polynomial Order Independent CFL Limit | Kieran Ricardo, Kenneth Duru et.al. | 2507.07304 | | | Abstract (click to expand)Discontinuous Galerkin (DG) methods are known to suffer from increasingly restrictive time step constraints as the polynomial order increases, limiting their efficiency at high orders. In this paper, we introduce a novel locally implicit, but globally explicit ADER-DG scheme designed for transport-dominated problems. The method achieves a maximum stable time step governed by an element-width based CFL condition that is independent of the polynomial degree. By solving a set of element-local implicit problems at each time step, our approach more effectively captures the domain of dependence. As a result, our method remains stable for CFL numbers up to \(1/\sqrt{d}\) in \(d\) spatial dimensions. We provide a rigorous stability proof in one dimension, and extend the analysis to two and three dimensions using a semi-analytical von Neumann stability analysis. The accuracy and convergence of the method are demonstrated through numerical experiments on both linear and nonlinear test cases. |
2025-07-09 | DiffSpectra: Molecular Structure Elucidation from Spectra using Diffusion Models | Liang Wang, Yu Rong, Tingyang Xu et.al. | 2507.06853 | | | Abstract (click to expand)Molecular structure elucidation from spectra is a foundational problem in chemistry, with profound implications for compound identification, synthesis, and drug development. Traditional methods rely heavily on expert interpretation and lack scalability. Pioneering machine learning methods have introduced retrieval-based strategies, but their reliance on finite libraries limits generalization to novel molecules. Generative models offer a promising alternative, yet most adopt autoregressive SMILES-based architectures that overlook 3D geometry and struggle to integrate diverse spectral modalities. In this work, we present DiffSpectra, a generative framework that directly infers both 2D and 3D molecular structures from multi-modal spectral data using diffusion models. DiffSpectra formulates structure elucidation as a conditional generation process. Its denoising network is parameterized by Diffusion Molecule Transformer, an SE(3)-equivariant architecture that integrates topological and geometric information. Conditioning is provided by SpecFormer, a transformer-based spectral encoder that captures intra- and inter-spectral dependencies from multi-modal spectra. Extensive experiments demonstrate that DiffSpectra achieves high accuracy in structure elucidation, recovering exact structures with 16.01% top-1 accuracy and 96.86% top-20 accuracy through sampling. The model benefits significantly from 3D geometric modeling, SpecFormer pre-training, and multi-modal conditioning. These results highlight the effectiveness of spectrum-conditioned diffusion modeling in addressing the challenge of molecular structure elucidation. To our knowledge, DiffSpectra is the first framework to unify multi-modal spectral reasoning and joint 2D/3D generative modeling for de novo molecular structure elucidation. |
2025-07-09 | Text to model via SysML: Automated generation of dynamical system computational models from unstructured natural language text via enhanced System Modeling Language diagrams | Matthew Anderson Hendricks, Alice Cicirello et.al. | 2507.06803 | | | Abstract (click to expand)This paper contributes to speeding up the design and deployment of engineering dynamical systems by proposing a strategy for exploiting domain and expert knowledge for the automated generation of dynamical system computational models, starting from a corpus of documents relevant to the dynamical system of interest and an input document describing the specific system. This strategy is implemented in five steps and, crucially, it uses System Modeling Language (SysML) diagrams to extract accurate information about the dependencies, attributes, and operations of components. Natural Language Processing (NLP) strategies and Large Language Models (LLMs) are employed in specific tasks to improve intermediate outputs of the automated SysML diagram generation, such as: the list of key nouns; the list of extracted relationships; the list of key phrases and key relationships; block attribute values; block relationships; and BDD diagram generation. The applicability of automated SysML diagram generation is illustrated with different case studies. The computational models of complex dynamical systems are then obtained from the SysML diagrams via code generation and computational model generation steps. In the code generation step, NLP strategies are used for summarization, while LLMs are used for validation only. The proposed approach is not limited to a specific system, domain, or computational software. The applicability of the proposed approach is shown via an end-to-end example, from text to model, of a simple pendulum, showing improved performance compared to results yielded by LLMs alone. |
2025-07-08 | Eyes on the Road, Mind Beyond Vision: Context-Aware Multi-modal Enhanced Risk Anticipation | Jiaxun Zhang, Haicheng Liao, Yumu Xie et.al. | 2507.06444 | | Accepted by ACMMM2025 | Abstract (click to expand)Accurate accident anticipation remains challenging when driver cognition and dynamic road conditions are underrepresented in predictive models. In this paper, we propose CAMERA (Context-Aware Multi-modal Enhanced Risk Anticipation), a multi-modal framework integrating dashcam video, textual annotations, and driver attention maps for robust accident anticipation. Unlike existing methods that rely on static or environment-centric thresholds, CAMERA employs an adaptive mechanism guided by scene complexity and gaze entropy, reducing false alarms while maintaining high recall in dynamic, multi-agent traffic scenarios. A hierarchical fusion pipeline with Bi-GRU (Bidirectional GRU) captures spatio-temporal dependencies, while a Geo-Context Vision-Language module translates 3D spatial relationships into interpretable, human-centric alerts. Evaluations on the DADA-2000 and related benchmarks show that CAMERA achieves state-of-the-art performance, improving both accuracy and lead time. These results demonstrate the effectiveness of modeling driver attention, contextual description, and adaptive risk thresholds to enable more reliable accident anticipation. |
2025-07-08 | Rugsafe: A multichain protocol for recovering from and defending against Rug Pulls | Jovonni L. Pharr, Jahanzeb M. Hussain et.al. | 2507.06423 | | | Abstract (click to expand)Rugsafe introduces a comprehensive protocol aimed at mitigating the risks of rug pulls in the cryptocurrency ecosystem. By utilizing cryptographic security measures and economic incentives, the protocol provides a secure multichain system for recovering assets and transforming rugged tokens into opportunities and rewards. Foundational to Rugsafe are specialized vaults where rugged tokens can be securely deposited, and anticoin tokens are issued as receipts. These anticoins are designed to be inversely pegged to the price movement of the underlying rugged token. Users can utilize these anticoins within the ecosystem or choose to burn them, further securing the protocol and earning additional rewards. The supply of the native Rugsafe token is dynamically adjusted based on the volume, value, and activity of rugged tokens, ensuring stability and resilience. By depositing rugged tokens into a vault on several chains, and by burning anticoins, users receive incentives on the RugSafe chain. The protocol's vaults are designed to work in heterogeneous blockchain ecosystems, offering a practical and effective solution to one of the most significant challenges in the cryptocurrency market. |
2025-07-08 | Forex Trading Robot Using Fuzzy Logic | Mustafa Shabani, Alireza Nasiri, Hassan Nafardi et.al. | 2507.06383 | | | Abstract (click to expand)In this study, we propose a fuzzy system for conducting short-term transactions in the forex market. The system is designed to enhance common strategies in the forex market using fuzzy logic, thereby improving the accuracy of transactions. Traditionally, technical strategies based on oscillator indicators have relied on predefined ranges for indicators such as the Relative Strength Index (RSI), the Commodity Channel Index (CCI), and the Stochastic Oscillator to determine entry points for trades. However, the use of these classic indicators has yielded suboptimal results due to the changing nature of the market over time. In our proposed approach, instead of employing classical indicators with fixed thresholds, we introduce a fuzzy Mamdani system for each indicator. The results obtained from these systems are then combined through voting to design a trading robot. Our findings demonstrate a considerable increase in the profitability factor compared to three other methods. Additionally, net profit, gross profit, and maximum capital reduction are calculated and compared across all approaches. |
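To make the indicator-level idea concrete, below is a minimal single-input Mamdani system in plain NumPy that maps an RSI reading to a trade signal. The membership shapes, rule base, and signal universe are illustrative assumptions; the paper builds one tuned Mamdani system per indicator (RSI, CCI, Stochastic) and combines their outputs by voting.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def mamdani_signal(rsi):
    """Toy single-input Mamdani system: RSI -> trade signal in [-1, 1]."""
    # Antecedent memberships on the RSI universe (0..100).
    low  = tri(rsi, 0, 15, 35)     # oversold   -> buy
    mid  = tri(rsi, 25, 50, 75)    # neutral    -> hold
    high = tri(rsi, 65, 85, 100)   # overbought -> sell
    # Consequent memberships on the signal universe.
    u = np.linspace(-1.0, 1.0, 201)
    buy  = tri(u,  0.2,  0.7,  1.2)
    hold = tri(u, -0.4,  0.0,  0.4)
    sell = tri(u, -1.2, -0.7, -0.2)
    # Mamdani implication (min) and aggregation (max) over the rule base.
    agg = np.maximum.reduce([np.minimum(low, buy),
                             np.minimum(mid, hold),
                             np.minimum(high, sell)])
    # Centroid defuzzification.
    return np.trapz(agg * u, u) / (np.trapz(agg, u) + 1e-12)

print(mamdani_signal(22.0))  # oversold reading -> positive (buy) signal
```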
2025-07-08 | Development and Real-World Application of Commercial Motor Vehicle Safety Enforcement Dashboards | Dhairya Parekh, Mark L. Franz Ph. D, Sara Zahedian Ph. D et.al. | 2507.06351 | | Presented at Transportation Research Board Annual Meeting 2025. Presentation number: TRBAM-25-04350 | Abstract (click to expand)Commercial Motor Vehicle (CMV) safety is crucial in traffic management and public safety. CMVs account for numerous traffic incidents, so monitoring CMV safety and safety inspections is essential for ensuring safe and efficient highway movement. This paper presents the development and real-world application of CMV dashboards designed under the guidance of CMV safety enforcement professionals from the Maryland State Police (MSP), the Maryland Department of Transportation - State Highway Administration (MDOT - SHA), and the Federal Motor Carrier Safety Administration (FMCSA) to enable intuitive and efficient analysis of CMV safety performance measures. First, three CMV safety dashboards enable CMV safety professionals to identify sites with a history of safety performance issues. A supplemental dashboard then automates the analysis of CMV enforcement initiatives using the same performance measures. These performance measures are based on CMV probe vehicle speeds, inspection/citation data from Truck Weigh and Inspection Stations (TWIS), patrolling enforcement, and Virtual Weigh Stations (VWS). The authors collaborated with MSP to identify a portion of I-81 in Maryland likely to benefit from targeted CMV enforcement. The supplemental enforcement assessment dashboard was employed to evaluate the impact of enforcement, including the post-enforcement halo effect. The results of the post-enforcement evaluation were mixed, indicating a need for more fine-grained citation data. |
2025-07-08 | Topic Modeling and Link-Prediction for Material Property Discovery | Ryan C. Barron, Maksim E. Eren, Valentin Stanev et.al. | 2507.06139 | | 4 pages, 3 figures, 1 table | Abstract (click to expand)Link prediction infers missing or future relations between graph nodes, based on connection patterns. Scientific literature networks and knowledge graphs are typically large, sparse, and noisy, and often contain missing links between entities. We present an AI-driven hierarchical link prediction framework that integrates matrix factorization to infer hidden associations and steer discovery in complex material domains. Our method combines Hierarchical Nonnegative Matrix Factorization (HNMFk) and Boolean matrix factorization (BNMFk) with automatic model selection, as well as Logistic matrix factorization (LMF), which we use to construct a three-level topic tree from a 46,862-document corpus focused on 73 transition-metal dichalcogenides (TMDs). These materials are studied in a variety of physics fields and have many current and potential applications. An ensemble BNMFk + LMF approach fuses discrete interpretability with probabilistic scoring. The resulting HNMFk clusters map each material onto coherent topics like superconductivity, energy storage, and tribology. In addition, missing or weakly connected links between topics and materials are highlighted, suggesting novel hypotheses for cross-disciplinary exploration. We validate our method by removing publications about superconductivity in well-known superconductors, and show that the model still predicts associations with the superconducting TMD clusters. This shows that the method finds hidden connections in a graph of material-to-latent-topic associations built from scientific literature, which is especially useful when examining a diverse corpus of scientific documents covering the same class of phenomena or materials but originating from distinct communities and perspectives. The new hypotheses generated by the inferred links are exposed through an interactive Streamlit dashboard, designed for human-in-the-loop scientific discovery. |
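A toy sketch of the hierarchical factorization idea: recursively applying NMF to split a document-term matrix into nested topics. It uses scikit-learn's plain `NMF` with fixed component counts; the actual HNMFk/BNMFk pipeline adds automatic model selection, Boolean factorization, and logistic matrix factorization for link scoring.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the 46,862-document TMD corpus.
docs = ["superconductivity in NbSe2 thin films",
        "MoS2 electrodes for energy storage",
        "friction and tribology of WS2 coatings",
        "critical temperature of layered superconductors"]
X = TfidfVectorizer().fit_transform(docs)

def hier_nmf(X, k_top=2, k_sub=2, depth=2):
    """Recursively split documents into topics: a sketch of the HNMFk idea
    (the real method selects k automatically at every node)."""
    model = NMF(n_components=k_top, init="nndsvda", max_iter=500)
    W = model.fit_transform(X)                  # document-topic loadings
    labels = W.argmax(axis=1)                   # hard topic assignment
    tree = {}
    for t in range(k_top):
        members = np.where(labels == t)[0]
        if depth > 1 and len(members) > k_sub:  # recurse into large topics
            tree[t] = hier_nmf(X[members], k_sub, k_sub, depth - 1)
        else:
            tree[t] = members.tolist()          # leaf: document indices
    return tree

print(hier_nmf(X))
```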
2025-07-08 | Bridging Sequential Deep Operator Network and Video Diffusion: Residual Refinement of Spatio-Temporal PDE Solutions | Jaewan Park, Farid Ahmed, Kazuma Kobayashi et.al. | 2507.06133 | | | Abstract (click to expand)Video-diffusion models have recently set the standard in video generation, inpainting, and domain translation thanks to their training stability and high perceptual fidelity. Building on these strengths, we repurpose conditional video diffusion as a physics surrogate for spatio-temporal fields governed by partial differential equations (PDEs). Our two-stage surrogate first applies a Sequential Deep Operator Network (S-DeepONet) to produce a coarse, physics-consistent prior from the prescribed boundary or loading conditions. The prior is then passed to a conditional video diffusion model that learns only the residual: the point-wise difference between the ground truth and the S-DeepONet prediction. By shifting the learning burden from the full solution to its much smaller residual space, diffusion can focus on sharpening high-frequency structures without sacrificing global coherence. The framework is assessed on two disparate benchmarks: (i) vortex-dominated lid-driven cavity flow and (ii) tensile plastic deformation of dogbone specimens. Across these data sets the hybrid surrogate consistently outperforms its single-stage counterpart, cutting the mean relative L2 error from 4.57% to 0.83% for the flow problem and from 4.42% to 2.94% for plasticity, relative improvements of 81.8% and 33.5%, respectively. The hybrid approach not only lowers quantitative errors but also improves visual quality, visibly recovering fine spatial details. These results show that (i) conditioning diffusion on a physics-aware prior enables faithful reconstruction of localized features, (ii) residual learning reduces the complexity of the learning problem, accelerating convergence and enhancing accuracy, and (iii) the same architecture transfers seamlessly from incompressible flow to nonlinear elasto-plasticity without problem-specific architectural modifications, highlighting its broad applicability to nonlinear, time-dependent continua. |
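The residual-learning recipe described above can be sketched independently of the diffusion machinery. Below, a frozen stand-in prior network plays the role of the trained S-DeepONet and a plain regressor plays the refiner; all shapes and data are toy assumptions.

```python
import torch

# Stand-ins for the two stages: `prior_net` plays the trained (frozen)
# S-DeepONet, `refiner` the conditional diffusion model; here the refiner
# is collapsed to a plain regressor to show only the residual idea.
prior_net = torch.nn.Linear(16, 32)
refiner = torch.nn.Sequential(torch.nn.Linear(16 + 32, 64), torch.nn.GELU(),
                              torch.nn.Linear(64, 32))
opt = torch.optim.Adam(refiner.parameters(), lr=1e-3)

cond = torch.randn(128, 16)      # boundary/loading conditions (toy)
truth = torch.randn(128, 32)     # ground-truth field snapshots (toy)

for step in range(200):
    with torch.no_grad():
        prior = prior_net(cond)               # coarse, physics-consistent prior
    target = truth - prior                    # stage 2 learns only the residual
    pred = refiner(torch.cat([cond, prior], dim=1))
    loss = torch.mean((pred - target) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: final field = prior + predicted residual.
final = prior_net(cond) + refiner(torch.cat([cond, prior_net(cond)], dim=1))
```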
2025-07-08 | Affective-ROPTester: Capability and Bias Analysis of LLMs in Predicting Retinopathy of Prematurity | Shuai Zhao, Yulin Zhang, Luwei Xiao et.al. | 2507.05816 | | | Abstract (click to expand)Despite the remarkable progress of large language models (LLMs) across various domains, their capacity to predict retinopathy of prematurity (ROP) risk remains largely unexplored. To address this gap, we introduce a novel Chinese benchmark dataset, termed CROP, comprising 993 admission records annotated with low, medium, and high-risk labels. To systematically examine the predictive capabilities and affective biases of LLMs in ROP risk stratification, we propose Affective-ROPTester, an automated evaluation framework incorporating three prompting strategies: Instruction-based, Chain-of-Thought (CoT), and In-Context Learning (ICL). The Instruction scheme assesses LLMs' intrinsic knowledge and associated biases, whereas the CoT and ICL schemes leverage external medical knowledge to enhance predictive accuracy. Crucially, we integrate emotional elements at the prompt level to investigate how different affective framings influence the model's ability to predict ROP and its bias patterns. Empirical results derived from the CROP dataset yield three principal observations. First, LLMs demonstrate limited efficacy in ROP risk prediction when operating solely on intrinsic knowledge, yet exhibit marked performance gains when augmented with structured external inputs. Second, affective biases are evident in the model outputs, with a consistent inclination toward overestimating medium- and high-risk cases. Third, compared to negative emotions, positive emotional framing contributes to mitigating predictive bias in model outputs. These findings highlight the critical role of affect-sensitive prompt engineering in enhancing diagnostic reliability and emphasize the utility of Affective-ROPTester as a framework for evaluating and mitigating affective bias in clinical language modeling systems. |
2025-07-07 | MCNP-GO: A python package for assembling MCNP input files with a systems engineering approach | Alexandre Friou et.al. | 2507.05659 | | Submitted to Nuclear Engineering and Technology | Abstract (click to expand)This article introduces MCNP-GO (https://github.com/afriou/mcnpgo), a Python package designed to manipulate and assemble MCNP input files, allowing users to assemble a set of independent objects, each described by a valid MCNP file, into a single cohesive file. This tool is particularly useful for applications where precise modeling and positioning of equipment are crucial. The package addresses the challenges of managing large databases of MCNP input files, ensuring reliability and traceability through configuration management systems. MCNP-GO provides functionalities such as renumbering, extracting subsets of files, transforming files, and assembling files while managing collisions and materials. It also keeps track of the operations performed on files, enhancing traceability and ease of modification. The article demonstrates the package's capabilities through a practical example of assembling an MCNP input file for a tomographic experiment, highlighting its efficiency and user-friendliness. MCNP-GO is designed for users with minimal Python knowledge. |
2025-07-07 | MolFORM: Multi-modal Flow Matching for Structure-Based Drug Design | Jie Huang, Daiheng Zhang et.al. | 2507.05503 | | Accepted to ICML 2025 genbio workshop | Abstract (click to expand)Structure-based drug design (SBDD) seeks to generate molecules that bind effectively to protein targets by leveraging their 3D structural information. While diffusion-based generative models have become the predominant approach for SBDD, alternative non-autoregressive frameworks remain relatively underexplored. In this work, we introduce MolFORM, a novel generative framework that jointly models discrete (atom types) and continuous (3D coordinates) molecular modalities using multi-flow matching. To further enhance generation quality, we incorporate a preference-guided fine-tuning stage based on *Direct Preference Optimization* (DPO), using the Vina score as a reward signal. We propose a multi-modal flow DPO co-modeling strategy that simultaneously aligns discrete and continuous modalities, leading to consistent improvements across multiple evaluation metrics. |
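For the continuous (coordinate) modality, the flow-matching objective has a compact form: sample a point on a linear noise-to-data path and regress the constant target velocity. The sketch below assumes a rectified-flow (linear) path and omits the discrete atom-type flow and the DPO stage; the toy network and shapes are placeholders.

```python
import torch

def flow_matching_loss(vnet, x1, cond):
    """Flow-matching loss on 3D coordinates with a linear path
    x_t = (1 - t) x0 + t x1, x0 ~ N(0, I); the network regresses the
    constant target velocity x1 - x0."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1, 1)              # one time per molecule
    xt = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    v_pred = vnet(xt, t.squeeze(-1).squeeze(-1), cond)
    return torch.mean((v_pred - v_target) ** 2)

class ToyVNet(torch.nn.Module):
    """Placeholder velocity network; the paper uses an SE(3)-aware model."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(3 + 1 + 16, 3)
    def forward(self, x, t, cond):
        b, n, _ = x.shape
        feats = torch.cat([x, t.view(b, 1, 1).expand(b, n, 1),
                           cond.unsqueeze(1).expand(b, n, 16)], dim=-1)
        return self.net(feats)

# Toy usage: 8 molecules, 20 atoms, 3D coords, 16-dim pocket context.
loss = flow_matching_loss(ToyVNet(), torch.randn(8, 20, 3), torch.randn(8, 16))
```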
2025-07-07 | From Autonomy to Agency: Agentic Vehicles for Human-Centered Mobility Systems | Jiangbo Yu et.al. | 2507.04996 | | | Abstract (click to expand)Autonomy, from the Greek autos (self) and nomos (law), refers to the capacity to operate according to internal rules without external control. Accordingly, autonomous vehicles (AuVs) are defined as systems capable of perceiving their environment and executing preprogrammed tasks independently of external input. However, both research and real-world deployments increasingly showcase vehicles that demonstrate behaviors beyond this definition (including the SAE levels 0 to 5), such as interaction with humans and machines, goal adaptation, contextual reasoning, external tool use, and long-term planning, particularly with the integration of large language models (LLMs) and agentic AI systems. These developments reveal a conceptual gap between technical autonomy and the broader cognitive and social capabilities needed for future human-centered mobility systems. To address this, we introduce the concept of agentic vehicles (AgVs), referring to vehicles that integrate agentic AI to reason, adapt, and interact within complex environments. This paper presents a systems-level framework to characterize AgVs, focusing on their cognitive and communicative layers and differentiating them from conventional AuVs. It synthesizes relevant advances in agentic AI, robotics, multi-agent systems, and human-machine interaction, and highlights how agentic AI, through high-level reasoning and tool use, can function not merely as a computational tool but as an interactive agent embedded in mobility ecosystems. The paper concludes by identifying key challenges in the development and governance of AgVs, including safety, real-time control, public acceptance, ethical alignment, and regulatory frameworks. |
2025-07-07 | Operator-based machine learning framework for generalizable prediction of unsteady treatment dynamics in stormwater infrastructure | Mohamed Shatarah, Kai Liu, Haochen Li et.al. | 2507.04682 | | 9 figures | Abstract (click to expand)Stormwater infrastructures are decentralized urban water-management systems that face highly unsteady hydraulic and pollutant loadings from episodic rainfall-runoff events. Accurately evaluating their in-situ treatment performance is essential for cost-effective design and planning. Traditional lumped dynamic models (e.g., continuously stirred tank reactor, CSTR) are computationally efficient but oversimplify transport and reaction processes, limiting predictive accuracy and insight. Computational fluid dynamics (CFD) resolves detailed turbulent transport and pollutant fate physics but incurs prohibitive computational cost for unsteady and long-term simulations. To address these limitations, this study develops a composite operator-based neural network (CPNN) framework that leverages state-of-the-art operator learning to predict the spatial and temporal dynamics of hydraulics and particulate matter (PM) in stormwater treatment. The framework is demonstrated on a hydrodynamic separator (HS), a common urban treatment device. Results indicate that the CPNN achieves \(R^2 > 0.8\) for hydraulic predictions in 95.2% of test cases; for PM concentration predictions, \(R^2 > 0.8\) in 72.6% of cases and \(0.4 < R^2 < 0.8\) in 22.6%. The analysis identifies challenges in capturing dynamics under extreme low-flow conditions, owing to their lower contribution to the training loss. Exploiting the automatic-differentiation capability of the CPNN, sensitivity analyses quantify the influence of storm event loading on PM transport. Finally, the potential of the CPNN framework for continuous, long-term evaluation of stormwater infrastructure performance is discussed, marking a step toward robust, climate-aware planning and implementation. |
2025-07-05 | Efficiency through Evolution, A Darwinian Approach to Agent-Based Economic Forecast Modeling | Martin Jaraiz et.al. | 2507.04074 | | 18 pages, 9 figures, presented at the IIOA Conference, Male 2025 | Abstract (click to expand)This paper presents a novel Darwinian Agent-Based Modeling (ABM) methodology for macroeconomic forecasting that leverages evolutionary principles to achieve remarkable computational efficiency and emergent realism. Unlike conventional DSGE and ABM approaches that rely on complex behavioral rules derived from large firm analysis, our framework employs simple "common sense" rules representative of small firms directly serving final consumers. The methodology treats households as the primary drivers of economic dynamics, with firms adapting through market-based natural selection within limited interaction neighborhoods. We demonstrate that this approach, when constrained by Input-Output table structures, generates realistic economic patterns including wealth distributions, firm size distributions, and sectoral employment patterns without extensive parameter calibration. Using FIGARO Input-Output tables for 46 countries and focusing on Austria as a case study, we show that the model reproduces empirical regularities while maintaining computational efficiency on standard laptops rather than requiring supercomputing clusters. Key findings include: (1) emergence of realistic firm and employment distributions from minimal behavioral assumptions, (2) accurate reproduction of the initial Social Accounting Matrix values through evolutionary dynamics, (3) successful calibration using only 5-6 country-specific parameters to complement the FIGARO data, and (4) computational performance enabling full simulations on consumer hardware. These results suggest that evolutionary ABM approaches can provide robust policy insights by capturing decentralized market adaptations while avoiding the computational complexity of traditional DSGE and comprehensive ABM models. |
2025-07-05 | From Query to Explanation: Uni-RAG for Multi-Modal Retrieval-Augmented Learning in STEM | Xinyi Wu, Yanhao Jia, Luwei Xiao et.al. | 2507.03868 | | | Abstract (click to expand)In AI-facilitated teaching, leveraging various query styles to interpret abstract educational content is crucial for delivering effective and accessible learning experiences. However, existing retrieval systems predominantly focus on natural text-image matching and lack the capacity to address the diversity and ambiguity inherent in real-world educational scenarios. To address this limitation, we develop a lightweight and efficient multi-modal retrieval module, named Uni-Retrieval, which extracts query-style prototypes and dynamically matches them with tokens from a continually updated Prompt Bank. This Prompt Bank encodes and stores domain-specific knowledge by leveraging a Mixture-of-Expert Low-Rank Adaptation (MoE-LoRA) module and can be adapted to enhance Uni-Retrieval's capability to accommodate unseen query types at test time. To enable natural language educational content generation, we integrate the original Uni-Retrieval with a compact instruction-tuned language model, forming a complete retrieval-augmented generation pipeline named Uni-RAG. Given a style-conditioned query, Uni-RAG first retrieves relevant educational materials and then generates human-readable explanations, feedback, or instructional content aligned with the learning objective. Experimental results on SER and other multi-modal benchmarks show that Uni-RAG outperforms baseline retrieval and RAG systems in both retrieval accuracy and generation quality, while maintaining low computational cost. Our framework provides a scalable, pedagogically grounded solution for intelligent educational systems, bridging retrieval and generation to support personalized, explainable, and efficient learning assistance across diverse STEM scenarios. |
2025-07-04 | Willchain: Decentralized, Privacy-Preserving, Self-Executing, Digital Wills | Jovonni L. Pharr et.al. | 2507.03694 | | | Abstract (click to expand)This work presents a novel decentralized protocol for digital estate planning that integrates advances in distributed computing and cryptography. The original proof-of-concept was constructed using purely Solidity contracts. Since then, we have enhanced the implementation into a layer-1 protocol that uses modern interchain communication to connect several heterogeneous chain types. A key contribution of this research is the implementation of several modern cryptographic primitives to support various forms of claims for information validation. These primitives introduce an unmatched level of privacy to the process of digital inheritance. We also demonstrate a set of heterogeneous smart contracts, following the same spec, deployed on each chain to serve as entry points, gateways, or bridge contracts, which are invoked via a path from the will module on our protocol to the contract. This ensures a fair and secure distribution of digital assets in accordance with the wishes of the decedent, without requiring their funds to be moved. This research further extends its innovations with a user interaction model, featuring a check-in system and account abstraction process, which enhances flexibility and user-friendliness without compromising on security. By developing a dedicated permissionless blockchain that is secured by a network of validators and interchain relayers, the proposed protocol signifies a transformation in the digital estate planning industry and illustrates the potential of blockchain technology in revolutionizing traditional legal and personal spheres. Implementing a cryptoeconomic network at the core of inheritance planning allows unique incentive-compatible economic mechanisms to be constructed. |
2025-07-04 | Noise-robust multi-fidelity surrogate modelling for parametric partial differential equations | Benjamin M. Kent, Lorenzo Tamellini, Matteo Giacomini et.al. | 2507.03691 | | 35 pages, 31 figures | Abstract (click to expand)We address the challenge of constructing noise-robust surrogate models for quantities of interest (QoIs) arising from parametric partial differential equations (PDEs), using multi-fidelity collocation techniques; specifically, the Multi-Index Stochastic Collocation (MISC). In practical scenarios, the PDE evaluations used to build a response surface are often corrupted by numerical noise, especially for the low-fidelity models. This noise, which may originate from loose solver tolerances, coarse discretisations, or transient effects, can lead to overfitting in MISC, degrading surrogate quality through nonphysical oscillations and loss of convergence, thereby limiting its utility in downstream tasks like uncertainty quantification, optimisation, and control. To correct this behaviour, we propose an improved version of MISC that can automatically detect the presence of solver noise during the surrogate model construction and then ignore the exhausted fidelities. Our approach monitors the spectral decay of the surrogate at each iteration, identifying stagnation in the coefficient spectrum that signals the onset of noise. Once detected, the algorithm selectively halts the use of noisy fidelities, focusing computational resources on those fidelities that still provide meaningful information. The effectiveness of this approach is numerically validated on two challenging test cases: a parabolic advection--diffusion PDE with uncertain coefficients, and a parametric turbulent incompressible Navier--Stokes problem. The results showcase the accuracy and robustness of the resulting multi-fidelity surrogate and its capability to extract relevant information, even from under-resolved meshes not suitable for reliable single-fidelity computations. |
2025-07-04 | Model-Based Control for Power-to-X Platforms: Knowledge Integration for Digital Twins | Daniel Dittler, Peter Frank, Gary Hildebrandt et.al. | 2507.03553 | | | Abstract (click to expand)Offshore Power-to-X platforms enable flexible conversion of renewable energy, but place high demands on adaptive process control due to volatile operating conditions. To address this challenge, the use of Digital Twins in Power-to-X platforms is a promising approach. Comprehensive knowledge integration in Digital Twins requires the combination of heterogeneous models and a structured representation of model information. The proposed approach uses a standardized description of behavior models, semantic technologies, and a graph-based model understanding to enable automatic adaptation and selection of suitable models. It is implemented using a graph-based knowledge representation with Neo4j, automatic data extraction from Asset Administration Shells, and port matching to ensure compatible model configurations. |
2025-07-04 | A Concept for Autonomous Problem-Solving in Intralogistics Scenarios | Johannes Sigel, Daniel Dittler, Nasser Jazdi et.al. | 2507.03534 | | | Abstract (click to expand)Achieving greater autonomy in automation systems is crucial for handling unforeseen situations effectively. However, this remains challenging due to technological limitations and the complexity of real-world environments. This paper examines the need for increased autonomy, defines the problem, and outlines key enabling technologies. A structured concept is proposed, consisting of three main steps: context enrichment, situation analysis, and generation of solution strategies. By following this approach, automation systems can make more independent decisions, reducing the need for human intervention. Additionally, possible realizations of the concept are discussed, especially the use of Large Language Models. While certain tasks may still require human assistance, the proposed approach significantly enhances the autonomy of automation systems, enabling more adaptive and intelligent problem-solving capabilities. |
2025-07-04 | ElliottAgents: A Natural Language-Driven Multi-Agent System for Stock Market Analysis and Prediction | Jarosław A. Chudziak, Michał Wawer et.al. | 2507.03435 | | 10 pages, 8 figures, 1 table. This is the accepted version of the paper presented at the 38th Pacific Asia Conference on Language, Information and Computation, Tokyo, Japan | Abstract (click to expand)This paper presents ElliottAgents, a multi-agent system leveraging natural language processing (NLP) and large language models (LLMs) to analyze complex stock market data. The system combines AI-driven analysis with the Elliott Wave Principle to generate human-comprehensible predictions and explanations. A key feature is the natural language dialogue between agents, enabling collaborative analysis refinement. The LLM-enhanced architecture facilitates advanced language understanding, reasoning, and autonomous decision-making. Experiments demonstrate the system's effectiveness in pattern recognition and generating natural language descriptions of market trends. ElliottAgents contributes to NLP applications in specialized domains, showcasing how AI-driven dialogue systems can enhance collaborative analysis in data-intensive fields. This research bridges the gap between complex financial data and human understanding, addressing the need for interpretable and adaptive prediction systems in finance. |
2025-07-04 | Real-time prediction of plasma instabilities with sparse-grid-accelerated optimized dynamic mode decomposition | Kevin Gill, Ionut-Gabriel Farcas, Silke Glas et.al. | 2507.03245 | | 28 pages, 14 figures, 8 tables | Abstract (click to expand)Parametric data-driven reduced-order models (ROMs) that embed dependencies in a large number of input parameters are crucial for enabling many-query tasks in large-scale problems. These tasks, including design optimization, control, and uncertainty quantification, are essential for developing digital twins in real-world applications. However, standard training data generation methods are computationally prohibitive due to the curse of dimensionality, as their cost scales exponentially with the number of inputs. This paper investigates efficient training of parametric data-driven ROMs using sparse grid interpolation with (L)-Leja points, specifically targeting scenarios with higher-dimensional input parameter spaces. (L)-Leja points are nested and exhibit slow growth, resulting in sparse grids with low cardinality in low-to-medium dimensional settings, making them ideal for large-scale, computationally expensive problems. Focusing on gyrokinetic simulations of plasma micro-instabilities in fusion experiments as a representative real-world application, we construct parametric ROMs for the full 5D gyrokinetic distribution function via optimized dynamic mode decomposition (optDMD) and sparse grids based on (L)-Leja points. We perform detailed experiments in two scenarios: First, the Cyclone Base Case benchmark assesses optDMD ROM prediction capabilities beyond training time horizons and across variations in the binormal wave number. Second, for a real-world electron temperature gradient driven micro-instability simulation featuring six input parameters, we demonstrate that an accurate parametric optDMD ROM can be constructed at a cost of only \(28\) high-fidelity gyrokinetic simulations thanks to sparse grids. In the broader context of fusion research, these results demonstrate the potential of sparse grid-based parametric ROMs to enable otherwise intractable many-query tasks. |
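Plain exact DMD, the starting point for the optimized DMD used in this paper, fits a best rank-\(r\) linear operator between successive snapshots; its eigenvalues then extrapolate beyond the training horizon. Everything below is a toy linear system, not gyrokinetic data, and the sparse-grid parameter interpolation is omitted.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit Y ≈ A X with a rank-r operator; return its eigenpairs.
    The paper uses *optimized* DMD (a nonlinear least-squares variant) trained
    at sparse-grid parameter points; this sketch shows only plain exact DMD."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
    Atilde = Ur.conj().T @ Y @ Vr / sr            # reduced linear operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vr / sr @ W                       # exact DMD modes
    return eigvals, modes

# Toy snapshot sequence x_{k+1} = A x_k from a hidden stable linear system.
rng = np.random.default_rng(1)
A_true = 0.95 * np.linalg.qr(rng.standard_normal((10, 10)))[0]
snaps = [rng.standard_normal(10)]
for _ in range(50):
    snaps.append(A_true @ snaps[-1])
Z = np.stack(snaps, axis=1)

eigvals, modes = dmd(Z[:, :-1], Z[:, 1:], r=6)
amps = np.linalg.lstsq(modes, Z[:, 0], rcond=None)[0]
x60 = np.real((modes * eigvals**60) @ amps)       # prediction beyond training
```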
2025-07-03 | LANTERN: A Machine Learning Framework for Lipid Nanoparticle Transfection Efficiency Prediction | Asal Mehradfar, Mohammad Shahab Sepehri, Jose Miguel Hernandez-Lobato et.al. | 2507.03209 | | | Abstract (click to expand)The discovery of new ionizable lipids for efficient lipid nanoparticle (LNP)-mediated RNA delivery remains a critical bottleneck for RNA-based therapeutics development. Recent advances have highlighted the potential of machine learning (ML) to predict transfection efficiency from molecular structure, enabling high-throughput virtual screening and accelerating lead identification. However, existing approaches are hindered by inadequate data quality, ineffective feature representations, low predictive accuracy, and poor generalizability. Here, we present LANTERN (Lipid nANoparticle Transfection Efficiency pRedictioN), a robust ML framework for predicting transfection efficiency based on ionizable lipid representation. We benchmarked a diverse set of ML models against AGILE, a previously published model developed for transfection prediction. Our results show that combining simpler models with chemically informative features, particularly count-based Morgan fingerprints, outperforms more complex models that rely on internally learned embeddings, such as AGILE. We also show that a multi-layer perceptron trained on a combination of Morgan fingerprints and Expert descriptors achieved the highest performance ( \(\text{R}^2\) = 0.8161, r = 0.9053), significantly exceeding AGILE (\(\text{R}^2\) = 0.2655, r = 0.5488). We show that the models in LANTERN consistently have strong performance across multiple evaluation metrics. Thus, LANTERN offers a robust benchmarking framework for LNP transfection prediction and serves as a valuable tool for accelerating lipid-based RNA delivery systems design. |
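A minimal version of the winning feature recipe: count-based (hashed) Morgan fingerprints from RDKit feeding a scikit-learn MLP. The SMILES strings, labels, and network sizes are illustrative stand-ins, and the paper's expert descriptors are omitted.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPRegressor

def count_morgan(smiles, radius=2, n_bits=2048):
    """Count-based (hashed) Morgan fingerprint as a dense vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetHashedMorganFingerprint(mol, radius, nBits=n_bits)
    vec = np.zeros(n_bits)
    for idx, cnt in fp.GetNonzeroElements().items():
        vec[idx] = cnt
    return vec

# Toy ionizable-lipid-like SMILES and transfection labels (illustrative only).
smiles = ["CCCCCCCCCC(=O)OCCN(C)CCO",
          "CCCCCCCC/C=C\\CCCCCCCC(=O)OCCN(C)C"]
y = [0.7, 0.4]
X = np.stack([count_morgan(s) for s in smiles])

# Toy-sized fit; a real model needs the full dataset and expert descriptors.
model = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=500).fit(X, y)
```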
2025-07-03 | Quantifying Cross-Attention Interaction in Transformers for Interpreting TCR-pMHC Binding | Jiarui Li, Zixiang Yin, Haley Smith et.al. | 2507.03197 | | | Abstract (click to expand)CD8+ "killer" T cells and CD4+ "helper" T cells play a central role in the adaptive immune system by recognizing antigens presented by Major Histocompatibility Complex (pMHC) molecules via T Cell Receptors (TCRs). Modeling binding between T cells and the pMHC complex is fundamental to understanding basic mechanisms of human immune response as well as in developing therapies. While transformer-based models such as TULIP have achieved impressive performance in this domain, their black-box nature precludes interpretability and thus limits a deeper mechanistic understanding of T cell response. Most existing post-hoc explainable AI (XAI) methods are confined to encoder-only, co-attention, or model-specific architectures and cannot handle encoder-decoder transformers used in TCR-pMHC modeling. To address this gap, we propose Quantifying Cross-Attention Interaction (QCAI), a new post-hoc method designed to interpret the cross-attention mechanisms in transformer decoders. Quantitative evaluation is a challenge for XAI methods; we have compiled TCR-XAI, a benchmark consisting of 274 experimentally determined TCR-pMHC structures to serve as ground truth for binding. Using these structures we compute physical distances between relevant amino acid residues in the TCR-pMHC interaction region and evaluate how well our method and others estimate the importance of residues in this region across the dataset. We show that QCAI achieves state-of-the-art performance on both interpretability and prediction accuracy under the TCR-XAI benchmark. |
2025-07-03 | Mechanics Simulation with Implicit Neural Representations of Complex Geometries | Samundra Karki, Ming-Chen Hsu, Adarsh Krishnamurthy et.al. | 2507.03087 | | arXiv admin note: substantial text overlap with arXiv:2503.08724 | Abstract (click to expand)Implicit Neural Representations (INRs), characterized by neural network-encoded signed distance fields, provide a powerful means to represent complex geometries continuously and efficiently. While successful in computer vision and generative modeling, integrating INRs into computational analysis workflows, such as finite element simulations, remains underdeveloped. In this work, we propose a computational framework that seamlessly combines INRs with the Shifted Boundary Method (SBM) for high-fidelity linear elasticity simulations without explicit geometry transformations. By directly querying the neural implicit geometry, we obtain the surrogate boundaries and distance vectors essential for SBM, effectively eliminating the meshing step. We demonstrate the efficacy and robustness of our approach through elasticity simulations on complex geometries (Stanford Bunny, Eiffel Tower, gyroids) sourced from triangle soups and point clouds. Our method showcases significant computational advantages and accuracy, underscoring its potential in biomedical, geophysical, and advanced manufacturing applications. |
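The coupling this abstract describes reduces to two queries against the trained network: the signed distance at a surrogate-boundary node, and the distance vector obtained from the SDF gradient. A sketch with an untrained stand-in MLP (the trained INR and the SBM solver itself are assumed):

```python
import torch

# A tiny MLP stands in for a trained INR encoding a signed distance field.
sdf = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(),
                          torch.nn.Linear(64, 1))

def boundary_query(points):
    """Query the distance d and the distance vector to the true boundary,
    the two ingredients the Shifted Boundary Method needs at surrogate
    nodes: the vector is -d * grad(d), since grad(d) approximates the unit
    outward normal for a well-trained SDF."""
    pts = points.clone().requires_grad_(True)
    d = sdf(pts)
    (grad,) = torch.autograd.grad(d.sum(), pts)
    normal = grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return d.detach(), (-d.detach()) * normal.detach()

surrogate_nodes = torch.randn(100, 3)   # toy surrogate-boundary nodes
dist, dist_vec = boundary_query(surrogate_nodes)
```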
2025-07-03 | Constraint-Guided Symbolic Regression for Data-Efficient Kinetic Model Discovery | Miguel Ángel de Carvalho Servia, Ilya Orson Sandoval, King Kuok et.al. | 2507.02730 | | 27 pages, 8 figures | Abstract (click to expand)The industrialization of catalytic processes hinges on the availability of reliable kinetic models for design, optimization, and control. Traditional mechanistic models demand extensive domain expertise, while many data-driven approaches often lack interpretability and fail to enforce physical consistency. To overcome these limitations, we propose the Physics-Informed Automated Discovery of Kinetics (PI-ADoK) framework. By integrating physical constraints directly into a symbolic regression approach, PI-ADoK narrows the search space and substantially reduces the number of experiments required for model convergence. Additionally, the framework incorporates a robust uncertainty quantification strategy via the Metropolis-Hastings algorithm, which propagates parameter uncertainty to yield credible prediction intervals. Benchmarking our method against conventional approaches across several catalytic case studies demonstrates that PI-ADoK not only enhances model fidelity but also lowers the experimental burden, highlighting its potential for efficient and reliable kinetic model discovery in chemical reaction engineering. |
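The uncertainty-quantification step can be sketched with a textbook random-walk Metropolis-Hastings sampler. The first-order rate law, flat positivity priors, and Gaussian noise model below are toy assumptions, not the symbolic models PI-ADoK discovers.

```python
import numpy as np

def log_posterior(theta, t, y, sigma=0.05):
    """Toy first-order kinetics y = c0 * exp(-k t); theta = (c0, k)."""
    c0, k = theta
    if c0 <= 0 or k <= 0:        # flat prior restricted to positive parameters
        return -np.inf
    pred = c0 * np.exp(-k * t)
    return -0.5 * np.sum((y - pred) ** 2) / sigma**2

def metropolis_hastings(t, y, theta0, n_steps=20000, step=0.02):
    rng = np.random.default_rng(0)
    chain, theta = [], np.asarray(theta0, float)
    lp = log_posterior(theta, t, y)
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_posterior(prop, t, y)
        if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

t = np.linspace(0, 5, 20)
y = 1.0 * np.exp(-0.8 * t) + 0.05 * np.random.default_rng(1).standard_normal(20)
chain = metropolis_hastings(t, y, theta0=[0.5, 0.5])
lo, hi = np.percentile(chain[5000:], [2.5, 97.5], axis=0)  # credible intervals
```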
2025-07-03 | Imitation and Heterogeneity Shape the Resilience of Community Currency Networks | Camilla Ancona, Dora Ricci, Carmela Bernardo et.al. | 2507.02678 | | | Abstract (click to expand)Community currency networks are made up of individuals and/or companies that share some physical or social characteristics and engage in economic transactions using a virtual currency. This paper investigates the structural and dynamic properties of such mutual credit systems through a case study of Sardex, a community currency initiated and mainly operating in Sardinia, Italy. The transaction network is modeled as a directed weighted graph and analyzed through a graph-theoretic framework focused on the analysis of strongly connected components, condensed representations, and behavioral connectivity patterns. Emphasis is placed on understanding the evolution of the network's core and peripheral structures over a three-year period, with attention to temporal contraction, flow asymmetries, and structural fragmentation depending on different user types. Our findings reveal persistent deviations from degree-based null models and suggest the presence of behavioral imitation, specifically, a user preference for more active peers. We further assess the impact of heterogeneous connections between different types of users, which strengthen the network topology and enhance its resilience. |
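The graph-theoretic core of this analysis, strongly connected components and the condensation of a directed weighted transaction graph, is readily reproduced with networkx; the toy network below is an assumption, not Sardex data.

```python
import networkx as nx

# Toy directed, weighted transaction graph (nodes are network members).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("a", "b", 120.0), ("b", "c", 40.0), ("c", "a", 75.0),  # mutual-credit core
    ("c", "d", 10.0),  ("e", "a", 5.0),                     # peripheral flows
])

# Strongly connected components: maximal sets where credit can circulate back.
sccs = list(nx.strongly_connected_components(G))

# Condensation: one node per SCC, exposing the core/periphery flow skeleton.
C = nx.condensation(G)
core = max(sccs, key=len)
print(sccs, "core:", core)
```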
2025-07-03 | Time Resolution Independent Operator Learning | Diab W. Abueidda, Mbebo Nonna, Panos Pantidis et.al. | 2507.02524 | | | Abstract (click to expand)Accurately learning solution operators for time-dependent partial differential equations (PDEs) from sparse and irregular data remains a challenging task. Recurrent DeepONet extensions inherit the discrete-time limitations of sequence-to-sequence (seq2seq) RNN architectures, while neural-ODE surrogates cannot incorporate new inputs after initialization. We introduce NCDE-DeepONet, a continuous-time operator network that embeds a Neural Controlled Differential Equation (NCDE) in the branch and augments the trunk with explicit space-time coordinates. The NCDE encodes an entire load history as the solution of a controlled ODE driven by a spline-interpolated input path, making the representation input-resolution-independent: it encodes different input signal discretizations of the observed samples. The trunk then probes this latent path at arbitrary spatial locations and times, rendering the overall map output-resolution independent: predictions can be queried on meshes and time steps unseen during training without retraining or interpolation. Benchmarks on transient Poisson, elastodynamic, and thermoelastic problems confirm the robustness and accuracy of the framework, achieving almost instant solution prediction. These findings suggest that controlled dynamics provide a principled and efficient foundation for high-fidelity operator learning in transient mechanics. |
2025-07-03 | Continual Gradient Low-Rank Projection Fine-Tuning for LLMs | Chenxu Wang, Yilin Lyu, Zicheng Sun et.al. | 2507.02503 | | 15 pages, 6 figures, accepted by ACL 2025 main | Abstract (click to expand)Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness. Low-Rank Adaptation (LoRA) offers efficiency but constrains the model's ability to learn new tasks and transfer knowledge due to its low-rank nature and reliance on explicit parameter constraints. We propose GORP (Gradient LOw Rank Projection) for Continual Learning, a novel training strategy that overcomes these limitations by synergistically combining full and low-rank parameters and jointly updating within a unified low-rank gradient subspace. GORP expands the optimization space while preserving efficiency and mitigating catastrophic forgetting. Extensive experiments on continual learning benchmarks demonstrate GORP's superior performance compared to existing state-of-the-art approaches. Code is available at https://github.com/Wcxwcxw/GORP. |
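The shared low-rank gradient subspace can be illustrated with a GaLore-style projected update, shown below on a single weight matrix. This is only the gradient-projection ingredient, under toy assumptions; GORP's joint full/low-rank parameter update and adaptive statistics are omitted.

```python
import torch

def low_rank_step(weight, state, lr=1e-3, rank=4):
    """One update with the gradient projected onto a rank-r subspace.
    Sketches only the low-rank *gradient* subspace idea, not GORP itself."""
    g = weight.grad
    if "P" not in state:                       # in practice, refresh periodically
        U, _, _ = torch.linalg.svd(g, full_matrices=False)
        state["P"] = U[:, :rank]               # column basis of the gradient
    P = state["P"]
    g_low = P.T @ g                            # r x n compressed gradient
    # (an adaptive rule such as Adam would act on g_low here)
    weight.data -= lr * (P @ g_low)            # lift back and update

W = torch.nn.Parameter(torch.randn(64, 64))
state = {}
loss = (W @ torch.randn(64, 8)).pow(2).mean()
loss.backward()
low_rank_step(W, state)
```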
2025-07-03 | Modeling the Effective Elastic Modulus and Thickness of Corrugated Boards Using Gaussian Process Regression and Expected Hypervolume Improvement | Ricardo Fitas et.al. | 2507.02208 | | This is the submitted version of the manuscript entitled "Modeling the Effective Elastic Modulus and Thickness of Corrugated Boards Using Gaussian Process Regression and Expected Hypervolume Improvement." The final version is published in "Lecture Notes in Civil Engineering" (Springer Nature) as part of the OPTARCH 2024 proceedings | Abstract (click to expand)This work aims to model the hypersurface of the effective elastic modulus, \( E_{z, \text{eff}} \), and thickness, \( th_{\text{eff}} \), in corrugated boards. Latin Hypercube Sampling (LHS) is followed by Gaussian Process Regression (GP), enhanced by Expected Hypervolume Improvement (EHVI) as a multi-objective acquisition function. Accurate modeling of \( E_{z, \text{eff}} \) and \( th_{\text{eff}} \) is critical for optimizing the mechanical properties of corrugated materials in engineering applications. LHS provides an efficient and straightforward approach for an initial sampling of the input space; GP is expected to adapt to the complexity of the response surfaces by incorporating both prediction and uncertainty. The next points to be generated and evaluated are therefore chosen based on the complexity of the hypersurfaces, and some points, especially those with higher variance, are exploited more and carry greater importance. The performance of GP with EHVI is measured by the Mean Squared Error (MSE). The GP predictions resulted in \( \text{MSE}(E_{z, \text{eff}}) = 5.24 \, \text{kPa}^2 \) and \( \text{MSE}(th_{\text{eff}}) = 1 \, \text{mm}^2 \). GP thus offers improved accuracy and adaptability for future applications in structural optimization. |
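A minimal LHS-plus-GP loop with SciPy and scikit-learn illustrates the surrogate side of the pipeline; the design variables and response are toy stand-ins, and the EHVI acquisition itself (available in multi-objective Bayesian optimization libraries such as BoTorch) is omitted.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Latin Hypercube sample of a 2D design space; the paper's actual design
# variables (flute geometry, etc.) are not listed here, so these are toys.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(20), [0.0, 0.0], [1.0, 1.0])
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2          # toy response surface

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Prediction with uncertainty: the predictive variance is what an acquisition
# function such as EHVI exploits to pick the next design point to evaluate.
Xq = qmc.scale(sampler.random(5), [0.0, 0.0], [1.0, 1.0])
mean, std = gp.predict(Xq, return_std=True)
```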
2025-07-02 | A Multi-Scale Finite Element Method for Investigating Fiber Remodeling in Hypertrophic Cardiomyopathy | Mohammad Mehri, Kenneth S. Campbell, Lik Chuan Lee et.al. | 2507.02193 | | | Abstract (click to expand)A significant hallmark of hypertrophic cardiomyopathy (HCM) is fiber disarray, which is associated with various cardiac events such as heart failure. Quantifying fiber disarray remains critical for understanding the disease's complex pathophysiology. This study investigates the role of heterogeneous HCM-induced cellular abnormalities in the development of fiber disarray and their subsequent impact on cardiac pumping function. Fiber disarray is predicted using a stress-based law to reorient myofibers and collagen within a multiscale finite element cardiac modeling framework, MyoFE. Specifically, the model is used to quantify the distinct impacts of heterogeneous distributions of hypercontractility, hypocontractility, and fibrosis on fiber disarray development and to examine their effect on functional characteristics of the heart. Our results show that heterogeneous cell-level abnormalities highly disrupt the normal mechanics of the myocardium and lead to significant fiber disarray. The pattern of disarray varies depending on the specific perturbation, offering valuable insights into the progression of HCM. Despite the random distribution of perturbed regions within the cardiac muscle, significantly higher fiber disarray is observed near the epicardium compared to the endocardium across all perturbed left ventricle (LV) models. This regional difference in fiber disarray, irrespective of perturbation severity, aligns with previous DT-MRI studies, highlighting the role of regional myocardial mechanics in the development of fiber disarray. Furthermore, cardiac performance declined in the remodeled LVs, particularly in those with fibrosis and hypocontractility. These findings provide important insights into the structural and functional consequences of HCM and offer a framework for future investigations into therapeutic interventions targeting cardiac remodeling. |
2025-07-02 | On the Design of Corrugated Boards: A New FEM Modeling and Experimental Validation | Ricardo Fitas, Heinz Joachim Schaffrath, Samuel Schabel et.al. | 2507.02189 | | | Abstract (click to expand)This study presents a simplified FEM modeling approach suitable for large structures made of corrugated boards, such as customized packages, based on a homogenization method combined with correction factors for internal mechanisms. The homogenization process reduces computational time by transforming flute geometries into equivalent elastic models. In large deformations and in the presence of contact for a given geometry, the effective elastic modulus in the thickness direction, as well as the effective thickness of the structure, are corrected by two statistical Weibull distributions representing the contact and buckling mechanisms in a corrugated board. The Weibull parameters are obtained via experimental analysis, and the process is then validated. The results demonstrate that the statistical parameters ( \(\beta_1 = 0.14\), \(\beta_2 = 1.31\) ) can be used for a simple, computationally efficient representation of corrugated boards. This research contributes to the optimization of corrugated packaging design, specifically by simplifying FEM models for faster yet equally accurate simulations. |
2025-07-01 | Discovery of Fatigue Strength Models via Feature Engineering and automated eXplainable Machine Learning applied to the welded Transverse Stiffener | Michael A. Kraus, Helen Bartsch et.al. | 2507.02005 | | | Abstract (click to expand)This research introduces a unified approach combining Automated Machine Learning (AutoML) with Explainable Artificial Intelligence (XAI) to predict fatigue strength in welded transverse stiffener details. It integrates expert-driven feature engineering with algorithmic feature creation to enhance accuracy and explainability. Based on an extensive fatigue test database, regression models - gradient boosting, random forests, and neural networks - were trained using AutoML under three feature schemes: domain-informed, algorithmic, and combined. This allowed a systematic comparison of expert-based versus automated feature selection. Ensemble methods (e.g. CatBoost, LightGBM) delivered top performance. The domain-informed model \(\mathcal M_2\) achieved the best balance: test RMSE \(\approx\) 30.6 MPa and \(R^2 \approx 0.780\) over the full \(\Delta \sigma_{c,50\%}\) range, and RMSE \(\approx\) 13.4 MPa and \(R^2 \approx 0.527\) within the engineering-relevant 0 - 150 MPa domain. The denser-feature model (\(\mathcal M_3\)) showed minor gains during training but poorer generalization, while the simpler base-feature model (\(\mathcal M_1\)) performed comparably, confirming the robustness of minimalist designs. XAI methods (SHAP and feature importance) identified stress ratio \(R\), stress range \(\Delta \sigma_i\), yield strength \(R_{eH}\), and post-weld treatment (TIG dressing vs. as-welded) as dominant predictors. Secondary geometric factors - plate width, throat thickness, stiffener height - also significantly affected fatigue life. This framework demonstrates that integrating AutoML with XAI yields accurate, interpretable, and robust fatigue strength models for welded steel structures. It bridges data-driven modeling with engineering validation, enabling AI-assisted design and assessment. Future work will explore probabilistic fatigue life modeling and integration into digital twin environments. |
2025-07-02 | A modified Levenberg-Marquardt method for estimating the elastic material parameters of polymer waveguides using residuals between autocorrelated frequency responses | Dominik Itner, Dmitrij Dreiling, Hauke Gravenkamp et.al. | 2507.01706 | | | Abstract (click to expand)In this contribution, we address the estimation of the frequency-dependent elastic parameters of polymers in the ultrasound range, which is formulated as an inverse problem. This inverse problem is implemented as a nonlinear regression-type optimization problem, in which the simulation signals are fitted to the measurement signals. These signals consist of displacement responses in waveguides, focusing on hollow cylindrical geometries to enhance the simulation efficiency. To accelerate the optimization and reduce the number of model evaluations and wait times, we propose two novel methods. First, we introduce an adaptation of the Levenberg-Marquardt method derived from a geometrical interpretation of the least-squares optimization problem. Second, we introduce an improved objective function based on the autocorrelated envelopes of the measurement and simulation signals. Given that this study primarily relies on simulation data to quantify optimization convergence, we aggregate the expected ranges of realistic material parameters and derive their distributions to ensure the reproducibility of optimizations with proper measurements. We demonstrate the effectiveness of our objective function modification and step adaptation for various materials with isotropic material symmetry by comparing them with a state-of-the-art optimization method. In all cases, our method reduces the total number of model evaluations, thereby shortening the time to identify the material parameters. |
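The autocorrelated-envelope objective can be made concrete with standard signal-processing tools. A minimal sketch, assuming the envelope is obtained from the analytic signal and the residual is a plain least-squares difference (the paper's exact normalization and the modified Levenberg-Marquardt step are not reproduced here):

```python
import numpy as np
from scipy.signal import hilbert

def autocorr_envelope(x):
    # Envelope via the analytic signal, then a one-sided, normalized autocorrelation.
    env = np.abs(hilbert(x))
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    return ac / ac[0]

def objective(sim, meas):
    # Residual between autocorrelated envelopes of simulated and measured
    # displacement responses; this is what the optimizer drives toward zero.
    r = autocorr_envelope(sim) - autocorr_envelope(meas)
    return 0.5 * (r @ r)
```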
2025-07-02 | Spatially Distributed Wettability Characterization in Porous Media | Faisal Aljaberi, Hadi Belhaj, Sajjad Foroughi et.al. | 2507.01617 | | | Abstract (click to expand)An enhanced geometric algorithm for automated pore-by-pore contact angle measurement from micro-CT images is presented that achieves superior accuracy compared to existing methods through robust fluid-fluid and solid-fluid interface extrapolation. Using this high-resolution data, we generate spatially distributed contact angle maps that reveal previously hidden wettability heterogeneity. Our analysis of mixed-wet systems demonstrates the severe limitations of averaged metrics: a sample with a mean contact angle of 64.7 degrees, conventionally classified as uniformly weakly water-wet, exhibits 40% of its pore space in the intermediate-wetting regime (70-110 degrees). This heterogeneity explains the presence of minimal surface interfaces and fundamentally different pore-filling mechanisms operating within the same sample. By providing open-source tools for spatially resolved wettability characterization, this work enables more accurate predictions of multiphase flow behavior in heterogeneous porous materials, essential for optimizing subsurface energy storage and recovery processes. |
2025-07-01 | Agentic AI in Product Management: A Co-Evolutionary Model | Nishant A. Parikh et.al. | 2507.01069 | | 41 pages, 2 figures | Abstract (click to expand)This study explores agentic AI's transformative role in product management, proposing a conceptual co-evolutionary framework to guide its integration across the product lifecycle. Agentic AI, characterized by autonomy, goal-driven behavior, and multi-agent collaboration, redefines product managers (PMs) as orchestrators of socio-technical ecosystems. Using systems theory, co-evolutionary theory, and human-AI interaction theory, the framework maps agentic AI capabilities in discovery, scoping, business case development, development, testing, and launch. An integrative review of 70+ sources, including case studies from leading tech firms, highlights PMs' evolving roles in AI orchestration, supervision, and strategic alignment. Findings emphasize mutual adaptation between PMs and AI, requiring skills in AI literacy, governance, and systems thinking. Addressing gaps in traditional frameworks, this study provides a foundation for future research and practical implementation to ensure responsible, effective agentic AI integration in software organizations. |
2025-07-02 | Ensemble Kalman Filter for Data Assimilation coupled with low-resolution computations techniques applied in Fluid Dynamics | Paul Jeanney, Ashton Hetherington, Shady E. Ahmed et.al. | 2507.00539 | | article, 49 pages, 29 figures, 4 tables | Abstract (click to expand)This paper presents an innovative Reduced-Order Model (ROM) for merging experimental and simulation data using Data Assimilation (DA) to estimate the "True" state of a fluid dynamics system, leading to more accurate predictions. Our methodology introduces a novel approach implementing the Ensemble Kalman Filter (EnKF) within a reduced-dimensional framework, grounded in a robust theoretical foundation and applied to fluid dynamics. To address the substantial computational demands of DA, the proposed ROM employs low-resolution (LR) techniques to drastically reduce computational costs. This approach involves downsampling datasets for DA computations, followed by an advanced reconstruction technique based on low-cost Singular Value Decomposition (lcSVD). The lcSVD method, a key innovation in this paper, has never been applied to DA before and offers a highly efficient way to enhance resolution with minimal computational resources. Our results demonstrate significant reductions in both computation time and RAM usage through the LR techniques without compromising the accuracy of the estimations. For instance, in a turbulent test case, the LR approach with a compression rate of 15.9 can achieve a speed-up of 13.7 and a RAM compression of 90.9% while maintaining a low Relative Root Mean Square Error (RRMSE) of 2.6%, compared to 0.8% in the high-resolution (HR) reference. Furthermore, we highlight the effectiveness of the EnKF in estimating and predicting the state of fluid flow systems based on limited observations and low-fidelity numerical data. This paper highlights the potential of the proposed DA method in fluid dynamics applications, particularly for improving computational efficiency in CFD and related fields. Its ability to balance accuracy with low computational and memory costs makes it suitable for large-scale and real-time applications, such as environmental monitoring or aerospace. |
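The heart of the approach — an EnKF analysis step performed in a reduced space obtained from an SVD of (downsampled) snapshots — can be sketched as follows. The snapshot matrix, mode count, ensemble size, and observation setup below are synthetic stand-ins, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((1000, 50))   # stand-in: n_dof x n_t snapshot matrix
r, N = 10, 20                                 # POD modes kept, ensemble size

# Reduction: project snapshots onto the r leading POD modes from an SVD.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]
A = Phi.T @ snapshots[:, -N:]                 # reduced ensemble (r x N)

def enkf_analysis(A, y, H, R, seed=1):
    # Stochastic EnKF analysis step on the (reduced) ensemble A of shape (n, N),
    # with observation y, observation operator H, and error covariance R.
    rng = np.random.default_rng(seed)
    n, N = A.shape
    Am = A - A.mean(axis=1, keepdims=True)
    Pf = Am @ Am.T / (N - 1)                          # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return A + K @ (Yp - H @ A)                       # perturbed-observation update

H = np.eye(3, r)                              # observe 3 reduced coordinates (illustrative)
R = 0.01 * np.eye(3)
y = H @ A[:, 0] + rng.multivariate_normal(np.zeros(3), R)
A_post = enkf_analysis(A, y, H, R)
reconstructed = Phi @ A_post                  # map back to the high-resolution space
```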
2025-06-28 | The gradual transformation of inland countries -- human plowing, horse plowing and equity incentives | Hongfa Zi, Zhen Liu et.al. | 2507.00067 | | 9 pages, 2 figures | Abstract (click to expand)Many modern countries have not learned the lessons of history and instead defer to the wisdom of later generations, leaving them with modern technology but ancient civilizations that are difficult to iterate upon. At present, it is unclear how we should learn from history to promote the gradual upgrading of civilization. We therefore need to recount the history of civilizational progress and its means of governance, learn from that experience to improve a civilization's comprehensive strength and capacity for survival, and seek an optimal balance between the tempering brought by external conflicts and the reduction of internal ones. First, we follow the course of history and examine the reasons for the long-term stability of countries in conflict, including the economic benefits provided to the people and the means used to suppress them; we then use mathematical methods to demonstrate how an optimal solution can be achieved at the current stage. The analysis suggests that a civilization transitioning from human plowing to horse plowing could more easily suppress popular resistance while also giving the people the means to resist; that the selection of rulers should draw on multiple institutional mechanisms, such as examinations, elections, and drawing lots; and that economic development follows a lognormal distribution, adjustable through its expected value and variance. Dividing equity using a lognormal distribution truncated at its maximum value can narrow the wealth gap. |
2025-06-30 | Validation of AI-Based 3D Human Pose Estimation in a Cyber-Physical Environment | Lisa Marie Otto, Michael Kaiser, Daniel Seebacher et.al. | 2506.23739 | | 6 pages, 5 figures, Preprint for 2025 IEEE IAVVC (International Automated Vehicle Validation Conference) | Abstract (click to expand)Ensuring safe and realistic interactions between automated driving systems and vulnerable road users (VRUs) in urban environments requires advanced testing methodologies. This paper presents a test environment that combines a Vehicle-in-the-Loop (ViL) test bench with a motion laboratory, demonstrating the feasibility of cyber-physical (CP) testing of vehicle-pedestrian and vehicle-cyclist interactions. Building upon previous work focused on pedestrian localization, we further validate a human pose estimation (HPE) approach through a comparative analysis of real-world (RW) and virtual representations of VRUs. The study examines the perception of full-body motion using a commercial monocular camera-based 3D skeletal detection AI. The virtual scene is generated in Unreal Engine 5, where VRUs are animated in real time and projected onto a screen to stimulate the camera. The proposed stimulation technique ensures the correct perspective, enabling realistic vehicle perception. To assess the accuracy and consistency of HPE across RW and CP domains, we analyze the reliability of detections as well as variations in movement trajectories and joint estimation stability. The validation includes dynamic test scenarios where human avatars, both walking and cycling, are monitored under controlled conditions. Our results show a strong alignment in HPE between RW and CP test conditions for stable motion patterns, while notable inaccuracies persist under dynamic movements and occlusions, particularly for complex cyclist postures. These findings contribute to refining CP testing approaches for evaluating next-generation AI-based vehicle perception and to enhancing interaction models of automated vehicles and VRUs in CP environments. |
2025-06-30 | Immersive Technologies in Training and Healthcare: From Space Missions to Psychophysiological Research | Barbara Karpowicz, Maciej Grzeszczuk, Adam Kuzdraliński et.al. | 2506.23545 | | 8 pages, 1 figure | Abstract (click to expand)Virtual, Augmented, and eXtended Reality (VR/AR/XR) technologies are increasingly recognized for their applications in training, diagnostics, and psychological research, particularly in high-risk and highly regulated environments. In this panel we discuss how immersive systems enhance human performance across multiple domains, including clinical psychology, space exploration, and medical education. In psychological research and training, XR can offer a controlled yet ecologically valid setting for measuring cognitive and affective processes. In space exploration, we discuss the development of VR-based astronaut training and diagnostic systems, allowing astronauts to perform real-time health assessments. In medical education and rehabilitation, we cover procedural training and patient engagement. From virtual surgical simulations to gamified rehabilitation exercises, immersive environments enhance both learning outcomes and treatment adherence. |
2025-06-29 | Data-Driven Multiscale Topology Optimization of Soft Functionally Graded Materials with Large Deformations | Shiguang Deng, Horacio D. Espinosa, Wei Chen et.al. | 2506.23422 | | | Abstract (click to expand)Functionally Graded Materials (FGMs) made of soft constituents have emerged as promising material-structure systems in potential applications across many engineering disciplines, such as soft robots, actuators, energy harvesting, and tissue engineering. Designing such systems remains challenging due to their multiscale architectures, multiple material phases, and inherent material and geometric nonlinearities. The focus of this paper is to propose a general topology optimization framework that automates the design innovation of multiscale soft FGMs exhibiting nonlinear material behaviors under large deformations. Our proposed topology optimization framework integrates several key innovations: (1) a novel microstructure reconstruction algorithm that generates composite architecture materials from a reduced design space using physically interpretable parameters; (2) a new material homogenization approach that estimates effective properties by combining the stored energy functions of multiple soft constituents; (3) a neural network-based topology optimization that incorporates data-driven material surrogates to enable bottom-up, simultaneous optimization of material and structure; and (4) a generic nonlinear sensitivity analysis technique that computes design sensitivities numerically without requiring explicit gradient derivation. To enhance the convergence of the nonlinear equilibrium equations amid topology optimization, we introduce an energy interpolation scheme and employ a Newton-Raphson solver with adaptive step sizes and convergence criteria. Numerical experiments show that the proposed framework produces distinct topological designs, different from those obtained under linear elasticity, with spatially varying microstructures. |
2025-06-29 | Data-Driven Multiscale Topology Optimization of Spinodoid Architected Materials with Controllable Anisotropy | Shiguang Deng, Doksoo Lee, Aaditya Chandrasekhar et.al. | 2506.23420 | | | Abstract (click to expand)Spinodoid architected materials have drawn significant attention due to their unique nature in stochasticity, aperiodicity, and bi-continuity. Compared to classic periodic truss-, beam- and plate-based lattice architectures, spinodoids are insensitive to manufacturing defects, scalable for high throughput production, functionally graded by tunable local properties, and material failure resistant due to low-curvature morphology. However, the design of spinodoids is often hindered by the curse of dimensionality with extremely large design space of spinodoid types, material density, orientation, continuity, and anisotropy. From a design optimization perspective, while genetic algorithms are often beyond the reach of computing capacity, gradient-based topology optimization is challenged by the intricate mathematical derivation of gradient fields with respect to various spinodoid parameters. To address such challenges, we propose a data-driven multiscale topology optimization framework. Our framework reformulates the design variables of spinodoid materials as the parameters of neural networks, enabling automated computation of topological gradients. Additionally, it incorporates a Gaussian Process surrogate for spinodoid constitutive models, eliminating the need for repeated computational homogenization and enhancing the scalability of multiscale topology optimization. Compared to 'black-box' deep learning approaches, the proposed framework provides clear physical insights into material distribution. It explicitly reveals why anisotropic spinodoids with tailored orientations are favored in certain regions, while isotropic spinodoids are more suitable elsewhere. This interpretability helps to bridge the gap between data-driven design with mechanistic understanding. |
2025-06-28 | Towards a better approach to the Vehicle Routing Problem | Souad Abdoune, Menouar Boulif et.al. | 2506.23028 | | 22 pages, 21 figures | Abstract (click to expand)The Vehicle Routing Problem (VRP) is a fundamental challenge in logistics management research, given its substantial influence on transportation efficiency, cost minimization, and service quality. As a combinatorial optimization problem, VRP plays a crucial role in a wide range of real-world applications, particularly in transportation, logistics, and delivery systems, due to its diverse formulations and numerous extensions. Over the years, researchers have introduced various VRP variants to address specific operational constraints and emerging industry requirements and to optimize specific objectives, making it one of the most extensively studied problems in operations research. This article provides a comprehensive overview of VRP by exploring its theoretical foundations, discussing the limitations of its classical model, and introducing its key extensions. By systematically reviewing the diverse constraints, objectives, and variants examined in recent literature, this study aims to contribute to a deeper understanding of VRP while highlighting its ongoing evolution and relevance in modern optimization and decision-making processes. |
2025-06-28 | SparStencil: Retargeting Sparse Tensor Cores to Scientific Stencil Computations via Structured Sparsity Transformation | Qi Li, Kun Li, Haozhi Han et.al. | 2506.22969 | | Accepted to SC'25 (June 3, 2025). This work was previously submitted to ISCA'25 (Nov 22, 2024) and substantially revised based on feedback | Abstract (click to expand)Sparse Tensor Cores offer exceptional performance gains for AI workloads by exploiting structured 2:4 sparsity. However, their potential remains untapped for core scientific workloads such as stencil computations, which exhibit irregular sparsity patterns. This paper presents SparStencil, the first system to retarget sparse TCUs for scientific stencil computations through structured sparsity transformation. SparStencil introduces three key techniques: (1) Adaptive Layout Morphing, which restructures stencil patterns into staircase-aligned sparse matrices via a flatten-and-crush pipeline; (2) Structured Sparsity Conversion, which formulates transformation as a graph matching problem to ensure compatibility with 2:4 sparsity constraints; (3) Automatic Kernel Generation, which compiles transformed stencils into optimized sparse MMA kernels via layout search and table-driven memory mapping. Evaluated on 79 stencil kernels spanning diverse scientific domains, SparStencil achieves up to 7.1x speedup (3.1x on average) over state-of-the-art frameworks while reducing code complexity and matching or exceeding expert-tuned performance in both compute throughput and memory efficiency. |
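The 2:4 structured sparsity that Sparse Tensor Cores accelerate — at most two nonzeros in every group of four consecutive values — is easy to illustrate with a magnitude-based pruning pass. This shows the generic pattern only; SparStencil's actual contribution is the layout-morphing and graph-matching pipeline summarized above:

```python
import numpy as np

def prune_2_to_4(W):
    # Enforce 2:4 structured sparsity along rows: in every group of 4
    # consecutive values, keep the 2 largest magnitudes and zero the rest.
    assert W.shape[1] % 4 == 0, "column count must be a multiple of 4"
    G = W.reshape(W.shape[0], -1, 4)
    smallest = np.argsort(np.abs(G), axis=-1)[..., :2]  # 2 smallest per group
    mask = np.ones_like(G, dtype=bool)
    np.put_along_axis(mask, smallest, False, axis=-1)
    return (G * mask).reshape(W.shape)

W = np.random.default_rng(0).standard_normal((4, 8))
W_sparse = prune_2_to_4(W)    # now hardware-compatible 2:4 sparse
```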
2025-06-28 | Feasibility of spectral-element modeling of wave propagation through the anatomy of marine mammals | Carlos García A., Vladimiro Boselli, Aida Hejazi Nooghabi et.al. | 2506.22944 | | | Abstract (click to expand)This study introduces the first 3D spectral-element method (SEM) simulation of ultrasonic wave propagation in a bottlenose dolphin (Tursiops truncatus) head. Unlike traditional finite-element methods (FEM), which struggle with high-frequency simulations due to costly linear-system inversions and slower convergence, SEM offers exponential convergence and efficient parallel computation. Using Computed Tomography (CT) scan data, we developed a detailed hexahedral mesh capturing complex anatomical features, such as acoustic fats and jaws. Our simulations of plane and spherical waves confirm SEM's effectiveness for ultrasonic time-domain modeling. This approach opens new avenues for marine biology, contributing to research in echolocation, the impacts of anthropogenic marine noise pollution and the biophysics of hearing and click generation in marine mammals. By overcoming FEM's limitations, SEM provides a powerful scalable tool to test hypotheses about dolphin bioacoustics, with significant implications for conservation and understanding marine mammal auditory systems under increasing environmental challenges. |
2025-06-28 | Improved design of an active landing gear for a passenger aircraft using multi-objective optimization technique | Milad Zarchi, Behrooz Attaran et.al. | 2506.22870 | | 21 pages, 24 Figures, | Abstract (click to expand)The landing gear system is a major aircraft subsystem that must withstand extreme forces during ground maneuvers and absorb vibrations. While traditional systems perform well under normal conditions, their efficiency drops under varying landing and runway scenarios. This study addresses this issue by simultaneously optimizing controller coefficients, parameters of a nonlinear hydraulic actuator integrated into the traditional shock absorber, and a vibration absorber using a bee-inspired multi-objective algorithm. To demonstrate adaptability, the paper includes sensitivity analysis for three-point landings affected by added payload and touchdown speed, and robustness analysis for one- and two-point landings under emergency wind conditions. The dynamic flight equations of an Airbus A320-200 during landing are derived and solved numerically. Results show that the active shock absorber system, optimized via two bee-based algorithms, outperforms the passive system in reducing bounce and pitch displacements and momenta, suspension travel, and impact force in both time and frequency domains. This leads to significantly improved passenger comfort and potentially longer structural fatigue life, demonstrating industrial applicability. |
2025-06-27 | CAD-Integrated Electrostatic Boundary Element Simulations with Non-Conforming Higher-Order Meshes | Benjamin Marussig, Jürgen Zechner, Thomas Rüberg et.al. | 2506.22676 | | | Abstract (click to expand)We present a design through analysis workflow that enables virtual prototyping of electric devices. A CAD plugin establishes the interaction between design and analysis, allowing the preparation of analysis models and the visualization of its results within the design environment. The simulations utilize a fast boundary element method (BEM) that allows for non-conforming and higher-order meshes. Our numerical experiments investigate the accuracy of the approach and its sensitivity to the initial CAD representation. Overall, the workflow enables a close link between design and analysis, where the non-conforming higher-order BEM approach provides accurate results and significantly simplifies the interaction. |
2025-06-26 | Exploring Artificial Intelligence Tutor Teammate Adaptability to Harness Discovery Curiosity and Promote Learning in the Context of Interactive Molecular Dynamics | Mustafa Demir, Jacob Miratsky, Jonathan Nguyen et.al. | 2506.22520 | | | Abstract (click to expand)This study examines the impact of an Artificial Intelligence tutor teammate (AI) on student curiosity-driven engagement and learning effectiveness during Interactive Molecular Dynamics (IMD) tasks on the Visual Molecular Dynamics platform. It explores the role of the AI's curiosity-triggering and response behaviors in stimulating and sustaining student curiosity, affecting the frequency and complexity of student-initiated questions. The study further assesses how AI interventions shape student engagement, foster discovery curiosity, and enhance team performance within the IMD learning environment. Using a Wizard-of-Oz paradigm, a human experimenter dynamically adjusts the AI tutor teammate's behavior through a large language model. By employing a mixed-methods exploratory design, a total of 11 high school students participated in four IMD tasks that involved molecular visualization and calculations, which increased in complexity over a 60-minute period. Team performance was evaluated through real-time observation and recordings, whereas team communication was measured by question complexity and AI's curiosity-triggering and response behaviors. Cross Recurrence Quantification Analysis (CRQA) metrics reflected structural alignment in coordination and were linked to communication behaviors. High-performing teams exhibited superior task completion, deeper understanding, and increased engagement. Advanced questions were associated with AI curiosity-triggering, indicating heightened engagement and cognitive complexity. CRQA metrics highlighted dynamic synchronization in student-AI interactions, emphasizing structured yet adaptive engagement to promote curiosity. These proof-of-concept findings suggest that the AI's dual role as a teammate and educator indicates its capacity to provide adaptive feedback, sustaining engagement and epistemic curiosity. |
2025-06-27 | StructMG: A Fast and Scalable Structured Algebraic Multigrid | Yi Zong, Peinan Yu, Haopeng Huang et.al. | 2506.21932 | | | Abstract (click to expand)Parallel multigrid is widely used as a preconditioner in solving large-scale sparse linear systems. However, current multigrid libraries still fall short of satisfactory performance on structured-grid problems in terms of speed and scalability. Based on the classical 'multigrid seesaw', we derive three necessary principles for an efficient structured multigrid, which guide our design and implementation of StructMG, a fast and scalable algebraic multigrid that constructs hierarchical grids automatically. As a preconditioner, StructMG can achieve both low cost per iteration and good convergence when solving large-scale linear systems with iterative methods in parallel. A stencil-based triple-matrix product via symbolic derivation and code generation is proposed for multi-dimensional Galerkin coarsening to reduce grid complexity, operator complexity, and implementation effort. A unified parallel framework for sparse triangular solvers is presented to achieve fast convergence and high parallel efficiency for smoothers, including dependence-preserving Gauss-Seidel and incomplete LU methods. Idealized and real-world problems from radiation hydrodynamics, petroleum reservoir simulation, numerical weather prediction, and solid mechanics are evaluated on ARM and X86 platforms to show StructMG's effectiveness. In comparison to \textit{hypre}'s structured and general multigrid preconditioners, StructMG achieves the fastest time-to-solutions in all cases with average speedups of 15.5x, 5.5x, 6.7x, 7.3x over SMG, PFMG, SysPFMG, and BoomerAMG, respectively. StructMG also significantly improves strong and weak scaling efficiencies. |
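The "multigrid seesaw" — cheap smoothing on the fine grid balanced against a coarse-grid correction — is easiest to see in a geometric toy version. A minimal 1-D Poisson V-cycle sketch follows; StructMG itself is algebraic and builds its hierarchy via the stencil-based triple-matrix product, which is not reproduced here:

```python
import numpy as np

def weighted_jacobi(u, f, h, iters=3, w=2/3):
    # Smoother for -u'' = f on a uniform grid with fixed boundary values.
    for _ in range(iters):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    # Smooth, restrict the residual, correct on the coarse grid, smooth again.
    u = weighted_jacobi(u, f, h)
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    ec = v_cycle(np.zeros((len(u) + 1) // 2), r[::2].copy(), 2 * h)
    u[::2] += ec                              # coarse correction at coincident nodes
    u[1:-1:2] += 0.5 * (ec[:-1] + ec[1:])     # linear interpolation in between
    return weighted_jacobi(u, f, h)

n = 65                                        # needs 2^k + 1 points
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)              # exact solution: u = sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = v_cycle(u, f, 1.0 / (n - 1))
```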
2025-06-27 | A Deep Learning Algorithm Based on CNN-LSTM Framework for Predicting Cancer Drug Sales Volume | Yinghan Li, Yilin Yao, Junghua Lin et.al. | 2506.21927 | | | Abstract (click to expand)This study explores the application potential of a deep learning model based on the CNN-LSTM framework in forecasting the sales volume of cancer drugs, with a focus on modeling complex time series data. As advancements in medical technology and cancer treatment continue, the demand for oncology medications is steadily increasing. Accurate forecasting of cancer drug sales plays a critical role in optimizing production planning, supply chain management, and healthcare policy formulation. The dataset used in this research comprises quarterly sales records of a specific cancer drug in Egypt from 2015 to 2024, including multidimensional information such as date, drug type, pharmaceutical company, price, sales volume, effectiveness, and drug classification. To improve prediction accuracy, a hybrid deep learning model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks is employed. The CNN component is responsible for extracting local temporal features from the sales data, while the LSTM component captures long-term dependencies and trends. Model performance is evaluated using two widely adopted metrics: Mean Squared Error (MSE) and Root Mean Squared Error (RMSE). The results demonstrate that the CNN-LSTM model performs well on the test set, achieving an MSE of 1.150 and an RMSE of 1.072, indicating its effectiveness in handling nonlinear and volatile sales data. This research provides theoretical and technical support for data-driven decision-making in pharmaceutical marketing and healthcare resource planning. |
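A minimal PyTorch sketch of the CNN-LSTM pattern described above — a 1-D convolution extracting local temporal features, an LSTM capturing longer-range trends — with illustrative layer sizes, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # Conv1d over the time axis extracts local patterns from the sales series.
        self.conv = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
                                  nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)  # long-term dependencies
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, 32)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])      # one-step-ahead sales forecast

model = CNNLSTM(n_features=6)             # e.g., price, volume, class, ... per quarter
y_hat = model(torch.randn(8, 12, 6))      # 8 series, 12 quarters of history
```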
2025-06-27 | Model-free Forecasting of Rogue Waves using Reservoir Computing | Abrari Noor Hasmi, Hadi Susanto et.al. | 2506.21918 | | 26 pages 14 figures. To appear Communications in Nonlinear Science and Numerical Simulation (CNSNS), 2025 , 109087 | Abstract (click to expand)Recent research has demonstrated Reservoir Computing's capability to model various chaotic dynamical systems, yet its application to Hamiltonian systems remains relatively unexplored. This paper investigates the effectiveness of Reservoir Computing in capturing rogue wave dynamics from the nonlinear Schr\"{o}dinger equation, a challenging Hamiltonian system with modulation instability. The model-free approach learns from breather simulations with five unstable modes. A properly tuned parallel Echo State Network can predict dynamics from two distinct testing datasets. The first set is a continuation of the training data, whereas the second set involves a higher-order breather. An investigation of the one-step prediction capability shows remarkable agreement between the testing data and the models. Furthermore, we show that the trained reservoir can predict the propagation of rogue waves over a relatively long prediction horizon, despite facing unseen dynamics. Finally, we introduce a method to significantly improve the Reservoir Computing prediction in autonomous mode, enhancing its long-term forecasting ability. These results advance the application of Reservoir Computing to spatio-temporal Hamiltonian systems and highlight the critical importance of phase space coverage in the design of training data. |
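The reservoir-computing workhorse here is the Echo State Network: a fixed random recurrent "reservoir" whose linear readout alone is trained, typically by ridge regression. A minimal sketch with illustrative hyperparameters (the paper's tuned, parallel ESN is more elaborate):

```python
import numpy as np

class ESN:
    def __init__(self, n_in, n_res=500, rho=0.9, leak=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))  # spectral radius
        self.leak = leak

    def states(self, U):
        # Drive the fixed reservoir with inputs U (shape: time x n_in).
        x, X = np.zeros(self.W.shape[0]), []
        for u in U:
            x = (1 - self.leak) * x + self.leak * np.tanh(self.Win @ u + self.W @ x)
            X.append(x.copy())
        return np.array(X)

    def fit(self, U, Y, ridge=1e-6):
        # Only the readout is trained: ridge regression on reservoir states.
        X = self.states(U)
        self.Wout = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y).T
        return self

    def predict(self, U):
        return self.states(U) @ self.Wout.T

# One-step prediction: learn to map the field at time t to the field at t + 1,
# e.g. esn = ESN(n_in=64).fit(U[:-1], U[1:]); next_step = esn.predict(U[-1:])
```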
2025-06-26 | Storm Surge in Color: RGB-Encoded Physics-Aware Deep Learning for Storm Surge Forecasting | Jinpai Zhao, Albert Cerrone, Eirik Valseth et.al. | 2506.21743 | | | Abstract (click to expand)Storm surge forecasting plays a crucial role in coastal disaster preparedness, yet existing machine learning approaches often suffer from limited spatial resolution, reliance on coastal station data, and poor generalization. Moreover, many prior models operate directly on unstructured spatial data, making them incompatible with modern deep learning architectures. In this work, we introduce a novel approach that projects unstructured water elevation fields onto structured Red Green Blue (RGB)-encoded image representations, enabling the application of Convolutional Long Short Term Memory (ConvLSTM) networks for end-to-end spatiotemporal surge forecasting. Our model further integrates ground-truth wind fields as dynamic conditioning signals and topo-bathymetry as a static input, capturing physically meaningful drivers of surge evolution. Evaluated on a large-scale dataset of synthetic storms in the Gulf of Mexico, our method demonstrates robust 48-hour forecasting performance across multiple regions along the Texas coast and exhibits strong spatial extensibility to other coastal areas. By combining structured representation, physically grounded forcings, and scalable deep learning, this study advances the frontier of storm surge forecasting in usability, adaptability, and interpretability. |
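The key preprocessing step — turning an unstructured water-elevation field into a structured RGB image so that ConvLSTM applies — can be approximated with standard tools. A sketch assuming linear interpolation onto a regular grid and one simple three-channel encoding (the paper's exact encoding is not specified here):

```python
import numpy as np
from scipy.interpolate import griddata

def field_to_rgb(eta, vmin, vmax):
    # Spread the normalized elevation across three channels (one simple choice).
    t = np.clip((eta - vmin) / (vmax - vmin), 0.0, 1.0)
    rgb = np.stack([t, 1.0 - np.abs(2.0 * t - 1.0), 1.0 - t], axis=-1)
    return np.nan_to_num(rgb * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, (2000, 2))       # stand-in for unstructured mesh nodes
values = np.sin(4 * points[:, 0]) * np.cos(3 * points[:, 1])   # stand-in elevations
GX, GY = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
grid = griddata(points, values, (GX, GY), method="linear")  # unstructured -> structured
frame = field_to_rgb(grid, values.min(), values.max())      # (256, 256, 3) ConvLSTM input
```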
2025-06-26 | Counterfactual Voting Adjustment for Quality Assessment and Fairer Voting in Online Platforms with Helpfulness Evaluation | Chang Liu, Yixin Wang, Moontae Lee et.al. | 2506.21362 | | | Abstract (click to expand)Efficient access to high-quality information is vital for online platforms. To promote more useful information, users not only create new content but also evaluate existing content, often through helpfulness voting. Although aggregated votes help service providers rank their user content, these votes are often biased by disparate accessibility per position and the cascaded influence of prior votes. For a fairer assessment of information quality, we propose the Counterfactual Voting Adjustment (CVA), a causal framework that accounts for the context in which individual votes are cast. Through preliminary and semi-synthetic experiments, we show that CVA effectively models the position and herding biases, accurately recovering the predefined content quality. In a real experiment, we demonstrate that reranking content based on the learned quality by CVA exhibits stronger alignment with both user sentiment and quality evaluation assessed by GPT-4o, outperforming system rankings based on aggregated votes and model-based rerankings without causal inference. Beyond the individual quality inference, our embeddings offer comparative insights into the behavioral dynamics of expert user groups across 120 major StackExchange communities. |
2025-06-26 | Institutional Noise, Strategic Deviation, and Intertemporal Collapse: A Formal Model of Miner Behaviour under Protocol Uncertainty | Craig Steven Wright et.al. | 2506.20992 | | 40 pages, submitted to QJAE | Abstract (click to expand)This paper develops a formal game-theoretic model to examine how protocol mutability disrupts cooperative mining behaviour in blockchain systems. Using a repeated game framework with stochastic rule shocks, we show that even minor uncertainty in institutional rules increases time preference and induces strategic deviation. Fixed-rule environments support long-term investment and stable equilibrium strategies; in contrast, mutable protocols lead to short-termism, higher discounting, and collapse of coordinated engagement. Simulation results identify instability zones in the parameter space where rational mining gives way to extractive or arbitrage conduct. These findings support an Austrian economic interpretation: calculability requires rule stability. Institutional noise undermines the informational basis for productive action. We conclude that protocol design must be treated as a constitutional economic constraint, not a discretionary variable, if sustainable cooperation is to emerge in decentralised systems. |
2025-06-26 | LLM-guided Chemical Process Optimization with a Multi-Agent Approach | Tong Zeng, Srivathsan Badrinarayanan, Janghoon Ock et.al. | 2506.20921 | | 16 pages (main manuscript without references), 2 figures | Abstract (click to expand)Chemical process optimization is crucial to maximize production efficiency and economic performance. Traditional methods, including gradient-based solvers, evolutionary algorithms, and parameter grid searches, become impractical when operating constraints are ill-defined or unavailable, requiring engineers to rely on subjective heuristics to estimate feasible parameter ranges. To address this constraint definition bottleneck, we present a multi-agent framework of large language model (LLM) agents that autonomously infer operating constraints from minimal process descriptions, then collaboratively guide optimization using the inferred constraints. Our AutoGen-based agentic framework employs OpenAI's o3 model, with specialized agents for constraint generation, parameter validation, simulation execution, and optimization guidance. Through two phases - autonomous constraint generation using embedded domain knowledge, followed by iterative multi-agent optimization - the framework eliminates the need for predefined operational bounds. Validated on the hydrodealkylation process across cost, yield, and yield-to-cost ratio metrics, the framework demonstrated competitive performance with conventional optimization methods while achieving better computational efficiency, requiring fewer iterations to converge. Our approach converged in under 20 minutes, achieving a 31-fold speedup over grid search. Beyond computational efficiency, the framework's reasoning-guided search demonstrates sophisticated process understanding, correctly identifying utility trade-offs, and applying domain-informed heuristics. This approach shows significant potential for optimization scenarios where operational constraints are poorly characterized or unavailable, particularly for emerging processes and retrofit applications. |
2025-06-25 | Multicontinuum Homogenization for Poroelasticity Model | Dmitry Ammosov, Mohammed Al-Kobaisi, Yalchin Efendiev et.al. | 2506.20890 | | | Abstract (click to expand)In this paper, we derive multicontinuum poroelasticity models using the multicontinuum homogenization method. Poroelasticity models are widely used in many areas of science and engineering to describe coupled flow and mechanics processes in porous media. However, in many applications, the properties of poroelastic media possess high contrast, presenting serious computational challenges. It is well known that standard homogenization approaches often fail to give an accurate solution due to the lack of macroscopic parameters. Multicontinuum approaches allow us to consider such cases by defining several average states known as continua. In the field of poroelasticity, multiple-network models arising from the multiple porous media theory are representatives of these approaches. In this work, we extend previous findings by deriving the generalized multicontinuum poroelasticity model. We apply the recently developed multicontinuum homogenization method and provide a rigorous derivation of multicontinuum equations. For this purpose, we formulate coupled constraint cell problems in oversampled regions to consider different homogenized effects. Then, we obtain a multicontinuum expansion of the fine-scale fields and derive the multicontinuum model supposing the smoothness of macroscopic variables. We present the most general version of equations and the simplified ones based on our numerical experiments. Numerical results are presented for different heterogeneous media cases and demonstrate the high accuracy of our proposed multicontinuum models. |
2025-06-25 | MultiFinRAG: An Optimized Multimodal Retrieval-Augmented Generation (RAG) Framework for Financial Question Answering | Chinmay Gondhalekar, Urjitkumar Patel, Fang-Chun Yeh et.al. | 2506.20821 | | Preprint Copy | Abstract (click to expand)Financial documents--such as 10-Ks, 10-Qs, and investor presentations--span hundreds of pages and combine diverse modalities, including dense narrative text, structured tables, and complex figures. Answering questions over such content often requires joint reasoning across modalities, which strains traditional large language models (LLMs) and retrieval-augmented generation (RAG) pipelines due to token limitations, layout loss, and fragmented cross-modal context. We introduce MultiFinRAG, a retrieval-augmented generation framework purpose-built for financial QA. MultiFinRAG first performs multimodal extraction by grouping table and figure images into batches and sending them to a lightweight, quantized open-source multimodal LLM, which produces both structured JSON outputs and concise textual summaries. These outputs, along with narrative text, are embedded and indexed with modality-aware similarity thresholds for precise retrieval. A tiered fallback strategy then dynamically escalates from text-only to text+table+image contexts when necessary, enabling cross-modal reasoning while reducing irrelevant context. Despite running on commodity hardware, MultiFinRAG achieves 19 percentage points higher accuracy than ChatGPT-4o (free-tier) on complex financial QA tasks involving text, tables, images, and combined multimodal reasoning. |
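The tiered fallback strategy reduces irrelevant context by escalating only when retrieval confidence is low. An illustrative sketch of the control flow (the retriever interface, scores, and thresholds are assumptions, not MultiFinRAG's actual API):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Hit:
    text: str
    score: float   # modality-aware similarity score

def answer(question: str,
           retrievers: Dict[str, Callable[[str], List[Hit]]],
           llm: Callable[..., str],
           thresholds=(0.35, 0.30)) -> str:
    # Start text-only; escalate to text+table, then text+table+image,
    # whenever the best retrieval score stays below the tier's threshold.
    tiers = ["text", "text+table", "text+table+image"]
    for tier, thr in zip(tiers, (*thresholds, 0.0)):
        hits = retrievers[tier](question)
        if hits and max(h.score for h in hits) >= thr:
            return llm(question, context=hits)
    return llm(question, context=[])
```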
2025-06-25 | A Hereditary Integral, Transient Network Approach to Modeling Permanent Set and Viscoelastic Response in Polymers | Stephen T. Castonguay, Joshua B. Fernandes, Michael A. Puso et.al. | 2506.20773 | | | Abstract (click to expand)An efficient numerical framework is presented for modeling viscoelasticity and permanent set of polymers. It is based on the hereditary integral form of transient network theory, in which polymer chains belong to distinct networks each with different natural equilibrium states. Chains continually detach from previously formed networks and reattach to new networks in a state of zero stress. The free energy of these networks is given in terms of the deformation gradient relative to the configuration at which the network was born. A decomposition of the kernel for various free energies allows for a recurrence relationship to be established, bypassing the need to integrate over all time history. The technique is established for both highly compressible and nearly incompressible materials through the use of neo-Hookean, Blatz-Ko, Yeoh, and Ogden-Hill material models. Multiple examples are presented showing the ability to handle rate-dependent response and residual strains under complex loading histories. |
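The recurrence that avoids integrating over the entire history follows from factorizing an exponential kernel across the last time step: \( e^{-(t_{n+1}-s)/\tau} = e^{-\Delta t/\tau}\, e^{-(t_n-s)/\tau} \). A minimal sketch for a single relaxation time (the paper handles general free energies and network kinetics; only the recurrence idea is shown):

```python
import numpy as np

def hereditary_update(h, f_n, f_np1, dt, tau):
    # One-step recurrence for h(t) = int_0^t exp(-(t - s)/tau) f(s) ds.
    # The kernel factorizes across the last step, so only h from the previous
    # step is needed -- never the full load history.
    decay = np.exp(-dt / tau)
    return decay * h + 0.5 * dt * (f_np1 + decay * f_n)  # trapezoid on new interval

dt, tau = 0.01, 0.2
t = np.arange(0.0, 2.0, dt)
f = np.sin(2 * np.pi * t)        # synthetic load history
h = 0.0
for n in range(len(t) - 1):      # march forward with O(1) memory
    h = hereditary_update(h, f[n], f[n + 1], dt, tau)
```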
2025-06-25 | A generalised framework for phase field-based modelling of coupled problems: application to thermo-mechanical fracture, hydraulic fracture, hydrogen embrittlement and corrosion | Y. Navidtehrani, C. Betegón, E. Martínez-Pañeda et.al. | 2506.20763 | | | Abstract (click to expand)We present a novel, generalised formulation to treat coupled structural integrity problems by combining phase field and multi-physics modelling. The approach exploits the versatility of the heat transfer equation and is therefore well suited to be adopted in commercial finite element packages, requiring only integration point-level implementation. This aspect is demonstrated here by implementing coupled, multi-variable phenomena through simple \texttt{UMAT} and \texttt{UMATHT} subroutines in the finite element package \texttt{Abaqus}. The generalised theoretical and computational framework presented is particularised to four problems of engineering and scientific relevance: thermo-mechanical fracture, hydraulic fracture, hydrogen-assisted cracking and metallic corrosion. 2D and 3D problems are considered. The results reveal a very good agreement with experimental data, and existing numerical and analytical solutions.The user subroutines developed are made freely available at https://mechmat.web.ox.ac.uk/codes. |
2025-06-25 | Pull-off strength of mushroom-shaped fibrils adhered to rigid substrates | C. Betegón, C. Rodríguez, E. Martínez-Pañeda et.al. | 2506.20745 | | | Abstract (click to expand)The exceptional adhesion properties of biological fibrillar structures -- such as those found in geckos -- have inspired the development of synthetic adhesive surfaces. Among these, mushroom-shaped fibrils have demonstrated superior pull-off strength compared to other geometries. In this study, we employ a computational approach based on a Dugdale cohesive zone model to analyze the detachment behavior of these fibrils when adhered to a rigid substrate. The results provide complete pull-off curves, revealing that the separation process is inherently unstable under load control, regardless of whether detachment initiates at the fibril edge or center. Our findings show that fibrils with a wide, thin mushroom cap effectively reduce stress concentrations and promote central detachment, leading to enhanced adhesion. However, detachment from the center is not observed in all geometries, whereas edge detachment can occur under certain conditions in all cases. Additionally, we investigate the impact of adhesion defects at the fibril center, showing that they can significantly reduce pull-off strength, particularly at high values of the dimensionless parameter \(\chi\). These insights contribute to the optimization of bio-inspired adhesives and microstructured surfaces for various engineering applications. |
2025-06-25 | A Survey of AI for Materials Science: Foundation Models, LLM Agents, Datasets, and Tools | Minh-Hao Van, Prateek Verma, Chen Zhao et.al. | 2506.20743 | | | Abstract (click to expand)Foundation models (FMs) are catalyzing a transformative shift in materials science (MatSci) by enabling scalable, general-purpose, and multimodal AI systems for scientific discovery. Unlike traditional machine learning models, which are typically narrow in scope and require task-specific engineering, FMs offer cross-domain generalization and exhibit emergent capabilities. Their versatility is especially well-suited to materials science, where research challenges span diverse data types and scales. This survey provides a comprehensive overview of foundation models, agentic systems, datasets, and computational tools supporting this growing field. We introduce a task-driven taxonomy encompassing six broad application areas: data extraction, interpretation and Q\&A; atomistic simulation; property prediction; materials structure, design and discovery; process planning, discovery, and optimization; and multiscale modeling. We discuss recent advances in both unimodal and multimodal FMs, as well as emerging large language model (LLM) agents. Furthermore, we review standardized datasets, open-source tools, and autonomous experimental platforms that collectively fuel the development and integration of FMs into research workflows. We assess the early successes of foundation models and identify persistent limitations, including challenges in generalizability, interpretability, data imbalance, safety concerns, and limited multimodal fusion. Finally, we articulate future research directions centered on scalable pretraining, continual learning, data governance, and trustworthiness. |
2025-06-25 | Opinion Dynamics with Highly Oscillating Opinions | Víctor A. Vargas-Pérez, Jesús Giráldez-Cru, Oscar Cordón et.al. | 2506.20472 | | | Abstract (click to expand)Opinion Dynamics (OD) models are a particular case of Agent-Based Models in which the evolution of opinions within a population is studied. In most OD models, opinions evolve as a consequence of interactions between agents, and the opinion fusion rule defines how those opinions are updated. In consequence, despite being simplistic, OD models provide an explainable and interpretable mechanism for understanding the underlying dynamics of opinion evolution. Unfortunately, existing OD models mainly focus on explaining the evolution of (usually synthetic) opinions towards consensus, fragmentation, or polarization, but they usually fail to analyze scenarios of (real-world) highly oscillating opinions. This work overcomes this limitation by studying the ability of several OD models to reproduce highly oscillating dynamics. To this end, we formulate an optimization problem which is further solved using Evolutionary Algorithms, providing both quantitative results on the performance of the optimization and qualitative interpretations on the obtained results. Our experiments on a real-world opinion dataset about immigration from the monthly barometer of the Spanish Sociological Research Center show that the ATBCR, based on both rational and emotional mechanisms of opinion update, is the most accurate OD model for capturing highly oscillating opinions. |
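As a concrete example of the kind of opinion fusion rule being fitted here, consider the classic Deffuant bounded-confidence update (a standard OD model; the ATBCR model favored by the paper adds rational and emotional mechanisms not sketched below):

```python
import numpy as np

def deffuant_step(x, eps=0.3, mu=0.5, rng=None):
    # Two random agents interact; they move toward each other only if their
    # opinions differ by less than the confidence bound eps.
    if rng is None:
        rng = np.random.default_rng()
    i, j = rng.choice(len(x), size=2, replace=False)
    if abs(x[i] - x[j]) < eps:
        d = x[j] - x[i]
        x[i] += mu * d
        x[j] -= mu * d
    return x

x = np.random.default_rng(0).uniform(0, 1, 100)   # initial opinions
for _ in range(10_000):
    x = deffuant_step(x)                           # typically converges to clusters
```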
2025-06-25 | A Visualization Framework for Exploring Multi-Agent-Based Simulations Case Study of an Electric Vehicle Home Charging Ecosystem | Kristoffer Christensen, Bo Nørregaard Jørgensen, Zheng Grace Ma et.al. | 2506.20400 | | | Abstract (click to expand)Multi-agent-based simulations (MABS) of electric vehicle (EV) home charging ecosystems generate large, complex, and stochastic time-series datasets that capture interactions between households, grid infrastructure, and energy markets. These interactions can lead to unexpected system-level events, such as transformer overloads or consumer dissatisfaction, that are difficult to detect and explain through static post-processing. This paper presents a modular, Python-based dashboard framework, built using Dash by Plotly, that enables efficient, multi-level exploration and root-cause analysis of emergent behavior in MABS outputs. The system features three coordinated views (System Overview, System Analysis, and Consumer Analysis), each offering high-resolution visualizations such as time-series plots, spatial heatmaps, and agent-specific drill-down tools. A case study simulating full EV adoption with smart charging in a Danish residential network demonstrates how the dashboard supports rapid identification and contextual explanation of anomalies, including clustered transformer overloads and time-dependent charging failures. The framework facilitates actionable insight generation for researchers and distribution system operators, and its architecture is adaptable to other distributed energy resources and complex energy systems. |
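A minimal Dash skeleton for one coordinated view of the kind described above (the CSV name and columns are hypothetical placeholders for MABS output):

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical MABS output: per-transformer loading over time.
df = pd.read_csv("transformer_load.csv")   # columns: time, transformer, load_kw

app = Dash(__name__)
app.layout = html.Div([
    html.H3("System Overview: transformer loading"),
    dcc.Graph(figure=px.line(df, x="time", y="load_kw", color="transformer")),
])

if __name__ == "__main__":
    app.run(debug=True)   # app.run_server on older Dash releases
```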
2025-06-25 | Ten simple rules for PIs to integrate Research Software Engineering into their research group | Stuart M. Allen, Neil Chue Hong, Stephan Druskat et.al. | 2506.20217 | | 10 pages, submitted to PLOS Computational Biology | Abstract (click to expand)Research Software Engineering (RSEng) is a key success factor in producing high-quality research software, which in turn enables and improves research outcomes. However, as a principal investigator or leader of a research group you may not know what RSEng is, where to get started with it, or how to use it to maximize its benefit for your research. RSEng also often comes with technical complexity, and therefore reduced accessibility to some researchers. The ten simple rules presented in this paper aim to improve the accessibility of RSEng, and provide practical and actionable advice to PIs and leaders for integrating RSEng into their research group. By following these rules, readers can improve the quality, reproducibility, and trustworthiness of their research software, ultimately leading to better, more reproducible and more trustworthy research outcomes. |
2025-06-25 | Developing Artificial Mechanics Intuitions from Extremely Small Data | Jingruo Peng, Shuze Zhu et.al. | 2506.20148 | | | Abstract (click to expand)Humans can develop good mechanics intuitions by learning from just a few examples, which raises the question of how to build artificial mechanics intuitions that can be learned from small data as we enter the era of artificial intelligence. We propose in this Letter the sample-switchable training method, which successfully develops highly accurate artificial mechanics intuitions that can master the brachistochrone problem, the catenary problem, and the large nonlinear deformation problem of an elastic plate by learning from no more than three samples. The model's intuitive prediction ability increases nonlinearly with the number of training samples, suggesting that superb mechanics intuitions can, in principle, be achieved from a finite number of samples, reflecting how human brains form good mechanics intuitions after learning only a few cases. Our work presents an alternative perspective on educating artificial intelligence that can intuitively understand and predict how materials deform and move, a scenario frequently depicted in science-fiction movies. |
2025-06-25 | DiT-SGCR: Directed Temporal Structural Representation with Global-Cluster Awareness for Ethereum Malicious Account Detection | Ye Tian, Liangliang Song, Peng Qian et.al. | 2506.20123 | | | Abstract (click to expand)The detection of malicious accounts on Ethereum - the preeminent DeFi platform - is critical for protecting digital assets and maintaining trust in decentralized finance. Recent advances highlight that temporal transaction evolution reveals more attack signatures than static graphs. However, current methods either fail to model continuous transaction dynamics or incur high computational costs that limit scalability to large-scale transaction networks. Furthermore, current methods fail to consider two higher-order behavioral fingerprints: (1) direction in temporal transaction flows, which encodes money movement trajectories, and (2) account clustering, which reveals coordinated behavior of organized malicious collectives. To address these challenges, we propose DiT-SGCR, an unsupervised graph encoder for malicious account detection. Specifically, DiT-SGCR employs directional temporal aggregation to capture dynamic account interactions, then coupled with differentiable clustering and graph Laplacian regularization to generate high-quality, low-dimensional embeddings. Our approach simultaneously encodes directional temporal dynamics, global topology, and cluster-specific behavioral patterns, thereby enhancing the discriminability and robustness of account representations. Furthermore, DiT-SGCR bypasses conventional graph propagation mechanisms, yielding significant scalability advantages. Extensive experiments on three datasets demonstrate that DiT-SGCR consistently outperforms state-of-the-art methods across all benchmarks, achieving F1-score improvements ranging from 3.62% to 10.83%. |
2025-06-24 | A modular and extensible library for parameterized terrain generation | Erik Wallin et.al. | 2506.19751 | | | Abstract (click to expand)Simulation-driven development of intelligent machines benefits from artificial terrains with controllable, well-defined characteristics. However, most existing tools for terrain generation focus on artist-driven workflows and visual realism, with limited support for parameterization, reproducibility, or scripting. We present a modular, Python-based library for procedural terrain generation that enables users to construct complex, parameterized terrains by chaining together simple modules. The system supports both structured and noise-based terrain elements, and integrates with Blender for rendering and object placement. The framework is designed to support applications such as generating synthetic terrains for training machine learning models or producing ground truth for perception tasks. By using a minimal but extensible set of modules, the system achieves high flexibility while remaining easy to configure and expand. We demonstrate that this enables fine-grained control over features such as slope, roughness, and the number of rocks, as well as extension to additional measures. This makes it well suited for workflows that demand reproducibility, variation, and integration with automated pipelines. |
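The chaining idea — composing simple, parameterized modules into a reproducible terrain — can be sketched in a few lines of numpy. This mirrors the library's concept, not its actual API (rendering and object placement via Blender are omitted):

```python
import numpy as np

def plane(shape):
    return np.zeros(shape)                     # flat base terrain

def slope(shape, gradient=0.05):
    return np.tile(np.arange(shape[1]) * gradient, (shape[0], 1))  # structured incline

def noise(shape, amplitude=0.5, octaves=4, seed=0):
    # Simple multi-octave value noise for roughness; shape must be divisible
    # by 2 ** (octaves + 1) for the upsampling below.
    rng = np.random.default_rng(seed)
    z = np.zeros(shape)
    for o in range(octaves):
        n = 2 ** (o + 2)
        coarse = rng.standard_normal((n, n))
        z += (amplitude / 2 ** o) * np.kron(coarse,
                                            np.ones((shape[0] // n, shape[1] // n)))
    return z

shape = (256, 256)
heightmap = plane(shape) + slope(shape, 0.02) + noise(shape, amplitude=0.3)
```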
2025-06-25 | ReLink: Computational Circular Design of Planar Linkage Mechanisms Using Available Standard Parts | Maxime Escande, Kristina Shea et.al. | 2506.19657 | | 29 pages, 18 figures, Submitted | Abstract (click to expand)The Circular Economy framework emphasizes sustainability by reducing resource consumption and waste through the reuse of components and materials. This paper presents ReLink, a computational framework for the circular design of planar linkage mechanisms using available standard parts. Unlike most mechanism design methods, which assume the ability to create custom parts and infinite part availability, ReLink prioritizes the reuse of discrete, standardized components, thus minimizing the need for new parts. The framework consists of two main components: design generation, where a generative design algorithm generates mechanisms from an inventory of available parts, and inverse design, which uses optimization methods to identify designs that match a user-defined trajectory curve. The paper also examines the trade-offs between kinematic performance and CO2 footprint when incorporating new parts. Challenges such as the combinatorial nature of the design problem and the enforcement of valid solutions are addressed. By combining sustainability principles with kinematic synthesis, ReLink lays the groundwork for further research into computational circular design to support the development of systems that integrate reused components into mechanical products. |
2025-06-27 | V2T-CoT: From Vision to Text Chain-of-Thought for Medical Reasoning and Diagnosis | Yuan Wang, Jiaxiang Liu, Shujian Gao et.al. | 2506.19610 | | 12 pages, 4 figures | Abstract (click to expand)Recent advances in multimodal techniques have led to significant progress in Medical Visual Question Answering (Med-VQA). However, most existing models focus on global image features rather than localizing disease-specific regions crucial for diagnosis. Additionally, current research tends to emphasize answer accuracy at the expense of the reasoning pathway, yet both are crucial for clinical decision-making. To address these challenges, we propose From Vision to Text Chain-of-Thought (V2T-CoT), a novel approach that automates the localization of preference areas within biomedical images and incorporates this localization into region-level pixel attention as knowledge for Vision CoT. By fine-tuning the vision language model on the constructed R-Med 39K dataset, V2T-CoT provides definitive medical reasoning paths. V2T-CoT integrates visual grounding with textual rationale generation to establish precise and explainable diagnostic results. Experimental results across four Med-VQA benchmarks demonstrate state-of-the-art performance, achieving substantial improvements in both performance and interpretability.
2025-06-24 | A Spline-Based Stress Function Approach for the Principle of Minimum Complementary Energy | Fabian Key, Lukas Freinberger et.al. | 2506.19534 | | | Abstract (click to expand)In computational engineering, ensuring the integrity and safety of structures in fields such as aerospace and civil engineering relies on accurate stress prediction. However, analytical methods are limited to simple test cases, and displacement-based finite element methods (FEMs), while commonly used, require a large number of unknowns to achieve high accuracy; stress-based numerical methods have so far failed to provide a simple and effective alternative. This work aims to develop a novel numerical approach that overcomes these limitations by enabling accurate stress prediction with improved flexibility for complex geometries and boundary conditions and fewer degrees of freedom (DOFs). The proposed method is based on a spline-based stress function formulation for the principle of minimum complementary energy, which we apply to plane, linear elastostatics. The method is first validated against an analytical power series solution and then tested on two test cases challenging for current state-of-the-art numerical schemes, a bi-layer cantilever with anisotropic material behavior, and a cantilever with a non-prismatic, parabolic-shaped beam geometry. Results demonstrate that our approach, unlike analytical methods, can be easily applied to general geometries and boundary conditions, and achieves stress accuracy comparable to that reported in the literature for displacement-based FEMs, while requiring significantly fewer DOFs. This novel spline-based stress function approach thus provides an efficient and flexible tool for accurate stress prediction, with promising applications in structural analysis and numerical design. |
2025-06-24 | Physics-Informed Neural Networks for Industrial Gas Turbines: Recent Trends, Advancements and Challenges | Afila Ajithkumar Sophiya, Sepehr Maleki, Giuseppe Bruni et.al. | 2506.19503 | | | Abstract (click to expand)Physics-Informed Neural Networks (PINNs) have emerged as a promising computational framework for solving differential equations by integrating deep learning with physical constraints. However, their application in gas turbines is still in its early stages, requiring further refinement and standardization for wider adoption. This survey provides a comprehensive review of PINNs in Industrial Gas Turbines (IGTs) research, highlighting their contributions to the analysis of aerodynamic and aeromechanical phenomena, as well as their applications in flow field reconstruction, fatigue evaluation, and flutter prediction, and reviews recent advancements in accuracy, computational efficiency, and hybrid modelling strategies. In addition, it explores key research efforts, implementation challenges, and future directions aimed at improving the robustness and scalability of PINNs. |
2025-06-25 | LKA: Large Kernel Adapter for Enhanced Medical Image Classification | Ziquan Zhu, Si-Yuan Lu, Tianjin Huang et.al. | 2506.19118 | | Some aspects of the experimental setup were not clearly described in the current version. We plan to revise and clarify these points before resubmitting | Abstract (click to expand)Despite the notable success of current Parameter-Efficient Fine-Tuning (PEFT) methods across various domains, their effectiveness on medical datasets falls short of expectations. This limitation arises from two key factors: (1) medical images exhibit extensive anatomical variation and low contrast, necessitating a large receptive field to capture critical features, and (2) existing PEFT methods do not explicitly address the enhancement of receptive fields. To overcome these challenges, we propose the Large Kernel Adapter (LKA), designed to expand the receptive field while maintaining parameter efficiency. The proposed LKA consists of three key components: down-projection, channel-wise large kernel convolution, and up-projection. Through extensive experiments on various datasets and pre-trained models, we demonstrate that the incorporation of a larger kernel size is pivotal in enhancing the adaptation of pre-trained models for medical image analysis. Our proposed LKA outperforms 11 commonly used PEFT methods, surpassing the state-of-the-art by 3.5% in top-1 accuracy across five medical datasets. |
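The adapter structure described above (down-projection, channel-wise large kernel convolution, up-projection) is compact enough to sketch. Below is a minimal PyTorch-style illustration; the reduction ratio, kernel size, activation, and residual placement are our illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical Large Kernel Adapter sketch; hyperparameters are assumptions.
import torch
import torch.nn as nn

class LargeKernelAdapter(nn.Module):
    def __init__(self, dim: int, reduction: int = 4, kernel_size: int = 13):
        super().__init__()
        hidden = dim // reduction
        self.down = nn.Conv2d(dim, hidden, kernel_size=1)        # down-projection
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size,
                                padding=kernel_size // 2,
                                groups=hidden)                   # channel-wise (depthwise) large kernel
        self.up = nn.Conv2d(hidden, dim, kernel_size=1)          # up-projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter update on top of frozen backbone features
        return x + self.up(self.act(self.dwconv(self.down(x))))

x = torch.randn(2, 64, 56, 56)
print(LargeKernelAdapter(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```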
2025-06-23 | Accurate identification of communication between multiple interacting neural populations | Belle Liu, Jacob Sacks, Matthew D. Golub et.al. | 2506.19094 | | | Abstract (click to expand)Neural recording technologies now enable simultaneous recording of population activity across many brain regions, motivating the development of data-driven models of communication between brain regions. However, existing models can struggle to disentangle the sources that influence recorded neural populations, leading to inaccurate portraits of inter-regional communication. Here, we introduce Multi-Region Latent Factor Analysis via Dynamical Systems (MR-LFADS), a sequential variational autoencoder designed to disentangle inter-regional communication, inputs from unobserved regions, and local neural population dynamics. We show that MR-LFADS outperforms existing approaches at identifying communication across dozens of simulations of task-trained multi-region networks. When applied to large-scale electrophysiology, MR-LFADS predicts brain-wide effects of circuit perturbations that were held out during model fitting. These validations on synthetic and real neural data position MR-LFADS as a promising tool for discovering principles of brain-wide information processing. |
2025-06-23 | Which Company Adjustment Matter? Insights from Uplift Modeling on Financial Health | Xinlin Wang, Mats Brorsson et.al. | 2506.19049 | | | Abstract (click to expand)Uplift modeling has achieved significant success in various fields, particularly in online marketing. It is a method that primarily utilizes machine learning and deep learning to estimate individual treatment effects. In this paper, we apply uplift modeling to analyze the effect of company adjustments on their financial status, treating these adjustments as treatments or interventions. Although there have been extensive studies and applications regarding binary treatments, multiple treatments, and continuous treatments, company adjustments are often more complex than these scenarios, as they constitute a series of multiple time-dependent actions. The effect estimation of company adjustments needs to take into account not only individual treatment traits but also the temporal order of this series of treatments. This study collects a real-world data set about company financial statements and reported behavior in Luxembourg for the experiments. First, we use two meta-learners and three other well-known uplift models to analyze different company adjustments by simplifying the adjustments as binary treatments. Furthermore, we propose a new uplift modeling framework (MTDnet) to address the time-dependent nature of these adjustments, and the experimental results show the necessity of considering the timing of these adjustments.
2025-06-23 | A Study of Dynamic Stock Relationship Modeling and S&P500 Price Forecasting Based on Differential Graph Transformer | Linyue Hu, Qi Wang et.al. | 2506.18717 | | | Abstract (click to expand)Stock price prediction is vital for investment decisions and risk management, yet remains challenging due to markets' nonlinear dynamics and time-varying inter-stock correlations. Traditional static-correlation models fail to capture evolving stock relationships. To address this, we propose a Differential Graph Transformer (DGT) framework for dynamic relationship modeling and price prediction. Our DGT integrates sequential graph structure changes into multi-head self-attention via a differential graph mechanism, adaptively preserving high-value connections while suppressing noise. Causal temporal attention captures global/local dependencies in price sequences. We further evaluate correlation metrics (Pearson, Mutual Information, Spearman, Kendall's Tau) across global/local/dual scopes as spatial-attention priors. Using 10 years of S&P 500 closing prices (z-score normalized; 64-day sliding windows), DGT with spatial priors outperformed GRU baselines (RMSE: 0.24 vs. 0.87). Kendall's Tau global matrices yielded optimal results (MAE: 0.11). K-means clustering revealed "high-volatility growth" and "defensive blue-chip" stocks, with the latter showing lower errors (RMSE: 0.13) due to stable correlations. Kendall's Tau and Mutual Information excelled in volatile sectors. This study innovatively combines differential graph structures with Transformers, validating dynamic relationship modeling and identifying optimal correlation metrics/scopes. Clustering analysis supports tailored quantitative strategies. Our framework advances financial time-series prediction through dynamic modeling and cross-asset interaction analysis. |
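The correlation priors named above are straightforward to compute from a price window. A minimal sketch, assuming z-scored closing prices and a 64-day window as in the abstract (tickers and data are made up):

```python
# Sketch: correlation matrices (Pearson/Spearman/Kendall) as spatial-attention
# priors, computed from z-scored prices over a 64-day window (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.DataFrame(rng.standard_normal((64, 5)).cumsum(axis=0),
                      columns=["AAPL", "MSFT", "JPM", "XOM", "PG"])
window = (prices - prices.mean()) / prices.std()  # z-score normalization

priors = {m: window.corr(method=m) for m in ("pearson", "spearman", "kendall")}
print(priors["kendall"].round(2))  # candidate prior for the graph attention
```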
2025-06-23 | Airalogy: AI-empowered universal data digitization for research automation | Zijie Yang, Qiji Zhou, Fang Guo et.al. | 2506.18586 | | 146 pages, 6 figures, 49 supplementary figures | Abstract (click to expand)Research data are the foundation of Artificial Intelligence (AI)-driven science, yet current AI applications remain limited to a few fields with readily available, well-structured, digitized datasets. Achieving comprehensive AI empowerment across multiple disciplines is still out of reach. Present-day research data collection is often fragmented, lacking unified standards, inefficiently managed, and difficult to share. Creating a single platform for standardized data digitization needs to overcome the inherent challenge of balancing between universality (supporting the diverse, ever-evolving needs of various disciplines) and standardization (enforcing consistent formats to fully enable AI). No existing platform accommodates both facets. Building a truly multidisciplinary platform requires integrating scientific domain knowledge with sophisticated computing skills. Researchers often lack the computational expertise to design customized and standardized data recording methods, whereas platform developers rarely grasp the intricate needs of multiple scientific domains. These gaps impede research data standardization and hamper AI-driven progress. In this study, we address these challenges by developing Airalogy (https://airalogy.com), the world's first AI- and community-driven platform that balances universality and standardization for digitizing research data across multiple disciplines. Airalogy represents entire research workflows using customizable, standardized data records and offers an advanced AI research copilot for intelligent Q&A, automated data entry, analysis, and research automation. Already deployed in laboratories across all four schools of Westlake University, Airalogy has the potential to accelerate and automate scientific innovation in universities, industry, and the global research community-ultimately benefiting humanity as a whole. |
2025-06-23 | Communication Architecture for Autonomous Power-to-X Platforms: Enhancing Inspection and Operation With Legged Robots and 5G | Peter Frank, Falk Dettinger, Daniel Dittler et.al. | 2506.18572 | | | Abstract (click to expand)Inspection and maintenance of offshore platforms are associated with high costs, primarily due to the significant personnel requirements and challenging operational conditions. This paper first presents a classification of Power-to-X platforms. Building upon this foundation, a communication architecture is proposed to enable monitoring, control, and teleoperation for a Power-to-X platform. To reduce the demand for human labor, a robotic system is integrated to autonomously perform inspection and maintenance tasks. The implementation utilizes a quadruped robot. Remote monitoring, control, and teleoperation of the robot are analyzed within the context of a 5G standalone network. As part of the evaluation, aspects such as availability and latency are recorded, compared, and critically assessed.
2025-06-23 | A Physics-Informed Neural Network Framework for Simulating Creep Buckling in Growing Viscoelastic Biological Tissues | Zhongya Lin, Jinshuai Bai, Shuang Li et.al. | 2506.18565 | | | Abstract (click to expand)Modeling viscoelastic behavior is crucial in engineering and biomechanics, where materials undergo time-dependent deformations, including stress relaxation, creep buckling and biological tissue development. Traditional numerical methods, like the finite element method, often require explicit meshing, artificial perturbations or embedding customised programs to capture these phenomena, adding computational complexity. In this study, we develop an energy-based physics-informed neural network (PINN) framework using an incremental approach to model viscoelastic creep, stress relaxation, buckling, and growth-induced morphogenesis. Physics consistency is ensured by training neural networks to minimize the system's potential energy functional, implicitly satisfying equilibrium and constitutive laws. We demonstrate that this framework can naturally capture creep buckling without pre-imposed imperfections, leveraging inherent training dynamics to trigger instabilities. Furthermore, we extend our framework to biological tissue growth and morphogenesis, predicting both uniform expansion and differential growth-induced buckling in cylindrical structures. Results show that the energy-based PINN effectively predicts viscoelastic instabilities, post-buckling evolution and tissue morphological evolution, offering a promising alternative to traditional methods. This study demonstrates that PINNs can be a flexible, robust tool for modeling complex, time-dependent material behavior, opening up possible applications in structural engineering, soft materials, and tissue development.
2025-06-23 | Virtual failure assessment diagrams for hydrogen transmission pipelines | J. Wijnen, J. Parker, M. Gagliano et.al. | 2506.18554 | | | Abstract (click to expand)We combine state-of-the-art thermo-metallurgical welding process modelling with coupled diffusion-elastic-plastic phase field fracture simulations to predict the failure states of hydrogen transport pipelines. This enables quantitatively resolving residual stress states and the role of brittle, hard regions of the weld such as the heat affected zone (HAZ). Failure pressures can be efficiently quantified as a function of asset state (existing defects), materials and weld procedures adopted, and hydrogen purity. Importantly, simulations spanning numerous relevant conditions (defect size and orientations) are used to build \emph{Virtual} Failure Assessment Diagrams (FADs), enabling a straightforward uptake of this mechanistic approach in fitness-for-service assessment. Model predictions are in very good agreement with FAD approaches from the standards but show that the latter are not conservative when resolving the heterogeneous nature of the weld microstructure. Appropriate, \emph{mechanistic} FAD safety factors are established that account for the role of residual stresses and hard, brittle weld regions. |
2025-06-23 | Neural-operator element method: Efficient and scalable finite element method enabled by reusable neural operators | Weihang Ouyang, Yeonjong Shin, Si-Wei Liu et.al. | 2506.18427 | | | Abstract (click to expand)The finite element method (FEM) is a well-established numerical method for solving partial differential equations (PDEs). However, its mesh-based nature gives rise to substantial computational costs, especially for complex multiscale simulations. Emerging machine learning-based methods (e.g., neural operators) provide data-driven solutions to PDEs, yet they present challenges, including high training cost and low model reusability. Here, we propose the neural-operator element method (NOEM) by synergistically combining FEM with operator learning to address these challenges. NOEM leverages neural operators (NOs) to simulate subdomains where a large number of finite elements would be required if FEM was used. In each subdomain, an NO is used to build a single element, namely a neural-operator element (NOE). NOEs are then integrated with standard finite elements to represent the entire solution through the variational framework. Thereby, NOEM does not necessitate dense meshing and offers efficient simulations. We demonstrate the accuracy, efficiency, and scalability of NOEM by performing extensive and systematic numerical experiments, including nonlinear PDEs, multiscale problems, PDEs on complex geometries, and discontinuous coefficient fields. |
2025-06-23 | Exact Conditional Score-Guided Generative Modeling for Amortized Inference in Uncertainty Quantification | Zezhong Zhang, Caroline Tatsuoka, Dongbin Xiu et.al. | 2506.18227 | | | Abstract (click to expand)We propose an efficient framework for amortized conditional inference by leveraging exact conditional score-guided diffusion models to train a non-reversible neural network as a conditional generative model. Traditional normalizing flow methods require reversible architectures, which can limit their expressiveness and efficiency. Although diffusion models offer greater flexibility, they often suffer from high computational costs during inference. To combine the strengths of both approaches, we introduce a two-stage method. First, we construct a training-free conditional diffusion model by analytically deriving an exact score function under a Gaussian mixture prior formed from samples of the underlying joint distribution. This exact conditional score model allows us to efficiently generate noise-labeled data, consisting of initial diffusion Gaussian noise and posterior samples conditioned on various observation values, by solving a reverse-time ordinary differential equation. Second, we use this noise-labeled data to train a feedforward neural network that maps noise and observations directly to posterior samples, eliminating the need for reversibility or iterative sampling at inference time. The resulting model provides fast, accurate, and scalable conditional sampling for high-dimensional and multi-modal posterior distributions, making it well-suited for uncertainty quantification tasks, e.g., parameter estimation of complex physical systems. We demonstrate the effectiveness of our approach through a series of numerical experiments. |
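The "exact score function under a Gaussian mixture prior" mentioned above has a well-known closed form, which is what makes the first stage training-free. In our notation (not necessarily the paper's):

```latex
% Score of a Gaussian mixture p(x) = \sum_k w_k \,\mathcal{N}(x;\mu_k,\Sigma_k):
\nabla_x \log p(x) \;=\; \sum_k \gamma_k(x)\,\Sigma_k^{-1}(\mu_k - x),
\qquad
\gamma_k(x) \;=\; \frac{w_k\,\mathcal{N}(x;\mu_k,\Sigma_k)}{\sum_j w_j\,\mathcal{N}(x;\mu_j,\Sigma_j)} .
```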
2025-06-22 | Conservative data-driven finite element formulation | Adriana Kuliková, Andrei G. Shvarts, Łukasz Kaczmarczyk et.al. | 2506.18206 | | | Abstract (click to expand)This paper presents a new data-driven finite element framework derived with mixed finite element formulation. The standard approach to diffusion problems requires the solution of the mathematical equations that describe both the conservation law and the constitutive relations, where the latter is traditionally obtained after fitting experimental data to simplified material models. To exploit all available information and avoid bias in the material model, we follow a data-driven approach. While the conservation laws and boundary conditions are satisfied by means of the finite element method, the experimental data is used directly in the numerical simulations, avoiding the need to fit material model parameters. In order to satisfy the conservation law a priori in the strong sense, we introduce a mixed finite element formulation. This relaxes the regularity requirements on approximation spaces while enforcing continuity of the normal flux component across all of the inner boundaries. This weaker mixed formulation provides a posteriori error indicators tailored for this data-driven approach, enabling adaptive hp-refinement. The relaxed regularity of the approximation spaces makes it easier to observe how the variation in the datasets results in the non-uniqueness of the solution, which can be quantified to predict the uncertainty of the results. The capabilities of the formulation are demonstrated in an example of the nonlinear heat transfer in nuclear graphite using synthetically generated material datasets.
2025-06-22 | Measuring Fractal Dimension using Discrete Global Grid Systems | Pramit Ghosh et.al. | 2506.18175 | | | Abstract (click to expand)This study builds a bridge between two well-studied but distant topics: fractal dimension and Discrete Global Grid System (DGGS). DGGSs are used as covering sets for geospatial vector data to calculate the Minkowski-Bouligand dimension. Using the method on synthetic data yields results within 1% of their theoretical fractal dimensions. A case study on opaque cloud fields obtained from satellite images gives fractal dimension in agreement with that available in the literature. The proposed method alleviates the problems of arbitrary grid placement and orientation, as well as the progression of cell sizes of the covering sets for geospatial data. Using DGGSs further ensures that intersections of the covering sets with geospatial vectors of large geographic extent are calculated by taking the curvature of the earth into account. This paper establishes the validity of DGGSs as covering sets theoretically and discusses desirable properties of DGGSs suitable for this purpose.
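The Minkowski-Bouligand (box-counting) estimate itself is simple: count occupied covering cells at several scales and take the slope of log N against log scale. A flat-grid sketch follows; the paper's contribution is replacing these regular boxes with DGGS cells so that curvature and grid placement are handled properly.

```python
# Box-counting dimension on a planar point set with regular square boxes.
# This flat-grid version is a simplification; the paper uses DGGS cells.
import numpy as np

def box_counting_dimension(points: np.ndarray, scales=(1, 2, 4, 8, 16, 32)) -> float:
    pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)  # unit square
    counts = []
    for s in scales:
        cells = np.floor(pts * s).clip(max=s - 1).astype(int)
        counts.append(len(np.unique(cells, axis=0)))              # occupied boxes N(s)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)      # N(s) ~ s**D
    return slope

# A straight line segment should give a dimension close to 1
line = np.column_stack([np.linspace(0, 1, 5000), np.linspace(0, 1, 5000)])
print(round(box_counting_dimension(line), 2))
```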
2025-06-22 | A phase field model for hydraulic fracture: Drucker-Prager driving force and a hybrid coupling strategy | Y. Navidtehrani, C. Betegón, J. Vallejos et.al. | 2506.18161 | | | Abstract (click to expand)Recent years have seen a significant interest in using phase field approaches to model hydraulic fracture, so as to optimise a process that is key to industries such as petroleum engineering, mining and geothermal energy extraction. Here, we present a novel theoretical and computational phase field framework to simulate hydraulic fracture. The framework is general and versatile, in that it allows for improved treatments of the coupling between fluid flow and the phase field, and encompasses a universal description of the fracture driving force. Among others, this allows us to bring two innovations to the phase field hydraulic fracture community: (i) a new hybrid coupling approach to handle the fracture-fluid flow interplay, offering enhanced accuracy and flexibility; and (ii) a Drucker-Prager-based strain energy decomposition, extending the simulation of hydraulic fracture to materials exhibiting asymmetric tension-compression fracture behaviour (such as shale rocks) and enabling the prediction of geomechanical phenomena such as fault reactivation and stick-slip behaviour. Four case studies are addressed to illustrate these additional modelling capabilities and bring insight into permeability coupling, cracking behaviour, and multiaxial conditions in hydraulic fracturing simulations. The codes developed are made freely available to the community and can be downloaded from https://mechmat.web.ox.ac.uk/
2025-06-22 | Six Decades Post-Discovery of Taylor's Power Law: From Ecological and Statistical Universality, Through Prime Number Distributions and Tipping-Point Signals, to Heterogeneity and Stability of Complex Networks | Zhanshan Sam Ma, R. A. J. Taylor et.al. | 2506.18154 | | | Abstract (click to expand)First discovered by L. R. Taylor (1961, Nature), Taylor's Power Law (TPL) correlates the mean (M) population abundances and the corresponding variances (V) across a set of insect populations using a power function (V=aM^b). TPL has demonstrated its 'universality' across numerous fields of sciences, social sciences, and humanities. This universality has inspired two main prongs of exploration: one from mathematicians and statisticians, who might instinctively respond with a convergence theorem similar to the central limit theorem of the Gaussian distribution, and another from biologists, ecologists, physicists, etc., who are more interested in potential underlying ecological or organizational mechanisms. Over the past six decades, TPL studies have produced a punctuated landscape with three relatively distinct periods (1960s-1980s, 1990s-2000s, and 2010s-2020s) across the two prongs of the abstract and physical worlds. Eight themes are identified and reviewed on this landscape: population spatial aggregation and ecological mechanisms; TPL and skewed statistical distributions; mathematical/statistical mechanisms of TPL; sample vs. population TPL; population stability, synchrony, and early warning signals for tipping points; TPL on complex networks; TPL in macrobiomes; and TPL in microbiomes. Three future research directions are identified: fostering reciprocal interactions between the two prongs, measuring heterogeneity, and exploring TPL in the context of evolution. Practically, population fluctuations captured by TPL are relevant for agriculture, forestry, fishery, wildlife conservation, epidemiology, tumor heterogeneity, earthquakes, social inequality, stock illiquidity, financial stability, tipping-point events, etc.; theoretically, TPL is one form of power law, and power laws are related to phase transitions, universality, and scale-invariance.
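TPL's parameters are conventionally estimated by ordinary least squares on the log-log form log V = log a + b log M; a minimal sketch with synthetic mean-variance pairs:

```python
# Fitting Taylor's Power Law V = a * M**b by OLS on the log-log form.
import numpy as np

rng = np.random.default_rng(1)
M = np.logspace(0, 3, 50)                        # mean population abundances
V = 2.0 * M**1.8 * rng.lognormal(0, 0.1, 50)     # variances with a = 2, b = 1.8

b, log_a = np.polyfit(np.log(M), np.log(V), 1)   # slope = b, intercept = log a
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}")   # recovers roughly a = 2, b = 1.8
```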
2025-06-22 | Learning from the Storm: A Multivariate Machine Learning Approach to Predicting Hurricane-Induced Economic Losses | Bolin Shen, Eren Erman Ozguven, Yue Zhao et.al. | 2506.17964 | | | Abstract (click to expand)Florida is particularly vulnerable to hurricanes, which frequently cause substantial economic losses. While prior studies have explored specific contributors to hurricane-induced damage, few have developed a unified framework capable of integrating a broader range of influencing factors to comprehensively assess the sources of economic loss. In this study, we propose a comprehensive modeling framework that categorizes contributing factors into three key components: (1) hurricane characteristics, (2) water-related environmental factors, and (3) socioeconomic factors of affected areas. By integrating multi-source data and aggregating all variables at the finer spatial granularity of the ZIP Code Tabulation Area (ZCTA) level, we employ machine learning models to predict economic loss, using insurance claims as indicators of incurred damage. Beyond accurate loss prediction, our approach facilitates a systematic assessment of the relative importance of each component, providing practical guidance for disaster mitigation, risk assessment, and the development of adaptive urban strategies in coastal and storm-exposed areas. Our code is now available at: https://github.com/LabRAI/Hurricane-Induced-Economic-Loss-Prediction |
2025-06-21 | A predictor-corrector scheme for approximating signed distances using finite element methods | Amina El Bachari, Johann Rannou, Vladislav A. Yastrebov et.al. | 2506.17830 | | 26 pages, 17 figures | Abstract (click to expand)In this article, we introduce a finite element method designed for the robust computation of approximate signed distance functions to arbitrary boundaries in two and three dimensions. Our method employs a novel prediction-correction approach, involving first the solution of a linear diffusion-based prediction problem, followed by a nonlinear minimization-based correction problem associated with the Eikonal equation. The prediction step efficiently generates a suitable initial guess, significantly facilitating convergence of the nonlinear correction step. A key strength of our approach is its ability to handle complex interfaces and initial level set functions with arbitrary steep or flat regions, a notable challenge for existing techniques. Through several representative examples, including classical geometries and more complex shapes such as star domains and three-dimensional tori, we demonstrate the accuracy, efficiency, and robustness of the method, validating its broad applicability for reinitializing diverse level set functions. |
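For reference, the correction stage targets the Eikonal problem for the signed distance d; one natural least-squares form of that correction (our notation, not necessarily the authors' exact functional) is:

```latex
% Signed distance to a boundary \Gamma satisfies
|\nabla d| = 1 \quad \text{in } \Omega, \qquad d = 0 \quad \text{on } \Gamma,
% which a minimization-based correction can enforce through, e.g.,
\min_{d_h} \int_\Omega \big( |\nabla d_h| - 1 \big)^2 \,\mathrm{d}x
\quad \text{subject to } d_h = 0 \text{ on } \Gamma .
```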
2025-06-21 | Pix2Geomodel: A Next-Generation Reservoir Geomodeling with Property-to-Property Translation | Abdulrahman Al-Fakih, Ardiansyah Koeshidayatullah, Nabil A. Saraih et.al. | 2506.17747 | | 34 pages, 13 figures | Abstract (click to expand)Accurate geological modeling is critical for reservoir characterization, yet traditional methods struggle with complex subsurface heterogeneity and with conditioning to observed data. This study introduces Pix2Geomodel, a novel conditional generative adversarial network (cGAN) framework based on Pix2Pix, designed to predict reservoir properties (facies, porosity, permeability, and water saturation) from the Rotliegend reservoir of the Groningen gas field. Utilizing a 7.6 million-cell dataset from the Nederlandse Aardolie Maatschappij, accessed via EPOS-NL, the methodology included data preprocessing, augmentation to generate 2,350 images per property, and training with a U-Net generator and PatchGAN discriminator over 19,000 steps. Performance in masked property prediction and property-to-property translation tasks was assessed with pixel accuracy (PA), mean intersection over union (mIoU), frequency weighted intersection over union (FWIoU), and visualizations. Results demonstrated high accuracy for facies (PA 0.88, FWIoU 0.85) and water saturation (PA 0.96, FWIoU 0.95), with moderate success for porosity (PA 0.70, FWIoU 0.55) and permeability (PA 0.74, FWIoU 0.60), and robust translation performance (e.g., facies-to-facies PA 0.98, FWIoU 0.97). The framework captured spatial variability and geological realism, as validated by variogram analysis, and training loss curves were tracked for the generator and discriminator for each property. Compared to traditional methods, Pix2Geomodel offers enhanced fidelity in direct property mapping. Limitations include challenges with microstructural variability and 2D constraints, suggesting future integration of multi-modal data and 3D modeling (Pix2Geomodel v2.0). This study advances the application of generative AI in geoscience, supporting improved reservoir management and open science initiatives.
2025-06-20 | Variational Quantum Latent Encoding for Topology Optimization | Alireza Tabarraei et.al. | 2506.17487 | | | Abstract (click to expand)A variational framework for structural topology optimization is developed, integrating quantum and classical latent encoding strategies within a coordinate-based neural decoding architecture. In this approach, a low-dimensional latent vector, generated either by a variational quantum circuit or sampled from a Gaussian distribution, is mapped to a higher-dimensional latent space via a learnable projection layer. This enriched representation is then decoded into a high-resolution material distribution using a neural network that takes both the latent vector and Fourier-mapped spatial coordinates as input. The optimization is performed directly on the latent parameters, guided solely by physics-based objectives such as compliance minimization and volume constraints evaluated through finite element analysis, without requiring any precomputed datasets or supervised training. Quantum latent vectors are constructed from the expectation values of Pauli observables measured on parameterized quantum circuits, providing a structured and entangled encoding of information. The classical baseline uses Gaussian-sampled latent vectors projected in the same manner. The proposed variational formulation enables the generation of diverse and physically valid topologies by exploring the latent space through sampling or perturbation, in contrast to traditional optimization methods that yield a single deterministic solution. Numerical experiments show that both classical and quantum encodings produce high-quality structural designs. However, quantum encodings demonstrate advantages in several benchmark cases in terms of compliance and design diversity. These results highlight the potential of quantum circuits as an effective and scalable tool for physics-constrained topology optimization and suggest promising directions for applying near-term quantum hardware in structural design. |
2025-06-20 | Estimating Deprivation Cost Functions for Power Outages During Disasters: A Discrete Choice Modeling Approach | Xiangpeng Li, Mona Ahmadiani, Richard Woodward et.al. | 2506.16993 | | | Abstract (click to expand)Systems for the generation and distribution of electrical power represent critical infrastructure and, when extreme weather events disrupt such systems, this imposes substantial costs on consumers. These costs can be conceptualized as deprivation costs, an increasing function of time without service, quantifiable through individuals' willingness to pay for power restoration. Despite widespread recognition of outage impacts, a gap in the research literature exists regarding the systematic measurement of deprivation costs. This study addresses this deficiency by developing and implementing a methodology to estimate deprivation cost functions for electricity outages, using stated preference survey data collected from Harris County, Texas. This study compares multiple discrete choice model architectures, including multinomial logit and mixed logit specifications, as well as models incorporating Box-Cox and exponential utility transformations for the deprivation time attribute. The analysis examines heterogeneity in deprivation valuation through sociodemographic interactions, particularly across income groups. Results confirm that power outage deprivation cost functions are convex and strictly increasing with time. Additionally, the study reveals both systematic and random taste variation in how individuals value power loss, highlighting the need for flexible modeling approaches. By providing both methodological and empirical foundations for incorporating deprivation costs into infrastructure risk assessments and humanitarian logistics, this research enables policymakers to better quantify service disruption costs and develop more equitable resilience strategies.
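To make the model structure concrete: in a multinomial logit, a Box-Cox transform of deprivation time t enters the systematic utility as shown below. This is a standard discrete-choice specification consistent with the abstract, not necessarily the authors' exact utility function.

```latex
% Systematic utility with a Box-Cox transform of deprivation time t_i,
% and the multinomial logit choice probability (coefficients illustrative):
V_i = \beta_c \,\mathrm{cost}_i + \beta_t \,\frac{t_i^{\lambda} - 1}{\lambda},
\qquad
P_i = \frac{\exp(V_i)}{\sum_j \exp(V_j)} .
```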
2025-06-20 | A Neural Operator based Hybrid Microscale Model for Multiscale Simulation of Rate-Dependent Materials | Dhananjeyan Jeyaraj, Hamidreza Eivazi, Jendrik-Alexander Tröger et.al. | 2506.16918 | link | | Abstract (click to expand)The behavior of materials is influenced by a wide range of phenomena occurring across various time and length scales. To better understand the impact of microstructure on macroscopic response, multiscale modeling strategies are essential. Numerical methods, such as the \(\text{FE}^2\) approach, account for micro-macro interactions to predict the global response in a concurrent manner. However, these methods are computationally intensive due to the repeated evaluations of the microscale. This challenge has led to the integration of deep learning techniques into computational homogenization frameworks to accelerate multiscale simulations. In this work, we employ neural operators to predict the microscale physics, resulting in a hybrid model that combines data-driven and physics-based approaches. This allows for physics-guided learning and provides flexibility for different materials and spatial discretizations. We apply this method to time-dependent solid mechanics problems involving viscoelastic material behavior, where the state is represented by internal variables only at the microscale. The constitutive relations of the microscale are incorporated into the model architecture and the internal variables are computed based on established physical principles. Results show accurate homogenized stresses (\(<6\%\) error) while the approach remains computationally efficient (\(\sim 100 \times\) faster).
2025-06-20 | Integrating Traditional Technical Analysis with AI: A Multi-Agent LLM-Based Approach to Stock Market Forecasting | Michał Wawer, Jarosław A. Chudziak et.al. | 2506.16813 | | 12 pages, 8 figures, 1 table. This is the accepted version of the paper presented at the 17th International Conference on Agents and Artificial Intelligence (ICAART 2025), Porto, Portugal | Abstract (click to expand)Traditional technical analysis methods face limitations in accurately predicting trends in today's complex financial markets. This paper introduces ElliottAgents, a multi-agent system that integrates the Elliott Wave Principle with AI for stock market forecasting. The inherent complexity of financial markets, characterized by non-linear dynamics, noise, and susceptibility to unpredictable external factors, poses significant challenges for accurate prediction. To address these challenges, the system employs LLMs to enhance natural language understanding and decision-making capabilities within a multi-agent framework. By leveraging technologies such as Retrieval-Augmented Generation (RAG) and Deep Reinforcement Learning (DRL), ElliottAgents performs continuous, multi-faceted analysis of market data to identify wave patterns and predict future price movements. The research explores the system's ability to process historical stock data, recognize Elliott wave patterns, and generate actionable insights for traders. Experimental results, conducted on historical data from major U.S. companies, validate the system's effectiveness in pattern recognition and trend forecasting across various time frames. This paper contributes to the field of AI-driven financial analysis by demonstrating how traditional technical analysis methods can be effectively combined with modern AI approaches to create more reliable and interpretable market prediction systems.
2025-06-20 | Pre-training Time Series Models with Stock Data Customization | Mengyu Wang, Tiejun Ma, Shay B. Cohen et.al. | 2506.16746 | link | Accepted by KDD 2025 | Abstract (click to expand)Stock selection, which aims to predict stock prices and identify the most profitable ones, is a crucial task in finance. While existing methods primarily focus on developing model structures and building graphs for improved selection, pre-training strategies remain underexplored in this domain. Current stock series pre-training follows methods from other areas without adapting to the unique characteristics of financial data, particularly overlooking stock-specific contextual information and the non-stationary nature of stock prices. Consequently, the latent statistical features inherent in stock data are underutilized. In this paper, we propose three novel pre-training tasks tailored to stock data characteristics: stock code classification, stock sector classification, and moving average prediction. We develop the Stock Specialized Pre-trained Transformer (SSPT) based on a two-layer transformer architecture. Extensive experimental results validate the effectiveness of our pre-training methods and provide detailed guidance on their application. Evaluations on five stock datasets, including four markets and two time periods, demonstrate that SSPT consistently outperforms the market and existing methods in terms of both cumulative investment return ratio and Sharpe ratio. Additionally, our experiments on simulated data investigate the underlying mechanisms of our methods, providing insights into understanding price series. Our code is publicly available at: https://github.com/astudentuser/Pre-training-Time-Series-Models-with-Stock-Data-Customization. |
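Of the three pretext tasks, moving average prediction is the easiest to picture: the model sees a price window and must predict its rolling means. A sketch of constructing such targets (window lengths are our illustrative assumptions):

```python
# Building moving-average-prediction targets from a closing-price series.
import numpy as np
import pandas as pd

close = pd.Series(np.random.default_rng(2).standard_normal(256).cumsum() + 100)
targets = pd.DataFrame({f"ma_{w}": close.rolling(w).mean() for w in (5, 10, 20)})
dataset = close.to_frame("close").join(targets).dropna()  # (input, target) rows
print(dataset.tail(3))
```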
2025-06-19 | Aethorix v1.0: AI-Driven Inverse Design of Inorganic Materials for Scalable Industrial Innovation | Yingjie Shi, Runtian Miao et.al. | 2506.16609 | | | Abstract (click to expand)Artificial Intelligence for Science (AI4S) is poised to transform industrial manufacturing by enabling the accelerated discovery and optimization of advanced (bio)materials, dramatically reducing development cycles, and unlocking novel high-performance solutions. We introduce Aethorix v1.0, a platform that integrates large language models for objective mining, diffusion-based generative models for zero-shot inorganic crystal design, and machine-learned interatomic potentials for rapid property prediction at ab initio accuracy. The platform is developed to enhance the full materials development cycle, ranging from design to deployment in use cases, while incorporating critical operational constraints to meet rigorous manufacturing standards. We validated its industrial value through a real use case, showcasing how the framework can be seamlessly embedded into scalable materials R&D pipelines.
2025-06-19 | Fast Converging Single Trace Quasi-local PMCHWT Equation for the Modelling of Composite Systems | Kristof Cools et.al. | 2506.16376 | | | Abstract (click to expand)The PMCHWT integral equation enables the modelling of scattering of time-harmonic fields by penetrable, piecewise homogeneous systems. It has been generalised to include the modelling of composite systems that may contain junctions, i.e. lines along which three or more materials meet. Linear systems resulting upon discretisation of the PMCHWT are, because of their large dimension, typically solved by Krylov iterative methods. The number of iterations required for this solution critically depends on the eigenvalue distribution of the system matrix. For systems that do not contain junction lines, Calder\'on preconditioning, which was first applied to the electric field integral equation, has been generalised to the PMCHWT equation. When junctions are present, this approach cannot be applied. Alternative approaches, such as the global multi-trace method, conceptually remove the junction lines and as a result are amenable to Calder\'on preconditioning. This approach entails a doubling of the degrees of freedom, and the solution that is produced only approximately fulfils the continuity conditions at interfaces separating domains. In this contribution, a single trace quasi-local PMCHWT equation is introduced that requires a number of iterations for its solution that only slowly increases as the mesh size tends to zero. The method is constructed as a generalisation of the classic PMCHWT, and its discretisation is thoroughly discussed. A comprehensive suite of numerical experiments demonstrates the correctness, convergence behaviour, and efficiency of the method. The integral equation is demonstrated to be free from interior resonances.
2025-06-19 | A Fast Iterative Robust Principal Component Analysis Method | Timbwaoga Aime Judicael Ouermi, Jixian Li, Chris R. Johnson et.al. | 2506.16013 | | | Abstract (click to expand)Principal Component Analysis (PCA) is widely used for dimensionality reduction and data analysis. However, PCA results are adversely affected by outliers often observed in real-world data. Existing robust PCA methods are often computationally expensive or exhibit limited robustness. In this work, we introduce a Fast Iterative Robust (FIR) PCA method that efficiently estimates the inliers' center location and covariance. Our approach leverages Incremental PCA (IPCA) to iteratively construct a subset of data points that ensures improved location and covariance estimation, which effectively mitigates the influence of outliers on the PCA projection. We demonstrate that our method achieves competitive accuracy and performance compared to existing robust location and covariance methods while offering improved robustness to outlier contamination. We utilize simulated and real-world datasets to evaluate and demonstrate the efficacy of our approach in identifying and preserving underlying data structures in the presence of contamination.
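A schematic of the iterative inlier re-estimation idea: alternately score points by Mahalanobis distance and refit location/covariance on the most central subset. This is our simplification for illustration; the authors' FIR method builds the subset via Incremental PCA rather than the plain trimming below.

```python
# Schematic iterative robust location/covariance estimation (not the exact
# FIR algorithm): rank points by Mahalanobis distance, refit on the best half.
import numpy as np

def robust_center_cov(X: np.ndarray, n_iter: int = 10, keep: float = 0.5):
    idx = np.arange(len(X))
    for _ in range(n_iter):
        mu = X[idx].mean(axis=0)
        cov = np.cov(X[idx], rowvar=False)
        d = np.einsum("ij,jk,ik->i", X - mu, np.linalg.inv(cov), X - mu)
        idx = np.argsort(d)[: int(keep * len(X))]    # keep most central points
    return X[idx].mean(axis=0), np.cov(X[idx], rowvar=False)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (450, 2)), rng.normal(8, 0.5, (50, 2))])
mu, cov = robust_center_cov(X)
print(mu.round(2))  # near [0, 0] despite the 10% outlier cluster at (8, 8)
```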
2025-06-18 | Finance Language Model Evaluation (FLaME) | Glenn Matlin, Mika Okamoto, Huzaifa Pardawala et.al. | 2506.15846 | | | Abstract (click to expand)Language Models (LMs) have demonstrated impressive capabilities with core Natural Language Processing (NLP) tasks. The effectiveness of LMs for highly specialized knowledge-intensive tasks in finance remains difficult to assess due to major gaps in the methodologies of existing evaluation frameworks, which have fostered the erroneous belief that LMs perform far worse on common Finance NLP (FinNLP) tasks than they actually do. To demonstrate the potential of LMs for these FinNLP tasks, we present the first holistic benchmarking suite for Financial Language Model Evaluation (FLaME). This is the first research paper to comprehensively study LMs against 'reasoning-reinforced' LMs, with an empirical study of 23 foundation LMs over 20 core NLP tasks in finance. We open-source our framework software along with all data and results.
2025-06-18 | Simulation of parametrized cardiac electrophysiology in three dimensions using physics-informed neural networks | Roshan Antony Gomez, Julien Stöcker, Barış Cansız et.al. | 2506.15405 | | | Abstract (click to expand)Physics-informed neural networks (PINNs) are extensively used to represent various physical systems across multiple scientific domains. The same can be said for cardiac electrophysiology, wherein fully-connected neural networks (FCNNs) have been employed to predict the evolution of an action potential in a 2D space following the two-parameter phenomenological Aliev-Panfilov (AP) model. In this paper, the training behaviour of PINNs is investigated to determine optimal hyperparameters to predict the electrophysiological activity of the myocardium in 3D according to the AP model, with the inclusion of boundary and material parameters. An FCNN architecture is employed with the governing partial differential equations in their strong form, which are scaled consistently with normalization of network inputs. The finite element (FE) method is used to generate training data for the network. Numerical examples with varying spatial dimensions and parameterizations are generated using the trained models. The network predicted fields for both the action potential and the recovery variable are compared with the respective FE simulations. Network losses are weighted with individual scalar values. Their effect on training and prediction is studied to arrive at a method of controlling losses during training.
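For orientation, the two-parameter Aliev-Panfilov model referenced above is commonly written as follows, with normalized action potential u and recovery variable r; parameter conventions vary across the literature, so this is one common form rather than the paper's exact statement:

```latex
% A common form of the Aliev--Panfilov model (parameter conventions vary):
\frac{\partial u}{\partial t} = \nabla \cdot (D \nabla u) + k\,u(u - a)(1 - u) - u\,r,
\qquad
\frac{\partial r}{\partial t} = \left( \varepsilon_0 + \frac{\mu_1 r}{\mu_2 + u} \right)
\left( -r - k\,u(u - b - 1) \right).
```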
2025-06-18 | Minimizing Structural Vibrations via Guided Flow Matching Design Optimization | Jan van Delden, Julius Schultz, Sebastian Rothe et.al. | 2506.15263 | link | | Abstract (click to expand)Structural vibrations are a source of unwanted noise in engineering systems like cars, trains or airplanes. Minimizing these vibrations is crucial for improving passenger comfort. This work presents a novel design optimization approach based on guided flow matching for reducing vibrations by placing beadings (indentations) in plate-like structures. Our method integrates a generative flow matching model and a surrogate model trained to predict structural vibrations. During the generation process, the flow matching model pushes towards manufacturability while the surrogate model pushes to low-vibration solutions. The flow matching model and its training data implicitly define the design space, enabling a broader exploration of potential solutions as no optimization of manually-defined design parameters is required. We apply our method to a range of differentiable optimization objectives, including direct optimization of specific eigenfrequencies through careful construction of the objective function. Results demonstrate that our method generates diverse and manufacturable plate designs with reduced structural vibrations compared to designs from random search, a criterion-based design heuristic and genetic optimization. The code and data are available from https://github.com/ecker-lab/Optimizing_Vibrating_Plates. |
2025-06-17 | Explain First, Trust Later: LLM-Augmented Explanations for Graph-Based Crypto Anomaly Detection | Adriana Watson et.al. | 2506.14933 | link | 6 pages, 4 figures. Code available at: https://github.com/awatson246/crypto-anomaly-detection-policy | Abstract (click to expand)The decentralized finance (DeFi) community has grown rapidly in recent years, pushed forward by cryptocurrency enthusiasts interested in the vast untapped potential of new markets. The surge in popularity of cryptocurrency has ushered in a new era of financial crime. Unfortunately, the novelty of the technology makes the task of catching and prosecuting offenders particularly challenging. Thus, it is necessary to implement automated detection tools related to policies to address the growing criminality in the cryptocurrency realm. |
2025-06-17 | Optimistic MEV in Ethereum Layer 2s: Why Blockspace Is Always in Demand | Ozan Solmaz, Lioba Heimbach, Yann Vonlanthen et.al. | 2506.14768 | | | Abstract (click to expand)Layer 2 rollups are rapidly absorbing DeFi activity, securing over $40 billion and accounting for nearly half of Ethereum's DEX volume by Q1 2025, yet their MEV dynamics remain understudied. We address this gap by defining and quantifying optimistic MEV, a form of speculative, on-chain cyclic arbitrage whose detection and execution logic reside largely on-chain in smart contracts. As a result of their speculative nature and lack of off-chain opportunity verification, optimistic MEV transactions frequently fail to execute a profitable arbitrage. Applying our multi-stage identification pipeline to Arbitrum, Base, and Optimism, we find that in Q1 2025, optimistic MEV accounts for over 50% of on-chain gas on Base and Optimism and 7% on Arbitrum, driven mainly by "interaction" probes (on-chain computations searching for arbitrage). This speculative probing keeps blocks on Base and Optimism persistently full. Despite consuming over half of on-chain gas, optimistic MEV transactions pay less than one quarter of total gas fees. Cross-network comparison reveals divergent success rates, differing patterns of code reuse, and sensitivity to varying sequencer ordering and block production times. Finally, OLS regressions link optimistic MEV trade count to ETH volatility, retail trading activity, and DEX aggregator usage, showing how Layer 2 protocol parameters uniquely encourage speculative MEV. |
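The core profitability test behind such cyclic arbitrage is whether exchange rates multiply to more than one around a cycle after fees. A toy illustration (rates and fee are made up; real probes run this logic on-chain against live pool states):

```python
# Toy cyclic-arbitrage check: a swap cycle is profitable when the product of
# exchange rates, net of per-swap fees, exceeds 1. Numbers are illustrative.
from math import prod

def cycle_profitable(rates: list[float], fee: float = 0.003) -> bool:
    return prod(r * (1 - fee) for r in rates) > 1.0

# ETH -> USDC -> DAI -> ETH: 2500 * 1.002 * (1/2480) gives ~1.0% gross edge
print(cycle_profitable([2500.0, 1.002, 1 / 2480.0]))  # True after 0.3% fees
```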
2025-06-23 | Accurate and scalable exchange-correlation with deep learning | Giulia Luise, Chin-Wei Huang, Thijs Vogels et.al. | 2506.14665 | | Main: 13 pages plus references, 11 figures and tables. Supplementary information: 19 pages, 12 figures and tables. v2 update: fix rendering of figure 1 and part of figure 5 in Safari PDF viewer. v3 update: update author information and fix typo | Abstract (click to expand)Density Functional Theory (DFT) is the most widely used electronic structure method for predicting the properties of molecules and materials. Although DFT is, in principle, an exact reformulation of the Schr\"odinger equation, practical applications rely on approximations to the unknown exchange-correlation (XC) functional. Most existing XC functionals are constructed using a limited set of increasingly complex, hand-crafted features that improve accuracy at the expense of computational efficiency. Yet, no current approximation achieves the accuracy and generality for predictive modeling of laboratory experiments at chemical accuracy -- typically defined as errors below 1 kcal/mol. In this work, we present Skala, a modern deep learning-based XC functional that bypasses expensive hand-designed features by learning representations directly from data. Skala achieves chemical accuracy for atomization energies of small molecules while retaining the computational efficiency typical of semi-local DFT. This performance is enabled by training on an unprecedented volume of high-accuracy reference data generated using computationally intensive wavefunction-based methods. Notably, Skala systematically improves with additional training data covering diverse chemistry. By incorporating a modest amount of additional high-accuracy data tailored to chemistry beyond atomization energies, Skala achieves accuracy competitive with the best-performing hybrid functionals across general main group chemistry, at the cost of semi-local DFT. As the training dataset continues to expand, Skala is poised to further enhance the predictive power of first-principles simulations. |
2025-06-17 | A Machine Learning Framework for Climate-Resilient Insurance and Real Estate Decisions | Lang Qin, Yuejin Xie, Daili Hua et.al. | 2506.14638 | | 26 pages, 12 figures | Abstract (click to expand)Extreme weather events increasingly threaten the insurance and real estate industries, creating conflicts between profitability and homeowner burdens. To address this, we propose the SSC-Insurance Model, which integrates SMOTE, SVM, and C-D-C algorithms to evaluate weather impacts on policies and investments. Our model achieves 88.3% accuracy in Zhejiang and 79.6% in Ireland, identifying a critical threshold (43% weather increase) for insurance viability. Additionally, we develop the TOA-Preservation Model using TOPSIS-ORM and AHP to prioritize building protection, with cultural value scoring highest (weight: 0.3383). Case studies on Nanxun Ancient Town show a 65.32% insurability probability and a protection score of 0.512. This work provides actionable tools for insurers, developers, and policymakers to manage climate risks sustainably. |
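The SMOTE and SVM ingredients of the pipeline are commonly chained as below; this is a generic sketch with standard libraries (imbalanced-learn, scikit-learn) on synthetic data, not the authors' SSC-Insurance configuration, and it omits the C-D-C component.

```python
# Generic SMOTE + SVM chain; data and hyperparameters are illustrative.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
clf = Pipeline([("scale", StandardScaler()),
                ("smote", SMOTE(random_state=0)),  # oversample the rare class
                ("svm", SVC(kernel="rbf"))])
print(cross_val_score(clf, X, y, cv=5, scoring="f1").mean().round(3))
```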
2025-06-17 | Collaborative Charging Scheduling via Balanced Bounding Box Methods | Fangting Zhou, Balázs Kulcsár, Jiaming Wu et.al. | 2506.14461 | | | Abstract (click to expand)Electric mobility faces several challenges, most notably the high cost of infrastructure development and the underutilization of charging stations. The concept of shared charging offers a promising solution. The paper explores sustainable urban logistics through horizontal collaboration between two fleet operators and addresses a scheduling problem for the shared use of charging stations. To tackle this, the study formulates a collaborative scheduling problem as a bi-objective nonlinear integer programming model, in which each company aims to minimize its own costs, creating inherent conflicts that require trade-offs. The Balanced Bounding Box Methods (B3Ms) are introduced in order to efficiently derive the efficient frontier, identifying a reduced set of representative solutions. These methods enhance computational efficiency by selectively disregarding closely positioned and competing solutions, preserving the diversity and representativeness of the solutions over the efficient frontier. To determine the final solution and ensure balanced collaboration, cooperative bargaining methods are applied. Numerical case studies demonstrate the viability and scalability of the developed methods, showing that the B3Ms can significantly reduce computational time while maintaining the integrity of the frontier. These methods, along with cooperative bargaining, provide an effective framework for solving various bi-objective optimization problems, extending beyond the collaborative scheduling problem presented here. |
2025-06-17 | Active Digital Twins via Active Inference | Matteo Torzoni, Domenico Maisto, Andrea Manzoni et.al. | 2506.14453 | | | Abstract (click to expand)Digital twins are transforming engineering and applied sciences by enabling real-time monitoring, simulation, and predictive analysis of physical systems and processes. However, conventional digital twins rely primarily on passive data assimilation, which limits their adaptability in uncertain and dynamic environments. This paper introduces the active digital twin paradigm, based on active inference. Active inference is a neuroscience-inspired, Bayesian framework for probabilistic reasoning and predictive modeling that unifies inference, decision-making, and learning under a unique, free energy minimization objective. By formulating the evolution of the active digital twin as a partially observable Markov decision process, the active inference agent continuously refines its generative model through Bayesian updates and forecasts future states and observations. Decision-making emerges from an optimization process that balances pragmatic exploitation (maximizing goal-directed utility) and epistemic exploration or information gain (actively resolving uncertainty). Actions are dynamically planned to minimize expected free energy, which quantifies both the divergence between predicted and preferred future observations, and the epistemic value of expected information gain about hidden states. This approach enables a new level of autonomy and resilience in digital twins, offering superior spontaneous exploration capabilities. The proposed framework is assessed on the health monitoring and predictive maintenance of a railway bridge. |
2025-06-18 | Higher-Order Splitting Schemes for Fluids with Variable Viscosity | Richard Schussnig, Niklas Fehn, Douglas Ramalho Queiroz Pacheco et.al. | 2506.14424 | | | Abstract (click to expand)This article investigates matrix-free higher-order discontinuous Galerkin (DG) discretizations of the Navier-Stokes equations for incompressible flows with variable viscosity. The viscosity field may be prescribed analytically or governed by a rheological law, as often found in biomedical or industrial applications. The DG discretization of the adapted second-order viscous terms is carried out via the symmetric interior penalty Galerkin method, obviating auxiliary variables. Based on this spatial discretization, we compare several linearized variants of saddle point block systems and projection-based splitting time integration schemes in terms of their computational performance. Compared to the velocity-pressure block-system for the former, the splitting scheme allows solving a sequence of simple problems such as mass, convection-diffusion and Poisson equations. We investigate under which conditions the improved temporal stability of fully implicit schemes and resulting expensive nonlinear solves outperform the splitting schemes and linearized variants that are stable under hyperbolic time step restrictions. The key aspects of this work are i) a higher-order DG discretization for incompressible flows with variable viscosity, ii) accelerated nonlinear solver variants and suitable linearizations adopting a matrix-free \(hp\)-multigrid solver, and iii) a detailed comparison of the monolithic and projection-based solvers in terms of their (non-)linear solver performance. The presented schemes are evaluated in a series of numerical examples verifying their spatial and temporal accuracy, and the preconditioner performance under increasing viscosity contrasts, while their efficiency is showcased in the backward-facing step benchmark.
2025-06-16 | Kolmogorov-Arnold Network for Gene Regulatory Network Inference | Tsz Pan Tong, Aoran Wang, George Panagopoulos et.al. | 2506.13740 | link | 26 pages, 14 figures, accepted in CMSB 2025 | Abstract (click to expand)Gene regulation is central to understanding cellular processes and development, potentially leading to the discovery of new treatments for diseases and personalized medicine. Inferring gene regulatory networks (GRNs) from single-cell RNA sequencing (scRNA-seq) data presents significant challenges due to its high dimensionality and complexity. Existing tree-based models, such as GENIE3 and GRNBOOST2, demonstrated scalability and explainability in GRN inference, but they can neither distinguish regulation types nor effectively capture continuous cellular dynamics. In this paper, we introduce scKAN, a novel model that employs a Kolmogorov-Arnold network (KAN) with explainable AI to infer GRNs from scRNA-seq data. By modeling gene expression as differentiable functions matching the smooth nature of cellular dynamics, scKAN can accurately and precisely detect activation and inhibition regulations through explainable AI and geometric tools. We conducted extensive experiments on the BEELINE benchmark, where scKAN surpasses the leading signed GRN inference models, improving AUROC by 5.40\% to 28.37\% and AUPRC by 1.97\% to 40.45\%. These results highlight the potential of scKAN in capturing the underlying biological processes in gene regulation without prior knowledge of the graph structure.
2025-06-16 | Constitutive Manifold Neural Networks | Wouter J. Schuttert, Mohammed Iqbal Abdul Rasheed, Bojana Rosić et.al. | 2506.13648 | | | Abstract (click to expand)Important material properties like thermal conductivity are often represented as symmetric positive definite (SPD) tensors, which exhibit variability due to inherent material heterogeneity and manufacturing uncertainties. These tensors reside on a curved Riemannian manifold, and accurately modeling their stochastic nature requires preserving both their symmetric positive definite properties and spatial symmetries. To achieve this, uncertainties are parametrized into scaling (magnitude) and rotation (orientation) components, modeled as independent random variables on a manifold structure derived from the maximum entropy principle. The propagation of such stochastic tensors through physics-based simulations necessitates computationally efficient surrogate models. However, traditional multi-layer perceptron (MLP) architectures are not well-suited for SPD tensors, as directly inputting their components fails to preserve their geometric properties, often leading to suboptimal results. To address this, we introduce Constitutive Manifold Neural Networks (CMNN). The approach adds a preprocessing layer that maps the SPD tensor from the curved manifold to the local tangent space, a flat vector space, creating an information-preserving map whose output is fed to the hidden layers of the neural network. A case study on a steady-state heat conduction problem with stochastic anisotropic conductivity demonstrates that geometry-preserving preprocessing, such as logarithmic maps for scaling data, significantly improves learning performance over conventional MLPs. These findings underscore the importance of manifold-aware techniques when working with tensor-valued data in engineering applications.
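The geometry-preserving preprocessing described above can be sketched with the standard matrix logarithm and exponential maps for SPD tensors. The snippet below is a generic illustration of that log-Euclidean idea, not the paper's code; the function names are ours.

```python
# Sketch of geometry-preserving preprocessing for SPD tensors: map an SPD
# conductivity tensor to a flat tangent space via the matrix logarithm, so
# its components can be fed to a standard MLP without leaving the manifold.
import numpy as np

def spd_log_map(K):
    """Matrix logarithm of an SPD tensor via eigendecomposition."""
    eigval, eigvec = np.linalg.eigh(K)          # SPD => real positive eigenvalues
    return eigvec @ np.diag(np.log(eigval)) @ eigvec.T

def spd_exp_map(L):
    """Inverse map: from the tangent space back to the SPD manifold."""
    eigval, eigvec = np.linalg.eigh(L)
    return eigvec @ np.diag(np.exp(eigval)) @ eigvec.T

# Example: anisotropic 2x2 conductivity tensor.
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
L = spd_log_map(K)          # flat-space features, safe as network input
K_back = spd_exp_map(L)     # round trip recovers the original tensor
assert np.allclose(K, K_back)
```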
2025-06-16 | An Entropy-Stable/Double-Flux scheme for the multi-component compressible Navier-Stokes equations | Vahid Badrkhani, T. Jeremy P. Karpowsk, Christian Hasse et.al. | 2506.13231 | | | Abstract (click to expand)We present a novel combination of numerical techniques to improve the efficiency, accuracy, and robustness of multi-component compressible flow simulations. At the core of our approach is an Entropy-Stable formulation that preserves kinetic energy and integrates a Double-Flux scheme tailored for multi-component flows with variable specific heat ratios. This formulation yields low-dissipation, oscillation-free solutions and enhances stability compared to standard fully conservative methods. To further improve robustness, we introduce a new hybrid dissipation strategy that blends the Entropy-Stable/Double-Flux approach with conventional dissipation mechanisms. We provide a rigorous proof that the resulting numerical flux satisfies a semi-discrete entropy inequality, ensuring consistency with the second law of thermodynamics. For time integration, we employ an explicit Runge-Kutta scheme in combination with adaptive mesh refinement to capture local flow features dynamically. The method is implemented within an existing compressible Navier-Stokes solver based on OpenFOAM. Benchmark cases, including multi-dimensional interface and shock-interface interactions, demonstrate the effectiveness of the proposed framework. The results confirm its favorable stability and robustness, validating the approach as a promising advancement for high-fidelity simulations of supersonic flows. |
2025-06-16 | A modified Newmark/Newton-Raphson method with automatic differentiation for general nonlinear dynamics analysis | Yifan Jiang, Yuhong Jin, Lei Hou et.al. | 2506.13226 | link | 18 pages, 9 figures | Abstract (click to expand)The Newmark/Newton-Raphson (NNR) method is widely employed for solving nonlinear dynamic systems. However, the current NNR method exhibits limited applicability in complex nonlinear dynamic systems, as the acquisition of the Jacobian matrix required for Newton iterations incurs substantial computational costs and may even prove intractable in certain cases. To address these limitations, we integrate automatic differentiation (AD) into the NNR method, proposing a modified NNR method with AD (NNR-AD) to significantly improve its capability for effectively handling complex nonlinear systems. We have demonstrated that the NNR-AD method can directly solve dynamic systems with complex nonlinear characteristics, and its accuracy and generality have been rigorously validated. Furthermore, automatic differentiation significantly simplifies the computation of Jacobian matrices for such complex nonlinear dynamic systems. This improvement endows the NNR method with enhanced modularity, thereby enabling convenient and effective solutions for complex nonlinear dynamic systems. |
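To make the core idea concrete, here is a minimal sketch, with an assumed scalar Duffing-oscillator test problem rather than the paper's implementation, of a Newmark step whose Newton-Raphson iterations obtain the residual derivative by automatic differentiation (JAX here) instead of a hand-derived Jacobian.

```python
# Minimal sketch of the NNR-AD idea (assumed test problem, not the authors'
# code): inside each Newmark step, the derivative of the nonlinear residual
# needed by Newton-Raphson comes from automatic differentiation.
import jax
import jax.numpy as jnp

m, c, k, k3 = 1.0, 0.05, 1.0, 0.5     # Duffing oscillator parameters
beta, gamma, dt = 0.25, 0.5, 0.01     # Newmark parameters

def residual(u_new, u, v, a, f):
    # Newmark kinematics expressed in terms of the unknown displacement.
    a_new = (u_new - u - dt * v) / (beta * dt**2) - (1 - 2 * beta) / (2 * beta) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return m * a_new + c * v_new + k * u_new + k3 * u_new**3 - f

dr_du = jax.grad(residual)            # AD supplies the "Jacobian"

def newmark_step(u, v, a, f, iters=10):
    u_new = u
    for _ in range(iters):            # Newton-Raphson iterations
        u_new = u_new - residual(u_new, u, v, a, f) / dr_du(u_new, u, v, a, f)
    a_new = (u_new - u - dt * v) / (beta * dt**2) - (1 - 2 * beta) / (2 * beta) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new

u, v, a = 1.0, 0.0, 0.0               # free vibration from an initial offset
for _ in range(200):
    u, v, a = newmark_step(u, v, a, f=0.0)
print(u)
```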
2025-06-14 | The Software Landscape for the Density Matrix Renormalization Group | Per Sehlstedt, Jan Brandejs, Paolo Bientinesi et.al. | 2506.12629 | link | | Abstract (click to expand)The density matrix renormalization group (DMRG) algorithm is a cornerstone computational method for studying quantum many-body systems, renowned for its accuracy and adaptability. Despite DMRG's broad applicability across fields such as materials science, quantum chemistry, and quantum computing, numerous independent implementations have been developed. This survey maps the rapidly expanding DMRG software landscape, providing a comprehensive comparison of features among 35 existing packages. We found significant overlap in features among the packages when comparing key aspects, such as parallelism strategies for high-performance computing and symmetry-adapted formulations that enhance efficiency. This overlap suggests opportunities for modularization of common operations, including tensor operations, symmetry representations, and eigensolvers, as the packages are mostly independent and share few third-party library dependencies where functionality is factored out. More widespread modularization and standardization would result in reduced duplication of efforts and improved interoperability. We believe that the proliferation of packages and the current lack of standard interfaces and modularity are more social than technical. We aim to raise awareness of existing packages, guide researchers in finding a suitable package for their needs, and help developers identify opportunities for collaboration, modularity standardization, and optimization. Ultimately, this work emphasizes the value of greater cohesion and modularity, which would benefit DMRG software, allowing these powerful algorithms to tackle more complex and ambitious problems. |
2025-06-13 | Interpretable Classification of Levantine Ceramic Thin Sections via Neural Networks | Sara Capriotti, Alessio Devoto, Simone Scardapane et.al. | 2506.12250 | | Accepted for publication in Machine Learning: Science and Technology | Abstract (click to expand)Classification of ceramic thin sections is fundamental for understanding ancient pottery production techniques, provenance, and trade networks. Although effective, traditional petrographic analysis is time-consuming. This study explores the application of deep learning models, specifically Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), as complementary tools to support the classification of Levantine ceramics based on their petrographic fabrics. A dataset of 1,424 thin section images from 178 ceramic samples belonging to several archaeological sites across the Levantine area, mostly from the Bronze Age, with few samples dating to the Iron Age, was used to train and evaluate these models. The results demonstrate that transfer learning significantly improves classification performance, with a ResNet18 model achieving 92.11% accuracy and a ViT reaching 88.34%. Explainability techniques, including Guided Grad-CAM and attention maps, were applied to interpret and visualize the models' decisions, revealing that both CNNs and ViTs successfully focus on key mineralogical features for the classification of the samples into their respective petrographic fabrics. These findings highlight the potential of explainable AI in archaeometric studies, providing a reproducible and efficient methodology for ceramic analysis while maintaining transparency in model decision-making. |
2025-06-13 | CLEAN-MI: A Scalable and Efficient Pipeline for Constructing High-Quality Neurodata in Motor Imagery Paradigm | Dingkun Liu, Zhu Chen, Dongrui Wu et.al. | 2506.11830 | | 10 pages, 6 figures | Abstract (click to expand)The construction of large-scale, high-quality datasets is a fundamental prerequisite for developing robust and generalizable foundation models in motor imagery (MI)-based brain-computer interfaces (BCIs). However, EEG signals collected from different subjects and devices are often plagued by low signal-to-noise ratio, heterogeneity in electrode configurations, and substantial inter-subject variability, posing significant challenges for effective model training. In this paper, we propose CLEAN-MI, a scalable and systematic pipeline for constructing large-scale, efficient, and accurate neurodata in the MI paradigm. CLEAN-MI integrates frequency band filtering, channel template selection, subject screening, and marginal distribution alignment to systematically filter out irrelevant or low-quality data and standardize multi-source EEG datasets. We demonstrate the effectiveness of CLEAN-MI on multiple public MI datasets, achieving consistent improvements in data quality and classification performance.
2025-06-13 | On the performance of multi-fidelity and reduced-dimensional neural emulators for inference of physiologic boundary conditions | Chloe H. Choi, Andrea Zanoni, Daniele E. Schiavazzi et.al. | 2506.11683 | | | Abstract (click to expand)Solving inverse problems in cardiovascular modeling is particularly challenging due to the high computational cost of running high-fidelity simulations. In this work, we focus on Bayesian parameter estimation and explore different methods to reduce the computational cost of sampling from the posterior distribution by leveraging low-fidelity approximations. A common approach is to construct a surrogate model for the high-fidelity simulation itself. Another is to build a surrogate for the discrepancy between high- and low-fidelity models. This discrepancy, which is often easier to approximate, is modeled with either a fully connected neural network or a nonlinear dimensionality reduction technique that enables surrogate construction in a lower-dimensional space. A third possible approach is to treat the discrepancy between the high-fidelity and surrogate models as random noise and estimate its distribution using normalizing flows. This allows us to incorporate the approximation error into the Bayesian inverse problem by modifying the likelihood function. We validate five methods, all variations of the above approaches, on analytical test cases by comparing them to posterior distributions derived solely from high-fidelity models, assessing both accuracy and computational cost. Finally, we demonstrate our approaches on two cardiovascular examples of increasing complexity: a lumped-parameter Windkessel model and a patient-specific three-dimensional anatomy.
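The discrepancy-surrogate variant is simple to sketch. The toy analytic models below stand in for the high- and low-fidelity cardiovascular solvers (both are assumptions for illustration): a small network learns delta(theta) = HF(theta) - LF(theta), and LF + delta then serves as a cheap stand-in for HF during posterior sampling.

```python
# Sketch of the discrepancy-surrogate idea with assumed toy models (not the
# paper's solvers): learn the high/low-fidelity gap, then correct the cheap
# model with it.
import torch
import torch.nn as nn

def hf(theta):  # "high-fidelity" model (expensive in practice)
    return torch.sin(3 * theta) + 0.1 * theta**2

def lf(theta):  # "low-fidelity" approximation
    return torch.sin(3 * theta)

theta_train = torch.linspace(-2, 2, 64).unsqueeze(-1)
delta_train = hf(theta_train) - lf(theta_train)

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(theta_train) - delta_train) ** 2).mean()
    loss.backward()
    opt.step()

theta = torch.tensor([[0.7]])
print(hf(theta).item(), (lf(theta) + net(theta)).item())  # HF vs corrected LF
```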
2025-06-13 | PPDiff: Diffusing in Hybrid Sequence-Structure Space for Protein-Protein Complex Design | Zhenqiao Song, Tiaoxiao Li, Lei Li et.al. | 2506.11420 | | | Abstract (click to expand)Designing protein-binding proteins with high affinity is critical in biomedical research and biotechnology. Despite recent advancements targeting specific proteins, the ability to create high-affinity binders for arbitrary protein targets on demand, without extensive rounds of wet-lab testing, remains a significant challenge. Here, we introduce PPDiff, a diffusion model to jointly design the sequence and structure of binders for arbitrary protein targets in a non-autoregressive manner. PPDiff builds upon our developed Sequence Structure Interleaving Network with Causal attention layers (SSINC), which integrates interleaved self-attention layers to capture global amino acid correlations, k-nearest neighbor (kNN) equivariant graph layers to model local interactions in three-dimensional (3D) space, and causal attention layers to simplify the intricate interdependencies within the protein sequence. To assess PPDiff, we curate PPBench, a general protein-protein complex dataset comprising 706,360 complexes from the Protein Data Bank (PDB). The model is pretrained on PPBench and finetuned on two real-world applications: target-protein mini-binder complex design and antigen-antibody complex design. PPDiff consistently surpasses baseline methods, achieving success rates of 50.00%, 23.16%, and 16.89% for the pretraining task and the two downstream applications, respectively.
2025-06-12 | An Attention-based Spatio-Temporal Neural Operator for Evolving Physics | Vispi Karkaria, Doksoo Lee, Yi-Ping Chen et.al. | 2506.11328 | | | Abstract (click to expand)In scientific machine learning (SciML), a key challenge is learning unknown, evolving physical processes and making predictions across spatio-temporal scales. For example, in real-world manufacturing problems like additive manufacturing, users adjust known machine settings while unknown environmental parameters simultaneously fluctuate. To make reliable predictions, it is desired for a model to not only capture long-range spatio-temporal interactions from data but also adapt to new and unknown environments; traditional machine learning models excel at the first task but often lack physical interpretability and struggle to generalize under varying environmental conditions. To tackle these challenges, we propose the Attention-based Spatio-Temporal Neural Operator (ASNO), a novel architecture that combines separable attention mechanisms for spatial and temporal interactions and adapts to unseen physical parameters. Inspired by the backward differentiation formula (BDF), ASNO learns a transformer for temporal prediction and extrapolation and an attention-based neural operator for handling varying external loads, enhancing interpretability by isolating historical state contributions and external forces, enabling the discovery of underlying physical laws and generalizability to unseen physical environments. Empirical results on SciML benchmarks demonstrate that ASNO outperforms existing models, establishing its potential for engineering applications, physics discovery, and interpretable machine learning.
2025-06-12 | Spectral Analysis of Discretized Boundary Integral Operators in 3D: a High-Frequency Perspective | V. Giunzioni, A. Merlini, F. P. Andriulli et.al. | 2506.10880 | | | Abstract (click to expand)When modeling propagation and scattering phenomena using integral equations discretized by the boundary element method, it is common practice to approximate the boundary of the scatterer with a mesh comprising elements of size approximately equal to a fraction of the wavelength \(\lambda\) of the incident wave, e.g., \(\lambda/10\). In this work, by analyzing the spectra of the operator matrices, we show a discrepancy with respect to the continuous operators that grows with the simulation frequency, challenging the common belief that this widely used discretization approach is sufficient to keep the accuracy of the solution constant as the frequency increases.
2025-06-13 | PDESpectralRefiner: Achieving More Accurate Long Rollouts with Spectral Adjustment | Li Luo et.al. | 2506.10711 | | | Abstract (click to expand)Generating accurate and stable long rollouts is a notorious challenge for time-dependent PDEs (Partial Differential Equations). Recently, motivated by the importance of high-frequency accuracy, the refiner model PDERefiner uses diffusion models to refine the output at every time step, since the denoising process can improve the modeling of the high-frequency part. For the 1-D Kuramoto-Sivashinsky equation, refiner models damp the amplitude of the high-frequency part better than omitting the refinement process. For other cases, however, the spectrum can be more complicated: for a harder PDE such as the Navier-Stokes equations, diffusion models can over-damp the higher-frequency part. This motivates us to relax the constraint that every frequency is weighted equally. We enhance our refiner model with adjustments in spectral space, which recovers blurring diffusion models, and we develop a new v-prediction technique for blurring diffusion models that recovers the MSE training objective on the first refinement step. We show that in this case, for different model backbones such as U-Net and neural operators, the outputs of PDESpectralRefiner are more accurate in both one-step MSE loss and rollout loss.
2025-06-11 | Interpretable and flexible non-intrusive reduced-order models using reproducing kernel Hilbert spaces | Alejandro N Diaz, Shane A McQuarrie, John T Tencer et.al. | 2506.10224 | | | Abstract (click to expand)This paper develops an interpretable, non-intrusive reduced-order modeling technique using regularized kernel interpolation. Existing non-intrusive approaches approximate the dynamics of a reduced-order model (ROM) by solving a data-driven least-squares regression problem for low-dimensional matrix operators. Our approach instead leverages regularized kernel interpolation, which yields an optimal approximation of the ROM dynamics from a user-defined reproducing kernel Hilbert space. We show that our kernel-based approach can produce interpretable ROMs whose structure mirrors full-order model structure by embedding judiciously chosen feature maps into the kernel. The approach is flexible and allows a combination of informed structure through feature maps and closure terms via more general nonlinear terms in the kernel. We also derive a computable a posteriori error bound that combines standard error estimates for intrusive projection-based ROMs and kernel interpolants. The approach is demonstrated in several numerical experiments that include comparisons to operator inference using both proper orthogonal decomposition and quadratic manifold dimension reduction. |
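A bare-bones version of the regularized kernel interpolation at the heart of this approach might look as follows. This is an illustrative RBF-kernel sketch with a toy reduced system, not the authors' implementation; the dynamics and hyperparameters are assumptions.

```python
# Illustrative sketch of learning ROM dynamics xdot = f(x) by regularized
# kernel interpolation: f(x) = sum_j alpha_j k(x, x_j), with
# alpha = (K + lam*I)^{-1} Xdot.
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))              # reduced states (snapshots)
Xdot = np.stack([X[:, 1], -np.sin(X[:, 0])], 1)    # toy "true" reduced dynamics

lam = 1e-6                                         # regularization strength
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), Xdot)

def f_hat(x):                                      # learned vector field
    return rbf_kernel(np.atleast_2d(x), X) @ alpha

# Roll out the learned ROM with forward Euler.
x, dt = np.array([0.5, 0.0]), 0.01
for _ in range(500):
    x = x + dt * f_hat(x).ravel()
print(x)
```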
2025-06-11 | Exploring EEG Responses during Observation of Actions Performed by Human Actor and Humanoid Robot | Anh T. Nguyen, Ajay Anand, Michelle J. Johnson et.al. | 2506.10170 | | | Abstract (click to expand)Action observation (AO) therapy is a promising rehabilitative treatment for motor and language function in individuals recovering from neurological conditions, such as stroke. This pilot study aimed to investigate the potential of humanoid robots to support AO therapy in rehabilitation settings. The brain activity of three healthy right-handed participants was monitored with electroencephalography (EEG) while they observed eight different actions performed by two agents, a human actor and a robot, using their left and right arms. Their event-related spectral perturbations (ERSPs, changes in the spectral power of neural oscillations in response to an event or stimulus, compared to baseline) in sensorimotor regions were analyzed. The single-subject analysis showed variability in ERSP patterns among all participants, including power suppression in sensorimotor mu and beta rhythms. One participant showed stronger responses to "robot" AO conditions than to "human" conditions. Strong and positive correlations in ERSP across all conditions were observed for almost all participants and channels, implying common cognitive processes or neural networks at play in the mirror neuron system during AO. The results support the feasibility of using EEG to explore differences in neural responses to observation of robot- and human-induced actions. |
2025-06-11 | Improving the performance of optical inverse design of multilayer thin films using CNN-LSTM tandem neural networks | Uijun Jung, Deokho Jang, Sungchul Kim et.al. | 2506.10044 | | 22 pages, 8 figures, 2 tables, 11 supplementary figures, 7 supplementary tables | Abstract (click to expand)The optical properties of thin films are greatly influenced by the thickness of each layer. Accurately predicting these thicknesses and their corresponding optical properties is important in the optical inverse design of thin films. However, traditional inverse design methods usually demand extensive numerical simulations and optimization procedures, which are time-consuming. In this paper, we utilize deep learning for the inverse design of the transmission spectra of SiO2/TiO2 multilayer thin films. We implement a tandem neural network (TNN), which can solve the one-to-many mapping problem that greatly degrades the performance of deep-learning-based inverse designs. In general, the TNN has been implemented by a back-to-back connection of an inverse neural network and a pre-trained forward neural network, both based on multilayer perceptron (MLP) algorithms. In this paper, we propose to use not only MLP, but also convolutional neural network (CNN) or long short-term memory (LSTM) algorithms in the configuration of the TNN. We show that an LSTM-LSTM-based TNN yields the highest accuracy but takes the longest training time among nine configurations of TNNs. We also find that a CNN-LSTM-based TNN will be an optimal solution in terms of accuracy and speed because it integrates the strengths of the CNN and LSTM algorithms.
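The tandem trick itself is compact: freeze a pretrained forward model and train the inverse network through it, so the loss is measured in spectrum space rather than thickness space, sidestepping the one-to-many ambiguity. A minimal sketch with assumed layer sizes and placeholder data (not the paper's CNN/LSTM configurations):

```python
# Sketch of tandem neural network (TNN) training: the frozen forward model
# (thicknesses -> spectrum) turns inverse design into a spectrum-space fit.
import torch
import torch.nn as nn

n_layers, n_wavelengths = 8, 100
forward_net = nn.Sequential(nn.Linear(n_layers, 128), nn.ReLU(),
                            nn.Linear(128, n_wavelengths))
inverse_net = nn.Sequential(nn.Linear(n_wavelengths, 128), nn.ReLU(),
                            nn.Linear(128, n_layers))

for p in forward_net.parameters():   # forward model is pretrained and frozen
    p.requires_grad_(False)

opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
target = torch.rand(32, n_wavelengths)            # placeholder target spectra
for _ in range(100):
    opt.zero_grad()
    thickness = inverse_net(target)
    loss = ((forward_net(thickness) - target) ** 2).mean()  # spectrum-space loss
    loss.backward()                  # avoids the one-to-many ambiguity of
    opt.step()                       # regressing directly on thickness labels
```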
2025-06-11 | Causal Climate Emulation with Bayesian Filtering | Sebastian Hickman, Ilija Trajkovic, Julia Kaltenborn et.al. | 2506.09891 | | 32 pages, 21 figures | Abstract (click to expand)Traditional models of climate change use complex systems of coupled equations to simulate physical processes across the Earth system. These simulations are highly computationally expensive, limiting our predictions of climate change and analyses of its causes and effects. Machine learning has the potential to quickly emulate data from climate models, but current approaches are not able to incorporate physics-informed causal relationships. Here, we develop an interpretable climate model emulator based on causal representation learning. We derive a physics-informed approach including a Bayesian filter for stable long-term autoregressive emulation. We demonstrate that our emulator learns accurate climate dynamics, and we show the importance of each one of its components on a realistic synthetic dataset and data from two widely deployed climate models. |
2025-06-11 | Superstudent intelligence in thermodynamics | Rebecca Loubet, Pascal Zittlau, Marco Hoffmann et.al. | 2506.09822 | | This document is the unedited Author's version of a yet to be Submitted Work to Physical Review Physics Education Research. 15 pages, 2 figures, Graphical Abstract, Highlights and SI available (12 pages) | Abstract (click to expand)In this short note, we report and analyze a striking event: OpenAI's large language model o3 has outwitted all students in a university exam on thermodynamics. The thermodynamics exam is a difficult hurdle for most students, where they must show that they have mastered the fundamentals of this important topic. Consequently, the failure rates are very high, A-grades are rare - and they are considered proof of the students' exceptional intellectual abilities. This is because pattern learning does not help in the exam. The problems can only be solved by knowledgeably and creatively combining principles of thermodynamics. We have given our latest thermodynamics exam not only to the students but also to OpenAI's most powerful reasoning model, o3, and have assessed the answers of o3 exactly the same way as those of the students. In zero-shot mode, the model o3 solved all problems correctly, better than all students who took the exam; its overall score was in the range of the best scores we have seen in more than 10,000 similar exams since 1985. This is a turning point: machines now excel in complex tasks, usually taken as proof of human intellectual capabilities. We discuss the consequences this has for the work of engineers and the education of future engineers. |
2025-06-11 | Intelligent Design 4.0: Paradigm Evolution Toward the Agentic AI Era | Shuo Jiang, Min Xie, Frank Youhua Chen et.al. | 2506.09755 | | | Abstract (click to expand)Research and practice in Intelligent Design (ID) have significantly enhanced engineering innovation, efficiency, quality, and productivity over recent decades, fundamentally reshaping how engineering designers think, behave, and interact with design processes. The recent emergence of Foundation Models (FMs), particularly Large Language Models (LLMs), has demonstrated general knowledge-based reasoning capabilities and opened new avenues for further transformation in engineering design. In this context, this paper introduces Intelligent Design 4.0 (ID 4.0) as an emerging paradigm empowered by agentic AI systems. We review the historical evolution of ID across four distinct stages: rule-based expert systems, task-specific machine learning models, large-scale foundation AI models, and the recent emerging paradigm of multi-agent collaboration. We propose a conceptual framework for ID 4.0 and discuss its potential to support end-to-end automation of engineering design processes through coordinated, autonomous multi-agent-based systems. Furthermore, we discuss future perspectives to enhance and fully realize ID 4.0's potential, including more complex design scenarios, more practical design implementations, novel agent coordination mechanisms, and autonomous design goal-setting with better human value alignment. In sum, these insights lay a foundation for advancing Intelligent Design toward greater adaptivity, autonomy, and effectiveness in addressing increasingly complex design challenges.
2025-06-11 | Large Language Models for Design Structure Matrix Optimization | Shuo Jiang, Min Xie, Jianxi Luo et.al. | 2506.09749 | | | Abstract (click to expand)In complex engineering systems, the interdependencies among components or development activities are often modeled and analyzed using Design Structure Matrix (DSM). Reorganizing elements within a DSM to minimize feedback loops and enhance modularity or process efficiency constitutes a challenging combinatorial optimization (CO) problem in engineering design and operations. As problem sizes increase and dependency networks become more intricate, traditional optimization methods that solely use mathematical heuristics often fail to capture the contextual nuances and struggle to deliver effective solutions. In this study, we explore the potential of Large Language Models (LLMs) for helping solve such CO problems by leveraging their capabilities for advanced reasoning and contextual understanding. We propose a novel LLM-based framework that integrates network topology with contextual domain knowledge for iterative optimization of DSM element sequencing - a common CO problem. Experiments on various DSM cases show that our method consistently achieves faster convergence and superior solution quality compared to both stochastic and deterministic baselines. Notably, we find that incorporating contextual domain knowledge significantly enhances optimization performance regardless of the chosen LLM backbone. These findings highlight the potential of LLMs to solve complex engineering CO problems by combining semantic and mathematical reasoning. This approach paves the way towards a new paradigm in LLM-based engineering design optimization. |
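For readers unfamiliar with the underlying CO problem: reordering a DSM aims to minimize feedback marks, i.e., entries above the diagonal of the reordered matrix (in the inputs-in-rows convention). The sketch below shows the objective and a simple swap-based local search of the kind used as a baseline; it is illustrative only and contains none of the paper's LLM machinery.

```python
# Illustrative DSM sequencing baseline: count feedback marks under a given
# element ordering and reduce them by pairwise-swap local search.
import numpy as np

def feedback_count(dsm, order):
    P = dsm[np.ix_(order, order)]        # reorder rows and columns
    return int(np.triu(P, k=1).sum())    # above-diagonal marks = feedbacks

def local_search(dsm, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    order = list(range(len(dsm)))
    best = feedback_count(dsm, order)
    for _ in range(iters):
        i, j = rng.integers(len(order), size=2)
        order[i], order[j] = order[j], order[i]
        cost = feedback_count(dsm, order)
        if cost <= best:
            best = cost
        else:
            order[i], order[j] = order[j], order[i]   # revert the swap
    return order, best

rng = np.random.default_rng(1)
dsm = (rng.random((12, 12)) < 0.2).astype(int)       # random toy dependency matrix
np.fill_diagonal(dsm, 0)
order, best = local_search(dsm)
print(order, best)
```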
2025-06-11 | Natural Language Guided Ligand-Binding Protein Design | Zhenqiao Song, Ramith Hettiarachchi, Chuan Li et.al. | 2506.09332 | | | Abstract (click to expand)Can AI protein models follow human language instructions and design proteins with desired functions (e.g. binding to a ligand)? Designing proteins that bind to a given ligand is crucial in a wide range of applications in biology and chemistry. Most prior AI models are trained on protein-ligand complex data, which is scarce due to the high cost and time requirements of laboratory experiments. In contrast, there is a substantial body of human-curated text descriptions about protein-ligand interactions and ligand formula. In this paper, we propose InstructPro, a family of protein generative models that follow natural language instructions to design ligand-binding proteins. Given a textual description of the desired function and a ligand formula in SMILES, InstructPro generates protein sequences that are functionally consistent with the specified instructions. We develop the model architecture, training strategy, and a large-scale dataset, InstructProBench, to support both training and evaluation. InstructProBench consists of 9,592,829 triples of (function description, ligand formula, protein sequence). We train two model variants: InstructPro-1B (with 1 billion parameters) and InstructPro-3B (with 3 billion parameters). Both variants consistently outperform strong baselines, including ProGen2, ESM3, and Pinal. Notably, InstructPro-1B achieves the highest docking success rate (81.52% at moderate confidence) and the lowest average root mean square deviation (RMSD) compared to ground truth structures (4.026{\AA}). InstructPro-3B further decreases the average RMSD to 2.527{\AA}, demonstrating InstructPro's ability to generate ligand-binding proteins that align with the functional specifications.
2025-06-10 | Transaction Categorization with Relational Deep Learning in QuickBooks | Kaiwen Dong, Padmaja Jonnalagedda, Xiang Gao et.al. | 2506.09234 | | Accepted to ECML-PKDD 2025 | Abstract (click to expand)Automatic transaction categorization is crucial for enhancing the customer experience in QuickBooks by providing accurate accounting and bookkeeping. The distinct challenges in this domain stem from the unique formatting of transaction descriptions, the wide variety of transaction categories, and the vast scale of the data involved. Furthermore, organizing transaction data in a relational database creates difficulties in developing a unified model that covers the entire database. In this work, we develop a novel graph-based model, named Rel-Cat, which is built directly over the relational database. We introduce a new formulation of transaction categorization as a link prediction task within this graph structure. By integrating techniques from natural language processing and graph machine learning, our model not only outperforms the existing production model in QuickBooks but also scales effectively to a growing customer base with a simpler, more effective architecture without compromising on accuracy. This design also helps tackle a key challenge of the cold start problem by adapting to minimal data. |
2025-06-10 | Rapid cardiac activation prediction for cardiac resynchronization therapy planning using geometric deep learning | Ehsan Naghavi, Haifeng Wang, Vahid Ziaei Rad et.al. | 2506.08987 | link | | Abstract (click to expand)Cardiac resynchronization therapy (CRT) is a common intervention for patients with dyssynchronous heart failure, yet approximately one-third of recipients fail to respond due to suboptimal lead placement. Identifying optimal pacing sites remains challenging, largely due to patient-specific anatomical variability and the limitations of current individualized planning strategies. In a step towards constructing an in-silico approach to help address this issue, we develop two geometric deep learning (DL) models, based on graph neural network (GNN) and geometry-informed neural operator (GINO), to predict cardiac activation time map in real-time for CRT planning and optimization. Both models are trained on a large synthetic dataset generated from finite-element (FE) simulations over a wide range of left ventricular (LV) geometries, pacing site configurations, and tissue conductivities. The GINO model significantly outperforms the GNN model, with lower prediction errors (1.14% vs 3.14%) and superior robustness to noise and various mesh discretizations. Using the GINO model, we also develop a workflow for optimizing the pacing site in CRT from a given activation time map and LV geometry. Compared to randomly selecting a pacing site, the CRT optimization workflow produces a larger reduction in maximum activation time (20% vs. 8%). In conjunction with an interactive web-based graphical user interface (GUI) available at https://dcsim.egr.msu.edu/, the GINO model shows promising potential as a clinical decision-support tool for personalized pre-procedural CRT optimization.
2025-06-10 | IMAGIC-500: IMputation benchmark on A Generative Imaginary Country (500k samples) | Siyi Sun, David Antony Selby, Yunchuan Huang et.al. | 2506.08844 | | | Abstract (click to expand)Missing data imputation in tabular datasets remains a pivotal challenge in data science and machine learning, particularly within socioeconomic research. However, real-world socioeconomic datasets are typically subject to strict data protection protocols, which often prohibit public sharing, even for synthetic derivatives. This severely limits the reproducibility and accessibility of benchmark studies in such settings. Further, there are very few publicly available synthetic datasets. Thus, there is limited availability of benchmarks for systematic evaluation of imputation methods on socioeconomic datasets, whether real or synthetic. In this study, we utilize the World Bank's publicly available synthetic dataset, Synthetic Data for an Imaginary Country, which closely mimics a real World Bank household survey while being fully public, enabling broad access for methodological research. With this as a starting point, we derived the IMAGIC-500 dataset: we select a subset of 500k individuals across approximately 100k households with 19 socioeconomic features, designed to reflect the hierarchical structure of real-world household surveys. This paper introduces a comprehensive missing data imputation benchmark on IMAGIC-500 under various missing mechanisms (MCAR, MAR, MNAR) and missingness ratios (10\%, 20\%, 30\%, 40\%, 50\%). Our evaluation considers the imputation accuracy for continuous and categorical variables, computational efficiency, and impact on downstream predictive tasks, such as estimating educational attainment at the individual level. The results highlight the strengths and weaknesses of statistical, traditional machine learning, and deep learning imputation techniques, including recent diffusion-based methods. The IMAGIC-500 dataset and benchmark aim to facilitate the development of robust imputation algorithms and foster reproducible social science research. |
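For concreteness, here is a generic sketch of the three missingness mechanisms named above; the benchmark's exact masking protocols may differ, and the logistic dependence below is an assumption for illustration.

```python
# Generic definitions of MCAR, MAR, and MNAR masking for a numeric table.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # columns: x0, x1, x2

def mcar(X, p=0.3):
    """Missing Completely At Random: uniform, independent of the data."""
    return rng.random(X.shape) < p

def mar(X, p=0.3):
    """Missing At Random: x1's missingness depends on the observed x0."""
    mask = np.zeros(X.shape, dtype=bool)
    prob = 1 / (1 + np.exp(-X[:, 0]))     # higher x0 -> x1 more likely missing
    mask[:, 1] = rng.random(len(X)) < p * 2 * prob
    return mask

def mnar(X, p=0.3):
    """Missing Not At Random: x2's missingness depends on x2 itself."""
    mask = np.zeros(X.shape, dtype=bool)
    prob = 1 / (1 + np.exp(-X[:, 2]))
    mask[:, 2] = rng.random(len(X)) < p * 2 * prob
    return mask

X_mcar = np.where(mcar(X), np.nan, X)
print(np.isnan(X_mcar).mean(axis=0))      # per-column missingness ratios
```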
2025-06-10 | EDINET-Bench: Evaluating LLMs on Complex Financial Tasks using Japanese Financial Statements | Issa Sugiura, Takashi Ishida, Taro Makino et.al. | 2506.08762 | | | Abstract (click to expand)Financial analysis presents complex challenges that could leverage large language model (LLM) capabilities. However, the scarcity of challenging financial datasets, particularly for Japanese financial data, impedes academic innovation in financial analytics. As LLMs advance, this lack of accessible research resources increasingly hinders their development and evaluation in this specialized domain. To address this gap, we introduce EDINET-Bench, an open-source Japanese financial benchmark designed to evaluate the performance of LLMs on challenging financial tasks including accounting fraud detection, earnings forecasting, and industry prediction. EDINET-Bench is constructed by downloading annual reports from the past 10 years from Japan's Electronic Disclosure for Investors' NETwork (EDINET) and automatically assigning labels corresponding to each evaluation task. Our experiments reveal that even state-of-the-art LLMs struggle, performing only slightly better than logistic regression in binary classification for fraud detection and earnings forecasting. These results highlight significant challenges in applying LLMs to real-world financial applications and underscore the need for domain-specific adaptation. Our dataset, benchmark construction code, and evaluation code are publicly available to facilitate future research in finance with LLMs.
2025-06-10 | Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation | Giacomo Baldan, Qiang Liu, Alberto Guardone et.al. | 2506.08604 | | | Abstract (click to expand)Generative machine learning methods, such as diffusion models and flow matching, have shown great potential in modeling complex system behaviors and building efficient surrogate models. However, these methods typically learn the underlying physics implicitly from data. We propose Physics-Based Flow Matching (PBFM), a novel generative framework that explicitly embeds physical constraints, both PDE residuals and algebraic relations, into the flow matching objective. We also introduce temporal unrolling at training time that improves the accuracy of the final, noise-free sample prediction. Our method jointly minimizes the flow matching loss and the physics-based residual loss without requiring hyperparameter tuning of their relative weights. Additionally, we analyze the role of the minimum noise level, \(\sigma_{\min}\), in the context of physical constraints and evaluate a stochastic sampling strategy that helps to reduce physical residuals. Through extensive benchmarks on three representative PDE problems, we show that our approach yields physical residuals up to \(8\times\) more accurate than those of FM, while clearly outperforming existing algorithms in terms of distributional accuracy. PBFM thus provides a principled and efficient framework for surrogate modeling, uncertainty quantification, and accelerated simulation in physics and engineering applications.
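The objective can be summarized in a few lines. The sketch below combines a standard flow matching loss with a physics residual evaluated on a one-step clean-sample estimate; the placeholder constraint and the plain sum of the two terms are assumptions (the paper specifically avoids manual weighting), so treat this as schematic rather than the authors' method.

```python
# Schematic of a physics-constrained flow matching loss (assumed standard
# flow-matching notation; the weighting scheme here is a simplification).
import torch
import torch.nn as nn

def physics_residual(x):
    # Placeholder algebraic constraint (an assumption for illustration):
    # the field components should sum to zero pointwise.
    return x.sum(dim=-1)

def pbfm_style_loss(model, x0, x1):
    """x0: noise samples, x1: data samples, model(x_t, t) -> velocity."""
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1                 # linear interpolation path
    v = model(x_t, t)
    fm = ((v - (x1 - x0)) ** 2).mean()          # standard flow matching loss
    x1_hat = x_t + (1 - t) * v                  # one-step clean-sample estimate
    phys = (physics_residual(x1_hat) ** 2).mean()
    return fm + phys                            # plain sum; PBFM avoids manual weights

# Wiring with a tiny MLP velocity model.
mlp = nn.Sequential(nn.Linear(5, 64), nn.SiLU(), nn.Linear(64, 4))
model = lambda x, t: mlp(torch.cat([x, t], dim=-1))
pbfm_style_loss(model, torch.randn(32, 4), torch.randn(32, 4)).backward()
```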
2025-06-11 | KP-PINNs: Kernel Packet Accelerated Physics Informed Neural Networks | Siyuan Yang, Cheng Song, Zhilu Lai et.al. | 2506.08563 | | Accepted to IJCAI 2025 | Abstract (click to expand)Differential equations are involved in modeling many engineering problems. Many efforts have been devoted to solving differential equations. Due to the flexibility of neural networks, Physics Informed Neural Networks (PINNs) have recently been proposed to solve complex differential equations and have demonstrated superior performance in many applications. While the L2 loss function is usually a default choice in PINNs, it has been shown that the corresponding numerical solution is incorrect and unstable for some complex equations. In this work, we propose a new PINNs framework named Kernel Packet accelerated PINNs (KP-PINNs), which gives a new expression of the loss function using the reproducing kernel Hilbert space (RKHS) norm and uses the Kernel Packet (KP) method to accelerate the computation. Theoretical results show that KP-PINNs can be stable across various differential equations. Numerical experiments illustrate that KP-PINNs can solve differential equations effectively and efficiently. This framework provides a promising direction for improving the stability and accuracy of PINNs-based solvers in scientific computing. |
2025-06-10 | Thermodynamically Consistent Latent Dynamics Identification for Parametric Systems | Xiaolong He, Yeonjong Shin, Anthony Gruber et.al. | 2506.08475 | | | Abstract (click to expand)We propose an efficient thermodynamics-informed latent space dynamics identification (tLaSDI) framework for the reduced-order modeling of parametric nonlinear dynamical systems. This framework integrates autoencoders for dimensionality reduction with newly developed parametric GENERIC formalism-informed neural networks (pGFINNs), which enable efficient learning of parametric latent dynamics while preserving key thermodynamic principles such as free energy conservation and entropy generation across the parameter space. To further enhance model performance, a physics-informed active learning strategy is incorporated, leveraging a greedy, residual-based error indicator to adaptively sample informative training data, outperforming uniform sampling at equivalent computational cost. Numerical experiments on the Burgers' equation and the 1D/1V Vlasov-Poisson equation demonstrate that the proposed method achieves up to 3,528x speed-up with 1-3% relative errors, and significant reduction in training (50-90%) and inference (57-61%) cost. Moreover, the learned latent space dynamics reveal the underlying thermodynamic behavior of the system, offering valuable insights into the physical-space dynamics. |
2025-06-09 | A Machine Learning Approach to Generate Residual Stress Distributions using Sparse Characterization Data in Friction-Stir Processed Parts | Shadab Anwar Shaikh, Kranthi Balusu, Ayoub Soulami et.al. | 2506.08205 | | | Abstract (click to expand)Residual stresses, which remain within a component after processing, can deteriorate performance. Accurately determining their full-field distributions is essential for optimizing the structural integrity and longevity. However, the experimental effort required for full-field characterization is impractical. Given these challenges, this work proposes a machine learning (ML) based Residual Stress Generator (RSG) to infer full-field stresses from limited measurements. An extensive dataset was initially constructed by performing numerous process simulations with a diverse parameter set. A ML model based on U-Net architecture was then trained to learn the underlying structure through systematic hyperparameter tuning. Then, the model's ability to generate simulated stresses was evaluated, and it was ultimately tested on actual characterization data to validate its effectiveness. The model's prediction of simulated stresses shows that it achieved excellent predictive accuracy and exhibited a significant degree of generalization, indicating that it successfully learnt the latent structure of residual stress distribution. The RSG's performance in predicting experimentally characterized data highlights the feasibility of the proposed approach in providing a comprehensive understanding of residual stress distributions from limited measurements, thereby significantly reducing experimental efforts. |
2025-06-09 | FreeGave: 3D Physics Learning from Dynamic Videos by Gaussian Velocity | Jinxi Li, Ziyang Song, Siyuan Zhou et.al. | 2506.07865 | link | CVPR 2025. Code and data are available at: https://github.com/vLAR-group/FreeGave | Abstract (click to expand)In this paper, we aim to model 3D scene geometry, appearance, and the underlying physics purely from multi-view videos. By applying various governing PDEs as PINN losses or incorporating physics simulation into neural networks, existing works often fail to learn complex physical motions at boundaries or require object priors such as masks or types. In this paper, we propose FreeGave to learn the physics of complex dynamic 3D scenes without needing any object priors. The key to our approach is to introduce a physics code followed by a carefully designed divergence-free module for estimating a per-Gaussian velocity field, without relying on the inefficient PINN losses. Extensive experiments on three public datasets and a newly collected challenging real-world dataset demonstrate the superior performance of our method for future frame extrapolation and motion segmentation. Most notably, our investigation into the learned physics codes reveals that they truly learn meaningful 3D physical motion patterns in the absence of any human labels in training. |
2025-06-12 | CheMatAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning | Mengsong Wu, YaFei Wang, Yidong Ming et.al. | 2506.07551 | link | 15 pages, 6 figures | Abstract (click to expand)Large language models (LLMs) have recently demonstrated promising capabilities in chemistry tasks while still facing challenges due to outdated pretraining knowledge and the difficulty of incorporating specialized chemical expertise. To address these issues, we propose an LLM-based agent that synergistically integrates 137 purpose-built external chemical tools, ranging from basic information retrieval to complex reaction prediction, together with a dataset curation pipeline that generates ChemToolBench, a dataset facilitating both effective tool selection and precise parameter filling during fine-tuning and evaluation. We introduce a Hierarchical Evolutionary Monte Carlo Tree Search (HE-MCTS) framework, enabling independent optimization of tool planning and execution. By leveraging self-generated data, our approach supports step-level fine-tuning (FT) of the policy model and training task-adaptive PRM and ORM that surpass GPT-4o. Experimental evaluations demonstrate that our approach significantly improves performance in Chemistry QA and discovery tasks, offering a robust solution to integrate specialized tools with LLMs for advanced chemical applications. All datasets and code are available at https://github.com/AI4Chem/ChemistryAgent.
2025-06-08 | End-to-End Probabilistic Framework for Learning with Hard Constraints | Utkarsh Utkarsh, Danielle C. Maddix, Ruijun Ma et.al. | 2506.07003 | | 46 pages, 5 figures, 10 tables | Abstract (click to expand)We present a general purpose probabilistic forecasting framework, ProbHardE2E, to learn systems that can incorporate operational/physical constraints as hard requirements. ProbHardE2E enforces hard constraints by exploiting variance information in a novel way; and thus it is also capable of performing uncertainty quantification (UQ) on the model. Our methodology uses a novel differentiable probabilistic projection layer (DPPL) that can be combined with a wide range of neural network architectures. This DPPL allows the model to learn the system in an end-to-end manner, compared to other approaches where the constraints are satisfied either through a post-processing step or at inference. In addition, ProbHardE2E can optimize a strictly proper scoring rule, without making any distributional assumptions on the target, which enables it to obtain robust distributional estimates (in contrast to existing approaches that generally optimize likelihood-based objectives, which are heavily biased by their distributional assumptions and model choices); and it can incorporate a range of non-linear constraints (increasing the power of modeling and flexibility). We apply ProbHardE2E to problems in learning partial differential equations with uncertainty estimates and to probabilistic time-series forecasting, showcasing it as a broadly applicable general setup that connects these seemingly disparate domains. |
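The simplest instance of such a differentiable projection layer, for linear equality constraints Ax = b, is a closed-form orthogonal projection. The paper's DPPL also handles distributional outputs and nonlinear constraints; the sketch below shows only this basic mechanism, with names of our choosing.

```python
# Differentiable orthogonal projection onto {z : Az = b}: a hard-constraint
# layer that can sit at the end of any network and pass gradients through.
import torch

def project_onto_constraints(x, A, b):
    """Project each row of x onto the affine set Az = b (differentiable)."""
    # x: (batch, n), A: (m, n), b: (m,)
    AAT = A @ A.T
    correction = A.T @ torch.linalg.solve(AAT, (x @ A.T - b).T)
    return x - correction.T

# Example: force each 4-dim prediction to sum to 1 (a hard budget constraint).
A = torch.ones(1, 4)
b = torch.tensor([1.0])
raw = torch.randn(8, 4, requires_grad=True)   # unconstrained network output
out = project_onto_constraints(raw, A, b)
print(out.sum(dim=-1))                        # all rows sum to exactly 1
out.sum().backward()                          # gradients flow through the layer
```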
2025-06-12 | QuantMCP: Grounding Large Language Models in Verifiable Financial Reality | Yifan Zeng et.al. | 2506.06622 | | | Abstract (click to expand)Large Language Models (LLMs) hold immense promise for revolutionizing financial analysis and decision-making, yet their direct application is often hampered by issues of data hallucination and lack of access to real-time, verifiable financial information. This paper introduces QuantMCP, a novel framework designed to rigorously ground LLMs in financial reality. By leveraging the Model Context Protocol (MCP) for standardized and secure tool invocation, QuantMCP enables LLMs to accurately interface with a diverse array of Python-accessible financial data APIs (e.g., Wind, yfinance). Users can interact via natural language to precisely retrieve up-to-date financial data, thereby overcoming LLMs' inherent limitations in factual data recall. More critically, once furnished with this verified, structured data, the LLM's analytical capabilities are unlocked, empowering it to perform sophisticated data interpretation, generate insights, and ultimately support more informed financial decision-making processes. QuantMCP provides a robust, extensible, and secure bridge between conversational AI and the complex world of financial data, aiming to enhance both the reliability and the analytical depth of LLM applications in finance.
2025-06-04 | Beyond the Norm: A Survey of Synthetic Data Generation for Rare Events | Jingyi Gu, Xuan Zhang, Guiling Wang et.al. | 2506.06380 | | | Abstract (click to expand)Extreme events, such as market crashes, natural disasters, and pandemics, are rare but catastrophic, often triggering cascading failures across interconnected systems. Accurate prediction and early warning can help minimize losses and improve preparedness. While data-driven methods offer powerful capabilities for extreme event modeling, they require abundant training data, yet extreme event data is inherently scarce, creating a fundamental challenge. Synthetic data generation has emerged as a powerful solution. However, existing surveys focus on general data with privacy preservation emphasis, rather than extreme events' unique performance requirements. This survey provides the first overview of synthetic data generation for extreme events. We systematically review generative modeling techniques and large language models, particularly those enhanced by statistical theory as well as specialized training and sampling mechanisms to capture heavy-tailed distributions. We summarize benchmark datasets and introduce a tailored evaluation framework covering statistical, dependence, visual, and task-oriented metrics. A central contribution is our in-depth analysis of each metric's applicability in extremeness and domain-specific adaptations, providing actionable guidance for model evaluation in extreme settings. We categorize key application domains and identify underexplored areas like behavioral finance, wildfires, earthquakes, windstorms, and infectious outbreaks. Finally, we outline open challenges, providing a structured foundation for advancing synthetic rare-event research. |
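A standard statistical backbone for the tail-aware generators surveyed above is the peaks-over-threshold recipe from extreme value theory: fit a generalized Pareto distribution (GPD) to exceedances over a high threshold, then sample synthetic extremes from it. An illustrative sketch with simulated data, purely for exposition:

```python
# Peaks-over-threshold tail synthesis: fit a GPD to exceedances, then draw
# synthetic extreme events from the fitted tail.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses = stats.t(df=3).rvs(size=20_000, random_state=rng)  # heavy-tailed data

u = np.quantile(losses, 0.95)                 # threshold for "extreme" events
exceedances = losses[losses > u] - u

shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)
synthetic_tail = u + stats.genpareto.rvs(shape, loc=0.0, scale=scale,
                                         size=1000, random_state=rng)
print(f"observed 99.9% quantile: {np.quantile(losses, 0.999):.2f}")
print(f"synthetic tail median:   {np.median(synthetic_tail):.2f}")
```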
2025-06-06 | FinanceReasoning: Benchmarking Financial Numerical Reasoning More Credible, Comprehensive and Challenging | Zichen Tang, Haihong E, Ziyan Ma et.al. | 2506.05828 | | Accepted by ACL 2025 Main Conference | Abstract (click to expand)We introduce FinanceReasoning, a novel benchmark designed to evaluate the reasoning capabilities of large reasoning models (LRMs) in financial numerical reasoning problems. Compared to existing benchmarks, our work provides three key advancements. (1) Credibility: We update 15.6% of the questions from four public datasets, annotating 908 new questions with detailed Python solutions and rigorously refining evaluation standards. This enables an accurate assessment of the reasoning improvements of LRMs. (2) Comprehensiveness: FinanceReasoning covers 67.8% of financial concepts and formulas, significantly surpassing existing datasets. Additionally, we construct 3,133 Python-formatted functions, which enhances LRMs' financial reasoning capabilities through refined knowledge (e.g., 83.2% \(\rightarrow\) 91.6% for GPT-4o). (3) Challenge: Models are required to apply multiple financial formulas for precise numerical reasoning on 238 Hard problems. The best-performing model (i.e., OpenAI o1 with PoT) achieves 89.1% accuracy, yet LRMs still face challenges in numerical precision. We demonstrate that combining Reasoner and Programmer models can effectively enhance LRMs' performance (e.g., 83.2% \(\rightarrow\) 87.8% for DeepSeek-R1). Our work paves the way for future research on evaluating and improving LRMs in domain-specific complex reasoning tasks. |
2025-06-06 | EqCollide: Equivariant and Collision-Aware Deformable Objects Neural Simulator | Qianyi Chen, Tianrun Gao, Chenbo Jiang et.al. | 2506.05797 | | | Abstract (click to expand)Simulating collisions of deformable objects is a fundamental yet challenging task due to the complexity of modeling solid mechanics and multi-body interactions. Existing data-driven methods often suffer from lack of equivariance to physical symmetries, inadequate handling of collisions, and limited scalability. Here we introduce EqCollide, the first end-to-end equivariant neural fields simulator for deformable objects and their collisions. We propose an equivariant encoder to map object geometry and velocity into latent control points. A subsequent equivariant Graph Neural Network-based Neural Ordinary Differential Equation models the interactions among control points via collision-aware message passing. To reconstruct velocity fields, we query a neural field conditioned on control point features, enabling continuous and resolution-independent motion predictions. Experimental results show that EqCollide achieves accurate, stable, and scalable simulations across diverse object configurations, and our model achieves 24.34% to 35.82% lower rollout MSE than even the best-performing baseline model. Furthermore, our model generalizes to more colliding objects and extended temporal horizons, and stays robust to inputs transformed by group actions.
2025-06-06 | FlowOE: Imitation Learning with Flow Policy from Ensemble RL Experts for Optimal Execution under Heston Volatility and Concave Market Impacts | Yang Li, Zhi Chen et.al. | 2506.05755 | | 3 figures, 3 algorithms, 7 tables | Abstract (click to expand)Optimal execution in financial markets refers to the process of strategically transacting a large volume of assets over a period to achieve the best possible outcome by balancing the trade-off between market impact costs and timing or volatility risks. Traditional optimal execution strategies, such as static Almgren-Chriss models, often prove suboptimal in dynamic financial markets. This paper proposes flowOE, a novel imitation learning framework based on flow matching models, to address these limitations. FlowOE learns from a diverse set of expert traditional strategies and adaptively selects the most suitable expert behavior for prevailing market conditions. A key innovation is the incorporation of a refining loss function during the imitation process, enabling flowOE not only to mimic but also to improve upon the learned expert actions. To the best of our knowledge, this work is the first to apply flow matching models in a stochastic optimal execution problem. Empirical evaluations across various market conditions demonstrate that flowOE significantly outperforms both the specifically calibrated expert models and other traditional benchmarks, achieving higher profits with reduced risk. These results underscore the practical applicability and potential of flowOE to enhance adaptive optimal execution.
2025-06-06 | Hybrid Stabilization Protocol for Cross-Chain Digital Assets Using Adaptor Signatures and AI-Driven Arbitrage | Shengwei You, Andrey Kuehlkamp, Jarek Nabrzyski et.al. | 2506.05708 | | | Abstract (click to expand)Stablecoins face an unresolved trilemma of balancing decentralization, stability, and regulatory compliance. We present a hybrid stabilization protocol that combines crypto-collateralized reserves, algorithmic futures contracts, and cross-chain liquidity pools to achieve robust price adherence while preserving user privacy. At its core, the protocol introduces stabilization futures contracts (SFCs), non-collateralized derivatives that programmatically incentivize third-party arbitrageurs to counteract price deviations via adaptor signature atomic swaps. Autonomous AI agents optimize delta hedging across decentralized exchanges (DEXs), while zkSNARKs prove compliance with anti-money laundering (AML) regulations without exposing identities or transaction details. Our cryptographic design reduces cross-chain liquidity concentration (Herfindahl-Hirschman Index: 2,400 vs. 4,900 in single-chain systems) and ensures atomicity under standard cryptographic assumptions. The protocol's layered architecture encompasses incentive-compatible SFCs, AI-driven market making, and zero-knowledge regulatory proofs, providing a blueprint for next-generation decentralized financial infrastructure.
2025-06-05 | Applying Informer for Option Pricing: A Transformer-Based Approach | Feliks Bańka, Jarosław A. Chudziak et.al. | 2506.05565 | | 8 pages, 3 tables, 7 figures. Accepted at the 17th International Conference on Agents and Artificial Intelligence (ICAART 2025). Final version published in Proceedings of ICAART 2025 (Vol. 3), pages 1270-1277 | Abstract (click to expand)Accurate option pricing is essential for effective trading and risk management in financial markets, yet it remains challenging due to market volatility and the limitations of traditional models like Black-Scholes. In this paper, we investigate the application of the Informer neural network for option pricing, leveraging its ability to capture long-term dependencies and dynamically adjust to market fluctuations. This research contributes to the field of financial forecasting by introducing Informer's efficient architecture to enhance prediction accuracy and provide a more adaptable and resilient framework compared to existing methods. Our results demonstrate that Informer outperforms traditional approaches in option pricing, advancing the capabilities of data-driven financial forecasting in this domain. |
2025-06-05 | A Neural Network Model of Spatial and Feature-Based Attention | Ruoyang Hu, Robert A. Jacobs et.al. | 2506.05487 | | 6 pages, 9 figures | Abstract (click to expand)Visual attention is a mechanism closely intertwined with vision and memory. Top-down information influences visual processing through attention. We designed a neural network model inspired by aspects of human visual attention. This model consists of two networks: one serves as a basic processor performing a simple task, while the other processes contextual information and guides the first network through attention to adapt to more complex tasks. After training the model and visualizing the learned attention response, we discovered that the model's emergent attention patterns corresponded to spatial and feature-based attention. This similarity between human visual attention and attention in computer vision suggests a promising direction for studying human cognition using neural network models. |
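The two-network design described above lends itself to a compact sketch: a base processor whose hidden features are multiplicatively gated by an attention signal computed from contextual input. The layer sizes, the sigmoid gating, and the fully connected layout below are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentionModulatedNet(nn.Module):
    """Base processor gated by a context-driven attention signal (illustrative)."""
    def __init__(self, in_dim=784, ctx_dim=10, hidden=128, out_dim=10):
        super().__init__()
        self.base_in = nn.Linear(in_dim, hidden)      # basic processor
        self.base_out = nn.Linear(hidden, out_dim)
        self.ctx = nn.Sequential(                     # context -> attention gains in (0, 1)
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.Sigmoid(),
        )

    def forward(self, x, context):
        h = torch.relu(self.base_in(x))
        h = h * self.ctx(context)                     # multiplicative, attention-like gating
        return self.base_out(h)

model = AttentionModulatedNet()
logits = model(torch.randn(4, 784), torch.randn(4, 10))
print(logits.shape)                                   # torch.Size([4, 10])
```

Visualizing the learned gains of `self.ctx` for different contexts is the kind of analysis that would reveal spatial versus feature-based attention patterns.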
2025-06-05 | MTPNet: Multi-Grained Target Perception for Unified Activity Cliff Prediction | Zishan Shu, Yufan Deng, Hongyu Zhang et.al. | 2506.05427 | link | | Abstract (click to expand)Activity cliff prediction is a critical task in drug discovery and material design. Existing computational methods are limited to handling single binding targets, which restricts the applicability of these prediction models. In this paper, we present the Multi-Grained Target Perception network (MTPNet) to incorporate the prior knowledge of interactions between the molecules and their target proteins. Specifically, MTPNet is a unified framework for activity cliff prediction, which consists of two components: Macro-level Target Semantic (MTS) guidance and Micro-level Pocket Semantic (MPS) guidance. In this way, MTPNet dynamically optimizes molecular representations through multi-grained protein semantic conditions. To our knowledge, this is the first work to employ receptor proteins as guiding information to effectively capture critical interaction details. Extensive experiments on 30 representative activity cliff datasets demonstrate that MTPNet significantly outperforms previous approaches, achieving an average RMSE improvement of 18.95% on top of several mainstream GNN architectures. Overall, MTPNet internalizes interaction patterns through conditional deep learning to achieve unified predictions of activity cliffs, helping to accelerate compound optimization and design. Codes are available at: https://github.com/ZishanShu/MTPNet.
2025-06-05 | FinMultiTime: A Four-Modal Bilingual Dataset for Financial Time-Series Analysis | Wenyan Xu, Dawei Xiang, Yue Liu et.al. | 2506.05019 | link | Under review | Abstract (click to expand)Pure time series forecasting tasks typically focus exclusively on numerical features; however, real-world financial decision-making demands the comparison and analysis of heterogeneous sources of information. Recent advances in deep learning and large-scale language models (LLMs) have made significant strides in capturing sentiment and other qualitative signals, thereby enhancing the accuracy of financial time series predictions. Despite these advances, most existing datasets consist solely of price series and news text, are confined to a single market, and remain limited in scale. In this paper, we introduce FinMultiTime, the first large-scale, multimodal financial time series dataset. FinMultiTime temporally aligns four distinct modalities: financial news, structured financial tables, K-line technical charts, and stock price time series, across both the S&P 500 and HS 300 universes. Covering 5,105 stocks from 2009 to 2025 in the United States and China, the dataset totals 112.6 GB and provides minute-level, daily, and quarterly resolutions, thus capturing short-, medium-, and long-term market signals with high fidelity. Our experiments demonstrate that (1) scale and data quality markedly boost prediction accuracy; (2) multimodal fusion yields moderate gains in Transformer models; and (3) a fully reproducible pipeline enables seamless dataset updates.
2025-06-05 | Nonlinear elastodynamic material identification of heterogeneous isogeometric Bernoulli-Euler beams | Bartłomiej Łazorczyk, Roger A. Sauer et.al. | 2506.04960 | | 37 pages, 16 figures, 8 tables | Abstract (click to expand)This paper presents a Finite Element Model Updating framework for identifying heterogeneous material distributions in planar Bernoulli-Euler beams based on a rotation-free isogeometric formulation. The procedure follows two steps: First, the elastic properties are identified from quasi-static displacements; then, the density is determined from modal data (low frequencies and mode shapes), given the previously obtained elastic properties. The identification relies on three independent discretizations: the isogeometric finite element mesh, a high-resolution grid of experimental measurements, and a material mesh composed of low-order Lagrange elements. The material mesh approximates the unknown material distributions, with its nodal values serving as design variables. The error between experiments and numerical model is expressed in a least squares manner. The objective is minimized using local optimization with the trust-region method, providing analytical derivatives to accelerate computations. Several numerical examples exhibiting large displacements are provided to test the proposed approach. To alleviate membrane locking, the B2M1 discretization is employed when necessary. Quasi-experimental data is generated using refined finite element models with random noise applied up to 4%. The method yields satisfactory results as long as a sufficient amount of experimental data is available, even for high measurement noise. Regularization is used to ensure a stable solution for dense material meshes. The density can be accurately reconstructed based on the previously identified elastic properties. The proposed framework can be straightforwardly extended to shells and 3D continua. |
2025-06-05 | A Private Smart Wallet with Probabilistic Compliance | Andrea Rizzini, Marco Esposito, Francesco Bruschi et.al. | 2506.04853 | | | Abstract (click to expand)We propose a privacy-preserving smart wallet with a novel invitation-based private onboarding mechanism. The solution integrates two levels of compliance in concert with an authority party: a proof of innocence mechanism and an ancestral commitment tracking system using bloom filters for probabilistic UTXO chain states. Performance analysis demonstrates practical efficiency: private transfers with compliance checks complete within seconds on a consumer-grade laptop, with overall proof generation costs remaining low. On-chain costs stay minimal, ensuring affordability for all operations on the Base layer-2 network. The wallet facilitates private contact list management through encrypted data blobs while maintaining transaction unlinkability. Our evaluation validates the approach's viability for privacy-preserving, compliance-aware digital payments with minimized computational and financial overhead.
2025-06-05 | Tensor-based multivariate function approximation: methods benchmarking and comparison | Athanasios C. Antoulas, Ion Victor Gosea, Charles Poussot-Vassal et.al. | 2506.04791 | link | Report with a collection of examples, aimed at being regularly updated. Associated GIT: https://github.com/cpoussot/mLF | Abstract (click to expand)In this note, we evaluate the performance, features, and user experience of several methods (and their implementations) designed for tensor- (or data-) based multivariate function construction and approximation. To this aim, a collection of multivariate functions extracted from contributive works from different communities is suggested. First, these functions, of varying complexity (e.g. number and degree of the variables) and nature (e.g. rational, irrational, differentiable or not, symmetric, etc.), are used to construct tensors, each of different dimension and size on disk. Second, based on these tensors, we inspect the performance of each considered method (e.g. accuracy, computational time, impact of parameter tuning, etc.). Finally, considering the "best" parameter tuning set, we compare the methods using multiple evaluation criteria. The purpose of this note is not to rank the methods but rather to evaluate the different available strategies as fairly as possible, with the aim of guiding users to understand the process, possibilities, advantages, and limits of each tool. The claimed contribution is a complete benchmark collection of some available tools for tensor approximation by surrogate models (e.g. rational functions, networks, etc.). In addition, as contributors of the multivariate Loewner Framework (mLF) approach (and its side implementation in MDSPACK), we give more explicit attention and detail to the latter, in order to provide readers a digest of this contributive work along with simple examples.
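The benchmark setup the note describes, sampling multivariate functions on grids to build data tensors of varying dimension and size, can be sketched in a few lines. The test function and grid sizes below are illustrative stand-ins for the collection in the report.

```python
import numpy as np

# Illustrative test function of three variables; the report's collection
# spans rational, irrational, non-differentiable, and symmetric cases.
f = lambda x, y, z: np.sin(x * y) / (1.0 + z**2)

grids = [np.linspace(0, 1, n) for n in (20, 30, 40)]      # one grid per variable
X, Y, Z = np.meshgrid(*grids, indexing="ij")
T = f(X, Y, Z)                                            # the data tensor methods are fit to
print(T.shape, T.nbytes / 1e3, "kB (float64)")            # (20, 30, 40) 192.0 kB
```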
2025-06-05 | Adaptive recycled plastic architecture: Vacuum-Sealed Chainmail Structures Through Computational Design | Yi Xu, Farzin Lotfi-Jam, Mustafa Faruki et.al. | 2506.04660 | | Accepted manuscript. Published in International Journal of Architectural Computing, April 2025 | Abstract (click to expand)The construction industry is a major consumer of raw materials, accounting for nearly half of global material usage annually, while generating significant waste that poses sustainability challenges. This paper explores the untapped potential of recycled plastics as a primary construction material, leveraging their lightweight, flexible, and customizable properties for advanced applications in modular chainmail systems. Through a computational workflow, the study optimizes the design, testing, and fabrication of vacuum-sealed chainmail structures composed of recycled plastic filaments, demonstrating their adaptability and structural performance for architectural use. Key contributions include a novel methodology for integrating recycled plastic filaments into chainmail geometries, validated through 2D sectional testing, 3D shell structure generation, and physical modeling under vacuum constraints. The research identifies the rectangular chainmail configuration as the most efficient and adaptable, achieving superior deformation capacity, material efficiency, and load-bearing performance. Optimization strategies for temporary structures highlight practical deployment potential, balancing material savings, usable area, and water drainage efficiency. The findings offer a foundation for innovative applications in extreme conditions, including disaster-prone areas, high-altitude environments, underwater platforms, and extraterrestrial habitats. These applications leverage the lightweight, adaptable, and durable properties of recycled plastics and modular chainmail systems, bridging the gap between waste management and high-performance design while addressing unique challenges in harsh and resource-constrained environments. |
2025-06-04 | ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding | Ankit Pal, Jung-Oh Lee, Xiaoman Zhang et.al. | 2506.04353 | | | Abstract (click to expand)We present ReXVQA, the largest and most comprehensive benchmark for visual question answering (VQA) in chest radiology, comprising approximately 696,000 questions paired with 160,000 chest X-ray studies across training, validation, and test sets. Unlike prior efforts that rely heavily on template-based queries, ReXVQA introduces a diverse and clinically authentic task suite reflecting five core radiological reasoning skills: presence assessment, location analysis, negation detection, differential diagnosis, and geometric reasoning. We evaluate eight state-of-the-art multimodal large language models, including MedGemma-4B-it, Qwen2.5-VL, Janus-Pro-7B, and Eagle2-9B. The best-performing model (MedGemma) achieves 83.24% overall accuracy. To bridge the gap between AI performance and clinical expertise, we conducted a comprehensive human reader study involving 3 radiology residents on 200 randomly sampled cases. Our evaluation demonstrates that MedGemma achieved superior performance (83.84% accuracy) compared to human readers (best radiology resident: 77.27%), representing a significant milestone where AI performance exceeds expert human evaluation on chest X-ray interpretation. The reader study reveals distinct performance patterns between AI models and human experts, with strong inter-reader agreement among radiologists while showing more variable agreement patterns between human readers and AI models. ReXVQA establishes a new standard for evaluating generalist radiological AI systems, offering public leaderboards, fine-grained evaluation splits, structured explanations, and category-level breakdowns. This benchmark lays the foundation for next-generation AI systems capable of mimicking expert-level clinical reasoning beyond narrow pathology classification. Our dataset will be open-sourced at https://huggingface.co/datasets/rajpurkarlab/ReXVQA
2025-06-04 | Physics-Constrained Flow Matching: Sampling Generative Models with Hard Constraints | Utkarsh Utkarsh, Pengfei Cai, Alan Edelman et.al. | 2506.04171 | | 27 pages, 9 figures, 4 tables | Abstract (click to expand)Deep generative models have recently been applied to physical systems governed by partial differential equations (PDEs), offering scalable simulation and uncertainty-aware inference. However, enforcing physical constraints, such as conservation laws (linear and nonlinear) and physical consistencies, remains challenging. Existing methods often rely on soft penalties or architectural biases that fail to guarantee hard constraints. In this work, we propose Physics-Constrained Flow Matching (PCFM), a zero-shot inference framework that enforces arbitrary nonlinear constraints in pretrained flow-based generative models. PCFM continuously guides the sampling process through physics-based corrections applied to intermediate solution states, while remaining aligned with the learned flow and satisfying physical constraints. Empirically, PCFM outperforms both unconstrained and constrained baselines on a range of PDEs, including those with shocks, discontinuities, and sharp features, while ensuring exact constraint satisfaction at the final solution. Our method provides a general framework for enforcing hard constraints in both scientific and general-purpose generative models, especially in applications where constraint satisfaction is essential. |
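The sampling-with-correction loop PCFM describes can be illustrated on a toy problem: integrate a flow field and project each intermediate state back onto the constraint set. The projection below enforces a fixed total mass as an example of a hard linear constraint, and the velocity field is a stand-in for the pretrained flow model; the actual method handles arbitrary nonlinear constraints.

```python
import numpy as np

def project_mass(x, total):
    # Orthogonal projection onto {x : sum(x) == total}, a simple hard constraint.
    return x + (total - x.sum()) / x.size

def constrained_sample(v, x0, total, steps=100):
    x, dt = x0.copy(), 1.0 / steps
    for k in range(steps):
        x = x + dt * v(x, k * dt)       # Euler step along the (learned) flow
        x = project_mass(x, total)      # physics-based correction of the intermediate state
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=64)
v = lambda x, t: -x                      # toy flow toward the origin
x1 = constrained_sample(v, x0, total=x0.sum())
print(abs(x1.sum() - x0.sum()) < 1e-9)   # True: the constraint holds exactly at the end
```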
2025-06-06 | Risk and Reward of Transitioning from a National to a Zonal Electricity Market in Great Britain | Lukas Franken, Andrew Lyden, Daniel Friedrich et.al. | 2506.04107 | | 29 pages, 26 figures | Abstract (click to expand)More spatially granular electricity wholesale markets promise more efficient operation and better asset siting in highly renewable power systems. Great Britain is considering moving from its current single-price national wholesale market to a zonal design. Existing studies reach varying and difficult-to-reconcile conclusions about the desirability of a zonal market in GB, partly because they rely on models that vary in their transparency and assumptions about future power systems. Using a novel open-source electricity market model, calibrated to match observed network behaviour, this article quantifies consumer savings, unit-level producer surplus impacts, and broader socioeconomic benefits that would have arisen had a six-zone market operated in Great Britain during 2022-2024. In the absence of mitigating policies, it is estimated that during those three years GB consumers would save approximately {\pounds}9.4/MWh (equalling an average of more than {\pounds}2.3B per year), but generators in northern regions would experience revenue reductions of 30-40\%. Policy interventions can restore these units' national market revenues to up to 97\% while still preserving around {\pounds}3.1/MWh in consumer savings (about {\pounds}750M per year). It is further estimated that the current system could achieve approximately {\pounds}380-{\pounds}770 million in annual welfare gain during 2022-2024 through improved operational efficiency alone. The drivers behind these benefits, notably wind curtailment volumes, are expected to become more pronounced towards 2030, suggesting that purely operationally achieved annual benefits of around {\pounds}1-2 billion beyond 2029 are likely. It is found that the scale of these benefits would outweigh the potential downsides related to increases in the cost of capital that have been estimated elsewhere. |
2025-06-04 | On the robustness of Dirichlet-Neumann coupling schemes for fluid-structure-interaction problems with nearly-closed fluid domains | A. Aissa-Berraies, Ferdinando A. Auricchio, Gertjan van Zwieten et.al. | 2506.04027 | | | Abstract (click to expand)Partitioned methods for fluid-structure interaction (FSI) involve solving the structural and flow problems sequentially. These methods allow for separate settings for the fluid and solid subsystems and thus modularity, enabling reuse of advanced commercial and open-source software. Most partitioned FSI schemes apply a Dirichlet-Neumann (DN) split of the interface conditions. The DN scheme is adequate in a wide range of applications, but it is sensitive to the added-mass effect, and it is susceptible to the incompressibility dilemma, i.e. it completely fails for FSI problems with an incompressible fluid furnished with Dirichlet boundary conditions on the part of its boundary complementary to the interface. In this paper, we show that if the fluid is incompressible and the fluid domain is nearly-closed, i.e. it carries Dirichlet conditions except for a permeable part of the boundary carrying a Robin condition, then the DN partitioned approach is sensitive to the flow resistance at the permeable part, and convergence of the partitioned approach deteriorates as the flow resistance increases. The DN scheme then becomes unstable in the limit as the flow resistance passes to infinity. Based on a simple model problem, we show that in the nearly-closed case, the convergence rate of the DN partitioned method depends on a so-called added-damping effect. The analysis gives insights that can aid to improve robustness and efficiency of partitioned method for FSI problems with contact, e.g. valve applications. In addition, the results elucidate the incompressibility dilemma as a limit of the added-damping effect passing to infinity, and the corresponding challenges related to FSI problems with nearly closed fluid-domain configurations. Via numerical experiments, we consider the generalization of the results of the simple model problem to more complex nearly-closed FSI problems. |
2025-06-03 | Optimization of Functional Materials Design with Optimal Initial Data in Surrogate-Based Active Learning | Seongmin Kim, In-Saeng Suh et.al. | 2506.03329 | | | Abstract (click to expand)The optimization of functional materials is important to enhance their properties, but their complex geometries pose great challenges to optimization. Data-driven algorithms efficiently navigate such complex design spaces by learning relationships between material structures and performance metrics to discover high-performance functional materials. Surrogate-based active learning, continually improving its surrogate model by iteratively including high-quality data points, has emerged as a cost-effective data-driven approach. Furthermore, it can be coupled with quantum computing to enhance optimization processes, especially when paired with a special form of surrogate model (\(i.e.\), quadratic unconstrained binary optimization), formulated by a factorization machine. However, current practices often overlook the variability in design space sizes when determining the initial data size for optimization. In this work, we investigate the optimal initial data sizes required for efficient convergence across various design space sizes. By employing averaged piecewise linear regression, we identify initiation points where convergence begins, highlighting the crucial role of adequate initial data in achieving efficient optimization. These results contribute to the efficient optimization of functional materials by ensuring faster convergence and reducing computational costs in surrogate-based active learning.
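The initiation-point analysis mentioned above can be sketched as a two-segment piecewise linear fit to a convergence curve, taking the breakpoint as the point where convergence begins. The grid-search fit below is an illustrative stand-in for the paper's averaged piecewise linear regression, not its exact procedure.

```python
import numpy as np

def breakpoint_fit(x, y):
    """Return the breakpoint of the best two-segment linear fit (grid search)."""
    best = (np.inf, None)
    for k in range(2, len(x) - 2):                   # candidate breakpoints
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)             # fit a line to each segment
            sse += np.sum((np.polyval(coef, xs) - ys) ** 2)
        if sse < best[0]:
            best = (sse, x[k])
    return best[1]

x = np.arange(50, dtype=float)                       # iteration index
y = np.where(x < 20, 5.0, 5.0 - 0.2 * (x - 20))      # flat plateau, then convergence
print(breakpoint_fit(x, y))                          # -> 20.0: the initiation point
```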
2025-06-03 | Cell-o1: Training LLMs to Solve Single-Cell Reasoning Puzzles with Reinforcement Learning | Yin Fang, Qiao Jin, Guangzhi Xiong et.al. | 2506.02911 | link | 28 pages; 16 tables; 7 figures; Code: https://github.com/ncbi-nlp/cell-o1 | Abstract (click to expand)Cell type annotation is a key task in analyzing the heterogeneity of single-cell RNA sequencing data. Although recent foundation models automate this process, they typically annotate cells independently, without considering batch-level cellular context or providing explanatory reasoning. In contrast, human experts often annotate distinct cell types for different cell clusters based on their domain knowledge. To mimic this workflow, we introduce the CellPuzzles task, where the objective is to assign unique cell types to a batch of cells. This benchmark spans diverse tissues, diseases, and donor conditions, and requires reasoning across the batch-level cellular context to ensure label uniqueness. We find that off-the-shelf large language models (LLMs) struggle on CellPuzzles, with the best baseline (OpenAI's o1) achieving only 19.0% batch-level accuracy. To fill this gap, we propose Cell-o1, a 7B LLM trained via supervised fine-tuning on distilled reasoning traces, followed by reinforcement learning with batch-level rewards. Cell-o1 achieves state-of-the-art performance, outperforming o1 by over 73% and generalizing well across contexts. Further analysis of training dynamics and reasoning behaviors provides insights into batch-level annotation performance and emergent expert-like reasoning. Code and data are available at https://github.com/ncbi-nlp/cell-o1. |
2025-06-03 | A mesoscale phase-field model of intergranular liquid lithium corrosion of ferritic/martensitic steels | A. Lhoest, S. Kovacevic, D. Nguyen-Manh et.al. | 2506.02776 | | | Abstract (click to expand)A phase-field model is developed to simulate intergranular corrosion of ferritic/martensitic steels exposed to liquid lithium. The chromium concentration of the material is used to track the mass transport within the metal and liquid (corrosive) phase. The framework naturally captures intergranular corrosion by enhancing the diffusion of chromium along grain boundaries relative to the grain bulk with no special treatment for the corrosion front evolution. The formulation applies to arbitrary 2D and 3D polycrystalline geometries. The framework reproduces experimental measurements of weight loss and corrosion depth for a 9 wt\% Cr ferritic/martensitic steel exposed to static lithium at 600 \(^\circ\) C. A sensitivity analysis, varying near-surface grain density, grain size, and chromium depletion thickness, highlights the microstructural influence in the corrosion process. Moreover, the significance of saturation is considered and evaluated. Simulation results show that near-surface grain density is a deciding factor, whereas grain size dictates the susceptibility to intergranular corrosion. |
2025-06-03 | Enriching Location Representation with Detailed Semantic Information | Junyuan Liu, Xinglei Wang, Tao Cheng et.al. | 2506.02744 | | | Abstract (click to expand)Spatial representations that capture both structural and semantic characteristics of urban environments are essential for urban modeling. Traditional spatial embeddings often prioritize spatial proximity while underutilizing fine-grained contextual information from places. To address this limitation, we introduce CaLLiPer+, an extension of the CaLLiPer model that systematically integrates Point-of-Interest (POI) names alongside categorical labels within a multimodal contrastive learning framework. We evaluate its effectiveness on two downstream tasks, land use classification and socioeconomic status distribution mapping, demonstrating consistent performance gains of 4% to 11% over baseline methods. Additionally, we show that incorporating POI names enhances location retrieval, enabling models to capture complex urban concepts with greater precision. Ablation studies further reveal the complementary role of POI names and the advantages of leveraging pretrained text encoders for spatial representations. Overall, our findings highlight the potential of integrating fine-grained semantic attributes and multimodal learning techniques to advance the development of urban foundation models. |
2025-06-03 | TL;DR: Too Long, Do Re-weighting for Efficient LLM Reasoning Compression | Zhong-Zhi Li, Xiao Liang, Zihao Tang et.al. | 2506.02678 | | | Abstract (click to expand)Large Language Models (LLMs) have recently achieved remarkable progress by leveraging Reinforcement Learning and extended Chain-of-Thought (CoT) techniques. However, the challenge of performing efficient language reasoning--especially during inference with extremely long outputs--has drawn increasing attention from the research community. In this work, we propose a dynamic ratio-based training pipeline that does not rely on sophisticated data annotations or interpolation between multiple models. We continuously balance the weights between the model's System-1 and System-2 data to eliminate redundant reasoning processes while preserving the model's reasoning capability. We validate our approach on DeepSeek-R1-Distill-7B and DeepSeek-R1-Distill-14B and on a diverse set of benchmarks with varying difficulty levels. Our method significantly reduces the number of output tokens by nearly 40% while maintaining the accuracy of the reasoning. Our code and data will be available soon.
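As a rough illustration of the ratio-based mixing the abstract describes, the sketch below samples each training batch from short "System-1" and long "System-2" reasoning data with a continuously adjusted ratio. The specific update rule (shift weight toward short data while a held-out accuracy check passes) is an assumption for illustration, not the paper's schedule.

```python
import random

def sample_batch(sys1_data, sys2_data, ratio, batch_size=8):
    # Draw each example from System-1 data with probability `ratio`.
    n1 = sum(random.random() < ratio for _ in range(batch_size))
    return random.choices(sys1_data, k=n1) + random.choices(sys2_data, k=batch_size - n1)

ratio = 0.2                                       # start mostly on long-CoT data
for step in range(3):
    batch = sample_batch(["short"], ["long-cot"], ratio)
    accuracy_ok = True                            # stand-in for a held-out eval
    # Hypothetical re-weighting: favor short data while accuracy holds, back off otherwise.
    ratio = min(0.9, ratio + 0.05) if accuracy_ok else max(0.1, ratio - 0.05)
    print(step, round(ratio, 2), batch.count("short"), "short samples")
```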
2025-06-03 | On the fracture mechanics validity of small scale tests | C. Cui, L. Cupertino-Malheiros, Z. Xiong et.al. | 2506.02538 | | | Abstract (click to expand)There is growing interest in conducting small-scale tests to gain additional insight into the fracture behaviour of components across a wide range of materials. For example, micro-scale mechanical tests inside of a microscope (\emph{in situ}) enable direct, high-resolution observation of the interplay between crack growth and microstructural phenomena (e.g., dislocation behaviour or the fracture resistance of a particular interface), and sub-size samples are increasingly used when only a limited amount of material is available. However, to obtain quantitative insight and extract relevant fracture parameters, the sample must be sufficiently large for a \(J\)- (HRR) or a \(K\)-field to exist. We conduct numerical and semi-analytical studies to map the conditions (sample geometry, material) that result in a valid, quantitative fracture experiment. Specifically, for a wide range of material properties, crack lengths and sample dimensions, we establish the maximum value of the \(J\)-integral where an HRR field ceases to exist (i.e., the maximum \(J\) value at which fracture must occur for the test to be valid, \(J_\mathrm{max}\)). Maps are generated to establish the maximum valid \(J\) value (\(J_\mathrm{max}\) ) as a function of yield strength, strain hardening and minimum sample size. These maps are then used to discuss the existing experimental literature and provide guidance on how to conduct quantitative experiments. Finally, our study is particularised to the analysis of metals that have been embrittled due to hydrogen exposure. The response of relevant materials under hydrogen-containing environments are superimposed on the aforementioned maps, determining the conditions that will enable quantitative insight. |
2025-06-03 | Generative AI for Predicting 2D and 3D Wildfire Spread: Beyond Physics-Based Models and Traditional Deep Learning | Haowen Xu, Sisi Zlatanova, Ruiyu Liang et.al. | 2506.02485 | | | Abstract (click to expand)Wildfires continue to inflict devastating human, environmental, and economic losses globally, as tragically exemplified by the 2025 Los Angeles wildfire and the urgent demand for more effective response strategies. While physics-based and deep learning models have advanced wildfire simulation, they face critical limitations in predicting and visualizing multimodal fire spread in real time, particularly in both 2D and 3D spatial domains using dynamically updated GIS data. These limitations hinder timely emergency response, infrastructure protection, and community safety. Generative AI has recently emerged as a transformative approach across research and industry. Models such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), Transformers, and diffusion-based architectures offer distinct advantages over traditional methods, including the integration of multimodal data, generation of diverse scenarios under uncertainty, and improved modeling of wildfire dynamics across spatial and temporal scales. This position paper advocates for the adoption of generative AI as a foundational framework for wildfire prediction. We explore how such models can enhance 2D fire spread forecasting and enable more realistic, scalable 3D simulations. Additionally, we employ a novel human-AI collaboration framework using large language models (LLMs) for automated knowledge extraction, literature synthesis, and bibliometric mapping. Looking ahead, we identify five key visions for integrating generative AI into wildfire management: multimodal approaches, AI foundation models, conversational AI systems, edge-computing-based scenario generation, and cognitive digital twins. We also address three major challenges accompanying these opportunities and propose potential solutions to support their implementation. |
2025-06-02 | Sensitivity-Aware Density Estimation in Multiple Dimensions | Aleix Boquet-Pujadas, Pol del Aguila Pla, Michael Unser et.al. | 2506.02323 | | | Abstract (click to expand)We formulate an optimization problem to estimate probability densities in the context of multidimensional problems that are sampled with uneven probability. It considers detector sensitivity as a heterogeneous density and takes advantage of the computational speed and flexible boundary conditions offered by splines on a grid. We choose to regularize the Hessian of the spline via the nuclear norm to promote sparsity. As a result, the method is spatially adaptive and stable against the choice of the regularization parameter, which plays the role of the bandwidth. We test our computational pipeline on standard densities and provide software. We also present a new approach to PET rebinning as an application of our framework.
2025-06-02 | Singularity Blockchain Key Management via non-custodial key management | Sumit Vohra et.al. | 2506.02282 | | | Abstract (click to expand)Web3 wallets are key to managing user identity on the blockchain. The main purpose of a web3 wallet application is to manage the private key for the user and provide an interface to interact with the blockchain. The key management scheme (KMS) used by the wallet to store and recover the private key can be either custodial, where the keys are permissioned and in the custody of the wallet provider, or non-custodial, where the keys are in the custody of the user. Existing non-custodial key management schemes tend to shift the burden of storing and recovering the key entirely onto the user by asking them to remember seed-phrases. This creates onboarding hassles and introduces the risk that users may lose their assets if they forget or lose their seed-phrase/private key. In this paper, we propose a novel method of backing up user keys using a non-custodial key management technique that allows users to save and recover a backup of their private key using any independent sign-in method such as google-oAuth or other 3P oAuth.
2025-06-02 | Benchmarking Large Language Models for Polymer Property Predictions | Sonakshi Gupta, Akhlak Mahmood, Shivank Shukla et.al. | 2506.02129 | | 5 figures | Abstract (click to expand)Machine learning has revolutionized polymer science by enabling rapid property prediction and generative design. Large language models (LLMs) offer further opportunities in polymer informatics by simplifying workflows that traditionally rely on large labeled datasets, handcrafted representations, and complex feature engineering. LLMs leverage natural language inputs through transfer learning, eliminating the need for explicit fingerprinting and streamlining training. In this study, we finetune general purpose LLMs -- open-source LLaMA-3-8B and commercial GPT-3.5 -- on a curated dataset of 11,740 entries to predict key thermal properties: glass transition, melting, and decomposition temperatures. Using parameter-efficient fine-tuning and hyperparameter optimization, we benchmark these models against traditional fingerprinting-based approaches -- Polymer Genome, polyGNN, and polyBERT -- under single-task (ST) and multi-task (MT) learning. We find that while LLM-based methods approach traditional models in performance, they generally underperform in predictive accuracy and efficiency. LLaMA-3 consistently outperforms GPT-3.5, likely due to its tunable open-source architecture. Additionally, ST learning proves more effective than MT, as LLMs struggle to capture cross-property correlations, a key strength of traditional methods. Analysis of molecular embeddings reveals limitations of general purpose LLMs in representing nuanced chemo-structural information compared to handcrafted features and domain-specific embeddings. These findings provide insight into the interplay between molecular embeddings and natural language processing, guiding LLM selection for polymer informatics. |
2025-06-02 | COALESCE: Economic and Security Dynamics of Skill-Based Task Outsourcing Among Team of Autonomous LLM Agents | Manish Bhatt, Ronald F. Del Rosario, Vineeth Sai Narajala et.al. | 2506.01900 | | 20 pages, 2 figures, github linked | Abstract (click to expand)The meteoric rise and proliferation of autonomous Large Language Model (LLM) agents promise significant capabilities across various domains. However, their deployment is increasingly constrained by substantial computational demands, specifically for Graphics Processing Unit (GPU) resources. This paper addresses the critical problem of optimizing resource utilization in LLM agent systems. We introduce COALESCE (Cost-Optimized and Secure Agent Labour Exchange via Skill-based Competence Estimation), a novel framework designed to enable autonomous LLM agents to dynamically outsource specific subtasks to specialized, cost-effective third-party LLM agents. The framework integrates mechanisms for hybrid skill representation, dynamic skill discovery, automated task decomposition, a unified cost model comparing internal execution costs against external outsourcing prices, simplified market-based decision-making algorithms, and a standardized communication protocol between LLM agents. Comprehensive validation through 239 theoretical simulations demonstrates 41.8\% cost reduction potential, while large-scale empirical validation across 240 real LLM tasks confirms 20.3\% cost reduction with proper epsilon-greedy exploration, establishing both theoretical viability and practical effectiveness. The emergence of proposed open standards like Google's Agent2Agent (A2A) protocol further underscores the need for frameworks like COALESCE that can leverage such standards for efficient agent interaction. By facilitating a dynamic market for agent capabilities, potentially utilizing protocols like A2A for communication, COALESCE aims to significantly reduce operational costs, enhance system scalability, and foster the emergence of specialized agent economies, making complex LLM agent functionalities more accessible and economically viable. |
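The unified cost model and simplified market-based decision-making in COALESCE can be caricatured as a make-or-buy rule with epsilon-greedy exploration, as in the sketch below. All prices, agent names, and the flat cost comparison are illustrative assumptions, not the paper's full cost model.

```python
import random

def choose_executor(internal_cost, offers, epsilon=0.1, rng=random):
    """Make-or-buy rule: outsource to the cheapest agent if it beats internal cost.

    offers: {agent_name: quoted_price}, already filtered to skill-qualified agents.
    With probability epsilon, explore a random eligible agent instead.
    """
    if rng.random() < epsilon:                    # explore: gather fresh price/quality data
        return rng.choice(list(offers))
    best_agent = min(offers, key=offers.get)      # exploit: cheapest external offer
    return best_agent if offers[best_agent] < internal_cost else "self"

offers = {"agent_a": 0.42, "agent_b": 0.35}       # $ per task, hypothetical quotes
print(choose_executor(internal_cost=0.50, offers=offers, epsilon=0.0))  # -> agent_b
```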
2025-06-01 | Fast numerical generation of Lie closure | Yutaro Iiyama et.al. | 2506.01120 | | 12 pages | Abstract (click to expand)Finding the Lie-algebraic closure of a handful of matrices has important applications in quantum computing and quantum control. For most realistic cases, the closure cannot be determined analytically, necessitating an explicit numerical construction. The standard construction algorithm makes repeated calls to a subroutine that determines whether a matrix is linearly independent from a potentially large set of matrices. Because the common implementation of this subroutine has a high complexity, the construction of Lie closure is practically limited to trivially small matrix sizes. We present efficient alternative methods of linear independence check that simultaneously reduce the computational complexity and memory footprint. An implementation of one of the methods is validated against known results. Our new algorithms enable numerical studies of Lie closure in larger system sizes than was previously possible. |
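For context, the standard construction the abstract refers to looks roughly like the sketch below: repeatedly form commutators and keep those that pass a linear-independence check. The plain numpy rank test used here is exactly the kind of expensive subroutine the paper replaces with faster alternatives.

```python
import numpy as np

def lie_closure(generators, max_dim=200, tol=1e-10):
    """Naive Lie-closure construction with a rank-based independence check."""
    basis, mats = [], []                      # flattened matrices spanning the algebra so far

    def try_add(m):
        v = m.reshape(-1)
        if np.linalg.matrix_rank(np.array(basis + [v]), tol=tol) > len(basis):
            basis.append(v)
            mats.append(m)
            return True
        return False

    for g in generators:
        try_add(g)
    frontier = list(mats)
    while frontier and len(mats) < max_dim:
        new = []
        for a in frontier:
            for b in list(mats):
                c = a @ b - b @ a             # commutator [a, b]
                if np.linalg.norm(c) > tol and try_add(c):
                    new.append(c)
        frontier = new
    return mats

# Example: i*X and i*Y generate su(2); the closure adds the third direction.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
print(len(lie_closure([1j * X, 1j * Y])))     # -> 3
```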
2025-06-01 | Learning to optimize convex risk measures: The cases of utility-based shortfall risk and optimized certainty equivalent risk | Sumedh Gupte, Prashanth L. A., Sanjay P. Bhat et.al. | 2506.01101 | | | Abstract (click to expand)We consider the problems of estimation and optimization of two popular convex risk measures: utility-based shortfall risk (UBSR) and Optimized Certainty Equivalent (OCE) risk. We extend these risk measures to cover possibly unbounded random variables. We cover prominent risk measures like the entropic risk, expectile risk, monotone mean-variance risk, Value-at-Risk, and Conditional Value-at-Risk as a few special cases of either the UBSR or the OCE risk. In the context of estimation, we derive non-asymptotic bounds on the mean absolute error (MAE) and mean-squared error (MSE) of the classical sample average approximation (SAA) estimators of both the UBSR and the OCE. Next, in the context of optimization, we derive expressions for the UBSR gradient and the OCE gradient under a smooth parameterization. Utilizing these expressions, we propose gradient estimators for both the UBSR and the OCE. We use the SAA estimator of UBSR in both these gradient estimators, and derive non-asymptotic bounds on MAE and MSE for the proposed gradient estimation schemes. We incorporate the aforementioned gradient estimators into a stochastic gradient (SG) algorithm for optimization. Finally, we derive non-asymptotic bounds that quantify the rate of convergence of our SG algorithm for the optimization of the UBSR and the OCE risk measure.
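Under one common convention, \(\mathrm{UBSR}(X) = \inf\{t : \mathbb{E}[\ell(X - t)] \le \lambda\}\) for an increasing convex loss \(\ell\), and the SAA estimator replaces the expectation with a sample mean and solves for \(t\) by bisection, which works because the sample mean is decreasing in \(t\). The sketch below illustrates this for the exponential loss, which recovers the entropic risk named above as a special case; the convention and demo parameters are assumptions, not the paper's exact setup.

```python
import numpy as np

def ubsr_saa(samples, loss, lam, lo=-100.0, hi=100.0, iters=80):
    """SAA estimate of UBSR by bisection on mean(loss(x - t)) = lam."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(loss(samples - mid)) > lam:
            lo = mid                      # risk level not yet met; raise the threshold
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta, lam = 2.0, 1.0
x = np.random.default_rng(0).normal(size=100_000)      # losses X ~ N(0, 1)
est = ubsr_saa(x, lambda u: np.exp(beta * u), lam)
# Entropic-risk closed form: (1/beta) * log E[exp(beta*X)] = beta/2 for N(0, 1).
print(round(est, 3), beta / 2)                          # ~1.0 vs 1.0
```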
2025-06-01 | Regulatory Graphs and GenAI for Real-Time Transaction Monitoring and Compliance Explanation in Banking | Kunal Khanvilkar, Kranthi Kommuru et.al. | 2506.01093 | | | Abstract (click to expand)This paper presents a real-time transaction monitoring framework that integrates graph-based modeling, narrative field embedding, and generative explanation to support automated financial compliance. The system constructs dynamic transaction graphs, extracts structural and contextual features, and classifies suspicious behavior using a graph neural network. A retrieval-augmented generation module generates natural language explanations aligned with regulatory clauses for each flagged transaction. Experiments conducted on a simulated stream of financial data show that the proposed method achieves superior results, with 98.2% F1-score, 97.8% precision, and 97.0% recall. Expert evaluation further confirms the quality and interpretability of generated justifications. The findings demonstrate the potential of combining graph intelligence and generative models to support explainable, audit-ready compliance in high-risk financial environments. |
2025-05-31 | Controlling the Spread of Epidemics on Networks with Differential Privacy | Dung Nguyen, Aravind Srinivasan, Renata Valieva et.al. | 2506.00745 | | | Abstract (click to expand)Designing effective strategies for controlling epidemic spread by vaccination is an important question in epidemiology, especially in the early stages when vaccines are limited. This is a challenging question when the contact network is very heterogeneous, and strategies based on controlling network properties, such as the degree and spectral radius, have been shown to be effective. Implementation of such strategies requires detailed information on the contact structure, which might be sensitive in many applications. Our focus here is on choosing effective vaccination strategies when the edges are sensitive and differential privacy guarantees are needed. Our main contributions are \((\varepsilon,\delta)\) -differentially private algorithms for designing vaccination strategies by reducing the maximum degree and spectral radius. Our key technique is a private algorithm for the multi-set multi-cover problem, which we use for controlling network properties. We evaluate privacy-utility tradeoffs of our algorithms on multiple synthetic and real-world networks, and show their effectiveness. |
2025-05-31 | Multi-Objective Neural Network Assisted Design Optimization of Soft Fin-Ray Grippers for Enhanced Grasping Performance | Ali Ghanizadeh, Ali Ahmadi, Arash Bahrami et.al. | 2506.00494 | | | Abstract (click to expand)Soft Fin-Ray grippers can perform delicate and careful manipulation, which has attracted notable attention in different fields. These grippers can handle objects of various forms and sizes safely. The internal structure of the Fin-Ray finger plays a significant role in its adaptability and grasping performance. However, modeling the non-linear grasp force and deformation behaviors for design purposes is challenging. Moreover, when the Fin-Ray finger becomes more rigid and capable of exerting higher forces, it becomes less delicate in handling objects. The contrast between these two objectives gives rise to a multi-objective optimization problem. In this study, we employ the finite element method (FEM) to estimate the deflections and contact forces of the Fin-Ray finger when grasping cylindrical objects. This dataset is then used to construct a multilayer perceptron (MLP) for prediction of the contact force and the tip displacement. The FEM dataset consists of three input and four target features. The three input features of the MLP and optimization design variables are the thickness of the front and supporting beams, the thickness of the cross beams, and the equal spacing between the cross beams. In addition, the target features are the maximum contact forces and maximum tip displacements in x- and y-directions. The magnitude of maximum contact force and magnitude of maximum tip displacement are the two objectives, showing the trade-off between force and delicate manipulation in soft Fin-Ray grippers. Furthermore, the optimized set of solutions is found using multi-objective optimization techniques. We use the non-dominated sorting genetic algorithm (NSGA-II) for this purpose. Our findings demonstrate that our methodologies can be used to improve the design and gripping performance of soft robotic grippers, helping us to choose a design not only for delicate grasping but also for high-force applications.
2025-05-30 | Efficient Estimation of Regularized Tyler's M-Estimator Using Approximate LOOCV | Karim Abou-Moustafa et.al. | 2505.24781 | | An extended version of a short article that appeared in 2023 IEEE Workshop on Information Theory, Saint-Malo, France | Abstract (click to expand)We consider the problem of estimating a regularization parameter, or a shrinkage coefficient \(\alpha \in (0,1)\) for Regularized Tyler's M-estimator (RTME). In particular, we propose to estimate an optimal shrinkage coefficient by setting \(\alpha\) as the solution to a suitably chosen objective function; namely the leave-one-out cross-validated (LOOCV) log-likelihood loss. Since LOOCV is computationally prohibitive even for moderate sample size \(n\), we propose a computationally efficient approximation for the LOOCV log-likelihood loss that eliminates the need for invoking the RTME procedure \(n\) times for each sample left out during the LOOCV procedure. This approximation yields an \(O(n)\) reduction in the running time complexity for the LOOCV procedure, which results in a significant speedup for computing the LOOCV estimate. We demonstrate the efficiency and accuracy of the proposed approach on synthetic high-dimensional data sampled from heavy-tailed elliptical distributions, as well as on real high-dimensional datasets for object recognition, face recognition, and handwritten digit recognition. Our experiments show that the proposed approach is efficient and consistently more accurate than other methods in the literature for shrinkage coefficient estimation.
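For reference, the RTME that the LOOCV procedure tunes is the fixed point of a diagonally loaded Tyler iteration; a minimal sketch is below. The trace-\(p\) normalization and stopping rule are standard choices rather than details taken from the paper, and a naive LOOCV would wrap this routine \(n\) times per candidate \(\alpha\), which is precisely the cost the paper's approximation removes.

```python
import numpy as np

def rtme(X, alpha, iters=100, tol=1e-8):
    """Regularized Tyler's M-estimator via the standard fixed-point iteration."""
    n, p = X.shape
    sigma = np.eye(p)
    for _ in range(iters):
        inv = np.linalg.inv(sigma)
        w = 1.0 / np.einsum("ij,jk,ik->i", X, inv, X)   # 1 / (x_i^T sigma^-1 x_i)
        s = (1 - alpha) * (p / n) * (X * w[:, None]).T @ X + alpha * np.eye(p)
        s *= p / np.trace(s)                            # trace-p normalization
        if np.linalg.norm(s - sigma, "fro") < tol:
            return s
        sigma = s
    return sigma

rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=(200, 10))    # heavy-tailed samples, as in the experiments
S = rtme(X, alpha=0.2)
print(round(np.trace(S), 6))                # -> 10.0 (trace-normalized)
```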
2025-05-30 | Efficient Bayesian multi-fidelity inverse analysis for expensive and non-differentiable physics-based simulations in high stochastic dimensions | Jonas Nitzler, Bugrahan Z. Temür, Phaedon-Stelios Koutsourelakis et.al. | 2505.24708 | | 42 pages, 20 figures | Abstract (click to expand)High-dimensional Bayesian inverse analysis (dim >> 100) is mostly unfeasible for computationally demanding, nonlinear physics-based high-fidelity (HF) models. Usually, the use of more efficient gradient-based inference schemes is impeded if the multi-physics models are provided by complex legacy codes. Adjoint-based derivatives are either exceedingly cumbersome to derive or non-existent for practically relevant large-scale nonlinear and coupled multi-physics problems. Similarly, holistic automated differentiation w.r.t. primary variables of multi-physics codes is usually not yet an option and requires extensive code restructuring if not considered from the outset in the software design. This absence of differentiability further exacerbates the already present computational challenges. To overcome the existing limitations, we propose a novel inference approach called Bayesian multi-fidelity inverse analysis (BMFIA), which leverages simpler and computationally cheaper lower-fidelity (LF) models that are designed to provide model derivatives. BMFIA learns a simple, probabilistic dependence of the LF and HF models, which is then employed in an altered likelihood formulation to statistically correct the inaccurate LF response. From a Bayesian viewpoint, this dependence represents a multi-fidelity conditional density (discriminative model). We demonstrate how this multi-fidelity conditional density can be learned robustly in the small data regime from only a few HF and LF simulations (50 to 300), which would not be sufficient for naive surrogate approaches. The formulation is fully differentiable and allows the flexible design of a wide range of LF models. We demonstrate that BMFIA solves Bayesian inverse problems for scenarios that used to be prohibitive, such as finely-resolved spatial reconstruction problems for nonlinear and transient coupled poro-elastic media physics. |
2025-05-30 | Looking for Attention: Randomized Attention Test Design for Validator Monitoring in Optimistic Rollups | Suhyeon Lee et.al. | 2505.24393 | | | Abstract (click to expand)Optimistic Rollups (ORUs) significantly enhance blockchain scalability but inherently suffer from the verifier's dilemma, particularly concerning validator attentiveness. Current systems lack mechanisms to proactively ensure validators are diligently monitoring L2 state transitions, creating a vulnerability where fraudulent states could be finalized. This paper introduces the Randomized Attention Test (RAT), a novel L1-based protocol designed to probabilistically challenge validators in ORUs, thereby verifying their liveness and computational readiness. Our game-theoretic analysis demonstrates that an Ideal Security Equilibrium, where all validators are attentive and proposers are honest, can be achieved with RAT. Notably, this equilibrium is attainable and stable with relatively low economic penalties (e.g., under $1000) for non-responsive validators and a low attention test frequency (e.g., under 1% per epoch). RAT thus provides a crucial, practical mechanism to enforce validator diligence, fortifying the overall security and integrity of ORU systems with minimizing additional costs. |
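The equilibrium logic reduces, in caricature, to a single expected-utility comparison: a validator stays attentive when the expected slash from missing a randomized test exceeds the cost of monitoring. The numbers below are illustrative, chosen only to match the magnitudes quoted in the abstract.

```python
# Incentive condition behind RAT (illustrative numbers):
# attentiveness is rational when expected penalty > monitoring cost.
test_prob = 0.01        # attention-test frequency per epoch (<1%, per the abstract)
penalty = 1000.0        # slash for a missed test ($, the abstract's upper bound)
monitor_cost = 5.0      # assumed per-epoch cost of staying attentive ($)

attentive_is_rational = test_prob * penalty > monitor_cost
print(attentive_is_rational)    # True: 0.01 * 1000 = 10 > 5
```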
2025-05-30 | Singularity Protocol for Cross Chain AMM without Intermediate Tokens or Bridges | Sumit Vohra et.al. | 2505.24337 | | | Abstract (click to expand)Automated Market Makers (AMMs) are decentralized exchange protocols that provide continuous access to token liquidity without the need for order books or traditional market makers. However, this innovation has failed to scale when it comes to cross-chain swaps. Modern cross-chain swaps employ double-sided AMMs, which are not only inefficient due to liquidity fragmentation but also require an intermediate token. This introduces inherent volatility risk as well as blockchain and bridging risk, especially in the case of wrapped tokens. This paper describes the inefficiencies of existing AMM invariants, particularly their mixed polynomial nature, and derives a new class of AMMs that do not have bi-state dependency between the assets being swapped. We propose a novel method of value transfer swaps using the described invariant that mitigates the need for bi-state dependency and eliminates the need for intermediate tokens or bridging. Furthermore, we show how this mechanism enables efficient cross-chain swaps with lower gas requirements and no bridging risks. The proposed technology is designed to support cross-chain swaps across any permutation of L1, L2, and L3 blockchains. |
2025-05-30 | Transaction Proximity: A Graph-Based Approach to Blockchain Fraud Prevention | Gordon Y. Liao, Ziming Zeng, Mira Belenkiy et.al. | 2505.24284 | | | Abstract (click to expand)This paper introduces a fraud-deterrent access validation system for public blockchains, leveraging two complementary concepts: "Transaction Proximity", which measures the distance between wallets in the transaction graph, and "Easily Attainable Identities (EAIs)", wallets with direct transaction connections to centralized exchanges. Recognizing the limitations of traditional approaches like blocklisting (reactive, slow) and strict allow listing (privacy-invasive, adoption barriers), we propose a system that analyzes transaction patterns to identify wallets with close connections to centralized exchanges. Our directed graph analysis of the Ethereum blockchain reveals that 56% of large USDC wallets (with a lifetime maximum balance greater than $10,000) are EAIs and 88% are within one transaction hop of an EAI. For transactions exceeding $2,000, 91% involve at least one EAI. Crucially, an analysis of past exploits shows that 83% of the known exploiter addresses are not EAIs, with 21% being more than five hops away from any regulated exchange. We present three implementation approaches with varying gas cost and privacy tradeoffs, demonstrating that EAI-based access control can potentially prevent most of these incidents while preserving blockchain openness. Importantly, our approach does not restrict access or share personally identifiable information, but it provides information for protocols to implement their own validation or risk scoring systems based on specific needs. This middle-ground solution enables programmatic compliance while maintaining the core values of open blockchain.
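At its core, "Transaction Proximity" is a hop count: a breadth-first search from a wallet to the nearest EAI over the transaction graph, as in the sketch below. The graph, addresses, and hop cutoff are illustrative placeholders, not real chain data.

```python
from collections import deque

def hops_to_nearest_eai(graph, start, eais, max_hops=5):
    """BFS hop distance from `start` to the nearest EAI wallet, or None if too far."""
    if start in eais:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == max_hops:
            continue
        for nbr in graph.get(node, ()):
            if nbr in seen:
                continue
            if nbr in eais:
                return d + 1
            seen.add(nbr)
            frontier.append((nbr, d + 1))
    return None    # farther than max_hops: candidate for extra scrutiny

graph = {"0xA": ["0xB"], "0xB": ["0xC"], "0xC": []}      # toy transaction graph
print(hops_to_nearest_eai(graph, "0xA", eais={"0xC"}))   # -> 2
```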
2025-05-29 | The Surprising Soupability of Documents in State Space Models | Yasaman Jafari, Zixian Wang, Leon Bergen et.al. | 2505.24033 | | | Abstract (click to expand)We investigate whether hidden states from Structured State Space Models (SSMs) can be merged post-hoc to support downstream reasoning. Inspired by model souping, we propose a strategy where documents are encoded independently and their representations are pooled -- via simple operations like averaging -- into a single context state. This approach, which we call document souping, enables modular encoding and reuse without reprocessing the full input for each query. We finetune Mamba2 models to produce soupable representations and find that they support multi-hop QA, sparse retrieval, and long-document reasoning with strong accuracy. On HotpotQA, souping ten independently encoded documents nearly matches the performance of a cross-encoder trained on the same inputs. |
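The souping recipe itself is deliberately simple: encode each document independently, then pool the resulting hidden states by averaging. The sketch below shows the pooling step, with random tensors standing in for Mamba2 layer states; the shapes are illustrative assumptions.

```python
import torch

def soup_states(doc_states: list[torch.Tensor]) -> torch.Tensor:
    """Pool independently encoded document states into one context state by averaging."""
    # doc_states: one (num_layers, d_state) tensor per document
    return torch.stack(doc_states, dim=0).mean(dim=0)

states = [torch.randn(24, 128) for _ in range(10)]   # ten independently encoded documents
context = soup_states(states)                        # single pooled context state
print(context.shape)                                 # torch.Size([24, 128])
```

Because documents are encoded once and pooled on demand, new queries reuse cached states instead of reprocessing the full input.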
2025-05-29 | Computerized Modeling of Electrophysiology and Pathoelectrophysiology of the Atria -- How Much Detail is Needed? | Olaf Dössel, Axel Loewe et.al. | 2505.23717 | | | Abstract (click to expand)This review focuses on the computerized modeling of the electrophysiology of the human atria, emphasizing the simulation of common arrhythmias such as atrial flutter (AFlut) and atrial fibrillation (AFib). Which components of the model are necessary to accurately model arrhythmogenic tissue modifications, including remodeling, cardiomyopathy, and fibrosis, to ensure reliable simulations? The central question explored is the level of detail required for trustworthy simulations for a specific context of use. The review discusses the balance between model complexity and computational efficiency, highlighting the risks of oversimplification and excessive detail. It covers various aspects of atrial modeling, from cellular to whole atria levels, including the influence of atrial geometry, fiber direction, anisotropy, and wall thickness on simulation outcomes. The article also examines the impact of different modeling approaches, such as volumetric 3D models, bilayer models, and single surface models, on the realism of simulations. In addition, it reviews the latest advances in the modeling of fibrotic tissue and the verification and validation of atrial models. The intended use of these models in planning and optimization of atrial ablation strategies is discussed, with a focus on personalized modeling for individual patients and cohort-based approaches for broader applications. The review concludes by emphasizing the importance of integrating experimental data and clinical validation to enhance the utility of computerized atrial models to improve patient outcomes. |
2025-05-29 | Hybrid subgradient and simulated annealing method for hemivariational inequalities | Piotr Bartman-Szwarc, Adil M. Bagirov, Anna Ochal et.al. | 2505.23676 | | 14 pages, 4 figures, 25th International Conference on Computational Science WS | Abstract (click to expand)In this paper, we employ a global aggregate subgradient method for the numerical solution of hemivariational inequality problems arising in contact mechanics. The method integrates a global search procedure to identify starting points for a local minimization algorithm. The algorithm consists of two types of steps: null steps and serious steps. In each null step, only two subgradients are utilized: the aggregate subgradient and the subgradient computed at the current iteration point, which together determine the search direction. Furthermore, we compare the performance of the proposed method with selected solvers using a representative contact mechanics problem as a case study. |
2025-05-29 | Be.FM: Open Foundation Models for Human Behavior | Yutong Xie, Zhuoheng Li, Xiyuan Wang et.al. | 2505.23058 | | | Abstract (click to expand)Despite their success in numerous fields, the potential of foundation models for modeling and understanding human behavior remains largely unexplored. We introduce Be.FM, one of the first open foundation models designed for human behavior modeling. Built upon open-source large language models and fine-tuned on a diverse range of behavioral data, Be.FM can be used to understand and predict human decision-making. We construct a comprehensive set of benchmark tasks for testing the capabilities of behavioral foundation models. Our results demonstrate that Be.FM can predict behaviors, infer characteristics of individuals and populations, generate insights about contexts, and apply behavioral science knowledge. |
2025-05-28 | Evolution analysis of software quality metrics in an open-source java project: A case study on TestNG | Venkata Sai Sravya Sambaturu et.al. | 2505.22884 | | 9 Pages, 2 Figures | Abstract (click to expand)Software quality is critical in modern software engineering, especially in large and evolving codebases. This study analyzes the evolution of software quality metrics in five successive versions of the open-source Java testing framework TestNG. Using the static analysis tool Understand, eleven key object-oriented metrics, including cyclomatic complexity, class coupling, and lines of code, were extracted for each version. Statistical and visual analyses reveal structural trends over time. The results indicate that TestNG has matured into a more stable and maintainable framework, reflecting ongoing development, refactoring, and architectural improvements. This study provides insights into design evolution and offers recommendations for maintaining code quality in similar projects. |
2025-05-28 | A comprehensive analysis of PINNs: Variants, Applications, and Challenges | Afila Ajithkumar Sophiya, Akarsh K Nair, Sepehr Maleki et.al. | 2505.22761 | | | Abstract (click to expand)Physics Informed Neural Networks (PINNs) have been emerging as a powerful computational tool for solving differential equations. However, the applicability of these models is still in its initial stages and requires more standardization to gain wider popularity. Through this survey, we present a comprehensive overview of PINN approaches, exploring various aspects related to their architecture, variants, areas of application, real-world use cases, and challenges. Although prior surveys exist, they fail to provide a comprehensive view, as they either focus primarily on particular application scenarios or remain at a superficial level. This survey attempts to bridge the gap in the existing literature by presenting a detailed analysis of all these factors combined with recent advancements and state-of-the-art research in PINNs. Additionally, we discuss prevalent challenges in PINN implementation and present future research directions. The contributions of the survey fall into three parts: a detailed overview of PINN architectures and variants; a performance analysis of PINNs across different equations and application domains, highlighting their features; and a detailed discussion of current issues and future research directions. |
2025-05-28 | Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation | Yi Zhang, Difan Zou et.al. | 2505.22391 | | 23 pages, 5 figures, 4 tables | Abstract (click to expand)Modeling physical systems in a generative manner offers several advantages, including the ability to handle partial observations, generate diverse solutions, and address both forward and inverse problems. Recently, diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). However, diffusion models only access noisy data \(\boldsymbol{x}_t\) at intermediate steps, making it infeasible to directly enforce constraints on the clean sample \(\boldsymbol{x}_0\) at each noisy level. As a workaround, constraints are typically applied to the expectation of clean samples \(\mathbb{E}[\boldsymbol{x}_0 \mid \boldsymbol{x}_t]\). |
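For context, in a standard DDPM-style parameterization the clean-sample expectation that such constraints target has a closed form in terms of the learned noise predictor via Tweedie's formula; this is a textbook identity, not a formula taken from the paper above:

```latex
\mathbb{E}[\boldsymbol{x}_0 \mid \boldsymbol{x}_t]
  = \frac{1}{\sqrt{\bar{\alpha}_t}}
    \Bigl( \boldsymbol{x}_t
           - \sqrt{1 - \bar{\alpha}_t}\,
             \boldsymbol{\epsilon}_\theta(\boldsymbol{x}_t, t) \Bigr)
```

Here \(\bar{\alpha}_t\) is the cumulative noise schedule and \(\boldsymbol{\epsilon}_\theta\) is the trained noise-prediction network.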
2025-05-28 | B-XAIC Dataset: Benchmarking Explainable AI for Graph Neural Networks Using Chemical Data | Magdalena Proszewska, Tomasz Danel, Dawid Rymarczyk et.al. | 2505.22252 | link | 26 pages, 16 figures, 5 tables | Abstract (click to expand)Understanding the reasoning behind deep learning model predictions is crucial in cheminformatics and drug discovery, where molecular design determines their properties. However, current evaluation frameworks for Explainable AI (XAI) in this domain often rely on artificial datasets or simplified tasks, employing data-derived metrics that fail to capture the complexity of real-world scenarios and lack a direct link to explanation faithfulness. To address this, we introduce B-XAIC, a novel benchmark constructed from real-world molecular data and diverse tasks with known ground-truth rationales for assigned labels. Through a comprehensive evaluation using B-XAIC, we reveal limitations of existing XAI methods for Graph Neural Networks (GNNs) in the molecular domain. This benchmark provides a valuable resource for gaining deeper insights into the faithfulness of XAI, facilitating the development of more reliable and interpretable models. |
2025-05-28 | FALCON: An ML Framework for Fully Automated Layout-Constrained Analog Circuit Design | Asal Mehradfar, Xuzhe Zhao, Yilun Huang et.al. | 2505.21923 | link | | Abstract (click to expand)Designing analog circuits from performance specifications is a complex, multi-stage process encompassing topology selection, parameter inference, and layout feasibility. We introduce FALCON, a unified machine learning framework that enables fully automated, specification-driven analog circuit synthesis through topology selection and layout-constrained optimization. Given a target performance, FALCON first selects an appropriate circuit topology using a performance-driven classifier guided by human design heuristics. Next, it employs a custom, edge-centric graph neural network trained to map circuit topology and parameters to performance, enabling gradient-based parameter inference through the learned forward model. This inference is guided by a differentiable layout cost, derived from analytical equations capturing parasitic and frequency-dependent effects, and constrained by design rules. We train and evaluate FALCON on a large-scale custom dataset of 1M analog mm-wave circuits, generated and simulated using Cadence Spectre across 20 expert-designed topologies. Through this evaluation, FALCON demonstrates >99% accuracy in topology inference, <10% relative error in performance prediction, and efficient layout-aware design that completes in under 1 second per instance. Together, these results position FALCON as a practical and extensible foundation model for end-to-end analog circuit design automation. |
2025-05-29 | SVRPBench: A Realistic Benchmark for Stochastic Vehicle Routing Problem | Ahmed Heakl, Yahia Salaheldin Shaaban, Martin Takac et.al. | 2505.21887 | link | 18 pages, 14 figures, 11 tables | Abstract (click to expand)Robust routing under uncertainty is central to real-world logistics, yet most benchmarks assume static, idealized settings. We present SVRPBench, the first open benchmark to capture high-fidelity stochastic dynamics in vehicle routing at urban scale. Spanning more than 500 instances with up to 1000 customers, it simulates realistic delivery conditions: time-dependent congestion, log-normal delays, probabilistic accidents, and empirically grounded time windows for residential and commercial clients. Our pipeline generates diverse, constraint-rich scenarios, including multi-depot and multi-vehicle setups. Benchmarking reveals that state-of-the-art RL solvers like POMO and AM degrade by over 20% under distributional shift, while classical and metaheuristic methods remain robust. To enable reproducible research, we release the dataset and evaluation suite. SVRPBench challenges the community to design solvers that generalize beyond synthetic assumptions and adapt to real-world uncertainty. |
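The stochastic travel-time ingredients named above (time-dependent congestion plus log-normal delays) are straightforward to reproduce in a scenario generator; a hedged sketch where all coefficients are illustrative, not the benchmark's calibrated settings:

```python
import numpy as np

def sample_travel_time(base_minutes, hour, rng):
    """Base travel time inflated by rush-hour congestion, then perturbed
    by a multiplicative log-normal delay; all parameters illustrative."""
    congestion = 1.5 if hour in (8, 9, 17, 18) else 1.0
    delay = rng.lognormal(mean=0.0, sigma=0.5)
    return base_minutes * congestion * delay

rng = np.random.default_rng(42)
samples = [sample_travel_time(20.0, hour=8, rng=rng) for _ in range(5)]
print([round(s, 1) for s in samples])  # stochastic rush-hour travel times
```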
2025-05-27 | Power-Capping Metric Evaluation for Improving Energy Efficiency | Maria Patrou, Thomas Wang, Wael Elwasif et.al. | 2505.21758 | | 14 pages, 3 figures, 2 tables. Accepted at the Energy Efficiency with Sustainable Performance: Techniques, Tools, and Best Practices, EESP Workshop, in conjunction with ISC High Performance 2025 | Abstract (click to expand)With high-performance computing systems now running at exascale, optimizing power-scaling management and resource utilization has become more critical than ever. This paper explores runtime power-capping optimizations that leverage integrated CPU-GPU power management on architectures like the NVIDIA GH200 superchip. We evaluate energy-performance metrics that account for simultaneous CPU and GPU power-capping effects by using two complementary approaches: speedup-energy-delay and a Euclidean distance-based multi-objective optimization method. By targeting a mostly compute-bound exascale science application, the Locally Self-Consistent Multiple Scattering (LSMS), we explore challenging scenarios to identify potential opportunities for energy savings in exascale applications, and we recognize that even modest reductions in energy consumption can have significant overall impacts. Our results highlight how GPU task-specific dynamic power-cap adjustments combined with integrated CPU-GPU power steering can improve the energy utilization of certain GPU tasks, thereby laying the groundwork for future adaptive optimization strategies. |
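The Euclidean-distance multi-objective method mentioned above can be pictured as selecting the power-cap setting whose normalized (energy, runtime) point lies closest to the utopia point. A toy numpy illustration with made-up measurements, not the paper's GH200/LSMS data:

```python
import numpy as np

# Hypothetical (energy in J, runtime in s) for four power-cap settings.
candidates = np.array([[900.0, 120.0],
                       [760.0, 135.0],
                       [700.0, 180.0],
                       [820.0, 125.0]])

# Normalize each objective to [0, 1] and measure the distance to (0, 0),
# the utopia point of minimal energy and minimal runtime.
lo, hi = candidates.min(axis=0), candidates.max(axis=0)
normed = (candidates - lo) / (hi - lo)
best = int(np.argmin(np.linalg.norm(normed, axis=1)))
print("best setting:", best, candidates[best])
```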
2025-05-27 | Modeling extreme events and intermittency in turbulent diffusion with a mean gradient | Mustafa A Mohamad, Di Qi et.al. | 2505.21688 | | | Abstract (click to expand)We study the statistical properties of passive tracer transport in turbulent flows with a mean gradient, emphasizing tracer intermittency and extreme events. An analytically tractable model is developed, coupling zonal and shear velocity components with both linear and nonlinear stochastic dynamics. Formulating the model in Fourier space, a simple explicit solution for the tracer invariant statistics is derived. Through this model we identify the resonance condition responsible for non-Gaussian behavior and bursts in the tracer. Resonant conditions, which lead to a peak in the tracer variance, occur when the zonal flow and shear flow phase speeds coincide. Numerical experiments across a range of regimes, including different energy spectra and zonal flow models, are performed to validate these findings and demonstrate how the velocity field and stochasticity determine tracer extremes. These results provide additional insight into the mechanisms underlying turbulent tracer transport, with implications for uncertainty quantification and data assimilation in geophysical and environmental applications. |
2025-05-27 | Out of the Past: An AI-Enabled Pipeline for Traffic Simulation from Noisy, Multimodal Detector Data and Stakeholder Feedback | Rex Chen, Karen Wu, John McCartney et.al. | 2505.21349 | | 12 pages; 5 figures; preprint version | Abstract (click to expand)How can a traffic simulation be designed to faithfully reflect real-world traffic conditions? Past data-driven approaches to traffic simulation in the literature have relied on unrealistic or suboptimal heuristics. They also fail to adequately account for the effects of uncertainty and multimodality in the data on simulation outcomes. In this work, we integrate advances in AI to construct a three-step, end-to-end pipeline for generating a traffic simulation from detector data: computer vision for vehicle counting from camera footage, combinatorial optimization for vehicle route generation from multimodal data, and large language models for iterative simulation refinement from natural language feedback. Using a road network from Strongsville, Ohio as a testbed, we demonstrate that our pipeline can accurately capture the city's traffic patterns in a granular simulation. Beyond Strongsville, our traffic simulation framework can be generalized to other municipalities with different levels of data and infrastructure availability. |
2025-05-27 | A Lightweight Multi-Expert Generative Language Model System for Engineering Information and Knowledge Extraction | Bogdan Bogachov, Yaoyao Fiona Zhao et.al. | 2505.21109 | | 10 pages, 4 Figures, 6 Tables. This paper has been accepted to be published in the proceedings of IDETC-CIE 2025 | Abstract (click to expand)Despite recent advancements in domain adaptation techniques for large language models, these methods remain computationally intensive, and the resulting models can still exhibit hallucination issues. Most existing adaptation methods do not prioritize reducing the computational resources required for fine-tuning and inference of language models. Hallucination issues have gradually decreased with each new model release. However, they remain prevalent in engineering contexts, where generating well-structured text with minimal errors and inconsistencies is critical. This work introduces a novel approach called the Small Language Graph (SLG), which is a lightweight adaptation solution designed to address the two key challenges outlined above. The system is structured in the form of a graph, where each node represents a lightweight expert - a small language model fine-tuned on specific and concise texts. The results of this study have shown that SLG was able to surpass conventional fine-tuning methods on the Exact Match metric by a factor of three. Additionally, the fine-tuning process was 1.7 times faster than that of a larger stand-alone language model. These findings introduce a potential for small to medium-sized engineering companies to confidently use generative AI technologies, such as LLMs, without the necessity to invest in expensive computational resources. Also, the graph architecture and the small size of expert nodes offer a possible opportunity for distributed AI systems, thus potentially reducing the global need for expensive centralized compute clusters. |
2025-05-27 | Limitations of Nyquist Criteria in the Discretization of 2D Electromagnetic Integral Equations at High Frequency: Spectral Insights into Pollution Effects | Viviana Giunzioni, Adrien Merlini, Francesco P. Andriulli et.al. | 2505.20942 | | | Abstract (click to expand)The use of boundary integral equations in modeling boundary value problems-such as elastic, acoustic, or electromagnetic ones-is well established in the literature and widespread in practical applications. These equations are typically solved numerically using boundary element methods (BEMs), which generally provide accurate and reliable solutions. When the frequency of the wave phenomenon under study increases, the discretization of the problem is typically chosen to maintain a fixed number of unknowns per wavelength. Under these conditions, the BEM over finite-dimensional subspaces of piecewise polynomial basis functions is commonly believed to provide a bounded solution accuracy. If proven, this would constitute a significant advantage of the BEM with respect to finite element and finite difference time domain methods, which, in contrast, are affected by numerical pollution. In this work, we conduct a rigorous spectral analysis of some of the most commonly used boundary integral operators and examine the impact of the BEM discretization on the solution accuracy of widely used integral equations modeling two-dimensional electromagnetic scattering from a perfectly electrically conducting cylinder. We consider both ill-conditioned and well-conditioned equations, the latter being characterized by solution operators bounded independently of frequency. Our analysis, which is capable of tracking the effects of BEM discretization on compositions and sums of different operators, reveals a form of pollution that affects, in different measures, equations of both kinds. After elucidating the mechanism by which the BEM discretization impacts accuracy, we propose a solution strategy that can cure the pollution problem thus evidenced. The defining strength of the proposed theoretical model lies in its capacity to deliver deep insight into the root causes of the phenomenon. |
2025-05-27 | Reduced and mixed precision turbulent flow simulations using explicit finite difference schemes | Bálint Siklósi, Pushpender K. Sharma, David J. Lusher et.al. | 2505.20911 | | | Abstract (click to expand)The use of reduced and mixed precision computing has gained increasing attention in high-performance computing (HPC) as a means to improve computational efficiency, particularly on modern hardware architectures like GPUs. In this work, we explore the application of mixed precision arithmetic in compressible turbulent flow simulations using explicit finite difference schemes. We extend the OPS and OpenSBLI frameworks to support customizable precision levels, enabling fine-grained control over precision allocation for different computational tasks. Through a series of numerical experiments on the Taylor-Green vortex benchmark, we demonstrate that mixed precision strategies, such as half-single and single-double combinations, can offer significant performance gains without compromising numerical accuracy. However, pure half-precision computations result in unacceptable accuracy loss, underscoring the need for careful precision selection. Our results show that mixed precision configurations can reduce memory usage and communication overhead, leading to notable speedups, particularly on multi-CPU and multi-GPU systems. |
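The accuracy hazard of pure half precision noted above is easy to demonstrate: a float16 running sum stalls once the total exceeds the format's resolution, while storing in half precision but accumulating in double does not. A toy numpy illustration, independent of the OPS/OpenSBLI frameworks themselves:

```python
import numpy as np

values = np.full(100_000, 0.001, dtype=np.float16)  # half-precision data

half_sum = np.float16(0.0)
for v in values:                     # pure half-precision accumulation
    half_sum = np.float16(half_sum + v)

mixed_sum = values.astype(np.float64).sum()  # double-precision accumulation

# The half-precision sum stalls in the single digits, far from the true ~100.
print(float(half_sum), float(mixed_sum))
```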
2025-05-29 | GIT-BO: High-Dimensional Bayesian Optimization with Tabular Foundation Models | Rosen Ting-Ying Yu, Cyril Picard, Faez Ahmed et.al. | 2505.20685 | | | Abstract (click to expand)Bayesian optimization (BO) effectively optimizes expensive black-box functions but faces significant challenges in high-dimensional spaces (dimensions exceeding 100) due to the curse of dimensionality. Existing high-dimensional BO methods typically leverage low-dimensional embeddings or structural assumptions to mitigate this challenge, yet these approaches frequently incur considerable computational overhead and rigidity due to iterative surrogate retraining and fixed assumptions. To address these limitations, we propose Gradient-Informed Bayesian Optimization using Tabular Foundation Models (GIT-BO), an approach that utilizes a pre-trained tabular foundation model (TFM) as a surrogate, leveraging its gradient information to adaptively identify low-dimensional subspaces for optimization. We propose a way to exploit internal gradient computations from the TFM's forward pass by creating a gradient-informed diagnostic matrix that reveals the most sensitive directions of the TFM's predictions, enabling optimization in a continuously re-estimated active subspace without the need for repeated model retraining. Extensive empirical evaluation across 23 synthetic and real-world benchmarks demonstrates that GIT-BO consistently outperforms four state-of-the-art Gaussian process-based high-dimensional BO methods, showing superior scalability and optimization performance, especially as dimensionality increases up to 500 dimensions. This work establishes foundation models, augmented with gradient-informed adaptive subspace identification, as highly competitive alternatives to traditional Gaussian process-based approaches for high-dimensional Bayesian optimization tasks. |
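The gradient-informed subspace idea is reminiscent of classical active-subspace estimation: eigendecompose the average outer product of gradients and keep the leading directions. A hedged numpy sketch in which a toy quadratic stands in for the tabular foundation model's internal gradients:

```python
import numpy as np

def active_subspace(grads, k):
    """Leading k eigenvectors of C = E[g g^T]; their span contains the
    directions along which the function is most sensitive."""
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]

# Toy function 0.5 * ||W^T x||^2 varying along two hidden directions in 100-D.
rng = np.random.default_rng(1)
W = rng.normal(size=(100, 2))
X = rng.normal(size=(500, 100))
grads = (X @ W) @ W.T                      # exact gradients at the samples
U = active_subspace(grads, k=2)
print(U.shape)  # (100, 2): a basis for the estimated active subspace
```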
2025-05-27 | FinTagging: An LLM-ready Benchmark for Extracting and Structuring Financial Information | Yan Wang, Yang Ren, Lingfei Qian et.al. | 2505.20650 | link | | Abstract (click to expand)We introduce FinTagging, the first full-scope, table-aware XBRL benchmark designed to evaluate the structured information extraction and semantic alignment capabilities of large language models (LLMs) in the context of XBRL-based financial reporting. Unlike prior benchmarks that oversimplify XBRL tagging as flat multi-class classification and focus solely on narrative text, FinTagging decomposes the XBRL tagging problem into two subtasks: FinNI for financial entity extraction and FinCL for taxonomy-driven concept alignment. It requires models to jointly extract facts and align them with the full 10k+ US-GAAP taxonomy across both unstructured text and structured tables, enabling realistic, fine-grained evaluation. We assess a diverse set of LLMs under zero-shot settings, systematically analyzing their performance on both subtasks and overall tagging accuracy. Our results reveal that, while LLMs demonstrate strong generalization in information extraction, they struggle with fine-grained concept alignment, particularly in disambiguating closely related taxonomy entries. These findings highlight the limitations of existing LLMs in fully automating XBRL tagging and underscore the need for improved semantic reasoning and schema-aware modeling to meet the demands of accurate financial disclosure. Code is available at our GitHub repository and data is at our Hugging Face repository. |
2025-05-26 | FinLoRA: Benchmarking LoRA Methods for Fine-Tuning LLMs on Financial Datasets | Dannong Wang, Jaisal Patel, Daochen Zha et.al. | 2505.19819 | link | | Abstract (click to expand)Low-rank adaptation (LoRA) methods show great potential for scaling pre-trained general-purpose Large Language Models (LLMs) to hundreds or thousands of use scenarios. However, their efficacy in high-stakes domains like finance is rarely explored, e.g., passing CFA exams and analyzing SEC filings. In this paper, we present the open-source FinLoRA project that benchmarks LoRA methods on both general and highly professional financial tasks. First, we curated 19 datasets covering diverse financial applications; in particular, we created four novel XBRL analysis datasets based on 150 SEC filings. Second, we evaluated five LoRA methods and five base LLMs. Finally, we provide extensive experimental results in terms of accuracy, F1, and BERTScore and report computational cost in terms of time and GPU memory during fine-tuning and inference stages. We find that LoRA methods achieved substantial performance gains of 36% on average over base models. Our FinLoRA project provides an affordable and scalable approach to democratize financial intelligence to the general public. Datasets, LoRA adapters, code, and documentation are available at https://github.com/Open-Finance-Lab/FinLoRA |
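As background on the methods being benchmarked, a LoRA adapter freezes the pretrained weight and learns a low-rank correction on top of it. A minimal numpy sketch of the adapted forward pass, following the standard LoRA formulation rather than FinLoRA's code:

```python
import numpy as np

d, r, alpha = 512, 8, 16              # hidden size, rank, scaling factor
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))           # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01    # trainable down-projection (small init)
B = np.zeros((d, r))                  # trainable up-projection (zero init)

def lora_forward(x):
    """y = W x + (alpha / r) * B A x; only A and B receive gradients."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d,))
print(np.allclose(lora_forward(x), W @ x))  # True: B = 0 before training
```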
2025-05-26 | Cross-Sequence Semi-Supervised Learning for Multi-Parametric MRI-Based Visual Pathway Delineation | Alou Diakite, Cheng Li, Lei Xie et.al. | 2505.19733 | | | Abstract (click to expand)Accurately delineating the visual pathway (VP) is crucial for understanding the human visual system and diagnosing related disorders. Exploring multi-parametric MR imaging data has been identified as an important way to delineate VP. However, due to the complex cross-sequence relationships, existing methods cannot effectively model the complementary information from different MRI sequences. In addition, these existing methods heavily rely on large training data with labels, which is labor-intensive and time-consuming to obtain. In this work, we propose a novel semi-supervised multi-parametric feature decomposition framework for VP delineation. Specifically, a correlation-constrained feature decomposition (CFD) is designed to handle the complex cross-sequence relationships by capturing the unique characteristics of each MRI sequence and easing the multi-parametric information fusion process. Furthermore, a consistency-based sample enhancement (CSE) module is developed to address the limited labeled data issue, by generating and promoting meaningful edge information from unlabeled data. We validate our framework using two public datasets, and one in-house Multi-Shell Diffusion MRI (MDM) dataset. Experimental results demonstrate the superiority of our approach in terms of delineation performance when compared to seven state-of-the-art approaches. |
2025-05-26 | Integrated Finite Element Neural Network (IFENN) for Phase-Field Fracture with Minimal Input and Generalized Geometry-Load Handling | Panos Pantidis, Lampros Svolos, Diab Abueidda et.al. | 2505.19566 | | | Abstract (click to expand)We present a novel formulation for modeling phase-field fracture propagation based on the Integrated Finite Element Neural Network (IFENN) framework. IFENN is a hybrid solver scheme that utilizes neural networks as PDE solvers within FEM, preserving accuracy via residual minimization while achieving speed-up via swift network predictions and reduction of the size of system of equations in coupled problems. In this work, we introduce a radically new formulation of IFENN in which the phase-field variable is calculated using physics-informed convolutional networks (PICNNs), while the equilibrium equation is still solved using FEM to maintain the solver robustness. Unlike conventional approaches, which rely on sequence or time-dependent models, we eliminate the need to include temporal features in the training setup and inference stage. Instead, we show that it is sufficient to learn only the spatial coupling between the strain energy density and the phase-field variable in the vicinity of the fracture process zone, and utilize this information along the advancing crack simulation. We train a single CNN in a purely physics-based, unsupervised manner on just two load increments from a single-notch tension problem, with a total training time of only 5 minutes. Following this exceptionally minimal and fast training, we show that the same PICNN can (when embedded within IFENN) model crack propagation in a very wide range of unseen scenarios, including arbitrarily rectangular domains, single and multiple interacting cracks, varying mesh densities, and arbitrary loading paths. The proposed formulation delivers breakthroughs that address many of the limitations in the existing literature of hybrid modeling, introducing a new paradigm for the development of generalizable, physics-consistent hybrid models that are applicable to fracture and other coupled problems. |
2025-05-26 | DoctorRAG: Medical RAG Fusing Knowledge with Patient Analogy through Textual Gradients | Yuxing Lu, Gecheng Fu, Wei Wu et.al. | 2505.19538 | | 32 pages, 5 figures, 5 tables | Abstract (click to expand)Existing medical RAG systems mainly leverage knowledge from medical knowledge bases, neglecting the crucial role of experiential knowledge derived from similar patient cases -- a key component of human clinical reasoning. To bridge this gap, we propose DoctorRAG, a RAG framework that emulates doctor-like reasoning by integrating both explicit clinical knowledge and implicit case-based experience. DoctorRAG enhances retrieval precision by first allocating conceptual tags for queries and knowledge sources, together with a hybrid retrieval mechanism that draws on both relevant knowledge and similar patient cases. In addition, a Med-TextGrad module using multi-agent textual gradients is integrated to ensure that the final output adheres to the retrieved knowledge and patient query. Comprehensive experiments on multilingual, multitask datasets demonstrate that DoctorRAG significantly outperforms strong baseline RAG models and gains improvements from iterative refinements. Our approach generates more accurate, relevant, and comprehensive responses, taking a step towards more doctor-like medical reasoning systems. |
2025-05-26 | BizFinBench: A Business-Driven Real-World Financial Benchmark for Evaluating LLMs | Guilong Lu, Xuntao Guo, Rongjunchen Zhang et.al. | 2505.19457 | link | Project Page: https://hithink-research.github.io/BizFinBench/ | Abstract (click to expand)Large language models excel in general tasks, yet assessing their reliability in logic-heavy, precision-critical domains like finance, law, and healthcare remains challenging. To address this, we introduce BizFinBench, the first benchmark specifically designed to evaluate LLMs in real-world financial applications. BizFinBench consists of 6,781 well-annotated queries in Chinese, spanning five dimensions: numerical calculation, reasoning, information extraction, prediction recognition, and knowledge-based question answering, grouped into nine fine-grained categories. The benchmark includes both objective and subjective metrics. We also introduce IteraJudge, a novel LLM evaluation method that reduces bias when LLMs serve as evaluators in objective metrics. We benchmark 25 models, including both proprietary and open-source systems. Extensive experiments show that no model dominates across all tasks. Our evaluation reveals distinct capability patterns: (1) In Numerical Calculation, Claude-3.5-Sonnet (63.18) and DeepSeek-R1 (64.04) lead, while smaller models like Qwen2.5-VL-3B (15.92) lag significantly; (2) In Reasoning, proprietary models dominate (ChatGPT-o3: 83.58, Gemini-2.0-Flash: 81.15), with open-source models trailing by up to 19.49 points; (3) In Information Extraction, the performance spread is the largest, with DeepSeek-R1 scoring 71.46, while Qwen3-1.7B scores 11.23; (4) In Prediction Recognition, performance variance is minimal, with top models scoring between 39.16 and 50.00. We find that while current LLMs handle routine finance queries competently, they struggle with complex scenarios requiring cross-concept reasoning. BizFinBench offers a rigorous, business-aligned benchmark for future research. The code and dataset are available at https://github.com/HiThink-Research/BizFinBench. |
2025-05-25 | RoofNet: A Global Multimodal Dataset for Roof Material Classification | Noelle Law, Yuki Miura et.al. | 2505.19358 | | | Abstract (click to expand)Natural disasters are increasing in frequency and severity, causing hundreds of billions of dollars in damage annually and posing growing threats to infrastructure and human livelihoods. Accurate data on roofing materials is critical for modeling building vulnerability to natural hazards such as earthquakes, floods, wildfires, and hurricanes, yet such data remain unavailable. To address this gap, we introduce RoofNet, the largest and most geographically diverse novel multimodal dataset to date, comprising over 51,500 samples from 184 geographically diverse sites pairing high-resolution Earth Observation (EO) imagery with curated text annotations for global roof material classification. RoofNet includes geographically diverse satellite imagery labeled with 14 key roofing types -- such as asphalt shingles, clay tiles, and metal sheets -- and is designed to enhance the fidelity of global exposure datasets through vision-language modeling (VLM). We sample EO tiles from climatically and architecturally distinct regions to construct a representative dataset. A subset of 6,000 images was annotated in collaboration with domain experts to fine-tune a VLM. We used geographic- and material-aware prompt tuning to enhance class separability. The fine-tuned model was then applied to the remaining EO tiles, with predictions refined through rule-based and human-in-the-loop verification. In addition to material labels, RoofNet provides rich metadata including roof shape, footprint area, solar panel presence, and indicators of mixed roofing materials (e.g., HVAC systems). RoofNet supports scalable, AI-driven risk assessment and serves as a downstream benchmark for evaluating model generalization across regions -- offering actionable insights for insurance underwriting, disaster preparedness, and infrastructure policy planning. |
2025-05-25 | Rapid Development of Efficient Participant-Specific Computational Models of the Wrist | Thor E. Andreassen, Taylor P. Trentadue, Andrew R. Thoreson et.al. | 2505.19282 | | 33 Pages, 1 Graphical Abstract, 10 Figures, 6 Tables | Abstract (click to expand)While computational modeling may help to develop new treatment options for hand and wrist injuries, at present, few models exist. The time and expertise required to develop and use these models is considerable. Moreover, most do not allow for variation of material properties, instead relying on literature reported averages. We have developed a novel automated workflow combining non-linear morphing techniques with various algorithmic techniques to create participant-specific finite element models. Using this workflow, three participant-specific models were created from our existing four-dimensional computed tomography (4DCT) data. These were then used to perform two analyses to demonstrate the usefulness of the models to investigate clinical questions, namely optimization of ligament properties to participant-specific kinematics, and Monte Carlo (MC) analysis of the impacts of ligament injury on joint contact pressure, as an analogue for joint injury that may lead to osteoarthritis. Participant-specific models can be created in 2 hours and individual simulations performed in 45 seconds. This work lays the groundwork for future patient-specific modeling of the hand and wrist. |
2025-05-25 | MOOSE-Chem2: Exploring LLM Limits in Fine-Grained Scientific Hypothesis Discovery via Hierarchical Search | Zonglin Yang, Wanhao Liu, Ben Gao et.al. | 2505.19209 | | | Abstract (click to expand)Large language models (LLMs) have shown promise in automating scientific hypothesis generation, yet existing approaches primarily yield coarse-grained hypotheses lacking critical methodological and experimental details. We introduce and formally define the novel task of fine-grained scientific hypothesis discovery, which entails generating detailed, experimentally actionable hypotheses from coarse initial research directions. We frame this as a combinatorial optimization problem and investigate the upper limits of LLMs' capacity to solve it when maximally leveraged. Specifically, we explore four foundational questions: (1) how to best harness an LLM's internal heuristics to formulate the fine-grained hypothesis it itself would judge as the most promising among all the possible hypotheses it might generate, based on its own internal scoring-thus defining a latent reward landscape over the hypothesis space; (2) whether such LLM-judged better hypotheses exhibit stronger alignment with ground-truth hypotheses; (3) whether shaping the reward landscape using an ensemble of diverse LLMs of similar capacity yields better outcomes than defining it with repeated instances of the strongest LLM among them; and (4) whether an ensemble of identical LLMs provides a more reliable reward landscape than a single LLM. To address these questions, we propose a hierarchical search method that incrementally proposes and integrates details into the hypothesis, progressing from general concepts to specific experimental configurations. We show that this hierarchical process smooths the reward landscape and enables more effective optimization. Empirical evaluations on a new benchmark of expert-annotated fine-grained hypotheses from recent chemistry literature show that our method consistently outperforms strong baselines. |
2025-05-24 | Discrete gradient methods for port-Hamiltonian differential-algebraic equations | Philipp L. Kinon, Riccardo Morandin, Philipp Schulze et.al. | 2505.18810 | link | 36 pages (8 pages appendix), 6 figures | Abstract (click to expand)Discrete gradient methods are a powerful tool for the time discretization of dynamical systems, since they are structure-preserving regardless of the form of the total energy. In this work, we discuss the application of discrete gradient methods to the system class of nonlinear port-Hamiltonian differential-algebraic equations - as they emerge from the port- and energy-based modeling of physical systems in various domains. We introduce a novel numerical scheme tailored for semi-explicit differential-algebraic equations and further address more general settings using the concepts of discrete gradient pairs and Dirac-dissipative structures. Additionally, the behavior under system transformations is investigated and we demonstrate that under suitable assumptions port-Hamiltonian differential-algebraic equations admit a representation which consists of a parametrized port-Hamiltonian semi-explicit system and an unstructured equation. Finally, we present the application to multibody system dynamics and discuss numerical results to demonstrate the capabilities of our approach. |
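For intuition on why discrete gradients preserve energy regardless of the form of the Hamiltonian, consider Gonzalez's midpoint discrete gradient on a one-degree-of-freedom ODE; a hedged sketch with a pendulum and a simple fixed-point solver, far simpler than the paper's port-Hamiltonian DAE setting:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # canonical symplectic structure

def H(x):                                    # pendulum: H(q, p) = p^2/2 - cos(q)
    return 0.5 * x[1] ** 2 - np.cos(x[0])

def gradH(x):
    return np.array([np.sin(x[0]), x[1]])

def discrete_gradient(x, y):
    """Gonzalez midpoint discrete gradient: by construction it satisfies
    dg(x, y) . (y - x) == H(y) - H(x), so the skew-symmetry of J gives
    exact energy conservation of the time step."""
    g = gradH(0.5 * (x + y))
    d = y - x
    dd = d @ d
    if dd < 1e-14:
        return g
    return g + ((H(y) - H(x) - g @ d) / dd) * d

def step(x, h, iters=50):
    y = x + h * (J @ gradH(x))               # explicit predictor
    for _ in range(iters):                   # fixed-point corrector
        y = x + h * (J @ discrete_gradient(x, y))
    return y

x0 = np.array([1.0, 0.0])
x = x0.copy()
for _ in range(200):
    x = step(x, h=0.1)
print(abs(H(x) - H(x0)))  # tiny: energy preserved up to solver tolerance
```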
2025-05-24 | A high-order matrix-free adaptive solver for the shallow water equations with irregular bathymetry | Luca Arpaia, Giuseppe Orlando, Christian Ferrarin et.al. | 2505.18743 | | | Abstract (click to expand)We present the first step in the development of an Adaptive Mesh Refinement (AMR) solver for coastal engineering applications, based on a high-order Discontinuous Galerkin (DG) method as implemented in the deal.II library. This environment provides efficient and native parallelization techniques and automatically handles non-conforming meshes to implement both static and dynamic AMR approaches. The proposed method is automatically well-balanced, allows the use of realistic bathymetry data without any regularity assumption, and includes a consistent conservative discretization for transported chemical species. Numerical experiments on idealized benchmarks validate the proposed approach, while results obtained on realistic bathymetries and complex domains show its potential for accurate and efficient adaptive simulations of coastal flows. |
2025-05-24 | Beyond Equilibrium: Non-Equilibrium Foundations Should Underpin Generative Processes in Complex Dynamical Systems | Jiazhen Liu, Ruikun Li, Huandong Wang et.al. | 2505.18621 | | | Abstract (click to expand)This position paper argues that next-generation non-equilibrium-inspired generative models will provide the essential foundation for better modeling real-world complex dynamical systems. While many classical generative algorithms draw inspiration from equilibrium physics, they are fundamentally limited in representing systems with transient, irreversible, or far-from-equilibrium behavior. We show that non-equilibrium frameworks naturally capture non-equilibrium processes and evolving distributions. Through empirical experiments on a dynamic Prinz potential system, we demonstrate that non-equilibrium generative models better track temporal evolution and adapt to non-stationary landscapes. We further highlight future directions such as integrating non-equilibrium principles with generative AI to simulate rare events, inferring underlying mechanisms, and representing multi-scale dynamics across scientific domains. Our position is that embracing non-equilibrium physics is not merely beneficial but necessary for generative AI to serve as a scientific modeling tool, offering new capabilities for simulating, understanding, and controlling complex systems. |
2025-05-24 | Learning Fluid-Structure Interaction Dynamics with Physics-Informed Neural Networks and Immersed Boundary Methods | Afrah Farea, Saiful Khan, Reza Daryani et.al. | 2505.18565 | link | | Abstract (click to expand)We introduce neural network architectures that combine physics-informed neural networks (PINNs) with the immersed boundary method (IBM) to solve fluid-structure interaction (FSI) problems. Our approach features two distinct architectures: a Single-FSI network with a unified parameter space, and an innovative Eulerian-Lagrangian network that maintains separate parameter spaces for fluid and structure domains. We study each architecture using standard Tanh and adaptive B-spline activation functions. Empirical studies on a 2D cavity flow problem involving a moving solid structure show that the Eulerian-Lagrangian architecture performs significantly better. The adaptive B-spline activation further enhances accuracy by providing locality-aware representation near boundaries. While our methodology shows promising results in predicting the velocity field, pressure recovery remains challenging due to the absence of explicit force-coupling constraints in the current formulation. Our findings underscore the importance of domain-specific architectural design and adaptive activation functions for modeling FSI problems within the PINN framework. |
2025-05-23 | AERO: An autonomous platform for continuous research | Valérie Hayot-Sasson, Abby Stevens, Nicholson Collier et.al. | 2505.18408 | link | | Abstract (click to expand)The COVID-19 pandemic highlighted the need for new data infrastructure, as epidemiologists and public health workers raced to harness rapidly evolving data, analytics, and infrastructure in support of cross-sector investigations. To meet this need, we developed AERO, an automated research and data sharing platform for continuous, distributed, and multi-disciplinary collaboration. In this paper, we describe the AERO design and how it supports the automatic ingestion, validation, and transformation of monitored data into a form suitable for analysis; the automated execution of analyses on this data; and the sharing of data among different entities. We also describe how our AERO implementation leverages capabilities provided by the Globus platform and GitHub for automation, distributed execution, data sharing, and authentication. We present results obtained with an instance of AERO running two public health surveillance applications and demonstrate benchmarking results with a synthetic application, all of which are publicly available for testing. |
2025-05-23 | Seeing Beyond Words: MatVQA for Challenging Visual-Scientific Reasoning in Materials Science | Sifan Wu, Huan Zhang, Yizhan Li et.al. | 2505.18319 | | | Abstract (click to expand)The emergence of Multimodal Large Language Models (MLLMs) that integrate vision and language modalities has unlocked new potentials for scientific reasoning, outperforming prior benchmarks in both natural language and coding domains. Current materials science evaluation datasets such as MaScQA and SciQA remain largely text-based and fail to capture the visual and research-level analytic complexity required in materials discovery and design. We introduce MatVQA, a scalable benchmark specifically designed to address this gap. Generated via an automated pipeline, MArxivAgent, from recent materials literature, MatVQA features 1325 questions across four critical structure-property-performance (SPP) reasoning tasks. Uniquely, MatVQA employs an iterative process to eliminate textual shortcuts, compelling MLLMs to perform fine-grained, low-level visual analysis of material imagery (e.g., microscopy, diffraction patterns) integrated with multi-step scientific reasoning. Benchmarking 17 open- and closed-source MLLMs on MatVQA reveals substantial gaps in current multimodal reasoning capabilities. MatVQA benchmark data, along with evaluation code, is publicly available at https://anonymous.4open.science/r/matvqa-1E01/README.md to catalyze further research in applying MLLMs to complex materials science problems. |
2025-05-22 | Multi-Objective Optimization Algorithms for Energy Management Systems in Microgrids: A Control Strategy Based on a PHIL System | Saiful Islam, Sanaz Mostaghim, Michael Hartmann et.al. | 2505.18210 | | | Abstract (click to expand)In this research, a real-time power hardware-in-the-loop (PHIL) configuration has been implemented for a microgrid combining distributed energy resources such as photovoltaics, a grid-tied inverter, a battery, the utility grid, and a diesel generator. This paper introduces a unique adaptive multi-objective optimization approach that employs weighted optimization techniques for real-time microgrid systems. The aim is to effectively balance various factors, including fuel consumption, load mismatch, power quality, battery degradation, and the utilization of renewable energy sources. Real-time experimental data from the PHIL system are used to dynamically update system states. The adaptive preference-based selection method is adjusted based on battery state-of-charge thresholds. The technique has been integrated with six technical objectives and complex constraints, supporting practical microgrid decision-making and the optimization of dynamic energy systems. The energy management process was also able to maximize photovoltaic production while minimizing power mismatch and stabilizing the battery state of charge under different conditions. The results were compared with a baseline system without optimization techniques, and a reliable outcome was found. |
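The weighted multi-objective strategy can be sketched as a scalarization of normalized objective terms whose weights shift with the battery state of charge; the objective names and weight rules below are illustrative placeholders, not the paper's PHIL controller:

```python
def weighted_cost(objectives, soc):
    """Scalarize normalized objective values in [0, 1] (lower is better),
    re-weighting when the battery state of charge (soc) is low."""
    weights = {"fuel": 0.2, "mismatch": 0.3, "quality": 0.2,
               "degradation": 0.1, "renewables": 0.2}
    if soc < 0.3:                      # protect a depleted battery
        weights["degradation"] = 0.3
        weights["fuel"] = 0.1
    total = sum(weights.values())
    return sum(weights[k] * objectives[k] for k in weights) / total

obj = {"fuel": 0.4, "mismatch": 0.2, "quality": 0.1,
       "degradation": 0.5, "renewables": 0.3}
print(weighted_cost(obj, soc=0.25), weighted_cost(obj, soc=0.8))
```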
2025-05-23 | MOOSE-Chem3: Toward Experiment-Guided Hypothesis Ranking via Simulated Experimental Feedback | Wanhao Liu, Zonglin Yang, Jue Wang et.al. | 2505.17873 | link | | Abstract (click to expand)Hypothesis ranking is a crucial component of automated scientific discovery, particularly in natural sciences where wet-lab experiments are costly and throughput-limited. Existing approaches focus on pre-experiment ranking, relying solely on large language model's internal reasoning without incorporating empirical outcomes from experiments. We introduce the task of experiment-guided ranking, which aims to prioritize candidate hypotheses based on the results of previously tested ones. However, developing such strategies is challenging due to the impracticality of repeatedly conducting real experiments in natural science domains. To address this, we propose a simulator grounded in three domain-informed assumptions, modeling hypothesis performance as a function of similarity to a known ground truth hypothesis, perturbed by noise. We curate a dataset of 124 chemistry hypotheses with experimentally reported outcomes to validate the simulator. Building on this simulator, we develop a pseudo experiment-guided ranking method that clusters hypotheses by shared functional characteristics and prioritizes candidates based on insights derived from simulated experimental feedback. Experiments show that our method outperforms pre-experiment baselines and strong ablations. |
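The simulator's central assumption, that measured performance is a noisy function of similarity to the ground-truth hypothesis, fits in a few lines. A sketch where cosine similarity over embeddings and the Gaussian noise scale are illustrative stand-ins for the paper's domain-informed design:

```python
import numpy as np

def simulate_outcome(candidate, ground_truth, rng, noise=0.05):
    """Hypothesis 'performance' as cosine similarity to the ground-truth
    hypothesis embedding, perturbed by Gaussian noise."""
    sim = candidate @ ground_truth / (
        np.linalg.norm(candidate) * np.linalg.norm(ground_truth))
    return sim + rng.normal(scale=noise)

rng = np.random.default_rng(0)
truth = rng.normal(size=32)
candidates = [rng.normal(size=32) for _ in range(5)]
scores = [simulate_outcome(c, truth, rng) for c in candidates]
ranking = np.argsort(scores)[::-1]       # best simulated hypotheses first
print(ranking)
```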
2025-05-23 | 4D-CTA Image and geometry dataset for kinematic analysis of abdominal aortic aneurysms | Mostafa Jamshidian, Adam Wittek, Saeideh Sekhavat et.al. | 2505.17647 | | | Abstract (click to expand)This article presents a dataset used in the article "Kinematics of Abdominal Aortic Aneurysms" [arXiv:2405.13377], published in the Journal of Biomechanics. The dataset is publicly available for download from the Zenodo data repository (https://doi.org/10.5281/zenodo.15477710). The dataset includes time-resolved 3D computed tomography angiography (4D-CTA) images of abdominal aortic aneurysm (AAA) captured throughout the cardiac cycle from ten patients diagnosed with AAA, along with ten patient-specific AAA geometries extracted from these images. Typically, the 4D-CTA dataset for each patient contains ten electrocardiogram (ECG)-gated 3D-CTA image frames acquired over a cardiac cycle, capturing both the systolic and diastolic phases of the AAA configuration. For method verification, the dataset also includes synthetic ground truth data generated from Patient 1's 3D-CTA AAA image in the diastolic phase. The ground truth data includes the patient-specific finite element (FE) biomechanical model and a synthetic systolic 3D-CTA image. The synthetic systolic image was generated by warping Patient 1's diastolic 3D-CTA image using the realistic displacement field obtained from the AAA biomechanical FE model. The images were acquired at Fiona Stanley Hospital in Western Australia and provided to the researchers at the Intelligent Systems for Medicine Laboratory at The University of Western Australia (ISML-UWA), where image-based AAA kinematic analysis was performed. Our dataset enabled the analysis of AAA wall displacement and strain throughout the cardiac cycle using a non-invasive, in vivo, image registration-based approach. The use of widely adopted, open-source file formats (NRRD for images and STL for geometries) facilitates broad applicability and reusability in AAA biomechanics studies that require patient-specific geometry and information about AAA kinematics during cardiac cycle. |
2025-05-23 | Latent Imputation before Prediction: A New Computational Paradigm for De Novo Peptide Sequencing | Ye Du, Chen Yang, Nanxi Yu et.al. | 2505.17524 | link | Accepted by ICML 2025 | Abstract (click to expand)De novo peptide sequencing is a fundamental computational technique for ascertaining amino acid sequences of peptides directly from tandem mass spectrometry data, eliminating the need for reference databases. Cutting-edge models usually encode the observed mass spectra into latent representations from which peptides are predicted autoregressively. However, the issue of missing fragmentation, attributable to factors such as suboptimal fragmentation efficiency and instrumental constraints, presents a formidable challenge in practical applications. To tackle this obstacle, we propose a novel computational paradigm called Latent Imputation before Prediction (LIPNovo). LIPNovo is devised to compensate for missing fragmentation information within observed spectra before executing the final peptide prediction. Rather than generating raw missing data, LIPNovo performs imputation in the latent space, guided by the theoretical peak profile of the target peptide sequence. The imputation process is conceptualized as a set-prediction problem, utilizing a set of learnable peak queries to reason about the relationships among observed peaks and directly generate the latent representations of theoretical peaks through optimal bipartite matching. In this way, LIPNovo manages to supplement missing information during inference and thus boosts performance. Despite its simplicity, experiments on three benchmark datasets demonstrate that LIPNovo outperforms state-of-the-art methods by large margins. Code is available at https://github.com/usr922/LIPNovo. |
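The set-prediction step above hinges on optimal bipartite matching between predicted and theoretical peak representations, conventionally solved with the Hungarian algorithm. A minimal SciPy sketch in which a generic squared-distance cost stands in for the paper's matching cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
pred = rng.normal(size=(6, 16))     # 6 predicted peak representations
theo = rng.normal(size=(6, 16))     # 6 theoretical peak representations

# Pairwise squared-distance cost matrix, then the optimal one-to-one match.
cost = ((pred[:, None, :] - theo[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum())
```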
2025-05-23 | Sparse Diffusion Autoencoder for Test-time Adapting Prediction of Complex Systems | Jingwen Cheng, Ruikun Li, Huandong Wang et.al. | 2505.17459 | | | Abstract (click to expand)Predicting the behavior of complex systems is critical in many scientific and engineering domains, and hinges on the model's ability to capture their underlying dynamics. Existing methods encode the intrinsic dynamics of high-dimensional observations through latent representations and predict autoregressively. However, these latent representations lose the inherent spatial structure of spatiotemporal dynamics, leading to the predictor's inability to effectively model spatial interactions and neglect emerging dynamics during long-term prediction. In this work, we propose SparseDiff, introducing a test-time adaptation strategy to dynamically update the encoding scheme to accommodate emergent spatiotemporal structures during the long-term evolution of the system. Specifically, we first design a codebook-based sparse encoder, which coarsens the continuous spatial domain into a sparse graph topology. Then, we employ a graph neural ordinary differential equation to model the dynamics and guide a diffusion decoder for reconstruction. SparseDiff autoregressively predicts the spatiotemporal evolution and adjusts the sparse topological structure to adapt to emergent spatiotemporal patterns by adaptive re-encoding. Extensive evaluations on representative systems demonstrate that SparseDiff achieves an average prediction error reduction of 49.99% compared to baselines, requiring only 1% of the spatial resolution. |
2025-05-22 | From Local Patterns to Global Understanding: Cross-Stock Trend Integration for Enhanced Predictive Modeling | Yi Hu, Hanchi Ren, Jingjing Deng et.al. | 2505.16573 | | | Abstract (click to expand)Stock price prediction is a critical area of financial forecasting, traditionally approached by training models using the historical price data of individual stocks. While these models effectively capture single-stock patterns, they fail to leverage potential correlations among stock trends, which could improve predictive performance. Current single-stock learning methods are thus limited in their ability to provide a broader understanding of price dynamics across multiple stocks. To address this, we propose a novel method that merges local patterns into a global understanding through cross-stock pattern integration. Our strategy is inspired by Federated Learning (FL), a paradigm designed for decentralized model training. FL enables collaborative learning across distributed datasets without sharing raw data, facilitating the aggregation of global insights while preserving data privacy. In our adaptation, we train models on individual stock data and iteratively merge them to create a unified global model. This global model is subsequently fine-tuned on specific stock data to retain local relevance. The proposed strategy enables parallel training of individual stock models, facilitating efficient utilization of computational resources and reducing overall training time. We conducted extensive experiments to evaluate the proposed method, demonstrating that it outperforms benchmark models and enhances the predictive capabilities of state-of-the-art approaches. Our results highlight the efficacy of Cross-Stock Trend Integration (CSTI) in advancing stock price prediction, offering a robust alternative to traditional single-stock learning methodologies. |
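The FL-inspired merge step can be sketched as parameter averaging across per-stock models, after which the global model is fine-tuned per stock. A hedged numpy sketch of the aggregation alone, with flat weight vectors standing in for real model parameters:

```python
import numpy as np

def merge_models(stock_weights):
    """FedAvg-style aggregation: elementwise mean of per-stock model
    parameters into a single global model."""
    return np.mean(np.stack(list(stock_weights.values())), axis=0)

rng = np.random.default_rng(0)
stock_weights = {t: rng.normal(size=256) for t in ["AAPL", "MSFT", "NVDA"]}
global_w = merge_models(stock_weights)
# The global model would then be fine-tuned on each stock's data
# to retain local relevance, per the strategy described above.
print(global_w.shape)  # (256,)
```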
2025-05-22 | A finite element solver for a thermodynamically consistent electrolyte model | Jan Habscheid, Satyvir Singh, Lambert Theisen et.al. | 2505.16296 | | 29 pages, 15 figures | Abstract (click to expand)In this study, we present a finite element solver for a thermodynamically consistent electrolyte model that accurately captures multicomponent ionic transport by incorporating key physical phenomena such as steric effects, solvation, and pressure coupling. The model is rooted in the principles of non-equilibrium thermodynamics and strictly enforces mass conservation, charge neutrality, and entropy production. It extends beyond classical frameworks like the Nernst-Planck system by employing modified partial mass balances, the electrostatic Poisson equation, and a momentum balance expressed in terms of electrostatic potential, atomic fractions, and pressure, thereby enhancing numerical stability and physical consistency. Implemented using the FEniCSx platform, the solver efficiently handles one- and two-dimensional problems with varied boundary conditions and demonstrates excellent convergence behavior and robustness. Validation against benchmark problems confirms its improved physical fidelity, particularly in regimes characterized by high ionic concentrations and strong electrochemical gradients. Simulation results reveal critical electrolyte phenomena, including electric double layer formation, rectification behavior, and the effects of solvation number, Debye length, and compressibility. The solver's modular variational formulation facilitates its extension to complex electrochemical systems involving multiple ionic species with asymmetric valences. |
2025-05-22 | Advanced Integration Strategies for ESD Protection and Termination in High-Speed LVDS Systems | Kavya Gaddipati et.al. | 2505.16200 | | LVDS design integration, ESD protection implementation, signal integrity optimization, high-speed PCB layout, termination techniques. 13 pages, 2 figures | Abstract (click to expand)This technical article explores comprehensive strategies for integrating Electrostatic Discharge (ESD) protection diodes and termination resistors in Low-Voltage Differential Signaling (LVDS) designs. The article examines critical aspects of protection mechanisms, design considerations, impedance matching, and placement optimization techniques. Through detailed analysis of layout considerations and advanced design strategies, the article presents solutions for common integration challenges. It emphasizes the importance of signal integrity maintenance and protection effectiveness while providing practical guidelines for implementing robust LVDS systems. Various methodologies for performance optimization and validation are discussed, offering designers a thorough framework for creating reliable high-speed digital systems that balance protection requirements with signal integrity demands. |
2025-05-21 | Elasto-acoustic wave propagation in geophysical media using hybrid high-order methods on general meshes | Romain Mottier, Alexandre Ern, Laurent Guillot et.al. | 2505.15771 | | | Abstract (click to expand)Hybrid high-order (HHO) methods are numerical methods characterized by several interesting properties such as local conservativity, geometric flexibility and high-order accuracy. Here, HHO schemes are studied for the space semi-discretization of coupled elasto-acoustic waves in the time domain using a first-order formulation. Explicit and singly diagonal implicit Runge--Kutta (ERK & SDIRK) schemes are used for the time discretization. We show that an efficient implementation of explicit (resp. implicit) time schemes calls for a static condensation of the face (resp. cell) unknowns. Crucially, both static condensation procedures only involve block-diagonal matrices. Then, we provide numerical estimates for the CFL stability limit of ERK schemes and present a comparative study on the efficiency of explicit versus implicit schemes. Our findings indicate that implicit time schemes remain competitive in many situations. Finally, simulations in a 2D realistic geophysical configuration are performed, illustrating the geometrical flexibility of the HHO method: both hybrid (triangular and quadrangular) and nonconforming (with hanging nodes) meshes are easily handled, delivering results of comparable accuracy to a reference spectral element software based on tensorized elements. |
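Static condensation, the key implementation point above, reduces to an element-wise Schur complement; because the eliminated block is local, only block-diagonal matrices are factorized. A generic numpy sketch (not the authors' HHO code), eliminating the condensable unknowns of one element:

```python
import numpy as np

def condense(K, f, keep, drop):
    """Eliminate the `drop` unknowns of one element via a Schur complement.

    Reduces [[K_kk, K_kd], [K_dk, K_dd]] [u_k; u_d] = [f_k; f_d] to an
    effective system in u_k only. K_dd is local to the element, so across
    the mesh these factorizations stay block-diagonal and cheap.
    """
    Kkk = K[np.ix_(keep, keep)]; Kkd = K[np.ix_(keep, drop)]
    Kdk = K[np.ix_(drop, keep)]; Kdd = K[np.ix_(drop, drop)]
    S = Kkk - Kkd @ np.linalg.solve(Kdd, Kdk)          # condensed matrix
    g = f[keep] - Kkd @ np.linalg.solve(Kdd, f[drop])  # condensed RHS
    return S, g
```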
2025-05-21 | Toward Open Earth Science as Fast and Accessible as Natural Language | Marquita Ellis, Iksha Gurung, Muthukumaran Ramasubramanian et.al. | 2505.15690 | | | Abstract (click to expand)Is natural-language-driven earth observation data analysis now feasible with the assistance of Large Language Models (LLMs)? For open science in service of public interest, feasibility requires reliably high accuracy, interactive latencies, low (sustainable) costs, open LLMs, and openly maintainable software -- hence, the challenge. What are the techniques and programming system requirements necessary for satisfying these constraints, and what is the corresponding development and maintenance burden in practice? This study lays the groundwork for exploring these questions, introducing an impactful earth science use-case, and providing a software framework with evaluation data and metrics, along with initial results from employing model scaling, prompt-optimization, and inference-time scaling optimization techniques. While we attain high accuracy (near 100%) across 10 of 11 metrics, the analysis further considers cost (token-spend), latency, and maintainability across this space of techniques. Finally, we enumerate opportunities for further research, general programming and evaluation framework development, and ongoing work for a comprehensive, deployable solution. This is a call for collaboration and contribution. |
2025-05-21 | Local-Global Associative Frames for Symmetry-Preserving Crystal Structure Modeling | Haowei Hua, Wanyu Lin et.al. | 2505.15315 | | | Abstract (click to expand)Crystal structures are defined by the periodic arrangement of atoms in 3D space, inherently making them equivariant to the SO(3) group. A fundamental requirement for crystal property prediction is that the model's output should remain invariant to arbitrary rotational transformations of the input structure. One promising strategy to achieve this invariance is to align the given crystal structure into a canonical orientation with appropriately computed rotations, also called frames. However, existing work either only considers a global frame or solely relies on more advanced local frames based on atoms' local structure. A global frame is too coarse to capture the local structure heterogeneity of the crystal, while local frames may inadvertently disrupt crystal symmetry, limiting their expressivity. In this work, we revisit the frame design problem for crystalline materials and propose a novel approach to construct expressive Symmetry-Preserving Frames, dubbed SPFrame, for modeling crystal structures. Specifically, this local-global associative frame constructs invariant local frames rather than equivariant ones, thereby preserving the symmetry of the crystal. In parallel, it integrates global structural information to construct an equivariant global frame to enforce SO(3) invariance. Extensive experimental results demonstrate that SPFrame consistently outperforms traditional frame construction techniques and existing crystal property prediction baselines across multiple benchmark tasks. |
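A common way to realize the equivariant global frame mentioned above is to take the principal axes of the centered atomic positions, so that coordinates expressed in that frame become rotation-invariant. A minimal sketch under that assumption; it is not the SPFrame construction itself:

```python
import numpy as np

def global_frame(positions):
    """Canonical coordinates from principal axes of centered positions.

    Rotating the input rotates the eigenvector frame identically, so the
    returned coordinates are invariant to SO(3) transformations (up to
    eigenvector sign/degeneracy, handled crudely here for illustration).
    """
    X = positions - positions.mean(axis=0)
    _, vecs = np.linalg.eigh(X.T @ X)                 # columns: principal axes
    signs = np.sign(((X @ vecs) ** 3).sum(axis=0))    # skewness-based sign fix
    signs[signs == 0] = 1.0
    return X @ (vecs * signs)

pos = np.random.rand(32, 3)
print(global_frame(pos)[:2])
```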
2025-05-21 | Quantization of Probability Distributions via Divide-and-Conquer: Convergence and Error Propagation under Distributional Arithmetic Operations | Bilgesu Arif Bilgin, Olof Hallqvist Elias, Michael Selby et.al. | 2505.15283 | | 31 pages, 8 figures. Comments welcome! | Abstract (click to expand)This article studies a general divide-and-conquer algorithm for approximating continuous one-dimensional probability distributions with finite mean. The article presents a numerical study that compares pre-existing approximation schemes with a special focus on the stability of the discrete approximations when they undergo arithmetic operations. The main result is a simple upper bound on the approximation error in terms of the Wasserstein-1 distance, valid for all continuous distributions with finite mean. In many use cases, the studied method achieves an optimal rate of convergence, and numerical experiments show that the algorithm is more stable than pre-existing approximation schemes in the context of arithmetic operations. |
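To make the Wasserstein-1 error concrete, one can measure the W1 distance between a continuous law and a discrete n-point approximation numerically. A sketch using a quantile-based quantizer as a stand-in for the paper's divide-and-conquer scheme:

```python
import numpy as np
from scipy.stats import expon, wasserstein_distance

n = 32
# Quantile-based n-point quantization of Exp(1): equal-mass atoms placed at
# the quantiles of slice midpoints (an illustrative scheme, not the paper's).
probs = (np.arange(n) + 0.5) / n
atoms = expon.ppf(probs)

# Monte Carlo estimate of W1 between the true law and its quantization;
# both inputs are treated as empirical distributions with uniform weights.
samples = expon.rvs(size=200_000, random_state=0)
w1 = wasserstein_distance(samples, atoms)
print(f"W1 ~ {w1:.4f} with {n} atoms")
```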
2025-05-21 | Degree-Optimized Cumulative Polynomial Kolmogorov-Arnold Networks | Mathew Vanherreweghe, Lirandë Pira, Patrick Rebentrost et.al. | 2505.15228 | | | Abstract (click to expand)We introduce cumulative polynomial Kolmogorov-Arnold networks (CP-KAN), a neural architecture combining Chebyshev polynomial basis functions and quadratic unconstrained binary optimization (QUBO). Our primary contribution involves reformulating the degree selection problem as a QUBO task, reducing the complexity from \(O(D^N)\) to a single optimization step per layer. This approach enables efficient degree selection across neurons while maintaining computational tractability. The architecture performs well in regression tasks with limited data, showing good robustness to input scales and natural regularization properties from its polynomial basis. Additionally, theoretical analysis establishes connections between CP-KAN's performance and properties of financial time series. Our empirical validation across multiple domains demonstrates competitive performance compared to several traditional architectures tested, especially in scenarios where data efficiency and numerical stability are important. Our implementation, including strategies for managing computational overhead in larger networks, is available in Ref.~\citep{cpkan_implementation}. |
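The Chebyshev basis underlying CP-KAN is straightforward to evaluate with the three-term recurrence; a minimal sketch of the feature map up to a chosen degree (the QUBO degree-selection step is not reproduced here):

```python
import numpy as np

def chebyshev_features(x, degree):
    """Stack T_0..T_degree evaluated at x (x assumed scaled to [-1, 1]).

    Uses the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).
    """
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: degree + 1], axis=-1)

feats = chebyshev_features(np.linspace(-1, 1, 5), degree=3)
print(feats.shape)  # (5, 4): one column per polynomial T_0..T_3
```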
2025-05-21 | R&D-Agent-Quant: A Multi-Agent Framework for Data-Centric Factors and Model Joint Optimization | Yuante Li, Xu Yang, Xiao Yang et.al. | 2505.15155 | link | | Abstract (click to expand)Financial markets pose fundamental challenges for asset return prediction due to their high dimensionality, non-stationarity, and persistent volatility. Despite advances in large language models and multi-agent systems, current quantitative research pipelines suffer from limited automation, weak interpretability, and fragmented coordination across key components such as factor mining and model innovation. In this paper, we propose R&D-Agent for Quantitative Finance, in short RD-Agent(Q), the first data-centric multi-agent framework designed to automate the full-stack research and development of quantitative strategies via coordinated factor-model co-optimization. RD-Agent(Q) decomposes the quant process into two iterative stages: a Research stage that dynamically sets goal-aligned prompts, formulates hypotheses based on domain priors, and maps them to concrete tasks, and a Development stage that employs a code-generation agent, Co-STEER, to implement task-specific code, which is then executed in real-market backtests. The two stages are connected through a feedback stage that thoroughly evaluates experimental outcomes and informs subsequent iterations, with a multi-armed bandit scheduler for adaptive direction selection. Empirically, RD-Agent(Q) achieves up to 2X higher annualized returns than classical factor libraries using 70% fewer factors, and outperforms state-of-the-art deep time-series models on real markets. Its joint factor-model optimization delivers a strong balance between predictive accuracy and strategy robustness. Our code is available at: https://github.com/microsoft/RD-Agent. |
2025-05-20 | Global Maxwell Tomography Using the Volume-Surface Integral Equation for Improved Estimation of Electrical Properties | Ilias Giannakopoulos, José E. Cruz Serrallés, Jan Paška et.al. | 2505.14546 | | 12 pages, 8 figures, 2 tables, Article accepted for publication in IEEE Transactions on Biomedical Engineering | Abstract (click to expand)Objective: Global Maxwell Tomography (GMT) is a noninvasive inverse optimization method for the estimation of electrical properties (EP) from magnetic resonance (MR) measurements. GMT uses the volume integral equation (VIE) in the forward problem and assumes that the sample has negligible effect on the coil currents. Consequently, GMT calculates the coil's incident fields with an initial EP distribution and keeps them constant for all optimization iterations. This can lead to erroneous reconstructions. This work introduces a novel version of GMT that replaces VIE with the volume-surface integral equation (VSIE), which recalculates the coil currents at every iteration based on updated EP estimates before computing the associated fields. Methods: We simulated an 8-channel transceiver coil array for 7 T brain imaging and reconstructed the EP of a realistic head model using VSIE-based GMT. We built the coil, collected experimental MR measurements, and reconstructed EP of a two-compartment phantom. Results: In simulations, VSIE-based GMT outperformed VIE-based GMT by at least 12% for both EP. In experiments, the relative difference with respect to probe-measured EP values in the inner (outer) compartment was 13% (26%) and 17% (33%) for the permittivity and conductivity, respectively. Conclusion: The use of VSIE over VIE enhances GMT's performance by accounting for the effect of the EP on the coil currents. Significance: VSIE-based GMT does not rely on an initial EP estimate, rendering it more suitable for experimental reconstructions compared to the VIE-based GMT. |
2025-05-20 | Design and Evaluation of a Microservices Cloud Framework for Online Travel Platforms | Biman Barua, M. Shamim Kaiser et.al. | 2505.14508 | | 15 pages, 2 figures, 6 tables | Abstract (click to expand)Serving online travel agents globally, with thousands of agents and billions of client records, requires efficient and flexible software architectures. Microservices architecture breaks a large program down into numerous smaller services that run independently and perform individual tasks. This paper analyses and integrates a unique Microservices Cloud Framework designed to support Online Travel Platforms (MCF-OTP). MCF-OTP's main goal is to increase the performance, flexibility, and maintainability of online travel platforms via cloud computing and microservice technologies. The suggested framework addresses the challenges of large-scale travel applications, including managing numerous data sources, dealing with traffic peaks, and providing fault tolerance. It combines seamless data synchronization across microservices with demand-driven dynamic scaling, and an organizational framework that optimizes service boundaries and minimizes inter-service dependencies is recommended, which can result in elevated development adaptability. The principal goal of this research is to evaluate MCF-OTP's efficiency using fault tolerance and response time as indicators. The findings indicate that the MCF-OTP structure surpasses traditional monolithic designs in terms of dependability and scalability, managing traffic spikes seamlessly and decreasing downtime. A cost-effectiveness analysis helps ascertain the net gain relative to the startup fees and ongoing operational costs; according to the research, the cloud-based environment also helps reduce costs and increase the efficiency of resource allocation. |
2025-05-20 | Higher-order, mixed-hybrid finite elements for Kirchhoff-Love shells | Jonas Neumeyer, Michael Wolfgang Kaiser, Thomas-Peter Fries et.al. | 2505.14115 | | Article has been submitted to Internat. J. Numer. Methods Engrg | Abstract (click to expand)A novel mixed-hybrid method for Kirchhoff-Love shells is proposed that enables the use of classical, possibly higher-order Lagrange elements in numerical analyses. In contrast to purely displacement-based formulations that require higher continuity of shape functions as in IGA, the mixed formulation features displacements and moments as primary unknowns. Thereby the continuity requirements are reduced, allowing equal-order interpolations of the displacements and moments. Hybridization enables an element-wise static condensation of the degrees of freedom related to the moments, at the price of introducing (significantly less) rotational degrees of freedom acting as Lagrange multipliers to weakly enforce the continuity of tangential moments along element edges. The mixed model is formulated coordinate-free based on the Tangential Differential Calculus, making it applicable for explicitly and implicitly defined shell geometries. All mechanically relevant boundary conditions are considered. Numerical results confirm optimal higher-order convergence rates whenever the mechanical setup allows for sufficiently smooth solutions; new benchmark test cases of this type are proposed. |
2025-05-20 | Improved Methods for Model Pruning and Knowledge Distillation | Wei Jiang, Anying Fu, Youling Zhang et.al. | 2505.14052 | | | Abstract (click to expand)Model pruning is a performance optimization technique for large language models like R1 or o3-mini. However, existing pruning methods often lead to significant performance degradation or require extensive retraining and fine-tuning. The technique aims to identify and remove neurons and connections that are unlikely to contribute during the human-computer interaction phase. Our goal is to obtain a much smaller and faster knowledge-distilled model that can quickly generate content almost as good as that of the unpruned ones. We propose MAMA Pruning, short for Movement and Magnitude Analysis, an improved pruning method that effectively reduces model size and computational complexity while maintaining performance comparable to the original unpruned model even at extreme pruned levels. The improved method is based on weights and biases fixed in the pre-training phase, together with GRPO rewards verified during the post-training phase, as our novel pruning indicators. Preliminary experimental results show that our method outperforms or is comparable to state-of-the-art methods across various pruning levels and different downstream computational linguistics tasks. |
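As an illustration of pruning indicators built from movement and magnitude, the sketch below blends the two scores into a keep/drop mask. The exact MAMA indicators (including the GRPO-reward term) are not specified in the abstract, so the scoring rule here is an assumption:

```python
import numpy as np

def prune_mask(weights, weights_pretrained, sparsity=0.5, alpha=0.5):
    """Keep weights scoring high on a blend of magnitude and movement.

    magnitude: |w| after training; movement: how much |w| grew during
    training (weights that moved toward zero are better drop candidates).
    """
    magnitude = np.abs(weights)
    movement = np.abs(weights) - np.abs(weights_pretrained)
    score = alpha * magnitude + (1 - alpha) * movement
    threshold = np.quantile(score, sparsity)
    return score > threshold  # True = keep

w0 = np.random.randn(1000)
w = w0 + 0.1 * np.random.randn(1000)
mask = prune_mask(w, w0, sparsity=0.7)
print(mask.mean())  # roughly 0.3 of weights kept
```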
2025-05-20 | PLUTUS Open Source -- Breaking Barriers in Algorithmic Trading | An-Dan Nguyen, Quang-Khoi Ta, Duy-Anh Vo et.al. | 2505.14050 | link | 8 pages, 10 figures, 3 tables | Abstract (click to expand)Algorithmic trading has long been an opaque, fragmented domain, guarded by secrecy and built around proprietary systems. In contrast to the open, collaborative evolution in fields like machine learning or software engineering, the algorithmic trading ecosystem has been slow to adopt reproducibility, standardization, and shared infrastructure. This paper introduces PLUTUS Open Source, an initiative sponsored by ALGOTRADE to reshape this landscape through openness, structure, and collaboration. PLUTUS combines a reproducibility standard, a modular development framework, and a growing suite of community-built reference strategies. The project provides a systematic approach to designing, testing, and documenting trading algorithms, regardless of the user's technical or financial background. We outline the motivation behind the initiative, present its foundational structure, and showcase working examples that adhere to the PLUTUS standard. We also invite the broader research and trading communities to contribute, iterate, and help build a transparent and inclusive future for algorithmic trading. |
2025-05-20 | Predicting Dynamical Systems across Environments via Diffusive Model Weight Generation | Ruikun Li, Huandong Wang, Jingtao Ding et.al. | 2505.13919 | | | Abstract (click to expand)Data-driven methods offer an effective equation-free solution for predicting physical dynamics. However, the same physical system can exhibit significantly different dynamic behaviors in various environments. This causes prediction functions trained for specific environments to fail when transferred to unseen environments. Therefore, cross-environment prediction requires modeling the dynamic functions of different environments. In this work, we propose a model weight generation method, \texttt{EnvAd-Diff}. \texttt{EnvAd-Diff} operates in the weight space of the dynamic function, generating suitable weights from scratch based on environmental condition for zero-shot prediction. Specifically, we first train expert prediction functions on dynamic trajectories from a limited set of visible environments to create a model zoo, thereby constructing sample pairs of prediction function weights and their corresponding environments. Subsequently, we train a latent space diffusion model conditioned on the environment to model the joint distribution of weights and environments. Considering the lack of environmental prior knowledge in real-world scenarios, we propose a physics-informed surrogate label to distinguish different environments. Generalization experiments across multiple systems demonstrate that a 1M parameter prediction function generated by \texttt{EnvAd-Diff} outperforms a pre-trained 500M parameter foundation model. |
2025-05-19 | Generative Modeling of Random Fields from Limited Data via Constrained Latent Flow Matching | James E. Warner, Tristan A. Shah, Patrick E. Leser et.al. | 2505.13007 | link | 10 pages plus references and appendices, 17 figures | Abstract (click to expand)Deep generative models are promising tools for science and engineering, but their reliance on abundant, high-quality data limits applicability. We present a novel framework for generative modeling of random fields (probability distributions over continuous functions) that incorporates domain knowledge to supplement limited, sparse, and indirect data. The foundation of the approach is latent flow matching, where generative modeling occurs on compressed function representations in the latent space of a pre-trained variational autoencoder (VAE). Innovations include the adoption of a function decoder within the VAE and integration of physical/statistical constraints into the VAE training process. In this way, a latent function representation is learned that yields continuous random field samples satisfying domain-specific constraints when decoded, even in data-limited regimes. Efficacy is demonstrated on two challenging applications: wind velocity field reconstruction from sparse sensors and material property inference from a limited number of indirect measurements. Results show that the proposed framework achieves significant improvements in reconstruction accuracy compared to unconstrained methods and enables effective inference with relatively small training datasets that is intractable without constraints. |
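The latent flow matching step at the foundation of this framework trains a velocity field on straight-line interpolants between noise and encoded data. A minimal unconditional PyTorch sketch, where the `v_net(z_t, t)` signature is an assumption and the constraint terms are omitted:

```python
import torch

def flow_matching_loss(v_net, z1):
    """Conditional flow matching with linear paths z_t = (1-t) z0 + t z1.

    v_net(z_t, t) is trained to predict the constant path velocity z1 - z0,
    where z0 ~ N(0, I) and z1 is an encoded (latent) data sample of shape (B, D).
    """
    z0 = torch.randn_like(z1)
    t = torch.rand(z1.shape[0], 1)       # one time per sample, broadcast over D
    zt = (1 - t) * z0 + t * z1
    target = z1 - z0
    return ((v_net(zt, t) - target) ** 2).mean()
```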
2025-05-19 | Implicit differentiation with second-order derivatives and benchmarks in finite-element-based differentiable physics | Tianju Xue et.al. | 2505.12646 | link | | Abstract (click to expand)Differentiable programming is revolutionizing computational science by enabling automatic differentiation (AD) of numerical simulations. While first-order gradients are well-established, second-order derivatives (Hessians) for implicit functions in finite-element-based differentiable physics remain underexplored. This work bridges this gap by deriving and implementing a framework for implicit Hessian computation in PDE-constrained optimization problems. We leverage primitive AD tools (Jacobian-vector product/vector-Jacobian product) to build an algorithm for Hessian-vector products and validate the accuracy against finite difference approximations. Four benchmarks spanning linear/nonlinear, 2D/3D, and single/coupled-variable problems demonstrate the utility of second-order information. Results show that the Newton-CG method with exact Hessians accelerates convergence for nonlinear inverse problems (e.g., traction force identification, shape optimization), while the L-BFGS-B method suffices for linear cases. Our work provides a robust foundation for integrating second-order implicit differentiation into differentiable physics engines, enabling faster and more reliable optimization. |
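The Hessian-vector product from primitive AD tools (JVP/VJP) that the abstract describes is a few lines in JAX; a forward-over-reverse sketch, separate from the paper's implicit-function and FEM machinery:

```python
import jax
import jax.numpy as jnp

def hvp(f, x, v):
    """Hessian-vector product H(x) @ v via a JVP of the gradient.

    Forward-over-reverse: differentiate grad(f) along direction v,
    never forming the full Hessian.
    """
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

f = lambda x: jnp.sum(jnp.sin(x) * x**2)
x = jnp.arange(1.0, 4.0)
v = jnp.ones(3)
print(hvp(f, x, v))              # matches the dense-Hessian product below
print(jax.hessian(f)(x) @ v)
```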
2025-05-20 | ChromFound: Towards A Universal Foundation Model for Single-Cell Chromatin Accessibility Data | Yifeng Jiao, Yuchen Liu, Yu Zhang et.al. | 2505.12638 | | | Abstract (click to expand)The advent of single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) offers an innovative perspective for deciphering regulatory mechanisms by assembling a vast repository of single-cell chromatin accessibility data. While foundation models have achieved significant success in single-cell transcriptomics, there is currently no foundation model for scATAC-seq that supports zero-shot high-quality cell identification and comprehensive multi-omics analysis simultaneously. Key challenges lie in the high dimensionality and sparsity of scATAC-seq data, as well as the lack of a standardized schema for representing open chromatin regions (OCRs). Here, we present ChromFound, a foundation model tailored for scATAC-seq. ChromFound utilizes a hybrid architecture and genome-aware tokenization to effectively capture genome-wide long contexts and regulatory signals from dynamic chromatin landscapes. Pretrained on 1.97 million cells from 30 tissues and 6 disease conditions, ChromFound demonstrates broad applicability across 6 diverse tasks. Notably, it achieves robust zero-shot performance in generating universal cell representations and exhibits excellent transferability in cell type annotation and cross-omics prediction. By uncovering enhancer-gene links undetected by existing computational methods, ChromFound offers a promising framework for understanding disease risk variants in the noncoding genome. |
2025-05-19 | Seismic analysis based on a new interval method with incomplete information | Shizhong Liang, Yuxiang Yang, Chen Li et.al. | 2505.12607 | | | Abstract (click to expand)For seismic analysis of engineering structures, it is essential to consider the dynamic responses under seismic excitation, necessitating the description of seismic accelerations. Limited seismic samples lead to incomplete uncertainty information, which is more reasonably described by non-probabilistic methods. This study employs the minimum interval radius-based interval process (MRIP) based on the convex model to describe the time-variant uncertain seismic acceleration, subsequently conducting uncertainty analysis for seismic structures. However, Monte Carlo simulation for uncertainty analysis requires extensive deterministic computations to ensure accuracy, exhibiting poor computational efficiency. To address this issue, this paper first improves the covariance matrix adaptation evolution strategy (CMA-ES) through a dynamic evolution sequence, proposing DES-ES, whose efficiency is validated to be higher than that of CMA-ES. Furthermore, leveraging the dependency of the responses, a computational framework named DES-ES-SS is proposed. Numerical experiments demonstrate that DES-ES-SS improves computational efficiency while maintaining the accuracy of the interval uncertainty analysis of seismic structures, whether the seismic acceleration is stationary or non-stationary. |
2025-05-18 | Predicting Gas Well Performance with Decline Curve Analysis: A Case Study on Semutang Gas Field | Md. Shakil Rahaman, Ahmed Sakib, Ataharuse Samad et.al. | 2505.12333 | | 8th International Conference on Mechanical, Industrial and Energy Engineering 2024 | Abstract (click to expand)Decline-curve analysis (DCA) is a widely utilized method for production forecasting and estimating remaining reserves in gas reservoirs. It is based on the assumption that past production trends can be mathematically characterized and used to predict future performance. It relies on historical production data and assumes that production methods remain unchanged throughout the analysis. This method is particularly valuable due to its accuracy in forecasting and its broad acceptance within the industry. Wells in the same geographical area producing from similar geological formations often exhibit similar decline-curve parameters. This study applies DCA to forecast the future production performance and estimate the ultimate recovery of well 5 in the Semutang gas field, Bangladesh. Using historical production data, decline curves were generated based on exponential, hyperbolic, and harmonic model equations. The cumulative production estimates were 11,139.34 MMSCF for the exponential model, 11,620.26 MMSCF for the hyperbolic model, and 14,021.92 MMSCF for the harmonic model. In terms of the well's productive life, the estimates were 335.13 days, 1,152 days, and 22,611 days, respectively. Among these models, the hyperbolic decline provided the most realistic forecast, closely aligning with the observed production trend. The study highlights the importance of selecting an appropriate decline model for accurate production forecasting and reserve estimation, which is essential for effective reservoir management and resource optimization. |
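The three decline models compared above have standard Arps closed forms. A sketch of the rate equations, with illustrative values for the initial rate q_i and initial decline D_i:

```python
import numpy as np

def arps_rate(t, qi, Di, b):
    """Arps decline-curve rate q(t).

    b = 0:    exponential  q = qi * exp(-Di t)
    b = 1:    harmonic     q = qi / (1 + Di t)
    0 < b < 1: hyperbolic  q = qi / (1 + b Di t)**(1/b)
    """
    t = np.asarray(t, dtype=float)
    if b == 0:
        return qi * np.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

t = np.linspace(0, 1000, 5)  # days
for b, name in [(0, "exponential"), (0.5, "hyperbolic"), (1, "harmonic")]:
    print(name, arps_rate(t, qi=10.0, Di=0.005, b=b))
```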
2025-05-17 | MLLM-based Discovery of Intrinsic Coordinates and Governing Equations from High-Dimensional Data | Ruikun Li, Yan Lu, Shixiang Tang et.al. | 2505.11940 | | | Abstract (click to expand)Discovering governing equations from scientific data is crucial for understanding the evolution of systems, and is typically framed as a search problem within a candidate equation space. However, the high-dimensional nature of dynamical systems leads to an exponentially expanding equation space, making the search process extremely challenging. The visual perception and pre-trained scientific knowledge of multimodal large language models (MLLM) hold promise for providing effective navigation in high-dimensional equation spaces. In this paper, we propose a zero-shot method based on MLLM for automatically discovering physical coordinates and governing equations from high-dimensional data. Specifically, we design a series of enhanced visual prompts for MLLM to enhance its spatial perception. In addition, MLLM's domain knowledge is employed to navigate the search process within the equation space. Quantitative and qualitative evaluations on two representative types of systems demonstrate that the proposed method effectively discovers the physical coordinates and equations from both simulated and real experimental data, with long-term extrapolation accuracy improved by approximately 26.96% compared to the baseline. |
2025-05-17 | LLM-Enhanced Feature Engineering for Multi-Factor Electricity Price Predictions | Haochen Xue, Chenghao Liu, Chong Zhang et.al. | 2505.11890 | | | Abstract (click to expand)Accurately forecasting electricity price volatility is crucial for effective risk management and decision-making. Traditional forecasting models often fall short in capturing the complex, non-linear dynamics of electricity markets, particularly when external factors like weather conditions and market volatility are involved. These limitations hinder their ability to provide reliable predictions in markets with high volatility, such as the New South Wales (NSW) electricity market. To address these challenges, we introduce FAEP, a Feature-Augmented Electricity Price Prediction framework. FAEP leverages Large Language Models (LLMs) combined with advanced feature engineering to enhance prediction accuracy. By incorporating external features such as weather data and price volatility jumps, and utilizing Retrieval-Augmented Generation (RAG) for effective feature extraction, FAEP overcomes the shortcomings of traditional approaches. A hybrid XGBoost-LSTM model in FAEP further refines these augmented features, resulting in a more robust prediction framework. Experimental results demonstrate that FAEP achieves state-of-the-art (SOTA) performance compared to other electricity price prediction models in the Australian New South Wales electricity market, showcasing the efficiency of LLM-enhanced feature engineering and hybrid machine learning architectures. |
2025-05-16 | CLT and Edgeworth Expansion for m-out-of-n Bootstrap Estimators of The Studentized Median | Imon Banerjee, Sayak Chakrabarty et.al. | 2505.11725 | | 48 pages | Abstract (click to expand)The m-out-of-n bootstrap, originally proposed by Bickel, Gotze, and Zwet (1992), approximates the distribution of a statistic by repeatedly drawing m subsamples (with m much smaller than n) without replacement from an original sample of size n. It is now routinely used for robust inference with heavy-tailed data, bandwidth selection, and other large-sample applications. Despite its broad applicability across econometrics, biostatistics, and machine learning, rigorous parameter-free guarantees for the soundness of the m-out-of-n bootstrap when estimating sample quantiles have remained elusive. This paper establishes such guarantees by analyzing the estimator of sample quantiles obtained from m-out-of-n resampling of a dataset of size n. We first prove a central limit theorem for a fully data-driven version of the estimator that holds under a mild moment condition and involves no unknown nuisance parameters. We then show that the moment assumption is essentially tight by constructing a counter-example in which the CLT fails. Strengthening the assumptions slightly, we derive an Edgeworth expansion that provides exact convergence rates and, as a corollary, a Berry--Esseen bound on the bootstrap approximation error. Finally, we illustrate the scope of our results by deriving parameter-free asymptotic distributions for practical statistics, including the quantiles for random walk Metropolis-Hastings and the rewards of ergodic Markov decision processes, thereby demonstrating the usefulness of our theory in modern estimation and learning tasks. |
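The estimator analyzed here is simple to simulate. A numpy sketch of the m-out-of-n bootstrap (subsampling without replacement) for the median of heavy-tailed data, rescaled as in the CLT statement:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, B = 10_000, 200, 2_000           # m << n subsample size, B replicates
x = rng.standard_cauchy(n)             # heavy-tailed sample

medians = np.empty(B)
for b in range(B):
    sub = rng.choice(x, size=m, replace=False)  # m-out-of-n, no replacement
    medians[b] = np.median(sub)

# Bootstrap approximation to the sampling distribution of the median,
# rescaled by sqrt(m) around the full-sample median.
z = np.sqrt(m) * (medians - np.median(x))
print(z.mean(), z.std())
```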
2025-05-16 | GLOVA: Global and Local Variation-Aware Analog Circuit Design with Risk-Sensitive Reinforcement Learning | Dongjun Kim, Junwoo Park, Chaehyeon Shin et.al. | 2505.11208 | | Accepted for DAC 2025 | Abstract (click to expand)Analog/mixed-signal circuit design encounters significant challenges due to performance degradation from process, voltage, and temperature (PVT) variations. To achieve commercial-grade reliability, iterative manual design revisions and extensive statistical simulations are required. While several studies have aimed to automate variation aware analog design to reduce time-to-market, the substantial mismatches in real-world wafers have not been thoroughly addressed. In this paper, we present GLOVA, an analog circuit sizing framework that effectively manages the impact of diverse random mismatches to improve robustness against PVT variations. In the proposed approach, risk-sensitive reinforcement learning is leveraged to account for the reliability bound affected by PVT variations, and an ensemble-based critic is introduced to achieve sample-efficient learning. For design verification, we also propose a \(\mu\)-\(\sigma\) evaluation and simulation reordering method to reduce the simulation costs of identifying failed designs. GLOVA supports verification through industrial-level PVT variation evaluation methods, including corner simulation as well as global and local Monte Carlo (MC) simulations. Compared to previous state-of-the-art variation-aware analog sizing frameworks, GLOVA achieves up to 80.5\(\times\) improvement in sample efficiency and 76.0\(\times\) reduction in time. |
2025-05-16 | Prot2Text-V2: Protein Function Prediction with Multimodal Contrastive Alignment | Xiao Fei, Michail Chatzianastasis, Sarah Almeida Carneiro et.al. | 2505.11194 | link | 25 pages, 11 figures | Abstract (click to expand)Predicting protein function from sequence is a central challenge in computational biology. While existing methods rely heavily on structured ontologies or similarity-based techniques, they often lack the flexibility to express structure-free functional descriptions and novel biological functions. In this work, we introduce Prot2Text-V2, a novel multimodal sequence-to-text model that generates free-form natural language descriptions of protein function directly from amino acid sequences. Our method combines a protein language model as a sequence encoder (ESM-3B) and a decoder-only language model (LLaMA-3.1-8B-Instruct) through a lightweight nonlinear modality projector. A key innovation is our Hybrid Sequence-level Contrastive Alignment Learning (H-SCALE), which improves cross-modal learning by matching mean- and std-pooled protein embeddings with text representations via contrastive loss. After the alignment phase, we apply instruction-based fine-tuning using LoRA on the decoder to teach the model how to generate accurate protein function descriptions conditioned on the protein sequence. We train Prot2Text-V2 on about 250K curated entries from SwissProt and evaluate it under low-homology conditions, where test sequences have low similarity with training samples. Prot2Text-V2 consistently outperforms traditional and LLM-based baselines across various metrics. |
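The contrastive alignment step (H-SCALE) matches pooled protein embeddings to text embeddings; a generic symmetric InfoNCE sketch under the assumption of in-batch negatives, which may differ in detail from the paper's loss:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(prot_emb, text_emb, tau=0.07):
    """Symmetric InfoNCE over a batch of paired (protein, text) embeddings.

    prot_emb, text_emb: (B, D) pooled embeddings of matched pairs;
    off-diagonal pairs in the batch serve as negatives.
    """
    p = F.normalize(prot_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = p @ t.T / tau                      # (B, B) similarity matrix
    labels = torch.arange(p.shape[0])
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))
```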
2025-05-16 | Time Travel is Cheating: Going Live with DeepFund for Real-Time Fund Investment Benchmarking | Changlun Li, Yao Shi, Chen Wang et.al. | 2505.11065 | link | 21 pages, 9 figures | Abstract (click to expand)Large Language Models (LLMs) have demonstrated notable capabilities across financial tasks, including financial report summarization, earnings call transcript analysis, and asset classification. However, their real-world effectiveness in managing complex fund investment remains inadequately assessed. A fundamental limitation of existing benchmarks for evaluating LLM-driven trading strategies is their reliance on historical back-testing, inadvertently enabling LLMs to "time travel"-leveraging future information embedded in their training corpora, thus resulting in possible information leakage and overly optimistic performance estimates. To address this issue, we introduce DeepFund, a live fund benchmark tool designed to rigorously evaluate LLM in real-time market conditions. Utilizing a multi-agent architecture, DeepFund connects directly with real-time stock market data-specifically data published after each model pretraining cutoff-to ensure fair and leakage-free evaluations. Empirical tests on nine flagship LLMs from leading global institutions across multiple investment dimensions-including ticker-level analysis, investment decision-making, portfolio management, and risk control-reveal significant practical challenges. Notably, even cutting-edge models such as DeepSeek-V3 and Claude-3.7-Sonnet incur net trading losses within DeepFund real-time evaluation environment, underscoring the present limitations of LLMs for active fund management. Our code is available at https://github.com/HKUSTDial/DeepFund. |
2025-05-16 | Enforced Interface Constraints for Domain Decomposition Method of Discrete Physics-Informed Neural Networks | Jichao Yin, Mingxuan Li, Jianguang Fang et.al. | 2505.10925 | | 25 pages, 13 figures | Abstract (click to expand)This study presents a discrete physics-informed neural network (dPINN) framework, enhanced with enforced interface constraints (EIC), for modeling physical systems using the domain decomposition method (DDM). Built upon finite element-style mesh discretization, the dPINN accurately evaluates system energy through Gaussian quadrature-based element-wise integration. To ensure physical field continuity across subdomain interfaces, the EIC mechanism enforces interfacial displacement constraints without requiring auxiliary sampling or loss penalties. This formulation supports independent meshing in each subdomain, simplifying preprocessing and improving computational flexibility. Additionally, by eliminating the influence of weak spatial constraints (WSC) commonly observed in traditional PINNs, the EIC-dPINN delivers more stable and physically consistent predictions. Extensive two- and three-dimensional numerical experiments validate the proposed framework's accuracy and demonstrate the computational efficiency gains achieved through parallel training. The results highlight the framework's scalability, robustness, and potential for solving large-scale, geometrically complex problems. |
2025-05-15 | Space-Time Multigrid Methods Suitable for Topology Optimisation of Transient Heat Conduction | Magnus Appel, Joe Alexandersen et.al. | 2505.10168 | | 30 pages, 13 figures | Abstract (click to expand)This paper presents Space-Time MultiGrid (STMG) methods which are suitable for performing topology optimisation of transient heat conduction problems. The proposed methods use a pointwise smoother and uniform Cartesian space-time meshes. For problems with high contrast in the diffusivity, it was found that it is beneficial to define a coarsening strategy based on the geometric mean of the minimum and maximum diffusivity. However, other coarsening strategies may be better for other smoothers. Several methods of discretising the coarse levels were tested. Of these, it was best to use a method which averages the thermal resistivities on the finer levels. However, this was likely a consequence of the fact that only one spatial dimension was considered for the test problems. A second coarsening strategy was proposed which ensures spatial resolution on the coarse grids. Mixed results were found for this strategy. The proposed STMG methods were used as a solver for a one-dimensional topology optimisation problem. In this context, the adjoint problem was also solved using the STMG methods. The STMG methods were sufficiently robust for this application, since they converged during every optimisation cycle. It was found that the STMG methods also work for the adjoint problem when the prolongation operator only sends information forwards in time, even though the direction of time for the adjoint problem is backwards. |
2025-05-15 | Knowledge-Based Aerospace Engineering -- A Systematic Literature Review | Tim Wittenborg, Ildar Baimuratov, Ludvig Knöös Franzén et.al. | 2505.10142 | | 40 pages, 14 figures, submitted to Advanced Engineering Informatics | Abstract (click to expand)The aerospace industry operates at the frontier of technological innovation while maintaining high standards regarding safety and reliability. In this environment, with an enormous potential for re-use and adaptation of existing solutions and methods, Knowledge-Based Engineering (KBE) has been applied for decades. The objective of this study is to identify and examine state-of-the-art knowledge management practices in the field of aerospace engineering. Our contributions include: 1) A SWARM-SLR of over 1,000 articles with qualitative analysis of 164 selected articles, supported by two aerospace engineering domain expert surveys. 2) A knowledge graph of over 700 knowledge-based aerospace engineering processes, software, and data, formalized in the interoperable Web Ontology Language (OWL) and mapped to Wikidata entries where possible. The knowledge graph is represented on the Open Research Knowledge Graph (ORKG), and an aerospace Wikibase, for reuse and continuation of structuring aerospace engineering knowledge exchange. 3) Our resulting intermediate and final artifacts of the knowledge synthesis, available as a Zenodo dataset. This review sets a precedent for structured, semantic-based approaches to managing aerospace engineering knowledge. By advancing these principles, research, and industry can achieve more efficient design processes, enhanced collaboration, and a stronger commitment to sustainable aviation. |
2025-05-15 | Physical regularized Hierarchical Generative Model for Metallic Glass Structural Generation and Energy Prediction | Qiyuan Chen, Ajay Annamareddy, Ying-Fei Li et.al. | 2505.09977 | | | Abstract (click to expand)Disordered materials such as glasses, unlike crystals, lack long-range atomic order and have no periodic unit cells, yielding a high-dimensional configuration space with widely varying properties. The complexity not only increases computational costs for atomistic simulations but also makes it difficult for generative AI models to deliver accurate property predictions and realistic structure generation. In this work, we introduce GlassVAE, a hierarchical graph variational autoencoder that uses graph representations to learn compact, rotation-, translation-, and permutation-invariant embeddings of atomic configurations. The resulting structured latent space not only enables efficient generation of novel, physically plausible structures but also supports exploration of the glass energy landscape. To enforce structural realism and physical fidelity, we augment GlassVAE with two physics-informed regularizers: a radial distribution function (RDF) loss that captures characteristic short- and medium-range ordering, and an energy regression loss that reflects the broad configurational energetics. Both theoretical analysis and experimental results highlight the critical impact of these regularizers. By encoding high-dimensional atomistic data into a compact latent vector and decoding it into structures with accurate energy predictions, GlassVAE provides a fast, physics-aware path for modeling and designing disordered materials. |
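The RDF regularizer presupposes a radial distribution function estimator; a plain numpy g(r) for a cubic periodic box, as a sketch of the quantity being matched rather than the paper's differentiable loss:

```python
import numpy as np

def rdf(positions, box, r_max, n_bins=100):
    """Radial distribution function g(r) for atoms in a cubic periodic box."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    diffs -= box * np.round(diffs / box)            # minimum-image convention
    d = np.linalg.norm(diffs, axis=-1)[np.triu_indices(n, k=1)]

    counts, edges = np.histogram(d, bins=n_bins, range=(0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    shell = 4 * np.pi * r**2 * np.diff(edges)       # shell volumes
    density = n / box**3
    # Normalize pair counts by the ideal-gas expectation per shell.
    return r, counts / (0.5 * n * density * shell)

pos = np.random.rand(256, 3) * 10.0
r, g = rdf(pos, box=10.0, r_max=5.0)
print(g.mean())  # ~1 for an uncorrelated (ideal-gas-like) configuration
```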
2025-05-15 | Avocado Price Prediction Using a Hybrid Deep Learning Model: TCN-MLP-Attention Architecture | Linwei Zhang, LuFeng, Ruijia Liang et.al. | 2505.09907 | | | Abstract (click to expand)With the growing demand for healthy foods, agricultural product price forecasting has become increasingly important. Hass avocados, as a high-value crop, exhibit complex price fluctuations influenced by factors such as seasonality, region, and weather. Traditional prediction models often struggle with highly nonlinear and dynamic data. To address this, we propose a hybrid deep learning model, TCN-MLP-Attention Architecture, combining Temporal Convolutional Networks (TCN) for sequential feature extraction, Multi-Layer Perceptrons (MLP) for nonlinear interactions, and an Attention mechanism for dynamic feature weighting. The dataset used covers over 50,000 records of Hass avocado sales across the U.S. from 2015 to 2018, including variables such as sales volume, average price, time, region, weather, and variety type, collected from point-of-sale systems and the Hass Avocado Board. After systematic preprocessing, including missing value imputation and feature normalization, the proposed model was trained and evaluated. Experimental results demonstrate that the TCN-MLP-Attention model achieves excellent predictive performance, with an RMSE of 1.23 and an MSE of 1.51, outperforming traditional methods. This research provides a scalable and effective approach for time series forecasting in agricultural markets and offers valuable insights for intelligent supply chain management and price strategy optimization. |
2025-05-15 | Promise of Data-Driven Modeling and Decision Support for Precision Oncology and Theranostics | Binesh Sadanandan, Vahid Behzadan et.al. | 2505.09899 | | | Abstract (click to expand)Cancer remains a leading cause of death worldwide, necessitating personalized treatment approaches to improve outcomes. Theranostics, combining molecular-level imaging with targeted therapy, offers potential for precision oncology but requires optimized, patient-specific care plans. This paper investigates state-of-the-art data-driven decision support applications with a reinforcement learning focus in precision oncology. We review current applications, training environments, state-space representation, performance evaluation criteria, and measurement of risk and reward, highlighting key challenges. We propose a framework integrating data-driven modeling with reinforcement learning-based decision support to optimize radiopharmaceutical therapy dosing, addressing identified challenges and setting directions for future research. The framework leverages Neural Ordinary Differential Equations and Physics-Informed Neural Networks to enhance Physiologically Based Pharmacokinetic models while applying reinforcement learning algorithms to iteratively refine treatment policies based on patient-specific data. |
2025-05-14 | Radon Exposure Dataset | Dakotah Maguire, Jeremy Logan, Heechan Lee et.al. | 2505.09489 | | 7 pages, 2 tables | Abstract (click to expand)Exposure to elevated radon levels in the home is one of the leading causes of lung cancer in the world. The following study describes the creation of a comprehensive, state-level dataset designed to enable the modeling and prediction of household radon concentrations at Zip Code Tabulation Area (ZCTA) and sub-kilometer scales. Details include the data collection and processing involved in compiling physical and demographic factors for Pennsylvania and Utah. Attempting to mitigate this risk requires identifying the underlying geological causes and the populations that might be at risk. This work focuses on identifying at-risk populations throughout Pennsylvania and Utah, where radon levels are some of the highest in the country. The resulting dataset harmonizes geological and demographic factors from various sources and spatial resolutions, including temperature, geochemistry, and soil characteristics. Demographic variables such as the household heating fuel used, the age of building, and the housing type provide further insight into which populations could be most susceptible in areas with potentially high radon levels. This dataset also serves as a foundational resource for two other studies conducted by the authors. The resolution of the data provides a novel approach to predicting potential radon exposure, and the data processing conducted for these states can be scaled up to larger spatial resolutions (e.g., the Contiguous United States [CONUS]) and allow for a broad reclassification of radon exposure potential in the United States. |
2025-05-14 | Optimization of the initial post-buckling response of trusses and frames by an asymptotic approach | Federico Ferrari, Ole Sigmund et.al. | 2505.09373 | | | Abstract (click to expand)Asymptotic post-buckling theory is applied to sizing and topology optimization of trusses and frames, exploring its potential and current computational difficulties. We show that a design's post-buckling response can be controlled by including the lowest two asymptotic coefficients, representing the initial post-buckling slope and curvature, in the optimization formulation. This also reduces the imperfection sensitivity of the optimized design. The asymptotic expansion can further be used to approximate the structural nonlinear response, and then to optimize for a given measure of the nonlinear mechanical performance such as, for example, end-compliance or complementary work. Examples of linear and nonlinear compliance minimization of trusses and frames show the effective use of the asymptotic method for including post-buckling constraints in structural optimization. |
2025-05-13 | Predictive Digital Twins with Quantified Uncertainty for Patient-Specific Decision Making in Oncology | Graham Pash, Umberto Villa, David A. Hormuth II et.al. | 2505.08927 | link | | Abstract (click to expand)Quantifying the uncertainty in predictive models is critical for establishing trust and enabling risk-informed decision making for personalized medicine. In contrast to one-size-fits-all approaches that seek to mitigate risk at the population level, digital twins enable personalized modeling thereby potentially improving individual patient outcomes. Realizing digital twins in biomedicine requires scalable and efficient methods to integrate patient data with mechanistic models of disease progression. This study develops an end-to-end data-to-decisions methodology that combines longitudinal non-invasive imaging data with mechanistic models to estimate and predict spatiotemporal tumor progression accounting for patient-specific anatomy. Through the solution of a statistical inverse problem, imaging data inform the spatially varying parameters of a reaction-diffusion model of tumor progression. An efficient parallel implementation of the forward model coupled with a scalable approximation of the Bayesian posterior distribution enables rigorous, but tractable, quantification of uncertainty due to the sparse, noisy measurements. The methodology is verified on a virtual patient with synthetic data to control for model inadequacy, noise level, and the frequency of data collection. The application to decision-making is illustrated by evaluating the importance of imaging frequency and formulating an optimal experimental design question. The clinical relevance is demonstrated through a model validation study on a cohort of patients with publicly available longitudinal imaging data. |
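The mechanistic core is a reaction-diffusion model of tumor progression; a 1D Fisher-KPP forward-Euler sketch with a spatially varying proliferation rate, standing in for the paper's patient-specific 3D solver:

```python
import numpy as np

def simulate_tumor(u0, rho, D=0.05, dx=0.1, dt=0.01, steps=1000):
    """du/dt = D d2u/dx2 + rho(x) u (1 - u), crude no-flux boundaries.

    u0: initial normalized cell density; rho: per-point proliferation rate
    (the spatially varying parameter imaging data would calibrate).
    Explicit Euler is stable here since D*dt/dx**2 = 0.05 < 0.5.
    """
    u = u0.copy()
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]           # approximate no-flux ends
        u = u + dt * (D * lap + rho * u * (1 - u))
    return u

x = np.linspace(0, 10, 101)
u0 = np.exp(-((x - 5) ** 2))                        # seed lesion
print(simulate_tumor(u0, rho=0.5 * np.ones_like(x)).max())
```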
2025-05-13 | Addressing the Current Challenges of Quantum Machine Learning through Multi-Chip Ensembles | Junghoon Justin Park, Jiook Cha, Samuel Yen-Chi Chen et.al. | 2505.08782 | | | Abstract (click to expand)Quantum Machine Learning (QML) holds significant promise for solving computational challenges across diverse domains. However, its practical deployment is constrained by the limitations of noisy intermediate-scale quantum (NISQ) devices, including noise, limited scalability, and trainability issues in variational quantum circuits (VQCs). We introduce the multi-chip ensemble VQC framework, which partitions high-dimensional computations across smaller quantum chips to enhance scalability, trainability, and noise resilience. We show that this approach mitigates barren plateaus, reduces quantum error bias and variance, and maintains robust generalization through controlled entanglement. Designed to align with current and emerging quantum hardware, the framework demonstrates strong potential for enabling scalable QML on near-term devices, as validated by experiments on standard benchmark datasets (MNIST, FashionMNIST, CIFAR-10) and a real-world dataset (PhysioNet EEG). |
2025-05-14 | Sensitivity-Constrained Fourier Neural Operators for Forward and Inverse Problems in Parametric Differential Equations | Abdolmehdi Behroozi, Chaopeng Shen, Daniel Kifer et.al. | 2505.08740 | link | | Abstract (click to expand)Parametric differential equations of the form du/dt = f(u, x, t, p) are fundamental in science and engineering. While deep learning frameworks such as the Fourier Neural Operator (FNO) can efficiently approximate solutions, they struggle with inverse problems, sensitivity estimation (du/dp), and concept drift. We address these limitations by introducing a sensitivity-based regularization strategy, called Sensitivity-Constrained Fourier Neural Operators (SC-FNO). SC-FNO achieves high accuracy in predicting solution paths and consistently outperforms standard FNO and FNO with physics-informed regularization. It improves performance in parameter inversion tasks, scales to high-dimensional parameter spaces (tested with up to 82 parameters), and reduces both data and training requirements. These gains are achieved with a modest increase in training time (30% to 130% per epoch) and generalize across various types of differential equations and neural operators. Code and selected experiments are available at: https://github.com/AMBehroozi/SC_Neural_Operators |
2025-05-13 | Topology and geometry optimization of grid-shells under self-weight loading | Helen E. Fairclough, Karol Bolbotowski, Linwei He et.al. | 2505.08645 | | 25 pages, 20 figures | Abstract (click to expand)This manuscript presents an approach for simultaneously optimizing the connectivity and elevation of grid-shell structures acting in pure compression (or pure tension) under the combined effects of a prescribed external loading and the design-dependent self-weight of the structure itself. The method derived herein involves solving a second-order cone optimization problem, thereby ensuring convexity and obtaining globally optimal results for a given discretization of the design domain. Several numerical examples are presented, illustrating characteristics of this class of optimal structures. It is found that, as self-weight becomes more significant, both the optimal topology and the optimal elevation profile of the structure change, highlighting the importance of optimizing both topology and geometry simultaneously from the earliest stages of design. It is shown that this approach can obtain solutions with greater accuracy and several orders of magnitude more quickly than a standard 3D layout/truss topology optimization approach. |
2025-05-13 | Improving Unsupervised Task-driven Models of Ventral Visual Stream via Relative Position Predictivity | Dazhong Rong, Hao Dong, Xing Gao et.al. | 2505.08316 | link | This paper has been accepted for full publication at CogSci 2025 (https://cognitivesciencesociety.org/cogsci-2025/) | Abstract (click to expand)Based on the concept that ventral visual stream (VVS) mainly functions for object recognition, current unsupervised task-driven methods model VVS by contrastive learning, and have achieved good brain similarity. However, we believe functions of VVS extend beyond just object recognition. In this paper, we introduce an additional function involving VVS, named relative position (RP) prediction. We first theoretically explain that contrastive learning may be unable to yield the model capability of RP prediction. Motivated by this, we subsequently integrate RP learning with contrastive learning, and propose a new unsupervised task-driven method to model VVS, which is more in line with biological reality. We conduct extensive experiments, demonstrating that: (i) our method significantly improves downstream performance of object recognition while enhancing RP predictivity; (ii) RP predictivity generally improves the model brain similarity. Our results provide strong evidence for the involvement of VVS in location perception (especially RP prediction) from a computational perspective. |
2025-05-12 | QUEST: QUantum-Enhanced Shared Transportation | Chinonso Onah, Neel Miscasci, Carsten Othmer et.al. | 2505.08074 | | 11 pages, 7 figures | Abstract (click to expand)We introduce "Windbreaking-as-a-Service" (WaaS) as an innovative approach to shared transportation in which larger "windbreaker" vehicles provide aerodynamic shelter for "windsurfer" vehicles, thereby reducing drag and fuel consumption. As a computational framework to solve the large-scale matching and assignment problems that arise in WaaS, we present \textbf{QUEST} (Quantum-Enhanced Shared Transportation). Specifically, we formulate the pairing of windbreakers and windsurfers -- subject to timing, speed, and vehicle-class constraints -- as a mixed-integer quadratic problem (MIQP). Focusing on a single-segment prototype, we verify the solution classically via the Hungarian Algorithm, a Gurobi-based solver, and brute-force enumeration of binary vectors. We then encode the problem as a Quadratic Unconstrained Binary Optimization (QUBO) and map it to an Ising Hamiltonian, enabling the use of the Quantum Approximate Optimization Algorithm (QAOA) and other quantum and classical annealing technologies. Our quantum implementation successfully recovers the optimal assignment identified by the classical methods, confirming the soundness of the QUEST pipeline for a controlled prototype. While QAOA and other quantum heuristics do not guarantee a resolution of the fundamental complexity barriers, this study illustrates how the WaaS problem can be systematically translated into a quantum-ready model. It also lays the groundwork for addressing multi-segment scenarios and potentially leveraging quantum advantage for large-scale shared-transportation instances. |
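The QUBO-to-Ising step in the QUEST pipeline is a mechanical change of variables x = (1 + s)/2; a sketch with a brute-force check on a small instance:

```python
import numpy as np
from itertools import product

def qubo_to_ising(Q):
    """Map x^T Q x (x in {0,1}^n) to s^T J s + h^T s + offset (s in {-1,1}^n)."""
    Qs = 0.5 * (Q + Q.T)                 # symmetrize; quadratic form unchanged
    J = Qs / 4.0
    np.fill_diagonal(J, 0.0)             # s_i^2 = 1 folds diagonal into offset
    h = Qs.sum(axis=1) / 2.0
    offset = Qs.sum() / 4.0 + np.trace(Qs) / 4.0
    return J, h, offset

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 4))
J, h, off = qubo_to_ising(Q)
for bits in product([0, 1], repeat=4):   # exhaustive check on all assignments
    x = np.array(bits)
    s = 2 * x - 1
    assert np.isclose(x @ Q @ x, s @ J @ s + h @ s + off)
print("QUBO and Ising energies agree on all 16 assignments")
```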
2025-05-12 | A comparative study of Bitcoin and Ripple cryptocurrencies trading using Deep Reinforcement Learning algorithms | Dieu-Donne Fangnon, Armandine Sorel Kouyim Meli, Verlon Roel Mbingui et.al. | 2505.07660 | link | arXiv admin note: text overlap with arXiv:1911.10107 by other authors | Abstract (click to expand)Artificial intelligence (AI) has demonstrated remarkable success across various applications. In light of this trend, the field of automated trading has developed a keen interest in leveraging AI techniques to forecast the future prices of financial assets. This interest stems from the need to address trading challenges posed by the inherent volatility and dynamic nature of asset prices. However, crafting a flawless strategy becomes a formidable task when dealing with assets characterized by intricate and ever-changing price dynamics. To surmount these challenges, this research employs an innovative rule-based strategy approach to train Deep Reinforcement Learning (DRL) agents. This application is carried out specifically in the context of trading Bitcoin (BTC) and Ripple (XRP). Our proposed approach hinges on the integration of the Deep Q-Network, Double Deep Q-Network, and Dueling Deep Q-Network, alongside the Advantage Actor-Critic algorithms. Each of them aims to yield an optimal policy for our application. To evaluate the effectiveness of our Deep Reinforcement Learning (DRL) approach, we rely on portfolio wealth and the trade signal as performance metrics. The experimental outcomes highlight that the Dueling and Double Deep Q-Networks outperformed the other algorithms when trading XRP, steadily increasing portfolio wealth. All codes are available in this \href{https://github.com/VerlonRoelMBINGUI/RL_Final_Projects_AMMI2023}{\color{blue}Github link}. |
2025-05-12 | A Value of Information-based assessment of strain-based thickness loss monitoring in ship hull structures | Nicholas E. Silionis, Konstantinos N. Anyfantis et.al. | 2505.07427 | | 32 pages, 16 figures, Preprint submitted to Elsevier journal | Abstract (click to expand)Recent advances in Structural Health Monitoring (SHM) have attracted industry interest, yet real-world applications, such as in ship structures, remain scarce. Despite SHM's potential to optimise maintenance, its adoption in ships is limited due to the lack of clearly quantifiable benefits for hull maintenance. This study employs a Bayesian pre-posterior decision analysis to quantify the value of information (VoI) from SHM systems monitoring corrosion-induced thickness loss (CITL) in ship hulls, in a first-of-its-kind analysis for ship structures. We define decision-making consequence cost functions based on exceedance probabilities relative to a target CITL threshold, which can be set by the decision-maker. This introduces a practical aspect to our framework that enables implicit modelling of the decision-maker's risk perception. We apply this framework to a large-scale, high-fidelity numerical model of a commercial vessel and examine the relative benefits of different CITL monitoring strategies, including strain-based SHM and traditional on-site inspections. |
2025-05-14 | Simulating many-engine spacecraft: Exceeding 100 trillion grid points via information geometric regularization and the MFC flow solver | Benjamin Wilfong, Anand Radhakrishnan, Henry Le Berre et.al. | 2505.07392 | link | 10 pages, 7 figures | Abstract (click to expand)This work proposes a method and optimized implementation for exascale simulations of high-speed compressible fluid flows, enabling the simulation of multi-engine rocket craft at an unprecedented scale. We significantly improve upon the state-of-the-art in terms of computational cost and memory footprint through a carefully crafted implementation of the recently proposed information geometric regularization, which eliminates the need for numerical shock capturing. Unified addressing on tightly coupled CPU--GPU platforms increases the total problem size with negligible performance hit. Despite linear stencil algorithms being memory-bound, we achieve wall clock times that are four times faster than optimized baseline numerics. This enables the execution of CFD simulations at more than 100 trillion grid points, surpassing the largest state-of-the-art publicly available simulations by an order of magnitude. Ideal weak scaling is demonstrated on OLCF Frontier and CSCS Alps using the full system, entailing 37.8K AMD MI250X GPUs (Frontier) or 9.2K NVIDIA GH200 superchips (Alps). |
2025-05-11 | Can LLM-based Financial Investing Strategies Outperform the Market in Long Run? | Weixian Waylon Li, Hyeonjun Kim, Mihai Cucuringu et.al. | 2505.07078 | link | 14 pages | Abstract (click to expand)Large Language Models (LLMs) have recently been leveraged for asset pricing tasks and stock trading applications, enabling AI agents to generate investment decisions from unstructured financial data. However, most evaluations of LLM timing-based investing strategies are conducted on narrow timeframes and limited stock universes, overstating effectiveness due to survivorship and data-snooping biases. We critically assess their generalizability and robustness by proposing FINSABER, a backtesting framework evaluating timing-based strategies across longer periods and a larger universe of symbols. Systematic backtests over two decades and 100+ symbols reveal that previously reported LLM advantages deteriorate significantly under a broader cross-section and longer-term evaluation. Our market regime analysis further demonstrates that LLM strategies are overly conservative in bull markets, underperforming passive benchmarks, and overly aggressive in bear markets, incurring heavy losses. These findings highlight the need to develop LLM strategies that are able to prioritise trend detection and regime-aware risk controls over mere scaling of framework complexity. |
2025-05-11 | Energy-Efficient Ternary Encoding for High-Speed Data Transmission in 3D-Integrated Circuits Using Inductive Coupling Links | Abdullah Saeed Alghotmi et.al. | 2505.06908 | | 6 pages | Abstract (click to expand)This paper proposes a ternary signalling scheme for inductive coupling links (ICLs) in 3D-integrated circuits (3D-ICs) to reduce crosstalk and electromagnetic interference in multi-stacked chip communications. By converting binary data into ternary sequences with three voltage levels (-V, 0V, +V), the approach enhances signal separation, reduces crosstalk, and improves signal integrity. Unlike traditional Non-Return to Zero (NRZ) systems, the ternary scheme increases bandwidth efficiency and reduces power consumption through fewer signal transitions. A modified H-Bridge transmitter generates ternary symbols by controlling current flow based on binary-to-ternary mapping. Preliminary simulations validate the efficiency of the scheme, showing reduced power consumption and higher data rates compared to NRZ. This approach shows promise for high-performance computing and IoT devices in 3D-IC environments, offering enhanced noise resilience, lower power usage, and improved communication efficiency. |
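The abstract does not spell out the exact binary-to-ternary mapping, so the sketch below is a hypothetical packing of 3 bits into 2 ternary symbols (possible since $3^2 = 9 \ge 2^3 = 8$), with the voltage levels $\{-V, 0V, +V\}$ represented as $\{-1, 0, +1\}$:

```python
def binary_to_ternary(bits):
    """Hypothetical encoder: pack 3 bits into 2 ternary symbols (3^2 = 9 >= 2^3 = 8),
    with levels {-V, 0, +V} represented as {-1, 0, +1}."""
    bits = list(bits) + [0] * (-len(bits) % 3)              # zero-pad to a multiple of 3
    symbols = []
    for i in range(0, len(bits), 3):
        value = bits[i] * 4 + bits[i + 1] * 2 + bits[i + 2]  # interpret 3 bits as 0..7
        symbols += [value // 3 - 1, value % 3 - 1]           # two base-3 digits, recentred
    return symbols

print(binary_to_ternary([1, 0, 1, 1, 1, 0]))   # -> [0, 1, 1, -1]
```

Two ternary symbols carrying three bits, versus three symbols under NRZ, is the source of the bandwidth-efficiency claim; fewer and smaller line transitions per bit is the mechanism behind the claimed power savings.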
2025-05-14 | Decoding Futures Price Dynamics: A Regularized Sparse Autoencoder for Interpretable Multi-Horizon Forecasting and Factor Discovery | Abhijit Gupta et.al. | 2505.06795 | | | Abstract (click to expand)Commodity price volatility creates economic challenges, necessitating accurate multi-horizon forecasting. Predicting prices for commodities like copper and crude oil is complicated by diverse interacting factors (macroeconomic, supply/demand, geopolitical, etc.). Current models often lack transparency, limiting strategic use. This paper presents a Regularized Sparse Autoencoder (RSAE), a deep learning framework for simultaneous multi-horizon commodity price prediction and discovery of interpretable latent market drivers. The RSAE forecasts prices at multiple horizons (e.g., 1-day, 1-week, 1-month) using multivariate time series. Crucially, L1 regularization ($\ell_1$) of the latent space promotes sparse, interpretable factors corresponding to latent market drivers. |
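A minimal PyTorch sketch of the core RSAE idea, an encoder whose latent code carries an L1 penalty and feeds multi-horizon prediction heads; the layer sizes, single linear head, and penalty weight below are illustrative stand-ins, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Encoder with an L1-penalised latent code feeding multi-horizon heads;
    all sizes are illustrative placeholders, not the paper's architecture."""
    def __init__(self, n_features=10, n_latent=4, n_horizons=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.heads = nn.Linear(n_latent, n_horizons)   # e.g. 1-day / 1-week / 1-month

    def forward(self, x):
        z = self.encoder(x)                  # candidate sparse "market drivers"
        return self.heads(z), z

model = SparseAE()
x, y = torch.randn(64, 10), torch.randn(64, 3)          # toy batch and targets
pred, z = model(x)
lam = 1e-3                                               # sparsity strength (tunable)
loss = nn.functional.mse_loss(pred, y) + lam * z.abs().mean()  # L1 on the latent code
loss.backward()
```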
2025-05-10 | PC-SRGAN: Physically Consistent Super-Resolution Generative Adversarial Network for General Transient Simulations | Md Rakibul Hasan, Pouria Behnoudfar, Dan MacKinlay et.al. | 2505.06502 | | | Abstract (click to expand)Machine Learning, particularly Generative Adversarial Networks (GANs), has revolutionised Super Resolution (SR). However, generated images often lack physical meaningfulness, which is essential for scientific applications. Our approach, PC-SRGAN, enhances image resolution while ensuring physical consistency for interpretable simulations. PC-SRGAN significantly improves both the Peak Signal-to-Noise Ratio and the Structural Similarity Index Measure compared to conventional methods, even with limited training data (e.g., only 13% of training data required for SRGAN). Beyond SR, PC-SRGAN augments physically meaningful machine learning, incorporating numerically justified time integrators and advanced quality metrics. These advancements promise reliable and causal machine-learning models in scientific domains. A significant advantage of PC-SRGAN over conventional SR techniques is its physical consistency, which makes it a viable surrogate model for time-dependent problems. PC-SRGAN advances scientific machine learning, offering improved accuracy and efficiency for image processing, enhanced process understanding, and broader applications to scientific research. The source codes and data will be made publicly available at https://github.com/hasan-rakibul/PC-SRGAN upon acceptance of this paper. |
2025-05-09 | A New DAPO Algorithm for Stock Trading | Ruijian Zha, Bojun Liu et.al. | 2505.06408 | link | Accepted to IEEE IDS 2025 Special Track: Financial Reinforcement Learning and Foundation Models (FinRLFM). 3 pages, 2 figures, 3 tables | Abstract (click to expand)Recent advances in reinforcement learning, such as Dynamic Sampling Policy Optimization (DAPO), show strong performance when paired with large language models (LLMs). Motivated by this success, we ask whether similar gains can be realized in financial trading. We design a trading agent that combines an improved Group Relative Policy Optimization (GRPO) algorithm, augmented with ideas from DAPO, with LLM-based risk and sentiment signals extracted from financial news. On the NASDAQ-100 index (FNSPID dataset), our agent attains a cumulative return of 230.49 percent and an information ratio of 0.37, outperforming the CPPO-DeepSeek baseline. It also cuts training time from about 8 hours to 2.5 hours over 100 epochs while markedly reducing RAM usage. The proposed RL-LLM framework offers a scalable path toward data-efficient trading agents. Code: https://github.com/Ruijian-Zha/FinRL-DAPO-SR/ |
2025-05-07 | RAAC panels can suddenly collapse before any warning of corrosion-induced surface cracking | E. Korec, P. Grassl, M. Jirasek et.al. | 2505.06294 | | | Abstract (click to expand)The collapse of reinforced autoclaved aerated concrete (RAAC) panels has attracted considerable public and academic interest. As detailed experimental data are not yet available and replicating the natural corrosion process requires years or decades, computational modelling is essential to understand under which conditions corrosion remains concealed. The very high porosity of RAAC is widely suspected to be a major contributing factor. However, current corrosion-induced cracking models are known to struggle with capturing the role of concrete porosity. To remedy this critical deficiency, we propose to enrich corrosion-induced cracking modelling with the analytical solution of reactive transport equations governing the precipitation of rust and a porosity-dependent description of diffusivity. With this, the corrosion concealment in RAAC panels is studied computationally for the first time, revealing that RAAC panels can suddenly collapse before any warning of corrosion-induced surface cracking and allowing us to map the conditions most likely to result in sudden collapse. |
2025-05-07 | ALFEE: Adaptive Large Foundation Model for EEG Representation | Wei Xiong, Junming Lin, Jiangtong Li et.al. | 2505.06291 | | 17 pages, 17 figures | Abstract (click to expand)While foundation models excel in text, image, and video domains, the critical biological signals, particularly electroencephalography (EEG), remain underexplored. EEG benefits neurological research with its high temporal resolution, operational practicality, and safety profile. However, low signal-to-noise ratio, inter-subject variability, and cross-paradigm differences hinder the generalization of current models. Existing methods often employ simplified strategies, such as a single loss function or a channel-temporal joint representation module, and suffer from a domain gap between pretraining and evaluation tasks that compromises efficiency and adaptability. To address these limitations, we propose the Adaptive Large Foundation model for EEG signal representation (ALFEE) framework, a novel hybrid transformer architecture with two learning stages for robust EEG representation learning. ALFEE employs a hybrid attention that separates channel-wise feature aggregation from temporal dynamics modeling, enabling robust EEG representation with variable channel configurations. A channel encoder adaptively compresses variable channel information, a temporal encoder captures task-guided evolution, and a hybrid decoder reconstructs signals in both temporal and frequency domains. During pretraining, ALFEE optimizes task prediction, channel and temporal mask reconstruction, and temporal forecasting to enhance multi-scale and multi-channel representation. During fine-tuning, a full-model adaptation with a task-specific token dictionary and a cross-attention layer boosts performance across multiple tasks. After 25,000 hours of pretraining, extensive experimental results on six downstream EEG tasks demonstrate the superior performance of ALFEE over existing models. Our ALFEE framework establishes a scalable foundation for biological signal analysis with implementation at https://github.com/xw1216/ALFEE. |
2025-05-09 | FlowHFT: Flow Policy Induced Optimal High-Frequency Trading under Diverse Market Conditions | Yang Li, Zhi Chen, Steve Yang et.al. | 2505.05784 | | 14 pages, 1 figure, 6 tables, 2 algorithms | Abstract (click to expand)High-frequency trading (HFT) is an investing strategy that continuously monitors market states and places bid and ask orders at millisecond speeds. Traditional HFT approaches fit models with historical data and assume that future market states follow similar patterns. This limits the effectiveness of any single model to the specific conditions it was trained for. Additionally, these models achieve optimal solutions only under specific market conditions, such as assumptions about the stock price's stochastic process, stable order flow, and the absence of sudden volatility. Real-world markets, however, are dynamic, diverse, and frequently volatile. To address these challenges, we propose FlowHFT, a novel imitation learning framework based on a flow-matching policy. FlowHFT simultaneously learns strategies from numerous expert models, each proficient in particular market scenarios. As a result, our framework can adaptively adjust investment decisions according to the prevailing market state. Furthermore, FlowHFT incorporates a grid-search fine-tuning mechanism. This allows it to refine strategies and achieve superior performance even in complex or extreme market scenarios where expert strategies may be suboptimal. We test FlowHFT in multiple market environments. We first show that the flow-matching policy is applicable in stochastic market environments, thus enabling FlowHFT to learn trading strategies under different market conditions. Notably, our single framework consistently achieves performance superior to the best expert for each market condition. |
2025-05-09 | Unfitted finite element modelling of surface-bulk viscous flows in animal cells | Eric Neiva, Hervé Turlier et.al. | 2505.05723 | link | 29 pages, 15 figures | Abstract (click to expand)This work presents a novel unfitted finite element framework to simulate coupled surface-bulk problems in time-dependent domains, focusing on fluid-fluid interactions in animal cells between the actomyosin cortex and the cytoplasm. The cortex, a thin layer beneath the plasma membrane, provides structural integrity and drives shape changes by generating surface contractile forces akin to tension. Cortical contractions generate Marangoni-like surface flows and induce intracellular cytoplasmic flows that are essential for processes such as cell division, migration, and polarization, particularly in large animal cells. Despite its importance, the spatiotemporal regulation of cortex-cytoplasm interactions remains poorly understood and computational modelling can be very challenging because surface-bulk dynamics often lead to large cell deformations. To address these challenges, we propose a sharp-interface framework that uniquely combines the trace finite element method for surface flows with the aggregated finite element method for bulk flows. This approach enables accurate and stable simulations on fixed Cartesian grids without remeshing. The model also incorporates mechanochemical feedback through the surface transport of a molecular regulator of active tension. We solve the resulting mixed-dimensional system on a fixed Cartesian grid using a level-set-based method to track the evolving surface. Numerical experiments validate the accuracy and stability of the method, capturing phenomena such as self-organised pattern formation, curvature-driven relaxation, and cell cleavage. This novel framework offers a powerful and extendable tool for investigating increasingly complex morphogenetic processes in animal cells. |
2025-05-08 | Characterizing GPU Energy Usage in Exascale-Ready Portable Science Applications | William F. Godoy, Oscar Hernandez, Paul R. C. Kent et.al. | 2505.05623 | | 14 pages, 8 figures, 3 tables. Accepted at the Energy Efficiency with Sustainable Performance: Techniques, Tools, and Best Practices, EESP Workshop, in conjunction with ISC High Performance 2025 | Abstract (click to expand)We characterize the GPU energy usage of two widely adopted exascale-ready applications representing two classes of particle and mesh solvers: (i) QMCPACK, a quantum Monte Carlo package, and (ii) AMReX-Castro, an adaptive mesh astrophysical code. We analyze power, temperature, utilization, and energy traces from double-/single (mixed)-precision benchmarks on NVIDIA's A100 and H100 and AMD's MI250X GPUs using queries in NVML and rocm_smi_lib, respectively. We explore application-specific metrics to provide insights on energy vs. performance trade-offs. Our results suggest that mixed-precision energy savings range between 6-25% on QMCPACK and 45% on AMReX-Castro. Gaps also remain in the AMD tooling on Frontier GPUs that need to be understood, while NVML query resolutions show little variability between 1 ms and 1 s. Overall, application-level knowledge is crucial to define energy-cost/science-benefit opportunities for the codesign of future supercomputer architectures in the post-Moore era. |
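On the NVIDIA side, the NVML queries mentioned above can be reproduced with the pynvml bindings; a minimal sketch of a power/temperature/utilization trace with a trapezoidal energy estimate, where the device index and 10 ms sampling period are arbitrary choices and an NVIDIA GPU is assumed present:

```python
import time
import pynvml  # NVML bindings (pip install nvidia-ml-py)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

samples = []
for _ in range(100):                      # ~1 s trace at 10 ms sampling
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # NVML reports mW
    temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu     # percent
    samples.append((time.time(), power_w, temp_c, util))
    time.sleep(0.01)

# trapezoidal energy estimate (joules) over the sampled window
energy_j = sum((t1 - t0) * (p0 + p1) / 2.0
               for (t0, p0, _, _), (t1, p1, _, _) in zip(samples, samples[1:]))
print(f"estimated energy over trace: {energy_j:.2f} J")
pynvml.nvmlShutdown()
```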
2025-05-08 | The Evolution of Embedding Table Optimization and Multi-Epoch Training in Pinterest Ads Conversion | Andrew Qiu, Shubham Barhate, Hin Wai Lui et.al. | 2505.05605 | | | Abstract (click to expand)Deep learning for conversion prediction has found widespread applications in online advertising. These models have become more complex as they are trained to jointly predict multiple objectives such as click, add-to-cart, checkout and other conversion types. Additionally, the capacity and performance of these models can often be increased with the use of embedding tables that encode high cardinality categorical features such as advertiser, user, campaign, and product identifiers (IDs). These embedding tables can be pre-trained, but also learned end-to-end jointly with the model to directly optimize the model objectives. Training these large tables is challenging due to: gradient sparsity, the high cardinality of the categorical features, the non-uniform distribution of IDs and the very high label sparsity. These issues make training prone to both slow convergence and overfitting after the first epoch. Previous works addressed the multi-epoch overfitting issue by using: stronger feature hashing to reduce cardinality, filtering of low frequency IDs, regularization of the embedding tables, re-initialization of the embedding tables after each epoch, etc. Some of these techniques reduce overfitting at the expense of reduced model performance if used too aggressively. In this paper, we share key learnings from the development of embedding table optimization and multi-epoch training in Pinterest Ads Conversion models. We showcase how our Sparse Optimizer speeds up convergence, and how multi-epoch overfitting varies in severity between different objectives in a multi-task model depending on label sparsity. We propose a new approach to deal with multi-epoch overfitting: the use of a frequency-adaptive learning rate on the embedding tables and compare it to embedding re-initialization. We evaluate both methods offline using an industrial large-scale production dataset. |
2025-05-08 | Trading Under Uncertainty: A Distribution-Based Strategy for Futures Markets Using FutureQuant Transformer | Wenhao Guo, Yuda Wang, Zeqiao Huang et.al. | 2505.05595 | | 16 pages, 12 figures | Abstract (click to expand)In the complex landscape of traditional futures trading, where vast data and variables like real-time Limit Order Books (LOB) complicate price predictions, we introduce the FutureQuant Transformer model, leveraging attention mechanisms to navigate these challenges. Unlike conventional models focused on point predictions, the FutureQuant model excels in forecasting the range and volatility of future prices, thus offering richer insights for trading strategies. Its ability to parse and learn from intricate market patterns allows for enhanced decision-making, significantly improving risk management and achieving a notable average gain of 0.1193% per 30-minute trade over state-of-the-art models, using a simple algorithm based on factors such as RSI, ATR, and Bollinger Bands. This innovation marks a substantial leap forward in predictive analytics within the volatile domain of futures trading. |
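The factors named at the end are standard technical indicators; minimal pandas implementations (note the simple-moving-average RSI variant rather than Wilder's smoothing) might look like:

```python
import pandas as pd

def bollinger(close: pd.Series, window=20, k=2.0):
    mid = close.rolling(window).mean()
    sd = close.rolling(window).std()
    return mid - k * sd, mid, mid + k * sd    # lower, middle, upper bands

def rsi(close: pd.Series, window=14):
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100.0 - 100.0 / (1.0 + gain / loss)

def atr(high: pd.Series, low: pd.Series, close: pd.Series, window=14):
    tr = pd.concat([high - low,
                    (high - close.shift()).abs(),
                    (low - close.shift()).abs()], axis=1).max(axis=1)
    return tr.rolling(window).mean()          # average true range
```

How the paper combines these with the forecast distribution is its own contribution; the functions above are only the textbook definitions of the inputs.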
2025-05-08 | Advanced Stock Market Prediction Using Long Short-Term Memory Networks: A Comprehensive Deep Learning Framework | Rajneesh Chaudhary et.al. | 2505.05325 | | 11 pages, 17 figures, submitted as a pre-final year undergraduate project at Indian Institute of Information Technology, Gwalior. The paper integrates LSTM-based time series forecasting with sentiment analysis using VADER and includes a working web interface for real-time prediction | Abstract (click to expand)Predicting stock market movements remains a persistent challenge due to the inherently volatile, non-linear, and stochastic nature of financial time series data. This paper introduces a deep learning-based framework employing Long Short-Term Memory (LSTM) networks to forecast the closing stock prices of major technology firms: Apple, Google, Microsoft, and Amazon, listed on NASDAQ. Historical data was sourced from Yahoo Finance and processed using normalization and feature engineering techniques. The proposed model achieves a Mean Absolute Percentage Error (MAPE) of 2.72 on unseen test data, significantly outperforming traditional models like ARIMA. To further enhance predictive accuracy, sentiment scores were integrated using real-time news articles and social media data, analyzed through the VADER sentiment analysis tool. A web application was also developed to provide real-time visualizations of stock price forecasts, offering practical utility for both individual and institutional investors. This research demonstrates the strength of LSTM networks in modeling complex financial sequences and presents a novel hybrid approach combining time series modeling with sentiment analysis. |
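The windowing and evaluation scaffolding such a framework needs is simple; a NumPy sketch of lookback-window construction and the MAPE metric, with a synthetic random-walk series and a naive last-value baseline standing in for real NASDAQ data:

```python
import numpy as np

def make_windows(series, lookback=60):
    """Turn a 1-D price series into (X, y) pairs for sequence models like LSTMs."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y                    # shape (samples, timesteps, 1 feature)

def mape(y_true, y_pred):
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

prices = 100.0 + np.cumsum(np.random.default_rng(1).normal(scale=0.5, size=500))
X, y = make_windows(prices)
print(X.shape, f"naive last-value baseline MAPE: {mape(y[1:], y[:-1]):.2f}")
```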
2025-05-13 | A thermoelastic plate model for shot peen forming metal panels based on effective torque | Conor Rowan et.al. | 2505.05236 | | | Abstract (click to expand)A common technique used in factories to shape metal panels is shot peen forming. The impacts between the hard steel shot and the softer metal of the panel cause localized plastic deformation used to improve the fatigue properties of the material's surface. The residual stress distribution imparted by impacts also results in bending, which suggests that a torque is associated with it. In this paper, we model shot peen forming as the application of spatially varying torques to a Kirchhoff plate, opting to use the language of thermoelasticity in order to introduce these torque distributions. First, we derive the governing equations for the thermoelastic thin plate model and show that only a torque-type resultant of the temperature distribution shows up in the bending equation. Next, to calibrate from the shot peen operation an empirical effective torque parameter used in the thermoelastic model, a simple and non-invasive test is devised. This test relies only on measuring the maximum displacement of a uniformly shot peened plate as opposed to characterizing the residual stress distribution. After discussing how to handle the unconventional fully-free boundary conditions germane to peened plates, we introduce an approach to solving the inverse problem whereby the peening distribution required to obtain a specified plate contour can be obtained. Given the non-unique relationship between peening distributions and the displacement at discrete points, we explore a regularization of the inverse problem which gives rise to shot peen distributions that match the capabilities of equipment in the factory. In order to validate our proposed model, an experiment with quantified uncertainty is designed and carried out to investigate the agreement between the predictions of the calibrated model and real shot peen forming operations. |
2025-05-08 | Physics-informed solution reconstruction in elasticity and heat transfer using the explicit constraint force method | Conor Rowan, Kurt Maute, Alireza Doostan et.al. | 2505.04875 | | | Abstract (click to expand)One use case of "physics-informed neural networks" (PINNs) is solution reconstruction, which aims to estimate the full-field state of a physical system from sparse measurements. Parameterized governing equations of the system are used in tandem with the measurements to regularize the regression problem. However, in real-world solution reconstruction problems, the parameterized governing equation may be inconsistent with the physical phenomena that give rise to the measurement data. We show that due to assuming consistency between the true and parameterized physics, PINNs-based approaches may fail to satisfy three basic criteria of interpretability, robustness, and data consistency. As we argue, these criteria ensure that (i) the quality of the reconstruction can be assessed, (ii) the reconstruction does not depend strongly on the choice of physics loss, and (iii) that in certain situations, the physics parameters can be uniquely recovered. In the context of elasticity and heat transfer, we demonstrate how standard formulations of the physics loss and techniques for constraining the solution to respect the measurement data lead to different "constraint forces" -- which we define as additional source terms arising from the constraints -- and that these constraint forces can significantly influence the reconstructed solution. To avoid the potentially substantial influence of the choice of physics loss and method of constraint enforcement on the reconstructed solution, we propose the "explicit constraint force method" (ECFM) to gain control of the source term introduced by the constraint. We then show that by satisfying the criteria of interpretability, robustness, and data consistency, this approach leads to more predictable and customizable reconstructions from noisy measurement data, even when the parameterization of the missing physics is inconsistent with the measured system. |
2025-05-07 | HiPerRAG: High-Performance Retrieval Augmented Generation for Scientific Insights | Ozan Gokdemir, Carlo Siebenschuh, Alexander Brace et.al. | 2505.04846 | | This paper has been accepted at the Platform for Advanced Scientific Computing Conference (PASC 25), June 16-18, 2025, Brugg-Windisch, Switzerland | Abstract (click to expand)The volume of scientific literature is growing exponentially, leading to underutilized discoveries, duplicated efforts, and limited cross-disciplinary collaboration. Retrieval Augmented Generation (RAG) offers a way to assist scientists by improving the factuality of Large Language Models (LLMs) in processing this influx of information. However, scaling RAG to handle millions of articles introduces significant challenges, including the high computational costs associated with parsing documents and embedding scientific knowledge, as well as the algorithmic complexity of aligning these representations with the nuanced semantics of scientific content. To address these issues, we introduce HiPerRAG, a RAG workflow powered by high performance computing (HPC) to index and retrieve knowledge from more than 3.6 million scientific articles. At its core are Oreo, a high-throughput model for multimodal document parsing, and ColTrast, a query-aware encoder fine-tuning algorithm that enhances retrieval accuracy by using contrastive learning and late-interaction techniques. HiPerRAG delivers robust performance on existing scientific question answering benchmarks and two new benchmarks introduced in this work, achieving 90% accuracy on SciQ and 76% on PubMedQA, outperforming both domain-specific models like PubMedGPT and commercial LLMs such as GPT-4. Scaling to thousands of GPUs on the Polaris, Sunspot, and Frontier supercomputers, HiPerRAG delivers million document-scale RAG workflows for unifying scientific knowledge and fostering interdisciplinary innovation. |
2025-05-07 | RDPP-TD: Reputation and Data Privacy-Preserving based Truth Discovery Scheme in Mobile Crowdsensing | Lijian Wu, Weikun Xie, Wei Tan et.al. | 2505.04361 | | | Abstract (click to expand)Truth discovery (TD) plays an important role in Mobile Crowdsensing (MCS). However, existing TD methods, including privacy-preserving TD approaches, estimate the truth by weighting only the data submitted in the current round, which often results in low data quality. Moreover, there is a lack of effective TD methods that preserve both reputation and data privacy. To address these issues, a Reputation and Data Privacy-Preserving based Truth Discovery (RDPP-TD) scheme is proposed to obtain high-quality data for MCS. The RDPP-TD scheme consists of two key approaches: a Reputation-based Truth Discovery (RTD) approach, which integrates the weight of current-round data with workers' reputation values to estimate the truth, thereby achieving more accurate results, and a Reputation and Data Privacy-Preserving (RDPP) approach, which ensures privacy preservation for sensing data and reputation values. First, the RDPP approach, when seamlessly integrated with RTD, can effectively evaluate the reliability of workers and their sensing data in a privacy-preserving manner. Second, the RDPP scheme supports reputation-based worker recruitment and rewards, ensuring high-quality data collection while incentivizing workers to provide accurate information. Comprehensive theoretical analysis and extensive experiments based on real-world datasets demonstrate that the proposed RDPP-TD scheme provides strong privacy protection and improves data quality by up to 33.3%. |
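Setting the privacy machinery aside, the truth-discovery core that RTD extends alternates between estimating truths and re-weighting workers by their reliability; a minimal CRH-style sketch (not the paper's RDPP-TD protocol, which adds reputation and privacy protection on top):

```python
import numpy as np

def truth_discovery(obs, n_iter=20):
    """obs[w, t]: worker w's reading of task t. Alternates a weighted truth
    estimate with log-inverse-error worker weights (CRH-style core of TD)."""
    n_workers = obs.shape[0]
    weights = np.ones(n_workers)
    for _ in range(n_iter):
        truth = weights @ obs / weights.sum()        # weighted truth estimate
        err = ((obs - truth) ** 2).sum(axis=1) + 1e-12
        weights = np.log(err.sum() / err)            # reliable workers weigh more
    return truth, weights

rng = np.random.default_rng(0)
true_vals = rng.uniform(0, 10, size=50)
noise = rng.normal(scale=np.array([0.1, 0.5, 2.0])[:, None], size=(3, 50))
t_hat, w = truth_discovery(true_vals + noise)
print(w)   # worker 0 (least noisy) should receive the largest weight
```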
2025-05-07 | Yield and Buckling Stress Limits in Topology Optimization of Multiscale Structures | Christoffer Fyllgraf Christensen, Fengwen Wang, Ole Sigmund et.al. | 2505.04353 | | 24 pages, 20 figures, 9 tables | Abstract (click to expand)This study presents an extension of multiscale topology optimization by integrating both yield stress and local/global buckling considerations into the design process. Building upon established multiscale methodologies, we develop a new framework incorporating yield stress limits either as constraints or objectives alongside previously established local and global buckling constraints. This approach significantly refines the optimization process, ensuring that the resulting designs meet mechanical performance criteria and adhere to critical material yield constraints. First, we establish local density-dependent von Mises yield surfaces based on local yield estimates from homogenization-based analysis to predict the local yield limits of the homogenized materials. Then, these local Yield-based Load Factors (YLFs) are combined with local and global buckling criteria to obtain topology optimized designs that consider yield and buckling failure on all levels. This integration is crucial for the practical application of optimized structures in real-world scenarios, where material yield and stability behavior critically influence structural integrity and durability. Numerical examples demonstrate how optimized designs depend on the stiffness to yield ratio of the considered building material. Despite the foundational assumption of separation of scales, the de-homogenized structures, even at relatively coarse length scales, exhibit a high degree of agreement with the corresponding homogenized predictions. |
2025-05-06 | Modal Decomposition and Identification for a Population of Structures Using Physics-Informed Graph Neural Networks and Transformers | Xudong Jian, Kiran Bacsa, Gregory Duthé et.al. | 2505.04018 | | | Abstract (click to expand)Modal identification is crucial for structural health monitoring and structural control, providing critical insights into structural dynamics and performance. This study presents a novel deep learning framework that integrates graph neural networks (GNNs), transformers, and a physics-informed loss function to achieve modal decomposition and identification across a population of structures. The transformer module decomposes multi-degrees-of-freedom (MDOF) structural dynamic measurements into single-degree-of-freedom (SDOF) modal responses, facilitating the identification of natural frequencies and damping ratios. Concurrently, the GNN captures the structural configurations and identifies mode shapes corresponding to the decomposed SDOF modal responses. The proposed model is trained in a purely physics-informed and unsupervised manner, leveraging modal decomposition theory and the independence of structural modes to guide learning without the need for labeled data. Validation through numerical simulations and laboratory experiments demonstrates its effectiveness in accurately decomposing dynamic responses and identifying modal properties from sparse structural dynamic measurements, regardless of variations in external loads or structural configurations. Comparative analyses against established modal identification techniques and model variations further underscore its superior performance, positioning it as a favorable approach for population-based structural health monitoring. |
2025-05-06 | Algorithm Selection in Short-Range Molecular Dynamics Simulations | Samuel James Newcome, Fabio Alexander Gratl, Manuel Lerchner et.al. | 2505.03438 | link | 16 pages, 1 figure. Submitted to the 25th International Conference on Computational Science. This version includes two minor corrections to the submitted manuscript, which do not result from the conference's peer review, and no changes resulting from the peer review process | Abstract (click to expand)Numerous algorithms and parallelisations have been developed for short-range particle simulations; however, none are optimally performant for all scenarios. Such a concept led to the prior development of the particle simulation library AutoPas, which implemented many of these algorithms and parallelisations and could select and tune these over the course of the simulation as the scenario changed. Prior works have, however, used only naive approaches to the algorithm selection problem, which can lead to significant overhead from trialling poorly performing algorithmic configurations. In this work, we investigate this problem in the case of Molecular Dynamics simulations. We present three algorithm selection strategies: an approach which makes performance predictions from past data, an expert-knowledge fuzzy logic-based approach, and a data-driven random forest-based approach. We demonstrate that these approaches can achieve speedups of up to 4.05 compared to prior approaches and 1.25 compared to a perfect configuration selection without dynamic algorithm selection. In addition, we discuss the practicality of the strategies in comparison to their performance, to highlight the tractability of such solutions. |
2025-05-09 | Data-efficient inverse design of spinodoid metamaterials | Max Rosenkranz, Markus Kästner, Ivo F. Sbalzarini et.al. | 2505.03415 | | 17 pages, 11 figures (edited acknowledgements) | Abstract (click to expand)We create a data-efficient and accurate surrogate model for structure-property linkages of spinodoid metamaterials with only 75 data points -- far fewer than the several thousands used in prior works -- and demonstrate its use in multi-objective inverse design. The inverse problem of finding a material microstructure that leads to given bulk properties is of great interest in mechanics and materials science. These inverse design tasks often require a large dataset, which can become unaffordable when considering material behavior that requires more expensive simulations or experiments. We generate a data-efficient surrogate for the mapping between the characteristics of the local material structure and the effective elasticity tensor and use it to inversely design structures with multiple objectives simultaneously. The presented neural network-based surrogate model achieves its data efficiency by inherently satisfying certain requirements, such as equivariance with respect to permutations of structure parameters, which avoids having to learn them from data. The resulting surrogate of the forward model is differentiable, allowing its direct use in gradient-based optimization for the inverse design problem. We demonstrate in three inverse design tasks of varying complexity that this approach yields reliable results while requiring significantly less training data than previous approaches based on neural-network surrogates. This paves the way for inverse design involving nonlinear mechanical behavior, where data efficiency is currently the limiting factor. |
2025-05-06 | Transformers Applied to Short-term Solar PV Power Output Forecasting | Andea Scott, Sindhu Sreedhara, Folasade Ayoola et.al. | 2505.03188 | | | Abstract (click to expand)Reliable forecasts of the power output from variable renewable energy generators like solar photovoltaic systems are important to balancing load on real-time electricity markets and ensuring electricity supply reliability. However, solar PV power output is highly uncertain, with significant variations occurring over both longer (daily or seasonally) and shorter (within minutes) timescales due to weather conditions, especially cloud cover. This paper builds on existing work that uses convolutional neural networks in the computer vision task of predicting (in a Nowcast model) and forecasting (in a Forecast model) solar PV power output (Stanford EAO SUNSET Model). A pure transformer architecture followed by a fully-connected layer is applied to one year of image data with experiments run on various combinations of learning rate and batch size. We find that the transformer architecture performs almost as well as the baseline model in the PV output prediction task. However, it performs worse on sunny days. |
2025-05-05 | Multiscale Parallel Simulation of Malignant Pleural Mesothelioma via Adaptive Domain Partitioning -- an Efficiency Analysis Study | Anton Dolganov, Valeria Krzhizhanovskaya, Stefano Trebeschi et.al. | 2505.03067 | | | Abstract (click to expand)A novel parallel efficiency analysis on a framework for simulating the growth of Malignant Pleural Mesothelioma (MPM) tumours is presented. Proliferation of MPM tumours in the pleural space is simulated using a Cellular Potts Model (CPM) coupled with partial differential equations (PDEs). Using segmented lung data from CT scans, an environment is set up with artificial tumour data in the pleural space, representing the simulation domain, onto which a dynamic bounding box is applied to restrict computations to the region of interest, dramatically reducing memory and CPU overhead. This adaptive partitioning of the domain enables efficient use of computational resources by reducing the three-dimensional (3D) domain over which the PDEs are to be solved. The PDEs, representing oxygen, nutrients, and cytokines, are solved using the finite-volume method with a first-order implicit Euler scheme. Parallelization is realized using the public Python library mpi4py in combination with LinearGMRESSolver and PETSc for efficient convergence. Performance analyses have shown that parallelization achieves a reduced solving time compared to serial computation. Also, optimizations enable efficient use of available memory and improved load balancing amongst the cores. |
2025-05-05 | Data Compression for Time Series Modelling: A Case Study of Smart Grid Demand Forecasting | Mikkel Bue Lykkegaard, Svend Vendelbo Nielsen, Akanksha Upadhyay et.al. | 2505.02606 | | | Abstract (click to expand)Efficient time series forecasting is essential for smart energy systems, enabling accurate predictions of energy demand, renewable resource availability, and grid stability. However, the growing volume of high-frequency data from sensors and IoT devices poses challenges for storage and transmission. This study explores Discrete Wavelet Transform (DWT)-based data compression as a solution to these challenges while ensuring forecasting accuracy. A case study of a seawater supply system in Hirtshals, Denmark, operating under dynamic weather, operational schedules, and seasonal trends, is used for evaluation. Biorthogonal wavelets of varying orders were applied to compress data at different rates. Three forecasting models - Ordinary Least Squares (OLS), XGBoost, and the Time Series Dense Encoder (TiDE) - were tested to assess the impact of compression on forecasting performance. Lossy compression rates up to \(r_{\mathrm{lossy}} = 0.999\) were analyzed, with the Normalized Mutual Information (NMI) metric quantifying the relationship between compression and information retention. Results indicate that wavelet-based compression can retain essential features for accurate forecasting when applied carefully. XGBoost proved highly robust to compression artifacts, maintaining stable performance across diverse compression rates. In contrast, OLS demonstrated sensitivity to smooth wavelets and high compression rates, while TiDE showed some variability but remained competitive. This study highlights the potential of wavelet-based compression for scalable, efficient data management in smart energy systems without sacrificing forecasting accuracy. The findings are relevant to other fields requiring high-frequency time series forecasting, including climate modeling, water supply systems, and industrial operations. |
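The compression pipeline can be sketched with PyWavelets: decompose with a biorthogonal wavelet, zero the smallest coefficients, and reconstruct; the synthetic demand signal, wavelet order, decomposition level, and threshold below are illustrative choices, not the study's configuration:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2048)
demand = np.sin(2 * np.pi * t) + 0.3 * np.sin(14 * np.pi * t) \
         + 0.1 * rng.normal(size=t.size)            # toy demand signal

coeffs = pywt.wavedec(demand, 'bior3.5', level=5)   # biorthogonal wavelet family
flat = np.concatenate(coeffs)
thresh = np.quantile(np.abs(flat), 0.95)            # drop the smallest 95% of coeffs
compressed = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
recon = pywt.waverec(compressed, 'bior3.5')[:demand.size]

r_lossy = 1.0 - sum((c != 0).sum() for c in compressed) / flat.size
rmse = np.sqrt(np.mean((demand - recon) ** 2))
print(f"compression rate {r_lossy:.3f}, reconstruction RMSE {rmse:.4f}")
```

A forecasting model trained on `recon` instead of `demand` is then evaluated as in the study, which sweeps the threshold to probe rates up to 0.999.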
2025-05-05 | Predicting the Dynamics of Complex System via Multiscale Diffusion Autoencoder | Ruikun Li, Jingwen Cheng, Huandong Wang et.al. | 2505.02450 | | | Abstract (click to expand)Predicting the dynamics of complex systems is crucial for various scientific and engineering applications. The accuracy of predictions depends on the model's ability to capture the intrinsic dynamics. While existing methods capture key dynamics by encoding a low-dimensional latent space, they overlook the inherent multiscale structure of complex systems, making it difficult to accurately predict complex spatiotemporal evolution. Therefore, we propose a Multiscale Diffusion Prediction Network (MDPNet) that leverages the multiscale structure of complex systems to discover the latent space of intrinsic dynamics. First, we encode multiscale features through a multiscale diffusion autoencoder to guide the diffusion model for reliable reconstruction. Then, we introduce an attention-based graph neural ordinary differential equation to model the co-evolution across different scales. Extensive evaluations on representative systems demonstrate that the proposed method achieves an average prediction error reduction of 53.23% compared to baselines, while also exhibiting superior robustness and generalization. |
2025-05-04 | Probabilistic Method for Optimizing Submarine Search and Rescue Strategy Under Environmental Uncertainty | Runhao Liu, Ziming Chen, Peng Zhang et.al. | 2505.02186 | | | Abstract (click to expand)When coping with the urgent challenge of locating and rescuing a deep-sea submersible in the event of communication or power failure, environmental uncertainty in the ocean cannot be ignored. However, classic physical models are limited to deterministic scenarios. Therefore, we present a hybrid algorithmic framework that combines dynamic analysis of the target submarine with Monte Carlo and Bayesian methods for probabilistic prediction, improving search efficiency. Herein, Monte Carlo simulation is performed to account for environmental variability and improve the accuracy of location prediction. Building on the trajectory prediction, we integrate Bayesian grid search with probabilistic updating, and for more complex situations we introduce Bayesian filtering. To balance the rate of successful rescue against costs, economic optimization is performed using a cost-benefit analysis based on the entropy weight method, with the CER applied for evaluation. |
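The Monte Carlo drift prediction plus Bayesian grid updating combination follows classic search theory; a one-dimensional toy sketch in which a missed search of a cell downweights it by the detection probability (the drift statistics, detection probability, and grid size are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_particles = 50, 10_000

# Monte Carlo drift: propagate the last known position under uncertain currents
pos = np.full(n_particles, 25.0)             # last known cell (1-D for brevity)
for _ in range(10):
    pos += rng.normal(loc=0.3, scale=1.0, size=n_particles)
counts, _ = np.histogram(pos, bins=n_cells, range=(0, n_cells))
belief = counts / counts.sum()               # prior probability over grid cells

# Greedy Bayesian search: a miss in a cell rescales its posterior probability
p_detect = 0.8                               # assumed sensor detection probability
for step in range(5):
    cell = int(np.argmax(belief))            # search the most probable cell
    belief[cell] *= (1.0 - p_detect)         # Bayes update after an unsuccessful look
    belief /= belief.sum()
    print(f"step {step}: searched cell {cell}, max belief now {belief.max():.3f}")
```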
2025-05-04 | Data-Driven Team Selection in Fantasy Premier League Using Integer Programming and Predictive Modeling Approach | Danial Ramezani et.al. | 2505.02170 | | | Abstract (click to expand)Fantasy football is a billion-dollar industry with millions of participants. Constrained by a fixed budget, decision-makers draft a squad whose players are expected to perform well in the upcoming weeks to maximize total points. This paper proposes novel deterministic and robust integer programming models that select the optimal starting eleven and the captain. A new hybrid scoring metric is constructed using an interpretable artificial intelligence framework and underlying match performance data. Several objective functions and estimation techniques are introduced for the programming model. To the best of my knowledge, this is the first study to approach fantasy football through this lens. The models' performance is evaluated using data from the 2023/24 Premier League season. Results indicate that the proposed hybrid method achieved the highest score while maintaining consistent performance. Utilizing the Monte Carlo simulation, the strategic choice of averaging techniques for estimating cost vectors, and the proposed hybrid approach are shown to be effective during the out-of-sample period. This paper also provides a thorough analysis of the optimal formations and players selected by the models, offering valuable insights into effective fantasy football strategies. |
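The deterministic selection model can be sketched as a small integer program, for example with PuLP; the toy costs, expected points, budget, and 1-4-4-2 quota below are hypothetical, and captain-doubles-points is the usual fantasy rule rather than a detail taken from the paper:

```python
import pulp

# toy data: (name, position, cost, expected_points), illustrative only
players = [("GK1", "GK", 5.0, 4.8), ("GK2", "GK", 4.5, 4.1),
           ("D1", "DEF", 6.0, 5.5), ("D2", "DEF", 5.5, 5.0),
           ("D3", "DEF", 5.0, 4.6), ("D4", "DEF", 4.5, 4.0),
           ("M1", "MID", 9.0, 7.9), ("M2", "MID", 8.0, 7.0),
           ("M3", "MID", 7.0, 6.2), ("M4", "MID", 6.5, 5.8),
           ("F1", "FWD", 11.0, 8.5), ("F2", "FWD", 8.5, 6.9),
           ("F3", "FWD", 7.0, 6.0)]
budget = 83.0
quota = {"GK": 1, "DEF": 4, "MID": 4, "FWD": 2}      # a 1-4-4-2 starting eleven

prob = pulp.LpProblem("starting_eleven", pulp.LpMaximize)
pick = {n: pulp.LpVariable(f"pick_{n}", cat="Binary") for n, _, _, _ in players}
cap = {n: pulp.LpVariable(f"cap_{n}", cat="Binary") for n, _, _, _ in players}

# objective: total expected points, with the captain's points counted twice
prob += pulp.lpSum(pts * pick[n] + pts * cap[n] for n, _, _, pts in players)
prob += pulp.lpSum(cost * pick[n] for n, _, cost, _ in players) <= budget
for pos, k in quota.items():
    prob += pulp.lpSum(pick[n] for n, p, _, _ in players if p == pos) == k
prob += pulp.lpSum(cap.values()) == 1                # exactly one captain
for n, _, _, _ in players:
    prob += cap[n] <= pick[n]                        # captain must be picked

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([n for n in pick if pick[n].value() == 1],
      "captain:", [n for n in cap if cap[n].value() == 1])
```

The paper's robust variant replaces the point estimates with uncertainty sets; the constraint structure stays the same.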
2025-05-04 | Representation Learning of Limit Order Book: A Comprehensive Study and Benchmarking | Muyao Zhong, Yushi Lin, Peng Yang et.al. | 2505.02139 | | | Abstract (click to expand)The Limit Order Book (LOB), the most fundamental data of the financial market, provides a fine-grained view of market dynamics while posing significant challenges for deep models due to its strong autocorrelation, cross-feature constraints, and feature scale disparity. Existing approaches often tightly couple representation learning with specific downstream tasks in an end-to-end manner, failing to analyze the learned representations individually and explicitly, which limits their reusability and generalization. This paper conducts the first systematic comparative study of LOB representation learning, aiming to identify an effective way of extracting transferable, compact features that capture essential LOB properties. We introduce LOBench, a standardized benchmark with real China A-share market data, offering curated datasets, unified preprocessing, consistent evaluation metrics, and strong baselines. Extensive experiments validate the sufficiency and necessity of LOB representations for various downstream tasks and highlight their advantages over both traditional task-specific end-to-end models and advanced representation learning models for general time series. Our work establishes a reproducible framework and provides clear guidelines for future research. Datasets and code will be publicly available at https://github.com/financial-simulation-lab/LOBench. |
2025-05-04 | A Deep Learning-Aided Approach for Estimating Field Permeability Map by Fusing Well Logs, Well Tests, and Seismic Data | Grigoriy Shutov, Viktor Duplyakov, Shadfar Davoodi et.al. | 2505.02093 | | | Abstract (click to expand)Obtaining reliable permeability maps of oil reservoirs is crucial for building a robust and accurate reservoir simulation model and, therefore, designing effective recovery strategies. This problem, however, remains challenging, as it requires the integration of various data sources by experts from different disciplines. Moreover, there are no sources to provide direct information about the inter-well space. In this work, a new method based on the data-fusion approach is proposed for predicting two-dimensional permeability maps on the whole reservoir area. This method utilizes non-parametric regression with a custom kernel shape accounting for different data sources: well logs, well tests, and seismics. A convolutional neural network is developed to process seismic data and then incorporate it with other sources. A multi-stage data fusion procedure helps to artificially increase the training dataset for the seismic interpretation model and finally to construct the adequate permeability map. The proposed methodology of permeability map construction from different sources was tested on a real oil reservoir located in Western Siberia. The results demonstrate that the developed map perfectly corresponds to the permeability estimations in the wells, and the inter-well space permeability predictions are considerably improved through the incorporation of the seismic data. |
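The non-parametric regression backbone is, in its simplest form, Nadaraya-Watson smoothing with a spatial kernel; a toy 2-D sketch with an isotropic Gaussian kernel standing in for the paper's custom multi-source kernel shape:

```python
import numpy as np

def nw_kernel_map(xy_obs, k_obs, xy_grid, length_scale):
    """Nadaraya-Watson regression: spread sparse well permeability estimates
    over a 2-D grid using an isotropic Gaussian kernel."""
    d2 = ((xy_grid[:, None, :] - xy_obs[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / length_scale ** 2)        # kernel weights per well
    return (w @ k_obs) / w.sum(axis=1)               # weighted average per cell

rng = np.random.default_rng(0)
wells = rng.uniform(0.0, 1.0, size=(12, 2))          # well coordinates (toy)
perm = rng.lognormal(mean=3.0, sigma=0.5, size=12)   # permeability at wells, mD
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
k_map = nw_kernel_map(wells, perm, grid, length_scale=0.15).reshape(50, 50)
print(k_map.min(), k_map.max())
```

In the paper, the kernel shape additionally encodes well-test and seismic information; the sketch shows only the interpolation mechanism.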
2025-05-04 | A Deep Learning Scheme of Electromagnetic Scattering From Scatterers With Incomplete Profiles | Ji-Yuan Wang, Xin-Yue Lou, Liang Zhang et.al. | 2505.02086 | | | Abstract (click to expand)A deep learning scheme is proposed to solve the electromagnetic (EM) scattering problems where the profile of the dielectric scatterer of interest is incomplete. As a compensation, a limited amount of scattering data is provided, which is in principle containing sufficient information associated with the missing part of the profile. The existing solvers can hardly realize the compensation if the known part of the profile and the scattering data are combined straightforwardly. On one hand, the well-developed forward solvers have no mechanism to accept the scattering data, which can recover the unknown part of the profile if properly used. On the other hand, the existing solvers for inverse problems cannot retrieve the complete profile with an acceptable accuracy from the limited amount of scattering data, even when the available part of the profile can be fed into the solvers. This work aims to handle the difficulty. To this end, the EM forward scattering from an incompletely known dielectric scatterer is derived. A scheme based on DL is then proposed where the forward and inverse scattering problems are solved simultaneously. Numerical experiments are conducted to demonstrate the performance of the proposed DL-based scheme for both two-dimensional (2-D) and three-dimensional (3-D) EM scattering problems. |
2025-05-03 | A computational framework for predicting the effect of surface roughness in fatigue | S. Jiménez-Alfaro, E. Martínez-Pañeda et.al. | 2505.01871 | | | Abstract (click to expand)Surface roughness is a critical factor influencing the fatigue life of structural components. Its effect is commonly quantified using a correction coefficient known as the surface factor. In this paper, a phase field based numerical framework is proposed to estimate the surface factor while accounting for the stochastic nature of surface roughness. The model is validated against existing experimental data. Furthermore, we investigate the influence of key parameters on the fatigue life of rough surfaces, such as surface topology and failure strength. An important effect of surface roughness is observed when the average surface roughness increases and the correlation length of the surface profile decreases. This effect becomes more pronounced with higher failure strengths. |
2025-05-03 | Enhancing Black-Litterman Portfolio via Hybrid Forecasting Model Combining Multivariate Decomposition and Noise Reduction | Ziye Yang, Ke Lu et.al. | 2505.01781 | | | Abstract (click to expand)Sensitivity to input parameters and a lack of flexibility limit the traditional Mean-Variance model. In contrast, the Black-Litterman model has attracted widespread attention by integrating market equilibrium returns with investors' subjective views. This paper proposes a novel hybrid deep learning model combining Singular Spectrum Analysis (SSA), Multivariate Aligned Empirical Mode Decomposition (MA-EMD), and Temporal Convolutional Networks (TCNs), aiming to improve the prediction accuracy of asset prices and thus enhance the ability of the Black-Litterman model to generate subjective views. Experimental results show that noise reduction pre-processing can improve the model's accuracy, and the prediction performance of the proposed model is significantly better than that of three multivariate decomposition benchmark models. We construct an investment portfolio by using 20 representative stocks from the NASDAQ 100 index. By combining the hybrid forecasting model with the Black-Litterman model, the generated investment portfolio exhibits better returns and risk control capabilities than the Mean-Variance, Equal-Weighted, and Market-Weighted models in the short holding period. |
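The Black-Litterman step that consumes the hybrid model's forecasts as subjective views follows the textbook posterior $\mu_{BL} = [(\tau\Sigma)^{-1} + P^T \Omega^{-1} P]^{-1}[(\tau\Sigma)^{-1}\pi + P^T \Omega^{-1} Q]$; a compact NumPy sketch with invented two-asset numbers:

```python
import numpy as np

def black_litterman(Sigma, w_mkt, P, Q, tau=0.05, delta=2.5, Omega=None):
    """Posterior expected returns blending equilibrium returns (pi) with
    subjective views Q on view portfolios P; textbook BL, not the paper's code."""
    pi = delta * Sigma @ w_mkt                            # implied equilibrium returns
    if Omega is None:
        Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))  # default view uncertainty
    inv_tS = np.linalg.inv(tau * Sigma)
    A = inv_tS + P.T @ np.linalg.inv(Omega) @ P
    b = inv_tS @ pi + P.T @ np.linalg.inv(Omega) @ Q
    return np.linalg.solve(A, b)

Sigma = np.array([[0.040, 0.006], [0.006, 0.010]])   # toy covariance
w_mkt = np.array([0.6, 0.4])                         # market-cap weights
P = np.array([[1.0, -1.0]])                          # view: asset 1 beats asset 2 ...
Q = np.array([0.02])                                 # ... by 2%, e.g. from the forecaster
print(black_litterman(Sigma, w_mkt, P, Q))
```

Plugging model forecasts into $Q$ (and forecast confidence into $\Omega$) is exactly where the hybrid SSA/MA-EMD/TCN pipeline feeds the portfolio construction.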
2025-05-02 | A Domain Adaptation of Large Language Models for Classifying Mechanical Assembly Components | Fatemeh Elhambakhsh, Daniele Grandi, Hyunwoong Ko et.al. | 2505.01627 | | | Abstract (click to expand)The conceptual design phase represents a critical early stage in the product development process, where designers generate potential solutions that meet predefined design specifications based on functional requirements. Functional modeling, a foundational aspect of this phase, enables designers to reason about product functions before specific structural details are determined. A widely adopted approach to functional modeling is the Function-Behavior-Structure (FBS) framework, which supports the transformation of functional intent into behavioral and structural descriptions. However, the effectiveness of function-based design is often hindered by the lack of well-structured and comprehensive functional data. This scarcity can negatively impact early design decision-making and hinder the development of accurate behavioral models. Recent advances in Large Language Models (LLMs), such as those based on GPT architectures, offer a promising avenue to address this gap. LLMs have demonstrated significant capabilities in language understanding and natural language processing (NLP), making them suitable for automated classification tasks. This study proposes a novel LLM-based domain adaptation (DA) framework using fine-tuning for the automated classification of mechanical assembly parts' functions. By fine-tuning LLMs on domain-specific datasets, the traditionally manual and subjective process of function annotation can be improved in both accuracy and consistency. A case study demonstrates fine-tuning GPT-3.5 Turbo on data from the Oregon State Design Repository (OSDR), and evaluation on the A Big CAD (ABC) dataset shows that the domain-adapted LLM can generate high-quality functional data, enhancing the semantic representation of mechanical parts and supporting more effective design exploration in early-phase engineering. |
2025-05-02 | Dynamical Update Maps for Particle Flow with Differential Algebra | Simone Servadio et.al. | 2505.01598 | | 8 pages, 9 figures, FUSION 2025 | Abstract (click to expand)Particle Flow Filters estimate the "a posteriori" probability density function (PDF) by moving an ensemble of particles according to the likelihood. Particles are propagated under the system dynamics until a measurement becomes available, at which point each particle evolves according to an additional stochastic differential equation in pseudo-time that updates the distribution following a homotopy transformation. This flow of particles can be represented as a recursive update step of the filter. In this work, we leverage the Differential Algebra (DA) representation of the solution flow of the dynamics to reduce the computational burden of particle flow filters. Thanks to this approximation, both the prediction and the update differential equations are solved in the DA framework, creating two sets of polynomial maps: the first propagates particles forward in time, while the second updates particles, achieving the flow. The final result is a new particle flow filter that rapidly propagates and updates PDFs using mathematics based on deviation vectors. Numerical applications show the benefits of the proposed technique, especially in reducing computational time, so that small systems such as CubeSats can run the filter for attitude determination. |
2025-05-02 | Advances in Particle Flow Filters with Taylor Expansion Series | Simone Servadio et.al. | 2505.01597 | | 8 pages, 6 figures, FUSION 2025 | Abstract (click to expand)Particle Flow Filters perform the measurement update by moving particles to a different location rather than modifying the particles' weights based on the likelihood. Their movement (flow) is dictated by a drift term, which continuously pushes each particle toward the posterior distribution, and a diffusion term, which guarantees the spread of particles. This work presents a novel derivation of these terms based on high-order polynomial expansions, of which the common linearization-based techniques are a simpler special case. Thanks to differential algebra, the high-order particle flow is derived directly on the polynomial representation of the distribution, embedded with differentiation and evaluation. The resulting technique yields two new particle flow filters, which differ in the selection of the expansion center for the Taylor polynomial evaluation. Numerical applications show the improvement gained by the inclusion of high-order terms, especially when comparing performance with the Gromov flow and the "exact" flow. |
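For context on the baseline both particle-flow papers compare against, here is a hedged numpy sketch of the linear-Gaussian "exact" (Daum-Huang) flow, integrating the drift dx/dλ = A(λ)x + b(λ) in pseudo-time with Euler steps; the measurement model, step count, and toy data are illustrative assumptions:

```python
import numpy as np

def exact_flow_update(particles, H, R, z, n_steps=20):
    """Move particles from the prior toward the posterior via the exact flow."""
    x = particles.copy()                          # (N, d) particle ensemble
    xbar = x.mean(axis=0)
    P = np.cov(x.T)                               # empirical prior covariance
    I = np.eye(x.shape[1])
    dlam = 1.0 / n_steps
    lam = 0.0
    for _ in range(n_steps):
        lam += dlam
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T
                                 @ np.linalg.solve(R, z) + A @ xbar)
        x += dlam * (x @ A.T + b)                 # Euler step of dx/dlam = A x + b
    return x

# Toy usage: 2D Gaussian prior, scalar measurement of the first state.
rng = np.random.default_rng(0)
prior = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=1000)
posterior = exact_flow_update(prior, H=np.array([[1.0, 0.0]]),
                              R=np.array([[0.5]]), z=np.array([1.5]))
print(posterior.mean(axis=0))                     # shifted toward the measurement
```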
2025-05-02 | Optimising Kernel-based Multivariate Statistical Process Control | Zina-Sabrina Duma, Victoria Jorry, Tuomas Sihvonen et.al. | 2505.01556 | link | | Abstract (click to expand)Multivariate Statistical Process Control (MSPC) is a framework for monitoring and diagnosing complex processes by analysing the relationships between multiple process variables simultaneously. Kernel MSPC extends the methodology by leveraging kernel functions to capture non-linear relationships in the data, enhancing the process monitoring capabilities. However, the kernel MSPC parameters, such as the kernel type and kernel parameters, are often optimised in the literature in time-consuming and non-procedural manners, such as cross-validation or grid search. In the present paper, we propose optimising the kernel MSPC parameters with Kernel Flows (KF), a recent kernel-learning methodology introduced for Gaussian Process Regression (GPR). Apart from the optimisation technique, the novelty of the study also resides in the use of kernel combinations for learning the optimal kernel type and in the introduction of individual kernel parameters for each variable. The proposed methodology is evaluated on multiple cases from the benchmark Tennessee Eastman Process. The faults are detected in all evaluated cases, including ones not detected in the original study. |
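As a point of reference (not the paper's Kernel Flows procedure), a plain kernel-PCA monitoring statistic of the kind whose kernel type and parameters KF would optimise can be sketched as follows; the RBF kernel, `gamma`, and component count are placeholder choices:

```python
import numpy as np

def rbf_gram(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_monitor(X_train, gamma=0.5, n_comp=3):
    K = rbf_gram(X_train, X_train, gamma)
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                  # centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_comp]
    alpha = vecs[:, idx] / np.sqrt(vals[idx])       # unit-norm feature-space PCs
    scores = Kc @ alpha                             # training scores
    return dict(X=X_train, gamma=gamma, K=K, alpha=alpha,
                var=scores.var(axis=0))

def t2(model, x_new):
    k = rbf_gram(x_new[None, :], model["X"], model["gamma"])[0]
    K = model["K"]
    kc = k - K.mean(axis=0) - k.mean() + K.mean()   # center the test kernel vector
    t = kc @ model["alpha"]
    return float((t ** 2 / model["var"]).sum())     # Hotelling-type T^2

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                   # in-control training data
m = fit_monitor(X)
# A shifted sample typically yields a larger T^2; in practice an SPE/Q
# statistic complements T^2 for faults far from the training manifold.
print(t2(m, X[0]), t2(m, X[0] + np.array([2.0, 0.0, 0.0, 0.0])))
```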
2025-05-02 | SimICD: A Closed-Loop Simulation Framework For ICD Therapy | Hannah Lydon, Milad Kazemi, Martin Bishop et.al. | 2505.01371 | link | Accepted for publication in the 47th annual Engineering in Medicine and Biology Conference (EMBC) | Abstract (click to expand)Virtual studies of ICD behaviour are crucial for testing device functionality in a controlled environment prior to clinical application. Although previous works have shown the viability of using in silico testing for diagnosis, there is a notable gap in available models that can simulate therapy progression decisions during arrhythmic episodes. This work introduces SimICD, a simulation tool which combines virtual ICD logic algorithms with cardiac electrophysiology simulations in a feedback loop, allowing the progression of ICD therapy protocols to be simulated for a range of tachy-arrhythmia episodes. Using a cohort of virtual patients, we demonstrate the ability of SimICD to simulate realistic cardiac signals and ICD responses that align with the logic of real-world devices, facilitating the reprogramming of ICD parameters to adapt to specific episodes. |
2025-05-02 | Reduced-order structure-property linkages for stochastic metamaterials | Hooman Danesh, Maruthi Annamaraju, Tim Brepols et.al. | 2505.01283 | | | Abstract (click to expand)The capabilities of additive manufacturing have facilitated the design and production of mechanical metamaterials with diverse unit cell geometries. Establishing linkages between the vast design space of unit cells and their effective mechanical properties is critical for the efficient design and performance evaluation of such metamaterials. However, physics-based simulations of metamaterial unit cells across the entire design space are computationally expensive, necessitating a materials informatics framework to efficiently capture complex structure-property relationships. In this work, principal component analysis of 2-point correlation functions is performed to extract the salient features from a large dataset of randomly generated 2D metamaterials. Physics-based simulations are performed using a fast Fourier transform (FFT)-based homogenization approach to efficiently compute the homogenized effective elastic stiffness across the extensive unit cell designs. Subsequently, Gaussian process regression is used to generate reduced-order surrogates, mapping unit cell designs to their homogenized effective elastic constant. It is demonstrated that the adopted workflow enables a high-value low-dimensional representation of the voluminous stochastic metamaterial dataset, facilitating the construction of robust structure-property maps. Finally, an uncertainty-based active learning framework is utilized to train a surrogate model with a significantly smaller number of data points compared to the original full dataset. It is shown that a dataset as small as \(0.61\%\) of the entire dataset is sufficient to generate accurate and robust structure-property maps. |
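A compact sketch of the adopted workflow under stated assumptions: binary unit-cell images stand in for the metamaterial dataset, 2-point autocorrelations are computed via FFT, PCA extracts the salient features, and a Gaussian process maps them to a homogenized property. All data here are synthetic placeholders, not FFT-homogenization results:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def two_point_autocorr(img):
    """Periodic 2-point autocorrelation of a binary microstructure via FFT."""
    F = np.fft.fft2(img)
    corr = np.fft.ifft2(F * np.conj(F)).real / img.size
    return np.fft.fftshift(corr)

rng = np.random.default_rng(0)
cells = (rng.random((200, 32, 32)) > 0.5).astype(float)   # placeholder unit cells
C_eff = cells.mean(axis=(1, 2)) + 0.01 * rng.standard_normal(200)  # placeholder target

X = np.stack([two_point_autocorr(c).ravel() for c in cells])
Z = PCA(n_components=10).fit_transform(X)                 # low-dimensional features
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(Z, C_eff)
print(gpr.predict(Z[:3], return_std=True))                # mean and uncertainty
```

The predictive standard deviation returned by the GP is what an uncertainty-based active learning loop, as described in the abstract, would use to pick the next unit cells to simulate.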
2025-05-02 | Design for a Digital Twin in Clinical Patient Care | Anna-Katharina Nitschke, Carlos Brandl, Fabian Egersdörfer et.al. | 2505.01206 | | submitted to npj Digital Medicine | Abstract (click to expand)Digital Twins hold great potential to personalize clinical patient care, provided the concept is translated to meet specific requirements dictated by established clinical workflows. We present a generalizable Digital Twin design combining knowledge graphs and ensemble learning to reflect the entire patient's clinical journey and assist clinicians in their decision-making. Such Digital Twins can be predictive, modular, evolving, informed, interpretable and explainable with applications ranging from oncology to epidemiology. |
2025-05-02 | Transforming physics-informed machine learning to convex optimization | Letian Yi, Siyuan Yang, Ying Cui et.al. | 2505.01047 | link | 41 pages, 17 figures | Abstract (click to expand)Physics-Informed Machine Learning (PIML) offers a powerful paradigm for integrating data with physical laws to address important scientific problems, such as parameter estimation, inference of hidden physics, equation discovery, and state prediction. However, PIML still faces serious optimization challenges that significantly restrict its applications. In this study, we propose a comprehensive framework, referred to as Convex-PIML, that transforms PIML into convex optimization to overcome these limitations. A linear combination of B-splines is used to approximate the data, promoting the convexity of the loss function. By replacing the non-convex components of the loss function with convex approximations, the problem is further converted into a sequence of successively refined approximate convex optimization problems. This conversion allows the use of well-established convex optimization algorithms, which obtain solutions effectively and efficiently. Furthermore, an adaptive knot optimization method based on error estimates is introduced to mitigate the spectral bias issue of PIML, further improving performance. The theoretically grounded framework is tested in scenarios with distinct types of physical priors. The results indicate that the optimization problems are effectively solved in these scenarios, highlighting the potential of the framework for broad applications. |
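The convex data-fitting building block can be illustrated in a few lines: approximating data by a linear combination of B-splines makes the fit a linear least-squares (hence convex) problem in the coefficients. This sketch shows only that sub-step, not the full Convex-PIML framework; knots here are uniform rather than adaptively optimized:

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.sort(rng.random(200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)   # noisy data

k = 3                                               # cubic B-splines
inner = np.linspace(0.0, 1.0, 10)[1:-1]             # interior knots
t = np.r_[np.zeros(k + 1), inner, np.ones(k + 1)]   # clamped knot vector

B = BSpline.design_matrix(x, t, k).toarray()        # (n_samples, n_basis)
c, *_ = np.linalg.lstsq(B, y, rcond=None)           # convex (linear LS) fit
spl = BSpline(t, c, k)
print(np.abs(spl(x) - y).mean())                    # small residual
```

Because the approximant is linear in `c`, any quadratic data-misfit term stays convex; the framework's remaining work is in convexifying the physics terms and refining the knots.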
2025-05-01 | Leveraging Partial SMILES Validation Scheme for Enhanced Drug Design in Reinforcement Learning Frameworks | Xinyu Wang, Jinbo Bi, Minghu Song et.al. | 2505.00530 | | 17 pages, 5 main figures, 2 appendix figures. Submitted to ICML 2025 | Abstract (click to expand)SMILES-based molecule generation has emerged as a powerful approach in drug discovery. Deep reinforcement learning (RL) using large language models (LLMs) has been incorporated into the molecule generation process to achieve high matching scores in terms of the likelihood of desired molecule candidates. However, a critical challenge in this approach is catastrophic forgetting during the RL phase, where knowledge such as molecule validity, which often exceeds 99\% during pretraining, significantly deteriorates. Current RL algorithms applied in drug discovery, such as REINVENT, use prior models as anchors to retain pretraining knowledge, but these methods lack robust exploration mechanisms. To address these issues, we propose Partial SMILES Validation-PPO (PSV-PPO), a novel RL algorithm that incorporates real-time partial SMILES validation to prevent catastrophic forgetting while encouraging exploration. Unlike traditional RL approaches that validate molecule structures only after generating entire sequences, PSV-PPO performs stepwise validation at each auto-regressive step, evaluating not only the selected token candidate but also all potential branches stemming from the prior partial sequence. This enables early detection of invalid partial SMILES across all potential paths. As a result, PSV-PPO maintains high validity rates even during aggressive exploration of the vast chemical space. Our experiments on the PMO and GuacaMol benchmark datasets demonstrate that PSV-PPO significantly reduces the number of invalid generated structures while maintaining competitive exploration and optimization performance. While our work primarily focuses on maintaining validity, the PSV-PPO framework can be extended in future research to incorporate additional forms of valuable domain knowledge, further enhancing reinforcement learning applications in drug discovery. |
2025-05-04 | Evaluation of Thermal Control Based on Spatial Thermal Comfort with Reconstructed Environmental Data | Youngkyu Kim, Byounghyun Yoo, Ji Young Yun et.al. | 2505.00468 | | | Abstract (click to expand)Achieving thermal comfort while maintaining energy efficiency is a critical objective in building system control. Conventional thermal comfort models, such as the Predicted Mean Vote (PMV), rely on both environmental and personal variables. However, the use of fixed-location sensors limits the ability to capture spatial variability, which reduces the accuracy of occupant-specific comfort estimation. To address this limitation, this study proposes a new PMV estimation method that incorporates spatial environmental data reconstructed using the Gappy Proper Orthogonal Decomposition (Gappy POD) algorithm. In addition, a group PMV-based control framework is developed to account for the thermal comfort of multiple occupants. The Gappy POD method enables fast and accurate reconstruction of indoor temperature fields from sparse sensor measurements. Using these reconstructed fields and occupant location data, spatially resolved PMV values are calculated. Group-level thermal conditions are then derived through statistical aggregation methods and used to control indoor temperature in a multi-occupant living lab environment. Experimental results show that the Gappy POD algorithm achieves an average relative error below 3\% in temperature reconstruction. PMV distributions varied by up to 1.26 scale units depending on occupant location. Moreover, thermal satisfaction outcomes varied depending on the group PMV method employed. These findings underscore the importance of adaptive thermal control strategies that incorporate both spatial and individual variability, offering valuable insights for future occupant-centric building operations. |
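For reference, the Gappy POD reconstruction at the core of this pipeline reduces to a least-squares fit of POD coefficients on the observed sensor rows; a minimal sketch with synthetic low-rank fields (mask density and rank are illustrative):

```python
import numpy as np

def gappy_pod_reconstruct(U, obs, y_obs):
    """Least-squares POD coefficients from partial observations.

    U: (n_points, r) POD basis; obs: boolean sensor mask of length n_points;
    y_obs: measured values at the True entries of obs.
    """
    a, *_ = np.linalg.lstsq(U[obs], y_obs, rcond=None)
    return U @ a                                   # reconstructed full field

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))  # rank-5 fields
U = np.linalg.svd(X, full_matrices=False)[0][:, :5]                # POD basis
obs = rng.random(500) < 0.05                                       # ~25 sparse sensors
field = X[:, 0]
rec = gappy_pod_reconstruct(U, obs, field[obs])
print(np.abs(rec - field).max())        # near machine precision for in-span fields
```

The requirement is simply that the number of sensors is at least the POD rank and that the sensor rows of `U` are well conditioned; real temperature fields incur an additional projection error absent from this toy example.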
2025-05-01 | Subspace-Distance-Enabled Active Learning for Efficient Data-Driven Model Reduction of Parametric Dynamical Systems | Harshit Kapadia, Peter Benner, Lihong Feng et.al. | 2505.00460 | | 31 pages, 10 figures, 4 tables | Abstract (click to expand)In situations where the solution of a high-fidelity dynamical system needs to be evaluated repeatedly, over a vast pool of parametric configurations and in the absence of access to the underlying governing equations, data-driven model reduction techniques are preferable. We propose a novel active learning approach to build a parametric data-driven reduced-order model (ROM) by greedily picking the most important parameter samples from the parameter domain. As a result, during the ROM construction phase, the number of high-fidelity solutions dynamically grows in a principled fashion. The high-fidelity solution snapshots are expressed in several parameter-specific linear subspaces, with the help of proper orthogonal decomposition (POD), and the relative distance between these subspaces is used as a guiding mechanism to perform active learning. To achieve this, we provide a distance measure to evaluate the similarity between pairs of linear subspaces with different dimensions, and we show that this distance measure is a metric. The usability of the proposed subspace-distance-enabled active learning (SDE-AL) framework is demonstrated by augmenting two existing non-intrusive reduced-order modeling approaches, and providing their active-learning-driven (ActLearn) extensions, namely, SDE-ActLearn-POD-KSNN, and SDE-ActLearn-POD-NN. Furthermore, we report positive results for two parametric physical models, highlighting the efficiency of the proposed SDE-AL approach. |
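One ingredient can be sketched directly: a distance between POD subspaces of different dimensions via principal angles. SciPy's `subspace_angles` handles unequal column counts; the chordal-type distance below is an illustrative choice and may differ in detail from the paper's metric:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
S1 = rng.standard_normal((100, 30))      # snapshots at parameter mu_1
S2 = rng.standard_normal((100, 30))      # snapshots at parameter mu_2
U1 = np.linalg.svd(S1, full_matrices=False)[0][:, :5]    # 5-dim POD subspace
U2 = np.linalg.svd(S2, full_matrices=False)[0][:, :7]    # 7-dim POD subspace

theta = subspace_angles(U1, U2)          # principal angles in radians
dist = np.linalg.norm(np.sin(theta))     # a chordal-type subspace distance
print(dist)
```

In an SDE-AL-style loop, such pairwise distances between parameter-specific subspaces would drive the greedy choice of where the next high-fidelity solve is most informative.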
2025-04-30 | Generative Machine Learning in Adaptive Control of Dynamic Manufacturing Processes: A Review | Suk Ki Lee, Hyunwoong Ko et.al. | 2505.00210 | | 12 pages, 1 figure, 1 table. This paper has been accepted for publication in the proceedings of ASME IDETC-CIE 2025 | Abstract (click to expand)Dynamic manufacturing processes exhibit complex characteristics defined by time-varying parameters, nonlinear behaviors, and uncertainties. These characteristics require sophisticated in-situ monitoring techniques utilizing multimodal sensor data and adaptive control systems that can respond to real-time feedback while maintaining product quality. Recently, generative machine learning (ML) has emerged as a powerful tool for modeling complex distributions and generating synthetic data while handling these manufacturing uncertainties. However, adopting these generative technologies in dynamic manufacturing systems lacks a functional control-oriented perspective to translate their probabilistic understanding into actionable process controls while respecting constraints. This review presents a functional classification of Prediction-Based, Direct Policy, Quality Inference, and Knowledge-Integrated approaches, offering a perspective for understanding existing ML-enhanced control systems and incorporating generative ML. The analysis of generative ML architectures within this framework demonstrates control-relevant properties and potential to extend current ML-enhanced approaches where conventional methods prove insufficient. We show generative ML's potential for manufacturing control through decision-making applications, process guidance, simulation, and digital twins, while identifying critical research gaps: separation between generation and control functions, insufficient physical understanding of manufacturing phenomena, and challenges adapting models from other domains. To address these challenges, we propose future research directions aimed at developing integrated frameworks that combine generative ML and control technologies to address the dynamic complexities of modern manufacturing systems. |
2025-04-30 | Generative Multimodal Multiscale Data Fusion for Digital Twins in Aerosol Jet Electronics Printing | Fatemeh Elhambakhsh, Suk Ki Lee, Hyunwoong Ko et.al. | 2505.00176 | | | Abstract (click to expand)The rising demand for high-value electronics necessitates advanced manufacturing techniques capable of meeting stringent specifications for precise, complex, and compact devices, driving the shift toward innovative additive manufacturing (AM) solutions. Aerosol Jet Printing (AJP) is a versatile AM technique that utilizes aerosolized functional materials to accurately print intricate patterns onto diverse substrates. Machine learning (ML)- based Process-Structure-Property (PSP) modeling is essential for enhancing AJP manufacturing, as it quantitatively connects process parameters, structural features, and resulting material properties. However, current ML approaches for modeling PSP relationships in AJP face significant limitations in handling multimodal and multiscale data, underscoring a critical need for generative methods capable of comprehensive analysis through multimodal and multiscale fusion. To address this challenge, this study introduces a novel generative modeling methodology leveraging diffusion models for PSP data fusion in AJP. The proposed method integrates multimodal, multiscale PSP features in two phases: (1) registering the features, and (2) fusing them to generate causal relationships between PSP attributes. A case study demonstrates the registration and fusion of optical microscopy (OM) images and confocal profilometry (CP) data from AJP, along with the fine-tuning of the fusion step. The results effectively capture complex PSP relationships, offering deeper insights into digital twins of dynamic manufacturing systems. |
2025-04-30 | Generative AI in Financial Institution: A Global Survey of Opportunities, Threats, and Regulation | Bikash Saha, Nanda Rani, Sandeep Kumar Shukla et.al. | 2504.21574 | | | Abstract (click to expand)Generative Artificial Intelligence (GenAI) is rapidly reshaping the global financial landscape, offering unprecedented opportunities to enhance customer engagement, automate complex workflows, and extract actionable insights from vast financial data. This survey provides an overview of GenAI adoption across the financial ecosystem, examining how banks, insurers, asset managers, and fintech startups worldwide are integrating large language models and other generative tools into their operations. From AI-powered virtual assistants and personalized financial advisory to fraud detection and compliance automation, GenAI is driving innovation across functions. However, this transformation comes with significant cybersecurity and ethical risks. We discuss emerging threats such as AI-generated phishing, deepfake-enabled fraud, and adversarial attacks on AI systems, as well as concerns around bias, opacity, and data misuse. The evolving global regulatory landscape is explored in depth, including initiatives by major financial regulators and international efforts to develop risk-based AI governance. Finally, we propose best practices for secure and responsible adoption - including explainability techniques, adversarial testing, auditability, and human oversight. Drawing from academic literature, industry case studies, and policy frameworks, this chapter offers a perspective on how the financial sector can harness GenAI's transformative potential while navigating the complex risks it introduces. |
2025-04-30 | Security Analysis and Implementation of Cryptocurrency Systems on Blockchain 2.0 | Pengfei Gao, Dechao Kong, Xiaoqi Li et.al. | 2504.21367 | | | Abstract (click to expand)Blockchain technology has set off a wave of decentralization around the world since its birth. The trust system constructed by blockchain technology, based on cryptographic algorithms and computing power, provides a practical and powerful solution to the trust problem in human society. To make the characteristics of blockchain more convenient to use and to build applications on top of it, smart contracts emerged: by defining contracts that execute automatically upon triggers, the application space of blockchain is expanded, laying the foundation for its rapid development. This is blockchain 2.0. However, the programmability of smart contracts also introduces vulnerabilities. To address the insufficient security guarantees of high-value application networks running on blockchain 2.0 and smart contracts, this article takes Ethereum as a representative to introduce the technical details of blockchain 2.0 and the operating principles of contract virtual machines, and explains how cryptocurrencies based on blockchain 2.0 are constructed and operated. Common security problems and solutions are also discussed. Based on relevant research and on-chain practice, this paper provides a complete and comprehensive perspective for understanding cryptocurrency technology based on blockchain 2.0 and a reference for building more secure cryptocurrency contracts. |
2025-04-30 | Multi-level datasets training method in Physics-Informed Neural Networks | Yao-Hsuan Tsai, Hsiao-Tung Juan, Pao-Hsiung Chiu et.al. | 2504.21328 | | 33 pages, 12 figures | Abstract (click to expand)Physics-Informed Neural Networks (PINNs) have emerged as a promising methodology for solving PDEs, gaining significant attention in computer science and various physics-related fields. Despite their demonstrated ability to incorporate physical laws in versatile applications, PINNs still struggle with problems that are stiff to solve and/or have high-frequency components in the solutions, leading to accuracy and convergence issues. These issues may not only increase computational costs but also cause accuracy loss or solution divergence. In this study, an alternative approach is proposed to mitigate these problems. Inspired by the multi-grid method in the CFD community, the underlying idea is to efficiently remove errors at different frequencies by training with different levels of training samples, offering a simple way to improve training accuracy without spending time fine-tuning neural network architectures, loss weights, or hyperparameters. To demonstrate the efficacy of the approach, we first investigate a canonical 1D ODE with a high-frequency component and a 2D convection-diffusion equation with a V-cycle training strategy. Finally, the method is applied to the classical benchmark problem of steady lid-driven cavity flow at different Reynolds numbers, to investigate its applicability and efficacy for problems involving multiple modes of high and low frequency. Across the various training sequence modes, predictions improve in accuracy by 30% to 60%. We also investigate the synergy between the current method and transfer learning techniques for more challenging problems (i.e., higher Re). The results reveal that the framework produces good predictions even for Re=5000, demonstrating its ability to solve complex high-frequency PDEs. |
2025-04-30 | Redundancy Analysis and Mitigation for Machine Learning-Based Process Monitoring of Additive Manufacturing | Jiarui Xie, Yaoyao Fiona Zhao et.al. | 2504.21317 | | 13 pages, 5 figures, 2 tables. Accepted by IDETC-CIE 2025 | Abstract (click to expand)The deployment of machine learning (ML)-based process monitoring systems has significantly advanced additive manufacturing (AM) by enabling real-time defect detection, quality assessment, and process optimization. However, redundancy is a critical yet often overlooked challenge in the deployment and operation of ML-based AM process monitoring systems. Excessive redundancy leads to increased equipment costs, compromised model performance, and high computational requirements, posing barriers to industrial adoption. However, existing research lacks a unified definition of redundancy and a systematic framework for its evaluation and mitigation. This paper defines redundancy in ML-based AM process monitoring and categorizes it into sample-level, feature-level, and model-level redundancy. A comprehensive multi-level redundancy mitigation (MLRM) framework is proposed, incorporating advanced methods such as data registration, downscaling, cross-modality knowledge transfer, and model pruning to systematically reduce redundancy while improving model performance. The framework is validated through an ML-based in-situ defect detection case study for directed energy deposition (DED), demonstrating a 91% reduction in latency, a 47% decrease in error rate, and a 99.4% reduction in storage requirements. Additionally, the proposed approach lowers sensor costs and energy consumption, enabling a lightweight, cost-effective, and scalable monitoring system. By defining redundancy and introducing a structured mitigation framework, this study establishes redundancy analysis and mitigation as a key enabler of efficient ML-based process monitoring in production environments. |
2025-04-29 | Simulating Heterogeneity within Elastic and Inelastic Discrete Mechanical Models | Jan Raisinger, Qiwei Zhang, John E. Bolander et.al. | 2504.20861 | | 24 pages, 11 figures | Abstract (click to expand)The study investigates the elastic and fracture behaviors of discrete, elastically homogeneous models of heterogeneous media. The homogeneity is accomplished either by volumetric-deviatoric decomposition of constitutive function or by an auxiliary stress homogenization method. The elastic parameters of the homogenized material models are randomly varied in space to introduce heterogeneity independently of the geometric properties of the discrete model. Several forms of randomization are investigated using statistical properties of nodal stress oscillations in periodic representative volume elements (RVEs). It is found that the stress oscillations present in discrete models built on heterogeneous geometric structures with standard constitutive models cannot be replicated by randomization of the elastically homogeneous discrete system. The marginal distributions as well as dependencies between stress tensor components cannot be adequately matched. With respect to quasi-brittle fracture behavior, the macroscopic response of the different models is studied for the load case of uniaxial tension. The elastically homogenized material provides higher peak stress occurring at lower strain levels and a steeper softening phase, compared to the standard material. Randomization of the elastic material parameters, as well as adjustment of inelastic material parameters, brings the macroscopic response of the homogenized material close to that of the standard material, although the damage distribution prior to the strain localization differs. These findings provide insight into the potential for controlled, random assignment of heterogeneity in homogeneous models, using physically-based discretizations of material structure with standard constitutive models for comparison. |
2025-04-29 | DiffLiB: High-fidelity differentiable modeling of lithium-ion batteries and efficient gradient-based parameter identification | Weipeng Xu, Kaiqi Yang, Yuzhi Zhang et.al. | 2504.20674 | link | | Abstract (click to expand)The physics-based Doyle-Fuller-Newman (DFN) model, widely adopted for its precise electrochemical modeling, stands out among various simulation models of lithium-ion batteries (LIBs). Although the DFN model is powerful in forward predictive analysis, the inverse identification of its model parameters has remained a long-standing challenge. The numerous unknown parameters associated with the nonlinear, time-dependent, and multi-scale DFN model are extremely difficult to be determined accurately and efficiently, hindering the practical use of such battery simulation models in industrial applications. To tackle this challenge, we introduce DiffLiB, a high-fidelity finite-element-based LIB simulation framework, equipped with advanced differentiable programming techniques so that efficient gradient-based inverse parameter identification is enabled. Customized automatic differentiation rules are defined by identifying the VJP (vector-Jacobian product) structure in the chain rule and implemented using adjoint-based implicit differentiation methods. Four numerical examples, including both 2D and 3D forward predictions and inverse parameter identification, are presented to validate the accuracy and computational efficiency of DiffLiB. Benchmarking against COMSOL demonstrates excellent agreement in forward predictions, with terminal voltage discrepancies maintaining a root-mean-square error (RMSE) below 2 mV across all test conditions. In parameter identification tasks using experimentally measured voltage data, the proposed gradient-based optimization scheme achieves superior computational performance, with 96% fewer forward predictions and 72% less computational time compared with gradient-free approaches. These results demonstrate that DiffLiB is a versatile and powerful computational framework for the development of advanced LIBs. |
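A hedged toy sketch of the mechanism described, adjoint-based implicit differentiation via a custom VJP (shown here in JAX purely as an illustration vehicle): for a residual F(u, θ) = 0 solved by Newton iterations, the reverse-mode rule solves one adjoint system instead of differentiating through the solver loop. The scalar residual below is illustrative, not a battery model:

```python
import jax

def residual(u, theta):                 # F(u, theta) = 0 implicitly defines u(theta)
    return u ** 3 + theta * u - 1.0

@jax.custom_vjp
def solve(theta):
    u = 0.5
    for _ in range(30):                 # Newton iterations (never differentiated)
        u = u - residual(u, theta) / jax.grad(residual, 0)(u, theta)
    return u

def solve_fwd(theta):
    u = solve(theta)
    return u, (u, theta)                # save primal solution for the backward pass

def solve_bwd(res, u_bar):
    u, theta = res
    dF_du = jax.grad(residual, 0)(u, theta)
    dF_dth = jax.grad(residual, 1)(u, theta)
    lam = u_bar / dF_du                 # adjoint solve (a scalar division here)
    return (-lam * dF_dth,)             # implicit-function-theorem gradient

solve.defvjp(solve_fwd, solve_bwd)
print(jax.grad(solve)(2.0))             # matches du/dtheta = -u / (3u^2 + theta)
```

For a PDE-scale residual the scalar division becomes a linear adjoint solve with the Jacobian transpose, which is why this pattern keeps memory and runtime independent of the number of solver iterations.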
2025-04-29 | Faster Random Walk-based Capacitance Extraction with Generalized Antithetic Sampling | Periklis Liaskovitis, Marios Visvardis, Efthymios Efstathiou et.al. | 2504.20586 | | | Abstract (click to expand)Floating random walk-based capacitance extraction has emerged in recent years as a tried and true approach for extracting parasitic capacitance in very large scale integrated circuits. Being a Monte Carlo method, its performance depends on the variance of the sampled quantities, and variance reduction methods are crucial for meeting the challenges posed by ever denser process technologies and layout-dependent effects. In this work, we present a novel, universal variance reduction method for floating random walk-based capacitance extraction, which is conceptually simple, highly efficient, and provably reduces variance in all extractions, especially when layout-dependent effects are present. It is complementary to existing mathematical formulations for variance reduction, and its performance gains come on top of theirs. Numerical experiments demonstrate substantial gains of up to 30% in the number of walks required, and even more in actual extraction time, compared to the best previously proposed variance reduction approaches for the floating random walk. |
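The underlying principle of antithetic sampling is easy to demonstrate generically (this is not the paper's generalized scheme): pairing each uniform sample u with 1 - u produces negatively correlated evaluations and, for monotone integrands, a lower-variance estimator at the same evaluation cost:

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda u: np.exp(u)                  # monotone integrand on [0, 1]
n = 100_000

u = rng.random(n)
plain = f(u)                             # standard Monte Carlo

u2 = rng.random(n // 2)
anti = 0.5 * (f(u2) + f(1.0 - u2))       # each pair costs two evaluations

print(plain.mean(), plain.var() / n)         # estimate and estimator variance
print(anti.mean(), anti.var() / (n // 2))    # same mean, markedly lower variance
```

In the floating random walk, the analogous pairing is applied to the stochastic choices made along each walk, so the variance reduction compounds over the many walks of a full-chip extraction.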
2025-04-28 | Multiscale modelling of thermally stressed superelastic polyimide | Jerome Samuel S, Puneet Kumar Patra, Md Rushdie Ibne Islam et.al. | 2504.20123 | | 25 pages, 17 figures | Abstract (click to expand)Many thermo-mechanical processes, such as thermal expansion and stress relaxation, originate at the atomistic scale. We develop a sequential multiscale approach to explore these effects in thermally stressed superelastic polyimide. The continuum-scale smoothed particle hydrodynamics (SPH) model is coupled with atomistic molecular dynamics (MD) through constitutive modelling, where thermo-mechanical properties and equations of state are derived from MD simulations. The results are verified through benchmark problems of heat transfer. Finally, we analyse the insulating capabilities of superelastic polyimide by simulating the thermal response of an aluminium plate. The results show a considerable reduction in the development of thermal stress, strain, and temperature fields in the aluminium plate when superelastic polyimide is used as an insulator. The present work demonstrates the effectiveness of the multiscale method in capturing thermo-mechanical interactions in superelastic polyimide. |
2025-04-28 | Stochastic Subspace via Probabilistic Principal Component Analysis for Characterizing Model Error | Akash Yadav, Ruda Zhang et.al. | 2504.19963 | link | | Abstract (click to expand)This paper proposes a probabilistic model of subspaces based on the probabilistic principal component analysis (PCA). Given a sample of vectors in the embedding space -- commonly known as a snapshot matrix -- this method uses quantities derived from the probabilistic PCA to construct distributions of the sample matrix, as well as the principal subspaces. It is applicable to projection-based reduced-order modeling methods, such as proper orthogonal decomposition and related model reduction methods. The stochastic subspace thus constructed can be used, for example, to characterize model-form uncertainty in computational mechanics. The proposed method has multiple desirable properties: (1) it is naturally justified by the probabilistic PCA and has analytic forms for the induced random matrix models; (2) it satisfies linear constraints, such as boundary conditions of all kinds, by default; (3) it has only one hyperparameter, which significantly simplifies training; and (4) its algorithm is very easy to implement. We compare the proposed method with existing approaches in a low-dimensional visualization example and a parametric static problem, and demonstrate its performance in a dynamics model of a space structure. |
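A simulation-based sketch of the idea, assuming the standard Tipping-Bishop probabilistic PCA estimates: fit PPCA to a snapshot matrix, draw random snapshot matrices from the generative model, and take their leading left singular vectors as subspace samples. The paper's construction is analytic with closed-form random matrix models; this is only the Monte Carlo analogue:

```python
import numpy as np

def ppca_fit(X, r):
    """X: (d, n) snapshot matrix; returns mean, loading matrix W, noise sigma^2."""
    mu = X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
    lam = s ** 2 / X.shape[1]                     # sample eigenvalues
    sigma2 = lam[r:].mean()                       # noise floor (discarded modes)
    W = U[:, :r] * np.sqrt(np.maximum(lam[:r] - sigma2, 0.0))
    return mu, W, sigma2

def sample_subspace(mu, W, sigma2, n, rng):
    """Draw one random snapshot matrix from the PPCA model and return its POD basis."""
    d, r = W.shape
    Xs = mu + W @ rng.standard_normal((r, n)) + np.sqrt(sigma2) * rng.standard_normal((d, n))
    Xc = Xs - Xs.mean(axis=1, keepdims=True)
    return np.linalg.svd(Xc, full_matrices=False)[0][:, :r]

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 200))   # (d, n) snapshots
mu, W, s2 = ppca_fit(X, r=5)
subspaces = [sample_subspace(mu, W, s2, 200, rng) for _ in range(10)]
print(subspaces[0].shape)                          # (50, 5) random subspace sample
```

The spread of these sampled subspaces around the nominal POD basis is what can then be propagated through a reduced-order model to characterize model-form uncertainty.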
2025-04-26 | Spreading of highly cohesive metal powders with transverse oscillation kinematics | Reimar Weissbach, Garrett Adams, Patrick M. Praegla et.al. | 2504.18981 | | | Abstract (click to expand)Powder bed additive manufacturing processes such as laser powder bed fusion (LPBF) or binder jetting (BJ) benefit from using fine (D50 \(\leq20~\mu m\) ) powders. However, the increasing level of cohesion with decreasing particle size makes spreading a uniform and continuous layer challenging. As a result, LPBF typically employs a coarser size distribution, and rotating roller mechanisms are used in BJ machines, that can create wave-like surface profiles due to roller run-out. In this work, a transverse oscillation kinematic for powder spreading is proposed, explored computationally, and validated experimentally. Simulations are performed using an integrated discrete element-finite element (DEM-FEM) framework and predict that transverse oscillation of a non-rotating roller facilitates the spreading of dense powder layers (beyond 50% packing fraction) with a high level of robustness to kinematic parameters. The experimental study utilizes a custom-built mechanized powder spreading testbed and X-ray transmission imaging for the analysis of spread powder layers. Experimental results generally validate the computational results, however, also exhibit parasitic layer cracking. For transverse oscillation frequencies above 200 Hz, powder layers of high packing fraction (between 50-60%) were formed, and for increased layer thicknesses, highly uniform and continuous layers were deposited. Statistical analysis of the experimental powder layer morphology as a function of kinematic spreading parameters revealed that an increasing transverse surface velocity improves layer uniformity and reduces cracking defects. This suggests that with minor improvements to the machine design, the proposed transverse oscillation kinematic has the potential to result in thin and consistently uniform powder layers of highly cohesive powder. |
2025-04-26 | Scientific Open-Source Software Is Less Likely to Become Abandoned Than One Might Think! Lessons from Curating a Catalog of Maintained Scientific Software | Addi Malviya Thakur, Reed Milewicz, Mahmoud Jahanshahi et.al. | 2504.18971 | | | Abstract (click to expand)Scientific software is essential to scientific innovation and in many ways it is distinct from other types of software. Abandoned (or unmaintained), buggy, and hard-to-use software, a perception often associated with scientific software, can hinder scientific progress, yet, in contrast to other types of software, its longevity is poorly understood. Existing data curation efforts are fragmented by science domain and/or are small in scale and lack key attributes. We use large language models to classify public software repositories in World of Code into distinct scientific domains and layers of the software stack, curating a large and diverse collection of over 18,000 scientific software projects. Using this data, we estimate survival models to understand how the domain, infrastructural layer, and other attributes of scientific software affect its longevity. We further obtain a matched sample of non-scientific software repositories and investigate the differences. We find that infrastructural layers, downstream dependencies, mentions of publications, and participants from government are associated with a longer lifespan, while newer projects with participants from academia have shorter lifespans. Against common expectations, scientific projects have a longer lifetime than matched non-scientific open-source software projects. We expect our curated attribute-rich collection to support future research on scientific software and provide insights that may help extend longevity of both scientific and other projects. |
2025-04-30 | 4DGS-CC: A Contextual Coding Framework for 4D Gaussian Splatting Data Compression | Zicong Chen, Zhenghao Chen, Wei Jiang et.al. | 2504.18925 | | | Abstract (click to expand)Storage is a significant challenge in reconstructing dynamic scenes with 4D Gaussian Splatting (4DGS) data. In this work, we introduce 4DGS-CC, a contextual coding framework that compresses 4DGS data to meet specific storage constraints. Building upon the established deformable 3D Gaussian Splatting (3DGS) method, our approach decomposes 4DGS data into 4D neural voxels and a canonical 3DGS component, which are then compressed using Neural Voxel Contextual Coding (NVCC) and Vector Quantization Contextual Coding (VQCC), respectively. Specifically, we first decompose the 4D neural voxels into distinct quantized features by separating the temporal and spatial dimensions. To losslessly compress each quantized feature, we leverage the previously compressed features from the temporal and spatial dimensions as priors and apply NVCC to generate the spatiotemporal context for contextual coding. Next, we employ a codebook to store spherical harmonics information from canonical 3DGS as quantized vectors, which are then losslessly compressed by using VQCC with the auxiliary learned hyperpriors for contextual coding, thereby reducing redundancy within the codebook. By integrating NVCC and VQCC, our contextual coding framework, 4DGS-CC, enables multi-rate 4DGS data compression tailored to specific storage requirements. Extensive experiments on three 4DGS data compression benchmarks demonstrate that our method achieves an average storage reduction of approximately 12 times while maintaining rendering fidelity compared to our baseline 4DGS approach. |
2025-04-24 | QuantBench: Benchmarking AI Methods for Quantitative Investment | Saizhuo Wang, Hao Kong, Jiadong Guo et.al. | 2504.18600 | | | Abstract (click to expand)The field of artificial intelligence (AI) in quantitative investment has seen significant advancements, yet it lacks a standardized benchmark aligned with industry practices. This gap hinders research progress and limits the practical application of academic innovations. We present QuantBench, an industrial-grade benchmark platform designed to address this critical need. QuantBench offers three key strengths: (1) standardization that aligns with quantitative investment industry practices, (2) flexibility to integrate various AI algorithms, and (3) full-pipeline coverage of the entire quantitative investment process. Our empirical studies using QuantBench reveal some critical research directions, including the need for continual learning to address distribution shifts, improved methods for modeling relational financial data, and more robust approaches to mitigate overfitting in low signal-to-noise environments. By providing a common ground for evaluation and fostering collaboration between researchers and practitioners, QuantBench aims to accelerate progress in AI for quantitative investment, similar to the impact of benchmark platforms in computer vision and natural language processing. |
2025-04-25 | Discovering Governing Equations of Geomagnetic Storm Dynamics with Symbolic Regression | Stefano Markidis, Jonah Ekelund, Luca Pennati et.al. | 2504.18461 | | Accepted for publication in the 25th International Conference on Computational Science proceedings | Abstract (click to expand)Geomagnetic storms are large-scale disturbances of the Earth's magnetosphere driven by solar wind interactions, posing significant risks to space-based and ground-based infrastructure. The Disturbance Storm Time (Dst) index quantifies geomagnetic storm intensity by measuring global magnetic field variations. This study applies symbolic regression to derive data-driven equations describing the temporal evolution of the Dst index. We use historical data from the NASA OMNIweb database, including solar wind density, bulk velocity, convective electric field, dynamic pressure, and magnetic pressure. The PySR framework, an evolutionary algorithm-based symbolic regression library, is used to identify mathematical expressions linking dDst/dt to key solar wind parameters. The resulting models include a hierarchy of complexity levels and enable a comparison with well-established empirical models such as the Burton-McPherron-Russell and O'Brien-McPherron models. The best-performing symbolic regression models demonstrate superior accuracy in most cases, particularly during moderate geomagnetic storms, while maintaining physical interpretability. Performance evaluation on historical storm events includes the 2003 Halloween Storm, the 2015 St. Patrick's Day Storm, and a 2017 moderate storm. The results provide interpretable, closed-form expressions that capture nonlinear dependencies and thresholding effects in Dst evolution. |
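The symbolic-regression step can be sketched with the PySR API (which requires a working Julia backend); the placeholder arrays, operator set, and complexity cap below are illustrative rather than the paper's exact configuration:

```python
import numpy as np
from pysr import PySRRegressor

# X: solar-wind drivers (density, bulk velocity, electric field, pressures);
# y: finite-difference estimate of dDst/dt. Both are random placeholders here,
# standing in for the NASA OMNIweb time series used in the paper.
X = np.random.rand(1000, 5)
y = np.random.rand(1000)

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "abs"],
    maxsize=25,                        # caps expression complexity (interpretability)
)
model.fit(X, y)
print(model.sympy())                   # best closed-form expression found
```

The `maxsize` cap is what produces the "hierarchy of complexity levels" the abstract mentions: PySR keeps a Pareto front of accuracy versus expression size, from which simple, physically interpretable candidates can be read off.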
2025-04-25 | A Composable Game-Theoretic Framework for Blockchains | Zeta Avarikioti, Georg Fuchsbauer, Pim Keer et.al. | 2504.18214 | | 19 pages (12 for main paper), 5 figures | Abstract (click to expand)Blockchains rely on economic incentives to ensure secure and decentralised operation, making incentive compatibility a core design concern. However, protocols are rarely deployed in isolation. Applications interact with the underlying consensus and network layers, and multiple protocols may run concurrently on the same chain. These interactions give rise to complex incentive dynamics that traditional, isolated analyses often fail to capture. We propose the first compositional game-theoretic framework for blockchain protocols. Our model represents blockchain protocols as interacting games across layers -- application, network, and consensus. It enables formal reasoning about incentive compatibility under composition by introducing two key abstractions: the cross-layer game, which models how strategies in one layer influence others, and cross-application composition, which captures how application protocols interact concurrently through shared infrastructure. We illustrate our framework through case studies on HTLCs, Layer-2 protocols, and MEV, showing how compositional analysis reveals subtle incentive vulnerabilities and supports modular security proofs. |
2025-04-25 | A Machine Learning Approach For Bitcoin Forecasting | Stefano Sossi-Rojas, Gissel Velarde, Damian Zieba et.al. | 2504.18206 | | 15 pages | Abstract (click to expand)Bitcoin is one of the cryptocurrencies that has gained popularity in recent years. Previous studies have shown that the closing price alone is not enough to forecast stock market series. We introduce a new set of time series and demonstrate that a subset of them is necessary to improve directional accuracy based on a machine learning ensemble. In our experiments, we study which time series and machine learning algorithms deliver the best results. We find that the most relevant time series for improving directional accuracy are Open, High, and Low, with the largest contribution coming from Low in combination with an ensemble of a Gated Recurrent Unit network and a baseline forecast. The relevance of other Bitcoin-related features that are not price-related is negligible. The proposed method delivers performance similar to the state-of-the-art in terms of directional accuracy. |
2025-04-25 | SMARTFinRAG: Interactive Modularized Financial RAG Benchmark | Yiwei Zha et.al. | 2504.18024 | link | For open source github repo, see https://github.com/JonathanZha47/SMARTFinRAG | Abstract (click to expand)Financial sectors are rapidly adopting language model technologies, yet evaluating specialized RAG systems in this domain remains challenging. This paper introduces SMARTFinRAG, addressing three critical gaps in financial RAG assessment: (1) a fully modular architecture where components can be dynamically interchanged during runtime; (2) a document-centric evaluation paradigm generating domain-specific QA pairs from newly ingested financial documents; and (3) an intuitive interface bridging research-implementation divides. Our evaluation quantifies both retrieval efficacy and response quality, revealing significant performance variations across configurations. The platform's open-source architecture supports transparent, reproducible research while addressing practical deployment challenges faced by financial institutions implementing RAG systems. |
2025-04-24 | EAQGA: A Quantum-Enhanced Genetic Algorithm with Novel Entanglement-Aware Crossovers | Mohammad Kashfi Haghighi, Matthieu Fortin-Deschênes, Christophe Pere et.al. | 2504.17923 | | This work has been submitted to the IEEE (QCE25) for possible publication | Abstract (click to expand)Genetic algorithms are highly effective optimization techniques for many computationally challenging problems, including combinatorial optimization tasks like portfolio optimization. Quantum computing has also shown potential in addressing these complex challenges. Combining these approaches, quantum genetic algorithms leverage the principles of superposition and entanglement to enhance the performance of classical genetic algorithms. In this work, we propose a novel quantum genetic algorithm introducing an innovative crossover strategy to generate quantum circuits from a binary solution. We incorporate a heuristic method to encode entanglement patterns from parent solutions into circuits for the next generation. Our algorithm advances quantum genetic algorithms by utilizing a limited number of entanglements, enabling efficient exploration of optimal solutions without significantly increasing circuit depth, making it suitable for near-term applications. We test this approach on a portfolio optimization problem using an IBM 127-qubit Eagle processor (ibm_quebec) and simulators. Compared to state-of-the-art algorithms, our results show that the proposed method improves fitness values by 33.6% over a classical genetic algorithm and by 37.2% over a quantum-inspired genetic algorithm, using the same iteration counts and population sizes, on real quantum hardware employing 100 qubits. These findings highlight the potential of current quantum computers to address real-world utility-scale combinatorial optimization problems. |
2025-04-24 | polyGen: A Learning Framework for Atomic-level Polymer Structure Generation | Ayush Jain, Rampi Ramprasad et.al. | 2504.17656 | | | Abstract (click to expand)Synthetic polymeric materials underpin fundamental technologies in the energy, electronics, consumer goods, and medical sectors, yet their development still suffers from prolonged design timelines. Although polymer informatics tools have helped accelerate development, polymer simulation protocols continue to face a significant challenge: the on-demand generation of realistic 3D atomic structures that respect the conformational diversity of polymers. Generative algorithms exist for 3D structures of inorganic crystals, bio-polymers, and small molecules, but have not addressed synthetic polymers. In this work, we introduce polyGen, the first latent diffusion model designed specifically to generate realistic polymer structures from minimal inputs such as the repeat unit chemistry alone, leveraging a molecular encoding that captures polymer connectivity throughout the architecture. Because only a scarce dataset of 3855 DFT-optimized polymer structures is available, we augment training with DFT-optimized molecular structures, showing improvement in joint learning between similar chemical structures. We also establish structure matching criteria to benchmark our approach on this novel problem. polyGen effectively generates diverse conformations of both linear chains and complex branched structures, though its performance decreases when handling repeat units with a high atom count. Given these initial results, polyGen represents a paradigm shift in atomic-level structure generation for polymer science: the first proof-of-concept for predicting realistic atomic-level polymer conformations while accounting for their intrinsic structural flexibility. |
2025-04-24 | Towards Equitable Rail Service Allocation Through Fairness-Oriented Timetabling in Liberalized Markets | David Muñoz-Valero, Juan Moreno-Garcia, Julio Alberto López-Gómez et.al. | 2504.17489 | | 30 pages, 7 figures | Abstract (click to expand)Over the last few decades, European rail transport has undergone major changes as part of the process of liberalization set out in European regulations. In this context of liberalization, railway undertakings compete with each other for the limited infrastructure capacity available to offer their rail services. The infrastructure manager is responsible for the equitable allocation of infrastructure between all companies in the market, which is essential to ensure the efficiency and sustainability of this competitive ecosystem. In this paper, a methodology based on Jain, Gini and Atkinson equity metrics is used to solve the rail service allocation problem in a liberalized railway market, analyzing the solutions obtained. The results show that the proposed methodology and the equity metrics used allow for equitable planning in different competitiveness scenarios. These results contrast with solutions where the objective of the infrastructure manager is to maximize its own profit, without regard for the equitable allocation of infrastructure. Therefore, the computational tests support the methodology and metrics used as a planning and decision support tool in a liberalized railway market. |
2025-04-24 | An approach based on metaheuristic algorithms to the timetabling problem in deregulated railway markets | David Muñoz-Valero, Juan Moreno-Garcia, Julio Alberto López-Gómez et.al. | 2504.17455 | | 20 pages, 16 figures | Abstract (click to expand)The train timetabling problem in liberalized railway markets represents a challenge to the coordination between infrastructure managers and railway undertakings. Efficient scheduling is critical in maximizing infrastructure capacity and utilization while adhering as closely as possible to the requests of railway undertakings. These objectives ultimately contribute to maximizing the infrastructure manager's revenues. This paper sets out a modular simulation framework to reproduce the dynamics of deregulated railway systems. Ten metaheuristic algorithms using the MEALPY Python library are then evaluated in order to optimize train schedules in the liberalized Spanish railway market. The results show that the Genetic Algorithm outperforms others in revenue optimization, convergence speed, and schedule adherence. Alternatives, such as Particle Swarm Optimization and Ant Colony Optimization Continuous, show slower convergence and higher variability. The results emphasize the trade-off between scheduling more trains and adhering to requested times, providing insights into solving complex scheduling problems in deregulated railway systems. |
2025-04-24 | Detection, Classification and Prevalence of Self-Admitted Aging Debt | Murali Sridharan, Mika Mäntylä, Leevi Rantala et.al. | 2504.17428 | | Draft | Abstract (click to expand)Context: Previous research on software aging is limited, focusing on dynamic runtime indicators like memory and performance, often neglecting evolutionary indicators like source code comments, and narrowly examining legacy issues within the TD context. Objective: We introduce the concept of Aging Debt (AD), representing the increased maintenance efforts and costs needed to keep software updated. We study AD through Self-Admitted Aging Debt (SAAD) observed in source code comments left by software developers. Method: We employ a mixed-methods approach, combining qualitative and quantitative analyses to detect and measure AD in software. This includes framing SAAD patterns from the source code comments after analysing the source code context, then utilizing the SAAD patterns to detect SAAD comments. In the process, we develop a taxonomy for SAAD that reflects the temporal aging of software and its associated debt. We then use the taxonomy to quantify the different types of AD prevalent in OSS repositories. Results: Our proposed taxonomy categorizes temporal software aging into Active and Dormant types. Our extensive analysis of over 9,000 Open Source Software (OSS) repositories reveals that more than 21% exhibit signs of SAAD, as observed from our gold standard SAAD dataset. Notably, Dormant AD emerges as the predominant category, highlighting a critical but often overlooked aspect of software maintenance. Conclusion: As software volume grows annually, so do evolutionary aging and maintenance challenges; our proposed taxonomy can aid researchers in detailed software aging studies and help practitioners develop improved and proactive maintenance strategies. |
2025-04-24 | Data-Driven Surrogate Modeling Techniques to Predict the Effective Contact Area of Rough Surface Contact Problems | Tarik Sahin, Jacopo Bonari, Sebastian Brandstaeter et.al. | 2504.17354 | | | Abstract (click to expand)The effective contact area in rough surface contact plays a critical role in multi-physics phenomena such as wear, sealing, and thermal or electrical conduction. Although accurate numerical methods, like the Boundary Element Method (BEM), are available to compute this quantity, their high computational cost limits their applicability in multi-query contexts, such as uncertainty quantification, parameter identification, and multi-scale algorithms, where many repeated evaluations are required. This study proposes a surrogate modeling framework for predicting the effective contact area using fast-to-evaluate data-driven techniques. Various machine learning algorithms are trained on a precomputed dataset, where the inputs are the imposed load and statistical roughness parameters, and the output is the corresponding effective contact area. All models undergo hyperparameter optimization to enable fair comparisons in terms of predictive accuracy and computational efficiency, evaluated using established quantitative metrics. Among the models, the Kernel Ridge Regressor demonstrates the best trade-off between accuracy and efficiency, achieving high predictive accuracy, low prediction time, and minimal training overhead, making it a strong candidate for general-purpose surrogate modeling. The Gaussian Process Regressor provides an attractive alternative when uncertainty quantification is required, although it incurs additional computational cost due to variance estimation. The generalization capability of the Kernel Ridge model is validated on an unseen simulation scenario, confirming its ability to transfer to new configurations. Database generation constitutes the dominant cost in the surrogate modeling process. Nevertheless, the approach proves practical and efficient for multi-query tasks, even when accounting for this initial expense. |
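A minimal sketch of the winning surrogate under stated assumptions: kernel ridge regression with hyperparameters tuned by cross-validated grid search in scikit-learn. The input features and the synthetic target below are placeholders for the load/roughness inputs and BEM-computed contact areas:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Placeholder inputs: e.g., load plus statistical roughness parameters.
X = rng.random((500, 4))
# Placeholder target standing in for BEM-computed effective contact areas.
y = 0.3 * X[:, 0] * (1.0 - 0.2 * X[:, 1]) + 0.01 * rng.standard_normal(500)

search = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": np.logspace(-6, 0, 7),    # ridge regularization
                "gamma": np.logspace(-2, 2, 9)},   # RBF kernel width
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
print(search.predict(X[:3]))                       # fast surrogate evaluations
```

Once fitted, each prediction is a cheap kernel evaluation against the training set, which is what makes the surrogate attractive for the multi-query settings (UQ, parameter identification) named in the abstract.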
2025-04-24 | Tokenizing Stock Prices for Enhanced Multi-Step Forecast and Prediction | Zhuohang Zhu, Haodong Chen, Qiang Qu et.al. | 2504.17313 | | | Abstract (click to expand)Effective stock price forecasting (estimating future prices) and prediction (estimating future price changes) are pivotal for investors, regulatory agencies, and policymakers. These tasks enable informed decision-making, risk management, strategic planning, and superior portfolio returns. Despite their importance, forecasting and prediction are challenging due to the dynamic nature of stock price data, which exhibit significant temporal variations in distribution and statistical properties. Additionally, while both forecasting and prediction targets are derived from the same dataset, their statistical characteristics differ significantly. Forecasting targets typically follow a log-normal distribution, characterized by significant shifts in mean and variance over time, whereas prediction targets adhere to a normal distribution. Furthermore, although multi-step forecasting and prediction offer a broader perspective and richer information compared to single-step approaches, they are much more challenging due to factors such as cumulative errors and long-term temporal variance. As a result, many previous works have tackled either single-step stock price forecasting or prediction instead. To address these issues, we introduce a novel model, termed Patched Channel Integration Encoder (PCIE), to tackle both stock price forecasting and prediction. In this model, we utilize multiple stock channels that cover both historical prices and price changes, and design a novel tokenization method to effectively embed these channels in a cross-channel and temporally efficient manner. Specifically, the tokenization process involves univariate patching and temporal learning with a channel-mixing encoder to reduce cumulative errors. Comprehensive experiments validate that PCIE outperforms current state-of-the-art models in forecasting and prediction tasks. |
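The univariate patching step can be sketched as follows: each channel of a multivariate price series is cut into fixed-length patches that are linearly embedded as tokens. Dimensions are illustrative assumptions, not PCIE's actual configuration.

```python
import torch
import torch.nn as nn

class UnivariatePatcher(nn.Module):
    """Cuts each channel into non-overlapping patches and embeds them independently (sketch)."""
    def __init__(self, patch_len=16, d_model=64):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)

    def forward(self, x):                                  # x: (batch, channels, time)
        x = x.unfold(-1, self.patch_len, self.patch_len)   # (batch, channels, n_patches, patch_len)
        return self.embed(x)                               # (batch, channels, n_patches, d_model)

tokens = UnivariatePatcher()(torch.randn(8, 4, 128))
print(tokens.shape)  # torch.Size([8, 4, 8, 64]) -- per-channel patch tokens
```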
2025-04-23 | Reinforcement learning framework for the mechanical design of microelectronic components under multiphysics constraints | Siddharth Nair, Timothy F. Walsh, Greg Pickrell et.al. | 2504.17142 | | 27 pages of main text, 15 figures | Abstract (click to expand)This study focuses on the development of reinforcement learning-based techniques for the design of microelectronic components under multiphysics constraints. While traditional design approaches based on global optimization are effective when dealing with a small number of design parameters, different techniques are needed as the complexity of the solution space and of the constraints increases. This is a key reason why the design and optimization of microelectronic components (characterized by a large solution space and multiphysics constraints) is very challenging for traditional methods. By taking as prototypical elements an application-specific integrated circuit (ASIC) and a heterogeneously integrated (HI) interposer, we develop and numerically test an optimization framework based on reinforcement learning (RL). More specifically, we consider the optimization of the bonded interconnect geometry for an ASIC chip as well as the placement of components on a HI interposer while satisfying thermoelastic and design constraints. This placement problem is particularly interesting because it features a high-dimensional solution space. |
2025-04-23 | Demonstration of an AI-driven workflow for dynamic x-ray spectroscopy | Ming Du, Mark Wolfman, Chengjun Sun et.al. | 2504.17124 | | | Abstract (click to expand)X-ray absorption near edge structure (XANES) spectroscopy is a powerful technique for characterizing the chemical state and symmetry of individual elements within materials, but requires collecting data at many energy points which can be time-consuming. While adaptive sampling methods exist for efficiently collecting spectroscopic data, they often lack domain-specific knowledge about XANES spectra structure. Here we demonstrate a knowledge-injected Bayesian optimization approach for adaptive XANES data collection that incorporates understanding of spectral features like absorption edges and pre-edge peaks. We show this method accurately reconstructs the absorption edge of XANES spectra using only 15-20% of the measurement points typically needed for conventional sampling, while maintaining the ability to determine the x-ray energy of the sharp peak after the absorption edge with errors less than 0.03 eV and the absorption edge itself with errors less than 0.1 eV, with overall root-mean-square errors less than 0.005 compared to traditionally sampled spectra. Our experiments on battery materials and catalysts demonstrate the method's effectiveness for both static and dynamic XANES measurements, improving data collection efficiency and enabling better time resolution for tracking chemical changes. This approach advances the degree of automation in XANES experiments, reducing the common errors of under- or over-sampling points near the absorption edge and enabling dynamic experiments that require high temporal resolution or limited measurement time. |
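One way to read the knowledge-injection idea: fit a Gaussian process to the energies measured so far and bias the acquisition toward regions flagged as spectroscopically important, e.g. near the expected edge. The sketch below uses a generic GP and a hand-made importance weight; the paper's actual acquisition function is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_energy(E_measured, mu_measured, E_grid, edge_guess):
    """Pick the next energy point: GP uncertainty times a domain-knowledge weight (sketch)."""
    gp = GaussianProcessRegressor().fit(E_measured.reshape(-1, 1), mu_measured)
    _, std = gp.predict(E_grid.reshape(-1, 1), return_std=True)
    # Hypothetical knowledge injection: upweight candidate points near the expected edge.
    weight = 1.0 + 4.0 * np.exp(-((E_grid - edge_guess) / 5.0) ** 2)
    return E_grid[np.argmax(std * weight)]

E = np.array([7700.0, 7720.0, 7740.0, 7760.0])   # energies measured so far (toy)
mu = np.array([0.1, 0.15, 0.9, 1.0])             # toy absorption values
grid = np.linspace(7690.0, 7780.0, 200)
print(next_energy(E, mu, grid, edge_guess=7730.0))
```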
2025-04-23 | An Entropy Stable Formulation of Two-equation Turbulence Models with Particular Reference to the k-epsilon Model | Guillermo Hauke, Thomas J. R. Hughes et.al. | 2504.17110 | | 50 pages, 13 figures | Abstract (click to expand)Consistency and stability are two essential ingredients in the design of numerical algorithms for partial differential equations. Robust algorithms can be developed by incorporating nonlinear physical stability principles in their design, such as the entropy production inequality (i.e., the Clausius-Duhem inequality or second law of thermodynamics), rather than by simply adding artificial viscosity (a common approach). This idea is applied to the k-epsilon and two-equation turbulence models by introducing space-time averaging. Then, a set of entropy variables can be defined which leads to a symmetric system of advective-diffusive equations. Positivity and symmetry of the equations require certain constraints on the turbulence diffusivity coefficients and the turbulence source terms. With these, we are able to design entropy producing two-equation turbulence models and, in particular, the k-epsilon model. |
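As a pointer to the underlying construction (the generic entropy-variable symmetrization from the literature, not this paper's specific k-epsilon derivation), the key change of variables can be sketched as:

```latex
V = \frac{\partial H}{\partial U}, \qquad
\tilde{A}_0\,V_{,t} + \tilde{A}_i\,V_{,i} = \bigl(\tilde{K}_{ij}\,V_{,j}\bigr)_{,i},
\qquad
\tilde{A}_0 = \tilde{A}_0^{\mathsf{T}} > 0, \quad
\tilde{A}_i = \tilde{A}_i^{\mathsf{T}}, \quad
[\tilde{K}_{ij}] \geq 0,
```

where H(U) is a convex generalized entropy and V are the entropy variables; the required positivity and symmetry of these matrices correspond to the constraints on the turbulence diffusivity coefficients and source terms that the abstract mentions.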
2025-04-23 | Mixing Data-Driven and Physics-Based Constitutive Models using Uncertainty-Driven Phase Fields | J. Storm, W. Sun, I. B. C. M. Rocha et.al. | 2504.16713 | | | Abstract (click to expand)There is high interest in accelerating multiscale models using data-driven surrogate modeling techniques. Creating a large training dataset encompassing all relevant load scenarios is essential for a good surrogate, yet the computational cost of producing this data quickly becomes a limiting factor. Commonly, a pre-trained surrogate is used throughout the computational domain. Here, we introduce an alternative adaptive mixture approach that uses a fast probabilistic surrogate model as the constitutive model when possible, but falls back to the true high-fidelity model when necessary. The surrogate is thus not required to be accurate for every possible load condition, enabling a significant reduction in the data collection time. We achieve this by creating phases in the computational domain corresponding to the different models. These phases evolve using a phase-field model driven by the surrogate uncertainty. When the surrogate uncertainty becomes large, the phase-field model causes a local transition from the surrogate to the high-fidelity model, maintaining a highly accurate simulation. We discuss the requirements of this approach to achieve accurate and numerically stable results and compare the phase-field model to a purely local approach that does not enforce spatial smoothness for the phase mixing. Using a Gaussian Process surrogate for an elasto-plastic material, we demonstrate the potential of this mixture of models to accelerate multiscale simulations. |
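The switching criterion can be illustrated with a Gaussian Process surrogate whose predictive standard deviation drives a per-point model choice; the paper evolves this indicator with a phase-field PDE for spatial smoothness, which this purely local sketch omits. The toy stress response and threshold are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

gp = GaussianProcessRegressor().fit(
    np.linspace(0, 1, 10).reshape(-1, 1),   # strains seen during training
    np.sin(3 * np.linspace(0, 1, 10)),      # toy stress response
)

def constitutive_response(strain, tol=0.05):
    """Use the surrogate when its uncertainty is low; fall back to high fidelity otherwise."""
    mean, std = gp.predict(np.atleast_2d(strain), return_std=True)
    if std[0] < tol:
        return mean[0]                       # cheap surrogate evaluation
    return float(np.sin(3 * strain))         # stand-in for the high-fidelity micromodel

print(constitutive_response(0.5))   # inside training range: surrogate answers
print(constitutive_response(2.0))   # far from data: high-fidelity fallback
```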
2025-04-23 | 3D-1D modelling of cranial plate heating induced by low or medium frequency magnetic fields | Alessandro Arduino, Oriano Bottauscio, Denise Grappein et.al. | 2504.16600 | | | Abstract (click to expand)Safety assessment of patients with one-dimensionally structured passive implants, like cranial plates or stents, exposed to low or medium frequency magnetic fields, like those generated in magnetic resonance imaging or magnetic hyperthermia, can be challenging, because of the different length scales of the implant and the human body. Most of the methods used to estimate the heating induced near such implants neglect the presence of the metallic materials within the body, modeling the metal as thermal seeds. To overcome this limitation, a novel numerical approach that solves three-dimensional and one-dimensional coupled problems is proposed. This method leads to improved results by modelling the thermal diffusion through the highly conductive metallic implants. A comparison of the proposed method predictions with measurements performed on a cranial plate exposed to the magnetic field generated by a gradient coil system for magnetic resonance imaging is presented, showing an improved accuracy of up to 25% with respect to the method based on thermal seeds. The proposed method is finally applied to a magnetic hyperthermia case study in which a patient with a cranial plate is exposed to the magnetic field generated by a collar-type magnetic hyperthermia applicator for neck tumour treatment, predicting a temperature increase in proximity to the implant that is 10% lower than the one overestimated by relying on thermal seeds. |
2025-04-23 | Preconditioning Natural and Second Order Gradient Descent in Quantum Optimization: A Performance Benchmark | Théo Lisart-Liebermann, Arcesio Castañeda Medina et.al. | 2504.16518 | | 17 pages, 36 figures, in review | Abstract (click to expand)The optimization of parametric quantum circuits is technically hindered by three major obstacles: the non-convex nature of the objective function, noisy gradient evaluations, and the presence of barren plateaus. As a result, the selection of the classical optimizer becomes a critical factor in assessing and exploiting quantum-classical applications. One promising approach to tackle these challenges involves incorporating curvature information into the parameter update. The most prominent methods in this field are quasi-Newton and quantum natural gradient methods, which can facilitate faster convergence compared to first-order approaches. Second-order methods, however, exhibit a significant trade-off between computational cost and accuracy, as well as heightened sensitivity to noise. This study evaluates the performance of three families of optimizers on synthetically generated MaxCut problems using a shallow QAOA algorithm. To address noise sensitivity and iteration cost, we demonstrate that incorporating secant-penalization in the BFGS update rule (SP-BFGS) yields improved outcomes for QAOA optimization problems, introducing a novel approach to stabilizing BFGS updates against gradient noise. |
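For reference, the textbook BFGS inverse-Hessian update that SP-BFGS modifies is compact; the sketch below shows the standard update only, with a comment marking where the secant penalization would relax the update under gradient noise (the penalized form itself is not reproduced here).

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian H from step s and gradient change y.
    SP-BFGS would penalize (soften) the secant condition H_new @ y = s when y is noisy."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

H = np.eye(2)
print(bfgs_update(H, np.array([0.1, 0.2]), np.array([0.3, 0.1])))
```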
2025-04-23 | Comparing Different Transformer Model Structures for Stock Prediction | Qizhao Chen et.al. | 2504.16361 | | | Abstract (click to expand)This paper compares different Transformer model architectures for stock index prediction. While many studies have shown that Transformers perform well in stock price forecasting, few have explored how different structural designs impact performance. Most existing works treat the Transformer as a black box, overlooking how specific architectural choices may affect predictive accuracy. However, understanding these differences is critical for developing more effective forecasting models. This study aims to identify which Transformer variant is most suitable for stock forecasting. To this end, it evaluates five Transformer structures: (1) encoder-only Transformer, (2) decoder-only Transformer, (3) Vanilla Transformer (encoder + decoder), (4) Vanilla Transformer without embedding layers, and (5) Vanilla Transformer with ProbSparse attention. Results show that Transformer-based models generally outperform traditional approaches. The decoder-only Transformer outperforms all other models in all scenarios, while the Vanilla Transformer with ProbSparse attention performs worst in almost all cases. |
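A decoder-only Transformer for sequence forecasting is essentially an encoder stack with a causal mask; a minimal PyTorch sketch, with dimensions as illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DecoderOnlyForecaster(nn.Module):
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.proj_in = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                     # x: (batch, time, 1)
        t = x.size(1)
        # Causal mask: -inf above the diagonal blocks attention to future steps.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.blocks(self.proj_in(x), mask=mask)
        return self.head(h)                   # next-step prediction per position

model = DecoderOnlyForecaster()
print(model(torch.randn(8, 32, 1)).shape)     # torch.Size([8, 32, 1])
```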
2025-04-22 | Quantum Doubly Stochastic Transformers | Jannis Born, Filip Skogh, Kahn Rhrissorrakrai et.al. | 2504.16275 | | Under Review | Abstract (click to expand)At the core of the Transformer, the Softmax normalizes the attention matrix to be right stochastic. Previous research has shown that this often destabilizes training and that enforcing the attention matrix to be doubly stochastic (through Sinkhorn's algorithm) consistently improves performance across different tasks, domains and Transformer flavors. However, Sinkhorn's algorithm is iterative, approximate, non-parametric and thus inflexible with respect to the obtained doubly stochastic matrix (DSM). Recently, it has been proven that DSMs can be obtained with a parametric quantum circuit, yielding a novel quantum inductive bias for DSMs with no known classical analogue. Motivated by this, we demonstrate the feasibility of a hybrid classical-quantum doubly stochastic Transformer (QDSFormer) that replaces the Softmax in the self-attention layer with a variational quantum circuit. We study the expressive power of the circuit and find that it yields more diverse DSMs that better preserve information than classical operators. Across multiple small-scale object recognition tasks, we find that our QDSFormer consistently surpasses both a standard Vision Transformer and other doubly stochastic Transformers. Beyond the established Sinkformer, this comparison includes a novel quantum-inspired doubly stochastic Transformer (based on QR decomposition) that can be of independent interest. The QDSFormer also shows improved training stability and lower performance variation, suggesting that it may mitigate the notoriously unstable training of ViTs on small-scale data. |
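The classical baseline the QDSFormer replaces is easy to state: Sinkhorn's algorithm alternately normalizes the rows and columns of a positive matrix until it is approximately doubly stochastic. A few iterations in NumPy:

```python
import numpy as np

def sinkhorn(logits, n_iters=20):
    """Project attention logits onto (approximately) doubly stochastic matrices."""
    M = np.exp(logits)                       # positive matrix
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)    # row normalization
        M /= M.sum(axis=0, keepdims=True)    # column normalization
    return M

A = sinkhorn(np.random.randn(4, 4))
print(A.sum(axis=0), A.sum(axis=1))          # both close to vectors of ones
```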
2025-04-22 | Accurate and generalizable protein-ligand binding affinity prediction with geometric deep learning | Krinos Li, Xianglu Xiao, Zijun Zhong et.al. | 2504.16261 | | 6 pages, 5 figures | Abstract (click to expand)Protein-ligand binding complexes are ubiquitous and essential to life. Protein-ligand binding affinity prediction (PLA) quantifies the binding strength between ligands and proteins, providing crucial insights for discovering and designing potential candidate ligands. While recent advances have been made in predicting protein-ligand complex structures, existing algorithms for interaction and affinity prediction suffer from a sharp decline in performance when handling ligands bound with novel unseen proteins. We propose IPBind, a geometric deep learning-based computational method that enables robust predictions by leveraging the interatomic potential between the complex's bound and unbound states. Experimental results on widely used binding affinity prediction benchmarks demonstrate the effectiveness and universality of IPBind. Meanwhile, it provides atom-level insights into its predictions. This work highlights the advantage of leveraging machine learning interatomic potential for predicting protein-ligand binding affinity. |
2025-04-22 | A Systematic Literature Review of Software Engineering Research on Jupyter Notebook | Md Saeed Siddik, Hao Li, Cor-Paul Bezemer et.al. | 2504.16180 | | | Abstract (click to expand)Context: Jupyter Notebook has emerged as a versatile tool that transforms how researchers, developers, and data scientists conduct and communicate their work. As the adoption of Jupyter notebooks continues to rise, so does the interest from the software engineering research community in improving the software engineering practices for Jupyter notebooks. Objective: The purpose of this study is to analyze trends, gaps, and methodologies used in software engineering research on Jupyter notebooks. Method: We selected 146 relevant publications from the DBLP Computer Science Bibliography up to the end of 2024, following established systematic literature review guidelines. We explored publication trends, categorized them based on software engineering topics, and reported findings based on those topics. Results: The most popular venues for publishing software engineering research on Jupyter notebooks are related to human-computer interaction instead of traditional software engineering venues. Researchers have addressed a wide range of software engineering topics on notebooks, such as code reuse, readability, and execution environment. Although reusability is one of the research topics for Jupyter notebooks, only 64 of the 146 studies can be reused based on their provided URLs. Additionally, most replication packages are not hosted on permanent repositories for long-term availability and adherence to open science principles. Conclusion: Solutions specific to notebooks for software engineering issues, including testing, refactoring, and documentation, are underexplored. Future research opportunities exist in automatic testing frameworks, refactoring clones between notebooks, and generating group documentation for coherent code cells. |
2025-04-23 | Fast Higher-Order Interpolation and Restriction in ExaHyPE Avoiding Non-physical Reflections | Timothy Stokes, Tobias Weinzierl, Han Zhang et.al. | 2504.15814 | | | Abstract (click to expand)Wave equations help us to understand phenomena ranging from earthquakes to tsunamis. These phenomena materialise over very large scales. It would be computationally infeasible to track them over a regular mesh. Yet, since the phenomena are localised, adaptive mesh refinement (AMR) can be used to construct meshes with a higher resolution close to the regions of interest. ExaHyPE is a software engine created to solve wave problems using AMR, and we use it as a baseline to construct our numerical relativity application called ExaGRyPE. To advance the mesh in time, we have to interpolate and restrict along resolution transitions in each and every time step. ExaHyPE's vanilla code version uses a d-linear tensor-product approach. In benchmarks of a stationary black hole, this performs slowly and leads to errors in conserved quantities near AMR boundaries. We therefore introduce a set of higher-order interpolation schemes where the derivatives are calculated at each coarse grid cell to approximate the enclosed fine cells. The resulting methods run faster than the tensor-product approach. Most importantly, when running the stationary black hole simulation using the higher-order methods, the errors near the AMR boundaries are removed. |
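The coarse-to-fine interpolation step can be illustrated in 1D: reconstruct a polynomial from neighbouring coarse-cell values and evaluate it at the enclosed fine-cell centres. This is a toy comparison of linear versus higher-order interpolation, not ExaHyPE's d-dimensional patch implementation.

```python
import numpy as np

xc = np.array([-1.0, 0.0, 1.0])        # coarse cell centres
uc = np.exp(xc)                         # coarse cell values (toy smooth field)
xf = np.array([-0.25, 0.25])            # fine cell centres inside the middle cell

u_lin = np.interp(xf, xc, uc)           # d-linear style interpolation
coeff = np.polyfit(xc, uc, deg=2)       # quadratic reconstruction using the
u_high = np.polyval(coeff, xf)          # neighbouring coarse data

print(u_lin, u_high, np.exp(xf))        # higher order lands closer to the truth
```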
2025-04-21 | A dual-stage constitutive modeling framework based on finite strain data-driven identification and physics-augmented neural networks | Lennart Linden, Karl A. Kalina, Jörg Brummund et.al. | 2504.15492 | | | Abstract (click to expand)In this contribution, we present a novel consistent dual-stage approach for the automated generation of hyperelastic constitutive models which requires only experimentally measurable data. To generate input data for our approach, an experiment with full-field measurement has to be conducted to gather the testing force and the corresponding displacement field of the sample. Then, in the first step of the dual-stage framework, a new finite strain Data-Driven Identification (DDI) formulation is applied. This method enables the identification of tuples consisting of stresses and strains by prescribing only the applied boundary conditions and the measured displacement field. In the second step, the data set is used to calibrate a Physics-Augmented Neural Network (PANN), which fulfills all common conditions of hyperelasticity by construction and is very flexible at the same time. We demonstrate the applicability of our approach by several descriptive examples. Two-dimensional synthetic data are generated as examples in virtual experiments by using a reference constitutive model. The calibrated PANN is then applied in 3D Finite Element simulations. In addition, a real experiment including noisy data is mimicked. |
2025-04-21 | Speaker Fuzzy Fingerprints: Benchmarking Text-Based Identification in Multiparty Dialogues | Rui Ribeiro, Luísa Coheur, Joao P. Carvalho et.al. | 2504.14963 | | Paper accepted at the FUZZY IEEE 2025 conference | Abstract (click to expand)Speaker identification using voice recordings leverages unique acoustic features, but this approach fails when only textual data is available. Few approaches have attempted to tackle the problem of identifying speakers solely from text, and the existing ones have primarily relied on traditional methods. In this work, we explore the use of fuzzy fingerprints from large pre-trained models to improve text-based speaker identification. We integrate speaker-specific tokens and context-aware modeling, demonstrating that conversational context significantly boosts accuracy, reaching 70.6% on the Friends dataset and 67.7% on the Big Bang Theory dataset. Additionally, we show that fuzzy fingerprints can approximate full fine-tuning performance with fewer hidden units, offering improved interpretability. Finally, we analyze ambiguous utterances and propose a mechanism to detect speaker-agnostic lines. Our findings highlight key challenges and provide insights for future improvements in text-based speaker identification. |
2025-04-21 | EducationQ: Evaluating LLMs' Teaching Capabilities Through Multi-Agent Dialogue Framework | Yao Shi, Rongkeng Liang, Yong Xu et.al. | 2504.14928 | | | Abstract (click to expand)Large language models (LLMs) increasingly serve as educational tools, yet evaluating their teaching capabilities remains challenging due to the resource-intensive, context-dependent, and methodologically complex nature of teacher-student interactions. We introduce EducationQ, a multi-agent dialogue framework that efficiently assesses teaching capabilities through simulated dynamic educational scenarios, featuring specialized agents for teaching, learning, and evaluation. Testing 14 LLMs across major AI Organizations (OpenAI, Meta, Google, Anthropic, and others) on 1,498 questions spanning 13 disciplines and 10 difficulty levels reveals that teaching effectiveness does not correlate linearly with model scale or general reasoning capabilities - with some smaller open-source models outperforming larger commercial counterparts in teaching contexts. This finding highlights a critical gap in current evaluations that prioritize knowledge recall over interactive pedagogy. Our mixed-methods evaluation, combining quantitative metrics with qualitative analysis and expert case studies, identifies distinct pedagogical strengths employed by top-performing models (e.g., sophisticated questioning strategies, adaptive feedback mechanisms). Human expert evaluations show 78% agreement with our automated qualitative analysis of effective teaching behaviors, validating our methodology. EducationQ demonstrates that LLMs-as-teachers require specialized optimization beyond simple scaling, suggesting that next-generation educational AI should prioritize targeted enhancement of specific pedagogical effectiveness. |
2025-04-23 | Physics-Aware Compression of Plasma Distribution Functions with GPU-Accelerated Gaussian Mixture Models | Andong Hu, Luca Pennati, Ivy Peng et.al. | 2504.14897 | | 15 pages, 8 figures | Abstract (click to expand)Data compression is a critical technology for large-scale plasma simulations. Storing complete particle information requires Terabyte-scale data storage, and analysis requires ad-hoc scalable post-processing tools. We propose a physics-aware in-situ compression method using Gaussian Mixture Models (GMMs) to approximate electron and ion velocity distribution functions with a number of Gaussian components. This GMM-based method allows us to capture plasma features such as mean velocity and temperature, and it enables us to identify heating processes and beam generation. We first construct a histogram to reduce computational overhead and apply GPU-accelerated, in-situ GMM fitting within iPIC3D, a large-scale implicit Particle-in-Cell simulator, ensuring real-time compression. The compressed representation is stored using the ADIOS 2 library, thus optimizing the I/O process. The GPU and histogramming implementation provides a significant speed-up with respect to GMM on particles (both in time and required memory at run-time), enabling real-time compression. Compared to algorithms like SZ, MGARD, and BLOSC2, our GMM-based method takes a physics-based approach, retaining the physical interpretation of plasma phenomena such as beam formation, acceleration, and heating mechanisms. Our GMM algorithm achieves a compression ratio of up to \(10^4\), requiring a processing time comparable to, or even lower than, standard compression engines. |
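The histogram-then-GMM idea can be sketched with scikit-learn. Since sklearn's GaussianMixture has no weighted fit, one common workaround (used here purely for illustration; the paper's GPU in-situ implementation inside iPIC3D is far more involved) is to resample bin centres in proportion to their counts.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
v = np.concatenate([rng.normal(-1.0, 0.3, 50_000),   # toy bulk population
                    rng.normal(2.0, 0.2, 5_000)])    # toy beam population

counts, edges = np.histogram(v, bins=256)            # histogram first (cheap)
centers = 0.5 * (edges[:-1] + edges[1:])
# Resample bin centres by their counts to approximate a weighted GMM fit.
resampled = rng.choice(centers, size=20_000, p=counts / counts.sum())

gmm = GaussianMixture(n_components=2).fit(resampled.reshape(-1, 1))
print(gmm.means_.ravel(), gmm.weights_)              # mean velocities + fractions
```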
2025-04-19 | A parallel implementation of reduced-order modeling of large-scale systems | Ionut-Gabriel Farcas, Rayomand P. Gundevia, Ramakanth Munipalli et.al. | 2504.14338 | link | 19 pages, 4 figures; the corresponding code can be found at https://github.com/ionutfarcas/distributed_Operator_Inference | Abstract (click to expand)Motivated by the large-scale nature of modern aerospace engineering simulations, this paper presents a detailed description of distributed Operator Inference (dOpInf), a recently developed parallel algorithm designed to efficiently construct physics-based reduced-order models (ROMs) for problems with large state dimensions. One such example is the simulation of rotating detonation rocket engines, where snapshot data generated by high-fidelity large-eddy simulations have many millions of degrees of freedom. dOpInf enables, via distributed computing, the efficient processing of datasets with state dimensions that are too large to process on a single computer, and the learning of structured physics-based ROMs that approximate the dynamical systems underlying those datasets. All elements of dOpInf are scalable, leading to a fully parallelized reduced modeling approach that can scale to the thousands of processors available on leadership high-performance computing platforms. The resulting ROMs are computationally cheap, making them ideal for key engineering tasks such as design space exploration, risk assessment, and uncertainty quantification. To illustrate the practical application of dOpInf, we provide a step-by-step tutorial using a 2D Navier-Stokes flow over a step scenario as a case study. This tutorial guides users through the implementation process, making dOpInf accessible for integration into complex aerospace engineering simulations. |
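The serial core of Operator Inference fits in a few lines: project snapshots onto POD modes, then solve a least-squares problem for the reduced operators. A minimal linear-model sketch on random placeholder data (the paper's contribution is the distributed-memory version of these steps; quadratic terms and regularization are omitted):

```python
import numpy as np

Q = np.random.standard_normal((5_000, 200))    # toy snapshots: state dim x time steps
U, s, _ = np.linalg.svd(Q, full_matrices=False)
Vr = U[:, :10]                                 # r = 10 POD modes
Qh = Vr.T @ Q                                  # reduced trajectories (10 x 200)

dt = 1e-2
Qdot = (Qh[:, 1:] - Qh[:, :-1]) / dt           # finite-difference time derivatives
# Least-squares fit of a linear reduced model  dq/dt = A q  from data alone.
A = np.linalg.lstsq(Qh[:, :-1].T, Qdot.T, rcond=None)[0].T
print(A.shape)                                 # (10, 10) learned reduced operator
```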
2025-04-18 | Transformer Encoder and Multi-features Time2Vec for Financial Prediction | Nguyen Kim Hai Bui, Nguyen Duy Chien, Péter Kovács et.al. | 2504.13801 | | 5 pages, currently under review at Eusipco 2025 | Abstract (click to expand)Financial prediction is a complex and challenging task of time series analysis and signal processing, expected to model both short-term fluctuations and long-term temporal dependencies. Transformers have achieved remarkable success, mostly in natural language processing, using the attention mechanism, which has also influenced the time series community. The ability to capture both short and long-range dependencies helps to understand the financial market and to recognize price patterns, leading to successful applications of Transformers in stock prediction. However, previous research predominantly focuses on individual features and singular predictions, which limits the model's ability to understand broader market trends. In reality, within sectors such as finance and technology, companies belonging to the same industry often exhibit correlated stock price movements. In this paper, we develop a novel neural network architecture by integrating Time2Vec with the Encoder of the Transformer model. Based on the study of different markets, we propose a novel correlation feature selection method. Through a comprehensive fine-tuning of multiple hyperparameters, we conduct a comparative analysis of our results against benchmark models. We conclude that our method outperforms other state-of-the-art encoding methods such as positional encoding, and we also conclude that selecting correlation features enhances the accuracy of predicting multiple stock prices. |
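Time2Vec itself is a one-layer construction: one linear component for trends plus sinusoidal components with learnable frequencies (Kazemi et al., 2019). A sketch of that published formulation, with the embedding size as an illustrative assumption:

```python
import torch
import torch.nn as nn

class Time2Vec(nn.Module):
    """t2v(tau)[0] = w0*tau + b0 (linear trend); t2v(tau)[i] = sin(wi*tau + bi) for i > 0."""
    def __init__(self, k=16):
        super().__init__()
        self.lin = nn.Linear(1, 1)        # aperiodic component
        self.per = nn.Linear(1, k - 1)    # learnable frequencies/phases

    def forward(self, tau):               # tau: (batch, time, 1)
        return torch.cat([self.lin(tau), torch.sin(self.per(tau))], dim=-1)

emb = Time2Vec()(torch.arange(32.0).view(1, 32, 1))
print(emb.shape)                          # torch.Size([1, 32, 16])
```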
2025-04-22 | Equi-Euler GraphNet: An Equivariant, Temporal-Dynamics Informed Graph Neural Network for Dual Force and Trajectory Prediction in Multi-Body Systems | Vinay Sharma, Rémi Tanguy Oddon, Pietro Tesini et.al. | 2504.13768 | | permission not yet received for arXiv | Abstract (click to expand)Accurate real-time modeling of multi-body dynamical systems is essential for enabling digital twin applications across industries. While many data-driven approaches aim to learn system dynamics, jointly predicting internal loads and system trajectories remains a key challenge. This dual prediction is especially important for fault detection and predictive maintenance, where internal loads, such as contact forces, act as early indicators of faults, reflecting wear or misalignment before affecting motion. These forces also serve as inputs to degradation models (e.g., crack growth), enabling damage prediction and remaining useful life estimation. We propose Equi-Euler GraphNet, a physics-informed graph neural network (GNN) that simultaneously predicts internal forces and global trajectories in multi-body systems. In this mesh-free framework, nodes represent system components and edges encode interactions. Equi-Euler GraphNet introduces two inductive biases: (1) an equivariant message-passing scheme, interpreting edge messages as interaction forces consistent under Euclidean transformations; and (2) a temporal-aware iterative node update mechanism, based on Euler integration, to capture the influence of distant interactions over time. Tailored for cylindrical roller bearings, it decouples ring dynamics from constrained motion of rolling elements. Trained on high-fidelity multiphysics simulations, Equi-Euler GraphNet generalizes beyond the training distribution, accurately predicting loads and trajectories under unseen speeds, loads, and configurations. It outperforms state-of-the-art GNNs focused on trajectory prediction, delivering stable rollouts over thousands of time steps with minimal error accumulation. Achieving up to a 200x speedup over conventional solvers while maintaining comparable accuracy, it serves as an efficient reduced-order model for digital twins, design, and maintenance. |
2025-04-18 | Bitcoin's Edge: Embedded Sentiment in Blockchain Transactional Data | Charalampos Kleitsikas, Nikolaos Korfiatis, Stefanos Leonardos et.al. | 2504.13598 | | Published in IEEE International Conference on Blockchain and Cryptocurrency 2025 | Abstract (click to expand)Cryptocurrency blockchains, beyond their primary role as distributed payment systems, are increasingly used to store and share arbitrary content, such as text messages and files. Although often non-financial, this hidden content can impact price movements by conveying private information, shaping sentiment, and influencing public opinion. However, current analyses of such data are limited in scope and scalability, primarily relying on manual classification or hand-crafted heuristics. In this work, we address these limitations by employing Natural Language Processing techniques to analyze, detect patterns, and extract public sentiment encoded within blockchain transactional data. Using a variety of Machine Learning techniques, we showcase for the first time the predictive power of blockchain-embedded sentiment in forecasting cryptocurrency price movements on the Bitcoin and Ethereum blockchains. Our findings shed light on a previously underexplored source of freely available, transparent, and immutable data and introduce blockchain sentiment analysis as a novel and robust framework for enhancing financial predictions in cryptocurrency markets. Incidentally, we discover an asymmetry between cryptocurrencies; Bitcoin has an informational advantage over Ethereum in that the sentiment embedded into transactional data is sufficient to predict its price movement. |
2025-04-21 | Deep Learning Models Meet Financial Data Modalities | Kasymkhan Khubiev, Mikhail Semenov et.al. | 2504.13521 | | 15 pages, 14 images, 7 tables | Abstract (click to expand)Algorithmic trading relies on extracting meaningful signals from diverse financial data sources, including candlestick charts, order statistics on put and canceled orders, traded volume data, limit order books, and news flow. While deep learning has demonstrated remarkable success in processing unstructured data and has significantly advanced natural language processing, its application to structured financial data remains an ongoing challenge. This study investigates the integration of deep learning models with financial data modalities, aiming to enhance predictive performance in trading strategies and portfolio optimization. We present a novel approach to incorporating limit order book analysis into algorithmic trading by developing embedding techniques and treating sequential limit order book snapshots as distinct input channels in an image-based representation. Our methodology for processing limit order book data achieves state-of-the-art performance in high-frequency trading algorithms, underscoring the effectiveness of deep learning in financial applications. |
2025-04-18 | Ascribe New Dimensions to Scientific Data Visualization with VR | Daniela Ushizima, Guilherme Melo dos Santos, Zineb Sordo et.al. | 2504.13448 | | | Abstract (click to expand)For over half a century, the computer mouse has been the primary tool for interacting with digital data, yet it remains a limiting factor in exploring complex, multi-scale scientific images. Traditional 2D visualization methods hinder intuitive analysis of inherently 3D structures. Virtual Reality (VR) offers a transformative alternative, providing immersive, interactive environments that enhance data comprehension. This article introduces ASCRIBE-VR, a VR platform of Autonomous Solutions for Computational Research with Immersive Browsing & Exploration, which integrates AI-driven algorithms with scientific images. ASCRIBE-VR enables multimodal analysis, structural assessments, and immersive visualization, supporting scientific visualization of advanced datasets such as X-ray CT, Magnetic Resonance, and synthetic 3D imaging. Our VR tools, compatible with Meta Quest, can consume the output of our AI-based segmentation and iterative feedback processes to enable seamless exploration of large-scale 3D images. By merging AI-generated results with VR visualization, ASCRIBE-VR enhances scientific discovery, bridging the gap between computational analysis and human intuition in materials research, connecting human-in-the-loop with digital twins. |
2025-04-17 | CardioFit: A WebGL-Based Tool for Fast and Efficient Parameterization of Cardiac Action Potential Models to Fit User-Provided Data | Darby I. Cairns, Maxfield R. Comstock, Flavio H. Fenton et.al. | 2504.13274 | | Darby I. Cairns and Maxfield R. Comstock contributed equally to this work | Abstract (click to expand)Cardiac action potential models allow examination of a variety of cardiac dynamics, including how behavior may change under specific interventions. To study a specific scenario, including patient-specific cases, model parameter sets must be found that accurately reproduce the dynamics of interest. To facilitate this complex and time-consuming process, we present an interactive browser-based tool that uses the particle swarm optimization (PSO) algorithm implemented in JavaScript and taking advantage of the WebGL API for hardware acceleration. Our tool allows rapid customization and can find low-error fittings to user-provided voltage time series or action potential duration data from multiple cycle lengths in a few iterations (10-32), corresponding to a runtime of a few seconds on most machines. Additionally, our tool focuses on ease of use and flexibility, providing a webpage interface that allows users to select a subset of parameters to fit, set the range of values each parameter is allowed to assume, and control the PSO algorithm hyperparameters. We demonstrate our tool's utility by fitting a variety of models to different datasets, showing how convergence is affected by model choice, dataset properties, and PSO algorithmic settings, and explaining new insights gained about the physiological and dynamical roles of the model parameters. |
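The PSO loop behind such a tool is short; below is a generic NumPy version minimizing a toy fitting error. CardioFit's actual implementation runs in JavaScript with WebGL-parallelized model evaluations, so treat this purely as a sketch of the algorithm, with standard hyperparameter values assumed.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()]
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                         # respect parameter ranges
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest

# Toy "parameter fitting": recover (2.0, -1.0) from a quadratic error surface.
print(pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
          (np.array([-5.0, -5.0]), np.array([5.0, 5.0]))))
```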
2025-04-17 | Fast and Accurate Prediction of Antenna Reflection Coefficients in Planar Layered Media Environment via Generalized Scattering Matrix | Chenbo Shi, Shichen Liang, Xin Gu et.al. | 2504.12613 | | | Abstract (click to expand)The numerical algorithm for evaluating the reflection coefficient of an antenna in the presence of the planar layered medium is reformulated using the antenna's generalized scattering matrix (GSM). The interaction between the antenna and the layered medium is modeled through spherical-to-planar vector wave transformations, ensuring no approximations that could compromise computational accuracy. This theoretical framework significantly reduces algebraic complexity, resulting in a marked increase in the speed of antenna performance evaluation. Excluding the one-time preprocessing cost of obtaining the antenna's GSM in free space, the numerical evaluation speed of this method exceeds that of the commercial software FEKO by several orders of magnitude, while maintaining nearly identical accuracy. |
2025-04-16 | Continual Learning Strategies for 3D Engineering Regression Problems: A Benchmarking Study | Kaira M. Samuel, Faez Ahmed et.al. | 2504.12503 | link | | Abstract (click to expand)Engineering problems that apply machine learning often involve computationally intensive methods but rely on limited datasets. As engineering data evolves with new designs and constraints, models must incorporate new knowledge over time. However, high computational costs make retraining models from scratch infeasible. Continual learning (CL) offers a promising solution by enabling models to learn from sequential data while mitigating catastrophic forgetting, where a model forgets previously learned mappings. This work introduces CL to engineering design by benchmarking several CL methods on representative regression tasks. We apply these strategies to five engineering datasets and construct nine new engineering CL benchmarks to evaluate their ability to address forgetting and improve generalization. Preliminary results show that applying existing CL methods to these tasks improves performance over naive baselines. In particular, the Replay strategy achieved performance comparable to retraining in several benchmarks while reducing training time by nearly half, demonstrating its potential for real-world engineering workflows. The code and datasets used in this work will be available at: https://github.com/kmsamuel/cl-for-engineering-release. |
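Replay, the strategy that performed best here, is conceptually simple: keep a small buffer of past examples and mix them into each new task's batches. A minimal sketch of the buffer logic using reservoir sampling (one common choice; the capacity, mixing ratio, and training stub are illustrative assumptions):

```python
import random

class ReplayBuffer:
    """Reservoir-sampled memory of past (x, y) pairs for rehearsal (sketch)."""
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)   # reservoir sampling keeps a
            if j < self.capacity:             # uniform sample over the stream
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

buf = ReplayBuffer()
for task in range(3):                          # sequential tasks
    for i in range(1000):
        new_example = (f"x_{task}_{i}", f"y_{task}_{i}")
        batch = [new_example] + buf.sample(7)  # mix new data with replayed data
        # ... one gradient step on `batch` would go here ...
        buf.add(new_example)
print(len(buf.data), buf.sample(2))
```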
2025-04-16 | Deep Material Network: Overview, applications and current directions | Ting-Ju Wei, Wen-Ning Wan, Chuin-Shan Chen et.al. | 2504.12159 | | | Abstract (click to expand)Deep Material Network (DMN) has emerged as a powerful framework for multiscale material modeling, enabling efficient and accurate predictions of material behavior across different length scales. Unlike traditional machine learning approaches, the trainable parameters in DMN have direct physical interpretations, capturing the geometric characteristics of the microstructure rather than serving as purely statistical fitting parameters. Its hierarchical tree structure effectively encodes microstructural interactions and deformation mechanisms, allowing DMN to achieve a balance between accuracy and computational efficiency. This physics-informed architecture significantly reduces computational costs compared to direct numerical simulations while preserving essential microstructural physics. Furthermore, DMN can be trained solely on a linear elastic dataset while effectively extrapolating nonlinear responses during online prediction, making it a highly efficient and scalable approach for multiscale material modeling. This article provides a comprehensive review of DMN, detailing its motivation, underlying methodology, and recent advancements. We discuss key modeling aspects, including its hierarchical structure, training process, and the role of physics-based constraints in enhancing predictive accuracy. Furthermore, we highlight its applications in component-scale multiscale analysis and inverse parameter identification, demonstrating its capability to bridge microscale material behavior with macroscale engineering predictions. Finally, we discuss challenges and future directions in improving DMN's generalization capabilities and its potential extensions for broader applications in multiscale modeling. |
2025-04-16 | A viscoplasticity model with an invariant-based non-Newtonian flow rule for unidirectional thermoplastic composites | P. Hofman, D. Kovačević, F. P. van der Meer et.al. | 2504.12069 | | 40 pages, 20 figures, 3 tables | Abstract (click to expand)A three-dimensional mesoscopic viscoplasticity model for simulating rate-dependent plasticity and creep in unidirectional thermoplastic composites is presented. The constitutive model is a transversely isotropic extension of an isotropic finite strain viscoplasticity model for neat polymers. Rate-dependent plasticity and creep are described by a non-Newtonian flow rule where the viscosity of the material depends on an equivalent stress measure through an Eyring-type relation. In the present formulation, transverse isotropy is incorporated by defining the equivalent stress measure and flow rule as functions of transversely isotropic stress invariants. In addition, the Eyring-type viscosity function is extended with anisotropic pressure dependence. As a result of the formulation, plastic flow in fiber direction is effectively excluded and pressure dependence of the polymer matrix is accounted for. The re-orientation of the transversely isotropic plane during plastic deformations is incorporated in the constitutive equations, allowing for an accurate large deformation response. The formulation is fully implicit and a consistent linearization of the algorithmic constitutive equations is performed to derive the consistent tangent modulus. The performance of the mesoscopic constitutive model is assessed through a comparison with a micromechanical model for carbon/PEEK, with the original isotropic viscoplastic version for the polymer matrix and with hyperelastic fibers. The micromodel is first used to determine the material parameters of the mesoscale model with a few stress-strain curves. It is demonstrated that the mesoscale model gives a similar response to the micromodel under various loading conditions. Finally, the mesoscale model is validated against off-axis experiments on unidirectional thermoplastic composite plies. |
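For reference, an Eyring-type viscosity makes the flow rule strongly shear-thinning in an equivalent stress measure; in its generic isotropic form (a sketch of the standard relation, not the paper's transversely isotropic, pressure-dependent extension):

```latex
\eta(\bar{\tau}) = \eta_0\,
  \frac{\bar{\tau}/\tau_0}{\sinh\!\left(\bar{\tau}/\tau_0\right)},
\qquad
\dot{\gamma}^{\mathrm{p}} = \frac{\bar{\tau}}{\eta(\bar{\tau})}
  = \frac{\tau_0}{\eta_0}\,\sinh\!\left(\frac{\bar{\tau}}{\tau_0}\right),
```

so the material behaves like a Newtonian fluid with viscosity \eta_0 for \bar{\tau} \ll \tau_0 and flows exponentially faster beyond the characteristic stress \tau_0.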
2025-04-16 | Topological Analysis of Mixer Activities in the Bitcoin Network | Francesco Zola, Jon Ander Medina, Andrea Venturi et.al. | 2504.11924 | | The paper is presented at the 3rd IEEE International Workshop on Cryptocurrency Exchanges (CryptoEx 2025) | Abstract (click to expand)Cryptocurrency users increasingly rely on obfuscation techniques such as mixers, swappers, and decentralised or no-KYC exchanges to protect their anonymity. However, at the same time, these services are exploited by criminals to conceal and launder illicit funds. Among obfuscation services, mixers remain one of the most challenging entities to tackle. This is because their owners are often unwilling to cooperate with Law Enforcement Agencies, and technically, they operate as 'black boxes'. To better understand their functionalities, this paper proposes an approach to analyse the operations of mixers by examining their address-transaction graphs and identifying topological similarities to uncover common patterns that can define the mixer's modus operandi. The approach utilises community detection algorithms to extract dense topological structures and clustering algorithms to group similar communities. The analysis is further enriched by incorporating data from external sources related to known Exchanges, in order to understand their role in mixer operations. The approach is applied to dissect the Blender.io mixer activities within the Bitcoin blockchain, revealing: i) consistent structural patterns across address-transaction graphs; ii) that Exchanges play a key role, following a well-established pattern, which raises several concerns about their AML/KYC policies. This paper represents an initial step toward dissecting and understanding the complex nature of mixer operations in cryptocurrency networks and extracting their modus operandi. |
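The graph side of the approach can be sketched with NetworkX: build an address-transaction graph and extract dense communities. The clustering of similar communities and the Exchange enrichment are omitted, and the edges below are toy placeholders, not Blender.io data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Toy address-transaction edges: (address, transaction) pairs with hypothetical IDs.
G.add_edges_from([("addr_a", "tx_1"), ("addr_b", "tx_1"), ("addr_b", "tx_2"),
                  ("addr_c", "tx_2"), ("addr_d", "tx_3"), ("addr_e", "tx_3")])

communities = greedy_modularity_communities(G)   # dense topological structures
for i, c in enumerate(communities):
    print(i, sorted(c))
```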
2025-04-15 | Magnetic Field Conforming Formulations for Foil Windings | Louis Denis, Elias Paakkunainen, Paavo Rasilo et.al. | 2504.11191 | | This work has been submitted to the IEEE for possible publication | Abstract (click to expand)We extend the foil winding homogenization method to magnetic field conforming formulations. We first propose a full magnetic field foil winding formulation by analogy with magnetic flux density conforming formulations. We then introduce the magnetic scalar potential in non-conducting regions to improve the efficiency of the model. This leads to a significant reduction in the number of degrees of freedom, particularly in 3-D applications. The proposed models are verified on two frequency-domain benchmark problems: a 2-D axisymmetric problem and a 3-D problem. They reproduce results obtained with magnetic flux density conforming formulations and with resolved conductor models that explicitly discretize all turns. Moreover, the models are applied in the transient simulation of a high-temperature superconducting coil. In all investigated configurations, the proposed models provide reliable results while considerably reducing the size of the numerical problem to be solved. |
2025-04-15 | A study of troubled-cell indicators applied to finite volume methods using a novel monotonicity parameter | R. Shivananda Rao, M. Ramakrishna et.al. | 2504.11056 | | | Abstract (click to expand)We adapt a troubled-cell indicator developed for discontinuous Galerkin (DG) methods to the finite volume method (FVM) framework for solving hyperbolic conservation laws. This indicator depends solely on the cell-average data of the target cell and its immediate neighbours. Once the troubled-cells are identified, we apply the limiter only in these cells instead of applying in all computational cells. We introduce a novel technique to quantify the quality of the solution in the neighbourhood of the shock by defining a monotonicity parameter \(\mu\) . Numerical results from various two-dimensional simulations on the hyperbolic systems of Euler equations using a finite volume solver employing MUSCL reconstruction validate the performance of the troubled-cell indicator and the approach of limiting only in the troubled-cells. These results show that limiting only in the troubled-cells is preferable to limiting everywhere as it improves convergence without compromising on the solution accuracy. |
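A generic troubled-cell check using only a cell's average and its immediate neighbours can be sketched with a minmod slope comparison: cells where the limiter actually alters the reconstructed slope are flagged. This illustrates the selective-limiting workflow, not the paper's specific indicator or its monotonicity parameter \(\mu\).

```python
import numpy as np

def minmod(a, b, c):
    s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
    return np.where(np.abs(s) == 1.0,
                    s * np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c))),
                    0.0)

def troubled_cells(u, tol=1e-3):
    """Flag interior cells where minmod limiting changes the central slope (sketch)."""
    du_left = u[1:-1] - u[:-2]
    du_right = u[2:] - u[1:-1]
    slope = 0.5 * (du_left + du_right)
    limited = minmod(slope, du_left, du_right)
    return np.abs(slope - limited) > tol     # True where limiting kicks in

u = np.where(np.linspace(0, 1, 50) < 0.5, 1.0, 0.0)   # a step "shock"
print(np.where(troubled_cells(u))[0])                  # only cells near the jump
```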
2025-04-15 | Design and Verification of a Synchronus First In First Out (FIFO) | Yatheeswar Penta, Riadul Islam et.al. | 2504.10901 | | | Abstract (click to expand)This project focuses on designing and verifying a synchronous First In First Out (FIFO) memory, a critical component in digital systems for temporary data storage and seamless data transfer. The FIFO operates under a single clock domain, ensuring synchronized read and write operations, making it suitable for systems requiring high-speed, reliable data buffering. The design includes the FIFO's key features, such as read and write operations, full and empty flag generation, and pointer management for memory control. The FIFO was implemented using Verilog to define the Register Transfer Level (RTL) design, ensuring functionality and timing requirements were met. For verification, three approaches were employed: (1) UVM-based Verification: A Universal Verification Methodology (UVM) testbench was developed to test the FIFO design rigorously. The testbench includes components like interface, sequence item, driver, monitor, scoreboard, agent, and environment. Directed and random tests were performed to verify corner cases, such as simultaneous reads and writes, full and empty conditions, and overflow and underflow scenarios; (2) Traditional Verilog Testbench: A standalone Verilog testbench was also used to validate the functionality of the FIFO through directed test scenarios and waveform analysis; (3) FPGA implementation: Additionally, the design was implemented on an FPGA for real-time testing to verify its functionality and timing behavior in hardware. FPGA-based verification ensured the design performed as expected under practical conditions. The results confirmed the correct operation of the FIFO, including accurate data transfer, flag behavior, and timing synchronization. The project successfully demonstrated the robustness and reliability of the synchronous FIFO design, highlighting its importance in modern digital systems for efficient data handling and buffering. |
2025-04-14 | Finding Pathways in Reaction Networks guided by Energy Barriers using Integer Linear Programming | Adittya Pal, Rolf Fagerberg, Jakob Lykke Andersen et.al. | 2504.10609 | | | Abstract (click to expand)Analyzing synthesis pathways for target molecules in a chemical reaction network annotated with information on the kinetics of individual reactions is an area of active study. This work presents a computational methodology for searching for pathways in reaction networks which is based on integer linear programming and the modeling of reaction networks by directed hypergraphs. Often multiple pathways fit the given search criteria. To rank them, we develop an objective function based on physical arguments maximizing the probability of the pathway. We furthermore develop an automated pipeline to estimate the energy barriers of individual reactions in reaction networks. Combined, the methodology facilitates flexible and kinetically informed pathway investigations on large reaction networks by computational means, even for networks coming without kinetic annotation, such as those created via generative approaches for expanding molecular spaces. |
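Pathway search as an ILP can be sketched with PuLP: choose integer multiplicities for reactions (hyperedges) so that net production meets a target while minimizing a cost that stands in for the energy-barrier-based objective. The stoichiometry, costs, and species names below are toy assumptions, not the paper's networks.

```python
import pulp

# Toy network: each reaction maps (consumed, produced, cost ~ barrier) over species A, B, C.
reactions = {
    "r1": ({"A": 1}, {"B": 1}, 2.0),
    "r2": ({"B": 1}, {"C": 1}, 1.0),
    "r3": ({"A": 1}, {"C": 1}, 5.0),
}
prob = pulp.LpProblem("pathway", pulp.LpMinimize)
use = {r: pulp.LpVariable(r, lowBound=0, cat="Integer") for r in reactions}
prob += pulp.lpSum(cost * use[r] for r, (_, _, cost) in reactions.items())
for s in ["A", "B", "C"]:
    net = pulp.lpSum(prod.get(s, 0) * use[r] - cons.get(s, 0) * use[r]
                     for r, (cons, prod, _) in reactions.items())
    if s == "C":
        prob += net >= 1        # require one net unit of the target species
    elif s == "B":
        prob += net == 0        # intermediates must balance; feedstock A is free
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({r: int(use[r].value()) for r in reactions})  # expect r1+r2 chosen over r3
```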
2025-04-09 | Artificial Intelligence and the Dual Paradoxes: Examining the Interplay of Efficiency, Resource Consumption, and Labor Dynamics | Mfon Akpan, Adeyemi Adebayo et.al. | 2504.10503 | | 27 pages | Abstract (click to expand)Artificial Intelligence's (AI) rapid development and growth have not only transformed industries but also sparked important debates about its impacts on employment, resource allocation, and the ethics involved in decision-making. Understanding these shifts within an industry helps clarify how they influence society at large. Advancing AI technologies create a dual paradox: gains in efficiency alongside greater resource consumption and the displacement of traditional labor. In this context, we explore the impact of AI on energy consumption, human labor roles, and hybrid roles amid widespread human labor replacement. We used mixed methods involving qualitative and quantitative analyses of data identified from various sources. Findings suggest that AI increases energy consumption and has impacted human labor roles only to a minimal extent, considering that its applicability is limited to some tasks that require human judgment. |
2025-04-14 | Unleashing Expert Opinion from Social Media for Stock Prediction | Wanyun Zhou, Saizhuo Wang, Xiang Li et.al. | 2504.10078 | link | | Abstract (click to expand)While stock prediction task traditionally relies on volume-price and fundamental data to predict the return ratio or price movement trend, sentiment factors derived from social media platforms such as StockTwits offer a complementary and useful source of real-time market information. However, we find that most social media posts, along with the public sentiment they reflect, provide limited value for trading predictions due to their noisy nature. To tackle this, we propose a novel dynamic expert tracing algorithm that filters out non-informative posts and identifies both true and inverse experts whose consistent predictions can serve as valuable trading signals. Our approach achieves significant improvements over existing expert identification methods in stock trend prediction. However, when using binary expert predictions to predict the return ratio, similar to all other expert identification methods, our approach faces a common challenge of signal sparsity, with expert signals covering only about 4% of all stock-day combinations in our dataset. To address this challenge, we propose a dual graph attention neural network that effectively propagates expert signals across related stocks, enabling accurate prediction of return ratios and significantly increasing signal coverage. Empirical results show that our propagated expert-based signals not only exhibit strong predictive power independently but also work synergistically with traditional financial features. These combined signals significantly outperform representative baseline models in all quant-related metrics including predictive accuracy, return metrics, and correlation metrics, resulting in more robust investment strategies. We hope this work inspires further research into leveraging social media data for enhancing quantitative investment strategies. The code can be seen in https://github.com/wanyunzh/DualGAT. |
2025-04-14 | BO-SA-PINNs: Self-adaptive physics-informed neural networks based on Bayesian optimization for automatically designing PDE solvers | Rui Zhang, Liang Li, Stéphane Lanteri et.al. | 2504.09804 | link | 23 pages, 5 figures | Abstract (click to expand)Physics-informed neural networks (PINNs) are becoming a popular alternative method for solving partial differential equations (PDEs). However, they require dedicated manual modifications to the hyperparameters of the network, the sampling methods and loss function weights for different PDEs, which reduces the efficiency of the solvers. In this paper, we propose a general multi-stage framework, i.e. BO-SA-PINNs, to alleviate this issue. In the first stage, Bayesian optimization (BO) is used to select hyperparameters for the training process, and based on the results of the pre-training, the network architecture, learning rate, sampling points distribution and loss function weights suitable for the PDEs are automatically determined. The proposed hyperparameter search space, based on experimental results, can enhance the efficiency of BO in identifying optimal hyperparameters. After selecting the appropriate hyperparameters, we incorporate a global self-adaptive (SA) mechanism in the second stage. Using the pre-trained model and loss information in the second-stage training, the exponential moving average (EMA) method is employed to optimize the loss function weights, and residual-based adaptive refinement with distribution (RAR-D) is used to optimize the sampling points distribution. In the third stage, L-BFGS is used for stable training. In addition, we introduce a new activation function that enables BO-SA-PINNs to achieve higher accuracy. In numerical experiments, we conduct comparative and ablation experiments to verify the performance of the model on Helmholtz, Maxwell, Burgers and high-dimensional Poisson equations. The comparative experiment results show that our model can achieve higher accuracy and fewer iterations in test cases, and the ablation experiments demonstrate the positive impact of every improvement. |
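The EMA-based loss-weight balancing in the second stage can be sketched generically: track a smoothed magnitude of each loss term and reweight inversely so that no term dominates. The update rule below is an illustrative reading of such schemes, not the paper's exact formulation; the term names and decay rate are assumptions.

```python
class EMALossWeights:
    """Keep a per-term EMA of loss magnitudes; weight terms inversely to balance them (sketch)."""
    def __init__(self, names, beta=0.9):
        self.beta = beta
        self.ema = {n: 1.0 for n in names}

    def update(self, losses):                 # losses: dict of current scalar values
        for n, v in losses.items():
            self.ema[n] = self.beta * self.ema[n] + (1 - self.beta) * v
        total = sum(1.0 / e for e in self.ema.values())
        return {n: (1.0 / e) / total for n, e in self.ema.items()}  # normalized weights

w = EMALossWeights(["pde", "bc", "ic"])       # hypothetical PINN loss terms
print(w.update({"pde": 0.5, "bc": 0.05, "ic": 0.1}))
```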
2025-04-13 | Earthquake Simulation | Palak Chawla et.al. | 2504.09673 | | 4 pages | Abstract (click to expand)This paper presents a seismic activity simulator that models the effects of fault lines on surface pressure. This project uses C programming to create a fully interactive learning resource intended to educate users on the mechanics of earthquakes. The motivation behind this project is to make studying seismic activity more accessible, engaging, and cost-effective. |
2025-04-16 | Causal integration of chemical structures improves representations of microscopy images for morphological profiling | Yemin Yu, Neil Tenenholtz, Lester Mackey et.al. | 2504.09544 | link | 24 pages | Abstract (click to expand)Recent advances in self-supervised deep learning have improved our ability to quantify cellular morphological changes in high-throughput microscopy screens, a process known as morphological profiling. However, most current methods only learn from images, despite many screens being inherently multimodal, as they involve both a chemical or genetic perturbation as well as an image-based readout. We hypothesized that incorporating chemical compound structure during self-supervised pre-training could improve learned representations of images in high-throughput microscopy screens. We introduce a representation learning framework, MICON (Molecular-Image Contrastive Learning), that models chemical compounds as treatments that induce counterfactual transformations of cell phenotypes. MICON significantly outperforms classical hand-crafted features such as CellProfiler and existing deep-learning-based representation learning methods in challenging evaluation settings where models must identify reproducible effects of drugs across independent replicates and data-generating centers. We demonstrate that incorporating chemical compound information into the learning process provides consistent improvements in our evaluation setting and that modeling compounds specifically as treatments in a causal framework outperforms approaches that directly align images and compounds in a single representation space. Our findings point to a new direction for representation learning in morphological profiling, suggesting that methods should explicitly account for the multimodal nature of microscopy screening data. |
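The contrastive core of molecule-image alignment can be sketched with a symmetric InfoNCE loss between image embeddings and compound embeddings. Note the abstract reports that MICON's causal, treatment-based modeling outperforms this kind of direct alignment, so treat the snippet as the generic baseline it is contrasted against; dimensions and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb, mol_emb, temperature=0.07):
    """Symmetric InfoNCE: matched image/compound pairs attract, all others repel."""
    img = F.normalize(img_emb, dim=-1)
    mol = F.normalize(mol_emb, dim=-1)
    logits = img @ mol.T / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(len(img))           # diagonal entries are positives
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.T, labels))

loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```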
2025-04-11 | Exponential Shift: Humans Adapt to AI Economies | Kevin J McNamara, Rhea Pritham Marpu et.al. | 2504.08855 | | | Abstract (click to expand)This paper explores how artificial intelligence (AI) and robotics are transforming the global labor market. Human workers, limited to a 33% duty cycle due to rest and holidays, cost $14 to $55 per hour. In contrast, digital labor operates nearly 24/7 at just $0.10 to $0.50 per hour. We examine sectors like healthcare, education, manufacturing, and retail, finding that 40-70% of tasks could be automated. Yet, human skills like emotional intelligence and adaptability remain essential. Humans process 5,000-20,000 tokens (units of information) per hour, while AI far exceeds this, though its energy use (3.5 to 7 times higher than humans) could offset 20-40% of cost savings. Using real-world examples, such as AI in journalism and law, we illustrate these dynamics and propose six strategies, such as a 4-day workweek and retraining, to ensure a fair transition to an AI-driven economy. |
2025-04-11 | Exploring Cognitive Attributes in Financial Decision-Making | Mallika Mainali, Rosina O. Weber et.al. | 2504.08849 | | 7 pages, 2 figures. Presented at the SIAM International Conference on Data Mining (SDM25), METACOG-25: 2nd Workshop on Metacognitive Prediction of AI Behavior | Abstract (click to expand)Cognitive attributes are fundamental to metacognition, shaping how individuals process information, evaluate choices, and make decisions. To develop metacognitive artificial intelligence (AI) models that reflect human reasoning, it is essential to account for the attributes that influence reasoning patterns and decision-maker behavior, often leading to different or even conflicting choices. This makes it crucial to incorporate cognitive attributes in designing AI models that align with human decision-making processes, especially in high-stakes domains such as finance, where decisions have significant real-world consequences. However, existing AI alignment research has primarily focused on value alignment, often overlooking the role of individual cognitive attributes that distinguish decision-makers. To address this issue, this paper (1) analyzes the literature on cognitive attributes, (2) establishes five criteria for defining them, and (3) categorizes 19 domain-specific cognitive attributes relevant to financial decision-making. These three components provide a strong basis for developing AI systems that accurately reflect and align with human decision-making processes in financial contexts. |
2025-04-09 | Analogical Learning for Cross-Scenario Generalization: Framework and Application to Intelligent Localization | Zirui Chen, Zhaoyang Zhang, Ziqing Xing et.al. | 2504.08811 | link | | Abstract (click to expand)Existing learning models often exhibit poor generalization when deployed across diverse scenarios, mainly because the underlying reference frame of the data varies with the deployment environment and settings. However, although the data of each scenario has its own distinct reference frame, its generation generally follows the same underlying physical rules. Based on these findings, this article proposes a brand-new universal deep learning framework named analogical learning (AL), which provides a highly efficient way to implicitly retrieve the reference frame information associated with a scenario and then make accurate predictions by relative analogy across scenarios. Specifically, an elegant bipartite neural network architecture called Mateformer is designed, the first part of which calculates the relativity within multiple feature spaces between the input data and a small amount of embedded data from the current scenario, while the second part uses this relativity to guide the nonlinear analogy. We apply AL to the typical multi-scenario learning problem of intelligent wireless localization in cellular networks. Extensive experiments show that AL achieves state-of-the-art accuracy, stable transferability, and robust adaptation to new scenarios without any tuning, outperforming conventional methods with a precision improvement of nearly two orders of magnitude. All data and code are available at https://github.com/ziruichen-research/ALLoc. |
2025-04-11 | Control Co-Design Under Uncertainty for Offshore Wind Farms: Optimizing Grid Integration, Energy Storage, and Market Participation | Himanshu Sharma, Wei Wang, Bowen Huang et.al. | 2504.08555 | | | Abstract (click to expand)Offshore wind farms (OWFs) are set to significantly contribute to global decarbonization efforts. Developers often use a sequential approach to optimize design variables and market participation for grid-integrated offshore wind farms. However, this method can lead to sub-optimal system performance, and uncertainties associated with renewable resources are often overlooked in decision-making. This paper proposes a control co-design approach, optimizing design and control decisions for integrating OWFs into the power grid while considering energy market and primary frequency market participation. Additionally, we introduce optimal sizing solutions for energy storage systems deployed onshore to enhance revenue for OWF developers over time. This framework addresses uncertainties related to wind resources and energy prices. We analyze five U.S. west-coast offshore wind farm locations and potential interconnection points, as identified by the Bureau of Ocean Energy Management (BOEM). Results show that optimized control co-design solutions can increase market revenue by 3.2\% and provide flexibility in managing wind resource uncertainties. |
2025-04-11 | Approximation Algorithms for the UAV Path Planning with Object Coverage Constraints | Jiawei Wang, Vincent Chau, Weiwei Wu et.al. | 2504.08405 | | | Abstract (click to expand)We study a path planning problem for an Unmanned Aerial Vehicle (UAV) in which a specific set of objects must be observed while ensuring a given quality of observation. Our goal is to determine the shortest path for the UAV. This paper proposes an offline algorithm with an approximation ratio of \((2+2n)(1+\epsilon)\), where \(\epsilon >0\) is a small constant and \(n\) is the number of objects. We then propose several online algorithms in which objects are discovered during the process. To evaluate the performance of these algorithms, we conduct experimental comparisons. Our results show that the online algorithms perform similarly to the offline algorithm, but with significantly faster execution times ranging from 0.01 seconds to 200 seconds. We also show that our methods yield solutions with costs comparable to those obtained by the Gurobi optimizer, which requires 30,000 seconds of runtime. |
2025-04-11 | A 120 lines code for isogeometric topology optimization and its extension to 3D in MATLAB | Xianda Xie, Zhihui Ou, Aodi Yang et.al. | 2504.08233 | | | Abstract (click to expand)In this paper, a compact and efficient code implementation is presented for the isogeometric topology optimization (ITO) approach. With the aid of the Bézier extraction technique, a derived explicit stiffness matrix computation formula is applied to all B-spline IGA elements of rectangular shape under the linear elasticity assumption. Using the aforementioned explicit formula, the stiffness matrix calculation and updating of IGA are significantly simplified, allowing the current ITO code to be implemented in a single main function without calling subroutines such as IGA mesh generation and Gaussian quadrature. Both two-dimensional (2D) and three-dimensional (3D) cases are considered, resulting in the iga_top120 and iga_top3D257 MATLAB codes for 2D and 3D design problems, respectively. Numerical examples validate the effectiveness of our open-source codes, with several user-defined input parameters essentially identical to those used in top88 and top3D. Therefore, iga_top120 and iga_top3D257 provide an effective entry point for transitioning from FEM-based TO codes to ITO. |
2025-04-17 | Variational quantum and neural quantum states algorithms for the linear complementarity problem | Saibal De, Oliver Knitter, Rohan Kodati et.al. | 2504.08141 | | 13 pages, 5 figures, to appear in Philosophical Transactions of the Royal Society A | Abstract (click to expand)Variational quantum algorithms (VQAs) are promising hybrid quantum-classical methods designed to leverage the computational advantages of quantum computing while mitigating the limitations of current noisy intermediate-scale quantum (NISQ) hardware. Although VQAs have been demonstrated as proofs of concept, their practical utility in solving real-world problems -- and whether quantum-inspired classical algorithms can match their performance -- remains an open question. We present a novel application of the variational quantum linear solver (VQLS) and its classical neural quantum states-based counterpart, the variational neural linear solver (VNLS), as key components within a minimum map Newton solver for a complementarity-based rigid body contact model. We demonstrate using the VNLS that our solver accurately simulates the dynamics of rigid spherical bodies during collision events. These results suggest that quantum and quantum-inspired linear algebra algorithms can serve as viable alternatives to standard linear algebra solvers for modeling certain physical systems. |
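The minimum-map Newton solver into which the paper plugs its VQLS/VNLS linear solvers can be sketched classically. The formulation below is the standard minimum-map reformulation of a linear complementarity problem (LCP); the dense `numpy` solve is a stand-in for the quantum or neural linear solver, and the example matrix is an assumption:

```python
import numpy as np

def min_map_newton(M, q, tol=1e-10, max_iter=50):
    """Semismooth Newton on H(x) = min(x, Mx + q) = 0 for the LCP:
    find x >= 0 with w = Mx + q >= 0 and x . w = 0 (sketch)."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = M @ x + q
        H = np.minimum(x, w)              # minimum map; zero at a solution
        if np.linalg.norm(H) < tol:
            break
        # Generalized Jacobian: rows of M where w is the smaller argument,
        # identity rows where x is. The inner solve is where VQLS/VNLS
        # would be substituted in the paper's pipeline.
        active = w < x
        J = np.eye(n)
        J[active, :] = M[active, :]
        x = x - np.linalg.solve(J, H)     # Newton step
    return x

# Tiny contact-like example with a symmetric positive definite M.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, 2.0])
x = min_map_newton(M, q)
print(x, M @ x + q)                        # complementary pair (x, w)
```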
2025-04-10 | Dual Engines of Thoughts: A Depth-Breadth Integration Framework for Open-Ended Analysis | Fei-Hsuan Yu, Yun-Cheng Chou, Teng-Ruei Chen et.al. | 2504.07872 | | | Abstract (click to expand)We propose the Dual Engines of Thoughts (DEoT), an analytical framework for comprehensive open-ended reasoning. While traditional reasoning frameworks primarily focus on finding "the best answer" or "the correct answer" for single-answer problems, DEoT is specifically designed for "open-ended questions," enabling both broader and deeper analytical exploration. The framework centers on three key components: a Base Prompter for refining user queries, a Solver Agent that orchestrates task decomposition, execution, and validation, and a Dual-Engine System consisting of a Breadth Engine (to explore diverse impact factors) and a Depth Engine (to perform deep investigations). This integrated design allows DEoT to balance wide-ranging coverage with in-depth analysis, and it is highly customizable, enabling users to adjust analytical parameters and tool configurations based on specific requirements. Experimental results show that DEoT excels in addressing complex, multi-faceted questions, achieving a total win rate of 77-86% compared to existing reasoning models, thus highlighting its effectiveness in real-world applications. |
2025-04-10 | A Review of HPC-Accelerated CFD in National Security and Defense | James Afful et.al. | 2504.07837 | | | Abstract (click to expand)Using High-Performance Computing (HPC), Computational Fluid Dynamics (CFD) now serves as an essential component in defense-related national security applications including missile interception and hypersonic propulsion as well as naval stealth optimization and urban hazard dispersion. This review combines two decades of open-source and public-domain research on HPC-accelerated CFD in defense, addressing three key questions: Which security-sensitive simulations have utilized open-source CFD frameworks such as OpenFOAM, SU2 and ADflow? Which HPC techniques, such as MPI domain decomposition and GPU acceleration together with hybrid parallelism, best enhance open-source frameworks to manage large defense CFD simulations? Which technological advancements and research voids currently drive the directional development of the field? Examining several research studies sourced from NASA, DoD HPC centers, and academic institutions, this review classifies scientific contributions into air, maritime, and space domains. Modular frameworks like NavyFOAM and SU2 and ADflow's adjoint-based solvers show how custom open-source solutions support workflows with rapid completion of multi-million cell simulations. The conclusion highlights new trends that combine exascale readiness with machine learning surrogate models for real-time CFD applications and interdisciplinary HPC-driven multi-physics integration to deliver practical insights for improving CFD use in defense research and development. |
2025-04-10 | Benchmarking Image Embeddings for E-Commerce: Evaluating Off-the-Shelf Foundation Models, Fine-Tuning Strategies and Practical Trade-offs | Urszula Czerwinska, Cenk Bircanoglu, Jeremy Chamoux et.al. | 2504.07567 | | accepted at Future Technologies Conference (FTC 2025) | Abstract (click to expand)We benchmark foundation model image embeddings for classification and retrieval in e-Commerce, evaluating their suitability for real-world applications. Our study spans embeddings from pre-trained convolutional and transformer models trained via supervised, self-supervised, and text-image contrastive learning. We assess full fine-tuning and transfer learning (top-tuning) on six diverse e-Commerce datasets: fashion, consumer goods, cars, food, and retail. Results show full fine-tuning consistently performs well, while text-image and self-supervised embeddings can match its performance with less training. While supervised embeddings remain stable across architectures, SSL and contrastive embeddings vary significantly, often benefiting from top-tuning. Top-tuning emerges as an efficient alternative to full fine-tuning, reducing computational costs. We also explore cross-tuning, noting its impact depends on dataset characteristics. Our findings offer practical guidelines for embedding selection and fine-tuning strategies, balancing efficiency and performance. |
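"Top-tuning" in the sense used above means freezing the pre-trained backbone and training only a light classifier on its embeddings. A minimal sketch, assuming placeholder embeddings in place of a real frozen backbone and dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for frozen-backbone embeddings of product images (n x d);
# in practice these would come from a pre-trained vision model.
X = rng.normal(size=(1000, 512))
y = rng.integers(0, 10, size=1000)      # e.g. ten product categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Only this linear head is trained; the backbone stays fixed, which is
# what makes top-tuning much cheaper than full fine-tuning.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("top-tuning accuracy:", probe.score(X_te, y_te))
```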
2025-04-06 | SolRPDS: A Dataset for Analyzing Rug Pulls in Solana Decentralized Finance | Abdulrahman Alhaidari, Bhavani Kalal, Balaji Palanisamy et.al. | 2504.07132 | link | Accepted paper to appear in the 15th ACM Conference on Data and Application Security and Privacy (CODASPY 2025) | Abstract (click to expand)Rug pulls in Solana have caused significant damage to users interacting with Decentralized Finance (DeFi). A rug pull occurs when developers exploit users' trust and drain liquidity from token pools on Decentralized Exchanges (DEXs), leaving users with worthless tokens. Although rug pulls in Ethereum and Binance Smart Chain (BSC) have gained attention recently, analysis of rug pulls in Solana remains largely under-explored. In this paper, we introduce SolRPDS (Solana Rug Pull Dataset), the first public rug pull dataset derived from Solana's transactions. We examine approximately four years of DeFi data (2021-2024) that covers suspected and confirmed tokens exhibiting rug pull patterns. The dataset, derived from 3.69 billion transactions, consists of 62,895 suspicious liquidity pools. The data is annotated for inactivity states, which is a key indicator, and includes several detailed liquidity activities such as additions, removals, and last interaction as well as other attributes such as inactivity periods and withdrawn token amounts, to help identify suspicious behavior. Our preliminary analysis reveals clear distinctions between legitimate and fraudulent liquidity pools and we found that 22,195 tokens in the dataset exhibit rug pull patterns during the examined period. SolRPDS can support a wide range of future research on rug pulls including the development of data-driven and heuristic-based solutions for real-time rug pull detection and mitigation. |
2025-04-09 | Machine Learning (ML) based Reduced Order Modeling (ROM) for linear and non-linear solid and structural mechanics | Mikhael Tannous, Chady Ghnatios, Eivind Fonn et.al. | 2504.06860 | | | Abstract (click to expand)Multiple model reduction techniques have been proposed to tackle linear and nonlinear problems. Intrusive model order reduction techniques exhibit high accuracy levels; however, they are rarely used as a standalone industrial tool because of the high level of knowledge required to construct and use them. Moreover, the computation time benefit is compromised for highly nonlinear problems. On the other hand, non-intrusive methods often struggle with accuracy in nonlinear cases, typically requiring a large design of experiments and a large number of snapshots to achieve reliable performance. However, generating the stiffness matrix in a non-intrusive approach presents an optimal way to align accuracy with efficiency, combining the advantages of both intrusive and non-intrusive methods. This work introduces a lightly intrusive model order reduction technique that employs machine learning within a Proper Orthogonal Decomposition framework to achieve this alliance. By leveraging outputs from commercial full-order models, this method constructs a reduced-order model that operates effectively without requiring expert user intervention. The proposed technique can approximate linear non-affine as well as nonlinear terms. It is showcased for linear and nonlinear structural mechanics problems. |
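The Proper Orthogonal Decomposition step at the core of such a framework is typically computed from a snapshot SVD. A self-contained sketch with a synthetic low-rank snapshot matrix standing in for full-order solver output:

```python
import numpy as np

n_dof, n_snap = 2000, 50
rng = np.random.default_rng(1)
# Fake full-order snapshots with an underlying low-rank structure plus noise.
modes = rng.normal(size=(n_dof, 5))
coeffs = rng.normal(size=(5, n_snap))
S = modes @ coeffs + 1e-3 * rng.normal(size=(n_dof, n_snap))

U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1    # retain 99.99% of energy
Phi = U[:, :r]                                  # POD basis

# Reduced coordinates of a snapshot and its reconstruction error; an ML
# regressor would then be trained on such reduced coordinates.
u = S[:, 0]
u_r = Phi @ (Phi.T @ u)
print("rank:", r, "rel. error:", np.linalg.norm(u - u_r) / np.linalg.norm(u))
```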
2025-04-08 | Neural Network Enhanced Polyconvexification of Isotropic Energy Densities in Computational Mechanics | Loïc Balazi, Timo Neumeier, Malte A. Peter et.al. | 2504.06425 | | 24 pages, 14 figures | Abstract (click to expand)We present a neural network approach for fast evaluation of parameter-dependent polyconvex envelopes, which are crucial in computational mechanics. Our method uses a neural network architecture that inherently encodes polyconvexity in the main variable by combining a feature extraction layer that computes the minors function on the signed singular value characterisation of isotropic energy densities with a partially input convex neural network (PICNN). Polyconvex underestimation is weakly enforced by penalisation during training, as are the symmetries of the function. As a guiding example, we focus on a well-known isotropic damage problem, reformulated in terms of signed singular values, and apply a splitting approach to reduce the dimensionality of the parameter space, thereby making training more tractable. Numerical experiments show that the networks achieve sufficient accuracy for engineering applications while providing high compression and significant speed-up over traditional polyconvexification schemes. Most importantly, the network adapts to varying physical or material parameters, enabling real-time polyconvexification in large-scale computational mechanics scenarios. |
2025-04-08 | Quantum Annealing for Combinatorial Optimization: A Benchmarking Study | Seongmin Kim, Sang-Woo Ahn, In-Saeng Suh et.al. | 2504.06201 | | | Abstract (click to expand)Quantum annealing (QA) has the potential to significantly improve solution quality and reduce time complexity in solving combinatorial optimization problems compared to classical optimization methods. However, due to the limited number of qubits and their connectivity, the QA hardware did not show such an advantage over classical methods in past benchmarking studies. Recent advancements in QA with more than 5,000 qubits, enhanced qubit connectivity, and the hybrid architecture promise to realize the quantum advantage. Here, we use a quantum annealer with state-of-the-art techniques and benchmark its performance against classical solvers. To compare their performance, we solve over 50 optimization problem instances represented by large and dense Hamiltonian matrices using quantum and classical solvers. The results demonstrate that a state-of-the-art quantum solver has higher accuracy (~0.013%) and a significantly faster problem-solving time (~6,561x) than the best classical solver. Our results highlight the advantages of leveraging QA over classical counterparts, particularly in hybrid configurations, for achieving high accuracy and substantially reduced problem solving time in large-scale real-world optimization problems. |
2025-04-08 | Coupling approaches with non-matching grids for classical linear elasticity and bond-based peridynamic models in 1D | Patrick Diehl, Emily Downing, Autumn Edwards et.al. | 2504.06093 | | | Abstract (click to expand)Local-nonlocal coupling approaches provide a means to combine the computational efficiency of local models and the accuracy of nonlocal models. To facilitate the coupling of the two models, non-matching grids are often desirable as nonlocal grids usually require a finer resolution than local grids. In that case, it is often convenient to resort to interpolation operators so that models can exchange information in the overlap regions when nodes from the two grids do not coincide. This paper studies three existing coupling approaches, namely 1) a method that enforces matching displacements in an overlap region, 2) a variant that enforces a constraint on the stresses instead, and 3) a method that considers a variable horizon in the vicinity of the interfaces. The effect of the interpolation order and of the grid ratio on the performance of the three coupling methods with non-matching grids is carefully studied on one-dimensional examples using polynomial manufactured solutions. The numerical results show that the degree of the interpolants should be chosen with care to avoid introducing additional modeling errors, or simply minimize these errors, in the coupling approach. |
2025-04-08 | MLPROP -- an open interactive web interface for thermophysical property prediction with machine learning | Marco Hoffmann, Thomas Specht, Nicolas Hayer et.al. | 2504.05970 | | 6 pages, 2 figures | Abstract (click to expand)Machine learning (ML) enables the development of powerful methods for predicting thermophysical properties with unprecedented scope and accuracy. However, technical barriers like cumbersome implementation in established workflows hinder their application in practice. With MLPROP, we provide an interactive web interface for directly applying advanced ML methods to predict thermophysical properties without requiring ML expertise, thereby substantially increasing the accessibility of novel models. MLPROP currently includes models for predicting the vapor pressure of pure components (GRAPPA), activity coefficients and vapor-liquid equilibria in binary mixtures (UNIFAC 2.0, mod. UNIFAC 2.0, and HANNA), and a routine to fit NRTL parameters to the model predictions. MLPROP will be continuously updated and extended and is accessible free of charge via https://ml-prop.mv.rptu.de/. MLPROP removes the barrier to learning and experimenting with new ML-based methods for predicting thermophysical properties. The source code of all models is available as open source, which allows integration into existing workflows. |
2025-04-08 | Physics-aware generative models for turbulent fluid flows through energy-consistent stochastic interpolants | Nikolaj T. Mücke, Benjamin Sanderse et.al. | 2504.05852 | link | | Abstract (click to expand)Generative models have demonstrated remarkable success in domains such as text, image, and video synthesis. In this work, we explore the application of generative models to fluid dynamics, specifically for turbulence simulation, where classical numerical solvers are computationally expensive. We propose a novel stochastic generative model based on stochastic interpolants, which enables probabilistic forecasting while incorporating physical constraints such as energy stability and divergence-freeness. Unlike conventional stochastic generative models, which are often agnostic to underlying physical laws, our approach embeds energy consistency by making the parameters of the stochastic interpolant learnable coefficients. We evaluate our method on a benchmark turbulence problem - Kolmogorov flow - demonstrating superior accuracy and stability over state-of-the-art alternatives such as autoregressive conditional diffusion models (ACDMs) and PDE-Refiner. Furthermore, we achieve stable results for significantly longer roll-outs than standard stochastic interpolants. Our results highlight the potential of physics-aware generative models in accelerating and enhancing turbulence simulations while preserving fundamental conservation properties. |
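The stochastic interpolant construction underlying such models connects two states through a noisy path whose noise vanishes at both endpoints. The fixed coefficient choices below are illustrative assumptions; in the paper these coefficients are learnable so that energy consistency can be enforced:

```python
import numpy as np

def interpolant(x0, x1, t, rng):
    """x_t = alpha(t) x0 + beta(t) x1 + gamma(t) z, with alpha(0)=beta(1)=1,
    alpha(1)=beta(0)=0, and gamma vanishing at both endpoints (sketch)."""
    alpha = 1.0 - t
    beta = t
    gamma = np.sqrt(2.0 * t * (1.0 - t))   # zero at t=0 and t=1
    z = rng.normal(size=x0.shape)
    return alpha * x0 + beta * x1 + gamma * z

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 64))   # e.g. current velocity-field snapshot
x1 = rng.normal(size=(64, 64))   # next snapshot
x_half = interpolant(x0, x1, 0.5, rng)
print(x_half.shape)
```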
2025-04-07 | Deep Reinforcement Learning Algorithms for Option Hedging | Andrei Neagu, Frédéric Godin, Leila Kosseim et.al. | 2504.05521 | link | | Abstract (click to expand)Dynamic hedging is a financial strategy that consists in periodically transacting one or multiple financial assets to offset the risk associated with a correlated liability. Deep Reinforcement Learning (DRL) algorithms have been used to find optimal solutions to dynamic hedging problems by framing them as sequential decision-making problems. However, most previous work assesses the performance of only one or two DRL algorithms, making an objective comparison across algorithms difficult. In this paper, we compare the performance of eight DRL algorithms in the context of dynamic hedging: Monte Carlo Policy Gradient (MCPG) and Proximal Policy Optimization (PPO), along with four variants of Deep Q-Learning (DQL) and two variants of Deep Deterministic Policy Gradient (DDPG). Two of these variants represent a novel application to the task of dynamic hedging. In our experiments, we use the Black-Scholes delta hedge as a baseline and simulate the dataset using a GJR-GARCH(1,1) model. Results show that MCPG, followed by PPO, obtains the best performance in terms of the root semi-quadratic penalty. Moreover, MCPG is the only algorithm to outperform the Black-Scholes delta hedge baseline with the allotted computational budget, possibly due to the sparsity of rewards in our environment. |
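The baseline setup named above (delta hedging on GJR-GARCH paths) is easy to sketch. All parameter values are illustrative assumptions, not the paper's calibration; premium income and interest are omitted for brevity:

```python
import numpy as np
from scipy.stats import norm

def gjr_garch_path(n_days, omega=1e-6, alpha=0.05, gamma=0.05, beta=0.9,
                   s0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    h = omega / (1.0 - alpha - gamma / 2.0 - beta)   # unconditional variance
    s, prices = s0, [s0]
    for _ in range(n_days):
        z = rng.normal()
        s *= np.exp(np.sqrt(h) * z)                  # daily log-return
        prices.append(s)
        # GJR asymmetry: negative shocks (z < 0) raise variance more.
        h = omega + (alpha + gamma * (z < 0)) * h * z**2 + beta * h
    return np.array(prices)

def bs_delta(s, k, tau, sigma):
    d1 = (np.log(s / k) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

prices = gjr_garch_path(60)                          # 60 trading days
k, sigma, dt = 100.0, 0.2, 1.0 / 252.0
pnl, delta_prev = 0.0, 0.0
for t, s in enumerate(prices[:-1]):
    tau = (len(prices) - 1 - t) * dt                 # time to maturity
    delta = bs_delta(s, k, tau, sigma)
    pnl -= (delta - delta_prev) * s                  # rebalance share holding
    delta_prev = delta
pnl += delta_prev * prices[-1] - max(prices[-1] - k, 0.0)  # unwind vs payoff
print("terminal hedging P&L (ex premium):", round(pnl, 4))
```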
2025-04-07 | GraphPINE: Graph Importance Propagation for Interpretable Drug Response Prediction | Yoshitaka Inoue, Tianfan Fu, Augustin Luna et.al. | 2504.05454 | | | Abstract (click to expand)Explainability is necessary for many tasks in biomedical research. Recent explainability methods have focused on attention, gradient, and Shapley value. These do not handle data with strong associated prior knowledge and fail to constrain explainability results based on known relationships between predictive features. We propose GraphPINE, a graph neural network (GNN) architecture leveraging domain-specific prior knowledge to initialize node importance optimized during training for drug response prediction. Typically, a manual post-prediction step examines literature (i.e., prior knowledge) to understand returned predictive features. While node importance can be obtained for gradient and attention after prediction, node importance from these methods lacks complementary prior knowledge; GraphPINE seeks to overcome this limitation. GraphPINE differs from other GNN gating methods by utilizing an LSTM-like sequential format. We introduce an importance propagation layer that 1) unifies updates for the feature matrix and node importance and 2) uses GNN-based graph propagation of feature values. This initialization and updating mechanism allows for informed feature learning and improved graph representation. We apply GraphPINE to cancer drug response prediction using drug screening and gene data collected for over 5,000 gene nodes included in a gene-gene graph, with a drug-target interaction (DTI) graph providing initial importance. The gene-gene graph and DTIs were obtained from curated sources and weighted by the article count discussing relationships between drugs and genes. GraphPINE achieves a PR-AUC of 0.894 and ROC-AUC of 0.796 across 952 drugs. Code is available at https://anonymous.4open.science/r/GraphPINE-40DE. |
2025-04-07 | A hydro-geomechanical porous-media model to study effects of engineered carbonate precipitation in faults | Yue Wang, Holger Class et.al. | 2504.05171 | | | Abstract (click to expand)Hydro-geomechanical models are required to predict or understand the impact of subsurface engineering applications as, for example, in gas storage in geological formations. This study puts a focus on engineered carbonate precipitation through biomineralization in a fault zone of a cap-rock to reduce gas leakage from a reservoir. Besides hydraulic properties like porosity and permeability, precipitated carbonates also change the mechanical properties of the rock. We present a conceptual modeling approach implemented into the open-source simulator Dumux and, after verification examples, at hand of a CO2-storage scenario, we discuss impacts of biomineralization on the stress distribution in the rock and potentially altered risks of fault reactivations and induced seismic events. The generic study shows the tendency towards increased stiffness due to precipitated carbonate, which may cause shear failure events to occur earlier than in an untreated setup, while the magnitude of the seismicity is smaller. |
2025-04-07 | Scaling Graph Neural Networks for Particle Track Reconstruction | Alok Tripathy, Alina Lazar, Xiangyang Ju et.al. | 2504.04670 | link | | Abstract (click to expand)Particle track reconstruction is an important problem in high-energy physics (HEP), necessary to study properties of subatomic particles. Traditional track reconstruction algorithms scale poorly with the number of particles within the accelerator. The Exa.TrkX project, to alleviate this computational burden, introduces a pipeline that reduces particle track reconstruction to edge classification on a graph, and uses graph neural networks (GNNs) to produce particle tracks. However, this GNN-based approach is memory-prohibitive and skips graphs that would exceed GPU memory. We introduce improvements to the Exa.TrkX pipeline to train on samples of input particle graphs, and show that these improvements generalize to higher precision and recall. In addition, we adapt performance optimizations, introduced for GNN training, to fit our augmented Exa.TrkX pipeline. These optimizations provide a \(2\times\) speedup over our baseline implementation in PyTorch Geometric. |
2025-04-06 | SECQUE: A Benchmark for Evaluating Real-World Financial Analysis Capabilities | Noga Ben Yoash, Meni Brief, Oded Ovadia et.al. | 2504.04596 | | Benchmark available at: https://huggingface.co/datasets/nogabenyoash/SecQue | Abstract (click to expand)We introduce SECQUE, a comprehensive benchmark for evaluating large language models (LLMs) in financial analysis tasks. SECQUE comprises 565 expert-written questions covering SEC filings analysis across four key categories: comparison analysis, ratio calculation, risk assessment, and financial insight generation. To assess model performance, we develop SECQUE-Judge, an evaluation mechanism leveraging multiple LLM-based judges, which demonstrates strong alignment with human evaluations. Additionally, we provide an extensive analysis of various models' performance on our benchmark. By making SECQUE publicly available, we aim to facilitate further research and advancements in financial AI. |
2025-04-06 | A model agnostic eXplainable AI based fuzzy framework for sensor constrained Aerospace maintenance applications | Bharadwaj Dogga, Anoop Sathyan, Kelly Cohen et.al. | 2504.04541 | | | Abstract (click to expand)Machine Learning methods have extensively evolved to support industrial big data methods and the corresponding needs in gas turbine maintenance and prognostics. However, most unsupervised methods need extensively labeled data to perform predictions across many dimensions. Cutting-edge small and medium applications do not necessarily maintain operational sensors and data acquisition, given rising costs and diminishing profits. We propose a framework to make sensor maintenance priority decisions using a combination of SHAP, UMAP, and Fuzzy C-means clustering. An aerospace jet engine dataset is used as a case study. |
2025-04-06 | Robust and scalable nonlinear solvers for finite element discretizations of biological transportation networks | Jan Haskovec, Peter Markowich, Simone Portaro et.al. | 2504.04447 | | | Abstract (click to expand)We develop robust and scalable fully implicit nonlinear finite element solvers for the simulations of biological transportation networks driven by the gradient flow minimization of a non-convex energy cost functional. Our approach employs a discontinuous space for the conductivity tensor that allows us to guarantee the preservation of its positive semi-definiteness throughout the entire minimization procedure arising from the time integration of the gradient flow dynamics using a backward Euler scheme. Extensive tests in two and three dimensions demonstrate the robustness and performance of the solver, highlight the sensitivity of the emergent network structures to mesh resolution and topology, and validate the resilience of the linear preconditioner to the ill-conditioning of the model. The implementation achieves near-optimal parallel scaling on large-scale, high-performance computing platforms. To the best of our knowledge, the network formation system has never been simulated in three dimensions before. Consequently, our three-dimensional results are the first of their kind. |
2025-04-06 | From Coarse to Fine: A Physics-Informed Self-Guided Flow Diffusion Model | Ruoyan Li, Zijie Huang, Yizhou Sun et.al. | 2504.04375 | | | Abstract (click to expand)Machine learning methods are widely explored as a promising way to reconstruct high-fidelity computational fluid dynamics (CFD) data from faster-to-compute low-fidelity input. Diffusion models have achieved great success as they can reconstruct high-fidelity data from low-fidelity inputs at arbitrary resolution without re-training. However, most existing approaches assume that low-fidelity data is generated artificially via downsampling high-fidelity data. In reality, low-fidelity data is produced by numerical solvers that use a coarser resolution from the start, leading to substantial differences compared to high-fidelity data, especially at long range. Solver-generated low-fidelity data usually sacrifices fine-grained details, such as small-scale vortices, compared to high-fidelity data. To bridge this gap, we propose a novel diffusion model for reconstruction in which both low- and high-fidelity data come straight from numerical solvers. Our findings show that state-of-the-art models struggle to generate fine-scale details when faced with solver-generated low-fidelity inputs. To address this challenge, we propose an \textit{Importance Weight} strategy during training that serves as a form of self-guidance, along with a training-free \textit{Residual Correction} approach during inference that embeds physical insights into the model. Together, these techniques steer the diffusion model toward more accurate reconstructions. Experimental results on four 2D turbulent flow datasets demonstrate the efficacy of our proposed method. |
2025-04-05 | Dynamic Hedging Strategies in Derivatives Markets with LLM-Driven Sentiment and News Analytics | Jie Yang, Yiqiu Tang, Yongjie Li et.al. | 2504.04295 | | Accepted by IJCNN 2025 | Abstract (click to expand)Dynamic hedging strategies are essential for effective risk management in derivatives markets, where volatility and market sentiment can greatly impact performance. This paper introduces a novel framework that leverages large language models (LLMs) for sentiment analysis and news analytics to inform hedging decisions. By analyzing textual data from diverse sources like news articles, social media, and financial reports, our approach captures critical sentiment indicators that reflect current market conditions. The framework allows for real-time adjustments to hedging strategies, adapting positions based on continuous sentiment signals. Backtesting results on historical derivatives data reveal that our dynamic hedging strategies achieve superior risk-adjusted returns compared to conventional static approaches. The incorporation of LLM-driven sentiment analysis into hedging practices presents a significant advancement in decision-making processes within derivatives trading. This research showcases how sentiment-informed dynamic hedging can enhance portfolio management and effectively mitigate associated risks. |
2025-04-05 | Cross-Asset Risk Management: Integrating LLMs for Real-Time Monitoring of Equity, Fixed Income, and Currency Markets | Jie Yang, Yiqiu Tang, Yongjie Li et.al. | 2504.04292 | | Accepted by IJCNN 2025 | Abstract (click to expand)Large language models (LLMs) have emerged as powerful tools in the field of finance, particularly for risk management across different asset classes. In this work, we introduce a Cross-Asset Risk Management framework that utilizes LLMs to facilitate real-time monitoring of equity, fixed income, and currency markets. This innovative approach enables dynamic risk assessment by aggregating diverse data sources, ultimately enhancing decision-making processes. Our model effectively synthesizes and analyzes market signals to identify potential risks and opportunities while providing a holistic view of asset classes. By employing advanced analytics, we leverage LLMs to interpret financial texts, news articles, and market reports, ensuring that risks are contextualized within broader market narratives. Extensive backtesting and real-time simulations validate the framework, showing increased accuracy in predicting market shifts compared to conventional methods. The focus on real-time data integration enhances responsiveness, allowing financial institutions to manage risks adeptly under varying market conditions and promoting financial stability through the advanced application of LLMs in risk analysis. |
2025-04-04 | A Unit-Cell Shape Optimization Approach for Maximizing Heat Transfer in Periodic Fin Arrays at Constant Solid Temperature | Maarten Blommaert, Arthur Vangeffelen, Mehmet Basaran et.al. | 2504.03436 | | Submitted to Structural and Multidisciplinary Optimization | Abstract (click to expand)Periodic fin structures are often employed to enhance heat transfer in compact cooling solutions and heat exchangers. Adjoint-based optimization methods are able to further increase the heat transfer by optimizing the fin geometry. However, obtaining optimal geometries remains challenging in general because of the high computational cost of full array simulations. In this paper, a unit cell optimization approach is presented that starts from recently developed macro-scale models for isothermal solid structures. The models exploit the periodicity of the problem to reduce the computational cost of evaluating the array heat transfer to that of a single periodic unit cell. By combining these models with a geometrically-constrained free-shape optimization approach, optimal fin geometries are obtained for the periodic fin array that maintain a minimal fin distance. Moreover, using an augmented Lagrangian approach, also the average pressure gradient and barycenter of the fin can be fixed. On a fictitious use-case, heat transfer increases of up to 104 \% are obtained. When the flow rate is additionally constrained to maintain a high effectiveness, only up to an 8 \% heat transfer increase is observed. Furthermore, the errors of the unit-cell optimization approach are investigated, indicating that, with a good choice of cost functional formulation, errors as low as 1-2 \% can be obtained for the periodically developed part of the array. Finally, the entrance effect on the heat transfer is found to be non-negligible, with a contribution of 10-15 \% for the considered fin array. This advocates for further research to extend the unit-cell models toward improved modeling of entrance effects. |
2025-04-04 | Optimal Sizing and Material Choice for Additively Manufactured Compact Plate Heat Exchangers | Mehmet Basaran, Frederik Rogiers, Martine Baelmans et.al. | 2504.03372 | | | Abstract (click to expand)With advancements in additive manufacturing (AM) capabilities, new opportunities arise to design compact heat exchangers (cHEXs) that leverage AM's degrees of freedom (DOFs) to enhance energy and material efficiency. However, excessive size reduction in counterflow cHEXs can compromise effectiveness due to axial heat conduction through the solid material, influenced by thermal conductivity and wall thickness. This study investigates how AM material selection and thin-wall production limitations might constrain the core size of counterflow plate heat exchangers when targeting maximum power density. An optimization framework evaluates power densities for six materials: plastic, austenitic steel, Al2O3, AlN, aluminum, and copper. Evaluations are conducted under constant effectiveness and pressure drop while accounting for AM-specific plate thickness limits and a lower bound on plate spacing to address fouling. Across all scenarios, copper cHEXs exhibit the lowest power density, despite high thermal conductivity. Without constraints on plate thickness and spacing, the optimal plastic cHEX achieves a power density 1800x greater than the steel baseline, while copper decreases by a factor of 0.98. With equal plate thickness of 0.5 mm for all materials, plastic retains the highest power density, 12.2x more than copper. Introducing a fouling constraint of 0.8 mm plate spacing shifts the optimal material to austenitic steel. When material-specific plate thicknesses are considered, the plastic cHEX achieves the highest power density, five times greater than copper, due to superior thin-wall resolution. This study highlights the impact of AM constraints on the energy and material efficiency of cHEXs, and shows that low-conductivity materials like plastic or austenitic steel can outperform high-conductivity materials like copper in compact designs. |
2025-04-03 | Adaptive Finite State Projection with Quantile-Based Pruning for Solving the Chemical Master Equation | Aditya Dendukuri, Linda Petzold et.al. | 2504.03070 | | | Abstract (click to expand)We present an adaptive Finite State Projection (FSP) method for efficiently solving the Chemical Master Equation (CME) with rigorous error control. Our approach integrates time-stepping with dynamic state-space truncation, balancing accuracy and computational cost. Krylov subspace methods approximate the matrix exponential, while quantile-based pruning controls state-space growth by removing low-probability states. Theoretical error bounds ensure that the truncation error remains bounded by the pruned mass at each step, which is user-controlled, and does not propagate forward in time. Numerical experiments on biochemical systems, including the Lotka-Volterra, Michaelis-Menten, and bi-stable toggle switch models, demonstrate the accuracy and efficiency of the approach. |
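One adaptive step of such a scheme (Krylov action of the matrix exponential followed by quantile-style pruning with a user-controlled mass budget) can be sketched on a toy birth-death CME; the generator and budget below are illustrative assumptions:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

# Toy birth-death CME generator on states {0, ..., n-1} (molecule counts).
n, birth, death = 200, 5.0, 0.1
rows, cols, vals = [], [], []
for i in range(n):
    out_rate = 0.0
    if i + 1 < n:                 # birth: i -> i+1
        rows.append(i + 1); cols.append(i); vals.append(birth)
        out_rate += birth
    if i > 0:                     # death: i -> i-1
        rows.append(i - 1); cols.append(i); vals.append(death * i)
        out_rate += death * i
    rows.append(i); cols.append(i); vals.append(-out_rate)
A = csr_matrix((vals, (rows, cols)), shape=(n, n))

p = np.zeros(n); p[0] = 1.0       # start with zero molecules
p = expm_multiply(A, p)           # Krylov-type step: p <- exp(A * dt) p, dt = 1

# Quantile-style pruning: drop the largest set of low-probability states
# whose total mass stays below the budget; the removed mass bounds the
# per-step truncation error, so the vector is NOT renormalised.
budget = 1e-8
order = np.argsort(p)
cum = np.cumsum(p[order])
p[order[cum < budget]] = 0.0
print("states kept:", np.count_nonzero(p))
```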
2025-04-03 | Atrial constitutive neural networks | Mathias Peirlinck, Kevin Linka, Ellen Kuhl et.al. | 2504.02748 | | | Abstract (click to expand)This work presents a novel approach for characterizing the mechanical behavior of atrial tissue using constitutive neural networks. Based on experimental biaxial tensile test data of healthy human atria, we automatically discover the most appropriate constitutive material model, thereby overcoming the limitations of traditional, pre-defined models. This approach offers a new perspective on modeling atrial mechanics and is a significant step towards improved simulation and prediction of cardiac health. |
2025-04-03 | Grammar-based Ordinary Differential Equation Discovery | Karin L. Yu, Eleni Chatzi, Georgios Kissas et.al. | 2504.02630 | | | Abstract (click to expand)The understanding and modeling of complex physical phenomena through dynamical systems has historically driven scientific progress, as it provides the tools for predicting the behavior of different systems under diverse conditions through time. The discovery of dynamical systems has been indispensable in engineering, as it allows for the analysis and prediction of complex behaviors for computational modeling, diagnostics, prognostics, and control of engineered systems. Joining recent efforts that harness the power of symbolic regression in this domain, we propose a novel framework for the end-to-end discovery of ordinary differential equations (ODEs), termed Grammar-based ODE Discovery Engine (GODE). The proposed methodology combines formal grammars with dimensionality reduction and stochastic search for efficiently navigating high-dimensional combinatorial spaces. Grammars allow us to seed domain knowledge and structure for both constraining as well as exploring the space of candidate expressions. GODE proves to be more sample- and parameter-efficient than state-of-the-art transformer-based models and to discover more accurate and parsimonious ODE expressions than both genetic programming- and other grammar-based methods for more complex inference tasks, such as the discovery of structural dynamics. Thus, we introduce a tool that could play a catalytic role in dynamics discovery tasks, including modeling, system identification, and monitoring tasks. |
2025-04-03 | A Multi-Level Sentiment Analysis Framework for Financial Texts | Yiwei Liu, Junbo Wang, Lei Long et.al. | 2504.02429 | link | 14pages, 11 figures, 8 tables | Abstract (click to expand)Existing financial sentiment analysis methods often fail to capture the multi-faceted nature of risk in bond markets due to their single-level approach and neglect of temporal dynamics. We propose Multi-Level Sentiment Analysis based on pre-trained language models (PLMs) and large language models (LLMs), a novel framework that systematically integrates firm-specific micro-level sentiment, industry-specific meso-level sentiment, and duration-aware smoothing to model the latency and persistence of textual impact. Applying our framework to a comprehensive Chinese bond market corpus that we constructed (2013-2023, 1.39M texts), we extracted a daily composite sentiment index. Empirical results show statistically measurable improvements in credit spread forecasting when incorporating sentiment (3.25% MAE and 10.96% MAPE reduction), with sentiment shifts closely correlating with major social risk events and firm-specific crises. This framework provides a more nuanced understanding of sentiment across different market levels while accounting for the temporal evolution of sentiment effects. |
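The duration-aware smoothing idea (textual impact is latent and persistent rather than instantaneous) can be illustrated with a simple exponential-decay aggregation. The half-life and decay form are assumptions for illustration, not the paper's exact smoothing rule:

```python
import numpy as np

def smooth_sentiment(daily, half_life=5.0):
    """Each day's composite index aggregates past sentiment with
    exponential decay, so shocks persist but fade (sketch)."""
    lam = np.log(2.0) / half_life
    out = np.zeros_like(daily, dtype=float)
    for t in range(len(daily)):
        ages = t - np.arange(t + 1)          # age in days of each past signal
        out[t] = np.sum(daily[: t + 1] * np.exp(-lam * ages))
    return out

raw = np.array([0.0, 0.8, 0.1, -0.5, 0.0, 0.0, 0.2])   # daily net sentiment
print(np.round(smooth_sentiment(raw), 3))   # the day-1 shock decays slowly
```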
2025-04-03 | Solving adhesive rough contact problems with Atomic Force Microscope data | Maria Rosaria Marulli, Jacopo Bonari, Pasqualantonio Pingue et.al. | 2504.02307 | | | Abstract (click to expand)This study presents an advanced numerical framework that integrates experimentally acquired Atomic Force Microscope (AFM) data into high-fidelity simulations for adhesive rough contact problems, bridging the gap between experimental physics and computational mechanics. The proposed approach extends the eMbedded Profile for Joint Roughness (MPJR) interface finite element method to incorporate both surface topography and spatially varying adhesion properties, imported directly from AFM measurements. The adhesion behavior is modeled using a modified Lennard-Jones potential, which is locally parameterized based on the AFM-extracted adhesion peak force and energy dissipation data. The effectiveness of this method is demonstrated through 2D and 3D finite element simulations of a heterogeneous PS-LDPE (polystyrene matrix with low-density polyethylene inclusions) sample, where the bulk elastic properties are also experimentally characterized via AFM. The results highlight the significance of accounting for both surface adhesion variability and material bulk heterogeneity in accurately predicting contact responses. |
2025-04-03 | Parallel Market Environments for FinRL Contests | Keyi Wang, Kairong Xiao, Xiao-Yang Liu Yanglet et.al. | 2504.02281 | | | Abstract (click to expand)Reinforcement learning has shown great potential in finance. We have organized the FinRL Contests 2023-2025 featuring different financial tasks. Large language models have a strong capability to process financial texts. Integrating LLM-generated signals into FinRL is a new task, enabling agents to use both structured market data and unstructured financial text. To address the sampling bottleneck during training, we introduce GPU-based parallel market environments to improve sampling speed. In this paper, we summarize the parallel market environments used in FinRL Contests 2023-2025. Two new environments incorporate LLM-generated signals and support massively parallel simulation. Contestants utilize these environments to train agents for stock and cryptocurrency trading tasks. |
2025-04-04 | Stock Price Prediction Using Triple Barrier Labeling and Raw OHLCV Data: Evidence from Korean Markets | Sungwoo Kang et.al. | 2504.02249 | | 7 pages, 2 figures | Abstract (click to expand)This paper demonstrates that deep learning models trained on raw OHLCV (open-high-low-close-volume) data can achieve comparable performance to traditional machine learning (ML) models using technical indicators for stock price prediction in Korean markets. While previous studies have emphasized the importance of technical indicators and feature engineering, we show that a simple LSTM network trained on raw OHLCV data alone can match the performance of sophisticated ML models that incorporate technical indicators. Using a dataset of Korean stocks from 2006 to 2024, we optimize the triple barrier labeling parameters to achieve balanced label proportions with a 29-day window and 9\% barriers. Our experiments reveal that LSTM networks achieve similar performance to traditional machine learning models like XGBoost, despite using only raw OHLCV data without any technical indicators. Furthermore, we identify that the optimal window size varies with model hidden size, with a configuration of window size 100 and hidden size 8 yielding the best performance. Additionally, our results confirm that using full OHLCV data provides better predictive accuracy compared to using only close price or close price with volume. These findings challenge conventional approaches to feature engineering in financial forecasting and suggest that simpler approaches focusing on raw data and appropriate model selection may be more effective than complex feature engineering strategies. |
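Triple-barrier labeling as used above assigns +1 or -1 when the price first touches an upper or lower return barrier inside a fixed window, and 0 if the window (vertical barrier) expires first. The 29-day window and 9% barriers follow the abstract; the price path is synthetic:

```python
import numpy as np

def triple_barrier_labels(close, window=29, barrier=0.09):
    labels = np.zeros(len(close), dtype=int)
    for t in range(len(close) - window):
        # Cumulative returns along the forward path from day t.
        path = close[t + 1 : t + 1 + window] / close[t] - 1.0
        up = np.argmax(path >= barrier) if np.any(path >= barrier) else None
        dn = np.argmax(path <= -barrier) if np.any(path <= -barrier) else None
        if up is not None and (dn is None or up < dn):
            labels[t] = 1        # upper (profit-taking) barrier hit first
        elif dn is not None:
            labels[t] = -1       # lower (stop-loss) barrier hit first
        # else: vertical barrier (end of window) -> label stays 0
    return labels

rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.02, size=500)))
labels = triple_barrier_labels(close)
print(np.bincount(labels + 1))   # counts of -1, 0, +1 labels
```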
2025-04-04 | A User-Tunable Machine Learning Framework for Step-Wise Synthesis Planning | Shivesh Prakash, Hans-Arno Jacobsen, Viki Kumar Prasad et.al. | 2504.02191 | link | | Abstract (click to expand)We introduce MHNpath, a machine learning-driven retrosynthetic tool designed for computer-aided synthesis planning. Leveraging modern Hopfield networks and novel comparative metrics, MHNpath efficiently prioritizes reaction templates, improving the scalability and accuracy of retrosynthetic predictions. The tool incorporates a tunable scoring system that allows users to prioritize pathways based on cost, reaction temperature, and toxicity, thereby facilitating the design of greener and cost-effective reaction routes. We demonstrate its effectiveness through case studies involving complex molecules from ChemByDesign, showcasing its ability to predict novel synthetic and enzymatic pathways. Furthermore, we benchmark MHNpath against existing frameworks, replicating experimentally validated "gold-standard" pathways from PaRoutes. Our case studies reveal that the tool can generate shorter, cheaper, moderate-temperature routes employing green solvents, as exemplified by compounds such as dronabinol, arformoterol, and lupinine. |
2025-04-02 | A Quality Diversity Approach to Evolving Model Rockets | Jacob Schrum, Cody Crosby et.al. | 2504.02177 | link | In Genetic and Evolutionary Computation Conference (GECCO '25), July 14-18, 2025, Malaga, Spain. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3712256.3726338 | Abstract (click to expand)Model rocketry presents a design task accessible to undergraduates while remaining an interesting challenge. Allowing for variation in fins, nose cones, and body tubes presents a rich design space containing numerous ways to achieve various altitudes. Therefore, when exploring possible designs computationally, it makes sense to apply a method that produces various possibilities for decision-makers to choose from: Quality Diversity (QD). The QD methods MAP-Elites, CMA-ME, and CMA-MAE are applied to model rocket design using the open-source OpenRocket software to characterize the behavior and determine the fitness of evolved designs. Selected rockets were manufactured and launched to evaluate them in the real world. Simulation results demonstrate that CMA-ME produces the widest variety of rocket designs, which is surprising given that CMA-MAE is a more recent method designed to overcome shortcomings with CMA-ME. Real-world testing demonstrates that a wide range of standard and unconventional designs are viable, though issues with the jump from simulation to reality cause some rockets to perform unexpectedly. This paper provides a case study on applying QD to a task accessible to a broader audience than industrial engineering tasks and uncovers unexpected results about the relative performance of different QD algorithms. |
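The MAP-Elites algorithm at the heart of this comparison maintains one elite per cell of a discretized behavior space. A minimal sketch in which fitness and behavior are toy stand-ins for OpenRocket-simulated altitude and rocket geometry descriptors:

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 20                                   # cells per behavior dimension
archive = {}                                # (i, j) -> (fitness, genome)

def evaluate(x):
    fitness = -np.sum((x - 0.5) ** 2)       # stand-in for simulated altitude
    behavior = (x[0], x[1])                 # stand-in for e.g. length, mass
    return fitness, behavior

def cell(behavior):
    return tuple(min(GRID - 1, int(b * GRID)) for b in behavior)

for _ in range(20000):
    if archive and rng.random() < 0.9:      # mutate a random existing elite
        _, parent = archive[list(archive)[rng.integers(len(archive))]]
        x = np.clip(parent + rng.normal(0, 0.05, size=4), 0, 1)
    else:                                   # or sample a fresh genome
        x = rng.random(4)
    f, b = evaluate(x)
    key = cell(b)
    if key not in archive or f > archive[key][0]:
        archive[key] = (f, x)               # keep the best design per cell

print("filled cells:", len(archive),
      "best fitness:", max(v[0] for v in archive.values()))
```

CMA-ME and CMA-MAE replace the random mutation with CMA-ES-style emitters over the same archive structure, which is why the three methods are directly comparable on this task.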
2025-04-02 | Responsible Innovation: A Strategic Framework for Financial LLM Integration | Ahmadreza Tavasoli, Maedeh Sharbaf, Seyed Mohamad Madani et.al. | 2504.02165 | | 27 pages, 3 figures | Abstract (click to expand)Financial institutions of all sizes are increasingly adopting Large Language Models (LLMs) to enhance credit assessments, deliver personalized client advisory services, and automate various language-intensive processes. However, effectively deploying LLMs requires careful management of stringent data governance requirements, heightened demands for interpretability, ethical responsibilities, and rapidly evolving regulatory landscapes. To address these challenges, we introduce a structured six-decision framework specifically designed for the financial sector, guiding organizations systematically from initial feasibility assessments to final deployment strategies. The framework encourages institutions to: (1) evaluate whether an advanced LLM is necessary at all, (2) formalize robust data governance and privacy safeguards, (3) establish targeted risk management mechanisms, (4) integrate ethical considerations early in the development process, (5) justify the initiative's return on investment (ROI) and strategic value, and only then (6) choose the optimal implementation pathway -- open-source versus proprietary, or in-house versus vendor-supported -- aligned with regulatory requirements and operational realities. By linking strategic considerations with practical steps such as pilot testing, maintaining comprehensive audit trails, and conducting ongoing compliance evaluations, this decision framework offers a structured roadmap for responsibly leveraging LLMs. Rather than acting as a rigid, one-size-fits-all solution, it shows how advanced language models can be thoughtfully integrated into existing workflows -- balancing innovation with accountability to uphold stakeholder trust and regulatory integrity. |
2025-04-02 | Vectorised Parallel in Time methods for low-order discretizations with application to Porous Media problems | Christian Engwer, Alexander Schell, Nils-Arne Dreier et.al. | 2504.02117 | | | Abstract (click to expand)High order methods have shown great potential to overcome performance issues in simulations of partial differential equations (PDEs) on modern hardware; still, many users stick to low-order, matrix-based simulations, in particular in porous media applications. Heterogeneous coefficients and low regularity of the solution are reasons not to employ high order discretizations. We present a new approach for the simulation of instationary PDEs that allows to partially mitigate these performance problems. By reformulating the original problem, we derive a parallel-in-time integrator that increases the arithmetic intensity and introduces additional structure into the problem. It thereby helps accelerate matrix-based simulations on modern hardware architectures. Based on a system for multiple time steps, we formulate a matrix equation that can be solved using vectorised solvers like block Krylov methods. The structure of this approach makes it applicable to a wide range of linear and nonlinear problems. In our numerical experiments we present first results for three different PDEs: a linear convection-diffusion equation, a nonlinear diffusion-reaction equation, and a realistic example based on the Richards' equation. |
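The "system for multiple time steps" idea can be sketched by collecting several implicit Euler steps of u' = Au into one block lower-bidiagonal system. This is a simplified stand-in for the paper's reformulation, with a dense solve in place of a vectorised block Krylov method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, dt = 50, 8, 0.01
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # toy diffusion-like operator
u0 = rng.normal(size=n)

# All-at-once system: (I - dt A) u_k - u_{k-1} = 0 for k = 1..s, stacked.
M = np.eye(n) - dt * A                           # single-step matrix
K = np.kron(np.eye(s), M) - np.kron(np.eye(s, k=-1), np.eye(n))
rhs = np.zeros(n * s)
rhs[:n] = u0                                     # first block couples to u0
U = np.linalg.solve(K, rhs).reshape(s, n)        # all s steps in one solve

# Cross-check against sequential implicit Euler stepping.
u = u0.copy()
for k in range(s):
    u = np.linalg.solve(M, u)
    assert np.allclose(u, U[k])
print("all-at-once solution matches sequential stepping")
```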
2025-04-02 | Focal Mechanism Uncertainty Quantification In Ground Motion Simulations Of Le Teil Earthquake | Valeria Soto, Fernando Lopez-Caballero et.al. | 2504.01868 | | | Abstract (click to expand)Ensuring the seismic safety of nuclear power plants (NPPs) is essential, especially for facilities that rely on base isolation to reduce earthquake impacts. For understanding the seismic response, accurate models are key to predicting the ground motions, which are generally sensitive to various factors, including earthquake source parameters such as the focal mechanism, i.e., the strike, dip, and rake angles. This study examines how uncertainties in these parameters affect ground motion predictions. The analysis is based on the SMATCH benchmark, which provides a standardized approach for evaluating the seismic response of the Cruas-Meysse NPP in France during the Mw 4.9 Le-Teil earthquake of 2019. A set of 27 3D high-fidelity numerical simulations was performed using a spectral-element method, each incorporating different focal mechanism variations. These simulations provide an effective approach for investigating the factors behind the exceptional ground motion observed during this event. To quantify uncertainty, the simulated ground motions were compared to recorded data using two well-established goodness-of-fit criteria: one assessing time-frequency domain characteristics and another focusing on the characterization of the ground motion signals by intensity measures. Results highlight the significant influence of focal mechanism variability on ground motion predictions, especially of the rake angle, which showed the strongest correlation with the waveform and intensity measures. |
2025-04-02 | Design and Experimental Validation of an Urban Microclimate Tool Integrating Indoor-Outdoor Detailed Longwave Radiative Fluxes at District Scale | Marie-Hélène Azam, Julien Berger, Edouard Walther et.al. | 2504.01736 | | | Abstract (click to expand)Numerical simulation is a powerful tool for assessing the causes of an Urban Heat Island (UHI) effect or quantifying the impact of mitigation solutions on outdoor and indoor thermal comfort. For that purpose, several models have been developed at the district scale. At this scale, the outside surface energy budget is detailed; however, the building models are greatly simplified and treated as a boundary condition of the district-scale model. This shortcoming precludes investigating the effect of the urban microclimate on indoor building conditions. The aim of this work is to improve the representation of the physical phenomena involved in the building models of a district model. For that purpose, the model integrates fully detailed indoor and outdoor long-wave radiative fluxes. The numerical model is based on finite differences to solve conduction through all the surfaces and on the radiosity method to solve long-wave radiative heat fluxes inside and outside. Calculated temperatures and heat fluxes are evaluated with respect to \textit{in situ} measurements from an experimental demonstrator over 14 sensors and a 24-day period. Results are also compared to a state-of-the-art simulation tool and show an improvement in the RMSE of the modeled surface temperatures of \(0.9 \ \mathsf{^{\,\circ}C}\) to \(2.1 \ \mathsf{^{\,\circ}C}\). |
2025-04-02 | Anomaly Detection for Hybrid Butterfly Subspecies via Probability Filtering | Bo-Kai Ruan, Yi-Zeng Fang, Hong-Han Shuai et.al. | 2504.01671 | link | AAAI'25 Workshop in Anomaly Detection in Scientific Domains | Abstract (click to expand)Detecting butterfly hybrids requires knowledge of the parent subspecies, and the process can be tedious when encountering a new subspecies. This study focuses on a specific scenario where a model trained to recognize hybrids of species A can generalize to species B when B biologically mimics A. Since species A and B share similar patterns, we leverage BioCLIP as our feature extractor to capture features based on their taxonomy. Consequently, the algorithm designed for species A can be transferred to B, as their hybrid and non-hybrid patterns exhibit similar relationships. To determine whether a butterfly is a hybrid, we apply the proposed probability filtering and color jittering to augment the data and simulate the mimicry. With these approaches, we achieved second place in the official development phase. Our code is publicly available at https://github.com/Justin900429/NSF-HDR-Challenge. |
2025-04-02 | A computational framework for evaluating tire-asphalt hysteretic friction including pavement roughness | Ivana Ban, Jacopo Bonari, Marco Paggi et.al. | 2504.01511 | | | Abstract (click to expand)Pavement surface textures obtained by a photogrammetry-based method for data acquisition and analysis are employed to investigate whether the related roughness descriptors are comparable to the frictional performance evaluated by finite element analysis. Pavement surface profiles are obtained from 3D digital surface models created with Close-Range Orthogonal Photogrammetry. To characterize the roughness features of the analyzed profiles, selected texture parameters were calculated from the profile geometry. The parameter values were compared to the frictional performance obtained by numerical simulations. Contact simulations are performed according to a dedicated finite element scheme where surface roughness is directly embedded into a special class of interface finite elements. Simulations were performed for different case scenarios, and the obtained results showed a notable trend between roughness descriptors and friction performance, indicating promising potential for this numerical method to be consistently employed to predict the frictional properties of actual pavement surface profiles. |
2025-04-02 | Multi-convex Programming for Discrete Latent Factor Models Prototyping | Hao Zhu, Shengchao Yan, Jasper Hoffmann et.al. | 2504.01431 | link | | Abstract (click to expand)Discrete latent factor models (DLFMs) are widely used in various domains such as machine learning, economics, neuroscience, and psychology. Currently, fitting a DLFM to a dataset relies on a customized solver for each individual model, which requires substantial implementation effort and is limited to the specific targeted instance of DLFMs. In this paper, we propose a generic framework based on CVXPY, which allows users to specify and solve the fitting problem of a wide range of DLFMs, including both regression and classification models, within a very short script. Our framework is flexible and inherently supports the integration of regularization terms and constraints on the DLFM parameters and latent factors, so that users can easily prototype the DLFM structure according to their dataset and application scenario. We introduce our open-source Python implementation and illustrate the framework in several examples. |
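The multi-convex structure the paper builds on can be sketched with CVXPY directly: fixing one factor makes the problem convex in the other, so a generic fitting loop alternates two convex solves. The model below (X ≈ WZ with a simplex-constrained Z) and its regularizer are illustrative choices, not the paper's API.

```python
# Minimal sketch (illustrative, not the paper's framework): alternating
# convex updates for a latent factor model X ~= W @ Z, with each column
# of Z relaxed to the probability simplex.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20, 30, 4
X = rng.standard_normal((m, n))

W_val = rng.standard_normal((m, k))     # initial guess for the loadings
Z = cp.Variable((k, n), nonneg=True)
W = cp.Variable((m, k))

for it in range(10):
    # Step 1: fix W, solve the convex problem in Z (simplex constraint).
    prob_Z = cp.Problem(cp.Minimize(cp.sum_squares(X - W_val @ Z)),
                        [cp.sum(Z, axis=0) == 1])
    prob_Z.solve()
    Z_val = Z.value
    # Step 2: fix Z, solve the convex problem in W (with an L1 regularizer).
    prob_W = cp.Problem(cp.Minimize(cp.sum_squares(X - W @ Z_val)
                                    + 0.1 * cp.norm1(W)))
    prob_W.solve()
    W_val = W.value
```

Each subproblem is an ordinary convex program, so swapping losses, regularizers, or constraints is a one-line change, which is the prototyping flexibility the abstract emphasizes.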
2025-04-02 | Accelerating Blockchain Scalability: New Models for Parallel Transaction Execution in the EVM | Souradeep Das, Konpat Preechakul, Jonas Bäumer et.al. | 2504.01370 | | | Abstract (click to expand)As the number of decentralized applications and users on Ethereum grows, the ability of the blockchain to efficiently handle a growing number of transactions becomes increasingly strained. Ethereum's current execution model relies heavily on sequential processing, meaning that operations are processed one after the other, which creates significant bottlenecks for future scalability demands. While scalability solutions for Ethereum exist, they inherit the limitations of the EVM, restricting the extent to which they can scale. This paper proposes a novel solution to enable maximally parallelizable executions within Ethereum, built out of three self-sufficient approaches. These include strategies by which Ethereum transaction state accesses can be strategically and efficiently predetermined, as well as a proposal for how gas-based incentivization mechanisms could enforce a maximally parallelizable network. |
2025-04-01 | A batch production scheduling problem in a reconfigurable hybrid manufacturing-remanufacturing system | Behdin Vahedi-Nouria, Mohammad Rohaninejad, Zdeněk Hanzálek et.al. | 2504.00605 | | | Abstract (click to expand)In recent years, remanufacturing of End-of-Life (EOL) products has been adopted by manufacturing sectors as a competent practice to enhance their sustainability, resiliency, and market share. Due to the mass customization of products and the high volatility of the market, processing new products and remanufacturing EOLs in the same shared facility, namely a Hybrid Manufacturing-Remanufacturing System (HMRS), is a means of keeping such production efficient. Accordingly, customized production capabilities are required to increase flexibility, which can be suitably provided under the Reconfigurable Manufacturing System (RMS) paradigm. Despite the advantages of utilizing RMS technologies in HMRSs, production management of such systems suffers from excessive complexity. Hence, this study concentrates on the production scheduling of an HMRS consisting of non-identical parallel reconfigurable machines where the orders can be grouped into batches. In this regard, Mixed-Integer Linear Programming (MILP) and Constraint Programming (CP) models are devised to formulate the problem. Furthermore, an efficient solution method is developed based on a Logic-based Benders Decomposition (LBBD) approach. The warm start technique is also implemented by providing a decent initial solution to the MILP model. Computational experiments attest to the LBBD method's superiority over the MILP, CP, and warm-started MILP models, obtaining an average gap of about 2%, and it also provides valuable managerial insights. |
2025-04-01 | Towards Calibrating Financial Market Simulators with High-frequency Data | Peng Yang, Junji Ren, Feng Wang et.al. | 2504.00538 | | | Abstract (click to expand)The fidelity of financial market simulation is restricted by the so-called "non-identifiability" difficulty when calibrating with high-frequency data. This paper first analyzes the inherent loss of data information underlying this difficulty, and proposes to use the Kolmogorov-Smirnov (K-S) test as the objective function for high-frequency calibration. Empirical studies verify that K-S offers better identifiability when calibrating high-frequency data, while also leading to a much harder multi-modal landscape in the calibration space. To this end, we propose an adaptive stochastic ranking-based negatively correlated search algorithm to improve the balance between exploration and exploitation. Experimental results on both simulated data and real market data demonstrate that the proposed method obtains up to a 36.0% improvement in high-frequency data calibration problems over the compared methods. |
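A minimal sketch of the proposed objective, assuming a hypothetical one-parameter `simulate` function in place of a real market simulator: the two-sample K-S statistic between observed and simulated returns is minimized over the calibration parameter. The paper pairs this objective with a negatively correlated search algorithm because the landscape is multi-modal; the bounded scalar search below is only a stand-in.

```python
# Minimal sketch: the two-sample Kolmogorov-Smirnov statistic as a
# calibration objective between simulated and observed returns.
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
observed = rng.standard_t(df=3, size=5000) * 1e-3   # stand-in for market returns

def simulate(theta, size=5000):
    # Hypothetical one-parameter simulator; a real agent-based or
    # stochastic market simulator would go here.
    return rng.standard_t(df=3, size=size) * theta

def ks_objective(theta):
    res = ks_2samp(observed, simulate(theta))
    return res.statistic                            # smaller = distributions closer

opt = minimize_scalar(ks_objective, bounds=(1e-5, 1e-1), method="bounded")
print(opt.x)                                        # calibrated parameter
```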
2025-04-01 | Carbon and Reliability-Aware Computing for Heterogeneous Data Centers | Yichao Zhang, Yubo Song, Subham Sahoo et.al. | 2504.00518 | | The manuscript has been submitted for review to IEEE Transactions on Smart Grid | Abstract (click to expand)The rapid expansion of data centers (DCs) has intensified energy consumption and carbon footprints, incurring a massive environmental computing cost. While carbon-aware workload migration strategies have been examined, existing approaches often overlook reliability metrics, such as server lifetime degradation and quality-of-service (QoS), that substantially affect both the carbon and operational efficiency of DCs. Hence, this paper proposes a comprehensive optimization framework for spatio-temporal workload migration across distributed DCs that jointly minimizes operational and embodied carbon emissions while complying with service-level agreements (SLAs). A key contribution is the development of an embodied carbon emission model based on servers' expected lifetime analysis, which explicitly considers server heterogeneity resulting from aging and utilization conditions. These issues are accommodated using new server dispatch strategies and a backup resource allocation model, accounting for hardware-, software-, and workload-induced failures. The overall model is formulated as a mixed-integer optimization problem with multiple linearization techniques to ensure computational tractability. Numerical case studies demonstrate that the proposed method reduces total carbon emissions by up to 21%, offering a pragmatic approach to sustainable DC operations. |
2025-04-01 | Aggregate Flexibility of Thermostatically Controlled Loads using Generalized Polymatroids | Karan Mukhi, Alessandro Abate et.al. | 2504.00484 | | | Abstract (click to expand)Leveraging populations of thermostatically controlled loads could provide vast storage capacity to the grid. To realize this potential, their flexibility must be accurately aggregated and represented to the system operator as a single, controllable virtual device. Mathematically, this is computed by calculating the Minkowski sum of the individual flexibility sets of the devices. Previous work showed how to exactly characterize the flexibility of lossless storage devices as generalized polymatroids, a family of polytopes that enables efficient computation of the Minkowski sum. In this paper we build on these results to encompass devices with dissipative storage dynamics. In doing so, we are able to provide tractable methods for accurately characterizing the flexibility of populations consisting of a variety of heterogeneous devices. Numerical results demonstrate that the proposed characterizations are tight. |
2025-04-01 | Anisotropic mesh spacing prediction using neural networks | Callum Lock, Oubay Hassan, Ruben Sevilla et.al. | 2504.00456 | | 30 pages, 16 figures | Abstract (click to expand)This work presents a framework to predict near-optimal anisotropic spacing functions suitable to perform simulations with unseen operating conditions or geometric configurations. The strategy consists of utilising the vast amount of high fidelity data available in industry to compute a target anisotropic spacing and train an artificial neural network to predict the spacing for unseen scenarios. The trained neural network outputs the metric tensor at the nodes of a coarse background mesh that is then used to generate meshes for unseen cases. Examples are used to demonstrate the effect of the network hyperparameters and the training dataset on the accuracy of the predictions. The potential is demonstrated for examples involving up to 11 geometric parameters on CFD simulations involving a full aircraft configuration. |
2025-04-01 | Transfer Learning in Financial Time Series with Gramian Angular Field | Hou-Wan Long, On-In Ho, Qi-Qiao He et.al. | 2504.00378 | | | Abstract (click to expand)In financial analysis, time series modeling is often hampered by data scarcity, limiting neural network models' ability to generalize. Transfer learning mitigates this by leveraging data from similar domains, but selecting appropriate source domains is crucial to avoid negative transfer. This study enhances source domain selection in transfer learning by introducing Gramian Angular Field (GAF) transformations to improve time series similarity functions. We evaluate a comprehensive range of baseline similarity functions, including both basic and state-of-the-art (SOTA) functions, and perform extensive experiments with Deep Neural Networks (DNN) and Long Short-Term Memory (LSTM) networks. The results demonstrate that GAF-based similarity functions significantly reduce prediction errors. Notably, Coral (GAF) for DNN and CMD (GAF) for LSTM consistently deliver superior performance, highlighting their effectiveness in complex financial environments. |
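The GAF transform itself is compact enough to sketch. The snippet below implements the Gramian Angular Summation Field variant from its standard definition (rescale to [-1, 1], encode as angles, form cos(phi_i + phi_j)); the paper then evaluates similarity functions such as CORAL or CMD on these 2D representations rather than on the raw series.

```python
# Minimal sketch of the Gramian Angular Summation Field (GASF) transform.
import numpy as np

def gasf(x):
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar (angular) encoding
    return np.cos(phi[:, None] + phi[None, :])        # GASF matrix

series = np.sin(np.linspace(0, 4 * np.pi, 64))        # toy time series
G = gasf(series)                                      # 64 x 64 image-like array
```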
2025-03-31 | Graph Neural Network-Based Predictive Modeling for Robotic Plaster Printing | Diego Machain Rivera, Selen Ercan Jenny, Ping Hsun Tsai et.al. | 2503.24130 | | | Abstract (click to expand)This work proposes a Graph Neural Network (GNN) modeling approach to predict the surface resulting from a particle-based fabrication process. The latter consists of spray-based printing of cementitious plaster on a wall and is facilitated by a robotic arm. The predictions are computed using the robotic arm trajectory features, such as position, velocity and direction, as well as the printing process parameters. The proposed approach, based on a particle representation of the wall domain and the end effector, allows for the adoption of a graph-based solution. The GNN model consists of an encoder-processor-decoder architecture and is trained using data from laboratory tests, while the hyperparameters are optimized by means of a Bayesian scheme. The aim of this model is to act as a simulator of the printing process, to be used ultimately for generating the robotic arm trajectory and optimizing the printing parameters, towards the materialization of an autonomous plastering process. The performance of the proposed model is assessed in terms of the prediction error against unseen ground truth data, which shows its generality in varied scenarios, as well as in comparison with the performance of an existing benchmark model. The results demonstrate a significant improvement over the benchmark model, with notably better performance and enhanced error scaling across prediction steps. |
2025-03-31 | Organizations, teams, and job mobility: A social microdynamics approach | Bryan Adams, Valentín Vergara Hidd, Daniel Stimpson et.al. | 2503.24117 | | | Abstract (click to expand)The internal structures of large organizations determine much of what occurs inside, including the way in which tasks are performed, the workers that perform them, and the mobility of those workers within the organization. However, regarding this latter process, most of the theoretical and modeling approaches used to understand organizational worker mobility are highly stylized, using idealizations such as structureless organizations, indistinguishable workers, and a lack of social bonding among workers. In this article, aided by a decade of precise, temporally resolved data on a large US government organization, we introduce a new model that describes organizations as composites of teams within which individuals perform specific tasks and where social connections develop. By tracking the personnel composition of organizational teams, we find that workers who change jobs show a strong preference for reuniting with past co-workers. In this organization, 34\% of all moves lead to worker reunions, a percentage well above expectation. We find that both greater time spent working together and smaller shared team sizes increase the likelihood of such reunions, supporting the notion of increased familiarity and trust behind these reunions and the dominant role of social capital in the evolution of large organizations. |
2025-03-31 | Estimation of thermal properties and boundary heat transfer coefficient of the ground with a Bayesian technique | Zhanat Karashbayeva, Julien Berger, Helcio R. B. Orlande et.al. | 2503.24072 | | | Abstract (click to expand)Urbanization is a key contributor to climate change. An increasing urbanization rate causes an urban heat island (UHI) effect, which strongly depends on the short- and long-wave radiation balance heat flux between surfaces. Calculating this heat flux accurately requires assessing the surface temperature, which in turn depends on knowledge of the thermal properties and the surface heat transfer coefficients in the heat transfer problem. The aim of this paper is to estimate the thermal properties of the ground and the time-varying surface heat transfer coefficient by solving an inverse problem. The Dufort--Frankel scheme is applied for solving the unsteady heat transfer problem. For the inverse problem, a Markov chain Monte Carlo method is used to estimate the posterior probability density function of the unknown parameters within the Bayesian framework of statistics, applying the Metropolis-Hastings algorithm for random sample generation. Actual temperature measurements available at different ground depths were used for the solution of the inverse problem. Different time discretizations were examined for the transient heat transfer coefficient at the ground surface, which then involved different prior distributions. Results of different case studies show that the estimated values of the unknown parameters were in accordance with literature values. Moreover, with the present solution of the inverse problem, the temperature residuals were smaller than those obtained by using literature values for the unknowns. |
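The sampling step is the standard random-walk Metropolis-Hastings algorithm, sketched below for a single scalar parameter. The `log_posterior` here is a hypothetical placeholder; in the paper's setting it would evaluate the Dufort--Frankel forward solver against the measured ground temperatures and add the prior.

```python
# Minimal sketch of random-walk Metropolis-Hastings for one parameter theta.
import numpy as np

def log_posterior(theta, data):
    # Hypothetical placeholder: a real implementation would run the forward
    # heat-transfer model and compare with measured temperatures, plus a prior.
    return -0.5 * np.sum((data - theta) ** 2) - 0.5 * theta ** 2

def metropolis_hastings(data, n_samples=10000, step=0.5, theta0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    theta, logp = theta0, log_posterior(theta0, data)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        logp_prop = log_posterior(prop, data)
        if np.log(rng.random()) < logp_prop - logp:   # accept with MH probability
            theta, logp = prop, logp_prop
        samples[i] = theta                            # repeat current state on reject
    return samples

chain = metropolis_hastings(data=np.array([0.8, 1.1, 0.95]))
print(chain[2000:].mean())                            # posterior mean after burn-in
```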
2025-03-31 | Evaluating Variational Quantum Eigensolver and Quantum Dynamics Algorithms on the Advection-Diffusion Equation | A. Barış Özgüler et.al. | 2503.24045 | | 7 pages, 2 figures | Abstract (click to expand)We investigate the potential of near-term quantum algorithms for solving partial differential equations (PDEs), focusing on a linear one-dimensional advection-diffusion equation as a test case. This study benchmarks a ground-state algorithm, Variational Quantum Eigensolver (VQE), against three leading quantum dynamics algorithms, Trotterization, Variational Quantum Imaginary Time Evolution (VarQTE), and Adaptive Variational Quantum Dynamics Simulation (AVQDS), applied to the same PDE on small quantum hardware. While Trotterization is fully quantum, VarQTE and AVQDS are variational algorithms that reduce circuit depth for noisy intermediate-scale quantum (NISQ) devices. However, hardware results from these dynamics methods show sizable errors due to noise and limited shot statistics. To establish a noise-free performance baseline, we implement the VQE-based solver on a noiseless statevector simulator. Our results show VQE can reach final-time infidelities as low as \({O}(10^{-9})\) with \(N=4\) qubits and moderate circuit depths, outperforming hardware-deployed dynamics methods that show infidelities \(\gtrsim 10^{-2}\) . By comparing noiseless VQE to shot-based and hardware-run algorithms, we assess their accuracy and resource demands, providing a baseline for future quantum PDE solvers. We conclude with a discussion of limitations and potential extensions to higher-dimensional, nonlinear PDEs relevant to engineering and finance. |
2025-03-30 | Exact Characterization of Aggregate Flexibility via Generalized Polymatroids | Karan Mukhi, Georg Loho, Alessandro Abate et.al. | 2503.23458 | | | Abstract (click to expand)There is growing interest in utilizing the flexibility of populations of distributed energy resources (DERs) to mitigate the intermittency and uncertainty of renewable generation and provide additional grid services. To enable this, aggregators must effectively represent the flexibility of the populations they control to the market or system operator. A key challenge is accurately computing the aggregate flexibility of a population, which can be formally expressed as the Minkowski sum of a collection of polytopes, a problem that is generally computationally intractable. However, the flexibility polytopes of many DERs exhibit structural symmetries that can be exploited for computational efficiency. To this end, we introduce generalized polymatroids, a family of polytopes, into the flexibility aggregation literature. We demonstrate that individual flexibility sets belong to this family, enabling efficient computation of their Minkowski sum. For homogeneous populations of DERs, we further derive simplifications that yield more succinct representations of aggregate flexibility. Additionally, we develop an efficient optimization framework over these sets and propose a vertex-based disaggregation method to allocate aggregate flexibility among individual DERs. Finally, we validate the optimality and computational efficiency of our approach through comparisons with existing methods. |
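Why structured flexibility sets aggregate cheaply can be seen in the simplest special case, sketched below: when each device's feasible power profiles form a box (independent per-step bounds), the Minkowski sum is just the box of summed bounds, and disaggregation reduces to a proportional rule. Generalized polymatroids extend this to time-coupled devices by summing the submodular/supermodular functions bounding each set; the box case is only an illustration, not the paper's construction.

```python
# Minimal sketch: aggregating box-shaped flexibility sets. The Minkowski sum
# of boxes is the box whose bounds are the coordinate-wise sums of bounds.
import numpy as np

T = 24                                                  # time steps
rng = np.random.default_rng(1)
# Per-device lower/upper power bounds over the horizon, shape (n_dev, T).
p_min = -rng.uniform(0.5, 1.5, size=(100, T))
p_max = rng.uniform(0.5, 1.5, size=(100, T))

# Aggregate flexibility of the population: coordinate-wise summed bounds.
agg_min, agg_max = p_min.sum(axis=0), p_max.sum(axis=0)

# Any aggregate profile within the summed bounds is achievable; a simple
# proportional rule disaggregates it back to the individual devices.
target = 0.3 * (agg_min + agg_max) / 2                  # some feasible target
alpha = (target - agg_min) / (agg_max - agg_min)        # per-step fraction in [0, 1]
dispatch = p_min + alpha * (p_max - p_min)              # shape (n_dev, T)
assert np.allclose(dispatch.sum(axis=0), target)
```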
2025-03-30 | AI Agents in Engineering Design: A Multi-Agent Framework for Aesthetic and Aerodynamic Car Design | Mohamed Elrefaie, Janet Qian, Raina Wu et.al. | 2503.23315 | | | Abstract (click to expand)We introduce the concept of "Design Agents" for engineering applications, particularly focusing on the automotive design process, while emphasizing that our approach can be readily extended to other engineering and design domains. Our framework integrates AI-driven design agents into the traditional engineering workflow, demonstrating how these specialized computational agents interact seamlessly with engineers and designers to augment creativity, enhance efficiency, and significantly accelerate the overall design cycle. By automating and streamlining tasks traditionally performed manually, such as conceptual sketching, styling enhancements, 3D shape retrieval and generative modeling, computational fluid dynamics (CFD) meshing, and aerodynamic simulations, our approach reduces certain aspects of the conventional workflow from weeks and days down to minutes. These agents leverage state-of-the-art vision-language models (VLMs), large language models (LLMs), and geometric deep learning techniques, providing rapid iteration and comprehensive design exploration capabilities. We ground our methodology in industry-standard benchmarks, encompassing a wide variety of conventional automotive designs, and utilize high-fidelity aerodynamic simulations to ensure practical and applicable outcomes. Furthermore, we present design agents that can swiftly and accurately predict simulation outcomes, empowering engineers and designers to engage in more informed design optimization and exploration. This research underscores the transformative potential of integrating advanced generative AI techniques into complex engineering tasks, paving the way for broader adoption and innovation across multiple engineering disciplines. |
2025-03-29 | Ethereum Price Prediction Employing Large Language Models for Short-term and Few-shot Forecasting | Eftychia Makri, Georgios Palaiokrassas, Sarah Bouraga et.al. | 2503.23190 | | | Abstract (click to expand)Cryptocurrencies have transformed financial markets with their innovative blockchain technology and volatile price movements, presenting both challenges and opportunities for predictive analytics. Ethereum, being one of the leading cryptocurrencies, has experienced significant market fluctuations, making its price prediction an attractive yet complex problem. This paper presents a comprehensive study on the effectiveness of Large Language Models (LLMs) in predicting Ethereum prices for short-term and few-shot forecasting scenarios. The main challenge in training models for time series analysis is the lack of data. We address this by leveraging a novel approach that adapts existing LLMs, pre-trained on billions of tokens of natural language or images, to the unique characteristics of Ethereum price time series data. Through thorough experimentation and comparison with traditional and contemporary models, our results demonstrate that selectively freezing certain layers of pre-trained LLMs achieves state-of-the-art performance in this domain. This approach consistently surpasses benchmarks across multiple metrics, including Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE), demonstrating its effectiveness and robustness. Our research not only contributes to the existing body of knowledge on LLMs but also provides practical insights in the cryptocurrency prediction domain. The adaptability of pre-trained LLMs to the nature of Ethereum prices suggests a promising direction for future research, potentially including the integration of sentiment analysis to further refine forecasting accuracy. |
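Selective layer freezing is easy to sketch in PyTorch. The snippet below freezes a pre-trained backbone except its last two transformer blocks and attaches small input/output projections for the price series; the choice of backbone ("gpt2" as a stand-in), the number of unfrozen blocks, and the projection shapes are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of selective layer freezing for time series adaptation.
import torch
import torch.nn as nn
from transformers import AutoModel

backbone = AutoModel.from_pretrained("gpt2")        # stand-in pre-trained LLM

for param in backbone.parameters():                 # freeze everything...
    param.requires_grad = False
for block in backbone.h[-2:]:                       # ...then unfreeze the last 2 blocks
    for param in block.parameters():
        param.requires_grad = True

d_model = backbone.config.n_embd
embed = nn.Linear(1, d_model)                       # price value -> token embedding
head = nn.Linear(d_model, 1)                        # hidden state -> price forecast

trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW([*trainable, *embed.parameters(),
                               *head.parameters()], lr=1e-4)
```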
2025-04-01 | HRET: A Self-Evolving LLM Evaluation Toolkit for Korean | Hanwool Lee, Soo Yong Kim, Dasol Choi et.al. | 2503.22968 | | | Abstract (click to expand)Recent advancements in Korean large language models (LLMs) have spurred numerous benchmarks and evaluation methodologies, yet the lack of a standardized evaluation framework has led to inconsistent results and limited comparability. To address this, we introduce HRET (Haerae Evaluation Toolkit), an open-source, self-evolving evaluation framework tailored specifically for Korean LLMs. HRET unifies diverse evaluation methods, including logit-based scoring, exact-match, language-inconsistency penalization, and LLM-as-a-Judge assessments. Its modular, registry-based architecture integrates major benchmarks (HAE-RAE Bench, KMMLU, KUDGE, HRM8K) and multiple inference backends (vLLM, HuggingFace, OpenAI-compatible endpoints). With automated pipelines for continuous evolution, HRET provides a robust foundation for reproducible, fair, and transparent Korean NLP research. |
2025-03-28 | Co-design of materials, structures and stimuli for magnetic soft robots with large deformation and dynamic contacts | Liwei Wang et.al. | 2503.22767 | | | Abstract (click to expand)Magnetic soft robots embedded with hard magnetic particles enable untethered actuation via external magnetic fields, offering remote, rapid, and precise control, which is highly promising for biomedical applications. However, designing such systems is challenging due to the complex interplay of magneto-elastic dynamics, large deformation, solid contacts, time-varying stimuli, and posture-dependent loading. As a result, most existing research relies on heuristics and trial-and-error methods or focuses on the independent design of stimuli or structures under static conditions. We propose a topology optimization framework for magnetic soft robots that simultaneously designs structures, location-specific material magnetization and time-varying magnetic stimuli, accounting for large deformations, dynamic motion, and solid contacts. This is achieved by integrating generalized topology optimization with the magneto-elastic material point method, which supports GPU-accelerated parallel simulations and auto-differentiation for sensitivity analysis. We applied this framework to design magnetic robots for various tasks, including multi-task shape morphing and locomotion, in both 2D and 3D. The method autonomously generates optimized robotic systems to achieve target behaviors without requiring human intervention. Despite the nonlinear physics and large design space, it demonstrates exceptional efficiency, completing all cases within minutes. This proposed framework represents a significant step toward the automatic co-design of magnetic soft robots for applications such as metasurfaces, drug delivery, and minimally invasive procedures. |
2025-03-28 | A high order multigrid-preconditioned immersed interface solver for the Poisson equation with boundary and interface conditions | James Gabbard, Andrea Paris, Wim M. van Rees et.al. | 2503.22455 | | | Abstract (click to expand)This work presents a multigrid preconditioned high order immersed finite difference solver to accurately and efficiently solve the Poisson equation on complex 2D and 3D domains. The solver employs a low order Shortley-Weller multigrid method to precondition a high order matrix-free Krylov subspace solver. The matrix-free approach enables full compatibility with high order IIM discretizations of boundary and interface conditions, as well as high order wavelet-adapted multiresolution grids. Through verification and analysis on 2D domains, we demonstrate the ability of the algorithm to provide high order accurate results to Laplace and Poisson problems with Dirichlet, Neumann, and/or interface jump boundary conditions, all effectively preconditioned using the multigrid method. We further show that the proposed method is able to efficiently solve high order discretizations of Laplace and Poisson problems on complex 3D domains using thousands of compute cores and on multiresolution grids. To our knowledge, this work presents the largest problem sizes tackled with high order immersed methods applied to elliptic partial differential equations, and the first high order results on 3D multiresolution adaptive grids. Together, this work paves the way for applying high order immersed methods to a variety of 3D partial differential equations with boundary or interface conditions, including linear and non-linear elasticity problems, the incompressible Navier-Stokes equations, and fluid-structure interactions. |
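The solver pattern, a matrix-free high-order operator inside a Krylov method with a cheaper low-order preconditioner, can be sketched with SciPy. Below, the "multigrid" preconditioner is stubbed with a direct factorization of the second-order operator and the boundary treatment is simplified, so this illustrates the structure only, not the paper's Shortley-Weller multigrid or IIM discretization.

```python
# Minimal sketch: matrix-free high-order operator, preconditioned Krylov solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
h = 1.0 / (n + 1)

def high_order_matvec(u):
    # Stand-in matrix-free 4th-order discretization of -u'' with zero
    # Dirichlet data (naive zero ghost padding; simplified near boundaries).
    up = np.pad(u, 2)
    u_xx = (-up[4:] + 16*up[3:-1] - 30*up[2:-2] + 16*up[1:-3] - up[:-4]) / (12*h*h)
    return -u_xx

A = spla.LinearOperator((n, n), matvec=high_order_matvec)

# Low-order (2nd-order) operator, used only inside the preconditioner.
A_low = (sp.diags([-1, 2, -1], [-1, 0, 1], (n, n)) / h**2).tocsc()
solve_low = spla.factorized(A_low)          # cheap stand-in for one multigrid cycle
M = spla.LinearOperator((n, n), matvec=solve_low)

b = np.ones(n)
x, info = spla.gmres(A, b, M=M)             # high-order solve, low-order preconditioner
assert info == 0
```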
2025-03-28 | Numerical optimization of aviation decarbonization scenarios: balancing traffic and emissions with maturing energy carriers and aircraft technology | Ian Costa-Alves, Nicolas Gourdain, François Gallard et.al. | 2503.22435 | link | | Abstract (click to expand)Despite being considered a hard-to-abate sector, aviation's emissions will play an important role in the long-term climate mitigation of transportation. The introduction of low-carbon energy carriers and the deployment of new aircraft in the current fleet are modeled as a technology-centered decarbonization policy, and supply constraints in targeted market segments are modeled as demand-side policy. Shared socioeconomic pathways (SSPs) are used to estimate the trend traffic demand and limit the sectoral consumption of electricity and biomass. Mitigation scenarios are formulated as optimization problems and three applications are demonstrated: single-policy optimization, scenario-robust policy, and multiobjective policy trade-off. Overall, we find that the choice of energy carrier to embark is highly dependent on assumptions regarding aircraft technology and the background energy system, and that market-targeted traffic constraints are required to align trend scenarios with the Paris Agreement. The usual burdens associated with nonlinear optimization over high-dimensional variables are dealt with by jointly using libraries for Multidisciplinary Optimization (GEMSEO) and Automatic Differentiation (JAX), which resulted in speedups of two orders of magnitude at the optimization level, while reducing the associated implementation effort. |
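The autodiff side of that tooling is straightforward to illustrate: writing the scenario objective as a differentiable JAX function yields exact gradients for the optimizer, avoiding finite differences over high-dimensional policy variables. The two-variable emissions model below is a toy stand-in, not the paper's fleet model.

```python
# Minimal sketch of exact gradients for a scenario objective via JAX.
import jax
import jax.numpy as jnp

def cumulative_emissions(policy):
    # policy[0]: fleet-renewal rate; policy[1]: low-carbon fuel share (toy model).
    years = jnp.arange(30.0)
    intensity = jnp.exp(-policy[0] * years) * (1.0 - policy[1])  # emissions per unit traffic
    traffic = 1.03 ** years                                      # trend traffic growth
    return jnp.sum(intensity * traffic)

grad_fn = jax.grad(cumulative_emissions)
g = grad_fn(jnp.array([0.05, 0.2]))    # exact gradient, consumable by an optimizer
print(g)
```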
2025-03-28 | Inverse design of dual-band valley-Hall topological photonic crystals with arbitrary pseudospin states | Yuki Sato, Shrinathan Esaki Muthu Pandara Kone, Junpei Oba et.al. | 2503.22206 | | 13 pages, 9 figures | Abstract (click to expand)Valley photonic crystals (VPCs) offer topological kink states that ensure robust, unidirectional, and backscattering-immune light propagation. The design of VPCs is typically based on analogies with condensed-matter topological insulators that exhibit the quantum valley Hall effect; trial-and-error approaches are often used to tailor the photonic band structures and their topological properties, which are characterized by the local Berry curvatures. In this paper, we present an inverse design framework based on frequency-domain analysis for VPCs with arbitrary pseudospin states. Specifically, we utilize the transverse spin angular momentum (TSAM) at the band edge to formulate the objective function for engineering the desired topological properties. Numerical experiments demonstrate that our proposed design approach can successfully produce photonic crystal waveguides exhibiting dual-band operation, enabling frequency-dependent light routing. Our pseudospin-engineering method thus provides a cost-effective alternative for designing topological photonic waveguides, offering novel functionalities. |
2025-03-28 | Convolutional optimization with convex kernel and power lift | Zhipeng Lu et.al. | 2503.22135 | | | Abstract (click to expand)We focus on establishing the foundational paradigm of a novel optimization theory based on convolution with convex kernels. Our goal is to devise a morally deterministic model for locating the global optima of an arbitrary function, distinguished from the most commonly used statistical models. Limited preliminary numerical results are provided to test the efficiency of some specific algorithms derived from our paradigm, which we hope will stimulate further practical interest. |
2025-03-28 | A production planning benchmark for real-world refinery-petrochemical complexes | Wenli Du, Chuan Wang, Chen Fan et.al. | 2503.22057 | link | | Abstract (click to expand)To achieve digital intelligence transformation and carbon neutrality, effective production planning is crucial for integrated refinery-petrochemical complexes. Modern refinery planning relies on advanced optimization techniques, whose development requires reproducible benchmark problems. However, existing benchmarks lack practical context or impose oversimplified assumptions, limiting their applicability to enterprise-wide optimization. To bridge the substantial gap between theoretical research and industrial applications, this paper introduces the first open-source, demand-driven benchmark for industrial-scale refinery-petrochemical complexes with transparent model formulations and comprehensive input parameters. The benchmark incorporates a novel port-stream hybrid superstructure for modular modeling and broad generalizability. Key secondary processing units are represented using the delta-base approach grounded in historical data. Three real-world cases have been constructed to encompass distinct scenario characteristics, respectively addressing (1) a stand-alone refinery without integer variables, (2) chemical site integration with inventory-related integer variables, and (3) multi-period planning. All model parameters are fully accessible. Additionally, this paper provides an analysis of computational performance, ablation experiments on delta-base modeling, and application scenarios for the proposed benchmark. |
2025-03-27 | Multimodal Data Integration for Sustainable Indoor Gardening: Tracking Anyplant with Time Series Foundation Model | Seyed Hamidreza Nabaei, Zeyang Zheng, Dong Chen et.al. | 2503.21932 | | Accepted at ASCE International Conference on Computing in Civil Engineering (i3ce) | Abstract (click to expand)Indoor gardening within sustainable buildings offers a transformative solution to urban food security and environmental sustainability. By 2030, urban farming, including Controlled Environment Agriculture (CEA) and vertical farming, is expected to grow at a compound annual growth rate (CAGR) of 13.2% from 2024 to 2030, according to market reports. This growth is fueled by advancements in Internet of Things (IoT) technologies, sustainable innovations such as smart growing systems, and the rising interest in green interior design. This paper presents a novel framework that integrates computer vision, machine learning (ML), and environmental sensing for the automated monitoring of plant health and growth. Unlike previous approaches, this framework combines RGB imagery, plant phenotyping data, and environmental factors such as temperature and humidity, to predict plant water stress in a controlled growth environment. The system utilizes high-resolution cameras to extract phenotypic features, such as RGB values, plant area, height, and width, while employing the Lag-Llama time series model to analyze and predict water stress. Experimental results demonstrate that integrating RGB, size ratios, and environmental data significantly enhances predictive accuracy, with the fine-tuned model achieving the lowest errors (MSE = 0.420777, MAE = 0.595428) and reduced uncertainty. These findings highlight the potential of multimodal data and intelligent systems to automate plant care, optimize resource consumption, and align indoor gardening with sustainable building management practices, paving the way for resilient, green urban spaces. |
2025-03-27 | Data-Driven Nonlinear Model Reduction to Spectral Submanifolds via Oblique Projection | Leonardo Bettini, Bálint Kaszás, Bernhard Zybach et.al. | 2503.21895 | | | Abstract (click to expand)The dynamics in a primary Spectral Submanifold (SSM) constructed over the slowest modes of a dynamical system provide an ideal reduced-order model for nearby trajectories. Modeling the dynamics of trajectories further away from the primary SSM, however, is difficult if the linear part of the system exhibits strong non-normal behavior. Such non-normality implies that simply projecting trajectories onto SSMs along directions normal to the slow linear modes will not pair those trajectories correctly with their reduced counterparts on the SSMs. In principle, a well-defined nonlinear projection along a stable invariant foliation exists and would exactly match the full dynamics to the SSM-reduced dynamics. This foliation, however, cannot realistically be constructed from practically feasible amounts and distributions of experimental data. Here we develop an oblique projection technique that is able to approximate this foliation efficiently, even from a single experimental trajectory of a significantly non-normal and nonlinear beam. |
2025-03-27 | CMADiff: Cross-Modal Aligned Diffusion for Controllable Protein Generation | Changjian Zhou, Yuexi Qiu, Tongtong Ling et.al. | 2503.21450 | | | Abstract (click to expand)AI-assisted protein design has emerged as a critical tool for advancing biotechnology, as deep generative models have demonstrated their reliability in this domain. However, most existing models primarily utilize protein sequence or structural data for training, neglecting the physicochemical properties of proteins. Moreover, they offer little control over protein generation under intuitive conditions. To address these limitations, we propose CMADiff, a novel framework that enables controllable protein generation by aligning the physicochemical properties of protein sequences with text-based descriptions through a latent diffusion process. Specifically, CMADiff employs a Conditional Variational Autoencoder (CVAE) to integrate physicochemical features as conditional input, forming a robust latent space that captures biological traits. In this latent space, we apply a conditional diffusion process, which is guided by BioAligner, a contrastive learning-based module that aligns text descriptions with protein features, enabling text-driven control over protein sequence generation. Validated by a series of evaluations including AlphaFold3, the experimental results indicate that CMADiff outperforms protein sequence generation benchmarks and holds strong potential for future applications. The implementation and code are available at https://github.com/HPC-NEAU/PhysChemDiff. |
2025-03-27 | Large Language Models for Traffic and Transportation Research: Methodologies, State of the Art, and Future Opportunities | Yimo Yan, Yejia Liao, Guanhao Xu et.al. | 2503.21330 | | | Abstract (click to expand)The rapid rise of Large Language Models (LLMs) is transforming traffic and transportation research, with significant advancements emerging between the years 2023 and 2025 -- a period marked by the inception and swift growth of adopting and adapting LLMs for various traffic and transportation applications. However, despite these significant advancements, a systematic review and synthesis of the existing studies remain lacking. To address this gap, this paper provides a comprehensive review of the methodologies and applications of LLMs in traffic and transportation, highlighting their ability to process unstructured textual data to advance transportation research. We explore key applications, including autonomous driving, travel behavior prediction, and general transportation-related queries, alongside methodologies such as zero- or few-shot learning, prompt engineering, and fine-tuning. Our analysis identifies critical research gaps. From the methodological perspective, many research gaps can be addressed by integrating LLMs with existing tools and refining LLM architectures. From the application perspective, we identify numerous opportunities for LLMs to tackle a variety of traffic and transportation challenges, building upon existing research. By synthesizing these findings, this review not only clarifies the current state of LLM adoption and adaptation in traffic and transportation but also proposes future research directions, paving the way for smarter and more sustainable transportation systems. |
2025-03-27 | ResearchBench: Benchmarking LLMs in Scientific Discovery via Inspiration-Based Task Decomposition | Yujie Liu, Zonglin Yang, Tong Xie et.al. | 2503.21248 | | | Abstract (click to expand)Large language models (LLMs) have demonstrated potential in assisting scientific research, yet their ability to discover high-quality research hypotheses remains unexamined due to the lack of a dedicated benchmark. To address this gap, we introduce the first large-scale benchmark for evaluating LLMs with a near-sufficient set of sub-tasks of scientific discovery: inspiration retrieval, hypothesis composition, and hypothesis ranking. We develop an automated framework that extracts critical components - research questions, background surveys, inspirations, and hypotheses - from scientific papers across 12 disciplines, with expert validation confirming its accuracy. To prevent data contamination, we focus exclusively on papers published in 2024, ensuring minimal overlap with LLM pretraining data. Our evaluation reveals that LLMs perform well in retrieving inspirations, an out-of-distribution task, suggesting their ability to surface novel knowledge associations. This positions LLMs as "research hypothesis mines", capable of facilitating automated scientific discovery by generating innovative hypotheses at scale with minimal human intervention. |
2025-03-27 | GPU-Accelerated Charge-Equilibration for Shadow Molecular Dynamics in Python | Mehmet Cagri Kaymak, Nicholas Lubbers, Christian F. A. Negre et.al. | 2503.21176 | link | | Abstract (click to expand)With recent advancements in machine learning for interatomic potentials, Python has become the go-to programming language for exploring new ideas. While machine-learning potentials are often developed in Python-based frameworks, existing molecular dynamics software is predominantly written in lower-level languages. This disparity complicates the integration of machine learning potentials into these molecular dynamics libraries. Additionally, machine learning potentials typically focus on local features, often neglecting long-range electrostatics due to computational complexities. This is a key limitation as applications can require long-range electrostatics and even flexible charges to achieve the desired accuracy. Recent charge equilibration models can address these issues, but they require iterative solvers to assign relaxed flexible charges to the atoms. Conventional implementations also demand very tight convergence to achieve long-term stability, further increasing computational cost. In this work, we present a scalable Python implementation of a recently proposed shadow molecular dynamics scheme based on a charge equilibration model, which avoids the convergence problem while maintaining long-term energy stability and accuracy of observable properties. To deliver a functional and user-friendly Python-based library, we implemented an efficient neighbor list algorithm, Particle Mesh Ewald, and traditional Ewald summation techniques, leveraging the GPU-accelerated power of Triton and PyTorch. We integrated these approaches with the Python-based shadow molecular dynamics scheme, enabling fast charge equilibration for scalable machine learning potentials involving systems with hundreds of thousands of atoms. |
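The charge-equilibration problem underneath is a constrained quadratic minimization: electronegativity equalization subject to total-charge conservation, which reduces to one KKT linear system. The sketch below uses a toy hardness matrix; real codes assemble it from Ewald/PME electrostatics on the GPU, and the paper's shadow dynamics avoids re-solving it to tight convergence at every MD step.

```python
# Minimal sketch of a QEq-style charge equilibration: minimize
# E(q) = chi.q + 0.5 q.H.q subject to sum(q) = Q_total, via the KKT system.
import numpy as np

n, Q_total = 5, 0.0
rng = np.random.default_rng(0)
chi = rng.uniform(2.0, 5.0, n)                    # per-atom electronegativities
pos = rng.uniform(0.0, 10.0, (n, 3))
r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
H = 1.0 / (r + np.eye(n))                         # toy Coulomb-like off-diagonals
np.fill_diagonal(H, rng.uniform(8.0, 12.0, n))    # atomic hardness on the diagonal

# KKT system: [H 1; 1^T 0] [q; lam] = [-chi; Q_total]
K = np.block([[H, np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
rhs = np.concatenate([-chi, [Q_total]])
q = np.linalg.solve(K, rhs)[:n]                   # relaxed flexible charges
assert abs(q.sum() - Q_total) < 1e-10             # total charge conserved
```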
2025-03-26 | FinAudio: A Benchmark for Audio Large Language Models in Financial Applications | Yupeng Cao, Haohang Li, Yangyang Yu et.al. | 2503.20990 | | | Abstract (click to expand)Audio Large Language Models (AudioLLMs) have received widespread attention and have significantly improved performance on audio tasks such as conversation, audio understanding, and automatic speech recognition (ASR). Despite these advancements, there is an absence of a benchmark for assessing AudioLLMs in financial scenarios, where audio data, such as earnings conference calls and CEO speeches, are crucial resources for financial analysis and investment decisions. In this paper, we introduce \textsc{FinAudio}, the first benchmark designed to evaluate the capacity of AudioLLMs in the financial domain. We first define three tasks based on the unique characteristics of the financial domain: 1) ASR for short financial audio, 2) ASR for long financial audio, and 3) summarization of long financial audio. We then curate two short and two long audio datasets for the ASR tasks and develop a novel dataset for financial audio summarization, which together comprise the \textsc{FinAudio} benchmark. Finally, we evaluate seven prevalent AudioLLMs on \textsc{FinAudio}. Our evaluation reveals the limitations of existing AudioLLMs in the financial domain and offers insights for improving AudioLLMs. All datasets and codes will be released. |
2025-03-26 | TransDiffSBDD: Causality-Aware Multi-Modal Structure-Based Drug Design | Xiuyuan Hu, Guoqing Liu, Can Chen et.al. | 2503.20913 | | | Abstract (click to expand)Structure-based drug design (SBDD) is a critical task in drug discovery, requiring the generation of molecular information across two distinct modalities: discrete molecular graphs and continuous 3D coordinates. However, existing SBDD methods often overlook two key challenges: (1) the multi-modal nature of this task and (2) the causal relationship between these modalities, limiting their plausibility and performance. To address both challenges, we propose TransDiffSBDD, an integrated framework combining autoregressive transformers and diffusion models for SBDD. Specifically, the autoregressive transformer models discrete molecular information, while the diffusion model samples continuous distributions, effectively resolving the first challenge. To address the second challenge, we design a hybrid-modal sequence for protein-ligand complexes that explicitly respects the causality between modalities. Experiments on the CrossDocked2020 benchmark demonstrate that TransDiffSBDD outperforms existing baselines. |
2025-03-26 | Technical Note: Continuum Theory of Mixture for Three-phase Thermomechanical Model of Fiber-reinforced Aerogel Composites | Pratyush Kumar Singh, Danial Faghihi et.al. | 2503.20713 | | | Abstract (click to expand)We present a thermodynamically consistent three-phase model for the coupled thermal transport and mechanical deformation of ceramic aerogel porous composite materials, which is formulated via continuum mixture theory. The composite comprises a solid silica skeleton, a gaseous fluid phase, and dispersed solid fibers. The thermal transport model incorporates the effects of meso- and macro-pore size variations due to the Knudsen effect, achieved by upscaling phonon transport relations to derive constitutive equations for the fluid thermal conductivity. The mechanical model captures solid-solid and solid-fluid interactions through momentum exchange between phases. A mixed finite element formulation is employed to solve the multiphase model, and numerical studies are conducted to analyze key features of the computational model. |
2025-03-26 | General Method for Conversion Between Multimode Network Parameters | Alexander Zhuravlev, Juan D. Baena et.al. | 2503.20298 | | | Abstract (click to expand)Different types of network parameters have been used in electronics for a long time. The most typical network parameters, but not the only ones, are \(S\), \(T\), \(ABCD\), \(Z\), \(Y\), and \(h\), which relate input and output signals in different ways. Practical formulas exist for conversion between them. Due to the development of powerful software tools that can deal efficiently and accurately with higher-order modes at each port, researchers need conversion rules between multimode network parameters. However, the usual way to obtain each conversion rule is to develop cumbersome algebraic manipulations that, in the end, are useful only for one specific conversion. Here, we propose a general algebraic method to obtain any conversion rule between different multimode network parameters. It is based on the assumption of a state vector space, and each conversion rule between network parameters can be interpreted as a simple change of basis. This procedure covers any conversion between multimode network parameters with the same algebraic steps. |
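For context, one such practical conversion rule is the multimode S-to-Z map for real reference impedances, \(Z = Z_0^{1/2}(I+S)(I-S)^{-1}Z_0^{1/2}\), sketched below with port/mode pairs flattened into matrix indices. The example values are illustrative; the paper's contribution is deriving all such rules uniformly as changes of basis.

```python
# Minimal sketch of one standard multimode conversion: S-matrix to Z-matrix,
# valid for real per-mode reference impedances z0.
import numpy as np

def s_to_z(S, z0):
    """Convert a multimode S-matrix to Z; z0 is the reference impedance per mode."""
    n = S.shape[0]
    I = np.eye(n)
    sqZ0 = np.diag(np.sqrt(np.asarray(z0, dtype=complex)))
    return sqZ0 @ (I + S) @ np.linalg.inv(I - S) @ sqZ0

# Two ports with two modes each -> 4x4 network matrices.
S = np.array([[0.1, 0.2, 0.0, 0.0],
              [0.2, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.5],
              [0.0, 0.0, 0.5, 0.3]], dtype=complex)
Z = s_to_z(S, z0=[50, 75, 50, 75])
```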
2025-03-26 | Dynamic Learning and Productivity for Data Analysts: A Bayesian Hidden Markov Model Perspective | Yue Yin et.al. | 2503.20233 | | 29 pages; a shorter 11-page version is accepted by HCI International (HCII) 2025; | Abstract (click to expand)Data analysts are essential in organizations, transforming raw data into insights that drive decision-making and strategy. This study explores how analysts' productivity evolves on a collaborative platform, focusing on two key learning activities: writing queries and viewing peer queries. While traditional research often assumes static models, where performance improves steadily with cumulative learning, such models fail to capture the dynamic nature of real-world learning. To address this, we propose a Hidden Markov Model (HMM) that tracks how analysts transition between distinct learning states based on their participation in these activities. Using an industry dataset with 2,001 analysts and 79,797 queries, this study identifies three learning states: novice, intermediate, and advanced. Productivity increases as analysts advance to higher states, reflecting the cumulative benefits of learning. Writing queries benefits analysts across all states, with the largest gains observed for novices. Viewing peer queries supports novices but may hinder analysts in higher states due to cognitive overload or inefficiencies. Transitions between states are also uneven, with progression from intermediate to advanced being particularly challenging. This study advances understanding of the dynamic learning behavior of knowledge workers and offers practical implications for designing systems, optimizing training, enabling personalized learning, and fostering effective knowledge sharing. |
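A minimal sketch of the model class: a three-state HMM whose forward pass scores an analyst's observed activity sequence. The transition, emission, and initial probabilities below are illustrative, not the values fitted in the study.

```python
# Minimal sketch of the forward algorithm for a three-state learning HMM.
import numpy as np

states = ["novice", "intermediate", "advanced"]
pi = np.array([0.8, 0.15, 0.05])             # initial state distribution
A = np.array([[0.90, 0.09, 0.01],            # state transition probabilities
              [0.05, 0.85, 0.10],
              [0.01, 0.04, 0.95]])
# Emission: P(observed activity level | state), with 2 observation symbols
# (e.g., 0 = low output, 1 = high output).
B = np.array([[0.7, 0.3],
              [0.4, 0.6],
              [0.2, 0.8]])

def forward(obs):
    """Return P(observation sequence) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]                # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]        # propagate and absorb next observation
    return alpha.sum()

print(forward([0, 0, 1, 1, 1]))              # likelihood of one analyst's trace
```

In the Bayesian setting of the paper, these matrices would themselves carry priors and be inferred from the query logs rather than fixed by hand.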
2025-03-26 | Solving 2-D Helmholtz equation in the rectangular, circular, and elliptical domains using neural networks | D. Veerababu, Prasanta K. Ghosh et.al. | 2503.20222 | | 59 pages | Abstract (click to expand)Physics-informed neural networks offer an alternative way to solve differential equations that govern complicated physics. However, their success in predicting the acoustic field is limited by the vanishing-gradient problem that occurs when solving the Helmholtz equation. In this paper, a formulation is presented that addresses this difficulty. The problem of solving the two-dimensional Helmholtz equation with prescribed boundary conditions is posed as an unconstrained optimization problem using the trial solution method. According to this method, a trial neural network that satisfies the given boundary conditions prior to the training process is constructed using the technique of transfinite interpolation and the theory of R-functions. This ansatz is first applied to the rectangular domain and later extended to the circular and elliptical domains. The acoustic field predicted by the proposed formulation is compared with that obtained from two-dimensional finite element methods. Good agreement is observed in all three domains considered. Minor limitations of the proposed formulation and their remedies are also discussed. |
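The trial-solution idea is easiest to see on the rectangle: multiply the network output by a function that vanishes on the boundary, so the ansatz satisfies homogeneous Dirichlet conditions before any training. The PyTorch sketch below uses the simple product x(1-x)y(1-y) on the unit square; the paper's transfinite interpolation and R-function machinery generalizes this to circular and elliptical domains and to general boundary data.

```python
# Minimal sketch of the trial-solution ansatz for the 2D Helmholtz equation
# on [0,1]^2 with homogeneous Dirichlet boundary conditions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

def trial_solution(xy):
    x, y = xy[:, :1], xy[:, 1:]
    d = x * (1 - x) * y * (1 - y)            # vanishes on the boundary of [0,1]^2
    return d * net(xy)                       # BCs satisfied by construction

# Helmholtz residual (u_xx + u_yy + k^2 u) via autograd at interior points.
k = 5.0
xy = torch.rand(256, 2, requires_grad=True)
u = trial_solution(xy)
grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
u_xx = torch.autograd.grad(grads[:, 0].sum(), xy, create_graph=True)[0][:, :1]
u_yy = torch.autograd.grad(grads[:, 1].sum(), xy, create_graph=True)[0][:, 1:]
loss = ((u_xx + u_yy + k**2 * u) ** 2).mean()    # minimized during training
```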
2025-03-25 | Lossy Compression of Scientific Data: Applications, Constraints, and Requirements | Franck Cappello, Allison Baker, Ebru Bozda et.al. | 2503.20031 | | 33 pages | Abstract (click to expand)Increasing data volumes from scientific simulations and instruments (supercomputers, accelerators, telescopes) often exceed network, storage, and analysis capabilities. The scientific community's response to this challenge is scientific data reduction. Reduction can take many forms, such as triggering, sampling, filtering, quantization, and dimensionality reduction. This report focuses on a specific technique: lossy compression. Lossy compression retains all data points, leveraging correlations and controlled reduced accuracy. Quality constraints, especially for quantities of interest, are crucial for preserving scientific discoveries. User requirements also include compression ratio and speed. While many papers have been published on lossy compression techniques and reference datasets are shared by the community, there is a lack of detailed specifications of application needs that can guide lossy compression researchers and developers. This report fills this gap by reporting on the requirements and constraints of nine scientific applications covering a large spectrum of domains (climate, combustion, cosmology, fusion, light sources, molecular dynamics, quantum circuit simulation, seismology, and system logs). The report also details key lossy compression technologies (SZ, ZFP, MGARD, LC, SPERR, DCTZ, TEZip, LibPressio), discussing their history, principles, error control, hardware support, features, and impact. By presenting both application needs and compression technologies, the report aims to inspire new research to fill existing gaps. |
2025-03-25 | A comparative study of calibration techniques for finite strain elastoplasticity: Numerically-exact sensitivities for FEMU and VFM | Sanjeev Kumar, D. Thomas Seidl, Brian N. Granzow et.al. | 2503.19782 | | 44 pages, 15 figures | Abstract (click to expand)Accurate identification of material parameters is crucial for predictive modeling in computational mechanics. The two primary approaches in the experimental mechanics community for calibration from full-field digital image correlation data are known as finite element model updating (FEMU) and the virtual fields method (VFM). In VFM, the objective function is a squared mismatch between internal and external virtual work or power. In FEMU, the objective function quantifies the weighted mismatch between model predictions and corresponding experimentally measured quantities of interest, and it is minimized by iteratively updating the parameters of an FE model. While FEMU is seen as more flexible, VFM is commonly used instead of FEMU because of FEMU's considerably greater computational expense. However, comparisons between the two methods usually involve approximations of gradients or sensitivities with finite difference schemes, thereby making direct assessments difficult. Hence, in this study, we rigorously compare VFM and FEMU in the context of numerically-exact sensitivities obtained through local sensitivity analyses and the application of automatic differentiation software. To this end, both methods are tested on a finite strain elastoplasticity model. We conduct a series of test cases to assess both methods' robustness under practical challenges. |
2025-03-25 | Decoupled Dynamics Framework with Neural Fields for 3D Spatio-temporal Prediction of Vehicle Collisions | Sanghyuk Kim, Minsik Seo, Namwoo Kang et.al. | 2503.19712 | | 24 pages, 13 figures | Abstract (click to expand)This study proposes a neural framework that predicts 3D vehicle collision dynamics by independently modeling global rigid-body motion and local structural deformation. Unlike approaches directly predicting absolute displacement, this method explicitly separates the vehicle's overall translation and rotation from its structural deformation. Two specialized networks form the core of the framework: a quaternion-based Rigid Net for rigid motion and a coordinate-based Deformation Net for local deformation. By independently handling fundamentally distinct physical phenomena, the proposed architecture achieves accurate predictions without requiring separate supervision for each component. The model, trained on only 10% of available simulation data, significantly outperforms baseline models, including single multi-layer perceptron (MLP) and deep operator networks (DeepONet), with prediction errors reduced by up to 83%. Extensive validation demonstrates strong generalization to collision conditions outside the training range, accurately predicting responses even under severe impacts involving extreme velocities and large impact angles. Furthermore, the framework successfully reconstructs high-resolution deformation details from low-resolution inputs without increased computational effort. Consequently, the proposed approach provides an effective, computationally efficient method for rapid and reliable assessment of vehicle safety across complex collision scenarios, substantially reducing the required simulation data and time while preserving prediction fidelity. |
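The decoupling itself is easy to state in code. Below is a hedged sketch in which placeholder tensors stand in for the outputs of the quaternion-based Rigid Net and the coordinate-based Deformation Net; only the composition of global rigid motion and local deformation is shown, not the paper's architecture.

```python
# Hedged sketch of the decoupled composition: a rigid transform (what a
# quaternion-based Rigid Net would output) plus a per-point deformation field
# (what a coordinate-based Deformation Net would output).
import torch

def quat_to_matrix(q):
    # unit quaternion (w, x, y, z) -> 3x3 rotation matrix
    w, x, y, z = q / q.norm()
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])

def predict_positions(rest_points, quat, translation, deformation):
    # global rigid-body motion + local structural deformation
    return rest_points @ quat_to_matrix(quat).T + translation + deformation

pts = torch.rand(1000, 3)
new_pts = predict_positions(pts, torch.tensor([0.9, 0.1, 0.0, 0.0]),
                            torch.tensor([1.0, 0.0, 0.0]), 0.01 * torch.randn(1000, 3))
```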
2025-03-25 | Characteristic boundary conditions for Hybridizable Discontinuous Galerkin methods | Jan Ellmenreich, Matteo Giacomini, Antonio Huerta et.al. | 2503.19684 | | | Abstract (click to expand)In this work we introduce the concept of characteristic boundary conditions (CBCs) within the framework of Hybridizable Discontinuous Galerkin (HDG) methods, including both the Navier-Stokes characteristic boundary conditions (NSCBCs) and a novel approach to generalized characteristic relaxation boundary conditions (GRCBCs). CBCs are based on the characteristic decomposition of the compressible Euler equations and are designed to prevent the reflection of waves at the domain boundaries. We show the effectiveness of the proposed method for weakly compressible flows through a series of numerical experiments by comparing the results with common boundary conditions in the HDG setting and reference solutions available in the literature. In particular, HDG with CBCs shows superior performance in minimizing the reflection of vortices at artificial boundaries, for both inviscid and viscous flows. |
2025-03-25 | Estimation of the Acoustic Field in a Uniform Duct with Mean Flow using Neural Networks | D. Veerababu, Prasanta K. Ghosh et.al. | 2503.19412 | | 23 pages | Abstract (click to expand)The study of sound propagation in a uniform duct having a mean flow has many applications, such as in the design of gas turbines, heating, ventilation and air conditioning ducts, automotive intake and exhaust systems, and in the modeling of speech. In this paper, the convective effects of the mean flow on the plane wave acoustic field inside a uniform duct were studied using artificial neural networks. The governing differential equation and the associated boundary conditions form a constrained optimization problem. It is converted to an unconstrained optimization problem and solved by approximating the acoustic field variable with a neural network. The complex-valued acoustic pressure and particle velocity were predicted at different frequencies, and validated against the analytical solution and the finite element models. The effect of the mean flow is studied in terms of the acoustic impedance. A closed-form expression that describes the influence of various factors on the acoustic field is derived. |
2025-03-24 | Multi-Physics Inverse Design of Varifocal Optical Devices using Data-Driven Surrogates and Differential Modeling | Zeqing Jin, Zhaocheng Liu, Nagi Elabbasi et.al. | 2503.18911 | | 15 pages, 4 figures | Abstract (click to expand)Designing a new varifocal architecture in AR glasses poses significant challenges due to the complex interplay of multiple physics disciplines, including novel piezoelectric materials, solid mechanics, electrostatics, and optics. Traditional design methods, which treat each physics separately, are insufficient for this problem as they fail to establish the intricate relationships among design parameters in such a large and sensitive space, leading to suboptimal solutions. To address this challenge, we propose a novel design pipeline, mPhDBBs (multi-Physics Differential Building Blocks), that integrates these diverse physics through a graph neural network-based surrogate model and a differentiable ray tracing model. A hybrid optimization method combining evolutionary and gradient approaches is employed to efficiently determine superior design variables that achieve desired optical objectives, such as focal length and focusing quality. Our results demonstrate the effectiveness of mPhDBBs, achieving high accuracy with minimal training data and computational resources, resulting in a speedup of at least 1000 times compared to non-gradient-based methods. This work offers a promising paradigm shift in product design, enabling rapid and accurate optimization of complex multi-physics systems, and demonstrates its adaptability to other inverse design problems. |
2025-03-24 | Differentiable Simulator for Electrically Reconfigurable Electromagnetic Structures | Johannes Müller, Dennis Philipp, Matthias Günther et.al. | 2503.18479 | | | Abstract (click to expand)This paper introduces a novel CUDA-enabled PyTorch-based framework designed for the gradient-based optimization of reconfigurable electromagnetic structures with electrically tunable parameters. Traditional optimization techniques for these structures often rely on non-gradient-based methods, limiting efficiency and flexibility. Our framework leverages automatic differentiation, facilitating the application of gradient-based optimization methods. This approach is particularly advantageous for embedding within deep learning frameworks, enabling sophisticated optimization strategies. We demonstrate the framework's effectiveness through comprehensive simulations involving resonant structures with tunable parameters. Key contributions include the efficient solution of the inverse problem, i.e., determining the tunable parameters that realize a desired field distribution. The framework's performance is validated using three different resonant structures: a single-loop copper wire (Unit-Cell) as well as an 8x1 and an 8x8 array of resonant unit cells with multiple inductively coupled unit cells (1d and 2d Metasurfaces). Results show precise in-silico control over the magnetic field's component normal to the surface of each resonant structure, achieving desired field strengths with minimal error. The proposed framework is compatible with existing simulation software. This PyTorch-based framework sets the stage for advanced electromagnetic control strategies for resonant structures with applications in, e.g., MRI, providing a robust platform for further exploration and innovation in the design and optimization of resonant electromagnetic structures. |
2025-03-24 | DeepFund: Will LLM be Professional at Fund Investment? A Live Arena Perspective | Changlun Li, Yao Shi, Yuyu Luo et.al. | 2503.18313 | | Work in progress | Abstract (click to expand)Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, but their effectiveness in financial decision making, particularly in fund investment, remains inadequately evaluated. Current benchmarks primarily assess LLMs' understanding of financial documents rather than their ability to manage assets or analyze trading opportunities in dynamic market conditions. A critical limitation in existing evaluation methodologies is the backtesting approach, which suffers from information leakage when LLMs are evaluated on historical data they may have encountered during pretraining. This paper introduces DeepFund, a comprehensive platform for evaluating LLM-based trading strategies in a simulated live environment. Our approach implements a multi-agent framework where LLMs serve as both analysts and managers, creating a realistic simulation of investment decision making. The platform employs a forward testing methodology that mitigates information leakage by evaluating models on market data released after their training cutoff dates. We provide a web interface that visualizes model performance across different market conditions and investment parameters, enabling detailed comparative analysis. Through DeepFund, we aim to provide a more accurate and fair assessment of LLMs' capabilities in fund investment, offering insights into their potential real-world applications in financial markets. |
2025-03-23 | The Power of Small LLMs in Geometry Generation for Physical Simulations | Ossama Shafiq, Bahman Ghiassi, Alessio Alexiadis et.al. | 2503.18178 | | 24 pages, 17 figures | Abstract (click to expand)Engineers widely rely on simulation platforms like COMSOL or ANSYS to model and optimise processes. However, setting up such simulations requires expertise in defining geometry, generating meshes, establishing boundary conditions, and configuring solvers. This research aims to simplify this process by enabling engineers to describe their setup in plain language, allowing a Large Language Model (LLM) to generate the necessary input files for their specific application. This novel approach allows establishing a direct link between natural language and complex engineering tasks. Building on previous work that evaluated various LLMs for generating input files across simple and complex geometries, this study demonstrates that small LLMs (specifically, Phi-3 Mini and Qwen-2.5 1.5B) can be fine-tuned to generate precise engineering geometries in GMSH format. We curated a dataset of 480 instruction-output pairs encompassing simple shapes (squares, rectangles, circles, and half circles) and more complex structures (I-beams, cylindrical pipes, and bent pipes), and fine-tuned the models using Low-Rank Adaptation (LoRA). The fine-tuned models produced high-fidelity outputs, handling routine geometry generation with minimal intervention. While challenges remain with geometries involving combinations of multiple bodies, this study demonstrates that fine-tuned small models can outperform larger models like GPT-4o in specialised tasks, offering a precise and resource-efficient alternative for engineering applications. |
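For readers unfamiliar with the setup, a LoRA fine-tune of a small causal LLM looks roughly as follows with Hugging Face's `peft` library; the rank, alpha, and target modules here are illustrative choices of ours, not the paper's reported hyperparameters.

```python
# Hedged sketch of a LoRA fine-tuning setup with Hugging Face peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-1.5B"                      # one of the small models studied
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],        # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)             # adapters wrap the frozen base
model.print_trainable_parameters()              # a tiny fraction of the weights
# Training then proceeds on instruction -> GMSH-geometry output pairs.
```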
2025-03-23 | Strategic Prompt Pricing for AIGC Services: A User-Centric Approach | Xiang Li, Bing Luo, Jianwei Huang et.al. | 2503.18168 | | accepted in WiOpt 2025 | Abstract (click to expand)The rapid growth of AI-generated content (AIGC) services has created an urgent need for effective prompt pricing strategies, yet current approaches overlook users' strategic two-step decision-making process in selecting and utilizing generative AI models. This oversight creates two key technical challenges: quantifying the relationship between user prompt capabilities and generation outcomes, and optimizing platform payoff while accounting for heterogeneous user behaviors. We address these challenges by introducing prompt ambiguity, a theoretical framework that captures users' varying abilities in prompt engineering, and developing an Optimal Prompt Pricing (OPP) algorithm. Our analysis reveals a counterintuitive insight: users with higher prompt ambiguity (i.e., lower capability) exhibit non-monotonic prompt usage patterns, first increasing then decreasing with ambiguity levels, reflecting complex changes in marginal utility. Experimental evaluation using a character-level GPT-like model demonstrates that our OPP algorithm achieves up to 31.72% improvement in platform payoff compared to existing pricing mechanisms, validating the importance of user-centric prompt pricing in AIGC services. |
2025-03-23 | (G)I-DLE: Generative Inference via Distribution-preserving Logit Exclusion with KL Divergence Minimization for Constrained Decoding | Hanwool Lee et.al. | 2503.18050 | | preprint | Abstract (click to expand)We propose (G)I-DLE, a new approach to constrained decoding that leverages KL divergence minimization to preserve the intrinsic conditional probability distribution of autoregressive language models while excluding undesirable tokens. Unlike conventional methods that naively set banned tokens' logits to \(-\infty\), which can distort the conversion from raw logits to posterior probabilities and increase output variance, (G)I-DLE re-normalizes the allowed token probabilities to minimize such distortion. We validate our method on the K2-Eval dataset, specifically designed to assess Korean language fluency, logical reasoning, and cultural appropriateness. Experimental results on Qwen2.5 models (ranging from 1.5B to 14B) demonstrate that (G)I-DLE not only boosts mean evaluation scores but also substantially reduces the variance of output quality. |
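The renormalization idea admits a short sketch: zero out the banned tokens' probability mass and rescale the rest. The renormalized conditional is the KL-minimizing distribution over the allowed set, which is the property the method exploits; the implementation details below are ours, not the paper's.

```python
# Hedged sketch of renormalized constrained decoding in the spirit of
# (G)I-DLE: exclude banned tokens, then rescale so relative probabilities
# among allowed tokens are preserved.
import torch

def constrained_probs(logits, banned_ids):
    probs = torch.softmax(logits, dim=-1)
    probs[..., banned_ids] = 0.0                       # exclude undesirable tokens
    return probs / probs.sum(dim=-1, keepdim=True)     # renormalize allowed mass

logits = torch.randn(1, 32000)
p = constrained_probs(logits, banned_ids=[13, 198])
next_token = torch.multinomial(p, num_samples=1)       # sample from the allowed set
```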
2025-03-23 | Financial Wind Tunnel: A Retrieval-Augmented Market Simulator | Bokai Cao, Xueyuan Lin, Yiyan Qi et.al. | 2503.17909 | | | Abstract (click to expand)A market simulator aims to create high-quality synthetic financial data that mimics real-world market dynamics, which is crucial for model development and robust assessment. Despite continuous advancements in simulation methodologies, market fluctuations vary in scale and source, and existing frameworks often excel only at specific tasks. To address this challenge, we propose Financial Wind Tunnel (FWT), a retrieval-augmented market simulator designed to generate controllable, reasonable, and adaptable market dynamics for model testing. FWT offers a more comprehensive and systematic generative capability across different data frequencies. By leveraging a retrieval method to discover cross-sectional information as the augmented condition, our diffusion-based simulator seamlessly integrates both macro- and micro-level market patterns. Furthermore, our framework allows the simulation to be controlled with wide applicability, including causal generation through "what-if" prompts or unprecedented cross-market trend synthesis. Additionally, we develop an automated optimizer for downstream quantitative models, using stress testing of simulated scenarios via FWT to enhance returns while controlling risks. Experimental results demonstrate that our approach enables generalizable and reliable market simulation, significantly improving the performance and adaptability of downstream models, particularly in highly complex and volatile market conditions. Our code and data samples are available at https://anonymous.4open.science/r/fwt_-E852 |
2025-03-22 | Accelerating and enhancing thermodynamic simulations of electrochemical interfaces | Xiaochen Du, Mengren Liu, Jiayu Peng et.al. | 2503.17870 | link | 19 pages main text, 5 figures, supplementary information (SI) in ancillary files | Abstract (click to expand)Electrochemical interfaces are crucial in catalysis, energy storage, and corrosion, where their stability and reactivity depend on complex interactions between the electrode, adsorbates, and electrolyte. Predicting stable surface structures remains challenging, as traditional surface Pourbaix diagrams tend to either rely on expert knowledge or costly \(\textit{ab initio}\) sampling, and neglect thermodynamic equilibration with the environment. Machine learning (ML) potentials can accelerate static modeling but often overlook dynamic surface transformations. Here, we extend the Virtual Surface Site Relaxation-Monte Carlo (VSSR-MC) method to autonomously sample surface reconstructions modeled under aqueous electrochemical conditions. Through fine-tuning foundational ML force fields, we accurately and efficiently predict surface energetics, recovering known Pt(111) phases and revealing new LaMnO\(_\mathrm{3}\) (001) surface reconstructions. By explicitly accounting for bulk-electrolyte equilibria, our framework enhances electrochemical stability predictions, offering a scalable approach to understanding and designing materials for electrochemical applications. |
2025-03-22 | Generalized Scattering Matrix Synthesis for Hybrid Systems with Multiple Scatterers and Antennas Using Independent Structure Simulations | Chenbo Shi, Shichen Liang, Jin Pan et.al. | 2503.17616 | | | Abstract (click to expand)This paper presents a unified formulation for calculating the generalized scattering matrix (GS-matrix) of hybrid systems involving multiple scatterers and antennas. The GS-matrix of the entire system is synthesized through the scattering matrices and GS-matrices of each independent component, using the addition theorem of vector spherical wavefunctions and fully matrix-based operations. Since our formulation is applicable to general antenna-scatterer hybrid systems, previous formulas for multiple scattering and antenna arrays become special cases of our approach. This also establishes our formulation as a universal domain decomposition method for analyzing the electromagnetic performance of hybrid systems. We provide numerous numerical examples to comprehensively demonstrate the capabilities and compatibility of the proposed formulation, including its potential application in studying the effects of structural rotation. |
2025-03-21 | Adjoint Sensitivities for the Optimization of Nonlinear Structural Dynamics via Spectral Submanifolds | Matteo Pozzi, Jacopo Marconi, Shobhit Jain et.al. | 2503.17431 | | | Abstract (click to expand)This work presents an optimization framework for tailoring the nonlinear dynamic response of lightly damped mechanical systems using Spectral Submanifold (SSM) reduction. We derive the SSM-based backbone curve and its sensitivity with respect to parameters up to arbitrary polynomial orders, enabling efficient and accurate optimization of the nonlinear frequency-amplitude relation. We use the adjoint method to derive sensitivity expressions, which drastically reduces the computational cost compared to direct differentiation as the number of parameters increases. An important feature of this framework is the automatic adjustment of the expansion order of SSM-based ROMs using user-defined error tolerances during the optimization process. We demonstrate the effectiveness of the approach in optimizing the nonlinear response over several numerical examples of mechanical systems. Hence, the proposed framework extends the applicability of SSM-based optimization methods to practical engineering problems, offering a robust tool for the design and optimization of nonlinear mechanical structures. |
2025-03-20 | Accelerated Medicines Development using a Digital Formulator and a Self-Driving Tableting DataFactory | Faisal Abbas, Mohammad Salehian, Peter Hou et.al. | 2503.17411 | link | | Abstract (click to expand)Pharmaceutical tablet formulation and process development, traditionally a complex and multi-dimensional decision-making process, necessitates extensive experimentation and resources, often resulting in suboptimal solutions. This study presents an integrated platform for tablet formulation and manufacturing, built around a Digital Formulator and a Self-Driving Tableting DataFactory. By combining predictive modelling, optimisation algorithms, and automation, this system offers a material-to-product approach to predict and optimise critical quality attributes for different formulations, linking raw material attributes to key blend and tablet properties, such as flowability, porosity, and tensile strength. The platform leverages the Digital Formulator, an in-silico optimisation framework that employs a hybrid system of models - melding data-driven and mechanistic models - to identify optimal formulation settings for manufacturability. Optimised formulations then proceed through the self-driving Tableting DataFactory, which includes automated powder dosing, tablet compression and performance testing, followed by iterative refinement of process parameters through Bayesian optimisation methods. This approach accelerates the timeline from material characterisation to development of an in-specification tablet within 6 hours, utilising less than 5 grams of API, and manufacturing small batch sizes of up to 1,440 tablets with augmented and mixed reality enabled real-time quality control within 24 hours. Validation across multiple APIs and drug loadings underscores the platform's capacity to reliably meet target quality attributes, positioning it as a transformative solution for accelerated and resource-efficient pharmaceutical development. |
2025-03-21 | ML-Based Bidding Price Prediction for Pay-As-Bid Ancillary Services Markets: A Use Case in the German Control Reserve Market | Vincent Bezold, Lukas Baur, Alexander Sauer et.al. | 2503.17214 | | | Abstract (click to expand)The increasing integration of renewable energy sources has led to greater volatility and unpredictability in electricity generation, posing challenges to grid stability. Ancillary service markets, such as the German control reserve market, allow industrial consumers and producers to offer flexibility in their power consumption or generation, contributing to grid stability while earning additional income. However, many participants use simple bidding strategies that may not maximize their revenues. This paper presents a methodology for forecasting bidding prices in pay-as-bid ancillary service markets, focusing on the German control reserve market. We evaluate various machine learning models, including Support Vector Regression, Decision Trees, and k-Nearest Neighbors, and compare their performance against benchmark models. To address the asymmetry in the revenue function of pay-as-bid markets, we introduce an offset adjustment technique that enhances the practical applicability of the forecasting models. Our analysis demonstrates that the proposed approach improves potential revenues by 27.43 % to 37.31 % compared to baseline models. When analyzing the relationship between the model forecasting errors and the revenue, a negative correlation is measured for three markets; according to the results, a reduction of 1 EUR/MW model price forecasting error (MAE) statistically leads to a yearly revenue increase between 483 EUR/MW and 3,631 EUR/MW. The proposed methodology enables industrial participants to optimize their bidding strategies, leading to increased earnings and contributing to the efficiency and stability of the electrical grid. |
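The offset adjustment can be pictured with a toy pay-as-bid revenue model: an accepted bid is paid its own price and a rejected bid earns nothing, so the revenue-optimal bid is not the unbiased price forecast, and a single offset tuned on validation data can capture the asymmetry. Everything in the sketch below (the revenue function, the offset grid) is our simplification, not the paper's methodology.

```python
# Hedged toy model of the offset adjustment for pay-as-bid markets.
import numpy as np

def revenue(bids, accepted_price_caps):
    # paid-as-bid if at or below the highest accepted price, else rejected
    return np.where(bids <= accepted_price_caps, bids, 0.0).sum()

def best_offset(forecasts, caps, candidates=np.linspace(-10, 10, 201)):
    return max(candidates, key=lambda o: revenue(forecasts + o, caps))

rng = np.random.default_rng(1)
caps = rng.normal(60, 5, size=500)                 # "true" acceptance thresholds
forecasts = caps + rng.normal(0, 3, size=500)      # imperfect price forecasts
print(best_offset(forecasts, caps))                # typically negative: bid low
```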
2025-03-21 | A Comprehensive Framework for Predictive Computational Modeling of Growth and Remodeling in Tissue-Engineered Cardiovascular Implants | Mahmoud Sesa, Hagen Holthusen, Christian Böhm et.al. | 2503.17151 | | Preprint submitted to Springer Nature | Abstract (click to expand)Developing clinically viable tissue-engineered cardiovascular implants remains a formidable challenge. Achieving reliable and durable outcomes requires a deeper understanding of the fundamental mechanisms driving tissue evolution during in vitro maturation. Although considerable progress has been made in modeling soft tissue growth and remodeling, studies focused on the early stages of tissue engineering remain limited. Here, we present a general, thermodynamically consistent model to predict tissue evolution and mechanical response throughout maturation. The formulation utilizes a stress-driven homeostatic surface to capture volumetric growth, coupled with an energy-based approach to describe collagen densification via the strain energy of the fibers. We further employ a co-rotated intermediate configuration to ensure the model's consistency and generality. The framework is demonstrated with two numerical examples: a uniaxially constrained tissue strip validated against experimental data, and a biaxially constrained specimen subjected to a perturbation load. These results highlight the potential of the proposed model to advance the design and optimization of tissue-engineered implants with clinically relevant performance. |
2025-03-26 | Assessing Consistency and Reproducibility in the Outputs of Large Language Models: Evidence Across Diverse Finance and Accounting Tasks | Julian Junyan Wang, Victor Xiaoqi Wang et.al. | 2503.16974 | | 97 pages, 20 tables, 15 figures | Abstract (click to expand)This study provides the first comprehensive assessment of consistency and reproducibility in Large Language Model (LLM) outputs in finance and accounting research. We evaluate how consistently LLMs produce outputs given identical inputs through extensive experimentation with 50 independent runs across five common tasks: classification, sentiment analysis, summarization, text generation, and prediction. Using three OpenAI models (GPT-3.5-turbo, GPT-4o-mini, and GPT-4o), we generate over 3.4 million outputs from diverse financial source texts and data, covering MD&As, FOMC statements, finance news articles, earnings call transcripts, and financial statements. Our findings reveal substantial but task-dependent consistency, with binary classification and sentiment analysis achieving near-perfect reproducibility, while complex tasks show greater variability. More advanced models do not consistently demonstrate better consistency and reproducibility, with task-specific patterns emerging. LLMs significantly outperform expert human annotators in consistency and maintain high agreement even where human experts significantly disagree. We further find that simple aggregation strategies across 3-5 runs dramatically improve consistency. We also find that aggregation may come with an additional benefit of improved accuracy for sentiment analysis when using newer models. Simulation analysis reveals that despite measurable inconsistency in LLM outputs, downstream statistical inferences remain remarkably robust. These findings address concerns about what we term "G-hacking," the selective reporting of favorable outcomes from multiple Generative AI runs, by demonstrating that such risks are relatively low for finance and accounting tasks. |
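The aggregation strategy the study finds effective is simple enough to sketch: repeat the same prompt a few times and combine the outputs, majority vote for labels and the mean for numeric predictions. The function names below are ours.

```python
# Hedged sketch of multi-run aggregation for LLM outputs.
from collections import Counter
from statistics import mean

def aggregate_labels(runs):
    return Counter(runs).most_common(1)[0][0]   # majority vote across runs

def aggregate_numeric(runs):
    return mean(runs)

print(aggregate_labels(["positive", "positive", "negative"]))  # 'positive'
print(aggregate_numeric([0.62, 0.58, 0.61]))                   # 0.603...
```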
2025-03-19 | Reliable Radiologic Skeletal Muscle Area Assessment -- A Biomarker for Cancer Cachexia Diagnosis | Sabeen Ahmed, Nathan Parker, Margaret Park et.al. | 2503.16556 | | 47 pages, 19 figures, 9 Tables | Abstract (click to expand)Cancer cachexia is a common metabolic disorder characterized by severe muscle atrophy, which is associated with poor prognosis and quality of life. Monitoring skeletal muscle area (SMA) longitudinally through computed tomography (CT) scans, an imaging modality routinely acquired in cancer care, is an effective way to identify and track this condition. However, existing tools often lack full automation and exhibit inconsistent accuracy, limiting their potential for integration into clinical workflows. To address these challenges, we developed SMAART-AI (Skeletal Muscle Assessment-Automated and Reliable Tool-based on AI), an end-to-end automated pipeline powered by deep learning models (nnU-Net 2D) trained on mid-third lumbar level CT images with 5-fold cross-validation, ensuring generalizability and robustness. SMAART-AI incorporates an uncertainty-based mechanism to flag high-error SMA predictions for expert review, enhancing reliability. We combined the SMA, skeletal muscle index, BMI, and clinical data to train a multi-layer perceptron (MLP) model designed to predict cachexia at the time of cancer diagnosis. Tested on the gastroesophageal cancer dataset, SMAART-AI achieved a Dice score of 97.80% +/- 0.93%, with SMA estimated across all four datasets in this study at a median absolute error of 2.48% compared to manual annotations with SliceOmatic. Uncertainty metrics (variance, entropy, and coefficient of variation) strongly correlated with SMA prediction errors (0.83, 0.76, and 0.73, respectively). The MLP model predicts cachexia with 79% precision, providing clinicians with a reliable tool for early diagnosis and intervention. By combining automation, accuracy, and uncertainty awareness, SMAART-AI bridges the gap between research and clinical application, offering a transformative approach to managing cancer cachexia. |
2025-03-20 | Deep Feynman-Kac Methods for High-dimensional Semilinear Parabolic Equations: Revisit | Xiaotao Zheng, Xingye Yue, Jiyang Shi et.al. | 2503.16407 | | | Abstract (click to expand)The deep Feynman-Kac method was first introduced to solve parabolic partial differential equations (PDEs) by Beck et al. (SISC, V.43, 2021); it was named the Deep Splitting method since the neural networks are trained step by step in the time direction. In this paper, we propose a new training approach with two distinguishing features. First, the neural networks are trained at all time steps globally, instead of step by step. Second, the training data are generated in a new way, such that the method is consistent with a direct Monte Carlo scheme when dealing with a linear parabolic PDE. Numerical examples show that our method yields significant improvements in both efficiency and accuracy. |
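The linear-case consistency the authors target has a one-line probabilistic statement: for the backward heat equation \(u_t + \tfrac{1}{2} u_{xx} = 0\) with terminal data \(u(T, x) = g(x)\), the Feynman-Kac formula gives \(u(t, x) = \mathbb{E}[g(x + W_{T-t})]\), which a direct Monte Carlo scheme estimates without any neural network. A hedged sketch of that estimator:

```python
# Hedged sketch of the direct Monte Carlo Feynman-Kac estimator that the
# new training-data generation is consistent with in the linear case.
import numpy as np

def feynman_kac_mc(g, t, x, T, n_paths=200_000, seed=0):
    w = np.random.default_rng(seed).normal(scale=np.sqrt(T - t), size=n_paths)
    return g(x + w).mean()

g = np.cos
# exact solution for g = cos: u(t, x) = exp(-(T - t) / 2) * cos(x)
print(feynman_kac_mc(g, t=0.0, x=0.3, T=1.0))
print(np.exp(-0.5) * np.cos(0.3))   # ~0.5794; the MC estimate should be close
```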
2025-03-20 | Filters reveal emergent structure in computational morphogenesis | Hazhir Aliahmadi, Aidan Sheedy, Greg van Anders et.al. | 2503.16211 | | 17 pages, 9 figures | Abstract (click to expand)Revolutionary advances in both manufacturing and computational morphogenesis raise critical questions about design sensitivity. Sensitivity questions are especially critical in contexts, such as topology optimization, that yield structures with emergent morphology. However, analyzing emergent structures via conventional, perturbative techniques can mask larger-scale vulnerabilities that could manifest in essential components. Risks that fail to appear in perturbative sensitivity analyses will only continue to proliferate as topology optimization-driven manufacturing penetrates more deeply into engineering design and consumer products. Here, we introduce Laplace-transform based computational filters that supplement computational morphogenesis with a set of nonperturbative sensitivity analyses. We demonstrate how this approach identifies important elements of a structure even in the absence of knowledge of the ultimate, optimal structure itself. We leverage techniques from molecular dynamics and implement these methods in open-source codes, demonstrating their application to compliance minimization problems in both 2D and 3D. Our implementation extends straightforwardly to topology optimization for other problems and benefits from the strong scaling properties observed in conventional molecular simulation. |
2025-03-20 | Sustainable Open-Data Management for Field Research: A Cloud-Based Approach in the Underlandscape Project | Augusto Ciuffoletti, Letizia Chiti et.al. | 2503.16042 | | 8 pages, 4 figures | Abstract (click to expand)Field-based research projects require a robust suite of ICT services to support data acquisition, documentation, storage, and dissemination. A key challenge lies in ensuring the sustainability of data management - not only during the project's funded period but also beyond its conclusion, when maintenance and support often depend on voluntary efforts. In the Underlandscape project, we tackled this challenge by extensively leveraging public cloud services while minimizing reliance on complex custom infrastructure. This paper provides a comprehensive overview of the project's final infrastructure, detailing the adopted data formats, the cloud-based solutions enabling data management, and the custom applications developed for system integration. |
2025-03-20 | Practical Portfolio Optimization with Metaheuristics: Pre-assignment Constraint and Margin Trading | Hang Kin Poon et.al. | 2503.15965 | | | Abstract (click to expand)Portfolio optimization is a critical area in finance, aiming to maximize returns while minimizing risk. Metaheuristic algorithms have been shown to solve complex optimization problems efficiently, with Genetic Algorithms and Particle Swarm Optimization being among the most popular methods. This paper introduces an approach to portfolio optimization that incorporates pre-assignment constraints, which limit the search space according to investor preferences and lead to better results. Additionally, margin trading strategies are taken into account, and a rarely used performance ratio is employed to evaluate portfolio efficiency. Through an illustrative example, this paper demonstrates that the metaheuristic-based methodology yields superior risk-adjusted returns compared to traditional benchmarks. The results highlight the potential of metaheuristics, aided by asset filtering, in enhancing portfolio performance in terms of risk-adjusted return. |
2025-03-20 | WeirdFlows: Anomaly Detection in Financial Transaction Flows | Arthur Capozzi, Salvatore Vilella, Dario Moncalvo et.al. | 2503.15896 | | 12 pages, 6 figures, ITADATA2024 | Abstract (click to expand)In recent years, the digitization and automation of anti-financial crime (AFC) investigative processes have faced significant challenges, particularly the need for interpretability of AI model results and the lack of labeled data for training. Network analysis has emerged as a valuable approach in this context. In this paper, we present WeirdFlows, a top-down search pipeline for detecting potentially fraudulent transactions and non-compliant agents. In a transaction network, fraud attempts are often based on complex transaction patterns that change over time to avoid detection. The WeirdFlows pipeline requires neither an a priori set of patterns nor a training set. In addition, by providing elements to explain the anomalies found, it facilitates and supports the work of an AFC analyst. We evaluate WeirdFlows on a dataset from Intesa Sanpaolo (ISP) bank, comprising 80 million cross-country transactions over 15 months, benchmarking our implementation of the algorithm. The results, corroborated by ISP AFC experts, highlight its effectiveness in identifying suspicious transactions and actors, particularly in the context of the economic sanctions imposed in the EU after February 2022. This demonstrates \textit{WeirdFlows}' capability to handle large datasets, detect complex transaction patterns, and provide the necessary interpretability for formal AFC investigations. |
2025-03-19 | Impact of pH and chloride content on the biodegradation of magnesium alloys for medical implants: An in vitro and phase-field study | S. Kovacevic, W. Ali, T. K. Mandal et.al. | 2503.15700 | | | Abstract (click to expand)The individual contributions of pH and chloride concentration to the corrosion kinetics of bioabsorbable magnesium (Mg) alloys remain unresolved despite their significant roles as driving factors in Mg corrosion. This study demonstrates and quantifies hitherto unknown separate effects of pH and chloride content on the corrosion of Mg alloys pertinent to biomedical implant applications. The experimental setup designed for this purpose enables the quantification of the dependence of corrosion on pH and chloride concentration. The in vitro tests conclusively demonstrate that variations in chloride concentration, relevant to biomedical applications, have a negligible effect on corrosion kinetics. The findings identify pH as a critical factor in the corrosion of bioabsorbable Mg alloys. A variationally consistent phase-field model is developed for assessing the degradation of Mg alloys in biological fluids. The model accurately predicts the corrosion performance of Mg alloys observed during the experiments, including their dependence on pH and chloride concentration. The capability of the framework to account for mechano-chemical effects during corrosion is demonstrated in practical orthopaedic applications considering bioabsorbable Mg alloy implants for bone fracture fixation and porous scaffolds for bone tissue engineering. The strategy has the potential to assess the in vitro and in vivo service life of bioabsorbable Mg-based biomedical devices. |
2025-03-19 | Shap-MeD | Nicolás Laverde, Melissa Robles, Johan Rodríguez et.al. | 2503.15562 | | | Abstract (click to expand)We present Shap-MeD, a text-to-3D object generative model specialized in the biomedical domain. The objective of this study is to develop an assistant that facilitates the 3D modeling of medical objects, thereby reducing development time. 3D modeling in medicine has various applications, including surgical procedure simulation and planning, the design of personalized prosthetic implants, medical education, the creation of anatomical models, and the development of research prototypes. To achieve this, we leverage Shap-E, an open-source text-to-3D generative model developed by OpenAI, and fine-tune it using a dataset of biomedical objects. Our model achieved a mean squared error (MSE) of 0.089 in latent generation on the evaluation set, compared to Shap-E's MSE of 0.147. Additionally, we conducted a qualitative evaluation, comparing our model with others in the generation of biomedical objects. Our results indicate that Shap-MeD demonstrates higher structural accuracy in biomedical object generation. |
2025-03-19 | Design for Sensing and Digitalisation (DSD): A Modern Approach to Engineering Design | Daniel N. Wilke et.al. | 2503.14851 | | 4 pages, conference, SACAM 2025 | Abstract (click to expand)This paper introduces Design for Sensing and Digitalisation (DSD), a new engineering design paradigm that integrates sensor technology for digitisation and digitalisation from the earliest stages of the design process. Unlike traditional methodologies that treat sensing as an afterthought, DSD emphasises sensor integration, signal path optimisation, and real-time data utilisation as core design principles. The paper outlines DSD's key principles, discusses its role in enabling digital twin technology, and argues for its importance in modern engineering education. By adopting DSD, engineers can create more intelligent and adaptable systems that leverage real-time data for continuous design iteration, operational optimisation and data-driven predictive maintenance. |
2025-03-18 | Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant Grain Growth Modeling via Fourier Neural Operator | Iman Peivaste, Ahmed Makradi, Salim Belouettar et.al. | 2503.14568 | link | | Abstract (click to expand)Microstructural evolution, particularly grain growth, plays a critical role in shaping the physical, optical, and electronic properties of materials. Traditional phase-field modeling accurately simulates these phenomena but is computationally intensive, especially for large systems and fine spatial resolutions. While machine learning approaches have been employed to accelerate simulations, they often struggle with resolution dependence and generalization across different grain scales. This study introduces a novel approach utilizing Fourier Neural Operator (FNO) to achieve resolution-invariant modeling of microstructure evolution in multi-grain systems. FNO operates in the Fourier space and can inherently handle varying resolutions by learning mappings between function spaces. By integrating FNO with the phase field method, we developed a surrogate model that significantly reduces computational costs while maintaining high accuracy across different spatial scales. We generated a comprehensive dataset from phase-field simulations using the Fan Chen model, capturing grain evolution over time. Data preparation involved creating input-output pairs with a time shift, allowing the model to predict future microstructures based on current and past states. The FNO-based neural network was trained using sequences of microstructures and demonstrated remarkable accuracy in predicting long-term evolution, even for unseen configurations and higher-resolution grids not encountered during training. |
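The time-shift data preparation the abstract describes reduces to building (past window) -> (future frame) pairs from a snapshot sequence, as in the hedged sketch below; the window and shift sizes are illustrative, not the paper's.

```python
# Hedged sketch of time-shift input-output pair construction for training a
# surrogate that advances a phase-field microstructure in time.
import numpy as np

def make_pairs(frames, window=2, shift=1):
    # frames: (T, H, W) array of microstructure snapshots
    X, Y = [], []
    for t in range(len(frames) - window - shift + 1):
        X.append(frames[t:t + window])             # stacked past states
        Y.append(frames[t + window + shift - 1])   # state to predict
    return np.stack(X), np.stack(Y)

frames = np.random.rand(50, 64, 64)
X, Y = make_pairs(frames)
print(X.shape, Y.shape)   # (48, 2, 64, 64) (48, 64, 64)
# X feeds the FNO as channels; resolution invariance comes from the operator
# acting in Fourier space, not from this data layout.
```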
2025-03-17 | AI-Driven Rapid Identification of Bacterial and Fungal Pathogens in Blood Smears of Septic Patients | Agnieszka Sroka-Oleksiak, Adam Pardyl, Dawid Rymarczyk et.al. | 2503.14542 | | | Abstract (click to expand)Sepsis is a life-threatening condition which requires rapid diagnosis and treatment. Traditional microbiological methods are time-consuming and expensive. In response to these challenges, deep learning algorithms were developed to identify 14 bacteria species and 3 yeast-like fungi from microscopic images of Gram-stained smears of positive blood samples from sepsis patients. A total of 16,637 Gram-stained microscopic images were used in the study. The analysis used the Cellpose 3 model for segmentation and Attention-based Deep Multiple Instance Learning for classification. Our model achieved an accuracy of 77.15% for bacteria and 71.39% for fungi, with ROC AUC of 0.97 and 0.88, respectively. The highest values, reaching up to 96.2%, were obtained for Cutibacterium acnes, Enterococcus faecium, Stenotrophomonas maltophilia and Nakaseomyces glabratus. Classification difficulties were observed in closely related species, such as Staphylococcus hominis and Staphylococcus haemolyticus, due to morphological similarity, and within Candida albicans due to high morphotic diversity. The study confirms the potential of our model for microbial classification, but it also indicates the need for further optimisation and expansion of the training data set. In the future, this technology could support microbial diagnosis, reducing diagnostic time and improving the effectiveness of sepsis treatment due to its simplicity and accessibility. Part of the results presented in this publication was covered by a patent application at the European Patent Office EP24461637.1 "A computer implemented method for identifying a microorganism in a blood and a data processing system therefor". |
2025-03-18 | Tensor-decomposition-based A Priori Surrogate (TAPS) modeling for ultra large-scale simulations | Jiachen Guo, Gino Domel, Chanwook Park et.al. | 2503.13933 | | | Abstract (click to expand)A data-free, predictive scientific AI model, Tensor-decomposition-based A Priori Surrogate (TAPS), is proposed for tackling ultra large-scale engineering simulations with significant speedup, memory savings, and storage gain. TAPS can effectively obtain surrogate models for high-dimensional parametric problems with equivalent zetta-scale ( \(10^{21}\)) degrees of freedom (DoFs). TAPS achieves this by directly obtaining reduced-order models through solving governing equations with multiple independent variables such as spatial coordinates, parameters, and time. The paper first introduces an AI-enhanced finite element-type interpolation function called convolution hierarchical deep-learning neural network (C-HiDeNN) with tensor decomposition (TD). Subsequently, the generalized space-parameter-time Galerkin weak form and the corresponding matrix form are derived. Through the choice of TAPS hyperparameters, an arbitrary convergence rate can be achieved. To show the capabilities of this framework, TAPS is then used to simulate a large-scale additive manufacturing process as an example and achieves around 1,370x speedup, 14.8x memory savings, and 955x storage gain compared to the finite difference method with \(3.46\) billion spatial degrees of freedom (DoFs). As a result, the TAPS framework opens a new avenue for many challenging ultra large-scale engineering problems, such as additive manufacturing and integrated circuit design, among others. |
2025-03-17 | Quantum Dynamics Simulation of the Advection-Diffusion Equation | Hirad Alipanah, Feng Zhang, Yongxin Yao et.al. | 2503.13729 | | | Abstract (click to expand)The advection-diffusion equation is simulated on a superconducting quantum computer via several quantum algorithms. Three formulations are considered: (1) Trotterization, (2) variational quantum time evolution (VarQTE), and (3) adaptive variational quantum dynamics simulation (AVQDS). These schemes were originally developed for the Hamiltonian simulation of many-body quantum systems. The finite-difference discretized operator of the transport equation is formulated as a Hamiltonian and solved without the need for ancillary qubits. Computations are conducted on a quantum simulator (IBM Qiskit Aer) and actual quantum hardware (IBM Fez). The former emulates the latter without the noise. The predicted results are compared with direct numerical simulation (DNS) data with infidelities of the order \(10^{-5}\) . In the quantum simulator, Trotterization is observed to have the lowest infidelity and is suitable for fault-tolerant computation. The AVQDS algorithm requires the lowest gate count and the lowest circuit depth. The VarQTE algorithm is the next best in terms of gate counts, but the number of its optimization variables is directly proportional to the number of qubits. Due to current hardware limitations, Trotterization cannot be implemented, as it requires an overwhelmingly large number of operations. Meanwhile, AVQDS and VarQTE can be executed, but suffer from large errors due to significant hardware noise. These algorithms present a new paradigm for computational transport phenomena on quantum computers. |
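The core of Trotterization can be illustrated classically: split the finite-difference generator into an advection part A and a diffusion part D and approximate \(e^{(A+D)\,\Delta t} \approx e^{A\,\Delta t} e^{D\,\Delta t}\). The hedged sketch below uses dense matrix exponentials on a 1-D periodic grid; the discretization and parameters are ours, not the paper's circuits.

```python
# Hedged classical illustration of Lie-Trotter splitting for the 1-D
# periodic advection-diffusion operator.
import numpy as np
from scipy.linalg import expm

n, c, nu, dt = 64, 1.0, 0.05, 1e-3
dx = 1.0 / n
I = np.eye(n)
shift_p, shift_m = np.roll(I, -1, axis=1), np.roll(I, 1, axis=1)
A = -c * (shift_p - shift_m) / (2 * dx)          # central-difference advection
D = nu * (shift_p - 2 * I + shift_m) / dx**2     # central-difference diffusion

u = np.exp(-100 * (np.linspace(0, 1, n, endpoint=False) - 0.5) ** 2)
exact_step = expm((A + D) * dt) @ u
trotter_step = expm(A * dt) @ (expm(D * dt) @ u)
print(np.linalg.norm(exact_step - trotter_step))  # O(dt^2) local splitting error
```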
2025-03-17 | Competitive algorithms for calculating the ground state properties of Bose-Fermi mixtures | Tomasz Świsłocki, Krzysztof Gawryluk, Mirosław Brewczyk et.al. | 2503.13717 | | | Abstract (click to expand)In this work we define, analyze, and compare different numerical schemes that can be used to study the ground state properties of Bose-Fermi systems, such as mixtures of different atomic species under external forces or self-bound quantum droplets. The bosonic atoms are assumed to be condensed and are described by the generalized Gross-Pitaevskii equation. The fermionic atoms, on the other hand, are treated individually, and each atom is associated with a wave function whose evolution follows the Hartree-Fock equation. We solve such a formulated set of equations using a variety of methods, including those based on adiabatic switching of interactions and the imaginary time propagation technique combined with the Gram-Schmidt orthonormalization or the diagonalization of the Hamiltonian matrix. We show how different algorithms compete at the numerical level by studying the mixture in the range of parameters covering the formation of self-bound quantum Bose-Fermi droplets. |
2025-03-17 | PERC: a suite of software tools for the curation of cryoEM data with application to simulation, modelling and machine learning | Beatriz Costa-Gomes, Joel Greer, Nikolai Juraschko et.al. | 2503.13329 | | 22 pages, 4 figures | Abstract (click to expand)Ease of access to data, tools and models expedites scientific research. In structural biology there are now numerous open repositories of experimental and simulated datasets. Being able to easily access and utilise these is crucial for allowing researchers to make optimal use of their research effort. The tools presented here are useful for collating existing public cryoEM datasets and/or creating new synthetic cryoEM datasets to aid the development of novel data processing and interpretation algorithms. In recent years, structural biology has seen the development of a multitude of machine-learning based algorithms for aiding numerous steps in the processing and reconstruction of experimental datasets and the use of these approaches has become widespread. Developing such techniques in structural biology requires access to large datasets which can be cumbersome to curate and unwieldy to make use of. In this paper we present a suite of Python software packages which we collectively refer to as PERC (profet, EMPIARreader and CAKED). These are designed to reduce the burden which data curation places upon structural biology research. The protein structure fetcher (profet) package allows users to conveniently download and cleave sequences or structures from the Protein Data Bank or Alphafold databases. EMPIARreader allows lazy loading of Electron Microscopy Public Image Archive datasets in a machine-learning compatible structure. The Class Aggregator for Key Electron-microscopy Data (CAKED) package is designed to seamlessly facilitate the training of machine learning models on electron microscopy data, including electron-cryo-microscopy-specific data augmentation and labelling. These packages may be utilised independently or as building blocks in workflows. All are available in open source repositories and designed to be easily extensible to facilitate more advanced workflows if required. |
2025-03-17 | Magneto-thermally Coupled Field Simulation of Homogenized Foil Winding Models | Silas Weinert, Jonas Bundschuh, Yvonne Späck-Leigsnering et.al. | 2503.13010 | | 6 pages, 8 figures | Abstract (click to expand)Foil windings have, due to their layered structure, different properties than conventional wire windings, which make them advantageous for high frequency applications. Both electromagnetic and thermal analyses are relevant for foil windings. These two physical areas are coupled through Joule losses and temperature dependent material properties. For an efficient simulation of foil windings, homogenization techniques are used to avoid resolving the single turns. Therefore, this paper comprises a coupled magneto-thermal simulation that uses a homogenization method in the electromagnetic and thermal part. A weak coupling with different time step sizes for both parts is presented. The method is validated on a simple geometry and showcased for a pot transformer that uses a foil and a wire winding. |
2025-03-17 | AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations | Quang Trung Truong, Wong Yuk Kwan, Duc Thanh Nguyen et.al. | 2503.12828 | | under review | Abstract (click to expand)Underwater video analysis, hampered by the dynamic marine environment and camera motion, remains a challenging task in computer vision. Existing training-free video generation techniques, learning motion dynamics on a frame-by-frame basis, often produce poor results with noticeable motion interruptions and misalignments. To address these issues, we propose AUTV, a framework for synthesizing marine video data with pixel-wise annotations. We demonstrate the effectiveness of this framework by constructing two video datasets, namely UTV, a real-world dataset comprising 2,000 video-text pairs, and SUTV, a synthetic video dataset including 10,000 videos with segmentation masks for marine objects. UTV provides diverse underwater videos with comprehensive annotations including appearance, texture, camera intrinsics, lighting, and animal behavior. SUTV can be used to improve underwater downstream tasks, which are demonstrated in video inpainting and video object segmentation. |
2025-03-17 | Cohort-attention Evaluation Metric against Tied Data: Studying Performance of Classification Models in Cancer Detection | Longfei Wei, Fang Sheng, Jianfei Zhang et.al. | 2503.12755 | | | Abstract (click to expand)Artificial intelligence (AI) has significantly improved medical screening accuracy, particularly in cancer detection and risk assessment. However, traditional classification metrics often fail to account for imbalanced data, varying performance across cohorts, and patient-level inconsistencies, leading to biased evaluations. We propose the Cohort-Attention Evaluation Metrics (CAT) framework to address these challenges. CAT introduces patient-level assessment, entropy-based distribution weighting, and cohort-weighted sensitivity and specificity. Key metrics like CATSensitivity (CATSen), CATSpecificity (CATSpe), and CATMean ensure balanced and fair evaluation across diverse populations. This approach enhances predictive reliability, fairness, and interpretability, providing a robust evaluation method for AI-driven medical screening models. |
2025-03-16 | Discovering uncertainty: Gaussian constitutive neural networks with correlated weights | Jeremy A. McCulloch, Ellen Kuhl et.al. | 2503.12679 | link | 10 pages, 5 figures, 1 table | Abstract (click to expand)When characterizing materials, it can be important to not only predict their mechanical properties, but also to estimate the probability distribution of these properties across a set of samples. Constitutive neural networks allow for the automated discovery of constitutive models that exactly satisfy physical laws given experimental testing data, but are only capable of predicting the mean stress response. Stochastic methods treat each weight as a random variable and are capable of learning their probability distributions. Bayesian constitutive neural networks combine both methods, but their weights lack physical interpretability and we must sample each weight from a probability distribution to train or evaluate the model. Here we introduce a more interpretable network with fewer parameters, simpler training, and the potential to discover correlated weights: Gaussian constitutive neural networks. We demonstrate the performance of our new Gaussian network on biaxial testing data, and discover a sparse and interpretable four-term model with correlated weights. Importantly, the discovered distributions of material parameters across a set of samples can serve as priors to discover better constitutive models for new samples with limited data. We anticipate that Gaussian constitutive neural networks are a natural first step towards generative constitutive models informed by physical laws and parameter uncertainty. |
2025-03-16 | Modeling ice cliff stability using a new Mohr-Coulomb-based phase field fracture model | T. Clayton, R. Duddu, T. Hageman et.al. | 2503.12481 | | | Abstract (click to expand)Iceberg calving at glacier termini results in mass loss from ice sheets, but the associated fracture mechanics is often poorly represented using simplistic (empirical or elementary mechanics-based) failure criteria. Here, we propose an advanced Mohr-Coulomb failure criterion that drives cracking based on the visco-elastic stress state in ice. This criterion is implemented in a phase field fracture framework, and finite element simulations are conducted to determine the critical conditions that can trigger ice cliff collapse. Results demonstrate that fast-moving glaciers with negligible basal friction are prone to tensile failure causing crevasse propagation far away from the ice front; whilst slow-moving glaciers with significant basal friction are likely to exhibit shear failure near the ice front. Results also indicate that seawater pressure plays a major role in modulating cliff failure. For land terminating glaciers, full thickness cliff failure is observed if the glacier exceeds a critical height, dependent on cohesive strength \(\tau_\mathrm{c}\) (\(H \approx 120\;\text{m}\) for \(\tau_\mathrm{c}=0.5\;\text{MPa}\)). For marine-terminating glaciers, ice cliff failure occurs if a critical glacier free-board (\(H-h_\mathrm{w}\)) is exceeded, with ice slumping only observed above the ocean-water height; for \(\tau_\mathrm{c} = 0.5\;\text{MPa}\), the model-predicted critical free-board is \(H-h_\mathrm{w} \approx 215\;\text{m}\) , which is in good agreement with field observations. While the critical free-board height is larger than that predicted by some previous models, we cannot conclude that marine ice cliff instability is less likely because we do not include other failure processes such as hydrofracture of basal crevasses and plastic necking. |
2025-03-16 | Development of a Cost-Effective Simulation Tool for Loss of Flow Accident Transients in High-Temperature Gas-cooled Reactors | Bo Liu, Wei Wang, Charles Moulinec et.al. | 2503.12467 | | | Abstract (click to expand)The aim of this work is to further expand the capability of the coarse-grid Computational Fluid Dynamics (CFD) approach, SubChCFD, to effectively simulate transient and buoyancy-influenced flows, which are critical in accident analyses of High-Temperature Gas-cooled Reactors (HTGRs). It has been demonstrated in our previous work that SubChCFD is highly adaptable to HTGR fuel designs and performs exceptionally well in modelling steady-state processes. In this study, the approach is extended to simulate a Loss of Flow Accident (LOFA) transient, where coolant circulation is disrupted, causing the transition from forced convection to buoyancy-driven natural circulation within the reactor core. To enable SubChCFD to capture the complex physics involved, corrections were introduced to the empirical correlations to account for the effects of flow unsteadiness, property variation and buoyancy. A 1/12th sector of the reactor core, representing the smallest symmetric unit, was modelled using a coarse mesh of approximately 60 million cells. This mesh size is about 6% of that required for a Reynolds Averaged Navier Stokes (RANS) model, where mesh sizes can typically reach the order of 1 billion cells for such configurations. Simulation results show that SubChCFD effectively captures the thermal hydraulic behaviours of the reactor during a LOFA transient, producing predictions in good agreement with RANS simulations while significantly reducing computational cost. |
2025-03-14 | Adiabatic Flame Temperatures for Oxy-Methane, Oxy-Hydrogen, Air-Methane, and Air-Hydrogen Stoichiometric Combustion using the NASA CEARUN Tool, GRI-Mech 3.0 Reaction Mechanism, and Cantera Python Package | Osama A. Marzouk et.al. | 2503.11826 | | 8 pages, 8 figures, 8 tables, peer-reviewed journal paper, open access | Abstract (click to expand)The Adiabatic Flame Temperature (AFT) in combustion represents the maximum attainable temperature at which the chemical energy in the reactant fuel is converted into sensible heat in combustion products without heat loss. AFT depends on the fuel, oxidizer, and chemical composition of the products. Computing AFT requires solving either a nonlinear equation or a larger minimization problem. This study obtained the AFTs for oxy-methane (methane and oxygen), oxy-hydrogen (hydrogen and oxygen), air-methane (methane and air), and air-hydrogen (hydrogen and air) under stoichiometric conditions. The reactant temperature was 298.15 K (25 °C), and the pressure was kept constant at 1 atm. Two reaction mechanisms were considered: a global single-step irreversible reaction for complete combustion and the GRI-Mech 3.0 elementary mechanism (53 species, 325 steps) for chemical equilibrium with its associated thermodynamic data. NASA CEARUN was the main modeling tool used. Two other tools were used for benchmarking: an Excel implementation and a Cantera-Python implementation of GRI-Mech 3.0. The results showed that the AFTs for oxy-methane were 5,166.47 K (complete combustion) and 3,050.12 K (chemical equilibrium), and dropped to 2,326.35 K and 2,224.25 K for air-methane, respectively. The AFTs for oxy-hydrogen were 4,930.56 K (complete combustion) and 3,074.51 K (chemical equilibrium), and dropped to 2,520.33 K and 2,378.62 K for air-hydrogen, respectively. For the eight combustion modeling cases, the relative deviation between the AFTs predicted by CEARUN and GRI-Mech 3.0 ranged from 0.064% to 3.503%.
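For readers who want to reproduce a chemical-equilibrium AFT of the kind reported above, a minimal Cantera sketch for stoichiometric oxy-methane is shown below (constant-enthalpy, constant-pressure equilibration with GRI-Mech 3.0; the exact output depends on the Cantera version and bundled data files).

```python
import cantera as ct

# Stoichiometric oxy-methane: CH4 + 2 O2 -> CO2 + 2 H2O
gas = ct.Solution("gri30.yaml")               # GRI-Mech 3.0 mechanism
gas.TPX = 298.15, ct.one_atm, "CH4:1, O2:2"   # reactants at 298.15 K, 1 atm

gas.equilibrate("HP")  # adiabatic flame at constant enthalpy and pressure
print(f"Chemical-equilibrium AFT: {gas.T:.2f} K")  # ~3050 K per the abstract
```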
2025-03-14 | Unfitted hybrid high-order methods stabilized by polynomial extension for elliptic interface problems | Erik Burman, Alexandre Ern, Romain Mottier et.al. | 2503.11397 | | | Abstract (click to expand)In this work, we propose the design and the analysis of a novel hybrid high-order (HHO) method on unfitted meshes. HHO methods rely on a pair of unknowns, combining polynomials attached to the mesh faces and the mesh cells. In the unfitted framework, the interface can cut through the mesh cells in a very general fashion, and the polynomial unknowns are doubled in the cut cells and the cut faces. In order to avoid the ill-conditioning issues caused by the presence of small cut cells, the novel approach introduced herein is to use polynomial extensions in the definition of the gradient reconstruction operator. Stability and consistency results are established, leading to optimally decaying error estimates. The theory is illustrated by numerical experiments. |
2025-03-14 | Corrected Riemann smoothed particle hydrodynamics method for multi-resolution fluid-structure interaction | Bo Zhang, Jianfeng Zhu, Xiangyu Hu et.al. | 2503.11292 | link | 47 pages, 19 figures | Abstract (click to expand)As a mesh-free method, smoothed particle hydrodynamics (SPH) has been widely used for modeling and simulating fluid-structure interaction (FSI) problems. While the kernel gradient correction (KGC) method is commonly applied in structural domains to enhance numerical consistency, high-order consistency corrections that preserve conservation remain underutilized in fluid domains despite their critical role in FSI analysis, especially for the multi-resolution scheme where fluid domains generally have a low resolution. In this study, we incorporate the reverse kernel gradient correction (RKGC) formulation, a conservative high-order consistency approximation, into the fluid discretization for solving FSI problems. RKGC has been proven to achieve exact second-order convergence with relaxed particles and improve numerical accuracy while particularly enhancing energy conservation in free-surface flow simulations. By integrating this correction into the Riemann SPH method to solve different typical FSI problems with a multi-resolution scheme, numerical results consistently show improvements in accuracy and convergence compared to uncorrected fluid discretization. Despite these advances, further refinement of correction techniques for solid domains and fluid-structure interfaces remains significant for enhancing the overall accuracy of SPH-based FSI modeling and simulation.
2025-03-13 | Predicting Stock Movement with BERTweet and Transformers | Michael Charles Albada, Mojolaoluwa Joshua Sonola et.al. | 2503.10957 | | 9 pages, 4 figures, 2 tables | Abstract (click to expand)Applying deep learning and computational intelligence to finance has been a popular area of applied research, both within academia and industry, and continues to attract active attention. The inherently high volatility and non-stationarity of the data pose substantial challenges to machine learning models, especially so for today's expressive and highly-parameterized deep learning models. Recent work that augments models based purely on historical price data with natural language processing of social media data has received particular attention. Previous work has achieved state-of-the-art performance on this task by combining techniques such as bidirectional GRUs, variational autoencoders, word and document embeddings, self-attention, graph attention, and adversarial training. In this paper, we demonstrate the efficacy of BERTweet, a variant of BERT pre-trained specifically on a Twitter corpus, and the transformer architecture by achieving competitive performance with the existing literature and setting a new baseline for Matthews Correlation Coefficient on the Stocknet dataset without auxiliary data sources.
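As a minimal sketch of this kind of pipeline (not the authors' code; the classification head is untrained and the Stocknet preprocessing is omitted), BERTweet can be loaded from the Hugging Face hub for binary movement classification:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# vinai/bertweet-base is the public BERTweet checkpoint pre-trained on tweets
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "vinai/bertweet-base", num_labels=2)  # labels: price up / price down

tweets = ["$AAPL earnings beat expectations",
          "supply chain worries weigh on $TSLA"]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # per-tweet movement probabilities (head still needs fine-tuning)
```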
2025-03-13 | Design and Analysis of an Extreme-Scale, High-Performance, and Modular Agent-Based Simulation Platform | Lukas Johannes Breitwieser et.al. | 2503.10796 | | PhD Thesis submitted to ETH Zurich | Abstract (click to expand)Agent-based modeling is indispensable for studying complex systems across many domains. However, existing simulation platforms exhibit two major issues: performance and modularity. Low performance prevents simulations with a large number of agents, increases development time, limits parameter exploration, and raises computing costs. Inflexible software designs motivate modelers to create their own tools, diverting valuable resources. This dissertation introduces a novel simulation platform called BioDynaMo and its significant improvement, TeraAgent, to alleviate these challenges via three major works. First, we lay the platform's foundation by defining abstractions, establishing software infrastructure, and implementing a multitude of features for agent-based modeling. We demonstrate BioDynaMo's modularity through use cases in neuroscience, epidemiology, and oncology. We validate these models and show the simplicity of adding new functionality with few lines of code. Second, we perform a rigorous performance analysis and identify challenges for shared-memory parallelism. Provided solutions include an optimized grid for neighbor searching, mechanisms to reduce the memory access latency, and exploiting domain knowledge to omit unnecessary work. These improvements yield up to three orders of magnitude speedups, enabling simulations of 1.7 billion agents on a single server. Third, we present TeraAgent, a distributed simulation engine that allows scaling out the computation of one simulation to multiple servers. We identify and address server communication bottlenecks and implement solutions for serialization and delta encoding to accelerate and reduce data transfer. TeraAgent can simulate 500 billion agents and scales to 84096 CPU cores. BioDynaMo has been widely adopted, including a prize-winning radiotherapy simulation recognized as a top 10 breakthrough in physics in 2024. |
2025-03-13 | Unifying monitoring and modelling of water concentration levels in surface waters | Peter B Sorensen, Anders Nielsen, Peter E Holm et.al. | 2503.10285 | | 41 pages, 11 figures, Developed to support the Danish EPA | Abstract (click to expand)Accurate prediction of expected concentrations is essential for effective catchment management, requiring both extensive monitoring and advanced modeling techniques. However, due to limitations in equation-solving capacity, the integration of monitoring and modeling has suffered from suboptimal statistical approaches. This limitation results in models that can only partially leverage monitoring data, hindering realistic uncertainty assessment by overlooking critical correlations between measurements and model parameters. This study presents a novel solution that integrates catchment monitoring with a unified hierarchical statistical catchment model that employs a log-normal distribution for residuals within a left-censored likelihood function to address measurements below detection limits. This enables the estimation of concentrations within sub-catchments in conjunction with a source/fate sub-catchment model and monitoring data. The approach is made possible by the model-builder R package RTMB. The proposed approach introduces a statistical paradigm based on a hierarchical structure, capable of accommodating heterogeneous sampling across various sampling locations, and the authors suggest that this will also encourage further refinement of other existing modeling platforms within the scientific community to improve synergy with monitoring programs. The application of the method is demonstrated through an analysis of nickel concentrations in Danish surface waters.
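The key statistical ingredient, a log-normal residual model left-censored at the detection limit, fits in a few lines. The sketch below uses Python/SciPy purely for illustration; the paper's implementation is built with the R package RTMB, and the example data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def censored_lognormal_loglik(mu, sigma, y, lod):
    """Log-likelihood of log-normal data left-censored at the detection limit.

    Detected values contribute the log-normal density; values at/below the
    limit of detection (LOD) contribute P(Y <= LOD) through the CDF.
    """
    y = np.asarray(y, dtype=float)
    cens = y <= lod
    z = (np.log(y[~cens]) - mu) / sigma
    ll_detected = norm.logpdf(z).sum() - np.log(sigma * y[~cens]).sum()
    ll_censored = cens.sum() * norm.logcdf((np.log(lod) - mu) / sigma)
    return ll_detected + ll_censored

# Hypothetical nickel concentrations (ug/L) with a detection limit of 0.5
print(censored_lognormal_loglik(mu=0.0, sigma=1.0,
                                y=[0.5, 0.8, 2.1, 0.5], lod=0.5))
```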
2025-03-13 | A Neumann-Neumann Acceleration with Coarse Space for Domain Decomposition of Extreme Learning Machines | Chang-Ock Lee, Byungeun Ryoo et.al. | 2503.10032 | | 21 pages, 6 figures, 6 tables | Abstract (click to expand)Extreme learning machines (ELMs), which preset hidden layer parameters and solve for last layer coefficients via a least squares method, can typically solve partial differential equations faster and more accurately than Physics Informed Neural Networks. However, they remain computationally expensive when high accuracy requires large least squares problems to be solved. Domain decomposition methods (DDMs) for ELMs have allowed parallel computation to reduce training times of large systems. This paper constructs a coarse space for ELMs, which enables further acceleration of their training. By partitioning interface variables into coarse and non-coarse variables, selective elimination introduces a Schur complement system on the non-coarse variables with the coarse problem embedded. Key to the performance of the proposed method is a Neumann-Neumann acceleration that utilizes the coarse space. Numerical experiments demonstrate significant speedup compared to a previous DDM method for ELMs. |
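The selective elimination described above is a Schur-complement reduction. A dense NumPy sketch of the algebra (illustrative only; the paper applies it to ELM interface variables with a Neumann-Neumann acceleration, not to small dense blocks) is:

```python
import numpy as np

def schur_solve(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] [x; y] = [f; g] by eliminating the first block.

    S = D - C A^{-1} B is the Schur complement on the second block of
    unknowns (here playing the role of the non-coarse interface variables).
    """
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B
    y = np.linalg.solve(S, g - C @ Ainv_f)   # reduced (Schur) system
    x = Ainv_f - Ainv_B @ y                  # back-substitution
    return x, y

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
B, C, D = rng.standard_normal((4, 2)), rng.standard_normal((2, 4)), np.eye(2)
x, y = schur_solve(A, B, C, D, rng.standard_normal(4), rng.standard_normal(2))
```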
2025-03-11 | A Deep Reinforcement Learning Approach to Automated Stock Trading, using xLSTM Networks | Faezeh Sarlakifar, Mohammadreza Mohammadzadeh Asl, Sajjad Rezvani Khaledi et.al. | 2503.09655 | | | Abstract (click to expand)Traditional Long Short-Term Memory (LSTM) networks are effective for handling sequential data but have limitations such as vanishing gradients and difficulty in capturing long-term dependencies, which can impact their performance in dynamic and risky environments like stock trading. To address these limitations, this study explores the use of the newly introduced Extended Long Short-Term Memory (xLSTM) network in combination with a deep reinforcement learning (DRL) approach for automated stock trading. Our proposed method utilizes xLSTM networks in both actor and critic components, enabling effective handling of time-series data and dynamic market environments. Proximal Policy Optimization (PPO), with its ability to balance exploration and exploitation, is employed to optimize the trading strategy. Experiments were conducted using financial data from major tech companies over a comprehensive timeline, demonstrating that the xLSTM-based model outperforms LSTM-based methods in key trading evaluation metrics, including cumulative return, average profitability per trade, maximum earning rate, maximum pullback, and Sharpe ratio. These findings highlight the potential of xLSTM for enhancing DRL-based stock trading systems.
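Two of the evaluation metrics listed above are compact enough to state exactly; the sketch below assumes daily returns, a zero risk-free rate, and a 252-day annualisation convention (the abstract does not specify the paper's conventions).

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualised Sharpe ratio from daily returns, zero risk-free rate."""
    r = np.asarray(daily_returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

def max_drawdown(daily_returns):
    """Largest peak-to-trough drop of the cumulative wealth curve."""
    wealth = np.cumprod(1.0 + np.asarray(daily_returns))
    peaks = np.maximum.accumulate(wealth)
    return (wealth / peaks - 1.0).min()

r = np.array([0.010, -0.020, 0.015, 0.003, -0.010])
print(f"Sharpe: {sharpe_ratio(r):.2f}, max drawdown: {max_drawdown(r):.2%}")
```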
2025-03-18 | Leveraging LLMs for Top-Down Sector Allocation In Automated Trading | Ryan Quek Wei Heng, Edoardo Vittori, Keane Ong et.al. | 2503.09647 | | | Abstract (click to expand)This paper introduces a methodology leveraging Large Language Models (LLMs) for sector-level portfolio allocation through systematic analysis of macroeconomic conditions and market sentiment. Our framework emphasizes top-down sector allocation by processing multiple data streams simultaneously, including policy documents, economic indicators, and sentiment patterns. Empirical results demonstrate superior risk-adjusted returns compared to traditional cross-momentum strategies, achieving a Sharpe ratio of 2.51 and portfolio return of 8.79% versus -0.61 and -1.39%, respectively. These results suggest that LLM-based systematic macro analysis presents a viable approach for enhancing automated portfolio allocation decisions at the sector level.
2025-03-12 | AI-based Framework for Robust Model-Based Connector Mating in Robotic Wire Harness Installation | Claudius Kienle, Benjamin Alt, Finn Schneider et.al. | 2503.09409 | | 6 pages, 6 figures, 4 tables, submitted to the 2025 IEEE 21st International Conference on Automation Science and Engineering | Abstract (click to expand)Despite the widespread adoption of industrial robots in automotive assembly, wire harness installation remains a largely manual process, as it requires precise and flexible manipulation. To address this challenge, we design a novel AI-based framework that automates cable connector mating by integrating force control with deep visuotactile learning. Our system optimizes search-and-insertion strategies using first-order optimization over a multimodal transformer architecture trained on visual, tactile, and proprioceptive data. Additionally, we design a novel automated data collection and optimization pipeline that minimizes the need for machine learning expertise. The framework optimizes robot programs that run natively on standard industrial controllers, permitting human experts to audit and certify them. Experimental validations on a center console assembly task demonstrate significant improvements in cycle times and robustness compared to conventional robot programming approaches. Videos are available at https://claudius-kienle.github.io/AppMuTT.
2025-03-12 | Large-scale Thermo-Mechanical Simulation of Laser Beam Welding Using High-Performance Computing: A Qualitative Reproduction of Experimental Results | Tommaso Bevilacqua, Andrey Gumenyuk, Niloufar Habibi et.al. | 2503.09345 | | | Abstract (click to expand)Laser beam welding is a non-contact joining technique that has gained significant importance with the increasing degree of automation in industrial manufacturing. This process has established itself as a suitable joining tool for metallic materials due to its non-contact processing, short cycle times, and small heat-affected zones. One potential problem, however, is the formation of solidification cracks, which particularly affects alloys with a pronounced melting range. Since solidification cracking is influenced by both temperature and strain rate, precise measurement technologies are of crucial importance. For this purpose, as an experimental setup, a Controlled Tensile Weldability (CTW) test combined with a local deformation measurement technique is used. The aim of the present work is the development of computational methods and software tools to numerically simulate the CTW. The numerical results are compared with those obtained from the experimental CTW. In this study, an austenitic stainless steel sheet is selected. A thermo-elastoplastic material behavior with temperature-dependent material parameters is assumed. The time-dependent problem is first discretized in time and then the resulting nonlinear problem is linearized with Newton's method. For the discretization in space, finite elements are used. In order to obtain a sufficiently accurate solution, a large number of finite elements has to be used. In each Newton step, this yields a large linear system of equations that has to be solved. Therefore, a highly parallel scalable solver framework, based on the software library PETSc, was used to solve this computationally challenging problem on a high-performance computing architecture. Finally, the experimental results and the numerical simulations are compared and found to be in good qualitative agreement.
2025-03-12 | A 3d particle visualization system for temperature management | Benoit Lange, Nancy Rodriguez, William Puech et.al. | 2503.09198 | | | Abstract (click to expand)This paper presents a 3D visualization technique proposed to analyze and manage the energy efficiency of a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure quantities such as humidity, pressure, and temperature. We want to visualize, in real time, the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. In order to solve performance problems, a Level of Detail solution has been developed, based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work and subsequently explain the different simplification methods we have applied to improve our solution.
2025-03-11 | Capturing Lifecycle System Degradation in Digital Twin Model Updating | Yifan Tang, Mostafa Rahmani Dehaghani, G. Gary Wang et.al. | 2503.08953 | | 32 pages, 25 figures | Abstract (click to expand)Digital twin (DT) has emerged as a powerful tool to facilitate monitoring, control, and other decision-making tasks in real-world engineering systems. Online update methods have been proposed to update DT models. Considering the degradation behavior in the system lifecycle, these methods fail to enable DT models to predict the system responses affected by the system degradation over time. To alleviate this problem, degradation models of measurable parameters have been integrated into DT construction. However, identifying the degradation parameters relies on prior knowledge of the system and expensive experiments. To mitigate those limitations, this paper proposes a lifelong update method for DT models to capture the effects of system degradation on system responses without any prior knowledge and expensive offline experiments on the system. The core idea in the work is to represent the system degradation during the lifecycle as the dynamic changes of DT configurations (i.e., model parameters with a fixed model structure) at all degradation stages. During the lifelong update process, an Autoencoder is adopted to reconstruct the model parameters of all hidden layers simultaneously, so that the latent features taking into account the dependencies among hidden layers are obtained for each degradation stage. The dynamic behavior of latent features among successive degradation stages is then captured by a long short-term memory model, which enables prediction of the latent feature at any unseen stage. Based on the predicted latent features, the model configuration at future degradation stage is reconstructed to determine the new DT model, which predicts the system responses affected by the degradation at the same stage. The test results on two engineering datasets demonstrate that the proposed update method could capture effects of system degradation on system responses during the lifecycle. |
2025-03-11 | Towards Efficient Parametric State Estimation in Circulating Fuel Reactors with Shallow Recurrent Decoder Networks | Stefano Riva, Carolina Introini, J. Nathan Kutz et.al. | 2503.08904 | link | arXiv admin note: text overlap with arXiv:2409.12550 | Abstract (click to expand)The recent developments in data-driven methods have paved the way to new methodologies to provide accurate state reconstruction of engineering systems; nuclear reactors represent particularly challenging applications for this task due to the complexity of the strongly coupled physics involved and the extremely harsh and hostile environments, especially for new technologies such as Generation-IV reactors. Data-driven techniques can combine different sources of information, including computational proxy models and local noisy measurements on the system, to robustly estimate the state. This work leverages the novel Shallow Recurrent Decoder architecture to infer the entire state vector (including neutron fluxes, precursors concentrations, temperature, pressure and velocity) of a reactor from three out-of-core time-series neutron flux measurements alone. In particular, this work extends the standard architecture to treat parametric time-series data, ensuring the possibility of investigating different accidental scenarios and showing the capabilities of this approach to provide an accurate state estimation in various operating conditions. This paper considers as a test case the Molten Salt Fast Reactor (MSFR), a Generation-IV reactor concept, characterised by strong coupling between the neutronics and the thermal hydraulics due to the liquid nature of the fuel. The promising results of this work are further strengthened by the possibility of quantifying the uncertainty associated with the state estimation, due to the considerably low training cost. The accurate reconstruction of every characteristic field in real-time makes this approach suitable for monitoring and control purposes in the framework of a reactor digital twin. |
2025-03-11 | Nonlinear optimals and their role in sustaining turbulence in channel flow | Dario Klingenberg, Rich R. Kerswell et.al. | 2503.08283 | link | | Abstract (click to expand)We investigate the energy transfer from the mean profile to velocity fluctuations in channel flow by calculating nonlinear optimal disturbances, i.e., the initial condition of a given finite energy that achieves the highest possible energy growth during a given fixed time horizon. It is found that for a large range of time horizons and initial disturbance energies, the nonlinear optimal exhibits streak spacing and amplitude consistent with DNS, at least at \(Re_\tau = 180\), which suggests that it isolates the relevant physical mechanisms that sustain turbulence. Moreover, the time horizon necessary for a nonlinear disturbance to outperform a linear optimal is consistent with previous DNS-based estimates using the eddy turnover time, which offers a new perspective on how some turbulent time scales are determined.
2025-03-11 | XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change | Jiawen Wei, Aniruddha Bora, Vivek Oommen et.al. | 2503.08163 | | | Abstract (click to expand)Extreme weather events are increasing in frequency and intensity due to climate change. This, in turn, is exacting a significant toll on communities worldwide. While prediction skills are increasing with advances in numerical weather prediction and artificial intelligence tools, extreme weather events still present challenges. More specifically, the precursors of such extreme weather events, and how these precursors may evolve under climate change, remain unclear. In this paper, we propose to use post-hoc interpretability methods to construct relevance weather maps that show the key extreme-weather precursors identified by deep learning models. We then compare this machine view with existing domain knowledge to understand whether deep learning models identified patterns in data that may enrich our understanding of extreme-weather precursors. We finally bin these relevance maps into different multi-year time periods to understand the role that climate change is having on these precursors. The experiments are carried out on Indochina heatwaves, but the methodology can be readily extended to other extreme weather events worldwide.
2025-03-10 | Network Analysis of Uniswap: Centralization and Fragility in the Decentralized Exchange Market | Tao Yan, Claudio J. Tessone et.al. | 2503.07834 | | | Abstract (click to expand)Uniswap is a Decentralized Exchange (DEX) protocol that facilitates automatic token exchange without the need for traditional order books. Every pair of tokens forms a liquidity pool on Uniswap, and each token can be paired with any other token to create liquidity pools. This characteristic motivates us to employ a complex network approach to analyze the features of the Uniswap market. This research presents a comprehensive analysis of the Uniswap network using complex network methods. The network as of October 31, 2023, is built to observe its recent features; it exhibits both scale-free and core-periphery properties. By employing node- and edge-betweenness metrics, we detect the most important tokens and liquidity pools. Additionally, we construct daily networks spanning from the beginning of Uniswap V2 on May 5, 2020, until October 31, 2023, and our findings demonstrate that the network becomes increasingly fragile over time. Furthermore, we conduct a robustness analysis by simulating the deletion of nodes to estimate the impact of extreme events such as the Terra collapse. The results indicate that the Uniswap network exhibits robustness overall, yet it is notably fragile when tokens with high betweenness centrality are deleted. This finding highlights that, despite being a decentralized exchange, Uniswap exhibits significant centralization tendencies in terms of token network connectivity and the distribution of TVL across nodes (tokens) and edges (liquidity pools).
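The betweenness ranking and node-deletion robustness experiment map almost directly onto NetworkX primitives; the toy sketch below uses a synthetic scale-free graph as a stand-in for the actual Uniswap token network (which is not reproduced here).

```python
import networkx as nx

# Synthetic stand-in: nodes are tokens, edges are liquidity pools
G = nx.barabasi_albert_graph(n=200, m=2, seed=42)

node_bc = nx.betweenness_centrality(G)
edge_bc = nx.edge_betweenness_centrality(G)
hubs = sorted(node_bc, key=node_bc.get, reverse=True)[:10]
print("most central pool:", max(edge_bc, key=edge_bc.get))

# Robustness: delete high-betweenness "tokens", track the giant component
G.remove_nodes_from(hubs)
giant = max(nx.connected_components(G), key=len)
print(f"giant component after removal: {len(giant)} of {G.number_of_nodes()}")
```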
2025-03-10 | What is missing from existing Lithium-Sulfur models to capture coin-cell behaviour? | Elizabeth Olisa, Monica Marinescu et.al. | 2503.07684 | | 27 pages, 7 figures, presented at ModVal 2025 and ECS 2025 | Abstract (click to expand)Lithium-sulfur (Li-S) batteries offer a promising alternative to current lithium-ion (Li-ion) batteries, with a high theoretical energy density, improved safety, and high abundance and low cost of materials. For Li-S to reach commercial application, it is essential to understand how the behaviour scales between cell formats; new material development is predominantly completed at coin-cell level, whilst pouch-cells will be used for commercial applications. Differences such as reduced electrolyte-to-sulfur (E/S) ratios and increased geometric size at larger cell formats contribute to the behavioural differences, in terms of achievable capacity, cyclability and potential degradation mechanisms. This work focuses on the steps required to capture and test coin-cell behaviour, building upon the existing models within the literature, which predominantly focus on pouch-cells. The areas investigated throughout this study, to improve the capability of the model in terms of scaling ability and causality of predictions, include the cathode surface area, precipitation dynamics and C-rate dependence.
2025-03-10 | Simultaneous Energy Harvesting and Bearing Fault Detection using Piezoelectric Cantilevers | P. Peralta-Braz, M. M. Alamdari, C. T. Chou et.al. | 2503.07462 | | | Abstract (click to expand)Bearings are critical components in industrial machinery, yet their vulnerability to faults often leads to costly breakdowns. Conventional fault detection methods depend on continuous, high-frequency vibration sensing, digitising, and wireless transmission to the cloud, an approach that significantly drains the limited energy reserves of battery-powered sensors, accelerating their depletion and increasing maintenance costs. This work proposes a fundamentally different approach: rather than using instantaneous vibration data, we employ piezoelectric energy harvesters (PEHs) tuned to specific frequencies and leverage the cumulative harvested energy over time as the key diagnostic feature. By directly utilising the energy generated from the machinery's vibrations, we eliminate the need for frequent analog-to-digital conversions and data transmission, thereby reducing energy consumption at the sensor node and extending its operational lifetime. To validate this approach, we use a numerical PEH model and publicly available acceleration datasets, examining various PEH designs with different natural frequencies. We also consider the influence of the classification algorithm, the number of devices, and the observation window duration. The results demonstrate that the harvested energy reliably indicates bearing faults across a range of conditions and severities. By converting vibration energy into both a power source and a diagnostic feature, our solution offers a more sustainable, low-maintenance strategy for fault detection in smart machinery.
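The diagnostic feature itself, cumulative harvested energy over an observation window, is just the time integral of instantaneous PEH power. A minimal sketch follows (the resistive-load model, sampling rate, and voltage trace are hypothetical, not taken from the paper's numerical PEH model).

```python
import numpy as np

def harvested_energy(voltage, dt, load_resistance):
    """Cumulative energy (J) delivered to a resistive load by a PEH.

    Sums instantaneous power p(t) = v(t)^2 / R over the observation window.
    """
    power = np.asarray(voltage) ** 2 / load_resistance
    return float(np.sum(power) * dt)

# Hypothetical 1 kHz voltage trace over a 2 s observation window
t = np.arange(0.0, 2.0, 1e-3)
v = 0.5 * np.sin(2 * np.pi * 120 * t)  # PEH tuned near a fault frequency
print(f"feature value: {harvested_energy(v, 1e-3, 10e3):.2e} J")
```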
2025-03-10 | Early signs of stuck pipe detection based on Crossformer | Bo Cao, Yu Song, Jin Yang et.al. | 2503.07440 | | 33 pages, 9 figures | Abstract (click to expand)Stuck pipe incidents are one of the major challenges in drilling engineering, leading to massive time loss and additional costs. To address the limitations of insufficient long-sequence modeling capability, the difficulty of accurately establishing warning thresholds, and the lack of model interpretability in existing methods, we utilize Crossformer to detect early signs of potential stuck-pipe events, providing guidance for on-site drilling engineers and helping prevent stuck pipe incidents. The sliding-window technique is integrated into Crossformer to allow it to output and display longer sequences, and the improved Crossformer model is trained on normal time-series drilling data to generate predictions for various parameters at each time step. The relative reconstruction error of the model is regarded as the stuck-pipe risk, so that data the model cannot predict are treated as anomalies, which represent the early signs of stuck pipe incidents. The multi-step prediction capability of Crossformer and the relative reconstruction error are combined to assess stuck-pipe risk at each time step in advance. We partition the reconstruction error into modeling error and error due to anomalous data fluctuations; furthermore, the dynamic warning threshold and warning time for stuck pipe incidents are determined using the probability density function of reconstruction errors from normal drilling data. The results indicate that our method can effectively detect early signs of stuck pipe incidents during the drilling process. Crossformer exhibits superior modeling and predictive capabilities compared with other deep learning models. Transformer-based models with multi-step prediction capability are more suitable for stuck-pipe prediction than current single-step prediction models.
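The dynamic warning threshold can be approximated as a high quantile of the reconstruction-error distribution on normal drilling data. The stand-in below uses an empirical 99th percentile rather than the paper's exact probability-density construction, and all data are synthetic.

```python
import numpy as np

def warning_threshold(normal_errors, quantile=0.99):
    """Alarm threshold from reconstruction errors on normal drilling data."""
    return float(np.quantile(np.asarray(normal_errors), quantile))

def early_warnings(errors, threshold):
    """Indices whose relative reconstruction error exceeds the threshold,
    treated as early signs of a potential stuck-pipe event."""
    return np.flatnonzero(np.asarray(errors) > threshold)

rng = np.random.default_rng(1)
normal = rng.gamma(shape=2.0, scale=0.05, size=5000)            # normal drilling
live = np.concatenate([rng.gamma(2.0, 0.05, 200), [0.9, 1.1]])  # anomalous tail
print("alarms at steps:", early_warnings(live, warning_threshold(normal)))
```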
2025-03-10 | An Analytics-Driven Approach to Enhancing Supply Chain Visibility with Graph Neural Networks and Federated Learning | Ge Zheng, Alexandra Brintrup et.al. | 2503.07231 | | 15 pages, 5 figures, 5 tables, submitted to a journal | Abstract (click to expand)In today's globalised trade, supply chains form complex networks spanning multiple organisations and even countries, making them highly vulnerable to disruptions. These vulnerabilities, highlighted by recent global crises, underscore the urgent need for improved visibility and resilience of the supply chain. However, data-sharing limitations often hinder the achievement of comprehensive visibility between organisations or countries due to privacy, security, and regulatory concerns. Moreover, most existing research studies focused on individual firm- or product-level networks, overlooking the multifaceted interactions among diverse entities that characterise real-world supply chains, thus limiting a holistic understanding of supply chain dynamics. To address these challenges, we propose a novel approach that integrates Federated Learning (FL) and Graph Convolutional Neural Networks (GCNs) to enhance supply chain visibility through relationship prediction in supply chain knowledge graphs. FL enables collaborative model training across countries by facilitating information sharing without requiring raw data exchange, ensuring compliance with privacy regulations and maintaining data security. GCNs empower the framework to capture intricate relational patterns within knowledge graphs, enabling accurate link prediction to uncover hidden connections and provide comprehensive insights into supply chain networks. Experimental results validate the effectiveness of the proposed approach, demonstrating its ability to accurately predict relationships within country-level supply chain knowledge graphs. This enhanced visibility supports actionable insights, facilitates proactive risk management, and contributes to the development of resilient and adaptive supply chain strategies, ensuring that supply chains are better equipped to navigate the complexities of the global economy. |
2025-03-10 | Simulating programmable morphing of shape memory polymer beam systems with complex geometry and topology | Giulio Ferri, Enzo Marino et.al. | 2503.07150 | | | Abstract (click to expand)We propose a novel approach to the analysis of programmable geometrically exact shear deformable beam systems made of shape memory polymers. The proposed method combines the viscoelastic Generalized Maxwell model with the Williams, Landel and Ferry relaxation principle, enabling the reproduction of the shape memory effect of structural systems featuring complex geometry and topology. Very high efficiency is pursued by discretizing the differential problem in space through the isogeometric collocation (IGA-C) method. The method, in addition to the desirable attributes of isogeometric analysis (IGA), such as exactness of the geometric reconstruction of complex shapes and high-order accuracy, circumvents the need for numerical integration since it discretizes the problem in the strong form. Other distinguishing features of the proposed formulation are: i) \({\rm SO}(3)\)-consistency for the linearization of the problem and for the time stepping; ii) minimal (finite) rotation parametrization, meaning that only three rotational unknowns are used; iii) no additional unknowns are needed to account for the rate-dependent material compared to the purely elastic case. Through different numerical applications involving challenging initial geometries, we show that the proposed formulation possesses all the sought attributes in terms of programmability of complex systems, geometric flexibility, and high-order accuracy.
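For reference, the Williams, Landel and Ferry principle mentioned above is usually stated as a temperature-dependent shift factor \(a_T\) applied to the relaxation times of the Generalized Maxwell model, where \(C_1\), \(C_2\) and the reference temperature \(T_{\mathrm{ref}}\) are material parameters:

\[
\log_{10} a_T = \frac{-C_1\,\left(T - T_{\mathrm{ref}}\right)}{C_2 + \left(T - T_{\mathrm{ref}}\right)}, \qquad \tau_i(T) = a_T\,\tau_i(T_{\mathrm{ref}}).
\]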
2025-03-10 | Effect of Selection Format on LLM Performance | Yuchen Han, Yucheng Wu, Jeffrey Willard et.al. | 2503.06926 | | | Abstract (click to expand)This paper investigates a critical aspect of large language model (LLM) performance: the optimal formatting of classification task options in prompts. Through an extensive experimental study, we compared two selection formats -- bullet points and plain English -- to determine their impact on model performance. Our findings suggest that presenting options via bullet points generally yields better results, although there are some exceptions. Furthermore, our research highlights the need for continued exploration of option formatting to drive further improvements in model performance. |
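Concretely, the two selection formats compared can be constructed as below (the task and option set are invented for illustration; the paper's exact prompt templates are not reproduced):

```python
options = ["positive", "negative", "neutral"]
question = "Classify the sentiment of: 'Arrived late but works well.'"

# Format 1: options as bullet points (generally stronger per the study)
bullet_prompt = question + "\nOptions:\n" + "\n".join(f"- {o}" for o in options)

# Format 2: options in plain English
plain_prompt = (question + " Choose one of "
                + ", ".join(options[:-1]) + f", or {options[-1]}.")

print(bullet_prompt, plain_prompt, sep="\n\n")
```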
2025-03-09 | Modular Photobioreactor Façade Systems for Sustainable Architecture: Design, Fabrication, and Real-Time Monitoring | Xiujin Liu et.al. | 2503.06769 | | 21 pages, 22 figures, 3 tables | Abstract (click to expand)This paper proposes an innovative solution to the growing issue of greenhouse gas (GHG) emissions: a closed photobioreactor (PBR) façade system that mitigates GHG concentrations. Using digital fabrication technology, this study explores the transition from traditional, single-function building façades to multifunctional, integrated building systems, while addressing the challenge of transporting large-scale prefabricated components. The research introduces a novel approach by designing the façade system as modular, user-friendly, and transportation-friendly bricks, enabling a user-customized and self-assembled PBR system. The basic module of the system is a "neutralization brick", embedded with algae and equipped with an air circulation system that supports the PBR's functionality. A connection system between modules allows for easy assembly by users, while a limited variety of brick styles ensures modularity in manufacturing without sacrificing customization and diversity. The system is also equipped with an advanced microalgae status detection algorithm, which allows users to monitor the condition of the microalgae using a monocular camera. This functionality ensures timely alerts and notifications for users to replace the algae, thereby optimizing the operational efficiency and sustainability of the algae cultivation process.
2025-03-09 | Energy-Adaptive Checkpoint-Free Intermittent Inference for Low Power Energy Harvesting Systems | Sahidul Islam, Wei Wei, Jishnu Banarjee et.al. | 2503.06663 | | | Abstract (click to expand)Deep neural network (DNN) inference in energy harvesting (EH) devices poses significant challenges due to resource constraints and frequent power interruptions. These power losses not only increase end-to-end latency, but also compromise inference consistency and accuracy, as existing checkpointing and restore mechanisms are prone to errors. Consequently, the quality of service (QoS) for DNN inference on EH devices is severely impacted. In this paper, we propose an energy-adaptive DNN inference mechanism capable of dynamically transitioning the model into a low-power mode by reducing computational complexity when harvested energy is limited. This approach ensures that end-to-end latency requirements are met. Additionally, to address the limitations of error-prone checkpoint-and-restore mechanisms, we introduce a checkpoint-free intermittent inference framework that ensures consistent, progress-preserving DNN inference during power failures in energy-harvesting systems. |
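The energy-adaptive selection can be caricatured as picking the most accurate model variant whose cost fits the current harvested-energy and latency budgets. The tiers below are invented for illustration; the paper's mechanism reduces the computational complexity of the DNN itself rather than switching between discrete models.

```python
def select_inference_mode(energy_budget_mj, latency_budget_ms):
    """Pick the most accurate model tier that fits both budgets.

    Hypothetical tiers: (name, energy cost in mJ, latency in ms, accuracy).
    """
    tiers = [("full", 12.0, 80.0, 0.91),
             ("reduced", 6.0, 45.0, 0.87),
             ("minimal", 2.5, 20.0, 0.80)]
    for name, cost_mj, latency_ms, accuracy in tiers:
        if cost_mj <= energy_budget_mj and latency_ms <= latency_budget_ms:
            return name, accuracy
    return None, None  # defer inference until more energy is harvested

print(select_inference_mode(energy_budget_mj=5.0, latency_budget_ms=60.0))
```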