2025-03-31 | Graph Neural Network-Based Predictive Modeling for Robotic Plaster Printing | Diego Machain Rivera, Selen Ercan Jenny, Ping Hsun Tsai et.al. | 2503.24130 | | | Abstract (click to expand)This work proposes a Graph Neural Network (GNN) modeling approach to predict the resulting surface from a particle-based fabrication process. The latter consists of spray-based printing of cementitious plaster on a wall, carried out by a robotic arm. The predictions are computed from the robotic arm trajectory features, such as position, velocity and direction, as well as the printing process parameters. The proposed approach, based on a particle representation of the wall domain and the end effector, allows for the adoption of a graph-based solution. The GNN model consists of an encoder-processor-decoder architecture and is trained using data from laboratory tests, while the hyperparameters are optimized by means of a Bayesian scheme. The aim of this model is to act as a simulator of the printing process, and ultimately to be used for generating the robotic arm trajectory and optimizing the printing parameters, towards the materialization of an autonomous plastering process. The performance of the proposed model is assessed in terms of the prediction error against unseen ground truth data, which shows its generality across varied scenarios, as well as in comparison with an existing benchmark model. The results demonstrate a significant improvement over the benchmark model, with notably better performance and enhanced error scaling across prediction steps. |
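The encoder-processor-decoder pipeline described above can be sketched as a minimal NumPy message-passing network. Everything here (the 5-particle cycle graph, 3 input features, latent width, random weights) is an illustrative assumption, not the authors' trained model:

```python
import numpy as np

# Toy encode-process-decode GNN. Particles carry input features
# (e.g. position/velocity/direction magnitudes -- illustrative only).
rng = np.random.default_rng(0)

X = rng.normal(size=(5, 3))                       # per-particle features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # particle neighborhoods

d = 8
W_enc = 0.1 * rng.normal(size=(3, d))       # encoder: features -> latent
W_msg = 0.1 * rng.normal(size=(d, d))       # processor: message function
W_dec = 0.1 * rng.normal(size=(d, 1))       # decoder: latent -> height

H = np.tanh(X @ W_enc)                      # ENCODE

for _ in range(2):                          # PROCESS: 2 message rounds
    M = np.zeros_like(H)
    deg = np.zeros((len(H), 1))
    for i, j in edges:                      # symmetric aggregation
        M[i] += H[j] @ W_msg; deg[i] += 1
        M[j] += H[i] @ W_msg; deg[j] += 1
    H = np.tanh(H + M / np.maximum(deg, 1)) # residual update

y = H @ W_dec                               # DECODE: per-particle output
print(y.shape)                              # (5, 1)
```

In a learned setting the three weight matrices would be trained end-to-end on the laboratory data; the scan above only shows the data flow.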
2025-03-31 | Organizations, teams, and job mobility: A social microdynamics approach | Bryan Adams, Valentín Vergara Hidd, Daniel Stimpson et.al. | 2503.24117 | | | Abstract (click to expand)The internal structures of large organizations determine much of what occurs inside, including the way in which tasks are performed, the workers who perform them, and the mobility of those workers within the organization. However, regarding this latter process, most of the theoretical and modeling approaches used to understand organizational worker mobility are highly stylized, relying on idealizations such as structureless organizations, indistinguishable workers, and a lack of social bonding among workers. In this article, aided by a decade of precise, temporally resolved data on a large US government organization, we introduce a new model that describes organizations as composites of teams within which individuals perform specific tasks and where social connections develop. By tracking the personnel composition of organizational teams, we find that workers who change jobs show a strong preference for reuniting with past co-workers. In this organization, 34% of all moves lead to worker reunions, a percentage well above expectation. We find that both longer time spent working together and smaller shared teams increase workers' likelihood of reuniting, supporting the notion of increased familiarity and trust behind such reunions and the dominant role of social capital in the evolution of large organizations. |
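The "reunion rate versus expectation" comparison can be illustrated with a toy permutation null model; the co-worker sets, team rosters, and moves below are invented stand-ins for the paper's personnel data:

```python
import numpy as np

# Invented toy data: past co-worker sets per mover, destination-team
# rosters, and observed moves (worker -> destination team).
coworkers = {0: {10, 11}, 1: {12}, 2: {10}, 3: set(), 4: {13}}
rosters = {"A": {10, 14}, "B": {12, 15}, "C": {16, 17}}
moves = [(0, "A"), (1, "B"), (2, "A"), (3, "C"), (4, "C")]

def reunion_fraction(pairs):
    hits = sum(bool(coworkers[w] & rosters[t]) for w, t in pairs)
    return hits / len(pairs)

observed = reunion_fraction(moves)          # 3/5 moves are reunions here

# Null model: shuffle destinations among movers to estimate the rate
# expected if moves ignored past co-working ties.
rng = np.random.default_rng(0)
workers = [w for w, _ in moves]
teams = [t for _, t in moves]
null = [reunion_fraction(list(zip(workers, rng.permutation(teams))))
        for _ in range(2000)]
print(observed, float(np.mean(null)))       # observed well above null
```

The paper's analysis is of course far richer (tenure, team size, a decade of data); this only shows the shape of the null comparison.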
2025-03-31 | Estimation of thermal properties and boundary heat transfer coefficient of the ground with a Bayesian technique | Zhanat Karashbayeva, Julien Berger, Helcio R. B. Orlande et.al. | 2503.24072 | | | Abstract (click to expand)Urbanization is a key contributor to climate change. Increasing urbanization causes an urban heat island (UHI) effect, which strongly depends on the short- and long-wave radiation balance heat flux between surfaces. Calculating this heat flux accurately requires assessing the surface temperature, which in turn depends on knowledge of the thermal properties and the surface heat transfer coefficients in the heat transfer problem. The aim of this paper is to estimate the thermal properties of the ground and the time-varying surface heat transfer coefficient by solving an inverse problem. The Dufort--Frankel scheme is applied for solving the unsteady heat transfer problem. For the inverse problem, a Markov chain Monte Carlo method is used to estimate the posterior probability density function of the unknown parameters within the Bayesian framework, applying the Metropolis-Hastings algorithm for random sample generation. Actual temperature measurements available at different ground depths were used for the solution of the inverse problem. Different time discretizations were examined for the transient heat transfer coefficient at the ground surface, which then involved different prior distributions. Results of different case studies show that the estimated values of the unknown parameters were in accordance with literature values. Moreover, with the present solution of the inverse problem the temperature residuals were smaller than those obtained by using literature values for the unknowns. |
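The Metropolis-Hastings sampler mentioned above follows a standard accept/reject loop. Here is a minimal sketch on a synthetic one-parameter inverse problem (flat prior, Gaussian likelihood), not the paper's ground heat transfer model:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps=20000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings sampling of exp(log_post)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_steps)
    for k in range(n_steps):
        cand = x + step * rng.normal()        # symmetric Gaussian proposal
        lp_c = log_post(cand)
        if np.log(rng.random()) < lp_c - lp:  # accept w.p. min(1, ratio)
            x, lp = cand, lp_c
        samples[k] = x
    return samples

# Synthetic inverse problem: recover a scalar "thermal property" k_true
# from noisy measurements, as a stand-in for the paper's estimation.
rng = np.random.default_rng(1)
k_true, sigma = 2.0, 0.1
data = k_true + sigma * rng.normal(size=50)
log_post = lambda k: -0.5 * np.sum((data - k) ** 2) / sigma**2  # flat prior
chain = metropolis_hastings(log_post, x0=0.0)
post_mean = chain[2000:].mean()               # discard burn-in
print(post_mean)                              # close to k_true = 2.0
```

With a symmetric proposal the Hastings correction cancels, which is why only the log-posterior ratio appears in the acceptance test.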
2025-03-31 | Evaluating Variational Quantum Eigensolver and Quantum Dynamics Algorithms on the Advection-Diffusion Equation | A. Barış Özgüler et.al. | 2503.24045 | | 7 pages, 2 figures | Abstract (click to expand)We investigate the potential of near-term quantum algorithms for solving partial differential equations (PDEs), focusing on a linear one-dimensional advection-diffusion equation as a test case. This study benchmarks a ground-state algorithm, Variational Quantum Eigensolver (VQE), against three leading quantum dynamics algorithms, Trotterization, Variational Quantum Imaginary Time Evolution (VarQTE), and Adaptive Variational Quantum Dynamics Simulation (AVQDS), applied to the same PDE on small quantum hardware. While Trotterization is fully quantum, VarQTE and AVQDS are variational algorithms that reduce circuit depth for noisy intermediate-scale quantum (NISQ) devices. However, hardware results from these dynamics methods show sizable errors due to noise and limited shot statistics. To establish a noise-free performance baseline, we implement the VQE-based solver on a noiseless statevector simulator. Our results show VQE can reach final-time infidelities as low as \({O}(10^{-9})\) with \(N=4\) qubits and moderate circuit depths, outperforming hardware-deployed dynamics methods that show infidelities \(\gtrsim 10^{-2}\) . By comparing noiseless VQE to shot-based and hardware-run algorithms, we assess their accuracy and resource demands, providing a baseline for future quantum PDE solvers. We conclude with a discussion of limitations and potential extensions to higher-dimensional, nonlinear PDEs relevant to engineering and finance. |
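The VQE idea of minimizing a parameterized energy expectation can be shown on a tiny classical stand-in: a 2x2 Hermitian "Hamiltonian" and a one-parameter real ansatz, with the optimizer replaced by a grid scan. This is a pedagogical sketch, not the paper's circuit or PDE encoding:

```python
import numpy as np

# Toy Hamiltonian (invented); VQE minimizes <psi(theta)|H|psi(theta)>
# over a parameterized trial state.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    psi = np.array([np.cos(theta), np.sin(theta)])  # RY-style ansatz
    return psi @ H @ psi

# Variational outer loop: a grid scan here stands in for the
# gradient-based classical optimizer used with real hardware.
thetas = np.linspace(0.0, np.pi, 2001)
e_min = min(energy(t) for t in thetas)

exact = np.linalg.eigvalsh(H)[0]             # exact ground energy
print(e_min - exact)                         # ~0: ansatz reaches ground state
```

The gap between `e_min` and `exact` plays the role of the infidelity-style error quoted in the abstract; for this expressive ansatz it is limited only by the scan resolution.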
2025-03-30 | Exact Characterization of Aggregate Flexibility via Generalized Polymatroids | Karan Mukhi, Georg Loho, Alessandro Abate et.al. | 2503.23458 | | | Abstract (click to expand)There is growing interest in utilizing the flexibility in populations of distributed energy resources (DER) to mitigate the intermittency and uncertainty of renewable generation and provide additional grid services. To enable this, aggregators must effectively represent the flexibility in the populations they control to the market or system operator. A key challenge is accurately computing the aggregate flexibility of a population, which can be formally expressed as the Minkowski sum of a collection of polytopes - a problem that is generally computationally intractable. However, the flexibility polytopes of many DERs exhibit structural symmetries that can be exploited for computational efficiency. To this end, we introduce generalized polymatroids - a family of polytopes - into the flexibility aggregation literature. We demonstrate that individual flexibility sets belong to this family, enabling efficient computation of their Minkowski sum. For homogeneous populations of DERs, we further derive simplifications that yield more succinct representations of aggregate flexibility. Additionally, we develop an efficient optimization framework over these sets and propose a vertex-based disaggregation method to allocate aggregate flexibility among individual DERs. Finally, we validate the optimality and computational efficiency of our approach through comparisons with existing methods. |
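For the simplest special case, axis-aligned boxes (one member of the generalized-polymatroid family), the Minkowski sum reduces to adding bounds coordinate-wise, and a proportional rule disaggregates an aggregate setpoint back into feasible per-DER dispatches. The bounds below are synthetic; real DER flexibility sets also carry energy-coupling constraints that the paper's machinery handles:

```python
import numpy as np

# Synthetic per-DER power flexibility over T time steps, modeled as
# boxes [p_min, p_max]^T.
rng = np.random.default_rng(0)
T, n_ders = 4, 10
p_min = rng.uniform(-2.0, 0.0, size=(n_ders, T))   # kW, max discharge
p_max = rng.uniform(0.0, 2.0, size=(n_ders, T))    # kW, max charge

# For boxes the Minkowski sum is exact and cheap: add bounds per step.
agg_min, agg_max = p_min.sum(axis=0), p_max.sum(axis=0)

# Disaggregation: place every DER at the same relative position inside
# its own box, so dispatches are feasible and sum to the target.
target = 0.5 * (agg_min + agg_max)                 # requested profile
alpha = (target - agg_min) / (agg_max - agg_min)   # position in [0, 1]
p = p_min + alpha * (p_max - p_min)

assert np.allclose(p.sum(axis=0), target)          # aggregate recovered
assert np.all(p >= p_min) and np.all(p <= p_max)   # individually feasible
```

Any `target` between `agg_min` and `agg_max` (elementwise) keeps `alpha` in [0, 1], so the same rule works for arbitrary feasible aggregate profiles.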
2025-03-30 | AI Agents in Engineering Design: A Multi-Agent Framework for Aesthetic and Aerodynamic Car Design | Mohamed Elrefaie, Janet Qian, Raina Wu et.al. | 2503.23315 | | | Abstract (click to expand)We introduce the concept of "Design Agents" for engineering applications, particularly focusing on the automotive design process, while emphasizing that our approach can be readily extended to other engineering and design domains. Our framework integrates AI-driven design agents into the traditional engineering workflow, demonstrating how these specialized computational agents interact seamlessly with engineers and designers to augment creativity, enhance efficiency, and significantly accelerate the overall design cycle. By automating and streamlining tasks traditionally performed manually, such as conceptual sketching, styling enhancements, 3D shape retrieval and generative modeling, computational fluid dynamics (CFD) meshing, and aerodynamic simulations, our approach reduces certain aspects of the conventional workflow from weeks and days down to minutes. These agents leverage state-of-the-art vision-language models (VLMs), large language models (LLMs), and geometric deep learning techniques, providing rapid iteration and comprehensive design exploration capabilities. We ground our methodology in industry-standard benchmarks, encompassing a wide variety of conventional automotive designs, and utilize high-fidelity aerodynamic simulations to ensure practical and applicable outcomes. Furthermore, we present design agents that can swiftly and accurately predict simulation outcomes, empowering engineers and designers to engage in more informed design optimization and exploration. This research underscores the transformative potential of integrating advanced generative AI techniques into complex engineering tasks, paving the way for broader adoption and innovation across multiple engineering disciplines. |
2025-03-29 | Ethereum Price Prediction Employing Large Language Models for Short-term and Few-shot Forecasting | Eftychia Makri, Georgios Palaiokrassas, Sarah Bouraga et.al. | 2503.23190 | | | Abstract (click to expand)Cryptocurrencies have transformed financial markets with their innovative blockchain technology and volatile price movements, presenting both challenges and opportunities for predictive analytics. Ethereum, being one of the leading cryptocurrencies, has experienced significant market fluctuations, making its price prediction an attractive yet complex problem. This paper presents a comprehensive study on the effectiveness of Large Language Models (LLMs) in predicting Ethereum prices for short-term and few-shot forecasting scenarios. The main challenge in training models for time series analysis is the lack of data. We address this by leveraging a novel approach that adapts existing LLMs, pre-trained on billions of tokens of natural language or images, to the unique characteristics of Ethereum price time series data. Through thorough experimentation and comparison with traditional and contemporary models, our results demonstrate that selectively freezing certain layers of pre-trained LLMs achieves state-of-the-art performance in this domain. This approach consistently surpasses benchmarks across multiple metrics, including Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE), demonstrating its effectiveness and robustness. Our research not only contributes to the existing body of knowledge on LLMs but also provides practical insights in the cryptocurrency prediction domain. The adaptability of pre-trained LLMs to the nature of Ethereum price data suggests a promising direction for future research, potentially including the integration of sentiment analysis to further refine forecasting accuracy. |
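Selective layer freezing can be illustrated without any LLM machinery: fix a "pre-trained" feature layer and update only the head by gradient descent. The two-layer model, shapes, and synthetic data below are toy assumptions, not the paper's architecture:

```python
import numpy as np

# Toy fine-tuning with a frozen layer: W1 plays the role of frozen
# pre-trained LLM layers; only the head W2 is trained.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))            # frozen "pre-trained" layer
W2 = 0.1 * rng.normal(size=(1, 4))      # trainable forecasting head

X = rng.normal(size=(64, 3))            # stand-in for encoded price windows
y = X @ np.array([0.5, -1.0, 2.0])      # synthetic next-step targets

def mse():
    H = np.maximum(X @ W1.T, 0.0)
    return float(np.mean(((H @ W2.T).ravel() - y) ** 2))

mse_before = mse()
H = np.maximum(X @ W1.T, 0.0)           # frozen features: computed once
for _ in range(500):                    # fine-tune the head only
    pred = (H @ W2.T).ravel()
    grad_W2 = 2.0 * (pred - y) @ H / len(X)
    W2 -= 0.05 * grad_W2                # W1 receives no gradient update
mse_after = mse()
print(mse_before, mse_after)            # loss drops; W1 is untouched
```

Freezing shrinks the trainable parameter count and, as a side effect here, lets the frozen features be precomputed once outside the training loop.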
2025-04-01 | HRET: A Self-Evolving LLM Evaluation Toolkit for Korean | Hanwool Lee, Soo Yong Kim, Dasol Choi et.al. | 2503.22968 | | | Abstract (click to expand)Recent advancements in Korean large language models (LLMs) have spurred numerous benchmarks and evaluation methodologies, yet the lack of a standardized evaluation framework has led to inconsistent results and limited comparability. To address this, we introduce HRET (Haerae Evaluation Toolkit), an open-source, self-evolving evaluation framework tailored specifically for Korean LLMs. HRET unifies diverse evaluation methods, including logit-based scoring, exact-match, language-inconsistency penalization, and LLM-as-a-Judge assessments. Its modular, registry-based architecture integrates major benchmarks (HAE-RAE Bench, KMMLU, KUDGE, HRM8K) and multiple inference backends (vLLM, HuggingFace, OpenAI-compatible endpoints). With automated pipelines for continuous evolution, HRET provides a robust foundation for reproducible, fair, and transparent Korean NLP research. |
2025-03-28 | Co-design of materials, structures and stimuli for magnetic soft robots with large deformation and dynamic contacts | Liwei Wang et.al. | 2503.22767 | | | Abstract (click to expand)Magnetic soft robots embedded with hard magnetic particles enable untethered actuation via external magnetic fields, offering remote, rapid, and precise control, which is highly promising for biomedical applications. However, designing such systems is challenging due to the complex interplay of magneto-elastic dynamics, large deformation, solid contacts, time-varying stimuli, and posture-dependent loading. As a result, most existing research relies on heuristics and trial-and-error methods or focuses on the independent design of stimuli or structures under static conditions. We propose a topology optimization framework for magnetic soft robots that simultaneously designs structures, location-specific material magnetization and time-varying magnetic stimuli, accounting for large deformations, dynamic motion, and solid contacts. This is achieved by integrating generalized topology optimization with the magneto-elastic material point method, which supports GPU-accelerated parallel simulations and auto-differentiation for sensitivity analysis. We applied this framework to design magnetic robots for various tasks, including multi-task shape morphing and locomotion, in both 2D and 3D. The method autonomously generates optimized robotic systems to achieve target behaviors without requiring human intervention. Despite the nonlinear physics and large design space, it demonstrates exceptional efficiency, completing all cases within minutes. This proposed framework represents a significant step toward the automatic co-design of magnetic soft robots for applications such as metasurfaces, drug delivery, and minimally invasive procedures. |
2025-03-28 | A high order multigrid-preconditioned immersed interface solver for the Poisson equation with boundary and interface conditions | James Gabbard, Andrea Paris, Wim M. van Rees et.al. | 2503.22455 | | | Abstract (click to expand)This work presents a multigrid preconditioned high order immersed finite difference solver to accurately and efficiently solve the Poisson equation on complex 2D and 3D domains. The solver employs a low order Shortley-Weller multigrid method to precondition a high order matrix-free Krylov subspace solver. The matrix-free approach enables full compatibility with high order IIM discretizations of boundary and interface conditions, as well as high order wavelet-adapted multiresolution grids. Through verification and analysis on 2D domains, we demonstrate the ability of the algorithm to provide high order accurate results to Laplace and Poisson problems with Dirichlet, Neumann, and/or interface jump boundary conditions, all effectively preconditioned using the multigrid method. We further show that the proposed method is able to efficiently solve high order discretizations of Laplace and Poisson problems on complex 3D domains using thousands of compute cores and on multiresolution grids. To our knowledge, this work presents the largest problem sizes tackled with high order immersed methods applied to elliptic partial differential equations, and the first high order results on 3D multiresolution adaptive grids. Together, this work paves the way for employing high order immersed methods to a variety of 3D partial differential equations with boundary or interface conditions, including linear and non-linear elasticity problems, the incompressible Navier-Stokes equations, and fluid-structure interactions. |
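The structure "low order multigrid preconditioning a matrix-free Krylov solver" can be sketched in one dimension: a matrix-free [-1, 2, -1] Poisson stencil, a Galerkin two-grid cycle as the preconditioner, and preconditioned conjugate gradients. This is a schematic 1D analogue, not the Shortley-Weller/IIM solver:

```python
import numpy as np

n = 63                        # fine interior points
nc = (n - 1) // 2             # coarse interior points

def apply_A(u):
    """Matrix-free 1D Poisson stencil [-1, 2, -1] (h^2 folded into b)."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au

# Tiny dense transfer operators, used only to form the coarse matrix.
R = np.zeros((nc, n))
for j in range(nc):
    R[j, 2*j:2*j+3] = [0.25, 0.5, 0.25]        # full weighting
P = 2.0 * R.T                                   # linear interpolation
A_f = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n-1), 1)
       - np.diag(np.ones(n-1), -1))
A_c = R @ A_f @ P                               # Galerkin coarse operator

def two_grid(r, omega=2.0/3.0):
    """One symmetric two-grid cycle as the preconditioner M^{-1} r."""
    u = omega * r / 2.0                         # pre-smooth (Jacobi, zero guess)
    u = u + P @ np.linalg.solve(A_c, R @ (r - apply_A(u)))  # coarse correction
    u = u + omega * (r - apply_A(u)) / 2.0      # post-smooth
    return u

def pcg(b, M, tol=1e-10, maxit=50):
    x = np.zeros_like(b); r = b.copy()
    z = M(r); p = z.copy(); rz = r @ z
    for k in range(maxit):
        Ap = apply_A(p)
        a = rz / (p @ Ap)
        x += a * p; r -= a * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M(r); rz_new = r @ z
        p = z + (rz_new / rz) * p; rz = rz_new
    return x, maxit

b = np.ones(n)
x, iters = pcg(b, two_grid)
print(iters, np.linalg.norm(apply_A(x) - b))    # few iterations, tiny residual
```

Only the coarse level ever materializes a matrix; the fine operator is applied stencil-wise, which is the property that makes the approach compatible with high order immersed discretizations.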
2025-03-28 | Numerical optimization of aviation decarbonization scenarios: balancing traffic and emissions with maturing energy carriers and aircraft technology | Ian Costa-Alves, Nicolas Gourdain, François Gallard et.al. | 2503.22435 | | | Abstract (click to expand)Despite being considered a hard-to-abate sector, aviation's emissions will play an important role in long-term climate mitigation of transportation. The introduction of low-carbon energy carriers and the deployment of new aircraft in the current fleet are modeled as a technology-centered decarbonization policy, and supply constraints in targeted market segments are modeled as demand-side policy. Shared socioeconomic pathways (SSP) are used to estimate the trend traffic demand and limit the sectoral consumption of electricity and biomass. Mitigation scenarios are formulated as optimization problems and three applications are demonstrated: single-policy optimization, scenario-robust policy, and multiobjective policy trade-off. Overall, we find that the choice of energy carrier to embark is highly dependent on assumptions regarding aircraft technology and the background energy system, and that market-targeted traffic constraints are required to align trend scenarios with the Paris Agreement. The usual burdens associated with nonlinear optimization with high-dimensional variables are dealt with by jointly using libraries for Multidisciplinary Optimization (GEMSEO) and Automatic Differentiation (JAX), which resulted in speedups of two orders of magnitude at the optimization level, while reducing associated implementation efforts. |
2025-03-28 | Inverse design of dual-band valley-Hall topological photonic crystals with arbitrary pseudospin states | Yuki Sato, Shrinathan Esaki Muthu Pandara Kone, Junpei Oba et.al. | 2503.22206 | | 13 pages, 9 figures | Abstract (click to expand)Valley photonic crystals (VPCs) offer topological kink states that ensure robust, unidirectional, and backscattering-immune light propagation. The design of VPCs is typically based on analogies with condensed-matter topological insulators that exhibit the quantum valley Hall effect; trial-and-error approaches are often used to tailor the photonic band structures and their topological properties, which are characterized by the local Berry curvatures. In this paper, we present an inverse design framework based on frequency-domain analysis for VPCs with arbitrary pseudospin states. Specifically, we utilize the transverse spin angular momentum (TSAM) at the band edge to formulate the objective function for engineering the desired topological properties. Numerical experiments demonstrate that our proposed design approach can successfully produce photonic crystal waveguides exhibiting dual-band operation, enabling frequency-dependent light routing. Our pseudospin-engineering method thus provides a cost-effective alternative for designing topological photonic waveguides, offering novel functionalities. |
2025-03-28 | Convolutional optimization with convex kernel and power lift | Zhipeng Lu et.al. | 2503.22135 | | | Abstract (click to expand)We focus on establishing the foundational paradigm of a novel optimization theory based on convolution with convex kernels. Our goal is to devise a morally deterministic model of locating the global optima of an arbitrary function, which is distinguished from most commonly used statistical models. Limited preliminary numerical results are provided to test the efficiency of some specific algorithms derived from our paradigm, which we hope will stimulate further practical interest. |
2025-03-28 | A production planning benchmark for real-world refinery-petrochemical complexes | Wenli Du, Chuan Wang, Chen Fan et.al. | 2503.22057 | | | Abstract (click to expand)To achieve digital intelligence transformation and carbon neutrality, effective production planning is crucial for integrated refinery-petrochemical complexes. Modern refinery planning relies on advanced optimization techniques, whose development requires reproducible benchmark problems. However, existing benchmarks lack practical context or impose oversimplified assumptions, limiting their applicability to enterprise-wide optimization. To bridge the substantial gap between theoretical research and industrial applications, this paper introduces the first open-source, demand-driven benchmark for industrial-scale refinery-petrochemical complexes with transparent model formulations and comprehensive input parameters. The benchmark incorporates a novel port-stream hybrid superstructure for modular modeling and broad generalizability. Key secondary processing units are represented using the delta-base approach grounded in historical data. Three real-world cases have been constructed to encompass distinct scenario characteristics, respectively addressing (1) a stand-alone refinery without integer variables, (2) chemical site integration with inventory-related integer variables, and (3) multi-period planning. All model parameters are fully accessible. Additionally, this paper provides an analysis of computational performance, ablation experiments on delta-base modeling, and application scenarios for the proposed benchmark. |
2025-03-27 | Multimodal Data Integration for Sustainable Indoor Gardening: Tracking Anyplant with Time Series Foundation Model | Seyed Hamidreza Nabaei, Zeyang Zheng, Dong Chen et.al. | 2503.21932 | | Accepted at ASCE International Conference on Computing in Civil Engineering (i3ce) | Abstract (click to expand)Indoor gardening within sustainable buildings offers a transformative solution to urban food security and environmental sustainability. By 2030, urban farming, including Controlled Environment Agriculture (CEA) and vertical farming, is expected to grow at a compound annual growth rate (CAGR) of 13.2% from 2024 to 2030, according to market reports. This growth is fueled by advancements in Internet of Things (IoT) technologies, sustainable innovations such as smart growing systems, and the rising interest in green interior design. This paper presents a novel framework that integrates computer vision, machine learning (ML), and environmental sensing for the automated monitoring of plant health and growth. Unlike previous approaches, this framework combines RGB imagery, plant phenotyping data, and environmental factors such as temperature and humidity, to predict plant water stress in a controlled growth environment. The system utilizes high-resolution cameras to extract phenotypic features, such as RGB, plant area, height, and width, while employing the Lag-Llama time series model to analyze and predict water stress. Experimental results demonstrate that integrating RGB, size ratios, and environmental data significantly enhances predictive accuracy, with the fine-tuned model achieving the lowest errors (MSE = 0.420777, MAE = 0.595428) and reduced uncertainty. These findings highlight the potential of multimodal data and intelligent systems to automate plant care, optimize resource consumption, and align indoor gardening with sustainable building management practices, paving the way for resilient, green urban spaces. |
2025-03-27 | Data-Driven Nonlinear Model Reduction to Spectral Submanifolds via Oblique Projection | Leonardo Bettini, Bálint Kaszás, Bernhard Zybach et.al. | 2503.21895 | | | Abstract (click to expand)The dynamics in a primary Spectral Submanifold (SSM) constructed over the slowest modes of a dynamical system provide an ideal reduced-order model for nearby trajectories. Modeling the dynamics of trajectories further away from the primary SSM, however, is difficult if the linear part of the system exhibits strong non-normal behavior. Such non-normality implies that simply projecting trajectories onto SSMs along directions normal to the slow linear modes will not pair those trajectories correctly with their reduced counterparts on the SSMs. In principle, a well-defined nonlinear projection along a stable invariant foliation exists and would exactly match the full dynamics to the SSM-reduced dynamics. This foliation, however, cannot realistically be constructed from practically feasible amounts and distributions of experimental data. Here we develop an oblique projection technique that is able to approximate this foliation efficiently, even from a single experimental trajectory of a significantly non-normal and nonlinear beam. |
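The basic linear ingredient of such a projection is the oblique projector P = V (WᵀV)⁻¹ Wᵀ, which acts as the identity on range(V) and projects along the complement of range(W); the paper's construction is a nonlinear, data-driven extension of this idea. A small NumPy check, with random V and W as placeholders for the slow-subspace and foliation directions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
V = rng.normal(size=(n, k))     # basis for the slow (SSM-tangent) subspace
W = rng.normal(size=(n, k))     # directions defining the projection fibers

# Oblique projector onto range(V) along the kernel of W^T:
P = V @ np.linalg.solve(W.T @ V, W.T)

# Orthogonal projector onto range(V), for comparison:
P_orth = V @ np.linalg.solve(V.T @ V, V.T)

assert np.allclose(P @ P, P)          # idempotent: P is a projection
assert np.allclose(P @ V, V)          # identity on the reduced subspace
assert not np.allclose(P, P_orth)     # oblique != orthogonal in general
```

When the linear part is strongly non-normal, W differs markedly from V, which is exactly why the orthogonal choice (W = V) mis-pairs trajectories with their reduced counterparts.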
2025-03-27 | CMADiff: Cross-Modal Aligned Diffusion for Controllable Protein Generation | Changjian Zhou, Yuexi Qiu, Tongtong Ling et.al. | 2503.21450 | | | Abstract (click to expand)AI-assisted protein design has emerged as a critical tool for advancing biotechnology, as deep generative models have demonstrated their reliability in this domain. However, most existing models primarily utilize protein sequence or structural data for training, neglecting the physicochemical properties of proteins. Moreover, they offer little control over protein generation under intuitive conditions. To address these limitations, we propose CMADiff, a novel framework that enables controllable protein generation by aligning the physicochemical properties of protein sequences with text-based descriptions through a latent diffusion process. Specifically, CMADiff employs a Conditional Variational Autoencoder (CVAE) to integrate physicochemical features as conditional input, forming a robust latent space that captures biological traits. In this latent space, we apply a conditional diffusion process, which is guided by BioAligner, a contrastive learning-based module that aligns text descriptions with protein features, enabling text-driven control over protein sequence generation. Validated by a series of evaluations including AlphaFold3, the experimental results indicate that CMADiff outperforms protein sequence generation benchmarks and holds strong potential for future applications. The implementation and code are available at https://github.com/HPC-NEAU/PhysChemDiff. |
2025-03-27 | Large Language Models for Traffic and Transportation Research: Methodologies, State of the Art, and Future Opportunities | Yimo Yan, Yejia Liao, Guanhao Xu et.al. | 2503.21330 | | | Abstract (click to expand)The rapid rise of Large Language Models (LLMs) is transforming traffic and transportation research, with significant advancements emerging between the years 2023 and 2025 -- a period marked by the inception and swift growth of adopting and adapting LLMs for various traffic and transportation applications. However, despite these significant advancements, a systematic review and synthesis of the existing studies remain lacking. To address this gap, this paper provides a comprehensive review of the methodologies and applications of LLMs in traffic and transportation, highlighting their ability to process unstructured textual data to advance transportation research. We explore key applications, including autonomous driving, travel behavior prediction, and general transportation-related queries, alongside methodologies such as zero- or few-shot learning, prompt engineering, and fine-tuning. Our analysis identifies critical research gaps. From the methodological perspective, many research gaps can be addressed by integrating LLMs with existing tools and refining LLM architectures. From the application perspective, we identify numerous opportunities for LLMs to tackle a variety of traffic and transportation challenges, building upon existing research. By synthesizing these findings, this review not only clarifies the current state of LLM adoption and adaptation in traffic and transportation but also proposes future research directions, paving the way for smarter and more sustainable transportation systems. |
2025-03-27 | ResearchBench: Benchmarking LLMs in Scientific Discovery via Inspiration-Based Task Decomposition | Yujie Liu, Zonglin Yang, Tong Xie et.al. | 2503.21248 | | | Abstract (click to expand)Large language models (LLMs) have demonstrated potential in assisting scientific research, yet their ability to discover high-quality research hypotheses remains unexamined due to the lack of a dedicated benchmark. To address this gap, we introduce the first large-scale benchmark for evaluating LLMs with a near-sufficient set of sub-tasks of scientific discovery: inspiration retrieval, hypothesis composition, and hypothesis ranking. We develop an automated framework that extracts critical components - research questions, background surveys, inspirations, and hypotheses - from scientific papers across 12 disciplines, with expert validation confirming its accuracy. To prevent data contamination, we focus exclusively on papers published in 2024, ensuring minimal overlap with LLM pretraining data. Our evaluation reveals that LLMs perform well in retrieving inspirations, an out-of-distribution task, suggesting their ability to surface novel knowledge associations. This positions LLMs as "research hypothesis mines", capable of facilitating automated scientific discovery by generating innovative hypotheses at scale with minimal human intervention. |
2025-03-27 | GPU-Accelerated Charge-Equilibration for Shadow Molecular Dynamics in Python | Mehmet Cagri Kaymak, Nicholas Lubbers, Christian F. A. Negre et.al. | 2503.21176 | | | Abstract (click to expand)With recent advancements in machine learning for interatomic potentials, Python has become the go-to programming language for exploring new ideas. While machine-learning potentials are often developed in Python-based frameworks, existing molecular dynamics software is predominantly written in lower-level languages. This disparity complicates the integration of machine learning potentials into these molecular dynamics libraries. Additionally, machine learning potentials typically focus on local features, often neglecting long-range electrostatics due to computational complexities. This is a key limitation as applications can require long-range electrostatics and even flexible charges to achieve the desired accuracy. Recent charge equilibration models can address these issues, but they require iterative solvers to assign relaxed flexible charges to the atoms. Conventional implementations also demand very tight convergence to achieve long-term stability, further increasing computational cost. In this work, we present a scalable Python implementation of a recently proposed shadow molecular dynamics scheme based on a charge equilibration model, which avoids the convergence problem while maintaining long-term energy stability and accuracy of observable properties. To deliver a functional and user-friendly Python-based library, we implemented an efficient neighbor list algorithm, Particle Mesh Ewald, and traditional Ewald summation techniques, leveraging the GPU-accelerated power of Triton and PyTorch. We integrated these approaches with the Python-based shadow molecular dynamics scheme, enabling fast charge equilibration for scalable machine learning potentials involving systems with hundreds of thousands of atoms. |
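A charge-equilibration model of the kind referenced here assigns charges by minimizing E(q) = χ·q + ½ qᵀJq under total-charge conservation, which for a fixed geometry is a linear KKT system. The sketch below solves that system directly for a toy 1D chain with invented parameters and bare 1/r couplings; the paper's shadow dynamics exists precisely to avoid re-converging such solves at every MD step, and it uses Ewald/PME rather than the bare sum:

```python
import numpy as np

# Toy QEq-style equilibration on a 5-atom chain (invented parameters).
rng = np.random.default_rng(0)
N = 5
chi = rng.uniform(2.0, 4.0, N)              # electronegativity per atom
hard = rng.uniform(6.0, 8.0, N)             # chemical hardness (diagonal)
pos = 1.5 * np.arange(N)                    # chain geometry, spacing 1.5

d = np.abs(pos[:, None] - pos[None, :])
J = np.zeros((N, N))
J[d > 0] = 1.0 / d[d > 0]                   # bare Coulomb-like coupling
np.fill_diagonal(J, hard)

# Charges minimize E(q) = chi.q + 0.5 q.J.q subject to sum(q) = 0,
# giving the KKT system [[J, 1], [1^T, 0]] [q; lam] = [-chi; 0].
K = np.block([[J, np.ones((N, 1))],
              [np.ones((1, N)), np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-chi, [0.0]]))
q = sol[:N]

print(q.sum())                              # ~0: neutrality enforced
mu = chi + J @ q                            # effective electronegativity
print(np.ptp(mu))                           # ~0: equalized at equilibrium
```

The equalized `mu` is the defining property of charge equilibration; in iterative production codes the tight tolerance on this condition is what drives the cost that shadow dynamics sidesteps.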
2025-03-26 | FinAudio: A Benchmark for Audio Large Language Models in Financial Applications | Yupeng Cao, Haohang Li, Yangyang Yu et.al. | 2503.20990 | | | Abstract (click to expand)Audio Large Language Models (AudioLLMs) have received widespread attention and have significantly improved performance on audio tasks such as conversation, audio understanding, and automatic speech recognition (ASR). Despite these advancements, there is an absence of a benchmark for assessing AudioLLMs in financial scenarios, where audio data, such as earnings conference calls and CEO speeches, are crucial resources for financial analysis and investment decisions. In this paper, we introduce \textsc{FinAudio}, the first benchmark designed to evaluate the capacity of AudioLLMs in the financial domain. We first define three tasks based on the unique characteristics of the financial domain: 1) ASR for short financial audio, 2) ASR for long financial audio, and 3) summarization of long financial audio. Then, we curate two short and two long audio datasets and develop a novel dataset for financial audio summarization, which together comprise the \textsc{FinAudio} benchmark. Finally, we evaluate seven prevalent AudioLLMs on \textsc{FinAudio}. Our evaluation reveals the limitations of existing AudioLLMs in the financial domain and offers insights for improving AudioLLMs. All datasets and codes will be released. |
2025-03-26 | TransDiffSBDD: Causality-Aware Multi-Modal Structure-Based Drug Design | Xiuyuan Hu, Guoqing Liu, Can Chen et.al. | 2503.20913 | | | Abstract (click to expand)Structure-based drug design (SBDD) is a critical task in drug discovery, requiring the generation of molecular information across two distinct modalities: discrete molecular graphs and continuous 3D coordinates. However, existing SBDD methods often overlook two key challenges: (1) the multi-modal nature of this task and (2) the causal relationship between these modalities, limiting their plausibility and performance. To address both challenges, we propose TransDiffSBDD, an integrated framework combining autoregressive transformers and diffusion models for SBDD. Specifically, the autoregressive transformer models discrete molecular information, while the diffusion model samples continuous distributions, effectively resolving the first challenge. To address the second challenge, we design a hybrid-modal sequence for protein-ligand complexes that explicitly respects the causality between modalities. Experiments on the CrossDocked2020 benchmark demonstrate that TransDiffSBDD outperforms existing baselines. |
2025-03-26 | Technical Note: Continuum Theory of Mixture for Three-phase Thermomechanical Model of Fiber-reinforced Aerogel Composites | Pratyush Kumar Singh, Danial Faghihi et.al. | 2503.20713 | | | Abstract (click to expand)We present a thermodynamically consistent three-phase model for the coupled thermal transport and mechanical deformation of ceramic aerogel porous composite materials, which is formulated via continuum mixture theory. The composite comprises a solid silica skeleton, a gaseous fluid phase, and dispersed solid fibers. The thermal transport model incorporates the effects of meso- and macro-pore size variations due to the Knudsen effect, achieved by upscaling phonon transport relations to derive constitutive equations for the fluid thermal conductivity. The mechanical model captures solid-solid and solid-fluid interactions through momentum exchange between phases. A mixed finite element formulation is employed to solve the multiphase model, and numerical studies are conducted to analyze key features of the computational model. |
2025-03-26 | General Method for Conversion Between Multimode Network Parameters | Alexander Zhuravlev, Juan D. Baena et.al. | 2503.20298 | | | Abstract (click to expand)Different types of network parameters have long been used in electronics. The most typical network parameters, but not the only ones, are \(S\), \(T\), \(ABCD\), \(Z\), \(Y\), and \(h\), which relate input and output signals in different ways. There exist practical formulas for conversion between them. Due to the development of powerful software tools that can deal efficiently and accurately with higher-order modes in each port, researchers need conversion rules between multimode network parameters. However, the usual way to obtain each conversion rule has been to develop cumbersome algebraic manipulations that, in the end, are useful only for one specific conversion. Here, we propose a general algebraic method to obtain any conversion rule between different multimode network parameters. It is based on the assumption of a state vector space, in which each conversion rule between network parameters can be interpreted as a simple change of basis. This procedure handles any conversion between multimode network parameters with the same algebraic steps.
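To make the change-of-basis idea concrete, here is a minimal sketch of one classical conversion pair, S to Z and back, assuming a single mode per port and a common real reference impedance `z0`; the function names are hypothetical, and the paper's multimode machinery (higher-order modes per port) is not reproduced.

```python
import numpy as np

def s_to_z(S, z0=50.0):
    """Convert a scattering matrix S to an impedance matrix Z.

    Assumes every port/mode shares the same real reference impedance z0:
    Z = z0 * (I + S) @ inv(I - S)
    """
    I = np.eye(S.shape[0])
    return z0 * (I + S) @ np.linalg.inv(I - S)

def z_to_s(Z, z0=50.0):
    """Inverse conversion: S = (Z - z0*I) @ inv(Z + z0*I)."""
    I = np.eye(Z.shape[0])
    return (Z - z0 * I) @ np.linalg.inv(Z + z0 * I)

# Round trip on a random small-reflection 4-mode matrix.
rng = np.random.default_rng(0)
S = 0.1 * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
S_back = z_to_s(s_to_z(S))
```

The round trip recovers S exactly, which is the matrix-level analogue of the "change of basis and back" viewpoint the paper generalizes.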
2025-03-26 | Dynamic Learning and Productivity for Data Analysts: A Bayesian Hidden Markov Model Perspective | Yue Yin et.al. | 2503.20233 | | 29 pages; a shorter 11-page version is accepted by HCI International (HCII) 2025; | Abstract (click to expand)Data analysts are essential in organizations, transforming raw data into insights that drive decision-making and strategy. This study explores how analysts' productivity evolves on a collaborative platform, focusing on two key learning activities: writing queries and viewing peer queries. While traditional research often assumes static models, where performance improves steadily with cumulative learning, such models fail to capture the dynamic nature of real-world learning. To address this, we propose a Hidden Markov Model (HMM) that tracks how analysts transition between distinct learning states based on their participation in these activities. Using an industry dataset with 2,001 analysts and 79,797 queries, this study identifies three learning states: novice, intermediate, and advanced. Productivity increases as analysts advance to higher states, reflecting the cumulative benefits of learning. Writing queries benefits analysts across all states, with the largest gains observed for novices. Viewing peer queries supports novices but may hinder analysts in higher states due to cognitive overload or inefficiencies. Transitions between states are also uneven, with progression from intermediate to advanced being particularly challenging. This study advances the understanding of the dynamic learning behavior of knowledge workers and offers practical implications for designing systems, optimizing training, enabling personalized learning, and fostering effective knowledge sharing.
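A minimal sketch of the hidden Markov machinery involved: three latent learning states with illustrative (not fitted) transition and emission matrices, and the scaled forward recursion for the likelihood of an observed productivity sequence.

```python
import numpy as np

# Toy 3-state learning HMM over {novice, intermediate, advanced}; the
# matrices below are illustrative choices, not the paper's fitted values.
pi = np.array([0.80, 0.15, 0.05])            # initial state distribution
A = np.array([[0.90, 0.09, 0.01],            # row-stochastic transitions
              [0.05, 0.85, 0.10],
              [0.01, 0.04, 0.95]])
B = np.array([[0.8, 0.2],                    # P(observation | state);
              [0.5, 0.5],                    # observations: 0 = low,
              [0.2, 0.8]])                   # 1 = high productivity

def forward_loglik(obs):
    """Scaled forward recursion: log-likelihood of an observation sequence."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]        # propagate, then weight by emission
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()          # rescale to avoid underflow
    return loglik
```

A Bayesian treatment, as in the paper, would place priors over `pi`, `A`, and `B` and sample them rather than fixing point estimates.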
2025-03-26 | Solving 2-D Helmholtz equation in the rectangular, circular, and elliptical domains using neural networks | D. Veerababu, Prasanta K. Ghosh et.al. | 2503.20222 | | 59 pages | Abstract (click to expand)Physics-informed neural networks offer an alternative way to solve several differential equations that govern complicated physics. However, their success in predicting the acoustic field is limited by the vanishing-gradient problem that occurs when solving the Helmholtz equation. In this paper, a formulation is presented that addresses this difficulty. The problem of solving the two-dimensional Helmholtz equation with the prescribed boundary conditions is posed as an unconstrained optimization problem using the trial solution method. According to this method, a trial neural network that satisfies the given boundary conditions prior to the training process is constructed using the technique of transfinite interpolation and the theory of R-functions. This ansatz is initially applied to the rectangular domain and later extended to the circular and elliptical domains. The acoustic field predicted from the proposed formulation is compared with that obtained from two-dimensional finite element models. Good agreement is observed in all three domains considered. Minor limitations associated with the proposed formulation and their remedies are also discussed.
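The trial-solution construction can be sketched simply: multiply a free network by a function that vanishes on the boundary, so homogeneous Dirichlet conditions hold exactly for any weights. The polynomial multiplier and the tiny random-feature "network" below are stand-ins for the paper's transfinite-interpolation/R-function construction and its trained network.

```python
import numpy as np

def net(xy, w):
    """Stand-in for a neural network: a one-feature tanh model."""
    return np.tanh(xy @ w[:2] + w[2]) * w[3]

def trial_solution(x, y, w):
    """Trial ansatz on the unit square with homogeneous Dirichlet BCs.

    The multiplier x(1-x)y(1-y) vanishes on the boundary, so the boundary
    conditions are satisfied exactly regardless of the network weights w.
    (A toy version of the R-function 'distance' factor used in the paper.)
    """
    xy = np.stack([x, y], axis=-1)
    return x * (1 - x) * y * (1 - y) * net(xy, w)

w = np.array([0.7, -1.3, 0.2, 2.0])          # arbitrary untrained weights
x = np.linspace(0.0, 1.0, 11)
boundary_vals = trial_solution(x, np.zeros_like(x), w)   # edge y = 0
```

Training then only has to drive the Helmholtz residual to zero in the interior, which removes the boundary-loss term that often causes gradient pathologies.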
2025-03-25 | Lossy Compression of Scientific Data: Applications, Constraints, and Requirements | Franck Cappello, Allison Baker, Ebru Bozda et.al. | 2503.20031 | | 33 pages | Abstract (click to expand)Increasing data volumes from scientific simulations and instruments (supercomputers, accelerators, telescopes) often exceed network, storage, and analysis capabilities. The scientific community's response to this challenge is scientific data reduction. Reduction can take many forms, such as triggering, sampling, filtering, quantization, and dimensionality reduction. This report focuses on a specific technique: lossy compression. Lossy compression retains all data points, leveraging data correlations and a controlled reduction in accuracy. Quality constraints, especially for quantities of interest, are crucial for preserving scientific discoveries. User requirements also include compression ratio and speed. While many papers have been published on lossy compression techniques and reference datasets are shared by the community, there is a lack of detailed specifications of application needs that can guide lossy compression researchers and developers. This report fills this gap by reporting on the requirements and constraints of nine scientific applications covering a large spectrum of domains (climate, combustion, cosmology, fusion, light sources, molecular dynamics, quantum circuit simulation, seismology, and system logs). The report also details key lossy compression technologies (SZ, ZFP, MGARD, LC, SPERR, DCTZ, TEZip, LibPressio), discussing their history, principles, error control, hardware support, features, and impact. By presenting both application needs and compression technologies, the report aims to inspire new research to fill existing gaps.
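The error-control core that distinguishes error-bounded lossy compressors can be sketched with plain uniform quantization; real codecs such as SZ or ZFP add prediction/transform and entropy-coding stages that are omitted here.

```python
import numpy as np

def compress(data, abs_err):
    """Toy error-bounded lossy 'compressor': uniform scalar quantization.

    Each value is mapped to an integer bin of width 2*abs_err, so the
    pointwise reconstruction error is bounded by abs_err. Production
    compressors predict each value first and quantize only the residual,
    then entropy-code the (mostly small) bin indices.
    """
    return np.round(data / (2.0 * abs_err)).astype(np.int64)

def decompress(q, abs_err):
    return q * 2.0 * abs_err

field = np.sin(np.linspace(0.0, 10.0, 1000))
eb = 1e-3                                    # user-specified absolute error bound
recon = decompress(compress(field, eb), eb)
max_err = np.abs(field - recon).max()
```

The guarantee `max_err <= eb` is exactly the kind of pointwise quality constraint the report's applications specify for their quantities of interest.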
2025-03-25 | A comparative study of calibration techniques for finite strain elastoplasticity: Numerically-exact sensitivities for FEMU and VFM | Sanjeev Kumar, D. Thomas Seidl, Brian N. Granzow et.al. | 2503.19782 | | 44 pages, 15 figures | Abstract (click to expand)Accurate identification of material parameters is crucial for predictive modeling in computational mechanics. The two primary approaches in the experimental mechanics community for calibration from full-field digital image correlation data are known as finite element model updating (FEMU) and the virtual fields method (VFM). In VFM, the objective function is a squared mismatch between internal and external virtual work or power. In FEMU, the objective function quantifies the weighted mismatch between model predictions and corresponding experimentally measured quantities of interest. It is minimized by iteratively updating the parameters of an FE model. While FEMU is seen as more flexible, VFM is commonly used instead due to FEMU's considerably greater computational expense. However, comparisons between the two methods usually involve approximations of gradients or sensitivities with finite difference schemes, thereby making direct assessments difficult. Hence, in this study, we rigorously compare VFM and FEMU in the context of numerically-exact sensitivities obtained through local sensitivity analyses and the application of automatic differentiation software. To this end, both methods are tested on a finite strain elastoplasticity model. We conduct a series of test cases to assess both methods' robustness under practical challenges.
2025-03-25 | Decoupled Dynamics Framework with Neural Fields for 3D Spatio-temporal Prediction of Vehicle Collisions | Sanghyuk Kim, Minsik Seo, Namwoo Kang et.al. | 2503.19712 | | 24 pages, 13 figures | Abstract (click to expand)This study proposes a neural framework that predicts 3D vehicle collision dynamics by independently modeling global rigid-body motion and local structural deformation. Unlike approaches directly predicting absolute displacement, this method explicitly separates the vehicle's overall translation and rotation from its structural deformation. Two specialized networks form the core of the framework: a quaternion-based Rigid Net for rigid motion and a coordinate-based Deformation Net for local deformation. By independently handling fundamentally distinct physical phenomena, the proposed architecture achieves accurate predictions without requiring separate supervision for each component. The model, trained on only 10% of available simulation data, significantly outperforms baseline models, including single multi-layer perceptron (MLP) and deep operator networks (DeepONet), with prediction errors reduced by up to 83%. Extensive validation demonstrates strong generalization to collision conditions outside the training range, accurately predicting responses even under severe impacts involving extreme velocities and large impact angles. Furthermore, the framework successfully reconstructs high-resolution deformation details from low-resolution inputs without increased computational effort. Consequently, the proposed approach provides an effective, computationally efficient method for rapid and reliable assessment of vehicle safety across complex collision scenarios, substantially reducing the required simulation data and time while preserving prediction fidelity. |
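The decoupling described above, rigid motion handled separately from local deformation, can be sketched as follows; the quaternion convention and the way the two outputs are recombined are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def quat_rotate(q, pts):
    """Rotate row-vector points by a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return pts @ R.T

def predict_node_positions(ref_pts, q, translation, deformation):
    """Decoupled prediction: local deformation added in the body frame,
    then the rigid motion (rotation + translation) applied on top.
    `q`, `translation`, and `deformation` stand in for the outputs of
    the paper's Rigid Net and Deformation Net."""
    return quat_rotate(q, ref_pts + deformation) + translation

ref = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
q90z = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])  # 90 deg about z
out = predict_node_positions(ref, q90z,
                             translation=np.array([0.0, 0.0, 1.0]),
                             deformation=np.zeros_like(ref))
```

With zero deformation the prediction is a pure rigid transform, which is why the two networks can be trained without separate supervision for each component: the split is built into the output parameterization.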
2025-03-25 | Characteristic boundary conditions for Hybridizable Discontinuous Galerkin methods | Jan Ellmenreich, Matteo Giacomini, Antonio Huerta et.al. | 2503.19684 | | | Abstract (click to expand)In this work we introduce the concept of characteristic boundary conditions (CBCs) within the framework of Hybridizable Discontinuous Galerkin (HDG) methods, including both the Navier-Stokes characteristic boundary conditions (NSCBCs) and a novel approach to generalized characteristic relaxation boundary conditions (GRCBCs). CBCs are based on the characteristic decomposition of the compressible Euler equations and are designed to prevent the reflection of waves at the domain boundaries. We show the effectiveness of the proposed method for weakly compressible flows through a series of numerical experiments by comparing the results with common boundary conditions in the HDG setting and reference solutions available in the literature. In particular, HDG with CBCs shows superior performance in minimizing the reflection of vortices at artificial boundaries, for both inviscid and viscous flows.
2025-03-25 | Estimation of the Acoustic Field in a Uniform Duct with Mean Flow using Neural Networks | D. Veerababu, Prasanta K. Ghosh et.al. | 2503.19412 | | 23 pages | Abstract (click to expand)The study of sound propagation in a uniform duct having a mean flow has many applications, such as in the design of gas turbines, heating, ventilation and air conditioning ducts, automotive intake and exhaust systems, and in the modeling of speech. In this paper, the convective effects of the mean flow on the plane wave acoustic field inside a uniform duct were studied using artificial neural networks. The governing differential equation and the associated boundary conditions form a constrained optimization problem. It is converted to an unconstrained optimization problem and solved by approximating the acoustic field variable to a neural network. The complex-valued acoustic pressure and particle velocity were predicted at different frequencies, and validated against the analytical solution and the finite element models. The effect of the mean flow is studied in terms of the acoustic impedance. A closed-form expression that describes the influence of various factors on the acoustic field is derived. |
2025-03-24 | Multi-Physics Inverse Design of Varifocal Optical Devices using Data-Driven Surrogates and Differential Modeling | Zeqing Jin, Zhaocheng Liu, Nagi Elabbasi et.al. | 2503.18911 | | 15 pages, 4 figures | Abstract (click to expand)Designing a new varifocal architecture in AR glasses poses significant challenges due to the complex interplay of multiple physics disciplines, including innovated piezo-electric material, solid mechanics, electrostatics, and optics. Traditional design methods, which treat each physics separately, are insufficient for this problem as they fail to establish the intricate relationships among design parameters in such a large and sensitive space, leading to suboptimal solutions. To address this challenge, we propose a novel design pipeline, mPhDBBs (multi-Physics Differential Building Blocks), that integrates these diverse physics through a graph neural network-based surrogate model and a differentiable ray tracing model. A hybrid optimization method combining evolutionary and gradient approaches is employed to efficiently determine superior design variables that achieve desired optical objectives, such as focal length and focusing quality. Our results demonstrate the effectiveness of mPhDBBs, achieving high accuracy with minimal training data and computational resources, resulting in a speedup of at least 1000 times compared to non-gradient-based methods. This work offers a promising paradigm shift in product design, enabling rapid and accurate optimization of complex multi-physics systems, and demonstrates its adaptability to other inverse design problems. |
2025-03-24 | Differentiable Simulator for Electrically Reconfigurable Electromagnetic Structures | Johannes Müller, Dennis Philipp, Matthias Günther et.al. | 2503.18479 | | | Abstract (click to expand)This paper introduces a novel CUDA-enabled PyTorch-based framework designed for the gradient-based optimization of reconfigurable electromagnetic structures with electrically tunable parameters. Traditional optimization techniques for these structures often rely on non-gradient-based methods, limiting efficiency and flexibility. Our framework leverages automatic differentiation, facilitating the application of gradient-based optimization methods. This approach is particularly advantageous for embedding within deep learning frameworks, enabling sophisticated optimization strategies. We demonstrate the framework's effectiveness through comprehensive simulations involving resonant structures with tunable parameters. Key contributions include the efficient solution of the inverse problem. The framework's performance is validated using three different resonant structures: a single-loop copper wire (Unit-Cell) as well as an 8x1 and an 8x8 array of resonant unit cells with multiple inductively coupled unit cells (1d and 2d Metasurfaces). Results show precise in-silico control over the magnetic field's component normal to the surface of each resonant structure, achieving desired field strengths with minimal error. The proposed framework is compatible with existing simulation software. This PyTorch-based framework sets the stage for advanced electromagnetic control strategies for resonant structures with applications in, e.g., MRI, providing a robust platform for further exploration and innovation in the design and optimization of resonant electromagnetic structures.
2025-03-24 | DeepFund: Will LLM be Professional at Fund Investment? A Live Arena Perspective | Changlun Li, Yao Shi, Yuyu Luo et.al. | 2503.18313 | | Work in progress | Abstract (click to expand)Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, but their effectiveness in financial decision-making, particularly in fund investment, remains inadequately evaluated. Current benchmarks primarily assess LLMs' understanding of financial documents rather than their ability to manage assets or analyze trading opportunities in dynamic market conditions. A critical limitation in existing evaluation methodologies is the backtesting approach, which suffers from information leakage when LLMs are evaluated on historical data they may have encountered during pretraining. This paper introduces DeepFund, a comprehensive platform for evaluating LLM-based trading strategies in a simulated live environment. Our approach implements a multi-agent framework where LLMs serve as both analysts and managers, creating a realistic simulation of investment decision-making. The platform employs a forward testing methodology that mitigates information leakage by evaluating models on market data released after their training cutoff dates. We provide a web interface that visualizes model performance across different market conditions and investment parameters, enabling detailed comparative analysis. Through DeepFund, we aim to provide a more accurate and fair assessment of LLMs' capabilities in fund investment, offering insights into their potential real-world applications in financial markets.
2025-03-23 | The Power of Small LLMs in Geometry Generation for Physical Simulations | Ossama Shafiq, Bahman Ghiassi, Alessio Alexiadis et.al. | 2503.18178 | | 24 pages, 17 figures | Abstract (click to expand)Engineers widely rely on simulation platforms like COMSOL or ANSYS to model and optimise processes. However, setting up such simulations requires expertise in defining geometry, generating meshes, establishing boundary conditions, and configuring solvers. This research aims to simplify this process by enabling engineers to describe their setup in plain language, allowing a Large Language Model (LLM) to generate the necessary input files for their specific application. This novel approach allows establishing a direct link between natural language and complex engineering tasks. Building on previous work that evaluated various LLMs for generating input files across simple and complex geometries, this study demonstrates that small LLMs - specifically, Phi-3 Mini and Qwen-2.5 1.5B - can be fine-tuned to generate precise engineering geometries in GMSH format. We curated a dataset of 480 instruction-output pairs encompassing simple shapes (squares, rectangles, circles, and half circles) and more complex structures (I-beams, cylindrical pipes, and bent pipes), and fine-tuned the models using Low-Rank Adaptation (LoRA). The fine-tuned models produced high-fidelity outputs, handling routine geometry generation with minimal intervention. While challenges remain with geometries involving combinations of multiple bodies, this study demonstrates that fine-tuned small models can outperform larger models like GPT-4o in specialised tasks, offering a precise and resource-efficient alternative for engineering applications.
2025-03-23 | Strategic Prompt Pricing for AIGC Services: A User-Centric Approach | Xiang Li, Bing Luo, Jianwei Huang et.al. | 2503.18168 | | accepted in WiOpt 2025 | Abstract (click to expand)The rapid growth of AI-generated content (AIGC) services has created an urgent need for effective prompt pricing strategies, yet current approaches overlook users' strategic two-step decision-making process in selecting and utilizing generative AI models. This oversight creates two key technical challenges: quantifying the relationship between user prompt capabilities and generation outcomes, and optimizing platform payoff while accounting for heterogeneous user behaviors. We address these challenges by introducing prompt ambiguity, a theoretical framework that captures users' varying abilities in prompt engineering, and developing an Optimal Prompt Pricing (OPP) algorithm. Our analysis reveals a counterintuitive insight: users with higher prompt ambiguity (i.e., lower capability) exhibit non-monotonic prompt usage patterns, first increasing then decreasing with ambiguity levels, reflecting complex changes in marginal utility. Experimental evaluation using a character-level GPT-like model demonstrates that our OPP algorithm achieves up to 31.72% improvement in platform payoff compared to existing pricing mechanisms, validating the importance of user-centric prompt pricing in AIGC services. |
2025-03-23 | (G)I-DLE: Generative Inference via Distribution-preserving Logit Exclusion with KL Divergence Minimization for Constrained Decoding | Hanwool Lee et.al. | 2503.18050 | | preprint | Abstract (click to expand)We propose (G)I-DLE, a new approach to constrained decoding that leverages KL divergence minimization to preserve the intrinsic conditional probability distribution of autoregressive language models while excluding undesirable tokens. Unlike conventional methods that naively set banned tokens' logits to \(-\infty\) , which can distort the conversion from raw logits to posterior probabilities and increase output variance, (G)I-DLE re-normalizes the allowed token probabilities to minimize such distortion. We validate our method on the K2-Eval dataset, specifically designed to assess Korean language fluency, logical reasoning, and cultural appropriateness. Experimental results on Qwen2.5 models (ranging from 1.5B to 14B) demonstrate that (G)I-DLE not only boosts mean evaluation scores but also substantially reduces the variance of output quality.
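The re-normalization step can be sketched directly: zero out the banned tokens' probability mass and rescale the remainder, which is the KL-minimizing projection onto distributions supported on the allowed set. A minimal sketch of the idea, not the authors' implementation:

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def constrained_decode_probs(logits, banned):
    """Re-normalize allowed-token probabilities instead of hard-masking
    logits to -inf before the softmax. Among all distributions with zero
    mass on `banned`, the rescaled distribution minimizes KL divergence
    to the model's original conditional distribution."""
    p = softmax(np.asarray(logits, float))
    p[list(banned)] = 0.0
    return p / p.sum()

# Four-token toy vocabulary; token 1 is banned.
probs = constrained_decode_probs([2.0, 1.0, 0.5, -1.0], banned={1})
```

Note that the relative ordering of the allowed tokens is preserved; only their masses are rescaled to sum to one.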
2025-03-23 | Financial Wind Tunnel: A Retrieval-Augmented Market Simulator | Bokai Cao, Xueyuan Lin, Yiyan Qi et.al. | 2503.17909 | | | Abstract (click to expand)A market simulator aims to create high-quality synthetic financial data that mimic real-world market dynamics, which is crucial for model development and robust assessment. Despite continuous advancements in simulation methodologies, market fluctuations vary in scale and source, and existing frameworks often excel only at specific tasks. To address this challenge, we propose Financial Wind Tunnel (FWT), a retrieval-augmented market simulator designed to generate controllable, reasonable, and adaptable market dynamics for model testing. FWT offers a more comprehensive and systematic generative capability across different data frequencies. By leveraging a retrieval method to discover cross-sectional information as the augmented condition, our diffusion-based simulator seamlessly integrates both macro- and micro-level market patterns. Furthermore, our framework allows the simulation to be controlled with wide applicability, including causal generation through "what-if" prompts or unprecedented cross-market trend synthesis. Additionally, we develop an automated optimizer for downstream quantitative models, using stress testing of simulated scenarios via FWT to enhance returns while controlling risks. Experimental results demonstrate that our approach enables generalizable and reliable market simulation, significantly improving the performance and adaptability of downstream models, particularly in highly complex and volatile market conditions. Our code and data sample is available at https://anonymous.4open.science/r/fwt_-E852
2025-03-22 | Accelerating and enhancing thermodynamic simulations of electrochemical interfaces | Xiaochen Du, Mengren Liu, Jiayu Peng et.al. | 2503.17870 | link | 19 pages main text, 5 figures, supplementary information (SI) in ancillary files | Abstract (click to expand)Electrochemical interfaces are crucial in catalysis, energy storage, and corrosion, where their stability and reactivity depend on complex interactions between the electrode, adsorbates, and electrolyte. Predicting stable surface structures remains challenging, as traditional surface Pourbaix diagrams tend to either rely on expert knowledge or costly \(\textit{ab initio}\) sampling, and neglect thermodynamic equilibration with the environment. Machine learning (ML) potentials can accelerate static modeling but often overlook dynamic surface transformations. Here, we extend the Virtual Surface Site Relaxation-Monte Carlo (VSSR-MC) method to autonomously sample surface reconstructions modeled under aqueous electrochemical conditions. Through fine-tuning foundational ML force fields, we accurately and efficiently predict surface energetics, recovering known Pt(111) phases and revealing new LaMnO\(_\mathrm{3}\) (001) surface reconstructions. By explicitly accounting for bulk-electrolyte equilibria, our framework enhances electrochemical stability predictions, offering a scalable approach to understanding and designing materials for electrochemical applications. |
2025-03-22 | Generalized Scattering Matrix Synthesis for Hybrid Systems with Multiple Scatterers and Antennas Using Independent Structure Simulations | Chenbo Shi, Shichen Liang, Jin Pan et.al. | 2503.17616 | | | Abstract (click to expand)This paper presents a unified formulation for calculating the generalized scattering matrix (GS-matrix) of hybrid systems involving multiple scatterers and antennas. The GS-matrix of the entire system is synthesized through the scattering matrices and GS-matrices of each independent component, using the addition theorem of vector spherical wavefunctions and fully matrix-based operations. Since our formulation is applicable to general antenna-scatterer hybrid systems, previous formulas for multiple scattering and antenna arrays become special cases of our approach. This also establishes our formulation as a universal domain decomposition method for analyzing the electromagnetic performance of hybrid systems. We provide numerous numerical examples to comprehensively demonstrate the capabilities and compatibility of the proposed formulation, including its potential application in studying the effects of structural rotation. |
2025-03-21 | Adjoint Sensitivities for the Optimization of Nonlinear Structural Dynamics via Spectral Submanifolds | Matteo Pozzi, Jacopo Marconi, Shobhit Jain et.al. | 2503.17431 | | | Abstract (click to expand)This work presents an optimization framework for tailoring the nonlinear dynamic response of lightly damped mechanical systems using Spectral Submanifold (SSM) reduction. We derive the SSM-based backbone curve and its sensitivity with respect to parameters up to arbitrary polynomial orders, enabling efficient and accurate optimization of the nonlinear frequency-amplitude relation. We use the adjoint method to derive sensitivity expressions, which drastically reduces the computational cost compared to direct differentiation as the number of parameters increases. An important feature of this framework is the automatic adjustment of the expansion order of SSM-based ROMs using user-defined error tolerances during the optimization process. We demonstrate the effectiveness of the approach in optimizing the nonlinear response over several numerical examples of mechanical systems. Hence, the proposed framework extends the applicability of SSM-based optimization methods to practical engineering problems, offering a robust tool for the design and optimization of nonlinear mechanical structures. |
2025-03-20 | Accelerated Medicines Development using a Digital Formulator and a Self-Driving Tableting DataFactory | Faisal Abbas, Mohammad Salehian, Peter Hou et.al. | 2503.17411 | link | | Abstract (click to expand)Pharmaceutical tablet formulation and process development, traditionally a complex and multi-dimensional decision-making process, necessitates extensive experimentation and resources, often resulting in suboptimal solutions. This study presents an integrated platform for tablet formulation and manufacturing, built around a Digital Formulator and a Self-Driving Tableting DataFactory. By combining predictive modelling, optimisation algorithms, and automation, this system offers a material-to-product approach to predict and optimise critical quality attributes for different formulations, linking raw material attributes to key blend and tablet properties, such as flowability, porosity, and tensile strength. The platform leverages the Digital Formulator, an in-silico optimisation framework that employs a hybrid system of models - melding data-driven and mechanistic models - to identify optimal formulation settings for manufacturability. Optimised formulations then proceed through the self-driving Tableting DataFactory, which includes automated powder dosing, tablet compression and performance testing, followed by iterative refinement of process parameters through Bayesian optimisation methods. This approach shortens the timeline from material characterisation to the development of an in-specification tablet to under 6 hours, utilising less than 5 grams of API, and enables the manufacture of small batches of up to 1,440 tablets with augmented- and mixed-reality-enabled real-time quality control within 24 hours. Validation across multiple APIs and drug loadings underscores the platform's capacity to reliably meet target quality attributes, positioning it as a transformative solution for accelerated and resource-efficient pharmaceutical development.
2025-03-21 | ML-Based Bidding Price Prediction for Pay-As-Bid Ancillary Services Markets: A Use Case in the German Control Reserve Market | Vincent Bezold, Lukas Baur, Alexander Sauer et.al. | 2503.17214 | | | Abstract (click to expand)The increasing integration of renewable energy sources has led to greater volatility and unpredictability in electricity generation, posing challenges to grid stability. Ancillary service markets, such as the German control reserve market, allow industrial consumers and producers to offer flexibility in their power consumption or generation, contributing to grid stability while earning additional income. However, many participants use simple bidding strategies that may not maximize their revenues. This paper presents a methodology for forecasting bidding prices in pay-as-bid ancillary service markets, focusing on the German control reserve market. We evaluate various machine learning models, including Support Vector Regression, Decision Trees, and k-Nearest Neighbors, and compare their performance against benchmark models. To address the asymmetry in the revenue function of pay-as-bid markets, we introduce an offset adjustment technique that enhances the practical applicability of the forecasting models. Our analysis demonstrates that the proposed approach improves potential revenues by 27.43 % to 37.31 % compared to baseline models. When analyzing the relationship between the model forecasting errors and the revenue, a negative correlation is measured for three markets; according to the results, a reduction of 1 EUR/MW in the model's price forecasting error (MAE) statistically leads to a yearly revenue increase of between 483 EUR/MW and 3,631 EUR/MW. The proposed methodology enables industrial participants to optimize their bidding strategies, leading to increased earnings and contributing to the efficiency and stability of the electrical grid.
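The asymmetry that motivates the offset adjustment is easy to see in a toy pay-as-bid model: overbidding forfeits the whole revenue, while underbidding only shaves the price, so shifting an unbiased price forecast downward can raise expected revenue. All numbers below are illustrative, not taken from the paper.

```python
import numpy as np

def payasbid_revenue(bid, clearing_price):
    """Toy pay-as-bid payoff: the bid is paid if accepted (here, if it does
    not exceed the marginal accepted price), and nothing otherwise."""
    return bid if bid <= clearing_price else 0.0

def expected_revenue(bid, clearing_samples):
    return float(np.mean([payasbid_revenue(bid, c) for c in clearing_samples]))

rng = np.random.default_rng(1)
clearing = rng.normal(100.0, 10.0, 10_000)    # hypothetical clearing prices
forecast = 100.0                               # unbiased point forecast
rev_plain = expected_revenue(forecast, clearing)
rev_offset = expected_revenue(forecast - 10.0, clearing)  # offset-adjusted bid
```

Bidding at the unbiased forecast is accepted only about half the time, while the offset-adjusted bid trades a small price reduction for a much higher acceptance probability.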
2025-03-21 | A Comprehensive Framework for Predictive Computational Modeling of Growth and Remodeling in Tissue-Engineered Cardiovascular Implants | Mahmoud Sesa, Hagen Holthusen, Christian Böhm et.al. | 2503.17151 | | Preprint submitted to Springer Nature | Abstract (click to expand)Developing clinically viable tissue-engineered cardiovascular implants remains a formidable challenge. Achieving reliable and durable outcomes requires a deeper understanding of the fundamental mechanisms driving tissue evolution during in vitro maturation. Although considerable progress has been made in modeling soft tissue growth and remodeling, studies focused on the early stages of tissue engineering remain limited. Here, we present a general, thermodynamically consistent model to predict tissue evolution and mechanical response throughout maturation. The formulation utilizes a stress-driven homeostatic surface to capture volumetric growth, coupled with an energy-based approach to describe collagen densification via the strain energy of the fibers. We further employ a co-rotated intermediate configuration to ensure the model's consistency and generality. The framework is demonstrated with two numerical examples: a uniaxially constrained tissue strip validated against experimental data, and a biaxially constrained specimen subjected to a perturbation load. These results highlight the potential of the proposed model to advance the design and optimization of tissue-engineered implants with clinically relevant performance. |
2025-03-26 | Assessing Consistency and Reproducibility in the Outputs of Large Language Models: Evidence Across Diverse Finance and Accounting Tasks | Julian Junyan Wang, Victor Xiaoqi Wang et.al. | 2503.16974 | | 97 pages, 20 tables, 15 figures | Abstract (click to expand)This study provides the first comprehensive assessment of consistency and reproducibility in Large Language Model (LLM) outputs in finance and accounting research. We evaluate how consistently LLMs produce outputs given identical inputs through extensive experimentation with 50 independent runs across five common tasks: classification, sentiment analysis, summarization, text generation, and prediction. Using three OpenAI models (GPT-3.5-turbo, GPT-4o-mini, and GPT-4o), we generate over 3.4 million outputs from diverse financial source texts and data, covering MD&As, FOMC statements, finance news articles, earnings call transcripts, and financial statements. Our findings reveal substantial but task-dependent consistency, with binary classification and sentiment analysis achieving near-perfect reproducibility, while complex tasks show greater variability. More advanced models do not necessarily demonstrate better consistency and reproducibility, with task-specific patterns emerging. LLMs significantly outperform expert human annotators in consistency and maintain high agreement even where human experts significantly disagree. We further find that simple aggregation strategies across 3-5 runs dramatically improve consistency. We also find that aggregation may come with an additional benefit of improved accuracy for sentiment analysis when using newer models. Simulation analysis reveals that despite measurable inconsistency in LLM outputs, downstream statistical inferences remain remarkably robust. These findings address concerns about what we term "G-hacking," the selective reporting of favorable outcomes from multiple Generative AI runs, by demonstrating that such risks are relatively low for finance and accounting tasks. |
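The aggregation finding can be sketched with a toy experiment: majority-voting the labels from several repeated runs is markedly more consistent than any single run. The `classify` stub below is a hypothetical stand-in for a stochastic LLM call, not an actual model; it simply returns the correct label 80% of the time:

```python
import random
from collections import Counter

# Toy illustration of run aggregation: with per-run accuracy p = 0.8, a
# 5-run majority vote agrees with the true label with probability ~0.94.
# `classify` is a hypothetical stand-in for a real (stochastic) LLM call.

def classify(text, rng):
    return "positive" if rng.random() < 0.8 else "negative"

def aggregate(text, n_runs, rng):
    """Majority vote over n_runs independent runs."""
    votes = Counter(classify(text, rng) for _ in range(n_runs))
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    rng = random.Random(42)
    trials = 1000
    single = sum(classify("doc", rng) == "positive" for _ in range(trials)) / trials
    voted = sum(aggregate("doc", 5, rng) == "positive" for _ in range(trials)) / trials
    print(f"single-run agreement: {single:.2f}, 5-run majority vote: {voted:.2f}")
```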
2025-03-19 | Reliable Radiologic Skeletal Muscle Area Assessment -- A Biomarker for Cancer Cachexia Diagnosis | Sabeen Ahmed, Nathan Parker, Margaret Park et.al. | 2503.16556 | | 47 pages, 19 figures, 9 Tables | Abstract (click to expand)Cancer cachexia is a common metabolic disorder characterized by severe muscle atrophy, which is associated with poor prognosis and quality of life. Monitoring skeletal muscle area (SMA) longitudinally through computed tomography (CT) scans, an imaging modality routinely acquired in cancer care, is an effective way to identify and track this condition. However, existing tools often lack full automation and exhibit inconsistent accuracy, limiting their potential for integration into clinical workflows. To address these challenges, we developed SMAART-AI (Skeletal Muscle Assessment-Automated and Reliable Tool-based on AI), an end-to-end automated pipeline powered by deep learning models (nnU-Net 2D) trained on mid-third lumbar level CT images with 5-fold cross-validation, ensuring generalizability and robustness. SMAART-AI incorporates an uncertainty-based mechanism to flag high-error SMA predictions for expert review, enhancing reliability. We combined the SMA, skeletal muscle index, BMI, and clinical data to train a multi-layer perceptron (MLP) model designed to predict cachexia at the time of cancer diagnosis. Tested on the gastroesophageal cancer dataset, SMAART-AI achieved a Dice score of 97.80% +/- 0.93%, with SMA estimated across all four datasets in this study at a median absolute error of 2.48% compared to manual annotations with SliceOmatic. Uncertainty metrics (variance, entropy, and coefficient of variation) strongly correlated with SMA prediction errors (0.83, 0.76, and 0.73, respectively). The MLP model predicts cachexia with 79% precision, providing clinicians with a reliable tool for early diagnosis and intervention. By combining automation, accuracy, and uncertainty awareness, SMAART-AI bridges the gap between research and clinical application, offering a transformative approach to managing cancer cachexia. |
2025-03-20 | Deep Feynman-Kac Methods for High-dimensional Semilinear Parabolic Equations: Revisit | Xiaotao Zheng, Xingye Yue, Jiyang Shi et.al. | 2503.16407 | | | Abstract (click to expand)The Deep Feynman-Kac method was first introduced to solve parabolic partial differential equations (PDEs) by Beck et al. (SISC, V.43, 2021), who named it the Deep Splitting method since the neural networks are trained step by step in the time direction. In this paper, we propose a new training approach with two distinguishing features. Firstly, the neural networks are trained globally at all time steps, instead of step by step. Secondly, the training data are generated in a new way, such that the method is consistent with a direct Monte Carlo scheme when dealing with a linear parabolic PDE. Numerical examples show that our method achieves significant improvements in both efficiency and accuracy. |
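The direct Monte Carlo scheme that the new method reduces to in the linear case follows from the Feynman-Kac representation: for \(u_t + u_{xx} = 0\) with terminal condition \(u(T,x) = g(x)\), the solution is \(u(t,x) = \mathbb{E}[g(x + \sqrt{2}\,W_{T-t})]\). A minimal sketch, using an illustrative quadratic terminal condition whose exact solution is \(x^2 + 2(T-t)\):

```python
import math
import random

# Monte Carlo sketch of the Feynman-Kac representation for a linear
# parabolic PDE: u_t + u_xx = 0, u(T, x) = g(x), so that
# u(t, x) = E[g(x + sqrt(2) * W_{T-t})] with W Brownian motion.
# The quadratic g below is an illustrative choice, exact u = x^2 + 2(T-t).

def fk_estimate(x, t, T, g, n_paths=20000, seed=0):
    """Average g over the endpoints of Brownian paths started at x."""
    rng = random.Random(seed)
    tau = T - t
    total = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(tau))        # W_{T-t} ~ N(0, T-t)
        total += g(x + math.sqrt(2.0) * w)
    return total / n_paths

if __name__ == "__main__":
    est = fk_estimate(1.0, 0.0, 0.5, lambda y: y * y)
    print(f"MC estimate: {est:.3f} (exact: 2.000)")
```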
2025-03-20 | Filters reveal emergent structure in computational morphogenesis | Hazhir Aliahmadi, Aidan Sheedy, Greg van Anders et.al. | 2503.16211 | | 17 pages, 9 figures | Abstract (click to expand)Revolutionary advances in both manufacturing and computational morphogenesis raise critical questions about design sensitivity. Sensitivity questions are especially critical in contexts, such as topology optimization, that yield structures with emergent morphology. However, analyzing emergent structures via conventional, perturbative techniques can mask larger-scale vulnerabilities that could manifest in essential components. Risks that fail to appear in perturbative sensitivity analyses will only continue to proliferate as topology optimization-driven manufacturing penetrates more deeply into engineering design and consumer products. Here, we introduce Laplace-transform based computational filters that supplement computational morphogenesis with a set of nonperturbative sensitivity analyses. We demonstrate how this approach identifies important elements of a structure even in the absence of knowledge of the ultimate, optimal structure itself. We leverage techniques from molecular dynamics and implement these methods in open-source codes, demonstrating their application to compliance minimization problems in both 2D and 3D. Our implementation extends straightforwardly to topology optimization for other problems and benefits from the strong scaling properties observed in conventional molecular simulation. |
2025-03-20 | Sustainable Open-Data Management for Field Research: A Cloud-Based Approach in the Underlandscape Project | Augusto Ciuffoletti, Letizia Chiti et.al. | 2503.16042 | | 8 pages, 4 figures | Abstract (click to expand)Field-based research projects require a robust suite of ICT services to support data acquisition, documentation, storage, and dissemination. A key challenge lies in ensuring the sustainability of data management - not only during the project's funded period but also beyond its conclusion, when maintenance and support often depend on voluntary efforts. In the Underlandscape project, we tackled this challenge by extensively leveraging public cloud services while minimizing reliance on complex custom infrastructure. This paper provides a comprehensive overview of the project's final infrastructure, detailing the adopted data formats, the cloud-based solutions enabling data management, and the custom applications developed for system integration. |
2025-03-20 | Practical Portfolio Optimization with Metaheuristics: Pre-assignment Constraint and Margin Trading | Hang Kin Poon et.al. | 2503.15965 | | | Abstract (click to expand)Portfolio optimization is a critical area in finance, aiming to maximize returns while minimizing risk. Metaheuristic algorithms have been shown to solve complex optimization problems efficiently, with Genetic Algorithms and Particle Swarm Optimization being among the most popular methods. This paper introduces an innovative approach to portfolio optimization that incorporates pre-assignment constraints to limit the search space according to investor preferences and obtain better results. Additionally, it takes margin trading strategies into account and uses a rarely applied performance ratio to evaluate portfolio efficiency. Through an illustrative example, this paper demonstrates that the metaheuristic-based methodology yields superior risk-adjusted returns compared to traditional benchmarks. The results highlight the potential of metaheuristics, with the help of asset filtering, in enhancing portfolio performance in terms of risk-adjusted return. |
2025-03-20 | WeirdFlows: Anomaly Detection in Financial Transaction Flows | Arthur Capozzi, Salvatore Vilella, Dario Moncalvo et.al. | 2503.15896 | | 12 pages, 6 figures, ITADATA2024 | Abstract (click to expand)In recent years, the digitization and automation of anti-financial crime (AFC) investigative processes have faced significant challenges, particularly the need for interpretability of AI model results and the lack of labeled data for training. Network analysis has emerged as a valuable approach in this context. In this paper, we present WeirdFlows, a top-down search pipeline for detecting potentially fraudulent transactions and non-compliant agents. In a transaction network, fraud attempts are often based on complex transaction patterns that change over time to avoid detection. The WeirdFlows pipeline requires neither an a priori set of patterns nor a training set. In addition, by providing elements to explain the anomalies found, it facilitates and supports the work of an AFC analyst. We evaluate WeirdFlows on a dataset from Intesa Sanpaolo (ISP) bank, comprising 80 million cross-country transactions over 15 months, benchmarking our implementation of the algorithm. The results, corroborated by ISP AFC experts, highlight its effectiveness in identifying suspicious transactions and actors, particularly in the context of the economic sanctions imposed in the EU after February 2022. This demonstrates \textit{WeirdFlows}' capability to handle large datasets, detect complex transaction patterns, and provide the necessary interpretability for formal AFC investigations. |
2025-03-19 | Impact of pH and chloride content on the biodegradation of magnesium alloys for medical implants: An in vitro and phase-field study | S. Kovacevic, W. Ali, T. K. Mandal et.al. | 2503.15700 | | | Abstract (click to expand)The individual contributions of pH and chloride concentration to the corrosion kinetics of bioabsorbable magnesium (Mg) alloys remain unresolved despite their significant roles as driving factors in Mg corrosion. This study demonstrates and quantifies hitherto unknown separate effects of pH and chloride content on the corrosion of Mg alloys pertinent to biomedical implant applications. The experimental setup designed for this purpose enables the quantification of the dependence of corrosion on pH and chloride concentration. The in vitro tests conclusively demonstrate that variations in chloride concentration, relevant to biomedical applications, have a negligible effect on corrosion kinetics. The findings identify pH as a critical factor in the corrosion of bioabsorbable Mg alloys. A variationally consistent phase-field model is developed for assessing the degradation of Mg alloys in biological fluids. The model accurately predicts the corrosion performance of Mg alloys observed during the experiments, including their dependence on pH and chloride concentration. The capability of the framework to account for mechano-chemical effects during corrosion is demonstrated in practical orthopaedic applications considering bioabsorbable Mg alloy implants for bone fracture fixation and porous scaffolds for bone tissue engineering. The strategy has the potential to assess the in vitro and in vivo service life of bioabsorbable Mg-based biomedical devices. |
2025-03-19 | Shap-MeD | Nicolás Laverde, Melissa Robles, Johan Rodríguez et.al. | 2503.15562 | | | Abstract (click to expand)We present Shap-MeD, a text-to-3D object generative model specialized in the biomedical domain. The objective of this study is to develop an assistant that facilitates the 3D modeling of medical objects, thereby reducing development time. 3D modeling in medicine has various applications, including surgical procedure simulation and planning, the design of personalized prosthetic implants, medical education, the creation of anatomical models, and the development of research prototypes. To achieve this, we leverage Shap-e, an open-source text-to-3D generative model developed by OpenAI, and fine-tune it using a dataset of biomedical objects. Our model achieved a mean squared error (MSE) of 0.089 in latent generation on the evaluation set, compared to Shap-e's MSE of 0.147. Additionally, we conducted a qualitative evaluation, comparing our model with others in the generation of biomedical objects. Our results indicate that Shap-MeD demonstrates higher structural accuracy in biomedical object generation. |
2025-03-19 | Design for Sensing and Digitalisation (DSD): A Modern Approach to Engineering Design | Daniel N. Wilke et.al. | 2503.14851 | | 4 pages, conference, SACAM 2025 | Abstract (click to expand)This paper introduces Design for Sensing and Digitalisation (DSD), a new engineering design paradigm that integrates sensor technology for digitisation and digitalisation from the earliest stages of the design process. Unlike traditional methodologies that treat sensing as an afterthought, DSD emphasises sensor integration, signal path optimisation, and real-time data utilisation as core design principles. The paper outlines DSD's key principles, discusses its role in enabling digital twin technology, and argues for its importance in modern engineering education. By adopting DSD, engineers can create more intelligent and adaptable systems that leverage real-time data for continuous design iteration, operational optimisation and data-driven predictive maintenance. |
2025-03-18 | Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant Grain Growth Modeling via Fourier Neural Operator | Iman Peivaste, Ahmed Makradi, Salim Belouettar et.al. | 2503.14568 | link | | Abstract (click to expand)Microstructural evolution, particularly grain growth, plays a critical role in shaping the physical, optical, and electronic properties of materials. Traditional phase-field modeling accurately simulates these phenomena but is computationally intensive, especially for large systems and fine spatial resolutions. While machine learning approaches have been employed to accelerate simulations, they often struggle with resolution dependence and generalization across different grain scales. This study introduces a novel approach utilizing Fourier Neural Operator (FNO) to achieve resolution-invariant modeling of microstructure evolution in multi-grain systems. FNO operates in the Fourier space and can inherently handle varying resolutions by learning mappings between function spaces. By integrating FNO with the phase field method, we developed a surrogate model that significantly reduces computational costs while maintaining high accuracy across different spatial scales. We generated a comprehensive dataset from phase-field simulations using the Fan Chen model, capturing grain evolution over time. Data preparation involved creating input-output pairs with a time shift, allowing the model to predict future microstructures based on current and past states. The FNO-based neural network was trained using sequences of microstructures and demonstrated remarkable accuracy in predicting long-term evolution, even for unseen configurations and higher-resolution grids not encountered during training. |
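The resolution invariance comes from the FNO's central building block, a spectral convolution: learned weights multiply a fixed set of low Fourier modes, so the same weights apply unchanged on any grid. A minimal one-dimensional sketch with placeholder (untrained) weights:

```python
import numpy as np

# Sketch of an FNO-style spectral convolution: transform to Fourier space,
# multiply the lowest modes by learnable complex weights, discard the rest,
# transform back. Because the weights live in mode space, the SAME weights
# work at any grid resolution. Weights here are placeholders, not trained.

def spectral_conv1d(u, weights):
    """Multiply the lowest len(weights) Fourier modes of u by weights."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    n_modes = min(len(weights), len(u_hat))
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=u.size)

if __name__ == "__main__":
    weights = np.ones(4, dtype=complex)       # identity on the low modes
    for n in (64, 128):                       # same weights, two resolutions
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        u = np.sin(2 * np.pi * x)             # energy only in mode 1
        v = spectral_conv1d(u, weights)
        print(n, np.allclose(v, u))           # low-pass leaves a low mode intact
```

A sine with all its energy in mode 1 passes through untouched at both 64 and 128 points, which is the resolution-invariance property in miniature.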
2025-03-17 | AI-Driven Rapid Identification of Bacterial and Fungal Pathogens in Blood Smears of Septic Patients | Agnieszka Sroka-Oleksiak, Adam Pardyl, Dawid Rymarczyk et.al. | 2503.14542 | | | Abstract (click to expand)Sepsis is a life-threatening condition which requires rapid diagnosis and treatment. Traditional microbiological methods are time-consuming and expensive. In response to these challenges, deep learning algorithms were developed to identify 14 bacteria species and 3 yeast-like fungi from microscopic images of Gram-stained smears of positive blood samples from sepsis patients. A total of 16,637 Gram-stained microscopic images were used in the study. The analysis used the Cellpose 3 model for segmentation and Attention-based Deep Multiple Instance Learning for classification. Our model achieved an accuracy of 77.15% for bacteria and 71.39% for fungi, with ROC AUC of 0.97 and 0.88, respectively. The highest values, reaching up to 96.2%, were obtained for Cutibacterium acnes, Enterococcus faecium, Stenotrophomonas maltophilia and Nakaseomyces glabratus. Classification difficulties were observed in closely related species, such as Staphylococcus hominis and Staphylococcus haemolyticus, due to morphological similarity, and within Candida albicans due to high morphotic diversity. The study confirms the potential of our model for microbial classification, but it also indicates the need for further optimisation and expansion of the training data set. In the future, this technology could support microbial diagnosis, reducing diagnostic time and improving the effectiveness of sepsis treatment due to its simplicity and accessibility. Part of the results presented in this publication was covered by a patent application at the European Patent Office EP24461637.1 "A computer implemented method for identifying a microorganism in a blood and a data processing system therefor". |
2025-03-18 | Tensor-decomposition-based A Priori Surrogate (TAPS) modeling for ultra large-scale simulations | Jiachen Guo, Gino Domel, Chanwook Park et.al. | 2503.13933 | | | Abstract (click to expand)A data-free, predictive scientific AI model, Tensor-decomposition-based A Priori Surrogate (TAPS), is proposed for tackling ultra large-scale engineering simulations with significant speedup, memory savings, and storage gain. TAPS can effectively obtain surrogate models for high-dimensional parametric problems with equivalent zetta-scale ( \(10^{21}\)) degrees of freedom (DoFs). TAPS achieves this by directly obtaining reduced-order models through solving governing equations with multiple independent variables such as spatial coordinates, parameters, and time. The paper first introduces an AI-enhanced finite element-type interpolation function called convolution hierarchical deep-learning neural network (C-HiDeNN) with tensor decomposition (TD). Subsequently, the generalized space-parameter-time Galerkin weak form and the corresponding matrix form are derived. Through the choice of TAPS hyperparameters, an arbitrary convergence rate can be achieved. To show the capabilities of this framework, TAPS is then used to simulate a large-scale additive manufacturing process as an example and achieves around 1,370x speedup, 14.8x memory savings, and 955x storage gain compared to the finite difference method with \(3.46\) billion spatial degrees of freedom (DoFs). As a result, the TAPS framework opens a new avenue for many challenging ultra large-scale engineering problems, such as additive manufacturing and integrated circuit design, among others. |
2025-03-17 | Quantum Dynamics Simulation of the Advection-Diffusion Equation | Hirad Alipanah, Feng Zhang, Yongxin Yao et.al. | 2503.13729 | | | Abstract (click to expand)The advection-diffusion equation is simulated on a superconducting quantum computer via several quantum algorithms. Three formulations are considered: (1) Trotterization, (2) variational quantum time evolution (VarQTE), and (3) adaptive variational quantum dynamics simulation (AVQDS). These schemes were originally developed for the Hamiltonian simulation of many-body quantum systems. The finite-difference discretized operator of the transport equation is formulated as a Hamiltonian and solved without the need for ancillary qubits. Computations are conducted on a quantum simulator (IBM Qiskit Aer) and actual quantum hardware (IBM Fez). The former emulates the latter without the noise. The predicted results are compared with direct numerical simulation (DNS) data with infidelities of the order of \(10^{-5}\) . In the quantum simulator, Trotterization is observed to have the lowest infidelity and is suitable for fault-tolerant computation. The AVQDS algorithm requires the lowest gate count and the lowest circuit depth. The VarQTE algorithm is the next best in terms of gate counts, but the number of its optimization variables is directly proportional to the number of qubits. Due to current hardware limitations, Trotterization cannot be implemented, as it has an overwhelmingly large number of operations. Meanwhile, AVQDS and VarQTE can be executed, but suffer from large errors due to significant hardware noise. These algorithms present a new paradigm for computational transport phenomena on quantum computers. |
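Trotterization, the first of the three formulations, approximates evolution under a sum of non-commuting terms by alternating short evolutions under each term. A minimal single-qubit sketch with an illustrative Hamiltonian \(H = X + Z\), worked with dense matrices rather than Qiskit circuits:

```python
import numpy as np

# Trotterization sketch: exp(-i(X+Z)t) ~ [exp(-iXt/n) exp(-iZt/n)]^n.
# The single-qubit Hamiltonian H = X + Z is an illustrative choice; the
# paper instead Trotterizes a finite-difference transport operator on
# quantum hardware. Error shrinks roughly like 1/n for this first-order
# splitting.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P, theta):
    """exp(-i*theta*P) for a Pauli matrix P (uses P @ P = I)."""
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

def exact_evolution(t):
    """exp(-i(X+Z)t) via eigendecomposition of the Hermitian H."""
    vals, vecs = np.linalg.eigh(X + Z)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

def trotter_evolution(t, n_steps):
    step = pauli_exp(X, t / n_steps) @ pauli_exp(Z, t / n_steps)
    U = I2.copy()
    for _ in range(n_steps):
        U = step @ U
    return U

if __name__ == "__main__":
    for n in (1, 10, 100):
        err = np.linalg.norm(trotter_evolution(1.0, n) - exact_evolution(1.0))
        print(n, err)
```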
2025-03-17 | Competitive algorithms for calculating the ground state properties of Bose-Fermi mixtures | Tomasz Świsłocki, Krzysztof Gawryluk, Mirosław Brewczyk et.al. | 2503.13717 | | | Abstract (click to expand)In this work we define, analyze, and compare different numerical schemes that can be used to study the ground state properties of Bose-Fermi systems, such as mixtures of different atomic species under external forces or self-bound quantum droplets. The bosonic atoms are assumed to be condensed and are described by the generalized Gross-Pitaevskii equation. The fermionic atoms, on the other hand, are treated individually, and each atom is associated with a wave function whose evolution follows the Hartree-Fock equation. We solve such a formulated set of equations using a variety of methods, including those based on adiabatic switching of interactions and the imaginary time propagation technique combined with the Gram-Schmidt orthonormalization or the diagonalization of the Hamiltonian matrix. We show how different algorithms compete at the numerical level by studying the mixture in the range of parameters covering the formation of self-bound quantum Bose-Fermi droplets. |
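The imaginary time propagation technique mentioned above can be sketched on the simplest possible case, a single particle in a 1D harmonic trap (\(\hbar = m = \omega = 1\)), where the exact ground-state energy is 0.5. The split-step grid, step size, and iteration count are illustrative choices, not the paper's settings:

```python
import numpy as np

# Imaginary-time propagation sketch (split-step Fourier): evolving in
# imaginary time exponentially damps excited states, and renormalizing
# each step leaves the ground state. A single particle in V = x^2/2 stands
# in for the coupled Gross-Pitaevskii / Hartree-Fock solves in the paper.

def ground_state_energy(n=256, L=20.0, dtau=0.01, n_steps=2000):
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    V = 0.5 * x**2
    psi = np.exp(-((x - 1.0) ** 2)).astype(complex)   # arbitrary initial guess
    half_kin = np.exp(-0.25 * dtau * k**2)            # exp(-dtau/2 * k^2/2)
    for _ in range(n_steps):
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        psi *= np.exp(-dtau * V)
        psi = np.fft.ifft(half_kin * np.fft.fft(psi))
        psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # renormalize each step
    psi_hat = np.fft.fft(psi)
    kinetic = np.sum(0.5 * k**2 * np.abs(psi_hat) ** 2) / np.sum(np.abs(psi_hat) ** 2)
    potential = np.sum(V * np.abs(psi) ** 2) * dx
    return float((kinetic + potential).real)

if __name__ == "__main__":
    print(f"ground-state energy: {ground_state_energy():.4f} (exact: 0.5)")
```

For a multi-component Fermi system, as in the paper, each fermionic orbital would be propagated this way with a Gram-Schmidt orthonormalization replacing the simple renormalization.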
2025-03-17 | PERC: a suite of software tools for the curation of cryoEM data with application to simulation, modelling and machine learning | Beatriz Costa-Gomes, Joel Greer, Nikolai Juraschko et.al. | 2503.13329 | | 22 pages, 4 figures | Abstract (click to expand)Ease of access to data, tools and models expedites scientific research. In structural biology there are now numerous open repositories of experimental and simulated datasets. Being able to easily access and utilise these is crucial for allowing researchers to make optimal use of their research effort. The tools presented here are useful for collating existing public cryoEM datasets and/or creating new synthetic cryoEM datasets to aid the development of novel data processing and interpretation algorithms. In recent years, structural biology has seen the development of a multitude of machine-learning based algorithms for aiding numerous steps in the processing and reconstruction of experimental datasets and the use of these approaches has become widespread. Developing such techniques in structural biology requires access to large datasets which can be cumbersome to curate and unwieldy to make use of. In this paper we present a suite of Python software packages which we collectively refer to as PERC (profet, EMPIARreader and CAKED). These are designed to reduce the burden which data curation places upon structural biology research. The protein structure fetcher (profet) package allows users to conveniently download and cleave sequences or structures from the Protein Data Bank or Alphafold databases. EMPIARreader allows lazy loading of Electron Microscopy Public Image Archive datasets in a machine-learning compatible structure. The Class Aggregator for Key Electron-microscopy Data (CAKED) package is designed to seamlessly facilitate the training of machine learning models on electron microscopy data, including electron-cryo-microscopy-specific data augmentation and labelling. These packages may be utilised independently or as building blocks in workflows. All are available in open source repositories and designed to be easily extensible to facilitate more advanced workflows if required. |
2025-03-17 | Magneto-thermally Coupled Field Simulation of Homogenized Foil Winding Models | Silas Weinert, Jonas Bundschuh, Yvonne Späck-Leigsnering et.al. | 2503.13010 | | 6 pages, 8 figures | Abstract (click to expand)Foil windings have, due to their layered structure, different properties than conventional wire windings, which make them advantageous for high frequency applications. Both electromagnetic and thermal analyses are relevant for foil windings. These two physical areas are coupled through Joule losses and temperature dependent material properties. For an efficient simulation of foil windings, homogenization techniques are used to avoid resolving the single turns. Therefore, this paper comprises a coupled magneto-thermal simulation that uses a homogenization method in the electromagnetic and thermal part. A weak coupling with different time step sizes for both parts is presented. The method is validated on a simple geometry and showcased for a pot transformer that uses a foil and a wire winding. |
2025-03-17 | AUTV: Creating Underwater Video Datasets with Pixel-wise Annotations | Quang Trung Truong, Wong Yuk Kwan, Duc Thanh Nguyen et.al. | 2503.12828 | | under review | Abstract (click to expand)Underwater video analysis, hampered by the dynamic marine environment and camera motion, remains a challenging task in computer vision. Existing training-free video generation techniques, learning motion dynamics on a frame-by-frame basis, often produce poor results with noticeable motion interruptions and misalignments. To address these issues, we propose AUTV, a framework for synthesizing marine video data with pixel-wise annotations. We demonstrate the effectiveness of this framework by constructing two video datasets, namely UTV, a real-world dataset comprising 2,000 video-text pairs, and SUTV, a synthetic video dataset including 10,000 videos with segmentation masks for marine objects. UTV provides diverse underwater videos with comprehensive annotations including appearance, texture, camera intrinsics, lighting, and animal behavior. SUTV can be used to improve underwater downstream tasks, which are demonstrated in video inpainting and video object segmentation. |
2025-03-17 | Cohort-attention Evaluation Metric against Tied Data: Studying Performance of Classification Models in Cancer Detection | Longfei Wei, Fang Sheng, Jianfei Zhang et.al. | 2503.12755 | | | Abstract (click to expand)Artificial intelligence (AI) has significantly improved medical screening accuracy, particularly in cancer detection and risk assessment. However, traditional classification metrics often fail to account for imbalanced data, varying performance across cohorts, and patient-level inconsistencies, leading to biased evaluations. We propose the Cohort-Attention Evaluation Metrics (CAT) framework to address these challenges. CAT introduces patient-level assessment, entropy-based distribution weighting, and cohort-weighted sensitivity and specificity. Key metrics like CATSensitivity (CATSen), CATSpecificity (CATSpe), and CATMean ensure balanced and fair evaluation across diverse populations. This approach enhances predictive reliability, fairness, and interpretability, providing a robust evaluation method for AI-driven medical screening models. |
2025-03-16 | Discovering uncertainty: Gaussian constitutive neural networks with correlated weights | Jeremy A. McCulloch, Ellen Kuhl et.al. | 2503.12679 | link | 10 pages, 5 figures, 1 table | Abstract (click to expand)When characterizing materials, it can be important to not only predict their mechanical properties, but also to estimate the probability distribution of these properties across a set of samples. Constitutive neural networks allow for the automated discovery of constitutive models that exactly satisfy physical laws given experimental testing data, but are only capable of predicting the mean stress response. Stochastic methods treat each weight as a random variable and are capable of learning their probability distributions. Bayesian constitutive neural networks combine both methods, but their weights lack physical interpretability and we must sample each weight from a probability distribution to train or evaluate the model. Here we introduce a more interpretable network with fewer parameters, simpler training, and the potential to discover correlated weights: Gaussian constitutive neural networks. We demonstrate the performance of our new Gaussian network on biaxial testing data, and discover a sparse and interpretable four-term model with correlated weights. Importantly, the discovered distributions of material parameters across a set of samples can serve as priors to discover better constitutive models for new samples with limited data. We anticipate that Gaussian constitutive neural networks are a natural first step towards generative constitutive models informed by physical laws and parameter uncertainty. |
2025-03-16 | Modeling ice cliff stability using a new Mohr-Coulomb-based phase field fracture model | T. Clayton, R. Duddu, T. Hageman et.al. | 2503.12481 | | | Abstract (click to expand)Iceberg calving at glacier termini results in mass loss from ice sheets, but the associated fracture mechanics is often poorly represented using simplistic (empirical or elementary mechanics-based) failure criteria. Here, we propose an advanced Mohr-Coulomb failure criterion that drives cracking based on the visco-elastic stress state in ice. This criterion is implemented in a phase field fracture framework, and finite element simulations are conducted to determine the critical conditions that can trigger ice cliff collapse. Results demonstrate that fast-moving glaciers with negligible basal friction are prone to tensile failure causing crevasse propagation far away from the ice front; whilst slow-moving glaciers with significant basal friction are likely to exhibit shear failure near the ice front. Results also indicate that seawater pressure plays a major role in modulating cliff failure. For land terminating glaciers, full thickness cliff failure is observed if the glacier exceeds a critical height, dependent on cohesive strength \(\tau_\mathrm{c}\) (\(H \approx 120\;\text{m}\) for \(\tau_\mathrm{c}=0.5\;\text{MPa}\)). For marine-terminating glaciers, ice cliff failure occurs if a critical glacier free-board (\(H-h_\mathrm{w}\)) is exceeded, with ice slumping only observed above the ocean-water height; for \(\tau_\mathrm{c} = 0.5\;\text{MPa}\), the model-predicted critical free-board is \(H-h_\mathrm{w} \approx 215\;\text{m}\) , which is in good agreement with field observations. While the critical free-board height is larger than that predicted by some previous models, we cannot conclude that marine ice cliff instability is less likely because we do not include other failure processes such as hydrofracture of basal crevasses and plastic necking. |
2025-03-16 | Development of a Cost-Effective Simulation Tool for Loss of Flow Accident Transients in High-Temperature Gas-cooled Reactors | Bo Liu, Wei Wang, Charles Moulinec et.al. | 2503.12467 | | | Abstract (click to expand)The aim of this work is to further expand the capability of the coarse-grid Computational Fluid Dynamics (CFD) approach, SubChCFD, to effectively simulate transient and buoyancy-influenced flows, which are critical in accident analyses of High-Temperature Gas-cooled Reactors (HTGRs). It has been demonstrated in our previous work that SubChCFD is highly adaptable to HTGR fuel designs and performs exceptionally well in modelling steady-state processes. In this study, the approach is extended to simulate a Loss of Flow Accident (LOFA) transient, where coolant circulation is disrupted, causing the transition from forced convection to buoyancy-driven natural circulation within the reactor core. To enable SubChCFD to capture the complex physics involved, corrections were introduced to the empirical correlations to account for the effects of flow unsteadiness, property variation and buoyancy. A 1/12th sector of the reactor core, representing the smallest symmetric unit, was modelled using a coarse mesh of approximately 60 million cells. This mesh size is about 6% of that required for a Reynolds Averaged Navier Stokes (RANS) model, where mesh sizes can typically reach the order of 1 billion cells for such configurations. Simulation results show that SubChCFD effectively captures the thermal hydraulic behaviours of the reactor during a LOFA transient, producing predictions in good agreement with RANS simulations while significantly reducing computational cost. |
2025-03-14 | Adiabatic Flame Temperatures for Oxy-Methane, Oxy-Hydrogen, Air-Methane, and Air-Hydrogen Stoichiometric Combustion using the NASA CEARUN Tool, GRI-Mech 3.0 Reaction Mechanism, and Cantera Python Package | Osama A. Marzouk et.al. | 2503.11826 | | 8 pages, 8 figures, 8 tables, peer-reviewed journal paper, open access | Abstract (click to expand)The Adiabatic Flame Temperature (AFT) in combustion represents the maximum attainable temperature at which the chemical energy in the reactant fuel is converted into sensible heat in combustion products without heat loss. AFT depends on the fuel, oxidizer, and chemical composition of the products. Computing AFT requires solving either a nonlinear equation or a larger minimization problem. This study obtained the AFTs for oxy-methane (methane and oxygen), oxy-hydrogen (hydrogen and oxygen), air-methane (methane and air), and air-hydrogen (hydrogen and air) under stoichiometric conditions. The reactant temperature was 298.15 K (25°C), and the pressure was kept constant at 1 atm. Two reaction mechanisms were considered: a global single-step irreversible reaction for complete combustion and the GRI-Mech 3.0 elementary mechanism (53 species, 325 steps) for chemical equilibrium with its associated thermodynamic data. NASA CEARUN was the main modeling tool used. Two other tools were used for benchmarking: Excel and Cantera-Python implementations of GRI-Mech 3.0. The results showed that the AFTs for oxy-methane were 5,166.47 K (complete combustion) and 3,050.12 K (chemical equilibrium), and dropped to 2,326.35 K and 2,224.25 K for air-methane, respectively. The AFTs for oxy-hydrogen were 4,930.56 K (complete combustion) and 3,074.51 K (chemical equilibrium), and dropped to 2,520.33 K and 2,378.62 K for air-hydrogen, respectively. For the eight combustion modeling cases, the relative deviation between the AFTs predicted by CEARUN and GRI-Mech 3.0 ranged from 0.064% to 3.503%. |
2025-03-14 | Unfitted hybrid high-order methods stabilized by polynomial extension for elliptic interface problems | Erik Burman, Alexandre Ern, Romain Mottier et.al. | 2503.11397 | | | Abstract (click to expand)In this work, we propose the design and the analysis of a novel hybrid high-order (HHO) method on unfitted meshes. HHO methods rely on a pair of unknowns, combining polynomials attached to the mesh faces and the mesh cells. In the unfitted framework, the interface can cut through the mesh cells in a very general fashion, and the polynomial unknowns are doubled in the cut cells and the cut faces. In order to avoid the ill-conditioning issues caused by the presence of small cut cells, the novel approach introduced herein is to use polynomial extensions in the definition of the gradient reconstruction operator. Stability and consistency results are established, leading to optimally decaying error estimates. The theory is illustrated by numerical experiments. |
2025-03-14 | Corrected Riemann smoothed particle hydrodynamics method for multi-resolution fluid-structure interaction | Bo Zhang, Jianfeng Zhu, Xiangyu Hu et.al. | 2503.11292 | link | 47 pages, 19 figures | Abstract (click to expand)As a mesh-free method, smoothed particle hydrodynamics (SPH) has been widely used for modeling and simulating fluid-structure interaction (FSI) problems. While the kernel gradient correction (KGC) method is commonly applied in structural domains to enhance numerical consistency, high-order consistency corrections that preserve conservation remain underutilized in fluid domains despite their critical role in FSI analysis, especially for the multi-resolution scheme where fluid domains generally have a low resolution. In this study, we incorporate the reverse kernel gradient correction (RKGC) formulation, a conservative high-order consistency approximation, into the fluid discretization for solving FSI problems. RKGC has been proven to achieve exact second-order convergence with relaxed particles and improve numerical accuracy while particularly enhancing energy conservation in free-surface flow simulations. By integrating this correction into the Riemann SPH method to solve different typical FSI problems with a multi-resolution scheme, numerical results consistently show improvements in accuracy and convergence compared to uncorrected fluid discretization. Despite these advances, further refinement of correction techniques for solid domains and fluid-structure interfaces remains significant for enhancing the overall accuracy of SPH-based FSI modeling and simulation. |
2025-03-13 | Predicting Stock Movement with BERTweet and Transformers | Michael Charles Albada, Mojolaoluwa Joshua Sonola et.al. | 2503.10957 | | 9 pages, 4 figures, 2 tables | Abstract (click to expand)Applying deep learning and computational intelligence to finance has been a popular area of applied research, both within academia and industry, and continues to attract active attention. The inherently high volatility and non-stationarity of the data pose substantial challenges to machine learning models, especially so for today's expressive and highly parameterized deep learning models. Recent work that augments models based purely on historical price data with natural language processing of social media data has received particular attention. Previous work has achieved state-of-the-art performance on this task by combining techniques such as bidirectional GRUs, variational autoencoders, word and document embeddings, self-attention, graph attention, and adversarial training. In this paper, we demonstrate the efficacy of BERTweet, a variant of BERT pre-trained specifically on a Twitter corpus, and the transformer architecture by achieving competitive performance with the existing literature and setting a new baseline for Matthews Correlation Coefficient on the Stocknet dataset without auxiliary data sources. |
2025-03-13 | Design and Analysis of an Extreme-Scale, High-Performance, and Modular Agent-Based Simulation Platform | Lukas Johannes Breitwieser et.al. | 2503.10796 | | PhD Thesis submitted to ETH Zurich | Abstract (click to expand)Agent-based modeling is indispensable for studying complex systems across many domains. However, existing simulation platforms exhibit two major issues: performance and modularity. Low performance prevents simulations with a large number of agents, increases development time, limits parameter exploration, and raises computing costs. Inflexible software designs motivate modelers to create their own tools, diverting valuable resources. This dissertation introduces a novel simulation platform called BioDynaMo and its significant improvement, TeraAgent, to alleviate these challenges via three major works. First, we lay the platform's foundation by defining abstractions, establishing software infrastructure, and implementing a multitude of features for agent-based modeling. We demonstrate BioDynaMo's modularity through use cases in neuroscience, epidemiology, and oncology. We validate these models and show the simplicity of adding new functionality with few lines of code. Second, we perform a rigorous performance analysis and identify challenges for shared-memory parallelism. Provided solutions include an optimized grid for neighbor searching, mechanisms to reduce the memory access latency, and exploiting domain knowledge to omit unnecessary work. These improvements yield up to three orders of magnitude speedups, enabling simulations of 1.7 billion agents on a single server. Third, we present TeraAgent, a distributed simulation engine that allows scaling out the computation of one simulation to multiple servers. We identify and address server communication bottlenecks and implement solutions for serialization and delta encoding to accelerate and reduce data transfer. TeraAgent can simulate 500 billion agents and scales to 84096 CPU cores. BioDynaMo has been widely adopted, including a prize-winning radiotherapy simulation recognized as a top 10 breakthrough in physics in 2024. |
2025-03-13 | Unifying monitoring and modelling of water concentration levels in surface waters | Peter B Sorensen, Anders Nielsen, Peter E Holm et.al. | 2503.10285 | | 41 pages, 11 figures, Developed to support the Danish EPA | Abstract (click to expand)Accurate prediction of expected concentrations is essential for effective catchment management, requiring both extensive monitoring and advanced modeling techniques. However, due to limitations in equation-solving capacity, the integration of monitoring and modeling has suffered from suboptimal statistical approaches. As a result, models can only partially leverage monitoring data, which obstructs realistic uncertainty assessment by overlooking critical correlations between measurements and model parameters. This study presents a novel solution that integrates catchment monitoring with a unified hierarchical statistical catchment model, employing a log-normal distribution for residuals within a left-censored likelihood function to handle measurements below detection limits. This enables the estimation of concentrations within sub-catchments in conjunction with a source/fate sub-catchment model and monitoring data. The approach is made possible by the R model-building package RTMB. The proposed approach introduces a statistical paradigm based on a hierarchical structure, capable of accommodating heterogeneous sampling across sampling locations, and the authors suggest that this will also encourage further refinement of other existing modeling platforms within the scientific community to improve synergy with monitoring programs. The application of the method is demonstrated through an analysis of nickel concentrations in Danish surface waters. |
2025-03-13 | A Neumann-Neumann Acceleration with Coarse Space for Domain Decomposition of Extreme Learning Machines | Chang-Ock Lee, Byungeun Ryoo et.al. | 2503.10032 | | 21 pages, 6 figures, 6 tables | Abstract (click to expand)Extreme learning machines (ELMs), which preset hidden layer parameters and solve for last layer coefficients via a least squares method, can typically solve partial differential equations faster and more accurately than Physics Informed Neural Networks. However, they remain computationally expensive when high accuracy requires large least squares problems to be solved. Domain decomposition methods (DDMs) for ELMs have allowed parallel computation to reduce training times of large systems. This paper constructs a coarse space for ELMs, which enables further acceleration of their training. By partitioning interface variables into coarse and non-coarse variables, selective elimination introduces a Schur complement system on the non-coarse variables with the coarse problem embedded. Key to the performance of the proposed method is a Neumann-Neumann acceleration that utilizes the coarse space. Numerical experiments demonstrate significant speedup compared to a previous DDM method for ELMs. |
2025-03-12 | A Deep Reinforcement Learning Approach to Automated Stock Trading, using xLSTM Networks | Faezeh Sarlakifar, Mohammadreza Mohammadzadeh Asl, Sajjad Rezvani Khaledi et.al. | 2503.09655 | | | Abstract (click to expand)Traditional Long Short-Term Memory (LSTM) networks are effective for handling sequential data but have limitations such as gradient vanishing and difficulty in capturing long-term dependencies, which can impact their performance in dynamic and risky environments like stock trading. To address these limitations, this study explores the use of the newly introduced Extended Long Short-Term Memory (xLSTM) network in combination with a deep reinforcement learning (DRL) approach for automated stock trading. Our proposed method utilizes xLSTM networks in both actor and critic components, enabling effective handling of time-series data and dynamic market environments. Proximal Policy Optimization (PPO), with its ability to balance exploration and exploitation, is employed to optimize the trading strategy. Experiments were conducted using financial data from major tech companies over a comprehensive timeline, demonstrating that the xLSTM-based model outperforms LSTM-based methods in key trading evaluation metrics, including cumulative return, average profitability per trade, maximum earning rate, maximum pullback, and Sharpe ratio. These findings highlight the potential of xLSTM for enhancing DRL-based stock trading systems. |
2025-03-18 | Leveraging LLMs for Top-Down Sector Allocation In Automated Trading | Ryan Quek Wei Heng, Edoardo Vittori, Keane Ong et.al. | 2503.09647 | | | Abstract (click to expand)This paper introduces a methodology leveraging Large Language Models (LLMs) for sector-level portfolio allocation through systematic analysis of macroeconomic conditions and market sentiment. Our framework emphasizes top-down sector allocation by processing multiple data streams simultaneously, including policy documents, economic indicators, and sentiment patterns. Empirical results demonstrate superior risk-adjusted returns compared to traditional cross-momentum strategies, achieving a Sharpe ratio of 2.51 and a portfolio return of 8.79%, versus -0.61 and -1.39% respectively. These results suggest that LLM-based systematic macro analysis presents a viable approach for enhancing automated portfolio allocation decisions at the sector level. |
2025-03-12 | AI-based Framework for Robust Model-Based Connector Mating in Robotic Wire Harness Installation | Claudius Kienle, Benjamin Alt, Finn Schneider et.al. | 2503.09409 | | 6 pages, 6 figures, 4 tables, submitted to the 2025 IEEE 21st International Conference on Automation Science and Engineering | Abstract (click to expand)Despite the widespread adoption of industrial robots in automotive assembly, wire harness installation remains a largely manual process, as it requires precise and flexible manipulation. To address this challenge, we design a novel AI-based framework that automates cable connector mating by integrating force control with deep visuotactile learning. Our system optimizes search-and-insertion strategies using first-order optimization over a multimodal transformer architecture trained on visual, tactile, and proprioceptive data. Additionally, we design a novel automated data collection and optimization pipeline that minimizes the need for machine learning expertise. The framework optimizes robot programs that run natively on standard industrial controllers, permitting human experts to audit and certify them. Experimental validations on a center console assembly task demonstrate significant improvements in cycle times and robustness compared to conventional robot programming approaches. Videos are available at https://claudius-kienle.github.io/AppMuTT. |
2025-03-12 | Large-scale Thermo-Mechanical Simulation of Laser Beam Welding Using High-Performance Computing: A Qualitative Reproduction of Experimental Results | Tommaso Bevilacqua, Andrey Gumenyuk, Niloufar Habibi et.al. | 2503.09345 | | | Abstract (click to expand)Laser beam welding is a non-contact joining technique that has gained significant importance in the course of the increasing degree of automation in industrial manufacturing. This process has established itself as a suitable joining tool for metallic materials due to its non-contact processing, short cycle times, and small heat-affected zones. One potential problem, however, is the formation of solidification cracks, which particularly affects alloys with a pronounced melting range. Since solidification cracking is influenced by both temperature and strain rate, precise measurement technologies are of crucial importance. For this purpose, as an experimental setup, a Controlled Tensile Weldability (CTW) test combined with a local deformation measurement technique is used. The aim of the present work is the development of computational methods and software tools to numerically simulate the CTW. The numerical results are compared with those obtained from the experimental CTW. In this study, an austenitic stainless steel sheet is selected. A thermo-elastoplastic material behavior with temperature-dependent material parameters is assumed. The time-dependent problem is first discretized in time and then the resulting nonlinear problem is linearized with Newton's method. For the discretization in space, finite elements are used. In order to obtain a sufficiently accurate solution, a large number of finite elements has to be used. In each Newton step, this yields a large linear system of equations that has to be solved. Therefore, a highly parallel scalable solver framework, based on the software library PETSc, was used to solve this computationally challenging problem on a high-performance computing architecture. Finally, the experimental results and the numerical simulations are compared, showing good qualitative agreement. |
2025-03-12 | A 3d particle visualization system for temperature management | Benoit Lange, Nancy Rodriguez, William Puech et.al. | 2503.09198 | | | Abstract (click to expand)This paper presents a 3D visualization technique proposed to analyze and manage the energy efficiency of a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure different quantities such as humidity, pressure and temperature. We want to visualize in real time the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. In order to solve performance problems, a Level Of Detail solution has been developed. These methods are based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work, and subsequently we explain the different simplification methods we have applied to improve our solution. |
2025-03-11 | Capturing Lifecycle System Degradation in Digital Twin Model Updating | Yifan Tang, Mostafa Rahmani Dehaghani, G. Gary Wang et.al. | 2503.08953 | | 32 pages, 25 figures | Abstract (click to expand)Digital twin (DT) has emerged as a powerful tool to facilitate monitoring, control, and other decision-making tasks in real-world engineering systems. Online update methods have been proposed to update DT models. Considering the degradation behavior in the system lifecycle, these methods fail to enable DT models to predict the system responses affected by the system degradation over time. To alleviate this problem, degradation models of measurable parameters have been integrated into DT construction. However, identifying the degradation parameters relies on prior knowledge of the system and expensive experiments. To mitigate these limitations, this paper proposes a lifelong update method for DT models to capture the effects of system degradation on system responses without any prior knowledge or expensive offline experiments on the system. The core idea in the work is to represent the system degradation during the lifecycle as the dynamic changes of DT configurations (i.e., model parameters with a fixed model structure) at all degradation stages. During the lifelong update process, an Autoencoder is adopted to reconstruct the model parameters of all hidden layers simultaneously, so that the latent features taking into account the dependencies among hidden layers are obtained for each degradation stage. The dynamic behavior of latent features among successive degradation stages is then captured by a long short-term memory model, which enables prediction of the latent feature at any unseen stage. Based on the predicted latent features, the model configuration at a future degradation stage is reconstructed to determine the new DT model, which predicts the system responses affected by the degradation at the same stage. The test results on two engineering datasets demonstrate that the proposed update method can capture the effects of system degradation on system responses during the lifecycle. |
2025-03-11 | Towards Efficient Parametric State Estimation in Circulating Fuel Reactors with Shallow Recurrent Decoder Networks | Stefano Riva, Carolina Introini, J. Nathan Kutz et.al. | 2503.08904 | link | arXiv admin note: text overlap with arXiv:2409.12550 | Abstract (click to expand)The recent developments in data-driven methods have paved the way to new methodologies to provide accurate state reconstruction of engineering systems; nuclear reactors represent particularly challenging applications for this task due to the complexity of the strongly coupled physics involved and the extremely harsh and hostile environments, especially for new technologies such as Generation-IV reactors. Data-driven techniques can combine different sources of information, including computational proxy models and local noisy measurements on the system, to robustly estimate the state. This work leverages the novel Shallow Recurrent Decoder architecture to infer the entire state vector (including neutron fluxes, precursor concentrations, temperature, pressure and velocity) of a reactor from three out-of-core time-series neutron flux measurements alone. In particular, this work extends the standard architecture to treat parametric time-series data, ensuring the possibility of investigating different accidental scenarios and showing the capabilities of this approach to provide an accurate state estimation in various operating conditions. This paper considers as a test case the Molten Salt Fast Reactor (MSFR), a Generation-IV reactor concept, characterised by strong coupling between the neutronics and the thermal hydraulics due to the liquid nature of the fuel. The promising results of this work are further strengthened by the possibility of quantifying the uncertainty associated with the state estimation, due to the considerably low training cost. The accurate reconstruction of every characteristic field in real-time makes this approach suitable for monitoring and control purposes in the framework of a reactor digital twin. |
2025-03-11 | Nonlinear optimals and their role in sustaining turbulence in channel flow | Dario Klingenberg, Rich R. Kerswell et.al. | 2503.08283 | link | | Abstract (click to expand)We investigate the energy transfer from the mean profile to velocity fluctuations in channel flow by calculating nonlinear optimal disturbances, i.e. the initial condition of a given finite energy that achieves the highest possible energy growth during a given fixed time horizon. It is found that for a large range of time horizons and initial disturbance energies, the nonlinear optimal exhibits streak spacing and amplitude consistent with DNS, at least at \(Re_\tau = 180\), which suggests that it isolates the relevant physical mechanisms that sustain turbulence. Moreover, the time horizon necessary for a nonlinear disturbance to outperform a linear optimal is consistent with previous DNS-based estimates using the eddy turnover time, which offers a new perspective on how some turbulent time scales are determined. |
2025-03-11 | XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change | Jiawen Wei, Aniruddha Bora, Vivek Oommen et.al. | 2503.08163 | | | Abstract (click to expand)Extreme weather events are increasing in frequency and intensity due to climate change, exacting a significant toll on communities worldwide. While prediction skill is increasing with advances in numerical weather prediction and artificial intelligence tools, extreme weather still presents challenges. More specifically, it remains unclear what the precursors of such extreme weather events are and how these precursors may evolve under climate change. In this paper, we propose to use post-hoc interpretability methods to construct relevance maps that show the key extreme-weather precursors identified by deep learning models. We then compare this machine view with existing domain knowledge to understand whether deep learning models identify patterns in data that may enrich our understanding of extreme-weather precursors. We finally bin these relevance maps into different multi-year time periods to understand the role that climate change is having on these precursors. The experiments are carried out on Indochina heatwaves, but the methodology can be readily extended to other extreme weather events worldwide. |
2025-03-10 | Network Analysis of Uniswap: Centralization and Fragility in the Decentralized Exchange Market | Tao Yan, Claudio J. Tessone et.al. | 2503.07834 | | | Abstract (click to expand)Uniswap is a Decentralized Exchange (DEX) protocol that facilitates automatic token exchange without the need for traditional order books. Every pair of tokens forms a liquidity pool on Uniswap, and each token can be paired with any other token to create liquidity pools. This characteristic motivates us to employ a complex-network approach to analyze the features of the Uniswap market. This research presents a comprehensive analysis of the Uniswap network using complex-network methods. The network on October 31, 2023, is built to observe its recent features, showcasing both scale-free and core-periphery properties. By employing node- and edge-betweenness metrics, we detect the most important tokens and liquidity pools. Additionally, we construct daily networks spanning from the beginning of Uniswap V2 on May 5, 2020, until October 31, 2023, and our findings demonstrate that the network becomes increasingly fragile over time. Furthermore, we conduct a robustness analysis by simulating the deletion of nodes to estimate the impact of extreme events such as the Terra collapse. The results indicate that the Uniswap network exhibits robustness, yet it is notably fragile when tokens with high betweenness centrality are deleted. This finding highlights that, despite being a decentralized exchange, Uniswap exhibits significant centralization tendencies in terms of token network connectivity and the distribution of TVL across nodes (tokens) and edges (liquidity pools). |
2025-03-10 | What is missing from existing Lithium-Sulfur models to capture coin-cell behaviour? | Miss. Elizabeth Olisa Monica Marinescu et.al. | 2503.07684 | | 27 pages, 7 figures, conferences presented: ModVal 2025, ECS 2025 | Abstract (click to expand)Lithium-sulfur (Li-S) batteries offer a promising alternative to current lithium-ion (Li-ion) batteries, with a high theoretical energy density, improved safety, and highly abundant, low-cost materials. For Li-S to reach commercial application, it is essential to understand how the behaviour scales between cell formats; new material development is predominantly completed at coin-cell level, whilst pouch-cells will be used for commercial applications. Differences such as reduced electrolyte-to-sulfur (E/S) ratios and increased geometric size at larger cell formats contribute to the behavioural differences in terms of achievable capacity, cyclability and potential degradation mechanisms. This work focuses on the steps required to capture and test coin-cell behaviour, building upon the existing models within the literature, which predominantly focus on pouch-cells. The areas investigated throughout this study, to improve the capability of the model in terms of scaling ability and causality of predictions, include the cathode surface area, precipitation dynamics and C-rate dependence. |
2025-03-10 | Simultaneous Energy Harvesting and Bearing Fault Detection using Piezoelectric Cantilevers | P. Peralta-Braz, M. M. Alamdari, C. T. Chou et.al. | 2503.07462 | | | Abstract (click to expand)Bearings are critical components in industrial machinery, yet their vulnerability to faults often leads to costly breakdowns. Conventional fault detection methods depend on continuous, high-frequency vibration sensing, digitising, and wireless transmission to the cloud, an approach that significantly drains the limited energy reserves of battery-powered sensors, accelerating their depletion and increasing maintenance costs. This work proposes a fundamentally different approach: rather than using instantaneous vibration data, we employ piezoelectric energy harvesters (PEHs) tuned to specific frequencies and leverage the cumulative harvested energy over time as the key diagnostic feature. By directly utilising the energy generated from the machinery's vibrations, we eliminate the need for frequent analog-to-digital conversions and data transmission, thereby reducing energy consumption at the sensor node and extending its operational lifetime. To validate this approach, we use a numerical PEH model and publicly available acceleration datasets, examining various PEH designs with different natural frequencies. We also consider the influence of the classification algorithm, the number of devices, and the observation window duration. The results demonstrate that the harvested energy reliably indicates bearing faults across a range of conditions and severities. By converting vibration energy into both a power source and a diagnostic feature, our solution offers a more sustainable, low-maintenance strategy for fault detection in smart machinery. |
2025-03-10 | Early signs of stuck pipe detection based on Crossformer | Bo Cao, Yu Song, Jin Yang et.al. | 2503.07440 | | 33 pages, 9 figures | Abstract (click to expand)Stuck pipe incidents are one of the major challenges in drilling engineering, leading to massive time loss and additional costs. To address the limitations of insufficient long-sequence modeling capability, the difficulty in accurately establishing warning thresholds, and the lack of model interpretability in existing methods, we utilize Crossformer to detect early signs of potential stuck events, in order to provide guidance for on-site drilling engineers and prevent stuck pipe incidents. The sliding window technique is integrated into Crossformer to allow it to output and display longer sequences; the improved Crossformer model is trained using normal time-series drilling data to generate predictions for various parameters at each time step. The relative reconstruction error of the model is regarded as the risk of stuck pipe, so that data the model cannot predict are treated as anomalies, which represent the early signs of stuck pipe incidents. The multi-step prediction capability of Crossformer and the relative reconstruction error are combined to assess stuck pipe risk at each time step in advance. We partition the reconstruction error into modeling error and error due to anomalous data fluctuations; furthermore, the dynamic warning threshold and warning time for stuck pipe incidents are determined using the probability density function of reconstruction errors from normal drilling data. The results indicate that our method can effectively detect early signs of stuck pipe incidents during the drilling process. Crossformer exhibits superior modeling and predictive capabilities compared with other deep learning models, and Transformer-based models with multi-step prediction capability are more suitable for stuck pipe prediction than current single-step prediction models. |
2025-03-10 | An Analytics-Driven Approach to Enhancing Supply Chain Visibility with Graph Neural Networks and Federated Learning | Ge Zheng, Alexandra Brintrup et.al. | 2503.07231 | | 15 pages, 5 figures, 5 tables, submitted to a journal | Abstract (click to expand)In today's globalised trade, supply chains form complex networks spanning multiple organisations and even countries, making them highly vulnerable to disruptions. These vulnerabilities, highlighted by recent global crises, underscore the urgent need for improved visibility and resilience of the supply chain. However, data-sharing limitations often hinder the achievement of comprehensive visibility between organisations or countries due to privacy, security, and regulatory concerns. Moreover, most existing research studies focused on individual firm- or product-level networks, overlooking the multifaceted interactions among diverse entities that characterise real-world supply chains, thus limiting a holistic understanding of supply chain dynamics. To address these challenges, we propose a novel approach that integrates Federated Learning (FL) and Graph Convolutional Neural Networks (GCNs) to enhance supply chain visibility through relationship prediction in supply chain knowledge graphs. FL enables collaborative model training across countries by facilitating information sharing without requiring raw data exchange, ensuring compliance with privacy regulations and maintaining data security. GCNs empower the framework to capture intricate relational patterns within knowledge graphs, enabling accurate link prediction to uncover hidden connections and provide comprehensive insights into supply chain networks. Experimental results validate the effectiveness of the proposed approach, demonstrating its ability to accurately predict relationships within country-level supply chain knowledge graphs. This enhanced visibility supports actionable insights, facilitates proactive risk management, and contributes to the development of resilient and adaptive supply chain strategies, ensuring that supply chains are better equipped to navigate the complexities of the global economy. |
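A minimal sketch of the federated aggregation step such a framework relies on, assuming the standard FedAvg weighted average; the GCN itself, the client names, and the weight vectors are placeholders, not the paper's architecture:

```python
def fedavg(client_weights, client_sizes):
    """Aggregate model parameters from per-country clients without exchanging
    raw supply-chain data: a dataset-size-weighted average (FedAvg).

    Each client ships only its parameter dict; the coordinator never sees
    the underlying knowledge-graph edges.
    """
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: [
            sum(w[k][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0][k]))
        ]
        for k in keys
    }

# Two hypothetical country-level clients, each holding a tiny weight vector.
w_a = {"layer1": [1.0, 2.0]}
w_b = {"layer1": [3.0, 4.0]}
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
```

In a full system this average would be computed each round over locally trained GCN link-prediction models, then broadcast back to the clients.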
2025-03-10 | Simulating programmable morphing of shape memory polymer beam systems with complex geometry and topology | Giulio Ferri, Enzo Marino et.al. | 2503.07150 | | | Abstract (click to expand)We propose a novel approach to the analysis of programmable geometrically exact shear deformable beam systems made of shape memory polymers. The proposed method combines the viscoelastic Generalized Maxwell model with the Williams, Landel and Ferry relaxation principle, enabling the reproduction of the shape memory effect of structural systems featuring complex geometry and topology. Very high efficiency is pursued by discretizing the differential problem in space through the isogeometric collocation (IGA-C) method. The method, in addition to the desirable attributes of isogeometric analysis (IGA), such as exactness of the geometric reconstruction of complex shapes and high-order accuracy, circumvents the need for numerical integration since it discretizes the problem in the strong form. Other distinguishing features of the proposed formulation are: i) \({\rm SO}(3)\) -consistency for the linearization of the problem and for the time stepping; ii) minimal (finite) rotation parametrization, that means only three rotational unknowns are used; iii) no additional unknowns are needed to account for the rate-dependent material compared to the purely elastic case. Through different numerical applications involving challenging initial geometries, we show that the proposed formulation possesses all the sought attributes in terms of programmability of complex systems, geometric flexibility, and high order accuracy. |
2025-03-10 | Effect of Selection Format on LLM Performance | Yuchen Han, Yucheng Wu, Jeffrey Willard et.al. | 2503.06926 | | | Abstract (click to expand)This paper investigates a critical aspect of large language model (LLM) performance: the optimal formatting of classification task options in prompts. Through an extensive experimental study, we compared two selection formats -- bullet points and plain English -- to determine their impact on model performance. Our findings suggest that presenting options via bullet points generally yields better results, although there are some exceptions. Furthermore, our research highlights the need for continued exploration of option formatting to drive further improvements in model performance. |
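The two selection formats being compared can be rendered with a helper like the following; the function name, wording, and example task are illustrative, not taken from the paper:

```python
def format_options(question, options, style="bullets"):
    """Render classification options either as bullet points or as a
    plain-English enumeration, the two prompt formats under comparison."""
    if style == "bullets":
        body = "\n".join(f"- {o}" for o in options)
        return f"{question}\n{body}"
    # Plain-English enumeration: "A, B, or C."
    return f"{question} Choose {', '.join(options[:-1])}, or {options[-1]}."

q = "What is the sentiment of this review?"
opts = ["positive", "negative", "neutral"]
bulleted = format_options(q, opts, style="bullets")
plain = format_options(q, opts, style="plain")
```

Holding the task fixed and varying only this rendering step is what isolates the formatting effect the study measures.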
2025-03-09 | Modular Photobioreactor Façade Systems for Sustainable Architecture: Design, Fabrication, and Real-Time Monitoring | Xiujin Liu et.al. | 2503.06769 | | 21 pages, 22 figures, 3 tables | Abstract (click to expand)This paper proposes an innovative solution to the growing issue of greenhouse gas (GHG) emissions: a closed photobioreactor (PBR) façade system to mitigate GHG concentrations. With digital fabrication technology, this study explores the transition from traditional, single-function building façades to multifunctional, integrated building systems, while addressing the challenge of transporting large-scale prefabricated components. This research introduces a novel approach by designing the façade system as modular, user-friendly and transportation-friendly bricks, enabling the creation of a user-customized and self-assembled PBR system. The single module in the system is proposed to be a "neutralization brick", which is embedded with algae and equipped with an air circulation system, facilitating the PBR's functionality. A connection system between modules allows for easy assembly by users, while a limited variety of brick styles ensures modularity in manufacturing without sacrificing customization and diversity. The system is also equipped with an advanced microalgae status detection algorithm, which allows users to monitor the condition of the microalgae using a monocular camera. This functionality ensures timely alerts and notifications for users to replace the algae, thereby optimizing the operational efficiency and sustainability of the algae cultivation process. |
2025-03-09 | Energy-Adaptive Checkpoint-Free Intermittent Inference for Low Power Energy Harvesting Systems | Sahidul Islam, Wei Wei, Jishnu Banarjee et.al. | 2503.06663 | | | Abstract (click to expand)Deep neural network (DNN) inference in energy harvesting (EH) devices poses significant challenges due to resource constraints and frequent power interruptions. These power losses not only increase end-to-end latency, but also compromise inference consistency and accuracy, as existing checkpointing and restore mechanisms are prone to errors. Consequently, the quality of service (QoS) for DNN inference on EH devices is severely impacted. In this paper, we propose an energy-adaptive DNN inference mechanism capable of dynamically transitioning the model into a low-power mode by reducing computational complexity when harvested energy is limited. This approach ensures that end-to-end latency requirements are met. Additionally, to address the limitations of error-prone checkpoint-and-restore mechanisms, we introduce a checkpoint-free intermittent inference framework that ensures consistent, progress-preserving DNN inference during power failures in energy-harvesting systems. |
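The energy-adaptive part of this mechanism can be sketched as a budget-constrained mode selection; the mode names, energy costs, and accuracies below are invented for illustration and are not the paper's measured configurations:

```python
def select_inference_mode(harvested_energy_mj, modes):
    """Pick the most accurate DNN configuration whose energy cost fits the
    currently harvested budget; if nothing fits, fall back to the cheapest
    mode so inference still makes forward progress between power failures."""
    feasible = [m for m in modes if m["cost_mj"] <= harvested_energy_mj]
    if feasible:
        return max(feasible, key=lambda m: m["accuracy"])["name"]
    return min(modes, key=lambda m: m["cost_mj"])["name"]

# Hypothetical low-power variants of one model (e.g. reduced width/depth).
MODES = [
    {"name": "full",    "cost_mj": 9.0, "accuracy": 0.92},
    {"name": "reduced", "cost_mj": 4.0, "accuracy": 0.88},
    {"name": "tiny",    "cost_mj": 1.5, "accuracy": 0.80},
]
```

The checkpoint-free property described in the abstract is a separate concern: it governs how partial results survive an outage, while this selector only decides how much computation to attempt given the current energy reading.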