Dust grains play a significant role in several astrophysical processes, including gas/dust dynamics, chemical reactions, and radiative transfer. Replenishment of small-grain populations is mainly governed by fragmentation during pair-wise collisions between grains. The wide spectrum of fragmentation outcomes, from complete disruption to erosion and/or mass transfer, can be modelled by the general non-linear fragmentation equation. Efficiently solving this equation is crucial for an accurate treatment of dust fragmentation in numerical modelling. However, as with dust coagulation, numerical errors in the fragmentation algorithms currently employed in astrophysics are dominated by overdiffusion, particularly in three-dimensional hydrodynamic simulations, where the discrete resolution of the mass-density distribution tends to be highly limited. With this in mind, we have derived the first conservative form of the general non-linear fragmentation equation, with a mass flux that highlights the mass-transfer phenomenon. Then, to address cases of limited mass-density resolution, we applied a high-order discontinuous Galerkin scheme to efficiently solve the conservative fragmentation equation with a reduced number of dust bins. An accuracy of <inline-formula><tex-math id="TM0001" notation="LaTeX">$0.1{\!-\!}1~{{\ \rm per\ cent}}$</tex-math></inline-formula> is reached with 20 dust bins spanning a mass range of nine orders of magnitude.
Star-galaxy separation is a crucial step in creating target catalogues for extragalactic spectroscopic surveys. A classifier biased towards inclusivity risks including large numbers of stars, wasting fibre hours, while a more conservative classifier might overlook galaxies, compromising completeness and hence the survey objectives. To avoid the bias introduced by a training set in supervised methods, we employ an unsupervised machine learning approach. Using photometry from the Wide Area VISTA Extragalactic Survey (WAVES)-Wide catalogue, comprising nine-band u-Ks data, we create a feature space with colours, fluxes, and apparent size information extracted by PROFOUND. We apply the non-linear dimensionality reduction method UMAP (Uniform Manifold Approximation and Projection) combined with the classifier HDBSCAN to classify stars and galaxies. Our method is verified against a baseline colour and morphological method using a truth catalogue from Gaia, SDSS, GAMA, and DESI. We correctly identify 99.75% of galaxies within the AB magnitude limit of Z = 21.2, with an F1 score of 0.9971 ± 0.0018 across the entire ground-truth sample, compared to 0.9879 ± 0.0088 from the baseline method. Our method's higher purity (0.9967 ± 0.0021) compared to the baseline (0.9795 ± 0.0172) increases efficiency, identifying 11% fewer galaxy or ambiguous sources and saving approximately 70,000 fibre hours on the 4MOST instrument. We achieve reliable classification statistics for challenging sources, including quasars, compact galaxies, and low-surface-brightness galaxies, retrieving 92.7%, 84.6%, and 99.5% of them, respectively. Angular clustering analysis validates our classifications, showing consistency with the expected galaxy clustering, regardless of the baseline classification.
High-cadence, high-resolution spectroscopic observations of infant Type II supernovae (SNe) represent an exquisite probe of the atmospheres and winds of exploding red-supergiant (RSG) stars. Using radiation hydrodynamics and radiative transfer, we study the gas and radiation properties during and after the phase of shock breakout, considering RSG progenitors enshrouded within circumstellar material (CSM) that varies in extent, density, and velocity profile. In all cases, the original, unadulterated CSM structure is probed only at the onset of shock breakout, visible in high-resolution spectra as narrow, often blueshifted emission, possibly with an absorption trough. As the SN luminosity rises during breakout, radiative acceleration of the unshocked CSM sets in, leading to a broadening of the "narrow" lines by ~100 and up to ~1000 km/s, depending on CSM properties. This acceleration is greatest close to the shock, where the radiative flux is largest, and is thus typically masked by optical-depth effects. Generally, narrow-line broadening is greater for more compact, tenuous CSM because of the proximity to the shock, where the flux is born, and smaller in denser and more extended CSM. Narrow-line emission should show a broadening that first increases slowly (the line forms further out in the original wind), then rises sharply (the line forms in a region that is radiatively accelerated), before decreasing until late times (the line forms further away, in regions more weakly accelerated). Radiative acceleration should inhibit X-ray emission during the early, IIn phase. Although high spectral resolution is critical at the earliest times to probe the original slow wind, radiative acceleration and the associated line broadening may be captured with medium resolution, allowing a simultaneous view of narrow, Doppler-broadened, and extended, electron-scattering-broadened line emission.
Abridged: The fortunate proximity of SN2023ixf allowed astronomers to follow its evolution from almost the moment of the collapse of the progenitor's core. SN2023ixf can be explained as the explosion of a massive star with an energy of 0.7 × 10^51 erg, albeit with a greatly reduced envelope mass, probably because of binary interaction. In our radiative-transfer simulations, SN ejecta of 6 Msun interact with circumstellar material (CSM) of ~0.6 Msun extending to 10^15 cm, which results in a light curve (LC) peak matching that of SN2023ixf. The origin of this required CSM might be gravity waves originating from convective shell burning, which could enhance wind-like mass loss during the late stages of stellar evolution. The steeply rising, low-luminosity flux during the first hours after the observationally confirmed non-detection, however, cannot be explained by the collision of the energetic SN shock with the CSM. Instead, we consider it a precursor that we can fit with the emission from ~0.5 Msun of matter ejected with an energy of 10^49 erg a fraction of a day before the main shock of the SN explosion reached the surface of the progenitor. The source of this energy injection into the outermost shell of the stellar envelope could also be dynamical processes related to the convective activity in the progenitor's interior or envelope. Alternatively, the early rise of the LC could point to the initial breakout of a highly non-spherical SN shock, or of fast-moving, asymmetrically ejected matter swept out well ahead of the SN shock, potentially in a low-energy, nearly relativistic jet. We also discuss how pre-SN outbursts and LC precursors can be used to study or constrain energy deposition in the outermost stellar layers by the decay of exotic particles, such as axions, which could be produced simultaneously with neutrinos in the newly formed, hot neutron star.
Context. Recent evidence from spectroscopic surveys points towards the presence of a metal-poor, young stellar population in the low-α, chemically thin disk. In this context, investigating the spatial distribution and time evolution of precise, unbiased abundances is fundamental to disentangling the scenarios of formation and evolution of the Galaxy. Aims. We study the evolution of abundance gradients in the Milky Way by taking advantage of a large sample of open star clusters, which are among the best tracers for this purpose. In particular, we used data from the latest release of the Gaia-ESO survey. Methods. We performed a careful selection of open-cluster member stars, excluding members that may be affected by biases in the spectral analysis. We compared the cleaned open-cluster sample with detailed chemical evolution models of the Milky Way, using well-tested stellar yields and prescriptions for radial migration. We tested different scenarios of Galaxy evolution to explain the data, namely the two-infall and three-infall frameworks, in which the chemically thin disk is formed by one or two subsequent gas-accretion episodes, respectively. Results. Even with this selection of cluster member stars, we still find a metallicity decrease between intermediate-age (1 < Age/Gyr < 3) and young (Age < 1 Gyr) open clusters. This decrease cannot be explained in the context of the two-infall scenario, even when accounting for the effects of migration and yield prescriptions. The three-infall framework, with its late gas accretion in the last 3 Gyr, can explain the low metal content of young clusters. However, we have invoked a milder metal dilution for this gas-infall episode relative to previous findings. Conclusions. 
To explain the observed low metal content of young clusters, we propose that a late gas-accretion episode triggering a metal dilution took place, extending the framework of the three-infall model for the first time to the entire Galactic disk.
Protoplanetary discs, as the birthplaces and nurseries of planets, are crucial to understanding planet formation. Disc winds and planet-disc interactions are fundamental mechanisms shaping the structure and evolution of protoplanetary discs and of the planets within them. Massive planets can influence their discs by creating substructures such as gaps and spiral density waves, significantly impacting the dynamics of gas and dust within the disc. Winds can strip material from the disc, eventually dispersing it and setting an upper limit on both its lifetime and the timeframe available for planet formation. Despite their importance, the detailed mechanisms driving these winds, particularly the roles of thermal and magnetic processes at various locations and evolutionary stages, remain poorly constrained. This thesis investigates the intricate interplay between a thermal disc wind launched by X-ray photoevaporation and the substructures produced by giant planets. While previous detailed studies examined these processes separately, this work integrates them into one comprehensive model to investigate their interactions. Additional focus is put on producing synthetic observations of atomic forbidden emission lines in several disc-wind models, which can be compared to observational data and help constrain the launching conditions of disc winds. [...]
In single-field inflation, violation of the slow-roll approximation can lead to growth of the curvature perturbation outside the horizon. This violation is characterized by a period with a large negative value of the second slow-roll parameter. At early times, inflation must satisfy the slow-roll approximation, so that the large-scale curvature perturbation can explain the cosmic microwave background fluctuations. At intermediate times, a theory that violates the slow-roll approximation is viable, which implies amplification of the curvature perturbation on small scales. Specifically, we consider ultraslow-roll inflation as the intermediate period. At late times, inflation should return to the slow-roll regime so that it can end. This means that there are two transitions of the second slow-roll parameter. In this paper, we compare two different possibilities for the second transition: sharp and smooth. Focusing on effects generated by the relevant cubic self-interaction of the curvature perturbation, we find that the bispectrum and the one-loop correction to the power spectrum due to the change of the second slow-roll parameter vanish if and only if the Mukhanov-Sasaki equation for the perturbation satisfies a specific condition called Wands duality. We also find, in the case of a sharp transition, that even though this duality is satisfied in the ultraslow-roll and slow-roll phases, it is severely violated at the transition, so that the resulting one-loop correction is extremely large, inversely proportional to the duration of the transition.
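For orientation, a brief sketch of the standard equations involved (conventional notation, not taken from the paper): in conformal time τ, with z² = 2a²ε₁M_Pl², the Mukhanov-Sasaki equation for the mode functions of the curvature perturbation, and the transformation underlying Wands duality, read

```latex
% Mukhanov-Sasaki equation for the mode functions v_k = z \mathcal{R}_k:
v_k'' + \left( k^2 - \frac{z''}{z} \right) v_k = 0 ,
\qquad z^2 \equiv 2 a^2 \epsilon_1 M_\mathrm{Pl}^2 .
% Wands duality: the transformation
\tilde{z}(\tau) = z(\tau) \left( c_1 + c_2 \int^{\tau} \frac{\mathrm{d}\tau'}{z^2(\tau')} \right)
% leaves z''/z, and hence the mode equation, invariant; slow-roll and
% ultraslow-roll backgrounds are related by a transformation of this form.
```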
We identify a new production channel for QCD axions in supernova environments that contributes to axion emissivity for all models solving the strong CP problem. This channel arises at tree-level from a shift-symmetry-breaking operator constructed at next-to-leading order in Chiral Perturbation Theory. In scenarios where model-dependent derivative couplings to nucleons are absent, this sets the strongest model-independent constraint on the axion mass, improving on existing bounds by two orders of magnitude.
Baryonification algorithms model the impact of galaxy formation and feedback on the matter field in gravity-only simulations by adopting physically motivated parametric prescriptions. In this paper, we extend these models to describe gas temperature and pressure, allowing for a self-consistent modelling of the thermal Sunyaev-Zel'dovich effect, weak gravitational lensing, and their cross-correlation, down to small scales. We validate our approach by showing that it can simultaneously reproduce the electron pressure, gas, stellar, and dark matter power spectra as measured in all BAHAMAS hydrodynamical simulations. Specifically, with only two additional free parameters, we can fit the electron-pressure auto- and cross-power spectra to within 10% while reproducing the baryon-induced suppression of the matter power spectrum at the per cent level, for the different active galactic nuclei (AGN) feedback strengths in BAHAMAS. Furthermore, we reproduce the BAHAMAS convergence and thermal Sunyaev-Zel'dovich angular power spectra within 1% and 10% accuracy, respectively, down to ℓ = 5000. When used jointly with cosmological rescaling algorithms, the baryonification presented here allows for a fast and accurate exploration of cosmological and astrophysical scenarios. It can therefore be employed to create mock catalogues, lightcones, and large training sets for emulators aimed at interpreting forthcoming multi-wavelength observations of the large-scale structure of the Universe.
Tentative observations of cosmic-ray antihelium by the AMS-02 collaboration have re-energized the quest to use antinuclei to search for physics beyond the standard model. However, our transition to a data-driven era requires more accurate models of the expected astrophysical antinuclei fluxes. We use a state-of-the-art cosmic-ray propagation model, fit to high-precision antiproton and cosmic-ray nuclei (B, Be, Li) data, to constrain the antinuclei flux from both astrophysical and dark matter annihilation models. We show that astrophysical sources are capable of producing
We investigate ultra-high frequency gravitational waves (GWs) from gravitons generated during inflationary reheating. Specifically, we study inflaton scattering with its decay product, where the couplings involved in this 2 → 2 scattering are the same as those in the 1 → 3 graviton Bremsstrahlung process. We compute the graviton production rate via such 2 → 2 scattering. Additionally, we compare the resulting GW spectrum with that from Bremsstrahlung as well as that from pure 2 → 2 inflaton scatterings. For completeness, the GW spectrum from graviton pair production through one-loop induced 1 → 2 inflaton decay is also analyzed. With a systematic comparison among the four sources of GWs, we find that 2 → 2 inflaton scattering with its decay product can dominate over Bremsstrahlung if the reheating temperature is larger than the inflaton mass. Pure inflaton 2 → 2 scattering is typically subdominant compared to Bremsstrahlung except in the high-frequency tail. The contribution from one-loop induced 1 → 2 inflaton decay is shown to be suppressed compared to Bremsstrahlung and pure inflaton 2 → 2 scattering.
We present a flow-based generative approach to emulate grids of stellar evolutionary models. By interpreting the input parameters and output properties of these models as multidimensional probability distributions, we train conditional normalizing flows to learn and predict the complex relationships between grid inputs and outputs in the form of conditional joint distributions. Leveraging the expressive power and versatility of these flows, we showcase their ability to emulate a variety of evolutionary tracks and isochrones across a continuous range of input parameters. In addition, we describe a simple Bayesian approach for estimating stellar parameters using these flows and demonstrate its application to asteroseismic data sets of red giants observed by the Kepler mission. By applying this approach to red giants in open clusters NGC 6791 and NGC 6819, we illustrate how large age uncertainties can arise when fitting only to global asteroseismic and spectroscopic parameters without prior information on initial helium abundances and mixing length parameter values. We also conduct inference using the flow at a large scale by determining revised estimates of masses and radii for 15,388 field red giants. These estimates show improved agreement with results from existing grid-based modeling, reveal distinct population-level features in the red clump, and suggest that the masses of Kepler red giants previously determined using the corrected asteroseismic scaling relations have been overestimated by 5%–10%.
The renormalization group equations for large-scale structure (RG-LSS) describe how the bias and stochastic (noise) parameters, both of matter and of biased tracers such as galaxies, evolve as a function of the cutoff Λ of the effective field theory. In previous work, we derived the RG-LSS equations for the bias parameters using the Wilson-Polchinski framework. Here, we extend these results to include stochastic contributions, corresponding to terms in the effective action that are of higher order in the current J. We derive the general local interaction terms that describe stochasticity at all orders in perturbations, and a closed set of nonlinear RG equations for their coefficients. These imply that a single nonlinear bias term generates all stochastic moments through RG evolution. Furthermore, the evolution is controlled by a different, lower scale than the nonlinear scale. This has implications for the optimal choice of the renormalization scale when comparing the theory with data to obtain cosmological constraints.
In prior studies, a very minimal Yukawa sector within the SO(10) Grand Unified Theory framework has been identified, comprising Higgs fields belonging to a real 10H, a real 120H, and a <inline-formula id="IEq1"><mml:math display="inline"><mml:msub><mml:mover accent="true"><mml:mn>126</mml:mn><mml:mo stretchy="true">¯</mml:mo></mml:mover><mml:mi>H</mml:mi></mml:msub></mml:math></inline-formula> representation. In this work, within this minimal framework, we have obtained fits to fermion masses and mixings while successfully reproducing the cosmological baryon asymmetry via leptogenesis. The right-handed neutrino (Ni) mass spectrum obtained from the fit is strongly hierarchical, suggesting that the B − L asymmetry is dominantly produced by N2 dynamics, while N1 is responsible for erasing the excess asymmetry. With this rather constrained Yukawa sector, fits are obtained for both the normal and inverted neutrino mass orderings, consistent with the leptonic CP-violating phase δCP indicated by global fits of neutrino-oscillation data, while also satisfying the current limits from neutrinoless double-beta-decay experiments. In particular, the leptonic CP-violating phase prefers the range δCP ≃ (230 – 300)°. We also show the consistency of the framework with gauge coupling unification and proton lifetime limits.
To fulfil its science requirements, the Ariel space mission[1] has been specifically designed to have a stable payload and satellite platform optimised to provide a broad, instantaneous wavelength coverage to detect many molecular species, probe the thermal structure, identify/characterize clouds, and monitor the stellar activity. The chosen wavelength range, from 0.5 to 7.8 µm, covers all the expected major atmospheric gases, e.g. H2O, CO2, CH4, NH3, HCN, and H2S, through to more exotic metallic compounds, such as TiO and VO, and condensed species. In the frame of the "Spectral Data and databases" working group, 50+ members of the Ariel science team and colleagues were invited to contribute to a White Paper entitled "Data availability and requirements relevant for the Ariel space mission and other exoplanet atmosphere applications"[2]. The goal of this 70-page work, submitted for publication to RASTI, is to provide a snapshot of the data availability and data needs, primarily for the Ariel space mission, but also for related atmospheric studies of exoplanets and brown dwarfs in general. It covers the following data-related topics: molecular and atomic line lists, line profiles, computed cross-sections and opacities, collision-induced absorption and other continuum data, optical properties of aerosols and surfaces, atmospheric chemistry, UV photodissociation and photoabsorption cross-sections, and standards in the description and format of such data. These data aspects are discussed by addressing the following questions for each topic, based on the experience of the "data-provider" and "data-user" communities: (1) what are the types and sources of currently available data, (2) what work is currently in progress, and (3) what are the current and anticipated data needs. 
Our aim is to provide practical information on existing sources of data, whether in databases, theoretical sources, or the literature. In addition, a project on the GitHub platform - github.com/Ariel-data - has been created to foster collaboration between the communities. As an open-access tool, GitHub offers the great advantage of enabling direct dialogue and becoming a go-to place for both data users and data providers, even for those who are not currently directly involved in the Ariel consortium or in the field of exoplanetary science in general. References: [1] G. Tinetti et al., ESA Definition Study Report (2020) - sci.esa.int/documents/34022/36216/Ariel_Definition_Study_Report_2020.pdf; [2] K.L. Chubb, S. Robert, C. Sousa-Silva, S.N. Yurchenko, et al., RAS Techniques and Instruments, submitted (2024) - arXiv:2404.02188.
In this work, we significantly enhance masked particle modeling (MPM), a self-supervised learning scheme for constructing highly expressive representations of unordered sets relevant to developing foundation models for high-energy physics. In MPM, a model is trained to recover the missing elements of a set, a learning objective that requires no labels and can be applied directly to experimental data. We achieve significant performance improvements over previous work on MPM by addressing inefficiencies in the implementation and incorporating a more powerful decoder. We compare several pre-training tasks and introduce new reconstruction methods that utilize conditional generative models without data tokenization or discretization. We show that these new methods outperform the tokenized learning objective from the original MPM on a new test bed for foundation models for jets, which includes using a wide variety of downstream tasks relevant to jet physics, such as classification, secondary vertex finding, and track identification.
The essence of the memory burden effect is that the load of information carried by a system stabilizes it. This universal effect is especially prominent in systems with a high capacity of information storage, such as black holes and other objects with maximal microstate degeneracy, entities universally referred to as "saturons." The phenomenon has several implications. The memory burden effect suppresses the further decay of a black hole, at the latest after it has emitted about half of its initial mass. As a consequence, light primordial black holes that were previously assumed to have fully evaporated are expected to persist and are viable dark matter candidates. In the present paper, we deepen the understanding of the memory burden effect. We first identify various memory burden regimes in generic Hamiltonian systems and then establish a precise correspondence in solitons and in black holes. We make transparent, at a microscopic level, the fundamental differences between stabilization by a quantum memory burden and stabilization by a long-range classical hair due to spin or electric charge. We identify certain new features of potential observational interest, such as the model-independent spread of the stabilized masses of initially degenerate primordial black holes.
We propose a simple fit function, <inline-formula><mml:math display="inline"><mml:msub><mml:mi>L</mml:mi><mml:msub><mml:mi>ν</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>=</mml:mo><mml:mi>C</mml:mi><mml:msup><mml:mi>t</mml:mi><mml:mrow><mml:mo>-</mml:mo><mml:mi>α</mml:mi></mml:mrow></mml:msup><mml:msup><mml:mi>e</mml:mi><mml:mrow><mml:mo>-</mml:mo><mml:mo stretchy="false">(</mml:mo><mml:mi>t</mml:mi><mml:mo>/</mml:mo><mml:mi>τ</mml:mi><mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mi>n</mml:mi></mml:msup></mml:mrow></mml:msup></mml:math></inline-formula>, to parametrize the luminosities of neutrinos and antineutrinos of all flavors during the protoneutron star (PNS) cooling phase at postbounce times <inline-formula><mml:math display="inline"><mml:mi>t</mml:mi><mml:mo>≳</mml:mo><mml:mn>1</mml:mn><mml:mtext> </mml:mtext><mml:mtext> </mml:mtext><mml:mi mathvariant="normal">s</mml:mi></mml:math></inline-formula>. This fit is based on results from a set of neutrino-hydrodynamics simulations of core-collapse supernovae in spherical symmetry. The simulations were performed with an energy-dependent transport for six neutrino species and took into account the effects of convection and muons in the dense and hot PNS interior. We provide values of the fit parameters <inline-formula><mml:math display="inline"><mml:mi>C</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math display="inline"><mml:mi>α</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math display="inline"><mml:mi>τ</mml:mi></mml:math></inline-formula>, and <inline-formula><mml:math display="inline"><mml:mi>n</mml:mi></mml:math></inline-formula> for different neutron star masses and equations of state as well as correlations between these fit parameters. 
Our functional description is useful for analytic supernova modeling, for characterizing the neutrino light curves in large underground neutrino detectors, and as a tool to extract information from measured signals on the mass and equation of state of the PNS and on secondary signal components on top of the PNS's neutrino emission.
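A minimal numerical sketch of the proposed fit function, with invented (not paper-fitted) parameter values chosen only to illustrate its shape:

```python
import numpy as np

def nu_luminosity(t, C, alpha, tau, n):
    """Evaluate L_nu(t) = C * t**(-alpha) * exp(-(t/tau)**n),
    the proposed fit to the PNS cooling-phase neutrino luminosity
    at postbounce times t >~ 1 s (t in seconds)."""
    t = np.asarray(t, dtype=float)
    return C * t**(-alpha) * np.exp(-(t / tau)**n)

# Illustrative values only; the paper tabulates C, alpha, tau, and n
# per neutron star mass and equation of state.
t = np.linspace(1.0, 10.0, 50)
L = nu_luminosity(t, C=1e52, alpha=1.0, tau=5.0, n=2.0)
```

The power law dominates at early times while the stretched exponential drives the late-time cutoff, so the luminosity declines monotonically over the cooling phase.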
Neutron stars provide a unique opportunity to study strongly interacting matter under extreme density conditions. The intricacies of matter inside neutron stars and their equation of state are not directly visible, but determine bulk properties, such as mass and radius, which affect the star's thermal X-ray emissions. However, the telescope spectra of these emissions are also affected by the stellar distance, hydrogen column, and effective surface temperature, which are not always well-constrained. Uncertainties on these nuisance parameters must be accounted for when making a robust estimation of the equation of state. In this study, we develop a novel methodology that, for the first time, can infer the full posterior distribution of both the equation of state and nuisance parameters directly from telescope observations. This method relies on the use of neural likelihood estimation, in which normalizing flows use samples of simulated telescope data to learn the likelihood of the neutron star spectra as a function of these parameters, coupled with Hamiltonian Monte Carlo methods to efficiently sample from the corresponding posterior distribution. Our approach surpasses the accuracy of previous methods, improves the interpretability of the results by providing access to the full posterior distribution, and naturally scales to a growing number of neutron star observations expected in the coming years.
Gravitational lensing by galaxy clusters involves hundreds of galaxies over a large redshift range and increases the likelihood of rare phenomena (supernovae, microlensing, dark substructures, etc.). Characterizing the mass and light distributions of foreground and background objects often requires a combination of high-resolution data and advanced modeling techniques. We present a detailed analysis of El Anzuelo, a prominent quintuply imaged dusty star-forming galaxy (z_s = 2.29), mainly lensed by three members of the massive galaxy cluster ACT-CL J0102–4915, also known as El Gordo (z_d = 0.87). We leverage JWST/NIRCam images, which contain lensing features unseen in previous HST images, using a Bayesian, multi-wavelength, differentiable, GPU-accelerated modeling framework that combines the HERCULENS (lens modeling) and NIFTY (field model and inference) software packages. For one of the deflectors, we complement the lensing constraints with stellar kinematics measured from VLT/MUSE data. In our lens model, we explicitly include the mass distribution of the cluster, locally corrected by a constant shear field. We find that the two main deflectors (L1 and L2) have logarithmic mass-density slopes steeper than isothermal, with γL1 = 2.23 ± 0.05 and γL2 = 2.21 ± 0.04. We argue that such steep density profiles can arise from tidally truncated mass distributions, which we probe thanks to the cluster lensing boost and the strong asymmetry of the lensing configuration. Moreover, our three-dimensional source model captures most of the surface brightness of the lensed galaxy, revealing a clump with a maximum diameter of 400 parsecs at the source redshift, visible at wavelengths λrest ≳ 0.6 µm. Finally, we caution against using point-like features within extended arcs to constrain galaxy-scale lens models before securing them with extended-arc modeling.
The joint probability distribution of matter overdensity and galaxy counts in cells is a powerful probe of cosmology, and the extent to which variance in galaxy counts at fixed matter density deviates from Poisson shot noise is not fully understood. The lack of informed bounds on this stochasticity is currently the limiting factor in constraining cosmology with the galaxy–matter probability distribution function (PDF). We investigate stochasticity in the conditional distribution of galaxy counts along lines of sight with fixed matter density, and we present a halo occupation distribution (HOD)-based approach for obtaining plausible ranges for stochasticity parameters. To probe the high-dimensional space of possible galaxy–matter connections, we derive a set of HODs that conserve the galaxies' linear bias and number density to produce REDMAGIC-like galaxy catalogs within the ABACUSSUMMIT suite of N-body simulations. We study the impact of individual HOD parameters and cosmology on stochasticity and perform a Monte Carlo search in HOD parameter space subject to the constraints on bias and density. In mock catalogs generated by the selected HODs, shot noise in galaxy counts spans both sub-Poisson and super-Poisson values, ranging from 80% to 133% of Poisson variance for cells with mean matter density. Nearly all of the derived HODs show a positive relationship between local matter density and stochasticity. For galaxy catalogs with higher stochasticity, modeling galaxy bias to second order is required for an accurate description of the conditional PDF of galaxy counts at fixed matter density. The presence of galaxy assembly bias also substantially extends the range of stochasticity in the super-Poisson direction. 
This HOD-based approach leverages degrees of freedom in the galaxy–halo connection to obtain informed bounds on nuisance model parameters and can be adapted to study other parametrizations of shot noise in galaxy counts, in particular to motivate prior ranges on stochasticity for cosmological analyses.
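As a toy illustration of Poisson versus super-Poisson shot noise (not the paper's HOD machinery), one can draw cell counts from a Poisson distribution and from a negative binomial, a common super-Poisson parametrization; all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
mean_counts = 20.0   # toy mean galaxy count per cell at fixed matter density
n_cells = 200_000

# Poisson shot noise: variance equals the mean (variance/mean = 1).
poisson_counts = rng.poisson(mean_counts, size=n_cells)

# Super-Poisson noise via a negative binomial with dispersion r:
# variance = mean + mean**2 / r, i.e. variance/mean = 1.2 for these values.
r = 100.0
p = r / (r + mean_counts)
super_counts = rng.negative_binomial(r, p, size=n_cells)

poisson_ratio = poisson_counts.var() / poisson_counts.mean()
super_ratio = super_counts.var() / super_counts.mean()
```

In the paper's terms, mock catalogs spanning 80% to 133% of Poisson variance correspond to variance/mean ratios between 0.8 and 1.33 in cells at mean matter density.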
A newfound interest has been seen in narrowband galaxy surveys as a promising method for achieving the necessary accuracy on the photometric redshift estimates of individual galaxies for next-generation stage IV cosmological surveys. One key advantage is the ability to provide higher spectral resolution information on galaxies, which ought to allow for a more accurate and precise estimation of the stellar population properties of galaxies. However, the impact of adding narrowband photometry on the stellar population property estimates is largely unexplored. The scope of this work is two-fold: 1) we leverage the predictive power of broadband and narrowband data to infer galaxy physical properties, such as stellar masses, ages, star formation rates, and metallicities; and 2) we evaluate the improvement in performance in estimating galaxy properties when we use narrowband instead of broadband data. In this work, we measured the stellar population properties of a sample of galaxies in the COSMOS field for which both narrowband and broadband data are available. In particular, we employed narrowband data from the Physics of the Accelerating Universe Survey (PAUS) and broadband data from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS). We used two different spectral energy distribution (SED) fitting codes to measure galaxy properties, namely CIGALE and PROSPECTOR. We find that the increased spectral resolution of narrowband photometry does not yield a substantial improvement in constraining the galaxy properties using SED fitting. Nonetheless, we find that we are able to obtain a more diverse distribution of metallicities and dust optical depths with CIGALE when employing the narrowband data. The effect is not as prominent as expected, which we attribute to the low narrowband signal-to-noise ratio (S/N) of the majority of the sampled galaxies, the respective drawbacks of both codes, and the restriction of coverage to the optical regime. 
The measured properties are compared to those reported in the COSMOS2020 catalogue, showing a good agreement. We have released the catalogue of measured properties in tandem with this work.
For cellular functions like division and polarization, protein pattern formation driven by NTPase cycles is a central spatial control strategy. Because these patterns operate far from equilibrium, no general theory links microscopic reaction networks and parameters to the pattern type and dynamics. We discover a generic mechanism giving rise to an effective interfacial tension that organizes the macroscopic structure of non-equilibrium steady-state patterns. Namely, maintaining protein-density interfaces by cyclic protein attachment and detachment produces curvature-dependent protein redistribution, which straightens the interface. We develop a non-equilibrium Neumann angle law and Plateau vertex conditions for interface junctions and mesh patterns, thus introducing the concepts of ``Turing mixtures'' and ``Turing foams''. In contrast to liquid foams and mixtures, these non-equilibrium patterns can select an intrinsic wavelength by interrupting an equilibrium-like coarsening process. Data from in vitro experiments with the E. coli Min protein system verify the vertex conditions and support the wavelength dynamics. Our study uncovers interface laws with correspondence to thermodynamic relations that arise from distinct physical processes in active systems. It allows the design of specific pattern morphologies with potential applications as spatial control strategies in synthetic cells.
In dense neutrino environments, such as those provided by core-collapse supernovae or neutron-star mergers, neutrino angular distributions may be unstable to collective flavor conversions, whose outcome remains to be fully understood. These conversions are much faster than hydrodynamical scales, suggesting that self-consistent configurations may never be strongly unstable. With this motivation in mind, we study weakly unstable modes, i.e., those with small growth rates. We show that our newly developed dispersion relation (Paper~I of this series) allows for an expansion in powers of the small growth rate. For weakly unstable distributions, we show that the unstable modes must either move with subluminal phase velocity, or very close to the speed of light. The instability is fed by neutrinos moving resonantly with the waves, allowing us to derive explicit expressions for the growth rate. For axisymmetric distributions, often assumed in the literature, numerical examples show the accuracy of these expressions. We also note that for the often-studied one-dimensional systems one should not forget the axial-symmetry-breaking modes, and we provide explicit expressions for the range of wavenumbers that exhibit instabilities.
We propose masked particle modeling (MPM) as a self-supervised method for learning generic, transferable, and reusable representations on unordered sets of inputs for use in high energy physics (HEP) scientific data. This work provides a novel scheme to perform masked-modeling-based pre-training to learn permutation-invariant functions on sets. More generally, this work provides a step towards building large foundation models for HEP that can be generically pre-trained with self-supervised learning and later fine-tuned for a variety of downstream tasks. In MPM, particles in a set are masked and the training objective is to recover their identity, as defined by a discretized token representation of a pre-trained vector quantized variational autoencoder. We study the efficacy of the method in samples of high energy jets at collider physics experiments, including studies on the impact of discretization, permutation invariance, and ordering. We also study the fine-tuning capability of the model, showing that it can be adapted to tasks such as supervised and weakly supervised jet classification, and that the model can transfer efficiently with small fine-tuning data sets to new classes and new data domains.
We present magnetohydrodynamic simulations of star formation in the multiphase interstellar medium to quantify the impact of non-ionising far-ultraviolet (FUV) radiation. This study is carried out within the framework of the \textsc{Silcc Project}. It incorporates the radiative transfer of ionising radiation and self-consistent modelling of variable FUV radiation from star clusters. Near young star clusters, the interstellar radiation field (ISRF) can reach values of $G_0 \approx 10^4$ (in Habing units), far exceeding the canonical solar neighbourhood value of $G_0 = 1.7$. However, our findings suggest that FUV radiation has minimal impact on the integrated star formation rate compared to other feedback mechanisms such as ionising radiation, stellar winds, and supernovae. Only a slight decrease in star formation burstiness, related to increased photoelectric heating efficiency by the variable FUV radiation field, is detectable. Dust near star-forming regions can be heated up to 60 K via the photoelectric (PE) effect, showing a broad temperature distribution. PE heating rates for variable FUV radiation models show higher peak intensities but lower average heating rates than static ISRF models. Simulations of solar neighbourhood conditions without stellar winds or ionising radiation but with self-consistent ISRF and supernovae show high star formation rates $\sim10^{-1}\,\mathrm{M_\odot\,yr^{-1}\,kpc^{-2}}$, contradicting expectations. Our chemical analysis reveals increased cold neutral medium volume-filling factors (VFF) outside the vicinity of stellar clusters with a variable ISRF. Simultaneously, the thermally unstable gas is reduced, and a sharper separation of warm and cold gas phases is observed. The variable FUV field also promotes a diffuse molecular gas phase with VFF of $\sim5-10$~per cent.
Time-delay cosmography is a powerful technique to constrain cosmological parameters, particularly the Hubble constant (H0). The TDCOSMO Collaboration is performing an ongoing analysis of lensed quasars to constrain cosmology using this method. In this work, we obtain constraints from the lensed quasar WGD 2038-4008 using new time-delay measurements and previous mass models by TDCOSMO. This is the first TDCOSMO lens to incorporate multiple lens modeling codes and the full time-delay covariance matrix into the cosmological inference. The models are fixed before the time delay is measured, and the analysis is performed blinded with respect to the cosmological parameters to prevent unconscious experimenter bias. We obtain $D_{\Delta t} = 1.68^{+0.40}_{-0.38}$ Gpc using two families of mass models, a power-law describing the total mass distribution, and a composite model of baryons and dark matter, although the composite model is disfavored due to kinematics constraints. In a flat ΛCDM cosmology, we constrain the Hubble constant to be $H_0 = 65^{+23}_{-14}$ km s$^{-1}$ Mpc$^{-1}$. The dominant source of uncertainty comes from the time delays, due to the low variability of the quasar. Future long-term monitoring, especially in the era of the Vera C. Rubin Observatory's Legacy Survey of Space and Time, could catch stronger quasar variability and further reduce the uncertainties. This system will be incorporated into an upcoming hierarchical analysis of the entire TDCOSMO sample, and improved time delays and spatially-resolved stellar kinematics could strengthen the constraints from this system in the future.
Chemo-mechanical waves on active deformable surfaces are a key component for many vital cellular functions. In particular, these waves play a major role in force generation and long-range signal transmission in cells that dynamically change shape, as encountered during cell division or morphogenesis. Reconstituting and controlling such chemically controlled cell deformations is a crucial but unsolved challenge for the development of synthetic cells. Here, we develop an optogenetic method to elucidate the mechanism responsible for coordinating surface contraction waves that occur in oocytes of the starfish Patiria miniata during meiotic cell division. Using spatiotemporally-patterned light stimuli as a control input, we create chemo-mechanical cortical excitations that are decoupled from meiotic cues and drive diverse shape deformations ranging from local pinching to surface contraction waves and cell lysis. We develop a quantitative model that captures the hierarchy of chemical and mechanical dynamics, which allows us to relate the variety of mechanical responses to optogenetic stimuli. Our framework systematically predicts and explains transitions of programmed shape dynamics. Finally, we qualitatively map the observed shape dynamics to elucidate how the versatility of intracellular protein dynamics can give rise to a broad range of mechanical phenomenologies. More broadly, our results pave the way toward real-time control over dynamical deformations in living organisms and can advance the design of synthetic cells and life-like cellular functions.
The self-organization of proteins into enriched compartments and the formation of complex patterns are crucial processes for life on the cellular level. Liquid-liquid phase separation is one mechanism for forming such enriched compartments. When phase-separating proteins are bound to a membrane and locally disturb it, the mechanical response of the membrane mediates interactions between these proteins. How these membrane-mediated interactions influence the steady state of the protein density distribution is thus an important question to investigate in order to understand the rich diversity of protein and membrane-shape patterns present at the cellular level. This work starts with a widely used model for membrane-bound phase-separating proteins. We numerically solve our system to map out its phase space and perform a careful, systematic expansion of the model equations to characterize the phase transitions through linear stability analysis and free energy arguments. We observe that the membrane-mediated interactions, due to their long-range nature, are capable of qualitatively altering the equilibrium state of the proteins. This leads to arrested coarsening and length-scale selection instead of simple demixing and complete coarsening. In this study, we unambiguously show that long-range membrane-mediated interactions lead to pattern formation in a system that otherwise would not do so. This work provides a basis for further systematic study of membrane-bound pattern-forming systems.
Water-based Liquid Scintillator (WbLS) is a novel detector medium for particle physics experiments. Applications range from the use as a hybrid Cherenkov/scintillation target in low-energy and accelerator neutrino experiments to large-volume neutron vetoes for dark matter detectors. Here we present a WbLS based on well-known components (the surfactant Triton-X, the fluor PPO, and vitamin C for long-term stability), for which we developed a new recipe and subjected the result to a thorough characterization of its properties. In addition, based on neutron scattering data, we demonstrate that the pulse shape discrimination capabilities of this particular LS are comparable to those of all-organic LAB-based scintillators.
The origin of atmospheric heating in the cool, magnetic white dwarf GD 356 remains unsolved nearly 40 years after its discovery. This once idiosyncratic star, with Teff ≈ 7500 K yet Balmer lines in Zeeman-split emission, is now part of a growing class of white dwarfs exhibiting similar features, which are tightly clustered in the HR diagram, suggesting an intrinsic power source. This paper proposes that convective motions associated with an internal dynamo can power electric currents along magnetic field lines that heat the atmosphere via Ohmic dissipation. Such currents would require a dynamo driven by core 22Ne distillation, and would further corroborate magnetic field generation in white dwarfs by this process. The model predicts that the heating will be highest near the magnetic poles and virtually absent toward the equator, in agreement with observations. This picture is also consistent with the absence of X-ray or extreme ultraviolet emission, because the resistivity would decrease by several orders of magnitude at typical coronal temperatures. The proposed model suggests that i) DAHe stars are mergers with enhanced 22Ne that enables distillation and may result in significant cooling delays; and ii) any mergers that distill neon will generate magnetism and chromospheres. The predicted chromospheric emission is consistent with the two known massive DQe white dwarfs.
One of the most promising approaches for the next generation of neutrino experiments is the realization of large hybrid Cherenkov/scintillation detectors made possible by recent innovations in photodetection technology and liquid scintillator chemistry. The development of a potentially suitable future detector liquid with particularly slow light emission is discussed in the present publication. This cocktail is compared with respect to its fundamental characteristics (scintillation efficiency, transparency, and time profile of light emission) with liquid scintillators currently used in large-scale neutrino detectors. In addition, the optimization of the admixture of wavelength shifters for a scintillator with particularly high light emission is presented. Furthermore, the pulse-shape discrimination capabilities of the novel medium were studied using a pulsed particle accelerator driven neutron source. Beyond that, purification methods based on column chromatography and fractional vacuum distillation for the co-solvent DIN (Diisopropylnaphthalene) are discussed.
Context. Dark matter (DM) halos can be subject to gravothermal collapse if the DM is not collisionless, but engaged in strong self-interactions instead. When the scattering is able to efficiently transfer heat from the centre to the outskirts, the central region of the halo collapses and reaches densities much higher than those for collisionless DM. This phenomenon is potentially observable in studies of strong lensing. Current theoretical efforts are motivated by observations of surprisingly dense substructures. However, a comparison with observations requires accurate predictions. One method to obtain such predictions is to use N-body simulations. Collapsed halos are extreme systems that pose severe challenges when applying state-of-the-art codes to model self-interacting dark matter (SIDM). Aims. In this work, we investigate the root of such problems, with a focus on energy non-conservation. Moreover, we discuss possible strategies to avoid them. Methods. We ran N-body simulations, both with and without SIDM, of an isolated DM-only halo and we adjusted the numerical parameters to check the accuracy of the simulation. Results. We find that not only the numerical scheme for SIDM, but also the modelling of the gravitational interaction and the time integration, can lead to energy non-conservation. The main issues we find are: (a) particles changing their time step in a non-time-reversible manner; (b) the asymmetry in the tree-based gravitational force evaluation; and (c) SIDM velocity kicks breaking the time symmetry. Conclusions. Tuning the parameters of the simulation to achieve a high level of accuracy allows us to conserve energy not only at early stages of the evolution, but also later on. However, the cost of the simulations becomes prohibitively large as a result. Some of the problems that make the simulations of the gravothermal collapse phase inaccurate can be overcome by choosing appropriate numerical schemes. 
However, other issues still pose a challenge. Our findings motivate further work on addressing the challenges of simulating strong DM self-interactions.
Cryogenic scintillating calorimeters are ultra-sensitive particle detectors for rare event searches, particularly for the search for dark matter and the measurement of neutrino properties. These detectors are made from scintillating target crystals generating two signals for each particle interaction. The phonon (heat) signal precisely measures the deposited energy independent of the type of interacting particle. The scintillation light signal yields particle discrimination on an event-by-event basis. This paper presents a likelihood framework modeling backgrounds and a potential dark matter signal in the two-dimensional plane spanned by phonon and scintillation light energies. We apply the framework to data from CaWO<inline-formula id="IEq1"><mml:math><mml:mmultiscripts><mml:mrow></mml:mrow><mml:mn>4</mml:mn><mml:mrow></mml:mrow></mml:mmultiscripts></mml:math></inline-formula>-based detectors operated in the CRESST dark matter search. For the first time, a single likelihood framework is used in CRESST to model the data and extract results on dark matter in one step by using a profile likelihood ratio test. Our framework simultaneously fits (neutron) calibration data and physics (background) data and allows combining data from multiple detectors. Although tailored to CaWO<inline-formula id="IEq2"><mml:math><mml:mmultiscripts><mml:mrow></mml:mrow><mml:mn>4</mml:mn><mml:mrow></mml:mrow></mml:mmultiscripts></mml:math></inline-formula>-targets and the CRESST experiment, the framework can easily be expanded to other materials and experiments using scintillating cryogenic calorimeters for dark matter search and neutrino physics.
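The structure of such a two-dimensional likelihood can be illustrated with a minimal sketch (the component shapes, parameter values, and function names here are hypothetical illustrations, not the CRESST framework itself): an unbinned likelihood over the phonon-energy/light-energy plane built from a signal-plus-background mixture, whose signal fraction could then be scanned in a profile likelihood ratio test.

```python
import numpy as np

def gauss2d(x, y, mx, my, sx, sy):
    """Uncorrelated 2D Gaussian density in the (phonon, light) plane."""
    return np.exp(-0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2)) \
        / (2.0 * np.pi * sx * sy)

def neg_log_like(events, f_sig, sig_pars, bkg_pars):
    """Unbinned negative log-likelihood of a signal+background mixture.

    events: (n, 2) array of (phonon energy, light energy) pairs;
    f_sig: signal fraction; *_pars: (mean_x, mean_y, sigma_x, sigma_y).
    """
    x, y = events[:, 0], events[:, 1]
    dens = f_sig * gauss2d(x, y, *sig_pars) \
        + (1.0 - f_sig) * gauss2d(x, y, *bkg_pars)
    return -np.sum(np.log(dens))

# toy data: background-like (electron-recoil band) events only
rng = np.random.default_rng(1)
bkg_pars = (5.0, 1.0, 1.0, 0.1)   # hypothetical background band
sig_pars = (2.0, 0.3, 0.5, 0.05)  # hypothetical nuclear-recoil signal
events = np.column_stack([rng.normal(5.0, 1.0, 500),
                          rng.normal(1.0, 0.1, 500)])

# the background-only hypothesis should fit this toy data set better
nll_bkg_only = neg_log_like(events, 0.0, sig_pars, bkg_pars)
nll_mostly_sig = neg_log_like(events, 0.9, sig_pars, bkg_pars)
```

In a real analysis the densities would be the fitted calibration and background models, and the test statistic would be the likelihood ratio between the best-fit and background-only hypotheses.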
Many advances in astronomy and astrophysics originate from accurate images of the sky emission across multiple wavelengths. This often requires reconstructing spatially and spectrally correlated signals detected from multiple instruments. To facilitate the high-fidelity imaging of these signals, we introduce the universal Bayesian imaging kit (UBIK). Specifically, we present J-UBIK, a flexible and modular implementation leveraging the JAX-accelerated NIFTy.re software as its backend. J-UBIK streamlines the implementation of the key Bayesian inference components, providing all the necessary steps of Bayesian imaging pipelines. First, it provides adaptable prior models for different sky realizations. Second, it includes likelihood models tailored to specific instruments. So far, the package includes three instruments: Chandra and eROSITA for X-ray observations, and the James Webb Space Telescope (JWST) for the near- and mid-infrared. The aim is to expand this set in the future. Third, these models can be integrated with various inference and optimization schemes, such as maximum a posteriori estimation and variational inference. Explicit demos show how to integrate the individual modules into a full analysis pipeline. Overall, J-UBIK enables efficient generation of high-fidelity images via Bayesian pipelines that can be tailored to specific research objectives.
We present new VLT/MUSE observations of the Hubble Frontier Field (HFF) galaxy cluster MACS J1149.5+2223, lensing the well-known supernova "Refsdal" into multiple images, which has enabled the first cosmological applications with a strongly lensed supernova. Thanks to these data, targeting a northern region of the cluster and thus complementing our previous MUSE program on the cluster core, we have released a new catalog containing 162 secure spectroscopic redshifts. We confirmed 22 cluster members, which had previously been only photometrically selected, and detected ten additional ones, resulting in a total of 308 secure members, of which 63% are spectroscopically confirmed. We further identified 17 new spectroscopic multiple images belonging to six different background sources. By exploiting these new and our previously published MUSE data, in combination with the deep HFF images, we developed an improved total mass model of MACS J1149.5+2223. This model includes 308 total mass components for the member galaxies and requires four additional mass profiles, one of which is associated with a cluster galaxy overdensity identified in the north, representing the dark matter mass distribution on larger scales. The values of the resulting 34 free parameters are optimized based on the observed positions of 106 multiple images from 34 different families, which cover an extended redshift range between 1.240 and 5.983. Our final model has a multiple image position root mean square value of 0.39″, which is in good agreement with other cluster lens models based on a similar number of multiple images. With this refined mass model, we have paved the way toward improved strong-lensing analyses that will exploit the deep and high-resolution observations with HST and JWST at the pixel level in the region of the supernova Refsdal host. 
This will increase the number of observables by around two orders of magnitude, thus offering the opportunity to carry out more precise and accurate cosmographic measurements in the future. ⋆ This work is based in large part on data collected at ESO VLT (prog.IDs 294.A-5032 and 105.20P5.001) and NASA HST.
The hypergeometric amplitude is a one-parameter deformation of the Veneziano amplitude for four-point tachyon scattering in bosonic string theory that is consistent with $S$-matrix bootstrap constraints. In this article we construct a similar hypergeometric generalization of the Veneziano amplitude for type-I superstring theory. We then rule out a large region of the $(r,m^2,D)$ parameter space as non-unitary, and establish another large subset of the $(r, m^2, D)$ parameter space where all partial wave coefficients are positive. We also analyze positivity in various limits and special cases. As a corollary to our analysis, we are able to directly demonstrate positivity of a wider set of Veneziano amplitude partial wave coefficients than what has been presented elsewhere.
The inference of astrophysical and cosmological properties from the Lyman-α forest conventionally relies on summary statistics of the transmission field that carry useful but limited information. We present a deep learning framework for inference from the Lyman-α forest at the field level. This framework consists of a 1D residual convolutional neural network (ResNet) that extracts spectral features and performs regression on thermal parameters of the intergalactic medium that characterize the power-law temperature-density relation. We trained this supervised machinery using a large set of mock absorption spectra from NYX hydrodynamic simulations at z = 2.2 with a range of thermal parameter combinations (labels). We employed Bayesian optimization to find an optimal set of hyperparameters for our network, and then employed a committee of 20 neural networks for increased statistical robustness of the network inference. In addition to the parameter point predictions, our machine also provides a self-consistent estimate of their covariance matrix, with which we constructed a pipeline for inferring the posterior distribution of the parameters. We compared the results of our framework with the traditional summary-based approach, namely the power spectrum and the probability density function (PDF) of transmission, in terms of the area of the 68% credibility regions as our figure of merit (FoM). In our study of the information content of perfect (noise- and systematics-free) Lyα forest spectral datasets, we find a significant tightening of the posterior constraints – factors of 10.92 and 3.30 in FoM over the power spectrum only and jointly with PDF, respectively – which is the consequence of recovering the relevant parts of information that are not carried by the classical summary statistics.
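As an illustration of the committee step described above, the following sketch (with hypothetical names and numbers, not the paper's actual pipeline) aggregates point predictions from an ensemble of trained networks into a committee mean and an empirical covariance over the thermal parameters:

```python
import numpy as np

def committee_inference(predictions):
    """Aggregate parameter predictions from a committee of networks.

    predictions: array of shape (n_networks, n_params), one point
    prediction of the thermal parameters per network. Returns the
    committee mean and the empirical covariance across networks,
    a simple proxy for the prediction scatter of the ensemble.
    """
    predictions = np.asarray(predictions, dtype=float)
    mean = predictions.mean(axis=0)
    cov = np.cov(predictions, rowvar=False)  # columns are parameters
    return mean, cov

# toy example: 20 networks predicting two thermal parameters,
# e.g. a temperature at mean density and a power-law slope
rng = np.random.default_rng(0)
preds = rng.normal(loc=[1.0e4, 1.5], scale=[300.0, 0.05], size=(20, 2))
mean, cov = committee_inference(preds)
```

A Gaussian with this mean and covariance could then serve as the likelihood kernel in a posterior-inference pipeline, which is conceptually how point predictions plus a covariance estimate feed into credibility regions.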
Auto-chemotaxis, the directed movement of cells along gradients in chemicals they secrete, is central to the formation of complex spatiotemporal patterns in biological systems. Since the introduction of the Keller-Segel model, numerous variants have been analyzed, revealing phenomena such as coarsening of aggregates, stable aggregate sizes, and spatiotemporally chaotic dynamics. Here, we consider general mass-conserving Keller-Segel models, that is, models without cell growth and death, and analyze the generic long-time dynamics of the chemotactic aggregates. Building on and extending our previous work, which demonstrated that chemotactic aggregation can be understood through a generalized Maxwell construction balancing density fluxes and reactive turnover, we use singular perturbation theory to derive the full rates of mass competition between well-separated aggregates. We analyze how this mass-competition process drives coarsening in both diffusion- and reaction-limited regimes, with the diffusion-limited rate aligning with our previous quasi-steady-state analyses. Our results generalize earlier mathematical findings, demonstrating that coarsening is driven by self-amplifying mass transport and aggregate coalescence. Additionally, we provide a linear stability analysis of the lateral instability, predicting it through a nullcline-slope criterion similar to the curvature criterion in spinodal decomposition. Overall, our findings suggest that chemotactic aggregates behave similarly to phase-separating droplets, providing a robust framework for understanding the coarse-grained dynamics of auto-chemotactic cell populations and providing a basis for the analysis of more complex multi-species chemotactic systems.
Context. Observational evidence has accumulated in recent years, showing that the Galactic bulge includes two populations, a metal-poor one and a metal-rich one, which in addition to having different metallicities show different alpha over iron abundances, spatial distribution, and kinematics. While the metal-rich, barred component has been fairly well characterized, the metal-poor, spheroidal component has been more elusive and harder to describe. RR Lyrae variables are clean tracers of the old bulge component, and they are, on average, more metal-poor than red clump stars. Aims. In the present paper, we provide a new catalog of 16488 ab-type RR Lyrae variables in the bulge region within |l|≲10° and |b|≲2.8°, extracted from multi-epoch Point Spread Function photometry performed on VISTA Variables in the Vía Láctea (VVV) survey data. We used the catalog to constrain the shape of the old, metal-poor, bulge stellar population. Methods. The identification of ab-type RR Lyrae among a large sample of candidate variables of different types has been performed via a combination of a Random Forest classifier and visual inspection. We optimized this process in such a way as to extract a clean catalog with high purity, although for this reason its completeness close to the midplane is lower compared to a few other near-infrared catalogs covering the same region of the sky. Results. We used the present catalog to derive the shape of the RR Lyrae distribution around the Galactic center, resulting in an elongated spheroid with a projected axis ratio of b/a~0.7 and an inclination angle of ϕ~20 degrees. We discuss how observational biases, such as errors on the distances and a nonuniform sampling in longitude, affect both the present measurements and previous ones, especially those based on red clump stars. Because the latter have not been taken into account before, we refrain from a quantitative comparison between these shape parameters and those derived for the main Galactic bar. 
Nonetheless, qualitatively, taking into account observational biases would lower the estimated ellipticity of the bar derived from red clump stars, and hence reduce the difference with the present results. Conclusions. We publish a high-purity RRab sample for future studies of the oldest Galactic bulge population, close to the midplane. We explore different choices for the period-luminosity-metallicity relation, highlighting how some of them introduce spurious trends of distance with either the period or the metallicity, or both. We provide evidence that the RRab stars trace a structure that is less elongated than the main bar, though we also highlight some biases of these kinds of studies not discussed before. ★Based on observations taken within the ESO VISTA Public Survey VVV, Program ID 179.B-2002.
Extragalactic and galactic cosmic rays scatter with the cosmic neutrino background during propagation to Earth, yielding a flux of relic neutrinos boosted to larger energies. If an overdensity of relic neutrinos is present in galaxies, and neutrinos are massive enough, this flux might be detectable by high-energy neutrino experiments. For a lightest neutrino of mass <inline-formula><mml:math display="inline"><mml:msub><mml:mi>m</mml:mi><mml:mi>ν</mml:mi></mml:msub><mml:mo>∼</mml:mo><mml:mn>0.1</mml:mn><mml:mtext> </mml:mtext><mml:mtext> </mml:mtext><mml:mi>eV</mml:mi></mml:math></inline-formula>, we find an upper limit on the local relic neutrino overdensity of <inline-formula><mml:math display="inline"><mml:mo>∼</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mn>13</mml:mn></mml:msup></mml:math></inline-formula> and an upper limit on the relic neutrino overdensity at TXS <inline-formula><mml:math display="inline"><mml:mrow><mml:mn>0506</mml:mn><mml:mo>+</mml:mo><mml:mn>056</mml:mn></mml:mrow></mml:math></inline-formula> of <inline-formula><mml:math display="inline"><mml:mo>∼</mml:mo><mml:msup><mml:mn>10</mml:mn><mml:mn>10</mml:mn></mml:msup></mml:math></inline-formula>. Future experiments like GRAND or IceCube-Gen2 could improve these bounds by orders of magnitude.
It is well established that maximizing the information extracted from upcoming and ongoing stage-IV weak-lensing surveys requires higher order summary statistics that complement the standard two-point statistics. In this work, we focus on weak-lensing peak statistics to test two popular modified gravity models, <inline-formula><tex-math id="TM0001" notation="LaTeX">$f(R)$</tex-math></inline-formula> and nDGP, using the FORGE and BRIDGE weak-lensing simulations, respectively. From these simulations, we measure the peak statistics as a function of both cosmological and modified gravity parameters simultaneously. Our findings indicate that the peak abundance is sensitive to the strength of modified gravity, while the peak two-point correlation function is sensitive to the nature of the screening mechanism in a modified gravity model. We combine these simulated statistics with a Gaussian Process Regression emulator and a Gaussian likelihood to generate stage-IV forecast posterior distributions for the modified gravity models. We demonstrate that, assuming small scales can be correctly modelled, peak statistics can be used to distinguish general relativity from <inline-formula><tex-math id="TM0002" notation="LaTeX">$f(R)$</tex-math></inline-formula> and nDGP models at the 2σ level with a stage-IV survey area of <inline-formula><tex-math id="TM0003" notation="LaTeX">$300$</tex-math></inline-formula> and <inline-formula><tex-math id="TM0004" notation="LaTeX">$1000 \, \rm {deg}^2$</tex-math></inline-formula>, respectively. Finally, we show that peak statistics can constrain <inline-formula><tex-math id="TM0005" notation="LaTeX">$\log _{10}\left(|f_{R0}|\right) = -6$</tex-math></inline-formula> to 2 per cent precision, and <inline-formula><tex-math id="TM0006" notation="LaTeX">$\log _{10}(H_0 r_c) = 0.5$</tex-math></inline-formula> to 25 per cent precision.
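The emulator-plus-likelihood forecasting step can be sketched as follows (a toy illustration under stated assumptions: a simple RBF-kernel Gaussian process interpolating a stand-in statistic over hypothetical parameter nodes, not the actual FORGE/BRIDGE emulator or its training data):

```python
import numpy as np

def rbf_kernel(x1, x2, length=0.5, amp=1.0):
    """Squared-exponential covariance between two 1D node sets."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

def gp_emulate(x_train, y_train, x_test, jitter=1e-6):
    """GP mean prediction: interpolate the simulated statistic."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

def gaussian_loglike(data, model, cov):
    """Gaussian log-likelihood of the data given the emulated model."""
    r = data - model
    return -0.5 * r @ np.linalg.solve(cov, r)

# toy: emulate a "peak statistic" vs. a gravity parameter, with
# hypothetical training nodes spanning e.g. a log10|fR0|-like range
x_train = np.linspace(-6.5, -4.5, 9)
y_train = np.sin(x_train) + 0.1 * x_train   # stand-in simulated statistic
x_test = np.array([-5.4])
y_emulated = gp_emulate(x_train, y_train, x_test)

# evaluate the likelihood of a mock measurement at the test point
ll = gaussian_loglike(np.array([0.23]), y_emulated, 0.01 * np.eye(1))
```

In a forecast, this likelihood would be evaluated over a grid or MCMC chain in the cosmological and modified-gravity parameters, with the emulator replacing direct (expensive) simulation calls.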
This research introduces an innovative application of physics-informed neural networks (PINNs) to tackle the intricate challenges of radiative transfer (RT) modelling in exoplanetary atmospheres, with a special focus on efficiently handling scattering phenomena. Traditional RT models often simplify scattering as absorption, leading to inaccuracies. Our approach utilizes PINNs, noted for their ability to incorporate the governing differential equations of RT directly into their loss function, thus offering a more precise yet potentially fast modelling technique. The core of our method involves the development of a parametrized PINN tailored for a modified RT equation, enhancing its adaptability to various atmospheric scenarios. We focus on RT in transiting exoplanet atmospheres using a simplified 1D isothermal model with pressure-dependent coefficients for absorption and Rayleigh scattering. In scenarios of pure absorption, the PINN demonstrates its effectiveness in predicting transmission spectra for diverse absorption profiles. For Rayleigh scattering, the network successfully computes the RT equation, addressing both direct and diffuse stellar light components. While our preliminary results with simplified models are promising, indicating the potential of PINNs in improving RT calculations, we acknowledge the errors stemming from our approximations as well as the challenges in applying this technique to more complex atmospheric conditions. Specifically, extending our approach to atmospheres with intricate temperature-pressure profiles and varying scattering properties, such as those introduced by clouds and hazes, remains a significant area for future development.
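The physics-informed loss described above can be written schematically for a one-dimensional radiative-transfer equation. This is an illustrative form only, not the exact modified RT equation used in the work: the network output is denoted $I_\theta$, and the residual of $\mu\,\mathrm{d}I/\mathrm{d}\tau = I - S$ is penalised at collocation points together with the boundary condition.

```latex
% Schematic composite PINN loss (illustrative, with assumed source S and boundary value I_0):
\begin{equation}
  \mathcal{L}(\theta) =
  \underbrace{\frac{1}{N}\sum_{i=1}^{N}
    \Big[\mu\,\partial_\tau I_\theta(\tau_i) - I_\theta(\tau_i) + S(\tau_i)\Big]^2}_{\text{PDE residual}}
  \; + \;
  \lambda \underbrace{\big[I_\theta(0) - I_0\big]^2}_{\text{boundary term}}
\end{equation}
```

Minimising $\mathcal{L}(\theta)$ over the network weights $\theta$ enforces the governing equation directly, which is what allows the PINN to handle scattering terms that standard absorption-only treatments drop.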
We study the theory of a scalar in the fundamental representation of the internal supergroup $SU(N|M)$. Remarkably, for $M=N+1$ its tree-level mass does not receive quantum corrections at one loop from either self-coupling or interactions with gauge bosons and fermions. This property comes at the price of introducing both degrees of freedom with wrong statistics and with wrong sign kinetic terms. We detail a method to break $SU(N|M)$ down to its bosonic subgroup through a Higgs-like mechanism, allowing for the partial decoupling of the dangerous modes, and study the associated vacuum structure up to one loop.
If primordial black holes (PBHs) of asteroidal mass make up the entire dark matter, they could be detectable through their gravitational influence in the solar system. In this work, we study the perturbations that PBHs induce on the orbits of planets. Detailed numerical simulations of the solar system, embedded in a halo of PBHs, are performed. Using the Earth-Mars distance as an observational probe, we show that the perturbations are below the current detection limits and thus PBHs are not directly constrained by solar system ephemerides. We estimate that an improvement in the measurement accuracy by more than an order of magnitude or the extraction of signals well below the noise level are required to detect the gravitational influence of PBHs in the solar system in the foreseeable future.
We present constraints on the $f(R)$ gravity model using a sample of 1,005 galaxy clusters in the redshift range $0.25 - 1.78$ that have been selected through the thermal Sunyaev-Zel'dovich effect (tSZE) from South Pole Telescope (SPT) data and subjected to optical and near-infrared confirmation with the Multi-component Matched Filter (MCMF) algorithm. We employ weak gravitational lensing mass calibration from the Dark Energy Survey (DES) Year 3 data for 688 clusters at $z < 0.95$ and from the Hubble Space Telescope (HST) for 39 clusters with $0.6 < z < 1.7$. Our cluster sample is a powerful probe of $f(R)$ gravity, because this model predicts a scale-dependent enhancement in the growth of structure, which impacts the halo mass function (HMF) at cluster mass scales. To account for these modified gravity effects on the HMF, our analysis employs a semi-analytical approach calibrated with numerical simulations. Combining calibrated cluster counts with primary cosmic microwave background (CMB) temperature and polarization anisotropy measurements from the Planck 2018 release, we derive robust constraints on the $f(R)$ parameter $f_{R0}$. Our results, $\log_{10} |f_{R0}| < -5.32$ at the 95% credible level, are the tightest current constraints on $f(R)$ gravity from cosmological scales. This upper limit rules out $f(R)$-like deviations from general relativity that result in more than a $\sim$20% enhancement of the cluster population on mass scales $M_\mathrm{200c}>3\times10^{14}M_\odot$.
Context. The forward modelling of galaxy surveys has recently gathered interest as one of the primary methods to achieve the required precision on the estimate of the redshift distributions for stage IV surveys, allowing them to perform cosmological tests with unprecedented accuracy. One of the key aspects of forward modelling a galaxy survey is the connection between the physical properties drawn from a galaxy population model and the intrinsic galaxy spectral energy distributions (SEDs), achieved through stellar population synthesis (SPS) codes (e.g. FSPS). However, SPS requires a large number of detailed assumptions on the constituents of galaxies, for which the model choice or parameter values are currently uncertain. Aims. In this work, we perform a sensitivity study of the impact that the variations of the SED modelling choices have on the mean and scatter of the tomographic galaxy redshift distributions. Methods. We assumed the PROSPECTOR-β model as the fiducial input galaxy population model and used its SPS parameters to build 9-band ugriZYJHKs observed-frame magnitudes of a fiducial sample of galaxies. We then built samples of galaxy magnitudes by varying one SED modelling choice at a time. We modelled the colour-redshift relation of these galaxy samples using the self-organising map (SOM) approach that optimally groups galaxies with similar redshifts by their multidimensional colours. We placed galaxies in the SOM cells according to their simulated observed-frame colours and used their cell assignment to build colour-selected tomographic bins. Finally, we compared each variant's binned redshift distributions against the estimates obtained for the original PROSPECTOR-β model. Results. 
We find that the SED components related to the initial mass function, as well as the active galactic nuclei, the gas physics, and the attenuation law, substantially bias the mean and the scatter of the tomographic redshift distributions with respect to those estimated with the fiducial model. Conclusions. For the uncertainty of these choices currently present in the literature, and regardless of the applied stellar-mass-function-based re-weighting strategy, the biases in the mean and the scatter of the tomographic redshift distributions are greater than the precision requirements set by next-generation Stage IV galaxy surveys, such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) and Euclid.
Context. The ESO public survey VISTA Variables in the Vía Láctea (VVV) surveyed the inner Galactic bulge and the adjacent southern Galactic disk from 2009–2015. Upon its conclusion, the complementary VVV extended (VVVX) survey has expanded both the temporal and the spatial coverage of the original VVV area, widening it from 562 to 1700 sq. deg., as well as providing additional epochs in JHKs filters from 2016–2023. Aims. With the completion of VVVX observations during the first semester of 2023, we present here the observing strategy, a description of data quality and access, and the legacy of VVVX. Methods. VVVX took ~2000 h, covering about 4% of the sky in the bulge and southern disk. VVVX covered most of the gaps left between the VVV and the VISTA Hemisphere Survey (VHS) areas and extended the VVV time baseline in the obscured regions affected by high extinction and hence hidden from optical observations. Results. VVVX provides a deep JHKs catalogue of ≳1.5 × 10⁹ point sources, as well as a Ks band catalogue of ~10⁷ variable sources. Within the existing VVV area, we produced a 5D map of the surveyed region by combining positions, distances, and proper motions of well-understood distance indicators such as red clump stars, RR Lyrae, and Cepheid variables. Conclusions. In March 2023 we successfully finished the VVVX survey observations that started in 2016, an accomplishment for ESO Paranal Observatory after 4200 h of observations for VVV+VVVX. The VVV+VVVX catalogues complement those from the Gaia mission at low Galactic latitudes and provide spectroscopic targets for the forthcoming ESO high-multiplex spectrographs MOONS and 4MOST. ★Based on observations taken within the ESO VISTA Public Survey VVV and VVVX, Programmes ID 179.B-2002 and 198.B-2004, respectively.
We derive a cutting rule for equal-time in-in correlators, including cosmological correlators, based on the Keldysh $r/a$ basis, which decomposes diagrams into fully retarded functions and cut-propagators consisting of Wightman functions. Our derivation relies only on basic assumptions such as unitarity, locality, and the causal structure of the in-in formalism, and therefore holds for theories with arbitrary particle content and local interactions at any loop order. As an application, we show that non-local cosmological collider signals arise solely from cut-propagators under the assumption of microcausality. Since the cut-propagators do not contain (anti-)time-ordering theta functions, the conformal time integrals are factorized, simplifying practical calculations.
Hyperluminous infrared galaxies (HyLIRGs) are the rarest and most extreme starbursts, found only in the distant Universe (z ≳ 1). They have intrinsic infrared (IR) luminosities LIR ≥ 10¹³ L⊙ and are commonly found to be major mergers. Recently, the Planck All-Sky Survey to Analyze Gravitationally-lensed Extreme Starbursts project (PASSAGES) searched ~10⁴ deg² of the sky and found ~20 HyLIRGs. We describe a detailed study of PJ0116-24, the brightest (μLIR ≈ 2.6 × 10¹⁴ L⊙, magnified with μ ≈ 17) Einstein-ring HyLIRG in the southern sky, at z = 2.125, with observations from the near-IR integral-field spectrograph VLT/ERIS and the submillimetre interferometer ALMA. We detected Hα, Hβ, [N II] and [S II] lines and obtained an extreme Balmer decrement (Hα/Hβ ≈ 8.73 ± 1.14). We modelled the molecular-gas and ionized-gas kinematics with CO(3-2) and Hα data at ~100-300 pc and (sub)kiloparsec delensed scales, respectively, finding consistent regular rotation. We found PJ0116-24 to be highly rotationally supported (vrot/σ0 ≈ 9.4 for the molecular gas) with a richer gaseous substructure than other known HyLIRGs. Our results imply that PJ0116-24 is an intrinsically massive (Mbaryon ≈ 10^11.3 M⊙) and rare starbursty disk (star-formation rate, SFR = 1,490 M⊙ yr⁻¹) probably undergoing secular evolution. This indicates that the maximal SFR (≳1,000 M⊙ yr⁻¹) predicted by simulations could occur during a galaxy's secular evolution, away from major mergers.
Over the past decade, advancement of observational capabilities, specifically the Atacama Large Millimeter/submillimeter Array (ALMA) and Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instruments, alongside theoretical innovations like pebble accretion, have reshaped our understanding of planet formation and the physics of protoplanetary disks. Despite this progress, mysteries persist along the winded path of micrometer-sized dust, from the interstellar medium, through transport and growth in the protoplanetary disk, to becoming gravitationally bound bodies. This review outlines our current knowledge of dust evolution in circumstellar disks, yielding the following insights: ▪ Theoretical and laboratory studies have accurately predicted the growth of dust particles to sizes that are susceptible to accumulation through transport processes like radial drift and settling. ▪ Critical uncertainties in that process remain the level of turbulence, the threshold collision velocities at which dust growth stalls, and the evolution of dust porosity. ▪ Symmetric and asymmetric substructures are widespread. Dust traps appear to be solving several long-standing issues in planet formation models, and they are observationally consistent with being sites of active planetesimal formation. ▪ In some instances, planets have been identified as the causes behind substructures. This underlines the need to study earlier stages of disks to understand how planets can form so rapidly.
In the future, better probes of the physical conditions in optically thick regions, including densities, turbulence strength, kinematics, and particle properties, will be essential for unraveling the physical processes at play.
Axion quark nuggets (AQN) are hypothetical, macroscopically large objects with a mass greater than a few grams and sub-micrometer size, formed during the quark-hadron transition. Originating from the axion field, they offer a possible resolution of the similarity between the visible and dark components of the Universe, i.e. ΩDM ∼ Ωvisible, and of the observed matter-antimatter asymmetry. These composite objects behave as cold dark matter, interacting with ordinary matter and resulting in pervasive electromagnetic radiation throughout the Universe. This work aims to predict the electromagnetic signature in large-scale structures from this AQN-baryon interaction, accounting for thermal and non-thermal radiation. We use Magneticum hydrodynamical simulations to describe the realistic distribution and dynamics of gas and dark matter at cosmological scales. We construct a light cone encompassing a 1.4 square degree area on the sky, extending up to redshift z = 5.4, and we calculate the electromagnetic signature across a wide range of frequencies from radio, starting at ν ∼ 1 GHz, up to a few keV X-ray energies. We find that the AQN electromagnetic signature is characterized by global (monopole) and fluctuation signals. The amplitude of both signals strongly depends on the average nugget mass and the ionization level of the baryonic environment, allowing us to identify a most optimistic scenario and a minimal configuration. The signal of our most optimistic scenario is often near the sensitivity limit of existing instruments, such as FIRAS in the ν = [100-500] GHz range and the South Pole Telescope for high-resolution ℓ > 4000 at ν = 95 GHz. Fluctuations in the Extra-galactic Background Light caused by the axion quark nuggets in the most optimistic scenario can also be tested with the space-based imagers Euclid and James Webb Space Telescope.
In general, our minimal configuration is still out of reach of existing instruments, but future experiments might be able to pose some constraints. We conclude that the axion quark nuggets model represents a viable model for dark matter, which violates neither the canons of cosmology nor existing observations. A reanalysis of existing data sets could provide some evidence of axion quark nuggets if our most optimistic configuration is correct. The best chances for testing the model reside in (1) ultra-deep infrared and optical surveys, (2) future experiments to probe the frequency spectrum of the cosmic microwave background, and (3) low-frequency (1 GHz < ν < 100 GHz) and high-resolution (ℓ ≳ 10⁴) observations.
We develop parametrizations of eight of the lowest Born-Oppenheimer potentials for quarkonium hybrid mesons as functions of the separation <inline-formula><mml:math display="inline"><mml:mi>r</mml:mi></mml:math></inline-formula> of the static quark and antiquark sources. The parameters are determined by fitting results calculated using pure <inline-formula><mml:math display="inline"><mml:mi>S</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>3</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> lattice gauge theory. The parametrizations have the correct limiting behavior at small <inline-formula><mml:math display="inline"><mml:mi>r</mml:mi></mml:math></inline-formula>, where the potentials form multiplets associated with gluelumps. They have the correct limiting behavior at large <inline-formula><mml:math display="inline"><mml:mi>r</mml:mi></mml:math></inline-formula>, where the potentials form multiplets associated with excitations of a relativistic string. There is a narrow avoided crossing in the small-<inline-formula><mml:math display="inline"><mml:mi>r</mml:mi></mml:math></inline-formula> region between two potentials with the same Born-Oppenheimer quantum numbers.
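As a cartoon of the fitting step described above, one can least-squares fit a Cornell-like form V(r) = −a/r + σr + c, which is linear in its parameters. The functional form and the synthetic "lattice" points below are illustrative stand-ins, not the paper's actual parametrizations or SU(3) data.

```python
import numpy as np

# Synthetic stand-in for static-potential data points (r in lattice units).
r = np.linspace(0.5, 3.0, 20)
a_true, sigma_true, c_true = 0.4, 1.0, 0.2
V = -a_true / r + sigma_true * r + c_true

# V(r) = -a/r + sigma*r + c is linear in (a, sigma, c), so an ordinary
# least-squares solve over the basis [-1/r, r, 1] recovers the parameters.
design = np.column_stack([-1.0 / r, r, np.ones_like(r)])
(a_fit, sigma_fit, c_fit), *_ = np.linalg.lstsq(design, V, rcond=None)
```

Real parametrizations of the hybrid potentials additionally build in the gluelump multiplet structure at small r and the relativistic-string behaviour at large r, which generally makes the fit nonlinear.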
Context. Planets form in the disks surrounding young stars. The time at which the planet formation process begins is still an open question. Annular substructures such as rings and gaps in disks are intertwined with planet formation, and thus their presence or absence is commonly used to investigate the onset of this process. Aims. Current observations show that a limited number of disks surrounding protostars exhibit annular substructures, all of them in the Class I stage. The lack of observed features in most of these sources may indicate a late emergence of substructures, but it could also be an artifact of these disks being optically thick. To mitigate the problem of optical depth, we investigated substructures within a very young Class 0 disk characterized by low inclination using observations at longer wavelengths. Methods. We used 3 mm ALMA observations tracing dust emission at a resolution of 7 au to search for evidence of annular substructures in the disk around the deeply embedded Class 0 protostar Oph A SM1. Results. The observations reveal a nearly face-on disk (inclination ∼ 16°) extending up to 40 au. The radial intensity profile shows a clear deviation from a smooth profile near 30 au, which we interpret as the presence of either a gap at 28 au or a ring at 34 au with Gaussian widths of σ = 1.4 (+2.3/−1.2) au and σ = 3.9 (+2.0/−1.9) au, respectively. Crucially, the 3 mm emission at the location of the possible gap or ring is determined to be optically thin, precluding the possibility that this feature in the intensity profile is due to the emission being optically thick. Conclusions. Annular substructures resembling those in the more evolved Class I and II disks could indeed be present in the Class 0 stage, which is earlier than suggested by previous observations. Similar observations of embedded disks in which the high-optical-depth problem can be mitigated are clearly needed to better constrain the onset of substructures in the embedded stages.
Observed low-mass galaxies with nuclear star clusters (NSCs) can host accreting massive black holes (MBH). We present simulations of dwarf galaxies ($M_{\mathrm{baryon}} \sim 0.6 - 2.4 \times 10^8 \rm \, M_\odot$) at solar mass resolution ($0.5\rm \, M_\odot < m_{\mathrm{gas}} < 4 \rm \, M_\odot$) with a multi-phase interstellar medium (ISM) and investigate the impact of NSCs on MBH growth and nuclear star formation (SF). The Griffin simulation model includes non-equilibrium low temperature cooling, chemistry and the effect of HII regions and supernovae (SN) from massive stars. Individual stars are sampled down to 0.08 $\rm M_\odot$ and their non-softened gravitational interactions with MBHs are computed with the regularised Ketju integrator. MBHs with masses in the range of $10^2 - 10^5 \, \rm M_\odot$ are represented by accreting sink particles without feedback. We find that the presence of NSCs boosts nuclear SF (i.e. NSC growth) and MBH accretion by funneling gas to the central few parsecs. Low-mass MBHs grow more rapidly on $\sim 600$ Myr timescales, exceeding their Eddington rates at peak accretion. MBH accretion and nuclear SF are episodic (i.e. lead to multiple stellar generations), coeval, and regulated by SN explosions. On 40 - 60 Myr timescales the first SN of each episode terminates MBH accretion and nuclear SF. Without NSCs, low-mass MBHs do not grow, and MBH accretion and nuclear SF become irregular, reduced, and uncorrelated. This study gives the first insights into the possible co-evolution of MBHs and NSCs in low-mass galaxies and highlights the importance of considering dense NSCs in galactic studies of MBH growth.
Strongly lensed supernovae (SNe) are a rare class of transient that can offer tight cosmological constraints that are complementary to methods from other astronomical events. We present a follow-up study of one recently discovered strongly lensed SN, the quadruply-imaged Type Ia SN 2022qmx (a.k.a. "SN Zwicky") at z = 0.3544. We measure updated, template-subtracted photometry for SN Zwicky and derive improved time delays and magnifications. This is possible because SNe are transient, fading away after reaching their peak brightness. Specifically, we measure point spread function (PSF) photometry for all four images of SN Zwicky in three Hubble Space Telescope WFC3/UVIS passbands (F475W, F625W, F814W) and one WFC3/IR passband (F160W), with template images taken $\sim 11$ months after the epoch in which the SN images appear. We find consistency to within $2\sigma$ between lens model predicted time delays ($\lesssim1$ day), and measured time delays with HST colors ($\lesssim2$ days), including the uncertainty from chromatic microlensing that may arise from stars in the lensing galaxy. The standardizable nature of SNe Ia allows us to estimate absolute magnifications for the four images, with images A and C being elevated in magnification compared to lens model predictions by about $6\sigma$ and $3\sigma$ respectively, confirming previous work. We show that millilensing or differential dust extinction is unable to explain these discrepancies and find evidence for the existence of microlensing in images A, C, and potentially D, that may contribute to the anomalous magnification.
Studying the orbital motion of stars around Sagittarius A* in the Galactic Center provides a unique opportunity to probe the gravitational potential near the supermassive black hole at the heart of our Galaxy. Interferometric data obtained with the GRAVITY instrument at the Very Large Telescope Interferometer (VLTI) since 2016 have allowed us to achieve unprecedented precision in tracking the orbits of these stars. GRAVITY data have been key to detecting the in-plane, prograde Schwarzschild precession of the orbit of the star S2, as predicted by General Relativity. By combining astrometric and spectroscopic data from multiple stars, including S2, S29, S38, and S55 - for which we have data around their time of pericenter passage with GRAVITY - we can now strengthen the significance of this detection to an approximately $10 \sigma$ confidence level. The prograde precession of S2's orbit provides valuable insights into the potential presence of an extended mass distribution surrounding Sagittarius A*, which could consist of a dynamically relaxed stellar cusp composed of old stars and stellar remnants, along with a possible dark matter spike. Our analysis, based on two plausible density profiles - a power-law and a Plummer profile - constrains the enclosed mass within the orbit of S2 to be consistent with zero, establishing an upper limit of approximately $1200 \, M_\odot$ at the $1 \sigma$ confidence level. This significantly improves our constraints on the mass distribution in the Galactic Center. Our upper limit is very close to the expected value from numerical simulations for a stellar cusp in the Galactic Center, leaving little room for a significant enhancement of dark matter density near Sagittarius A*.
We perform canonical quantization of General Relativity, as an effective quantum field theory below the Planck scale, within the BRST-invariant framework. We show that the promotion of constraints to dynamical equations of motion for auxiliary fields leads to a healthy Hamiltonian flow. In particular, we show that the classical properties of Einstein's gravity, such as the vanishing of the Hamiltonian modulo a boundary contribution, are realized merely as expectation values in appropriate physical states. Most importantly, physicality is shown not to entail trivial time-evolution for correlation functions. In the present approach we quantize the theory once and for all around the Minkowski vacuum and treat other would-be classical backgrounds as BRST-invariant coherent states. This is especially important for cosmological spacetimes as it uncovers features that are not visible in the ordinary semi-classical treatment. The Poincaré invariance of the vacuum, essential for our quantization, provides strong motivation for spontaneously-broken supersymmetry.
In this thesis, I present my work developing two instruments for studying cosmic and solar particles and the secondary radiation created by them. The RadMap Telescope is a compact radiation monitor for characterizing the environment inside the International Space Station. The Lunar Cosmic-Ray and Neutron Spectrometer is a versatile instrument designed to detect areas of increased sub-surface hydrogen abundance via neutron spectroscopy, which we use to search for water-ice deposits in the Moon's polar regions.
RR Lyrae stars (RRLs) are excellent tracers of stellar populations for old, metal-poor components in the Milky Way and the Local Group. Their luminosities have a metallicity dependence, but determining spectroscopic [Fe/H] metallicities for RRLs, especially at distances outside the solar neighborhood, is challenging. Using 40 RRLs with metallicities derived from both Fe(II) and Fe(I) abundances, we verify the calibration of RRL [Fe/H] from the calcium triplet. Our calibration is applied to all RRLs with Gaia Radial Velocity Spectrometer (RVS) spectra in Gaia DR3 and to 80 stars in the inner Galaxy from the BRAVA-RR survey. The coadded Gaia RVS RRL spectra provide RRL metallicities with an uncertainty of 0.25 dex, a factor of two improvement over the Gaia photometric RRL metallicities. Within our Galactic bulge RRL sample, we find a dominant fraction with low orbital energies and no prominent rotating component. Due to the large fraction of such stars, we interpret these stars as belonging to the in situ metal-poor Galactic bulge component, although we cannot rule out that a fraction of these belong to an ancient accretion event such as Kraken/Heracles.
A central requirement for the faithful implementation of large-scale lattice gauge theories (LGTs) on quantum simulators is the protection of the underlying gauge symmetry. Recent advancements in the experimental realizations of large-scale LGTs have been impressive, albeit mostly restricted to Abelian gauge groups. Guided by this requirement for gauge protection, we propose an experimentally feasible approach to implement large-scale non-Abelian <inline-formula><mml:math display="inline" overflow="scroll" xmlns:mml="http://www.w3.org/1998/Math/MathML"><mml:mi>SU</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mi>N</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math display="inline" overflow="scroll" xmlns:mml="http://www.w3.org/1998/Math/MathML"><mml:mrow><mml:mi mathvariant="normal">U</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mi>N</mml:mi><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> LGTs with dynamical matter in <inline-formula><mml:math display="inline" overflow="scroll" xmlns:mml="http://www.w3.org/1998/Math/MathML"><mml:mi>d</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mrow><mml:mi mathvariant="normal">D</mml:mi></mml:mrow></mml:mrow></mml:math></inline-formula>, enabled by two-body spin-exchange interactions realizing local emergent gauge-symmetry stabilizer terms. 
We present two concrete proposals for <inline-formula><mml:math display="inline" overflow="scroll" xmlns:mml="http://www.w3.org/1998/Math/MathML"><mml:mn>2</mml:mn><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mrow><mml:mrow><mml:mi mathvariant="normal">D</mml:mi></mml:mrow></mml:mrow><mml:mspace width="0.2em"></mml:mspace><mml:mi>SU</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math display="inline" overflow="scroll" xmlns:mml="http://www.w3.org/1998/Math/MathML"><mml:mrow><mml:mi mathvariant="normal">U</mml:mi></mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> LGTs, including dynamical bosonic matter and induced plaquette terms, that can be readily implemented in current ultracold-molecule and next-generation ultracold-atom platforms. We provide numerical benchmarks showcasing experimentally accessible dynamics, and demonstrate the stability of the underlying non-Abelian gauge invariance. We develop a method to obtain the effective gauge-invariant model featuring the relevant magnetic plaquette and minimal gauge-matter coupling terms. Our approach paves the way towards near-term realizations of large-scale non-Abelian quantum link models in analog quantum simulators.
We present phase-corrected photometric measurements of 88 Cepheid variables in the core of the Small Magellanic Cloud (SMC), the first sample obtained with the Hubble Space Telescope's (HST) Wide Field Camera 3, in the same homogeneous photometric system as past measurements of all Cepheids on the SH0ES distance ladder. We limit the sample to the inner core and model the geometry to reduce errors in prior studies due to the nontrivial depth of this cloud. Without crowding present in ground-based studies, we obtain an unprecedentedly low dispersion of 0.102 mag for a period–luminosity (P–L) relation in the SMC, approaching the width of the Cepheid instability strip. The new geometric distance to 15 late-type detached eclipsing binaries in the SMC offers a rare opportunity to improve the foundation of the distance ladder, increasing the number of calibrating galaxies from three to four. With the SMC as the only anchor, we find H₀ = 74.1 ± 2.1 km s⁻¹ Mpc⁻¹. Combining these four geometric distances with our HST photometry of SMC Cepheids, we obtain H₀ = 73.17 ± 0.86 km s⁻¹ Mpc⁻¹. By including the SMC in the distance ladder, we also double the range where the metallicity ([Fe/H]) dependence of the Cepheid P–L relation can be calibrated, and we find γ = −0.234 ± 0.052 mag dex⁻¹. Our local measurement of H₀ based on Cepheids and Type Ia supernovae shows a 5.8σ tension with the value inferred from the cosmic microwave background assuming a Lambda cold dark matter (ΛCDM) cosmology, reinforcing the possibility of physics beyond ΛCDM.
We introduce a block encoding method for mapping discrete subgroups to qubits on a quantum computer. This method is applicable to general discrete groups, including crystal-like subgroups such as <inline-formula><mml:math display="inline"><mml:mrow><mml:mi mathvariant="double-struck">BI</mml:mi></mml:mrow></mml:math></inline-formula> of <inline-formula><mml:math display="inline"><mml:mi>S</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>2</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math display="inline"><mml:mi mathvariant="double-struck">V</mml:mi></mml:math></inline-formula> of <inline-formula><mml:math display="inline"><mml:mi>S</mml:mi><mml:mi>U</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>3</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula>. We detail the construction of primitive gates—the inversion gate, the group multiplication gate, the trace gate, and the group Fourier gate—utilizing this encoding method for the <inline-formula><mml:math display="inline"><mml:mrow><mml:mi mathvariant="double-struck">BT</mml:mi></mml:mrow></mml:math></inline-formula> group and, for the first time, for the <inline-formula><mml:math display="inline"><mml:mrow><mml:mi mathvariant="double-struck">BI</mml:mi></mml:mrow></mml:math></inline-formula> group. We also provide resource estimations to extract the gluon viscosity. 
The inversion gates for <inline-formula><mml:math display="inline"><mml:mrow><mml:mi mathvariant="double-struck">BT</mml:mi></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math display="inline"><mml:mrow><mml:mi mathvariant="double-struck">BI</mml:mi></mml:mrow></mml:math></inline-formula> are benchmarked on the Baiwang quantum computer with estimated fidelities of <inline-formula><mml:math display="inline"><mml:msubsup><mml:mn>40</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn>4</mml:mn></mml:mrow><mml:mrow><mml:mo>+</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:msubsup><mml:mo>%</mml:mo></mml:math></inline-formula> and <inline-formula><mml:math display="inline"><mml:msubsup><mml:mn>4</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn>3</mml:mn></mml:mrow><mml:mrow><mml:mo>+</mml:mo><mml:mn>5</mml:mn></mml:mrow></mml:msubsup><mml:mo>%</mml:mo></mml:math></inline-formula>, respectively.
When a star is described as spectral class G2V, we know its approximate mass, temperature, age, and size. With more than 5,700 exoplanets discovered, it is a natural developmental step to establish a classification for them, analogous to the Harvard classification for stars. Such an exoplanet classification has to be easily interpreted, present the most relevant information about the planets, and divide them into groups based on certain characteristics. We propose an exoplanet classification which, using an easily readable code, conveys an exoplanet's main characteristics. The suggested classification code contains four parameters from which we can quickly determine the ranges of temperature, mass, density, and eccentricity. The first parameter concerns the mass of an exoplanet, expressed in units of the masses of known planets, where e.g. M represents the mass of Mercury, E that of Earth, N Neptune, and J Jupiter. The second parameter is the mean Dyson temperature of the exoplanet's orbit, for which we established four main classes: F represents the Frozen class, W the Water class, G the Gaseous class, and R the Roaster class. The third parameter is eccentricity, and the fourth parameter is the surface attribute, defined from the bulk density of the exoplanet, where g represents a gaseous planet, w - water planet, t - terrestrial planet, i - iron planet and s - super dense planet. The classification code for Venus could be EG0t (E - mass in the range of the mass of the Earth, G - Gaseous class, temperature in the range from 450 to 1000 K, 0 - circular or nearly circular orbit, t - terrestrial surface); for Earth it could be EW0t (W - Water class - a possible Habitable zone). This classification is very helpful, for example, in quickly determining whether a planet lies in the Habitable zone and whether or not it is terrestrial.
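The four-character code can be sketched as a small function. The abstract fixes only some of the boundaries (e.g. the G class spans 450–1000 K), so the mass, eccentricity, and density thresholds below are hypothetical placeholders for illustration, not the authors' definitions:

```python
import math

# Reference masses in Earth masses (Mercury, Earth, Neptune, Jupiter)
REF_MASSES = {"M": 0.055, "E": 1.0, "N": 17.1, "J": 317.8}

def classify(mass_earth, t_dyson_K, ecc, density_gcc):
    """Sketch of the proposed four-parameter exoplanet code.
    All thresholds except the 450-1000 K G-class range are assumptions."""
    # 1) mass letter: nearest reference planet in log-mass (assumed rule)
    mass = min(REF_MASSES,
               key=lambda k: abs(math.log10(mass_earth / REF_MASSES[k])))
    # 2) temperature class F/W/G/R; only the G range is given in the abstract
    if t_dyson_K < 250:
        temp = "F"
    elif t_dyson_K < 450:
        temp = "W"
    elif t_dyson_K < 1000:
        temp = "G"
    else:
        temp = "R"
    # 3) eccentricity digit (assumed binning; 0 = nearly circular)
    e_digit = "0" if ecc < 0.1 else ("1" if ecc < 0.3 else "2")
    # 4) surface attribute from bulk density in g/cm^3 (assumed thresholds)
    for letter, upper in (("g", 2.0), ("w", 4.0), ("t", 7.0), ("i", 10.0)):
        if density_gcc < upper:
            surf = letter
            break
    else:
        surf = "s"
    return mass + temp + e_digit + surf
```

With these placeholder thresholds, Venus (0.815 M⊕, ~735 K, e ≈ 0.007, 5.24 g cm⁻³) maps to EG0t and Earth to EW0t, matching the examples given in the abstract.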
We present UV–optical–near-infrared observations and modeling of supernova (SN) 2024ggi, a type II supernova (SN II) located in NGC 3621 at 7.2 Mpc. Early-time ("flash") spectroscopy of SN 2024ggi within +0.8 days of discovery shows emission lines of H I, He I, C III, and N III with a narrow core and broad, symmetric wings (i.e., "IIn-like") arising from the photoionized, optically thick, unshocked circumstellar material (CSM) that surrounded the progenitor star at shock breakout (SBO). By the next spectral epoch at +1.5 days, SN 2024ggi showed a rise in ionization as emission lines of He II, C IV, N IV/V, and O V became visible. This phenomenon is temporally consistent with a blueward shift in the UV–optical colors, both likely the result of SBO in an extended, dense CSM. The IIn-like features in SN 2024ggi persist on a timescale of tIIn = 3.8 ± 1.6 days, at which time a reduction in CSM density allows the detection of Doppler-broadened features from the fastest SN material. SN 2024ggi has peak UV–optical absolute magnitudes of Mw2 = −18.7 mag and Mg = −18.1 mag that are consistent with the known population of CSM-interacting SNe II. Comparison of SN 2024ggi with a grid of radiation hydrodynamics and non–local thermodynamic equilibrium radiative-transfer simulations suggests a progenitor mass-loss rate of <inline-formula> <mml:math overflow="scroll"><mml:mover accent="true"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mo>̇</mml:mo></mml:mrow></mml:mover><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mn>10</mml:mn></mml:mrow><mml:mrow><mml:mo>‑</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mspace width="0.25em"></mml:mspace><mml:msub><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mrow><mml:mo>⊙</mml:mo></mml:mrow></mml:msub></mml:math> </inline-formula> yr⁻¹ (vw = 50 km s⁻¹), confined to a distance of r < 5 × 10¹⁴ cm. 
Assuming a wind velocity of vw = 50 km s⁻¹, the progenitor star underwent an enhanced mass-loss episode in the last ∼3 yr before explosion.
We study the contribution of large scalar perturbations sourced by a sharp feature during cosmic inflation to the stochastic gravitational wave background (SGWB), extending our previous work to include the SGWB sourced during the inflationary era. We focus in particular on three-field inflation, since the third dynamical field is the first not privileged by the perturbations' equations of motion and allows a more direct generalization to $N$-field inflation. For the first time, we study the three-field isocurvature perturbations sourced during the feature and include the effects of isocurvature masses. In addition to a two-field limit, we find that the third field's dynamics during the feature can source large isocurvature transients which then later decay, leaving an inflationary-era-sourced SGWB as their only observable signature. We find that the inflationary-era signal shape near the peak is largely independent of the number of dynamical fields and has a greatly enhanced amplitude sourced by the large isocurvature transient, suppressing the radiation-era contribution and opening a new window of detectable parameter space with small adiabatic enhancement. The largest enhancements we study could easily violate backreaction constraints, but much of parameter space remains under perturbative control. These SGWBs could be visible in LISA and other gravitational wave experiments, leaving an almost universal signature of sharp features during multi-field inflation, even when the sourcing isocurvature decays to unobservability shortly afterwards.
The Hubble parameter, H0, is not a univocally defined quantity: It relates redshifts to distances in the near Universe, but it is also a key parameter of the ΛCDM standard cosmological model. As such, H0 affects several physical processes at different cosmic epochs and multiple observables. We have counted more than a dozen H0s that are expected to agree if (a) there are no significant systematics in the data and their interpretation and (b) the adopted cosmological model is correct. With few exceptions (proverbially confirming the rule), these determinations do not agree at high statistical significance; their values cluster around two camps: the low (68 km s⁻¹ Mpc⁻¹) and high (73 km s⁻¹ Mpc⁻¹) camps. It appears to be a matter of anchors: The shape of the Universe's expansion history agrees with the model; it is the normalizations that disagree. Beyond systematics in the data/analysis, if the model is incorrect, there are only two viable ways to "fix" it: by changing the early-time (z ≳ 1,100) physics and, thus, the early-time normalization, or by a global modification, possibly touching the model's fundamental assumptions (e.g., homogeneity, isotropy, gravity). None of these three options has the consensus of the community. The research community has been actively looking for deviations from ΛCDM for two decades; the one we might have found makes us wish we could put the genie back in the bottle.
Large-scale cosmological simulations are an indispensable tool for modern cosmology. To enable model-space exploration, fast and accurate predictions are critical. In this paper, we show that the performance of such simulations can be further improved with time-stepping schemes that use input from cosmological perturbation theory. Specifically, we introduce a class of time-stepping schemes derived by matching the particle trajectories in a single leapfrog/Verlet drift-kick-drift step to those predicted by Lagrangian perturbation theory (LPT). As a corollary, these schemes exactly yield the analytic Zel'dovich solution in 1D in the pre-shell-crossing regime (i.e. before particle trajectories cross). One representative of this class is the popular 'FastPM' scheme by Feng et al. (2016) [1], which we take as our baseline. We then construct more powerful LPT-inspired integrators and show that they outperform FastPM and standard integrators in fast simulations in two and three dimensions with <mml:math altimg="si1.svg"><mml:mi mathvariant="script">O</mml:mi><mml:mo stretchy="false">(</mml:mo><mml:mn>1</mml:mn><mml:mo linebreak="badbreak" linebreakstyle="after">‑</mml:mo><mml:mn>100</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:math> timesteps, requiring fewer steps to accurately reproduce the power spectrum and bispectrum of the density field. Furthermore, we demonstrate analytically and numerically that, for any integrator, convergence is limited in the post-shell-crossing regime (to order <mml:math altimg="si2.svg"><mml:mfrac bevelled="true"><mml:mrow><mml:mn>3</mml:mn></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:mfrac></mml:math> for planar-wave collapse), owing to the lack of regularity of the acceleration field, which makes the use of high-order integrators in this regime futile. Also, we study the impact of the timestep spacing and of a decaying mode present in the initial conditions. 
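The leapfrog/Verlet drift-kick-drift structure mentioned above can be sketched on a toy problem. The following is a generic DKD integrator applied to a harmonic force (not the LPT-matched coefficients of the paper), just to show the step layout and its second-order accuracy:

```python
import numpy as np

def dkd_step(x, v, acc, dt):
    """One drift-kick-drift (DKD) leapfrog step."""
    x = x + 0.5 * dt * v      # drift: advance position half a step
    v = v + dt * acc(x)       # kick: full velocity update at the midpoint
    x = x + 0.5 * dt * v      # drift: second half-step in position
    return x, v

# Toy test case: harmonic oscillator x'' = -x integrated over one period
acc = lambda x: -x
x, v = 1.0, 0.0
n_steps = 1000
dt = 2.0 * np.pi / n_steps
for _ in range(n_steps):
    x, v = dkd_step(x, v, acc, dt)

# Leapfrog is symplectic: the energy error oscillates at O(dt^2)
energy = 0.5 * (v**2 + x**2)
```

After one full period the trajectory returns to (x, v) ≈ (1, 0) up to O(dt²) errors, illustrating why a single well-chosen DKD step can already track smooth pre-shell-crossing trajectories accurately.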
Importantly, we find that symplecticity of the integrator plays a minor role for fast approximate simulations with a small number of timesteps.
We combine stellar orbits with the abundances of the heavy, r-process element europium and the light, $\alpha$-element, silicon to separate in situ and accreted populations in the Milky Way (MW) across all metallicities. At high orbital energy, the accretion-dominated halo shows elevated values of [Eu/Si], while at lower energies, where many of the stars were born in situ, the levels of [Eu/Si] are lower. These systematically different levels of [Eu/Si] in the MW and the accreted halo imply that the scatter in [Eu/$\alpha$] within a single galaxy is smaller than previously thought. At the lowest metallicities, we find that both accreted and in situ populations trend down in [Eu/Si], consistent with enrichment via neutron star mergers. Through compiling a large data set of abundances for 54 globular clusters (GCs), we show that differences in [Eu/Si] extend to populations of in situ/accreted GCs. We interpret this consistency as evidence that in r-process elements GCs trace the star formation history of their hosts, motivating their use as sub-Gyr timers of galactic evolution. Furthermore, fitting the trends in [Eu/Si] using a simple galactic chemical evolution model, we find that differences in [Eu/Si] between accreted and in situ MW field stars cannot be explained through star formation efficiency alone. Finally, we show that the use of [Eu/Si] as a chemical tag between GCs and their host galaxies extends beyond the Local Group, to the halo of M31 - potentially offering the opportunity to do Galactic Archaeology in an external galaxy.
This review describes the duality between color and kinematics and its applications, with the aim of gaining a deeper understanding of the perturbative structure of gauge and gravity theories. We emphasize, in particular, applications to loop-level calculations, the broad web of theories linked by the duality and the associated double-copy structure, and the issue of extending the duality and double copy beyond scattering amplitudes. The review is aimed at doctoral students and junior researchers both inside and outside the field of amplitudes and is accompanied by various exercises.
Alongside the search for a potential origin of life on Earth, it is clear that all processes leading towards life's first molecules had to be compatible with the geophysical circumstances of Earth's prebiotic environment. This implies that life not only had to emerge from simple chemical building blocks, but also from the highly diluted solutions posed by the prebiotic oceans. As all living systems require information-storing molecules, nucleic acids are assumed to be the starting point for molecular evolution. However, the mechanisms by which molecules could accumulate in early geological settings are poorly understood, and the formation of the primary building blocks of nucleic acids in particular poses one of the biggest challenges in the research field.[...]
This thesis reports a determination of the branching fraction (B) and CP-violating charge asymmetry (ACP) of the three-body decay B0 -> K+ π− π0 at the Belle II experiment. In addition to the inclusive B and ACP, i.e. for all B0 -> K+ π− π0 decays, we measure B and ACP exclusively for the individual two-body resonances appearing in the K+ π− π0 system. To this end, we employ a model-dependent Dalitz plot analysis, including the seven dominant intermediate resonances and a non-resonant contribution. The analyzed data were recorded between 2019 and 2022 and correspond to an integrated luminosity of 362 fb^−1 produced in e+ e− collisions at the Y(4S) resonance by the SuperKEKB collider, containing 387 × 10^6 pairs of bottom-antibottom mesons. The analysis is developed solely on simulated data and control-mode data. The branching fractions and CP asymmetries are extracted in a four-dimensional extended maximum-likelihood fit. As the analysis is still under Belle II internal review, we blind the central values and state uncertainties only. We measure the branching fraction and CP-violating charge asymmetry inclusively as well as exclusively for the channels B0 -> K∗(892)+ π−, B0 -> K∗(892)0 π0, B0 -> ρ(770)− K+, B0 -> (Kπ)∗+ π−, B0 -> (Kπ)0 π0, B0 -> ρ(1450)− K+, B0 -> ρ(1700)− K+, and B0 -> K+ π− π0 non-resonant.
This thesis presents the first model-dependent Dalitz plot analysis at Belle II. We achieve uncertainties on par with existing determinations. The B0 -> K∗(892)π modes will serve as inputs for an isospin-based sum rule to probe the Standard Model.
While ΛCDM cosmology is the most successful cosmological model at our disposal today, being able to explain most of the observed phenomena, it has been challenged by a growing number of tensions. One of the greatest, both in terms of numerical tension and of the importance of the parameter measured, is the infamous Hubble tension: the disagreement between measurements of the Hubble constant H0, which describes the rate of expansion of the Universe and is a cornerstone of our cosmological understanding. In recent years the methods for measuring H0 have grown in number and sophistication, and yet, as the uncertainties of the measurements have decreased, the tension has not been resolved; in fact, it has increased.
Such methods can be roughly divided between "early" and "late" probes of H0, referring approximately to the time of origin of the phenomenon observed. While "early" probes, based for example on the cosmic microwave background, are strongly dependent on the assumed cosmology, "late" probes are generally model-independent but more susceptible to systematic errors in the measurements. In this context, time-delay cosmography is a "late"-time probe which can measure H0 directly, without requiring any calibration. This analysis is based on the well-tested general-relativistic phenomenon of strong gravitational lensing. Given a background variable source and a foreground strong gravitational lens, the time delay between the multiple lensed images can be measured by monitoring and analysing their luminosity over time. A separate modelling analysis of the system can then constrain the mass profile of the lens. Combining these two pieces of information then constrains the Hubble constant. In this work, I implemented this analysis based on Hubble Space Telescope archival data and a dedicated observational campaign with the 2.1-meter telescope at Wendelstein. I employed the space-based data, taking advantage of the multiple filters available and their higher resolution, to model the lens mass, obtaining a result with 3% precision on the Fermat potential.
I instead used the data from the Wendelstein observational campaign to produce the lightcurves of the images and analyse them in order to constrain the time delay, which was obtained with a precision ranging from 8% to 15% depending on the image pair.
I then combined the results following a Bayesian approach, reaching a constraint on H0 of 71.3 +5.0/−4.5 km/(s·Mpc), a precision of ~6.7% considering random uncertainties only.
Notably, this work has been mostly independent of major collaborations, such as TDCOSMO, thus providing an unbiased validation of the methodology. Furthermore, the result is proof of the capabilities of the Wendelstein observatory, which should be considered a reliable asset for time delay cosmography or similar projects that require high-sampling, high-quality data.
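In time-delay cosmography H0 scales as Δφ/Δt, so the fractional errors on the Fermat potential and on each time delay combine, to first order, in quadrature. A minimal sketch of such an error budget, using the 3% and 8–15% figures above purely illustratively and treating image pairs as independent (an assumption that ignores the correlations from the shared lens model handled by the actual Bayesian combination):

```python
import math

def h0_fractional_error(sigma_fermat, sigma_delay):
    # H0 ~ Delta(phi) / Delta(t): first-order error propagation in quadrature
    return math.hypot(sigma_fermat, sigma_delay)

# Fermat potential known to 3%; time delays to 8-15% per image pair
pair_errors = [h0_fractional_error(0.03, s) for s in (0.08, 0.12, 0.15)]

# Inverse-variance combination of the pairs, assumed independent here
combined = 1.0 / math.sqrt(sum(1.0 / s**2 for s in pair_errors))
```

With these inputs the combined fractional error lands near 6–7%, the same ballpark as the ~6.7% quoted above, though the real analysis accounts for correlated lens-model uncertainty.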
In the quantum simulation of lattice gauge theories, gauge symmetry can be either fixed or encoded as a redundancy of the Hilbert space. While gauge-fixing reduces the number of qubits, keeping the gauge redundancy can provide code space to mitigate and correct quantum errors by checking and restoring Gauss's law. In this work, we consider the correctable errors for generic finite gauge groups and design the quantum circuits to detect and correct them. We calculate the error thresholds below which the gauge-redundant digitization with Gauss's law error correction has better fidelity than the gauge-fixed digitization involving only gauge-invariant states. Our results provide guidance for fault-tolerant quantum simulations of lattice gauge theories.
We explore the potential for improving constraints on gravity by leveraging correlations in the dispersion measure derived from Fast Radio Bursts (FRBs) in combination with cosmic shear. Specifically, we focus on Horndeski gravity, inferring the kinetic braiding and Planck-mass run rate from a stage-4 cosmic shear mock survey alongside a survey comprising $10^4$ FRBs. For the inference pipeline, we utilise hi_class to predict the linear matter power spectrum in modified-gravity scenarios, while non-linear corrections are modelled with HMcode, including feedback mechanisms. Our findings indicate that FRBs can disentangle degeneracies between baryonic feedback and cosmological parameters, as well as the mass of massive neutrinos. Since these parameters are also degenerate with modified-gravity parameters, the inclusion of FRBs can enhance constraints on Horndeski parameters by up to $40$ percent, despite being a less constraining measurement on its own. Additionally, we apply our model to current FRB data and use the uncertainty in the $\mathrm{DM}-z$ relation to impose limits on gravity. However, due to the limited sample size of current data, constraints are predominantly influenced by theoretical priors. Despite this, our study demonstrates that FRBs will significantly augment the limited set of cosmological probes available, playing a critical role in providing alternative tests of feedback, cosmology, and gravity. All codes used in this work are made publicly available.
It is well known that all Feynman integrals within a given family can be expressed as a finite linear combination of master integrals. The master integrals naturally group into sectors. Starting from two loops, there can exist sectors made up of more than one master integral. In this paper we show that such sectors may have additional symmetries. First of all, self-duality, which was first observed in Feynman integrals related to Calabi-Yau geometries, often carries over to non-Calabi-Yau Feynman integrals. Secondly, we show that there can in addition exist Galois symmetries relating integrals. In the simplest case of two master integrals within a sector, whose definition involves a square root r, we may choose a basis (I1, I2) such that I2 is obtained from I1 by the substitution r → ‑r. This pattern also persists in sectors that a priori are not related to any square root with dependence on the kinematic variables. We show in several examples that in such cases a suitable redefinition of the integrals introduces constant square roots like <inline-formula id="IEq1"><mml:math display="inline"><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:math></inline-formula>. The new master integrals are then again related by a Galois symmetry, for example the substitution <inline-formula id="IEq2"><mml:math display="inline"><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:math></inline-formula> → <inline-formula id="IEq3"><mml:math display="inline"><mml:mo>‑</mml:mo><mml:msqrt><mml:mn>3</mml:mn></mml:msqrt></mml:math></inline-formula>. To handle the case where the argument of a square root would be a perfect square, we introduce a limit Galois symmetry. Both self-duality and Galois symmetries constrain the differential equation.
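The two-master-integral case can be made explicit with a one-line calculation. Assuming only the relation stated above, namely that I2 is obtained from I1 by r → −r, the Galois action diagonalizes in the (anti)symmetric basis:

```latex
% If I_2(r) = I_1(-r), define the combinations
I_\pm \;=\; I_1 \pm I_2 .
% Under the Galois substitution r \to -r these transform diagonally,
I_\pm \;\longrightarrow\; \pm\, I_\pm ,
% so the differential equation must respect this grading: in the basis
% (I_+, I_-) the connection matrix splits into entries even and odd in r.
```

This is the sense in which the Galois symmetry constrains the differential equation: mixing between the even and odd combinations is only allowed through r-odd coefficients.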
The non-observation of baryon number violation suggests that the scale of baryon-number violating interactions at zero temperature is comparable to the GUT scale. However, the pertinent measurements involve hadrons made of the first-generation quarks, such as protons and neutrons. One may therefore entertain the idea that new flavour physics breaks baryon number at a much lower scale, but only in the coupling to a third generation quark, leading to observable baryon-number violating b-hadron decay rates. In this paper we show that indirect constraints on the new physics scale ΛBNV from the existing bounds on the proton lifetime do not allow for this possibility. For this purpose we consider the three dominant proton decay channels p → <inline-formula id="IEq1"><mml:math display="inline"><mml:msup><mml:mi>ℓ</mml:mi><mml:mo>+</mml:mo></mml:msup><mml:msub><mml:mi>ν</mml:mi><mml:mi>ℓ</mml:mi></mml:msub><mml:mover accent="true"><mml:mi>ν</mml:mi><mml:mo stretchy="true">¯</mml:mo></mml:mover></mml:math></inline-formula>, p → <inline-formula id="IEq2"><mml:math display="inline"><mml:msup><mml:mi>π</mml:mi><mml:mo>+</mml:mo></mml:msup><mml:mover accent="true"><mml:mi>ν</mml:mi><mml:mo stretchy="true">¯</mml:mo></mml:mover></mml:math></inline-formula> and p → π0ℓ+ mediated by a virtual bottom quark.
Weak gravitational lensing is a powerful tool for precision tests of cosmology. As the expected deflection angles are small, predictions based on non-linear N-body simulations are commonly computed with the Born approximation. Here, we examine this assumption using DORIAN, a newly developed full-sky ray-tracing scheme applied to high-resolution mass-shell outputs of the two largest simulations in the MillenniumTNG suite, each with a 3000 Mpc box containing almost 1.1 trillion cold dark matter particles in addition to 16.7 billion particles representing massive neutrinos. We examine simple two-point statistics like the angular power spectrum of the convergence field, as well as statistics sensitive to higher order correlations such as peak and minimum statistics, void statistics, and Minkowski functionals of the convergence maps. Overall, we find only small differences between the Born approximation and a full ray-tracing treatment. While these are negligibly small at power-spectrum level, some higher order statistics show more sizeable effects; ray-tracing is necessary to achieve per cent level precision. At the resolution reached here, full-sky maps with 0.8 billion pixels and an angular resolution of 0.43 arcmin, we find that interpolation accuracy can introduce appreciable errors in ray-tracing results. We therefore implemented an interpolation method based on non-uniform fast Fourier transforms (NUFFT) along with more traditional methods. Bilinear interpolation introduces significant smoothing, while nearest grid point sampling agrees well with NUFFT, at least for our fiducial source redshift, <inline-formula><tex-math id="TM0001" notation="LaTeX">$z_s=1.0$</tex-math></inline-formula>, and for the 1 arcmin smoothing we use for higher order statistics.
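The smoothing effect of bilinear interpolation noted above is easy to demonstrate in one dimension. This toy sketch (unrelated to DORIAN's implementation) samples a Nyquist-scale alternating signal off-grid: linear interpolation damps it, while nearest-grid-point sampling preserves the amplitude:

```python
import numpy as np

# Alternating +-1 signal: the most oscillatory (Nyquist) mode on the grid
n = 64
grid = np.arange(n, dtype=float)
f = (-1.0) ** np.arange(n)

# Sample at a quarter-cell offset, i.e. away from the grid points
pts = grid[:-1] + 0.25

linear = np.interp(pts, grid, f)                # linear interpolation
nearest = f[np.floor(pts + 0.5).astype(int)]    # nearest-grid-point sampling

linear_amp = np.max(np.abs(linear))    # damped: 0.75*f[i] + 0.25*f[i+1] = +-0.5
nearest_amp = np.max(np.abs(nearest))  # preserved at 1.0
```

The linear scheme halves the amplitude of this mode while nearest-grid-point leaves it untouched, a 1D caricature of why bilinear interpolation smooths convergence maps while NGP sampling does not.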
We study particular integrated correlation functions of two superconformal primary operators of the stress tensor multiplet in the presence of a half-BPS line defect labelled by electromagnetic charges $(p,q)$ in $\mathcal{N}=4$ supersymmetric Yang-Mills theory (SYM) with gauge group $SU(N)$. An important consequence of ${\rm SL}(2,\mathbb{Z})$ electromagnetic duality in $\mathcal{N}=4$ SYM is that correlators of line defect operators with different charges $(p,q)$ must be related in a non-trivial manner when the complex coupling $\tau=\theta/(2\pi)+4\pi i /g_{_{\rm YM}}^2$ is transformed appropriately. In this work we introduce a novel class of real-analytic functions whose automorphic properties with respect to ${\rm SL}(2,\mathbb{Z})$ match the expected transformations of line defect operators in $\mathcal{N}=4$ SYM under electromagnetic duality. At large $N$ and fixed $\tau$, the correlation functions we consider are related to scattering amplitudes of two gravitons from extended $(p,q)$-strings in the holographic dual type IIB superstring theory. We show that the large-$N$ expansion coefficients of the integrated two-point line defect correlators are given by finite linear combinations with rational coefficients of elements belonging to this class of automorphic functions. On the other hand, for any fixed value of $N$ we conjecture that the line defect integrated correlators can be expressed as formal infinite series over such automorphic functions. The resummation of this series produces a simple lattice sum representation for the integrated line defect correlator that manifests its automorphic properties. We explicitly demonstrate this construction for the cases with gauge group $SU(2)$ and $SU(3)$. Our results give direct access to non-perturbative integrated correlators in the presence of an 't Hooft-line defect, observables otherwise very difficult to compute by other means.
Context. A correlation has been reported between the arrival directions of high-energy IceCube events and γ-ray blazars classified as intermediate- and high-synchrotron-peaked BL Lacs. Subsequent studies have investigated the optical properties of these sources, compiled and analyzed public multiwavelength data, and constrained their individual neutrino emission based on public IceCube point-source data. Aims. We provide a theoretical interpretation of public multiwavelength and neutrino point-source data for the 32 BL Lac objects in the sample previously associated with an IceCube alert event. We combined the individual source results to draw conclusions regarding the multimessenger properties of the sample and the required power in relativistic protons. Methods. We performed particle interaction modeling using open-source numerical simulation software. We constrained the model parameters using a novel and unique approach that simultaneously describes the host-galaxy contribution, the observed synchrotron peak properties, the average multiwavelength fluxes, and, where possible, the IceCube point-source constraints. Results. We show that a single-zone leptohadronic model can describe the multiwavelength broadband fluxes of all 32 IceCube candidates. In some cases, the model suggests that hadronic emission may contribute a considerable fraction of the γ-ray flux. The required power in relativistic protons ranges from a few percent of the Eddington luminosity to a factor of ten above it, which is energetically less demanding than other leptohadronic blazar models in the recent literature. The model can describe the 68% confidence level IceCube flux for a large fraction of the masquerading BL Lacs in the sample, including TXS 0506+056, whereas for true BL Lacs the model predicts a low neutrino flux in the IceCube sensitivity range. 
Physically, this distinction is due to the presence of photons from broad line emission in masquerading BL Lacs, which increase the efficiency of hadronic interactions. The predicted neutrino flux peaks between a few petaelectronvolt and 100 PeV and scales positively with the flux in the gigaelectronvolt, megaelectronvolt, X-ray, and optical bands. Based on these results, we provide a list of the brightest neutrino emitters, which can be used for future searches targeting the 10–100 PeV regime.
We describe a software package, TomOpt, developed to optimise the geometrical layout and specifications of detectors designed for tomography by scattering of cosmic-ray muons. The software exploits differentiable programming for the modeling of muon interactions with detectors and scanned volumes, the inference of volume properties, and the optimisation cycle performing the loss minimisation. In doing so, we provide the first demonstration of end-to-end-differentiable and inference-aware optimisation of particle physics instruments. We study the performance of the software on a relevant benchmark scenario and discuss its potential applications. Our code is available on GitHub (Strong et al. 2024, available at github.com/GilesStrong/tomopt).
The shape of ²⁸⁶No is investigated with the relativistic density functional theory on a three-dimensional lattice space without any symmetry restriction in a microscopic and self-consistent way. It is found that the ground state of ²⁸⁶No has a pure non-axial octupole shape and coexists with a tetrahedral isomeric state. The energy difference between the two states is only 0.12 MeV, and they are separated by a potential barrier of about 0.5 MeV. The occurrence of the octupole correlations is analyzed with the evolution of the single-particle levels near the Fermi surface driven by the octupole deformations.
Small-scale winds driven from accretion discs surrounding active galactic nuclei (AGN) are expected to launch kpc-scale outflows into their host galaxies. However, the ways in which the structure of the interstellar medium (ISM) affects the multiphase content and impact of the outflow remain uncertain. We present a series of numerical experiments featuring a realistic small-scale AGN wind with velocity $5\times 10^3 \!-\! 10^4\rm {\ km\ s^{-1}}$ interacting with an isolated galaxy disc with a manually controlled clumpy ISM, followed at sub-pc resolution. Our simulations are performed with AREPO and probe a wide range of AGN luminosities ($L_{\rm {AGN}}{=} 10^{43-47}\rm {\ erg\ s^{-1}}$) and ISM substructures. In homogeneous discs, the AGN wind sweeps up an outflowing, cooling shell, where the emerging cold phase dominates the mass and kinetic energy budgets, reaching a momentum flux $\dot{p} \approx 7\ L/c$. However, when the ISM is clumpy, outflow properties are profoundly different. They contain small, long-lived ($\gtrsim 5\ \rm {Myr}$), cold ($T{\lesssim }10^{4.5}{\rm {\ K}}$) cloudlets entrained in the faster, hot outflow phase, which are only present in the outflow if radiative cooling is included in the simulation. While the cold phase dominates the mass of the outflow, most of the kinetic luminosity is now carried by a tenuous, hot phase with $T \gtrsim 10^7 \, \rm K$. While the hot phases reach momentum fluxes $\dot{p} \approx (1 - 5)\ L/c$, energy-driven bubbles couple to the cold phase inefficiently, producing modest momentum fluxes $\dot{p} \lesssim L/c$ in the fast-outflowing cold gas. These low momentum fluxes could lead to the outflows being misclassified as momentum-driven using common observational diagnostics. We also show predictions for scaling relations between outflow properties and AGN luminosity and discuss the challenges in constraining outflow driving mechanisms and kinetic coupling efficiencies using observed quantities.
We study the generation of gravitational waves (GWs) during a first-order cosmological phase transition (PT) using the recently introduced Higgsless approach to numerically evaluate the fluid motion induced by the PT. We present for the first time spectra from strong first-order PTs ($\alpha = 0.5$), alongside the weak ($\alpha = 0.0046$) and intermediate ($\alpha = 0.05$) transitions previously considered in the literature. We test the regime of applicability of the stationary source assumption, characteristic of the sound-shell model, and show that it agrees with our numerical results when the kinetic energy sourcing the GWs does not decay with time. However, we find in general that for intermediate and strong PTs the kinetic energy in our simulations decays following a power law in time, and we provide a theoretical framework that extends the stationary assumption to one that includes the time evolution of the source. This decay of the kinetic energy, potentially determined by non-linear dynamics and hence related to the production of vorticity, modifies the usually assumed linear growth with the source duration to an integral over time of the kinetic energy fraction, effectively reducing the growth rate. We validate the novel theoretical model with the results of our simulations covering a broad range of wall velocities. We provide templates for the GW amplitude and spectral shape for a broad range of PT parameters.
We investigate the redshift evolution of the concentration-mass relationship of dark matter haloes in state-of-the-art cosmological hydrodynamic simulations and their dark-matter-only counterparts. By combining the IllustrisTNG suite and the novel MillenniumTNG simulation, our analysis encompasses a wide range of box sizes ($50 - 740 \: \rm cMpc$) and mass resolutions ($8.5 \times 10^4 - 3.1 \times 10^7 \: \rm M_{\odot}$ per baryonic mass element). This enables us to study the impact of baryons on the concentration-mass relationship in the redshift interval $0<z<7$ over an unprecedented halo mass range, extending from dwarf galaxies to superclusters ($\sim 10^{9.5}-10^{15.5} \, \rm M_{\odot}$). We find that the presence of baryons increases the steepness of the concentration-mass relationship at higher redshift, and demonstrate that this is driven by adiabatic contraction of the profile, due to gas accretion at early times, which promotes star formation in the inner regions of haloes. At lower redshift, when the effects of feedback start to become important, baryons decrease the concentration of haloes below the mass scale $\sim 10^{11.5} \, \rm M_{\odot}$. Through a rigorous information criterion test, we show that broken power-law models accurately represent the redshift evolution of the concentration-mass relationship, and of the relative difference in the total mass of haloes induced by the presence of baryons. We provide the best-fit parameters of our empirical formulae, enabling their application to models that mimic baryonic effects in dark-matter-only simulations over six decades in halo mass in the redshift range $0<z<7$.
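As an illustration of fitting the kind of broken power law used for such empirical formulae, the sketch below fits $\log_{10} c$ as two power-law segments of halo mass joined at a break mass. The functional form, pivot, and parameter values here are illustrative assumptions, not the paper's best-fit results:

```python
import numpy as np

def broken_power_law(logM, logc0, a, b, logM_b):
    """log10 c(M): slope a below the break mass, slope b above it."""
    d = logM - logM_b
    return logc0 + np.where(d < 0.0, a * d, b * d)

# Synthetic concentration "measurements" with known parameters (illustrative).
rng = np.random.default_rng(0)
logM = np.linspace(9.5, 15.5, 60)
logc = broken_power_law(logM, 0.9, -0.12, -0.05, 12.0)
logc = logc + rng.normal(0.0, 0.005, logM.size)

# Scan the break mass; with it fixed, the model is linear in (logc0, a, b).
best_sse, best_fit = np.inf, None
for lb in np.linspace(11.0, 13.0, 41):
    d = logM - lb
    A = np.column_stack([np.ones_like(d),
                         np.where(d < 0.0, d, 0.0),
                         np.where(d < 0.0, 0.0, d)])
    coef = np.linalg.lstsq(A, logc, rcond=None)[0]
    sse = float(np.sum((A @ coef - logc) ** 2))
    if sse < best_sse:
        best_sse, best_fit = sse, (lb, *coef)
logM_b_fit, logc0_fit, a_fit, b_fit = best_fit
```

Fixing the break and solving the remaining linear problem exactly keeps the scan cheap and dependency-free; a production fit would typically use a non-linear least-squares routine instead.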
We introduce the parity-odd power (POP) spectra, a novel set of observables for probing parity violation in cosmological N-point statistics. POP spectra are derived from composite fields obtained by applying non-linear transformations, involving gradients, curls, and filtering functions, to a scalar field. This compresses the parity-odd trispectrum into a power spectrum. These new statistics offer several advantages: they are computationally fast to construct, estimating their covariance is less demanding than estimating that of the full parity-odd trispectrum, and they are simple to model theoretically. We measure the POP spectra on simulations of a scalar field with a specific parity-odd trispectrum shape. We compare these measurements to semi-analytic theoretical calculations and find agreement. We also explore extensions and generalizations of these parity-odd observables.
Understanding star formation in galaxies requires resolving the physical scale on which star formation often occurs: the scale of star clusters. We present a multiwavelength, eight-parsec resolution study of star formation in the circumnuclear star cluster and molecular gas rings of the early-type spiral NGC 1386. The clusters in the ring formed simultaneously, ~4 Myr ago. The clusters have similar properties in terms of mass and star formation rate, resembling those of H II regions in the Milky Way disc. The molecular CO gas resolves into long filaments, which define a secondary ring detached from the cluster ring. Most clusters are in CO voids. Their offset from the CO filaments is reminiscent of that seen in galaxy spiral arms. By analogy, we propose that a density wave through the disc of this galaxy may have produced this gap in the central kpc. The CO filaments fragment into strings of dense, unresolved clouds with no evidence of a stellar counterpart. These clouds may be the sites of a future population of clusters in the ring. The free-fall time of these clouds, ~10 Myr, is close to the orbital time of the CO ring. This coincidence could lead to a synchronous bursting ring, as is the case for the current ring. The inward-spiralling morphology of the CO filaments and their co-spatiality with equivalent kpc-scale dust filaments are suggestive of their role as matter carriers from the galaxy outskirts to feed the molecular ring and a moderately active nucleus.
The accuracy of reaction theories used to extract properties of exotic nuclei from scattering experiments is often unknown or not quantified, but is of utmost importance when, e.g., constraining the equation of state of asymmetric nuclear matter with observables such as the neutron-skin thickness. In order to test the Glauber multiple-scattering model, the total interaction cross section of Image 1 on carbon targets was measured at initial beam energies of 400, 550, 650, 800, and 1000 MeV/nucleon. The measurements were performed during the first experiment of the newly constructed R3B (Reactions with Relativistic Radioactive Beams) setup after the start of FAIR Phase-0 at the GSI/FAIR facility. The combination of the large-acceptance dipole magnet GLAD and a newly designed, highly efficient time-of-flight detector enabled a precise transmission measurement with several target thicknesses for each initial beam energy, with an experimental uncertainty of ±0.4%. A comparison with the Glauber model revealed a discrepancy of around 3.1% at higher beam energies, which will serve as a crucial baseline for the model-dependent uncertainty in future fragmentation experiments.
Multiply lensed images of the same source experience a relative time delay in the arrival of photons due to the path-length difference and the different gravitational potentials the photons travel through. This effect can be used to measure absolute distances and the Hubble constant (H0), and is known as time-delay cosmography. The method is independent of the local distance ladder and of early-universe physics, and provides a precise and competitive measurement of H0. With upcoming observatories, time-delay cosmography can provide a 1% precision measurement of H0 and can decisively shed light on the currently reported `Hubble tension'. This manuscript details the general methodology developed over the past decades in time-delay cosmography, discusses recent advances and results, and, foremost, provides a foundation and outlook for the next decade of increasingly accurate and precise measurements with larger sample sizes and improved observational techniques.
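The core scaling behind the method: the measured delay fixes the time-delay distance $D_{\Delta t} = (1+z_l)\, D_l D_s / D_{ls}$, and since every cosmological distance carries a factor $1/H_0$, the inferred $H_0$ scales inversely with $D_{\Delta t}$. A schematic flat-$\Lambda$CDM sketch, with illustrative redshifts and cosmological parameters:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, H0, Om=0.3, n=4096):
    """Flat-LCDM comoving distance [Mpc] via trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(Om * (1.0 + zs) ** 3 + (1.0 - Om))
    return (C_KM_S / H0) * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zs))

def time_delay_distance(z_l, z_s, H0, Om=0.3):
    """D_dt = (1 + z_l) D_l D_s / D_ls with angular diameter distances (flat universe)."""
    Dc_l, Dc_s = comoving_distance(z_l, H0, Om), comoving_distance(z_s, H0, Om)
    D_l, D_s = Dc_l / (1.0 + z_l), Dc_s / (1.0 + z_s)
    D_ls = (Dc_s - Dc_l) / (1.0 + z_s)
    return (1.0 + z_l) * D_l * D_s / D_ls

# D_dt scales as 1/H0: halving H0 doubles D_dt, so a measured delay pins down H0.
d70 = time_delay_distance(0.5, 2.0, 70.0)
d35 = time_delay_distance(0.5, 2.0, 35.0)
```

Given a lens mass model (which fixes the Fermat potential difference between images), the observed delay directly yields $D_{\Delta t}$ and hence $H_0$.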
A comparison of theoretical and experimental values of the scalar spin-spin interaction ($J$-coupling) in tritium deuteride molecules yields constraints on exotic nucleon-nucleon interactions with dimensionless coupling strengths $g_Vg_V$, $g_Ag_A$, and $g_pg_p$, corresponding to the exchange of a vector, axial-vector, or pseudoscalar (axionlike) boson, respectively. The couplings between the proton ($p$) and a nucleon ($N$), denoted $g_V^p g_V^N$ and $g_p^p g_p^N$, are constrained to be less than $1.4 \times 10^{-6}$ and $2.7\times 10^{-6}$, respectively, for boson masses around 5 keV. The coupling constant $g_A^p g_A^N$ is constrained to be less than $1.0 \times 10^{-18}$ for boson masses $\leq 100$ eV. This study represents the first instance in which constraints on $g_V g_V$ have been established through the analysis of the potential term $V_2 + V_3$ for both tritium deuteride and hydrogen deuteride molecules.
Novel interactions beyond the four known fundamental forces in nature (the electromagnetic, gravitational, strong, and weak interactions) may arise due to "new physics" beyond the standard model, manifesting as a "fifth force". This review focuses on spin-dependent fifth forces mediated by exotic bosons such as spin-0 axions and axionlike particles and spin-1 Z' bosons, dark photons, or paraphotons. Many of these exotic bosons are candidates to explain the nature of dark matter and dark energy, and their interactions may violate fundamental symmetries. Spin-dependent interactions between fermions mediated by the exchange of exotic bosons have been investigated in a variety of experiments, particularly at the low-energy frontier. Experimental methods and tools used to search for exotic spin-dependent interactions, such as atomic comagnetometers, torsion balances, nitrogen-vacancy spin sensors, and precision atomic and molecular spectroscopy, are described. A complete set of interaction potentials, derived based on quantum field theory with minimal assumptions and characterized in terms of reduced coupling constants, is presented. A comprehensive summary of existing experimental and observational constraints on exotic spin-dependent interactions is given, illustrating the current research landscape and promising directions of further research.
Large angular scale surveys in the absence of atmosphere are essential for measuring the primordial $B$-mode power spectrum of the Cosmic Microwave Background (CMB). Since the target signal is about three to four orders of magnitude fainter than the temperature anisotropies of the CMB, in-flight calibration of the instruments and active suppression of systematic effects are crucial. We investigate the effect of changing the parameters of the scanning strategy on the in-flight calibration effectiveness, the suppression of the systematic effects themselves, and the ability to distinguish systematic effects by null-tests. Next-generation missions such as LiteBIRD, with polarisation modulated by a Half-Wave Plate (HWP), will be able to observe polarisation using a single detector, eliminating the need to combine several detectors to measure polarisation, as done in many previous experiments, and hence avoiding the consequent systematic effects. While the HWP is expected to suppress many systematic effects, some of them will remain. We use an analytical approach to comprehensively address the mitigation of these systematic effects and identify the characteristics of scanning strategies that are most effective for implementing a variety of calibration strategies in the multi-dimensional space of common spacecraft scan parameters. We also present Falcons, a fast spacecraft scanning simulator that we developed to investigate this scanning parameter space.
We compute the electron self-energy in Quantum Electrodynamics to three loops in terms of iterated integrals over kernels of elliptic type. We make use of the differential equations method, augmented by an $\epsilon$-factorized basis, which allows us to gain full control over the differential forms appearing in the iterated integrals to all orders in the dimensional regulator. We obtain compact analytic expressions, for which we provide generalized series expansion representations that allow us to evaluate the result numerically for all values of the electron momentum squared. As a by-product, we also obtain $\epsilon$-resummed results for the self-energy in the on-shell limit $p^2 = m^2$, which we use to recompute the known three-loop renormalization constants in the on-shell scheme.
Nested sampling (NS) is a stochastic method for computing the log-evidence of a Bayesian problem. It relies on stochastic estimates of the prior volumes enclosed by likelihood contours, which limits the accuracy of the log-evidence calculation. We propose to transform the prior volume estimation into a Bayesian inference problem, which allows us to incorporate a smoothness assumption for the likelihood-prior-volume relation. As a result, we aim to increase the accuracy of the volume estimates and thus improve the overall log-evidence calculation using NS. The method works as a post-processing step for NS and provides posterior samples of the likelihood-prior-volume relation, from which the log-evidence can be calculated. We demonstrate an implementation of the algorithm and compare its results with plain NS on two synthetic datasets for which the underlying evidence is known. We find a significant improvement in accuracy for runs with fewer than one hundred active samples in NS, but the method is prone to numerical problems beyond this point.
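For context, plain NS treats the shrinkage factors $t_i$ between successive likelihood contours as draws from a Beta$(n,1)$ distribution for $n$ live points, and the scatter of these stochastic volume estimates is exactly the error source targeted above. A toy sketch of the standard estimate on a problem with known evidence; the likelihood shape and all numbers are illustrative:

```python
import numpy as np

def ns_log_evidence(n_live, n_iter, logL_of_X, rng):
    """Plain nested-sampling log-evidence estimate with stochastic prior volumes.
    Shrinkage factors t_i ~ Beta(n_live, 1); X_i is their running product."""
    logX = np.cumsum(np.log(rng.beta(n_live, 1.0, n_iter)))
    X = np.exp(np.concatenate(([0.0], logX)))
    weights = X[:-1] - X[1:]                 # simple quadrature weights
    logL = logL_of_X(X[1:])
    m = logL.max()
    return m + np.log(np.sum(weights * np.exp(logL - m)))

# Toy likelihood L(X) = exp(-X/s): the true evidence is s * (1 - exp(-1/s)).
s = 0.01
true_logZ = np.log(s * (1.0 - np.exp(-1.0 / s)))
rng = np.random.default_rng(1)
runs = [ns_log_evidence(500, 20000, lambda X: -X / s, rng) for _ in range(20)]
# Individual runs scatter around true_logZ; averaging many runs tightens the estimate.
```

The run-to-run scatter here comes entirely from the random $t_i$, which is what motivates replacing the point estimates of the volumes with posterior samples of the likelihood-prior-volume relation.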
Active matter systems evade the constraints of thermal equilibrium, leading to the emergence of intriguing collective behavior. A paradigmatic example is given by motor-filament mixtures, where the motion of motor proteins drives alignment and sliding interactions between filaments and their self-organization into macroscopic structures. After defining a microscopic model for these systems, we derive continuum equations that exhibit the formation of active supramolecular assemblies such as micelles, bilayers, and foams. The transition between these structures is driven by a branching instability, which destabilizes the orientational order within the micelles, leading to the growth of bilayers at high microtubule densities. Additionally, we identify a fingering instability, which modulates the shape of the micelle interface at high motor densities. We study the role of various mechanisms in these two instabilities, such as contractility, active splay, and anchoring, allowing for generalization beyond the system considered here.
How can Transformers model and learn enumerative geometry? What is a robust procedure for using Transformers in abductive knowledge discovery within a mathematician-machine collaboration? In this work, we introduce a new paradigm in computational enumerative geometry by analyzing the $\psi$-class intersection numbers on the moduli space of curves. By formulating the enumerative problem as a continuous optimization task, we develop a Transformer-based model for computing $\psi$-class intersection numbers based on the underlying quantum Airy structure. For a finite range of genera, our model is capable of regressing intersection numbers that span an extremely wide range of values, from $10^{-45}$ to $10^{45}$. To provide a proper inductive bias for capturing the recursive behavior of intersection numbers, we propose a new activation function, the Dynamic Range Activator (DRA). Moreover, given the severe heteroscedasticity of $\psi$-class intersections and the required precision, we quantify the uncertainty of the predictions using Conformal Prediction with a dynamic sliding window that is aware of the number of marked points. Next, we go beyond merely computing intersection numbers and explore the enumerative "world-model" of the Transformers. Through a series of causal inference and correlational interpretability analyses, we demonstrate that Transformers are actually modeling Virasoro constraints in a purely data-driven manner. Additionally, we provide evidence for the comprehension of several values appearing in the large-genus asymptotics of $\psi$-class intersection numbers through abductive hypothesis testing.
We consider the dimensional reduction of N=(2,0) conformal supergravity in six dimensions on a two-torus to N=4 conformal supergravity in four dimensions. At the level of kinematics, the six-dimensional Weyl multiplet is shown to reduce to a mixture of the N=4 Weyl and vector multiplets, which can be reinterpreted as a new off-shell multiplet of N=4 conformal supergravity. Similar multiplets have been constructed in other settings and are referred to as dilaton Weyl multiplets; we derive one here for the first time in a maximally supersymmetric context in four dimensions. Furthermore, we present the non-linear relations between all the six- and four-dimensional bosonic and fermionic fields, which are obtained by comparing the off-shell supersymmetry transformation rules.
Modern hydrodynamic simulations of core-collapse supernovae and neutron-star mergers require knowledge not only of the equilibrium properties of strongly interacting matter, but also of the system's response to perturbations, encoded in various transport coefficients. Using perturbative and holographic tools, we derive here an improved weak-coupling and a new strong-coupling result for the most important transport coefficient of unpaired quark matter, its bulk viscosity. These results are combined in a simple analytic pocket formula for the quantity that is rooted in perturbative quantum chromodynamics at high densities but takes into account nonperturbative holographic input at neutron-star densities, where the system is strongly coupled. This expression can be used in the modeling of unpaired quark matter at astrophysically relevant temperatures and densities.
In this work we present a study of $p\Lambda$ and $pp\Lambda$ scattering processes using femtoscopic correlation functions. This observable has recently been used to access the low-energy interaction of hadrons emitted in the final state of high-energy collisions, delivering information of unprecedented precision on the interactions among strange hadrons. The formalism for particle pairs is well established and relates the measured correlation functions to the scattering wave function and the emission source. In the present work we analyze $NN\Lambda$ scattering in free space and relate the corresponding wave function to the $pp\Lambda$ correlation measurement performed by the ALICE collaboration. The three-body problem is solved using the hyperspherical adiabatic basis. For the $p\Lambda$ and $pp\Lambda$ interactions, different models are used and their impact on the correlation function is studied. The three-body force considered in this work is anchored to the binding energy of the hypertriton and gives a good description of the two four-body hypernuclei. As a main result, we observe a pronounced low-energy peak in the $pp\Lambda$ correlation function, mainly produced by the $J^\pi=1/2^+$ three-body state. The study of this peak from both experimental and theoretical points of view will provide important constraints on the two- and three-body interactions.
We explore the potential application of quantum computers to the examination of lattice holography, which extends to the strongly coupled regime of the bulk theory. Using adiabatic evolution, we compute the ground state of a spin system on a $(2+1)$-dimensional hyperbolic lattice and measure the spin-spin correlation function on the boundary. Notably, we observe that, with resources achievable on forthcoming quantum devices, the correlation function demonstrates approximately scale-invariant behavior, aligning with the pivotal theoretical predictions of the anti-de Sitter/conformal field theory correspondence.