We investigate the mass-metallicity relationship of star-forming galaxies by analyzing the absorption-line spectra of ~200,000 galaxies in the Sloan Digital Sky Survey. The galaxy spectra are stacked in bins of stellar mass, and a population synthesis technique is applied, yielding the metallicities, ages, and star formation histories of the young and old stellar populations together with the interstellar reddening and extinction. We adopt different lengths of the initial starbursts and different initial mass functions for the calculation of model spectra of the single stellar populations contributing to the total integrated spectrum. We also allow for deviations of the ratio of extinction to reddening, R_V, from 3.1 and determine its value from the spectral fit. We find that burst length and R_V have a significant influence on the determination of metallicities, whereas the effect of the initial mass function is small. The R_V values obtained are larger than 3.1. The metallicities of the young stellar population agree with extragalactic spectroscopic studies of individual massive supergiant stars and are significantly higher than those of the older stellar population. This confirms galaxy evolution models in which metallicity depends on the ratio of gas to stellar mass and in which this ratio decreases with time. The star formation history is found to depend on galaxy stellar mass: massive galaxies are dominated by stars formed at early times.
Substructures are known to be good tracers of the dynamical states and recent accretion histories of the most massive collapsed structures in the universe, galaxy clusters. Observations find extremely massive substructures in some clusters, especially Abell 2744 (A2744), which are potentially in tension with the ΛCDM paradigm because they are not found directly in simulations. However, the methods used to measure substructure masses differ strongly between observations and simulations. Using the fully hydrodynamical cosmological simulation suite MAGNETICUM PATHFINDER, we develop a method to measure substructure masses in projection from simulations, similar to the observational approach. We identify a simulated A2744 counterpart that not only has eight substructures of similar mass fractions but also exhibits similar features in the hot gas component. This cluster formed only recently through a major merger together with at least six massive minor merger events since z = 1, where previously the most massive component had a mass of less than 1 × 10^14 M_⊙. We show that the mass fractions of all substructures together and of the eighth substructure separately are excellent tracers of the dynamical state and assembly history across all galaxy cluster mass ranges, with high fractions indicating merger events within the last 2 Gyr. Finally, we demonstrate that the differences between subhalo masses measured directly from simulations as bound masses and those measured in projection are due to methodology, with the latter generally 2-3 times larger than the former. We provide a predictor function to estimate projected substructure masses from SUBFIND masses for future comparison studies between simulations and observations.
LiteBIRD is a future satellite mission designed to observe the polarization of the cosmic microwave background radiation in order to probe the inflationary universe. LiteBIRD is set to observe the sky using three telescopes with transition-edge sensor bolometers. In this work we estimated the LiteBIRD instrumental sensitivity using its current design. We estimated the detector noise due to optical loading using physical-optics and ray-tracing simulations. The noise terms associated with the thermal carriers and the readout were modeled in the detector noise calculation. We calculated the observational sensitivities over the fifteen bands designed for the LiteBIRD telescopes using an assumed observation time efficiency.
We extend the multi-tracer (MT) formalism of the effective field theory of large-scale structure to redshift space, comparing the results of MT to a single-tracer (ST) analysis when extracting cosmological parameters from simulations. We used a sub-halo abundance matching method to obtain more realistic multi-tracer galaxy catalogs constructed from N-body simulations. Considering different values for the sample shot noise and volume, we show that the MT error bars on $A_s$, $\omega_{\rm cdm}$, and $h$ in a full-shape analysis are approximately $50\%$ smaller than those from ST. We find that the cosmological and bias coefficients from MT are less degenerate, indicating that the MT parameter basis is more orthogonal. We conclude that using MT combined with perturbation theory is a robust and competitive way to access the information present in the mildly non-linear scales.
While supervised neural networks have become state of the art for identifying the rare strong gravitational lenses from large imaging data sets, their selection remains significantly affected by the large number and diversity of nonlens contaminants. This work systematically evaluates and compares the performance of neural networks in order to move towards a rapid selection of galaxy-scale strong lenses with minimal human input in the era of deep, wide-scale surveys. We used multiband images from PDR2 of the HSC Wide survey to build test sets mimicking an actual classification experiment, with 189 strong lenses previously found over the HSC footprint and 70,910 nonlens galaxies in COSMOS. Multiple networks were trained on different sets of realistic strong-lens simulations and nonlens galaxies, with various architectures and data pre-processing schemes. The overall performance strongly depends on the construction of the ground-truth training data and typically, but not systematically, improves when using our baseline residual network architecture. Improvements are found when applying random shifts to the image centroids and square-root stretches to the pixel values, adding the z band, or using random viewpoints of the original images, but not when adding difference images to subtract emission from the central galaxy. The most significant gain is obtained with committees of networks trained on different data sets and showing a moderate overlap between their populations of false positives. Nearly perfect invariance to image quality can be achieved by training networks either with a large number of bands or jointly with the PSF and science frames. Overall, we show that it is possible to reach a TPR0 as high as 60% for the test sets under consideration, which opens promising perspectives for the pure selection of strong lenses without human input using the Rubin Observatory and other forthcoming ground-based surveys.
Context. Hot giant planets such as MASCARA-1 b are expected to have thermally inverted atmospheres, which makes them perfect laboratories for atmospheric characterization through high-resolution spectroscopy. Nonetheless, previous attempts at detecting the atmosphere of MASCARA-1 b in transmission have led to negative results.
Aims: We aim to detect the optical emission spectrum of MASCARA-1 b.
Methods: We used the high-resolution spectrograph PEPSI to observe MASCARA-1 (spectral type A8) near the secondary eclipse of the planet. We cross-correlated the spectra with synthetic templates computed for several atomic and molecular species.
Results: We detect Fe I, Cr I, and Ti I in the atmosphere of MASCARA-1 b with S/N ≈ 7, 4, and 5, respectively, and confirm the expected systemic velocity of ≈13 km s^-1 and the radial velocity semi-amplitude of MASCARA-1 b of ≈200 km s^-1. The detection of Ti is of particular importance in the context of the recently proposed phenomenon of Ti cold-trapping below a certain planetary equilibrium temperature.
Conclusions: We confirm the presence of an atmosphere around MASCARA-1 b through emission spectroscopy. We conclude that the atmospheric non-detection in transmission spectroscopy is due to the strong gravity of the planet and/or to the overlap between the planetary track and its Doppler shadow.
Using one of the largest volumes of the hydrodynamical cosmological simulation suite Magneticum, we study the evolution of protoclusters identified at redshift ≈ 4 with properties similar to the well-observed protocluster SPT2349-56. We identify 42 protoclusters in the simulation that are as massive and as rich in substructure as observed, confirming that these observed structures can already be virialized. The dynamics of the internally fast-rotating member galaxies within these protoclusters resemble observations, with the galaxies merging rapidly to form the cores of the brightest cluster galaxies of the assembling clusters. Half of the gas reservoir of these structures is in a hot phase, with the metal enrichment at a very early stage. These systems show good agreement with the observed amount of cold star-forming gas, largely enriched to solar values. We predict that some of the member galaxies are already quenched at z ≈ 4, rendering them undetectable through measurements of their gas reservoirs. Tracing the evolution of the protoclusters reveals that none of the typical mass indicators at high redshift are good tracers for predicting the present-day mass of the system. We find that none of the simulated protoclusters at z = 4.3 are among the top ten most massive clusters at redshift z = 0.2, with some barely reaching masses of M ≈ 2 × 10^14 M_⊙. Although the average star formation and mass growth rates in the simulated galaxies match observations at high redshift reasonably well, the simulation fails to reproduce the extremely high total star formation rates within the observed protoclusters, indicating that the subgrid models lack the ability to reproduce a higher star formation efficiency (or lower depletion timescales).
The orbital distribution of the S-star cluster surrounding the supermassive black hole in the center of the Milky Way is analyzed. A tight, roughly exponential dependence of the pericenter distance $r_p$ on orbital eccentricity $e_\star$ is found, $\log(r_p) \sim (1-e_\star)$, which cannot be explained simply by a random distribution of semi-major axes and eccentricities. No stars are found in the region with high $e_\star$ and large $\log r_p$ or in the region with low $e_\star$ and small $\log r_p$. G-clouds follow the same correlation. The likelihood $P(\log r_p, 1-e_\star)$ of determining the orbital parameters of S-stars is calculated; $P$ is very small for stars with large $e_\star$ and large $\log r_p$. S-stars might exist in this region, but determining their orbital parameters requires observations over a longer time period. On the other hand, if stars existed in the region of small $\log r_p$ and low $e_\star$, their orbital parameters should by now have been determined. That this region is unpopulated therefore indicates that no S-stars exist with these orbital characteristics, providing constraints on their formation. We call this region, defined by $\log(r_p/\mathrm{AU}) < 1.57 + 2.6\,(1-e_\star)$, the zone of avoidance. Finally, it is shown that the observed frequency of eccentricities and pericenter distances is consistent with a random sampling of $\log r_p$ and $e_\star$, provided one takes into account that no stars exist in the zone of avoidance and that orbital parameters cannot yet be determined for stars with large $r_p$ and large $e_\star$.
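As a concrete illustration of the zone-of-avoidance criterion quoted above, the short Python sketch below (with hypothetical orbital elements, not values from the paper) checks on which side of the boundary $\log(r_p/\mathrm{AU}) = 1.57 + 2.6\,(1-e_\star)$ a given orbit falls.

```python
import numpy as np

def in_zone_of_avoidance(r_p_au, e):
    """Zone-of-avoidance criterion quoted in the abstract:
    log10(r_p / AU) < 1.57 + 2.6 * (1 - e)."""
    return np.log10(r_p_au) < 1.57 + 2.6 * (1.0 - e)

# Hypothetical S-star-like orbits: (pericenter distance in AU, eccentricity)
orbits = {"star A": (120.0, 0.88), "star B": (25.0, 0.30)}
for name, (r_p, e) in orbits.items():
    side = "inside" if in_zone_of_avoidance(r_p, e) else "outside"
    print(f"{name}: log10(r_p/AU) = {np.log10(r_p):.2f}, 1-e = {1 - e:.2f} -> {side} the zone of avoidance")
```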
When analyzing real-world data it is common to work with event ensembles, which comprise sets of observations that collectively constrain the parameters of an underlying model of interest. Such models often have a hierarchical structure, where "local" parameters impact individual events and "global" parameters influence the entire dataset. We introduce practical approaches for optimal dataset-wide probabilistic inference in cases where the likelihood is intractable, but simulations can be realized via forward modeling. We construct neural estimators for the likelihood(-ratio) or posterior and show that explicitly accounting for the model's hierarchical structure can lead to tighter parameter constraints. We ground our discussion using case studies from the physical sciences, focusing on examples from particle physics (particle collider data) and astrophysics (strong gravitational lensing observations).
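To make the hierarchical setup described in this abstract concrete, the toy example below (our own illustration, not the authors' code; the Gaussian simulator, prior, and network size are assumptions) trains a per-event neural likelihood-ratio estimator for a global parameter and then sums the per-event log ratios over an observed ensemble, which is the basic ingredient of dataset-wide simulation-based inference.

```python
# Minimal sketch of per-event neural ratio estimation for a global parameter,
# with local (per-event) latents marginalised implicitly by the forward model.
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate(theta):
    """Toy forward model: each event has a local latent z and an observation x."""
    z = 0.5 * torch.randn_like(theta)          # local nuisance per event
    x = theta + z + torch.randn_like(theta)    # observed summary per event
    return x

# Classifier training data: joint pairs (x, theta) vs shuffled (marginal) pairs.
n = 20000
theta = 4.0 * torch.rand(n, 1) - 2.0           # prior: Uniform(-2, 2) on the global parameter
x = simulate(theta)
theta_marg = theta[torch.randperm(n)]          # break the (x, theta) dependence

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for epoch in range(200):
    logits_joint = net(torch.cat([x, theta], dim=1))
    logits_marg = net(torch.cat([x, theta_marg], dim=1))
    loss = bce(logits_joint, torch.ones_like(logits_joint)) + \
           bce(logits_marg, torch.zeros_like(logits_marg))
    opt.zero_grad(); loss.backward(); opt.step()

# The trained logit approximates log r(x | theta) = log p(x|theta) - log p(x);
# for an ensemble of events the per-event log ratios simply add.
with torch.no_grad():
    theta_true = torch.full((500, 1), 0.7)
    data = simulate(theta_true)                            # one observed ensemble
    grid = torch.linspace(-2, 2, 81).unsqueeze(1)
    logL = torch.stack([net(torch.cat([data, t.expand_as(data)], dim=1)).sum()
                        for t in grid])
    print("maximum of the summed log-ratio over the grid:",
          grid[logL.argmax()].item())
```

In a full hierarchical treatment one would additionally amortize over, or explicitly infer, the per-event local parameters; this sketch only marginalizes them implicitly through the forward model.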
Radiative corrections are crucial for modern high-precision physics experiments and are an area of active research in the experimental and theoretical communities. Here we provide an overview of the state of the field of radiative corrections, with a focus on several topics: lepton-proton scattering, QED corrections in deep-inelastic scattering, and QED corrections in radiative light-hadron decays. Particular emphasis is placed on two-photon exchange, believed to be responsible for the proton form-factor discrepancy, and on the associated Monte Carlo codes. We encourage the community to continue developing theoretical techniques to treat radiative corrections and to perform experimental tests of these corrections.
With its cored surface brightness profile, the elliptical galaxy NGC 5419 appears to be a typical high-mass early-type galaxy (ETG). However, the galaxy hosts two distinct nuclei in its center. We use high-signal spectral observations with MUSE (the Multi-Unit Spectroscopic Explorer; based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programme 099.B-0193(A)) and novel triaxial dynamical orbit models to reveal a surprisingly isotropic central orbit distribution in NGC 5419. Recent collisionless simulations of merging massive ETGs suggest a two-phase core formation model, in which the low-density stellar core forms rapidly as supermassive black holes (SMBHs) sink into the center due to dynamical friction. Only afterwards do the SMBHs form a hard binary, and the black hole scouring process slowly changes the central orbit distribution from isotropic to tangential. The observed cored density profile, the double nucleus, and the isotropic center of NGC 5419 together thus point to an intermediate evolutionary state in which the first phase of core formation has taken place, yet the scouring process is only beginning. This implies that the double nucleus is an SMBH binary. Our triaxial dynamical models indicate a total mass of the two SMBHs in the center of NGC 5419 of M_BH = (1.0 ± 0.08) × 10^10 M_⊙. Moreover, we find that NGC 5419's complex kinematically distinct core can be explained by a coherent flip of the direction of orbital rotation of stars on tube orbits at ~3 kpc distance from the galaxy center, together with projection effects. This is also in agreement with merger simulations hosting SMBHs in the same mass regime.
We study the broadband emission of Mrk 501 using multiwavelength observations from 2017 to 2020 performed with a multitude of instruments, involving, among others, MAGIC, the Fermi Large Area Telescope (LAT), NuSTAR, Swift, GASP-WEBT, and the Owens Valley Radio Observatory. Mrk 501 showed extremely low broadband activity, which may help to unravel its baseline emission. Nonetheless, significant flux variations are detected at all wave bands, with the highest variations occurring at X-rays and very-high-energy (VHE) γ-rays. A significant correlation (>3σ) between X-rays and VHE γ-rays is measured, supporting leptonic scenarios to explain the variable parts of the emission, also during low activity. This is further supported when we extend our data set back to 2008 and identify, for the first time, significant correlations between the Swift X-Ray Telescope and Fermi-LAT fluxes. We additionally find correlations between high-energy γ-rays and radio, with the radio lagging by more than 100 days, placing the γ-ray emission zone upstream of the radio-bright regions in the jet. Furthermore, Mrk 501 showed historically low activity in X-rays and VHE γ-rays from mid-2017 to mid-2019, with a stable VHE flux (>0.2 TeV) of 5% of the emission of the Crab Nebula. The broadband spectral energy distribution (SED) of this 2 yr long low state, the potential baseline emission of Mrk 501, can be characterized with one-zone leptonic models and with (lepto-)hadronic models fulfilling the neutrino flux constraints from IceCube. We explore the time evolution of the SED toward the low state, revealing that the stable baseline emission may be ascribed to a standing shock, and the variable emission to an additional expanding or traveling shock.
Collisionless shock waves in supernova remnants and the solar wind heat electrons less effectively than they heat ions, as is predicted by kinetic simulations. However, the values of T_e/T_p inferred from the Hα profiles of supernova remnant shocks behave differently as a function of Mach number or Alfvén Mach number than what is measured in the solar wind or predicted by simulations. Here we determine T_e/T_p for supernova remnant shocks using Hα profiles, shock speeds from proper motions, and electron temperatures from X-ray spectra. We also improve the estimates of sound speed and Alfvén speed used to determine Mach numbers. We find that the Hα determinations are robust and that the discrepancies among supernova remnant shocks, solar wind shocks, and computer-simulated shocks remain. We discuss some possible contributing factors, including shock precursors, turbulence, and varying preshock conditions.
Galaxy clusters and cosmic voids, the most extreme objects in our Universe in terms of mass and size, trace two opposite sides of the large-scale matter density field. By studying their abundance as a function of their mass and radius, respectively, i.e. the halo mass function (HMF) and void size function (VSF), it is possible to achieve fundamental constraints on the cosmological model. While the HMF has already been extensively exploited, providing robust constraints on the main cosmological model parameters (e.g. Ωm, σ8, and S8), the VSF is still emerging as a viable and effective cosmological probe. Given the expected complementarity of these statistics, in this work we aim to estimate the constraining power deriving from their combination. To this end, we exploit realistic mock samples of galaxy clusters and voids extracted from state-of-the-art large hydrodynamical simulations, in the redshift range 0.2 ≤ z ≤ 1. We perform an accurate calibration of the free parameters of the HMF and VSF models, needed to take into account the differences between the types of mass tracers used in this work and those considered in previous literature analyses. Then, we obtain constraints on Ωm and σ8 by performing a Bayesian analysis. We find that cluster and void counts represent powerful independent and complementary probes to test the cosmological framework. In particular, the constraining power of the HMF on Ωm and σ8 improves with the VSF contribution, increasing the precision of the S8 constraint by about 60 per cent.
We present the first results of one extremely high-resolution, non-radiative magnetohydrodynamical cosmological zoom-in simulation of a massive cluster with a virial mass M$_\mathrm{vir} = 2.0 \times 10^{15}$ solar masses. We adopt a mass resolution of $4 \times 10^5$ M$_{\odot}$ with a maximum spatial resolution of around 250 pc in the central regions of the cluster. We follow the detailed amplification process in a resolved small-scale turbulent dynamo in the intracluster medium (ICM), with strong exponential growth until redshift 4, after which the field grows weakly in the adiabatic compression limit until redshift 2. The energy in the field is slightly reduced as the system approaches redshift zero, in agreement with adiabatic decompression. The field structure is highly turbulent in the center and shows field reversals on length scales of a few tens of kpc, as well as an anti-correlation between the radial and angular field components in the central region that is ordered by the small-scale turbulent dynamo action. The large-scale field on Mpc scales is almost isotropic, indicating that the structure formation process in massive galaxy cluster formation suppresses memory of both the initial field configuration and the morphology amplified via the turbulent dynamo in the central regions. We demonstrate that extremely high-resolution simulations of the magnetized ICM that resolve the small-scale magnetic field structure are within reach, which is of major importance for the injection and transport of cosmic rays in the ICM. This work is a major cornerstone for follow-up studies with an on-the-fly treatment of cosmic rays to model electron synchrotron and gamma-ray emission in detail.
We present a method for obtaining unbiased signal estimates in the presence of a significant background, eliminating the need for a parametric model for the background itself. Our approach is based on a minimal set of conditions for observation and background estimators, which are typically satisfied in practical scenarios. To showcase the effectiveness of our method, we apply it to simulated data from the planned dielectric axion haloscope MADMAX.
The properties of strongly coupled lattice gauge theories at finite density, as well as in real time, have largely eluded first-principles studies on the lattice. This is due to the failure of importance sampling for systems with a complex action. An alternative that evades the sign problem is quantum simulation. Although still in its infancy, much progress has been made in devising algorithms to address these problems. In particular, recent efforts have addressed the question of how to produce thermal Gibbs states on a quantum computer. In this study, we apply a variational quantum algorithm to a low-dimensional model that has a local abelian gauge symmetry. We demonstrate how this approach can be applied to obtain information on the phase diagram as well as on unequal-time correlation functions at non-zero temperature.
We present a detailed multi-wavelength study of star formation in the dwarf galaxy NGC 4395, which hosts an active galactic nucleus (AGN). From our observations with the Ultra-Violet Imaging Telescope, we have compiled a catalogue of 284 star-forming (SF) regions, of which we could detect 120 in Hα observations. Across the entire galaxy, we found the extinction-corrected star formation rate (SFR) in the far-ultraviolet (FUV) to range from 2.0 × 10^-5 M_⊙ yr^-1 to 1.5 × 10^-2 M_⊙ yr^-1 with a median of 3.0 × 10^-4 M_⊙ yr^-1, and the ages to lie in the range of ∼1 to 98 Myr with a median of 14 Myr. In Hα we found the SFR to range from 7.2 × 10^-6 M_⊙ yr^-1 to 2.7 × 10^-2 M_⊙ yr^-1 with a median of 1.7 × 10^-4 M_⊙ yr^-1, and the ages to lie between 3 and 6 Myr with a median of 5 Myr. The stellar ages derived from Hα show a gradual decline with galactocentric distance. We found three SF regions close to the center of NGC 4395 with high SFR in both Hα and UV, which could be attributed to feedback effects from the AGN. We also found six other SF regions in one of the spiral arms with higher SFR. These are very close to supernova remnants, which could have enhanced the SFR locally. We obtained a specific SFR (SFR per unit mass) for the whole galaxy of 4.64 × 10^-10 yr^-1.
Context. The dead zone outer edge corresponds to the transition from the magnetically dead to the magnetorotational instability (MRI) active regions in the outer protoplanetary disk midplane. It has previously been hypothesized to be a prime location for dust particle trapping. A consistent approach to assess this idea has yet to be developed, however, since the interplay between dust evolution and MRI-driven accretion over millions of years has been poorly understood.
Aims: We provide an important step toward a better understanding of the MRI-dust coevolution in protoplanetary disks. In this pilot study, we present a proof of concept that dust evolution ultimately plays a crucial role in the MRI activity.
Methods: First, we study how a fixed power-law dust size distribution with varying parameters impacts the MRI activity, especially the steady-state MRI-driven accretion, by employing and improving our previous 1+1D MRI-driven turbulence model. Second, we relax the steady-state accretion assumption in this disk accretion model and partially couple it to a dust evolution model, in order to investigate how the evolution of dust (dynamics and grain growth combined) and MRI-driven accretion are intertwined on million-year timescales, using a more sophisticated modeling of the gas ionization degree.
Results: Dust coagulation and settling lead to a higher gas ionization degree in the protoplanetary disk, resulting in stronger MRI-driven turbulence as well as a more compact dead zone. On the other hand, fragmentation has the opposite effect because it replenishes the disk with small dust particles, which are very efficient at sweeping up free electrons and ions from the gas phase. Since the dust content of the disk decreases over millions of years of evolution due to radial drift, the MRI-driven turbulence overall becomes stronger and the dead zone more compact, until the disk dust-gas mixture eventually behaves as a grain-free plasma. Furthermore, our results show that dust evolution alone does not lead to a complete reactivation of the dead zone. For typical T Tauri stars, we find that the dead zone outer edge is expected to be located roughly between 10 au and 50 au during the disk lifetime for our choice of the magnetic field strength and configuration. Finally, the MRI activity evolution is expected to be crucially sensitive to the choice made for the minimum grain size of the dust distribution.
Conclusions: The MRI activity evolution (hence the temporal evolution of the MRI-induced α parameter) is controlled by dust evolution and occurs on a timescale of local dust growth, as long as there are enough dust particles in the disk to dominate the recombination process for the ionization chemistry. Once that is no longer the case, the MRI activity evolution is expected to be controlled by gas evolution and occurs on a viscous evolution timescale.
Light antinuclei, such as antideuterons and antihelium-3, are ideal probes for new, exotic physics because their astrophysical backgrounds are suppressed at low energies. In order to fully exploit the inherent discovery potential of light antinuclei, a reliable description of their production cross sections in cosmic-ray interactions is crucial. We therefore provide the cross sections of antideuteron and antihelium-3 production in pp, pHe, Hep, HeHe, p̄p, and p̄He collisions at energies relevant for secondary production in the Milky Way, in a tabulated form that is convenient to use. These predictions are based on QGSJET-II-04m and the state-of-the-art coalescence model WiFunC, which evaluates the coalescence probability on an event-by-event basis, including both momentum correlations and the dependence on the emission volume. In addition, we comment on the importance of a Monte Carlo description of antideuteron production and on the use of event generators in general. In particular, we discuss the effect of two-particle momentum correlations provided by Monte Carlo event generators on antinuclei production.
One necessary step for probing the nature of self-interacting dark matter (SIDM) particles with astrophysical observations is to pin down any possible velocity dependence in the SIDM cross section. Major challenges for achieving this goal include eliminating, or mitigating, the impact of the baryonic components and tidal effects within the dark matter halos of interest -- the effects of these processes can be highly degenerate with those of dark matter self-interactions at small scales. In this work we select 9 isolated galaxies and brightest cluster galaxies (BCGs) with baryonic components small enough that the baryonic gravitational potentials do not significantly influence the gravothermal evolution of the halos. We then constrain the parameters of Rutherford and Møller scattering cross-section models with the measured rotation curves and stellar kinematics through the gravothermal fluid formalism and the isothermal method. The cross sections constrained by the two methods are consistent at the $1\sigma$ confidence level, but the isothermal method prefers cross sections larger than the gravothermal constraints by a factor of $\sim3$.
Neutrinos propagating in a dense neutrino gas, such as those expected in core-collapse supernovae (CCSNe) and neutron star mergers (NSMs), can experience fast flavor conversions on relatively short scales. This can happen if the neutrino electron lepton number (νELN) angular distribution crosses zero in a certain direction. Despite this, most state-of-the-art CCSN and NSM simulations do not provide such detailed angular information and instead supply only a few moments of the neutrino angular distributions. In this study we employ, for the first time, a machine learning (ML) approach to this problem and show that it can be extremely successful in detecting νELN crossings on the basis of the zeroth and first moments. We observe that an accuracy of ∼95% can be achieved by the ML algorithms, which almost corresponds to the Bayes error rate of our problem. Considering its remarkable efficiency and agility, the ML approach provides an unprecedented opportunity to evaluate the occurrence of fast flavor conversions in CCSN and NSM simulations on the fly. We also provide our ML methodologies on GitHub.
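As an illustration of the kind of moment-based classification described here (our own toy example, not the released GitHub code), the sketch below builds synthetic angular distributions with a maximum-entropy closure, labels whether the resulting ELN distribution crosses zero, and trains a small scikit-learn classifier on the number-density ratio and the two flux factors.

```python
# Illustrative sketch only: detect ELN angular crossings from zeroth and first
# moments.  The maximum-entropy angular distribution and the sampled moment
# ranges are assumptions of this example.
import numpy as np
from scipy.optimize import brentq
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
mu = np.linspace(-1.0, 1.0, 401)                 # grid in propagation-angle cosine

def eta_from_flux_factor(f):
    """Invert the Langevin function L(eta) = coth(eta) - 1/eta = <mu> (f >= 0)."""
    if f < 1e-6:
        return 0.0
    return brentq(lambda e: 1.0 / np.tanh(e) - 1.0 / e - f, 1e-8, 60.0)

def medf(eta):
    """Normalised maximum-entropy angular distribution on mu in [-1, 1]."""
    if abs(eta) < 1e-6:
        return np.full_like(mu, 0.5)
    return eta * np.exp(eta * mu) / (2.0 * np.sinh(eta))

n = 4000
alpha = rng.uniform(0.3, 2.0, n)       # ratio of nu_e-bar to nu_e number densities
f_e   = rng.uniform(0.0, 0.9, n)       # nu_e flux factor (first / zeroth moment)
f_eb  = rng.uniform(0.0, 0.9, n)       # nu_e-bar flux factor

labels = np.empty(n, dtype=int)
for i in range(n):
    g = medf(eta_from_flux_factor(f_e[i])) - alpha[i] * medf(eta_from_flux_factor(f_eb[i]))
    labels[i] = int(np.any(g > 0) and np.any(g < 0))   # ELN crossing present?

X = np.column_stack([alpha, f_e, f_eb])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("crossing-detection accuracy on held-out moments:", clf.score(X_te, y_te))
```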
We present new computations for Feynman integrals relevant to Higgs plus jet production at three loops, including first results for a non-planar class of integrals. The results are expressed in terms of generalised polylogarithms up to transcendental weight six. We also provide the full canonical differential equations, which allow us to make structural observations about the answer. In particular, we find a counterexample to previously conjectured adjacency relations, for a planar integral of the tennis-court type. Additionally, for a non-planar triple-ladder diagram, we find two novel alphabet letters. This information may be useful for future bootstrap approaches.
Theoretically predicted yields of elements created by the rapid neutron capture (r-)process carry potentially large uncertainties associated with incomplete knowledge of nuclear properties and approximate hydrodynamical modelling of the matter ejection processes. We present an in-depth study of the nuclear uncertainties by varying theoretical nuclear input models that describe the experimentally unknown neutron-rich nuclei. This includes two frameworks for calculating the radiative neutron capture rates and 14 different models for nuclear masses, β-decay rates, and fission properties. Our r-process nuclear network calculations are based on detailed hydrodynamical simulations of dynamically ejected material from NS-NS or NS-BH binary mergers, plus the secular ejecta from BH-torus systems. The impact of nuclear uncertainties on the r-process abundance distribution and the early radioactive heating rate is found to be modest (within a factor of ~20 for individual A > 90 abundances and a factor of 2 for the heating rate). However, the impact on the late-time heating rate is more significant and depends strongly on the contribution from fission. We find a significantly higher sensitivity to the nuclear physics input if only a single trajectory is used instead of ensembles with a much larger number of trajectories (ranging between 150 and 300), and the quantitative effects of the nuclear uncertainties depend strongly on the adopted conditions for the individual trajectory. We use the predicted Th/U ratio to estimate the cosmochronometric age of six metal-poor stars and find the impact of the nuclear uncertainties to be up to 2 Gyr.
Modeling of strongly gravitationally lensed galaxies is often required in order to use them as astrophysical or cosmological probes. With current and upcoming wide-field imaging surveys, the number of detected lenses is increasing significantly, such that automated and fast modeling procedures for ground-based data are urgently needed. This is especially pertinent for short-lived lensed transients, in order to plan follow-up observations. Therefore, we present in a companion paper a neural network predicting the parameter values with corresponding uncertainties of a singular isothermal ellipsoid (SIE) mass profile with external shear. In this work, we also present a newly developed pipeline, glee_auto.py, that can be used to model any galaxy-scale lensing system consistently. In contrast to previous automated modeling pipelines that require high-resolution space-based images, glee_auto.py is optimized to work well on ground-based images such as those from the Hyper Suprime-Cam (HSC) Subaru Strategic Program or the upcoming Rubin Observatory Legacy Survey of Space and Time. We further present glee_tools.py, a flexible automation code for individual modeling that has no direct decisions and assumptions implemented regarding the lens system setup or image resolution. Both pipelines, in addition to our modeling network, reduce the user input time drastically and thus are important for future modeling efforts. We applied the network to 31 real galaxy-scale lenses from HSC and compared the results to traditional models based on Markov chain Monte Carlo sampling obtained from our semi-autonomous pipelines. In the direct comparison, we find a very good match for the Einstein radius. The lens mass center and ellipticity show reasonable agreement. The main discrepancies pertain to the external shear, as is expected from our tests on mock systems, where the neural network always predicts values close to zero for the complex components of the shear. In general, our study demonstrates that neural networks are a viable and ultrafast approach for measuring lens-galaxy masses from ground-based data in the upcoming era with ~10^5 lenses expected.
One of science's greatest challenges is how life can spontaneously emerge from a mixture of abiotic molecules. A complicating factor is that life is inherently unstable, and, by extension, so are its molecules: RNA and proteins are prone to hydrolysis and denaturation. For the synthesis of life, or to better understand its emergence at its origin, selection mechanisms are needed for such inherently unstable molecules. Here, we present a chemically fueled dynamic combinatorial library as a model for RNA oligomerization and deoligomerization that sheds new light on selection and purification mechanisms under kinetic control. In the experiments, nucleotide oligomers can be sustained only by continuous production. We find that hybridization is a powerful tool for selecting unstable molecules, as it offers feedback on oligomerization and deoligomerization rates. Template-based copying can thereby select molecules of specific lengths and sequences. Moreover, we find that templation can also be used to purify libraries of oligomers. Further, template-based copying within coacervate-based protocells changes the compartment's physical properties, such as its ability to fuse. Such reciprocal coupling between information sequences and physical properties is a key step toward synthetic life.
Aims: We want to find the distribution of initial conditions that best reproduces disc observations at the population level.
Methods: We first ran a parameter study using a 1D model that includes the viscous evolution of a gas disc, dust, and pebbles, coupled with an emission model to compute the millimetre flux observable with ALMA. This was used to train a machine learning surrogate model that can compute the relevant quantities for comparison with observations in seconds. The surrogate model was then used to perform parameter studies and to generate synthetic disc populations.
Results: Performing a parameter study, we find that internal photoevaporation leads to a lower dependency of disc lifetime on stellar mass than external photoevaporation. This dependence should be investigated in the future. Performing population synthesis, we find that under the combined losses of internal and external photoevaporation, discs are too short lived.
Conclusions: To match observational constraints, future models of disc evolution need to include one or a combination of the following processes: infall of material to replenish the discs, shielding of the disc from internal photoevaporation due to magnetically driven disc winds, and extinction of external high-energy radiation. Nevertheless, disc properties in low-external-photoevaporation regions can be reproduced by more massive and compact discs. In this case, the optimum values of the α viscosity parameter lie between 3 × 10^-4 and 10^-3, with internal photoevaporation being the main mode of disc dispersal.
Tables 3 and 4 are only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/673/A78
Supernovae (SNe) that have been multiply imaged by gravitational lensing are rare and powerful probes for cosmology. Each detection is an opportunity to develop the critical tools and methodologies needed as the sample of lensed SNe increases by orders of magnitude with the upcoming Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope. The latest such discovery is of the quadruply imaged Type Ia SN 2022qmx (aka "SN Zwicky") at z = 0.3544. SN Zwicky was discovered by the Zwicky Transient Facility in spatially unresolved data. Here we present follow-up Hubble Space Telescope observations of SN Zwicky, the first from the multicycle "LensWatch" (www.lenswatch.org) program. We measure photometry for each of the four images of SN Zwicky, which are resolved in three WFC3/UVIS filters (F475W, F625W, and F814W) but unresolved in WFC3/IR F160W, and present an analysis of the lensing system using a variety of independent lens modeling methods. We find consistency between the lens-model-predicted time delays (≲1 day) and the delays estimated from the single epoch of Hubble Space Telescope colors (≲3.5 days), including the uncertainty from chromatic microlensing (~1-1.5 days). Our lens models converge to an Einstein radius of $\theta_{\rm E} = (0.168^{+0.009}_{-0.005})''$, the smallest yet seen in a lensed SN system. The "standard candle" nature of SN Zwicky provides magnification estimates independent of the lens modeling that are brighter than predicted by $\sim 1.7^{+0.8}_{-0.6}$ mag and $\sim 0.9^{+0.8}_{-0.6}$ mag for two of the four images, suggesting significant microlensing and/or additional substructure beyond the flexibility of our image-position mass models.
Energetic jets that traverse the quark-gluon plasma created in heavy-ion collisions serve as excellent probes to study this new state of deconfined QCD matter. Presently, however, our ability to achieve a crisp theoretical interpretation of the growing number of jet observables measured in experiments is hampered by the presence of selection biases. The aim of this work is to minimize those selection biases associated with the modification of the quark- versus gluon-initiated jet fraction in order to assess the presence of other medium-induced effects, namely color decoherence, by exploring the rapidity dependence of jet substructure observables. So far, all jet substructure measurements at midrapidity have shown that heavy-ion jets are narrower than vacuum jets. We show both analytically and with Monte Carlo simulations that if the narrowing effect persists at forward rapidities, where the quark-initiated jet fraction is greatly increased, this could serve as an unambiguous experimental observation of color decoherence dynamics in heavy-ion collisions.
Feedback mediated by cosmic rays (CRs) is an important process in galaxy formation. Because CRs are long-lived and because they are transported along the magnetic field lines independently of any gas flow, they can efficiently distribute their feedback energy within the galaxy. We present an in-depth investigation of (i) how CRs launch galactic winds from a disc that is forming in a $10^{11} \, \rm {M}_\odot$ halo and (ii) the state of CR transport inside the galactic wind. To this end, we use the AREPO moving-mesh code and model CR transport with the two-moment description of CR hydrodynamics. This model includes the CR interaction with the gyroresonant Alfvén waves that enable us to self-consistently calculate the CR diffusion coefficient and CR transport speeds based on coarse-grained models for plasma physical effects. This delivers insight into key questions such as whether the effective CR transport is streaming-like or diffusive-like, how the CR diffusion coefficient and transport speed change inside the circumgalactic medium, and to what degree the two-moment approximation is needed to faithfully capture these effects. We find that the CR-diffusion coefficient reaches a steady state in most environments with the notable exception of our newly discovered Alfvén-wave dark regions where the toroidal wind magnetic field is nearly perpendicular to the CR pressure gradient so that CRs are unable to excite the gyroresonant Alfvén waves. However, CR transport itself cannot reach a steady state and is not well described by either the CR streaming paradigm, the CR diffusion paradigm, or a combination of both.
We show that nonstandard neutrino self-interactions can lead to total flavor equipartition in a dense neutrino gas, such as those expected in core-collapse supernovae. In this first investigation of this phenomenon in the multiangle scenario, we demonstrate that such a flavor equipartition can occur on very short scales, and therefore very deep inside the newly formed proto-neutron star, with a possible significant impact on the physics of core-collapse supernovae. Our findings imply that future galactic core-collapse supernovae can appreciably probe nonstandard neutrino self-interactions, for certain cases even when they are many orders of magnitude smaller than the Standard Model terms.
Absorption features in stellar atmospheres are often used to calibrate photocentric velocities for the kinematic analysis of further spectral lines. The Li feature at ∼6708 Å is commonly used, especially in the case of young stellar objects, for which it is one of the strongest absorption lines. However, this complex feature comprises two isotopic fine-structure doublets. We empirically measured the wavelength of this Li feature in a sample of young stars from the PENELLOPE/VLT programme (using X-shooter, UVES, and ESPRESSO data) as well as in HARPS data. For 51 targets, we fit 314 individual spectra using the STAR-MELT package, resulting in 241 accurately fitted Li features given the automated goodness-of-fit threshold. We find the mean air wavelength to be 6707.856 Å, with a standard error of 0.002 Å (0.09 km s^-1) and a weighted standard deviation of 0.026 Å (1.16 km s^-1). The observed spread in measured positions spans 0.145 Å, or 6.5 km s^-1, which is up to a factor of six higher than the velocity errors typically reported in high-resolution studies. We also find a correlation between the effective temperature of the star and the wavelength of the central absorption. We discuss how exclusively using this Li feature as a reference for the photocentric velocity in young stars might introduce a systematic positive offset in wavelength in measurements of further spectral lines. If outflow-tracing forbidden lines, such as [O I] 6300 Å, are more blueshifted than previously thought, this favours a disc wind as the origin of this emission in young stars.
Based on observations collected at the European Southern Observatory under ESO programmes 105.207T and 106.20Z8.
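For reference, the weighted mean, the weighted standard deviation, and their conversion to velocity units via dv = c dλ/λ, as quoted in the abstract above, can be computed as in the following sketch; the centroid values and uncertainties below are invented placeholders, not the measured sample.

```python
# Toy numbers, not the measured PENELLOPE/HARPS sample: weighted statistics of
# fitted Li 6708 centroids and their conversion to velocity units.
import numpy as np

c_kms = 299792.458
lam = np.array([6707.83, 6707.86, 6707.88, 6707.85, 6707.90])   # fitted centroids [Angstrom] (hypothetical)
err = np.array([0.02, 0.03, 0.02, 0.04, 0.03])                  # per-fit uncertainties [Angstrom] (hypothetical)

w = 1.0 / err**2
mean = np.sum(w * lam) / np.sum(w)                        # inverse-variance weighted mean
std = np.sqrt(np.sum(w * (lam - mean)**2) / np.sum(w))    # weighted standard deviation
sem = np.sqrt(1.0 / np.sum(w))                            # standard error of the weighted mean

print(f"weighted mean = {mean:.3f} A, weighted std = {std:.3f} A")
print(f"std in velocity units: {c_kms * std / mean:.2f} km/s, "
      f"standard error: {c_kms * sem / mean:.2f} km/s")
```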
Despite the observation of significant suppressions of b → s μ^+μ^- branching ratios, no clear sign of New Physics (NP) has been identified in the ΔF = 2 observables ΔM_{d,s}, ε_K and the mixing-induced CP asymmetries S_{ψK_S} and S_{ψφ}. Assuming negligible NP contributions to these observables allows one to determine the CKM parameters without being involved in the tensions between inclusive and exclusive determinations of |V_{cb}| and |V_{ub}|. Furthermore, this method avoids the impact of NP on the determination of these parameters, which is likely present in global fits. Simultaneously, it provides SM predictions for numerous rare K and B branching ratios that are the most accurate to date. Analyzing this scenario within Z' models, we point out, following the 2009 observations of Monika Blanke and ours of 2020, that despite the absence of NP contributions to ε_K, significant NP contributions to K^+ → π^+ ν ν̄, K_L → π^0 ν ν̄, K_S → μ^+μ^-, K_L → π^0 ℓ^+ℓ^-, ε'/ε and ΔM_K can be present. In the simplest scenario, this is guaranteed, as far as flavour changes are concerned, by a single non-vanishing imaginary left-handed Z' coupling g_{sd}^L. This scenario implies very stringent correlations between the kaon observables considered by us. In particular, the identification of NP in any of these observables automatically implies NP contributions to the remaining ones, under the assumption of non-vanishing flavour-conserving Z' couplings to q q̄, ν ν̄, and μ^+μ^-. A characteristic feature of this scenario is a strict correlation between the K^+ → π^+ ν ν̄ and K_L → π^0 ν ν̄ branching ratios on a branch parallel to the Grossman-Nir bound. Moreover, ΔM_K is automatically suppressed, as seems to be required by the results of the RBC-UKQCD lattice QCD collaboration. Furthermore, there is no NP contribution to K_L → μ^+μ^-, which otherwise would bound NP effects in K^+ → π^+ ν ν̄. Of particular interest are the correlations of the K^+ → π^+ ν ν̄ and K_L → π^0 ν ν̄ branching ratios and of ΔM_K with the ratio ε'/ε. We investigate the impact of renormalization group effects in the context of the SMEFT on this simple scenario.
We construct extended TQFTs associated to Rozansky-Witten models with target manifolds T^*C^n. The starting point of the construction is the 3-category whose objects are such Rozansky-Witten models, and whose morphisms are defects of all codimensions. By truncation, we obtain a (non-semisimple) 2-category C of bulk theories, surface defects, and isomorphism classes of line defects. Through a systematic application of the cobordism hypothesis we construct a unique extended oriented 2-dimensional TQFT valued in C for every affine Rozansky-Witten model. By evaluating this TQFT on closed surfaces we obtain the infinite-dimensional state spaces (graded by flavour and R-charges) of the initial 3-dimensional theory. Furthermore, we explicitly compute the commutative Frobenius algebras that classify the restrictions of the extended theories to circles and bordisms between them.
A long-standing observed curiosity of globular clusters (GCs) has been that both the number and total mass of GCs in a galaxy are linearly correlated with the galaxy's virial mass, whereas its stellar component shows no such linear correlation. This work expands on an empirical model for the numbers and ages of GCs in galaxies presented by Valenzuela et al. (2021) that is consistent with recent observational data from massive elliptical galaxies down to the dwarf galaxy regime. Applying the model to simulations, GC numbers are shown to be excellent tracers for the dark matter (DM) virial mass, even when distinct formation mechanisms are employed for blue and red GCs. Furthermore, the amount of DM smooth accretion is encoded in the GC abundances, therefore providing a measure for an otherwise nearly untraceable component of the formation history of galaxies.
Photoevaporative disc winds play a key role in our understanding of circumstellar disc evolution, especially in the final stages, and they might affect the planet formation process and the final location of planets. The study of transition discs (i.e. discs with a central dust cavity) is central to our understanding of the photoevaporation process and disc dispersal. However, we need to distinguish cavities created by photoevaporation from those created by giant planets. Theoretical models are necessary to identify possible observational signatures of the two different processes, but models that discriminate between them are still lacking. In this paper we study a sample of transition discs obtained from radiation-hydrodynamic simulations of internally photoevaporated discs, and focus on the dust dynamics relevant for current ALMA observations. We then compare our results with gaps opened by super-Earths/giant planets, finding that the steepness of the photoevaporated cavity depends mildly on gap size and is similar to that of a 1 M_J planet. However, the dust density drops less rapidly inside the photoevaporated cavity compared to the planetary case due to less efficient dust filtering. This effect is visible in the resulting spectral index, which shows a larger spectral index at the cavity edge and a shallower increase inside it with respect to the planetary case. The combination of cavity steepness and spectral index might reveal the true nature of transition discs.
Strong-lensing time delays enable the measurement of the Hubble constant (H0) independently of other traditional methods. The main limitation to the precision of time-delay cosmography is the mass-sheet degeneracy (MSD). Some of the previous TDCOSMO analyses broke the MSD by making standard assumptions about the mass density profile of the lens galaxy, reaching 2% precision from seven lenses. However, this approach could potentially bias the H0 measurement or underestimate the errors. For this work, we broke the MSD for the first time using spatially resolved kinematics of the lens galaxy in RXJ1131-1231, obtained from Keck Cosmic Web Imager spectroscopy, in combination with previously published time delays and lens models derived from Hubble Space Telescope imaging. This approach allowed us to robustly estimate H0, effectively implementing a maximally flexible mass model. Following a blind analysis, we estimated the angular diameter distance to the lens galaxy Dd = 865^{+85}_{-81} Mpc and the time-delay distance DΔt = 2180^{+472}_{-271} Mpc, giving H0 = 77.1^{+7.3}_{-7.1} km s^-1 Mpc^-1 for a flat Λ cold dark matter cosmology. The error budget accounts for all uncertainties, including the MSD inherent to the lens mass profile and line-of-sight effects, as well as those related to the mass-anisotropy degeneracy and projection effects. Our new measurement is in excellent agreement with those obtained in the past using standard simply parametrized mass profiles for this single system (H0 = 78.3^{+3.4}_{-3.3} km s^-1 Mpc^-1) and for seven lenses (H0 = 74.2 ± 1.6 km s^-1 Mpc^-1), or for seven lenses using single-aperture kinematics and the same maximally flexible models used by us (H0 = 73.3 ± 5.8 km s^-1 Mpc^-1). This agreement corroborates the methodology of time-delay cosmography.
Reduced Keck Cosmic Web Imager data analyzed in this paper are also available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/673/A9
VISIONS is an ESO public survey of five nearby (d < 500 pc) star-forming molecular cloud complexes that are canonically associated with the constellations of Chamaeleon, Corona Australis, Lupus, Ophiuchus, and Orion. The survey was carried out with the Visible and Infrared Survey Telescope for Astronomy (VISTA), using the VISTA Infrared Camera (VIRCAM), and collected data in the near-infrared passbands J (1.25 μm), H (1.65 μm), and KS (2.15 μm). With a total on-sky exposure time of 49.4 h, VISIONS covers an area of 650 deg^2, and it is designed to build an infrared legacy archive with a structure and content similar to those of the Two Micron All Sky Survey (2MASS) for the targeted star-forming regions. Taking place between April 2017 and March 2022, the observations yielded approximately 1.15 million images, which comprise 19 TB of raw data. The observations undertaken within the survey are grouped into three different subsurveys. First, the wide subsurvey comprises shallow, large-scale observations and has revisited the star-forming complexes six times over the course of its execution. Second, the deep subsurvey of dedicated high-sensitivity observations has collected data on the areas with the largest amounts of dust extinction. Third, the control subsurvey includes observations of areas of low-to-negligible dust extinction. Using this strategy, the VISIONS observation program offers multi-epoch position measurements, provides access to deeply embedded objects, and establishes a baseline for statistical comparisons and sample completeness - all at the same time. In particular, VISIONS is designed to measure the proper motions of point sources with a precision of 1 mas yr^-1 or better when complemented with data from the VISTA Hemisphere Survey (VHS). In this way, VISIONS can provide proper motions of complete ensembles of embedded and low-mass objects, including sources inaccessible to the optical ESA Gaia mission. VISIONS will enable the community to address a variety of research topics from a more informed perspective, including the 3D distribution and motion of embedded stars and the nearby interstellar medium, the identification and characterization of young stellar objects, the formation and evolution of embedded stellar clusters and their initial mass function, as well as the characteristics of interstellar dust and the reddening law.
The fifth iteration of the Sloan Digital Sky Survey is set to obtain optical and near-infrared spectra of ~5 million stars of all ages and masses throughout the Milky Way. As a part of these efforts, the APOGEE and BOSS Young Star Survey (ABYSS) will observe ~10^5 stars with ages <30 Myr that have been selected using a set of homogeneous selection functions making use of different tracers of youth. The ABYSS targeting strategy described in this paper aims to provide the largest spectroscopic census of young stars to date. It consists of eight different types of selection criteria that take into consideration the position on the H-R diagram, infrared excess, variability, and the position in phase space. The resulting catalog of ~200,000 sources (of which about half are expected to be observed) provides representative coverage of the young Galaxy, including both nearby diffuse associations and more distant massive complexes, reaching toward the inner Galaxy and the Galactic center.
Topology plays a fundamental role in our understanding of many-body physics, from vortices and solitons in classical field theory to phases and excitations in quantum matter. Topological phenomena are intimately connected to the distribution of information content, which, differently from ordinary matter, is now governed by non-local degrees of freedom. However, a precise characterization of how topological effects govern the complexity of a many-body state - i.e., its partition function - is presently unclear. In this work, we show how topology and complexity are directly intertwined concepts in the context of classical statistical mechanics. Concretely, we present a theory that shows how the \emph{Kolmogorov complexity} of a classical partition function sampling carries unique, distinctive features depending on the presence of topological excitations in the system. We compare two-dimensional Ising and XY models on several topologies, and study the corresponding samplings as high-dimensional manifolds in configuration space, quantifying their complexity via the intrinsic dimension. While for the Ising model the intrinsic dimension is independent of the real-space topology, for the XY model it depends crucially on temperature: across the Berezinskii-Kosterlitz-Thouless (BKT) transition, complexity becomes topology dependent. In the BKT phase, it displays a characteristic dependence on the homology of the real-space manifold, and, for $g$-tori, it follows a scaling that is solely genus dependent. We argue that this behavior is intimately connected to the emergence of an order parameter in data space, the conditional connectivity, which displays scaling behavior. Our approach paves the way for an understanding of topological phenomena from the Kolmogorov complexity perspective, in a manner that is amenable to both quantum mechanical and out-of-equilibrium generalizations.
We present the integrated 3-point correlation functions (3PCF) involving both the cosmic shear and the galaxy density fields. These are a set of higher-order statistics that describe the modulation of local 2-point correlation functions (2PCF) by large-scale features in the fields, and they are easy to measure from galaxy imaging surveys. Based on previous works on the shear-only integrated 3PCF, we develop the theoretical framework for modelling five new statistics involving the galaxy field and its cross-correlations with cosmic shear. Using realistic galaxy and cosmic shear mocks from simulations, we determine the regime of validity of our models, which are based on leading-order standard perturbation theory, with an MCMC analysis that recovers unbiased constraints on the amplitude of fluctuations parameter $A_s$ and the linear and quadratic galaxy bias parameters $b_1$ and $b_2$. Using Fisher matrix forecasts for a DES-Y3-like survey, relative to baseline analyses with conventional 3$\times$2PCFs, we find that the addition of the shear-only integrated 3PCF can improve cosmological parameter constraints by $20-40\%$. The subsequent addition of the new statistics introduced in this paper can lead to further improvements of $10-20\%$, even when utilizing only conservatively large scales where the tree-level models are valid. Our results motivate future work on the galaxy and shear integrated 3PCFs, which offer a practical way to extend standard analyses based on 3$\times$2PCFs to systematically probe the non-Gaussian information content of cosmic density fields.
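As a schematic illustration of how such Fisher forecasts combine probes (the numbers below are invented and are not the Fisher matrices of this work), independent data sets simply add at the Fisher-matrix level, and the marginalized 1σ error on parameter i is sqrt((F^-1)_ii):

```python
# Toy example: adding an independent probe's Fisher matrix tightens the
# marginalised errors on a two-parameter model (e.g. an A_s-like and a b_1-like
# parameter).  All entries are made up for illustration.
import numpy as np

F_3x2pt = np.array([[40.0, 25.0],
                    [25.0, 30.0]])    # toy Fisher matrix of the baseline 3x2PCF analysis
F_i3pcf = np.array([[15.0, -5.0],
                    [-5.0, 10.0]])    # toy Fisher matrix of the added integrated-3PCF statistics

def marginalised_errors(F):
    """1-sigma marginalised errors: square roots of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

for name, F in [("3x2PCF alone", F_3x2pt),
                ("3x2PCF + integrated 3PCF", F_3x2pt + F_i3pcf)]:
    print(name, "-> marginalised errors:", np.round(marginalised_errors(F), 3))
```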
We propose a newly optimized nonlinear point-coupling parameterized interaction, PC-L3R, for the relativistic Hartree-Bogoliubov framework, together with a further optimized separable pairing force, obtained by fitting to observables, namely the binding energies of 91 spherical nuclei, the charge radii of 63 nuclei, and 12 sets of mean pairing gaps involving 54 nuclei in total. The proton and neutron strengths of the separable pairing force are optimized together with the point-coupling constants and are justified by the satisfactory reproduction of the empirical pairing gaps. A comparison of the experimental binding energies compiled in AME2020 for the 91 nuclei with those generated by the present and other commonly used point-coupling interactions indicates that the implementation of PC-L3R in the relativistic Hartree-Bogoliubov framework yields the lowest root-mean-square deviation. The charge radii agree satisfactorily with experiment. PC-L3R is also capable of estimating the saturation properties of symmetric nuclear matter and of appropriately predicting the isospin and mass dependence of the binding energy. The experimental odd-even staggering of single-nucleon separation energies is well reproduced. A comparison of the binding energies estimated for 7,373 nuclei with PC-L3R and other point-coupling interactions is also presented.
The emergence of prebiotic organics was a mandatory step toward the origin of life. The significance of exogenous delivery versus in-situ synthesis from atmospheric gases is still under debate. We experimentally demonstrate that iron-rich meteoritic and volcanic particles activate and catalyse the fixation of CO2, yielding key precursors of the building blocks of life. This catalysis is robust and selectively produces aldehydes, alcohols, and hydrocarbons, independent of the redox state of the environment. It is facilitated by common minerals and tolerates a broad range of early planetary conditions (150–300 °C, ≲ 10–50 bar, wet or dry climate). We find that up to 6 × 10^8 kg/year of prebiotic organics could have been synthesized by this planetary-scale process from atmospheric CO2 on the Hadean Earth.
We present N-body simulations, including post-Newtonian dynamics, of dense clusters of low-mass stars harbouring central black holes (BHs) with initial masses of 50, 300, and 2000 M⊙. The models are evolved with the N-body code BIFROST to investigate the possible formation and growth of massive BHs by the tidal capture of stars and tidal disruption events (TDEs). We model star-BH tidal interactions using a velocity-dependent drag force, which causes orbital energy and angular momentum loss near the BH. About 20-30 per cent of the stars within the spheres of influence of the black holes form Bahcall-Wolf cusps and prevent the systems from core collapse. Within the first 40 Myr of evolution, the systems experience 500-1300 TDEs, depending on the initial cluster structure. Most (>95 per cent) of the TDEs originate from stars in the Bahcall-Wolf cusp. We derive an analytical formula for the TDE rate as a function of the central BH mass and the density and velocity dispersion of the clusters ($\dot{N}_{\mathrm{TDE}} \propto M_{\mathrm{BH}}\,\rho\,\sigma^{-3}$). We find that TDEs can lead a 300 M⊙ BH to reach $\sim 7000 \, \mathrm{M}_{\odot}$ within a Gyr. This indicates that TDEs can drive the formation and growth of massive BHs in sufficiently dense environments, which might be present in the central regions of nuclear star clusters.
In this paper we focus on scattering amplitudes in maximally supersymmetric Yang-Mills theory and define a long sought-after geometry, the loop momentum amplituhedron, which we conjecture to encode tree and (the integrands of) loop amplitudes in spinor helicity variables. Motivated by the structure of amplitude singularities, we define an extended positive space, which enhances the Grassmannian space featuring at tree level, and a map which associates to each of its points tree-level kinematic variables and loop momenta. The image of this map is the loop momentum amplituhedron. Importantly, our formulation provides a global definition of the loop momenta. We conjecture that for all multiplicities and helicity sectors, there exists a canonical logarithmic differential form defined on this space, and provide its explicit form in a few examples.
We analyze exactly marginal deformations of 3d N = 4 Lagrangian gauge theories, especially mixed-branch operators with both electric and magnetic charges. These mixed-branch moduli can either belong to products of electric and magnetic current supermultiplets, or be single-trace (non-factorizable). Apart from some exceptional quivers that have additional moduli, 3d N = 4 theories described by genus g quivers with nonabelian unitary gauge groups have exactly g single-trace mixed moduli, which preserve the global flavour symmetries. This partly explains why only linear and circular quivers have known AdS4 supergravity duals. Indeed, for g > 1, AdS4 gauged supergravities cannot capture the entire g-dimensional moduli space even if one takes into account the quantization moduli of boundary conditions. Likewise, in a general Lagrangian theory, we establish (using the superconformal index) that the number of single-trace mixed moduli is bounded below by the genus of a graph encoding how nonabelian gauge groups act on hypermultiplets.
We study, in the context of the three-dimensional N = 6 Chern-Simons-matter (ABJM) theory, the infrared-finite functions that result from performing L − 1 loop integrations over the L-loop integrand of the logarithm of the four-particle scattering amplitude. Our starting point is the set of integrands obtained from the recently proposed all-loop projected amplituhedron for the ABJM theory. Organizing them in terms of negative geometries ensures that no divergences occur upon integration if at least one loop variable is left unintegrated. We explicitly perform the integrations up to L = 3, finding both parity-even and parity-odd terms. Moreover, we discuss a prescription to compute the cusp anomalous dimension Γcusp of ABJM in terms of the integrated negative geometries, and we use it to reproduce the first non-trivial order of Γcusp. Finally, we show that the leading singularities that characterize the integrated results are conformally invariant.
We provide analytic results for two-loop four-point master integrals with one massive propagator and one massive leg relevant to single top production. Canonical bases of master integrals are constructed and the Simplified Differential Equations approach is employed for their analytic solution. The necessary boundary terms are computed in closed form in the dimensional regulator, allowing us to obtain analytic results in terms of multiple polylogarithms of arbitrary transcendental weight. We provide explicit solutions of all two-loop master integrals up to transcendental weight six and discuss their numerical evaluation for Euclidean and physical phase-space points.
We outline the physics opportunities provided by the Electron Ion Collider (EIC). These include the study of the parton structure of the nucleon and nuclei, the onset of gluon saturation, the production of jets and heavy flavor, hadron spectroscopy, and tests of fundamental symmetries. We review the present status and future challenges in EIC theory that have to be addressed in order to realize this ambitious and impactful physics program, including how to engage a diverse and inclusive workforce. To address these manifold challenges, we propose a coordinated effort involving theory groups with differing expertise. We discuss the scientific goals and scope of such an EIC Theory Alliance.
We compute bottom mass (mb) corrections to the transverse momentum (qT) spectrum of Higgs bosons produced by gluon fusion in the regime qT ∼ mb ≪ mH at leading power in mb/mH and qT/mH, where the gluons couple to the Higgs via a top loop. To this end we calculate the quark mass dependence of the transverse momentum dependent gluon beam functions (also known as gluon TMDPDFs) at two loops in the framework of SCET. These functions represent the collinear matrix elements in the factorized gluon-fusion cross section at small qT. We discuss in detail technical subtleties regarding rapidity regulators and zero-bin subtractions in the calculation of the virtual corrections present for massive quarks. Combined with the known soft function for mb ≠ 0, our results allow us to determine the resummed Higgs qT distribution in the top-induced gluon fusion channel at NNLL' (and eventually N3LL) with full dependence on mb/qT. We perform a first phenomenological analysis at fixed order, where the new corrections to the massless approximation lead to percent-level effects in the peak region of the Higgs qT spectrum. Upon resummation they may thus be relevant for state-of-the-art precision predictions for the LHC.
We present the MARDELS catalog of 8,471 X-ray selected galaxy clusters over 25,000 deg^2 of extragalactic sky. The accumulation of deep, multiband optical imaging data, the development of the optical counterpart classification algorithm MCMF, and the release of the DESI Legacy Survey DR10 catalog covering the extragalactic sky make it possible -- for the first time, more than 30 years after the launch of the ROSAT X-ray satellite -- to identify the majority of the galaxy clusters detected in the ROSAT All-Sky Survey source catalog (2RXS). The resulting 90% pure MARDELS catalog is the largest ICM-selected cluster sample to date. MARDELS probes a large dynamic range in cluster mass, spanning from galaxy groups to the most massive clusters in the Universe. The cluster redshift distribution peaks at z~0.1 and extends to redshifts z~1. Out to z~0.4, the MARDELS sample contains more clusters per redshift interval (dN/dz) than any other ICM-selected sample. In addition to the main sample, we present two subsamples with 6,930 and 5,522 clusters, exhibiting 95% and 99% purity, respectively. We forecast the utility of the sample for a cluster cosmological study, using realistic mock catalogs that incorporate most observational effects, including the X-ray exposure time and background variations, the existence likelihood selection adopted in 2RXS, and the impact of the optical cleaning with MCMF. Using realistic priors on the observable--mass relation parameters from a DES-based weak lensing analysis, we estimate the constraining power of the MARDELSxDES sample to be 0.026, 0.033, and 0.15 ($1\sigma$) on the parameters $\Omega_\mathrm{m}$, $\sigma_8$, and $w$, respectively.
Characterisation of atmospheres undergoing photo-evaporation is key to understanding the formation, evolution, and diversity of planets. However, only a few upper atmospheres that experience this kind of hydrodynamic escape have been characterised. Our aim is to characterise the upper atmospheres of the hot Jupiters HAT-P-32 b and WASP-69 b, the warm sub-Neptune GJ 1214 b, and the ultra-hot Jupiter WASP-76 b through high-resolution observations of their He I triplet absorption. In addition, we also reanalyse the warm Neptune GJ 3470 b and the hot Jupiter HD 189733 b. We used a spherically symmetric 1D hydrodynamic model coupled with a non-local thermodynamic equilibrium model for calculating the He I triplet distribution along the escaping outflow. Comparing synthetic absorption spectra with observations, we constrained the main parameters of the upper atmospheres of these planets and classified them according to their hydrodynamic regime. Our results show that HAT-P-32 b photo-evaporates at (130 ± 70) × 10^11 g s^-1 with a hot (12 400 ± 2900 K) upper atmosphere; WASP-69 b loses its atmosphere at (0.9 ± 0.5) × 10^11 g s^-1 and 5250 ± 750 K; and GJ 1214 b, with a relatively cold outflow of 3750 ± 750 K, photo-evaporates at (1.3 ± 1.1) × 10^11 g s^-1. For WASP-76 b, its weak absorption prevents us from constraining its temperature and mass-loss rate significantly; we obtained ranges of 6000-17 000 K and (23.5 ± 21.5) × 10^11 g s^-1. Our reanalysis of GJ 3470 b yields colder temperatures, 3400 ± 350 K, but practically the same mass-loss rate as in our previous results. Our reanalysis of HD 189733 b yields a slightly higher mass-loss rate, (1.4 ± 0.5) × 10^11 g s^-1, and temperature, 12 700 ± 900 K, compared to previous estimates. We also found that HAT-P-32 b, WASP-69 b, and WASP-76 b undergo hydrodynamic escape in the recombination-limited regime, and that GJ 1214 b is in the photon-limited regime. Our results support the picture that photo-evaporated outflows tend to be very light, with H/He ≳ 98/2. The dependences of the mass-loss rates and temperatures of the studied planets on the respective system parameters (X-ray and ultraviolet stellar flux, gravitational potential) are well explained by current hydrodynamic escape models.
We explore the impact of small-scale flavor conversions of neutrinos, the so-called fast flavor conversions (FFCs), on the dynamical evolution and neutrino emission of core-collapse supernovae (CCSNe). To do so, we implement FFCs parametrically in spherically symmetric (1D) CCSN simulations of a 20 M⊙ progenitor model, assuming that FFCs occur at densities below a systematically varied threshold value and lead to an immediate flavor equilibrium consistent with lepton number conservation. We find that besides hardening the νe and ν̄e spectra, which helps the expansion of the shock by enhanced postshock heating, FFCs can cause significant, nontrivial modifications of the energy transport in the SN environment by increasing the νμ,τ luminosities. In our nonexploding models this results in extra cooling of the layers around the neutrinospheres, which triggers a faster contraction of the protoneutron star and hence, in our 1D models, hampers the CCSN explosion. Although our study is limited by the 1D nature of our simulations, it provides valuable insights into how neutrino flavor conversions in the deepest CCSN regions can impact the neutrino release and the corresponding response of the stellar medium.
We describe the development of a command and data-handling system and a payload data processor for two science missions aboard 3U CubeSats. Both are built around radiation-hardened VA41620 microcontrollers and mostly rely on radiation-tolerant magnetoresistive random-access memory for data storage; the payload data processor is equipped with an XQRKU060 field-programmable gate array that allows the implementation of a wide variety of hardware interfaces and processing algorithms. We also describe the operating system and software framework we developed to program these systems. A flexible hardware architecture and a modular software design allow them to be adapted to a variety of future missions.
A central requirement for the faithful implementation of large-scale lattice gauge theories (LGTs) on quantum simulators is the protection of the underlying gauge symmetry. Recent advancements in the experimental realizations of large-scale LGTs have been impressive, albeit mostly restricted to Abelian gauge groups. Guided by this requirement for gauge protection, we propose an experimentally feasible approach to implement large-scale non-Abelian $\mathrm{SU}(N)$ and $\mathrm{U}(N)$ LGTs with dynamical matter in $d+1$D, enabled by two-body spin-exchange interactions realizing local emergent gauge-symmetry stabilizer terms. We present two concrete proposals for $2+1$D $\mathrm{SU}(2)$ and $\mathrm{U}(2)$ LGTs, including dynamical matter and induced plaquette terms, that can be readily implemented in current ultracold-molecule and next-generation ultracold-atom platforms. We provide numerical benchmarks showcasing experimentally accessible dynamics, and demonstrate the stability of the underlying non-Abelian gauge invariance. We develop a method to obtain the effective gauge-invariant model featuring the relevant magnetic plaquette and minimal gauge-matter coupling terms. Our approach paves the way towards near-term realizations of large-scale non-Abelian quantum link models in analog quantum simulators.
We present the first systematic study of the detailed shapes of the line-of-sight velocity distributions (LOSVDs) in nine massive early-type galaxies (ETGs) using the novel nonparametric modeling code WINGFIT. High signal-to-noise spectral observations with the Multi-Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope allow us to measure between 40 and 400 individual LOSVDs in each galaxy at a signal-to-noise ratio better than 100 per spectral bin and to trace the LOSVDs all the way out to the highest stellar velocities. We extensively discuss potential LOSVD distortions due to template mismatch and strategies to avoid them. Our analysis uncovers a plethora of complex, large-scale kinematic structures in the shapes of the LOSVDs. Most notably, in the centers of all ETGs in our sample, we detect faint, broad LOSVD "wings" extending the line-of-sight velocities, vlos, well beyond 3σ to vlos ∼ ± 1000–1500 km s^-1 on both sides of the peak of the LOSVDs. These wings likely originate from point-spread function effects and contain velocity information about the very central unresolved regions of the galaxies. In several galaxies, we detect wings of similar shape also toward the outer parts of the MUSE field of view. We propose that these wings originate from faint halos of loosely bound stars around the ETGs, similar to the cluster-bound stellar envelopes found around many brightest cluster galaxies.
The high-x data from the ZEUS Collaboration are used to extract parton density distributions of the proton deep in the perturbative regime of QCD. The data primarily constrain the up-quark valence distribution and new results are presented on its x dependence as well as on the momentum carried by the up quark. The results were obtained using Bayesian analysis methods which can serve as a model for future parton density extractions.
CRESST is a leading direct-detection sub-GeV/c^2 dark matter experiment. During its second phase, cryogenic bolometers were used to detect nuclear recoils off the CaWO4 target crystal nuclei. The previously established electromagnetic background model relies on Secular Equilibrium (SE) assumptions. In this work, a validation of SE is attempted by comparing two likelihood-based normalisation results obtained with a recently developed spectral template normalisation method based on Bayesian likelihood. Although we find deviations from SE in some cases, we conclude that these deviations are artefacts of the fit and that the assumption of SE is physically meaningful.
Core-collapse supernovae (SNe) are among the most energetic events in the Universe, during which almost all of the star's binding energy is released in the form of neutrinos. These particles are direct probes of the processes occurring in the stellar core and provide unique insights into the gravitational collapse. RES-NOVA will revolutionize how we detect neutrinos from astrophysical sources by deploying the first ton-scale array of cryogenic detectors made from archaeological lead. Pb offers the highest neutrino interaction cross-section via coherent elastic neutrino-nucleus scattering (CEνNS). This process will enable RES-NOVA to be equally sensitive to all neutrino flavours. For the first time, we propose the use of archaeological Pb as sensitive target material in order to achieve an ultra-low background level in the region of interest (O(1 keV)). All these features make possible the deployment of the first cm-scale neutrino telescope for the investigation of astrophysical sources. In this contribution, we characterize the radiopurity level and the performance of a small-scale proof-of-principle detector of RES-NOVA, consisting of a PbWO4 crystal made from archaeological Pb operated as a cryogenic detector.
A new model grid containing 228,016 synthetic red supergiant explosions (Type II supernovae) is introduced. The time evolution of the spectral energy distributions from 1 to 50,000 Å (100 frequency bins on a log scale) is computed at each time step up to 500 d after explosion in each model. We provide light curves for the filters of the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), the Zwicky Transient Facility, the Sloan Digital Sky Survey, and the Neil Gehrels Swift Observatory, but light curves for any photometric filters can be constructed by convolving the corresponding filter response functions with the synthetic spectral energy distributions. We also provide bolometric light curves and photosphere information such as the photospheric velocity evolution. The parameter space covered by the model grid consists of five progenitor masses (10, 12, 14, 16, and 18 M$_{\odot}$ at the zero-age main sequence, solar metallicity), ten explosion energies (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0 × 10^51 erg), nine 56Ni masses (0.001, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1, 0.2, and 0.3 M$_{\odot}$), nine mass-loss rates (10^-5.0, 10^-4.5, 10^-4.0, 10^-3.5, 10^-3.0, 10^-2.5, 10^-2.0, 10^-1.5, and 10^-1.0 M$_{\odot}$ yr^-1 with a wind velocity of 10 km s^-1), six circumstellar matter radii (1, 2, 4, 6, 8, and 10 × 10^14 cm), and ten circumstellar structure parameters (β = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, and 5.0). 56Ni is assumed to be uniformly mixed up to the half-mass of the hydrogen-rich envelope. This model grid can serve as a basis for rapid characterizations of Type II supernovae with the sparse photometric sampling expected in LSST, for example through a Bayesian approach. The model grid is available at doi.org/10.5061/dryad.pnvx0k6sj.
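Because the grid is released as time series of spectral energy distributions, photometry in any band can be synthesized by convolving a filter response with each SED, as noted above. A minimal sketch of such a synthetic-photometry step in Python (array names, units, and the photon-counting AB convention are illustrative assumptions, not specifications of the released grid):

import numpy as np

def synthetic_ab_mag(wave_aa, flux_flam, filt_wave_aa, filt_trans):
    # SED: flux_flam in erg s^-1 cm^-2 AA^-1 on the wavelength grid wave_aa (Angstrom);
    # filter: photon-counting transmission filt_trans on filt_wave_aa (Angstrom).
    trans = np.interp(wave_aa, filt_wave_aa, filt_trans, left=0.0, right=0.0)
    c_aa = 2.998e18  # speed of light in Angstrom/s
    # photon-weighted mean flux density <f_nu> over the bandpass
    num = np.trapz(flux_flam * trans * wave_aa, wave_aa)
    den = np.trapz(trans * c_aa / wave_aa, wave_aa)
    fnu = num / den  # erg s^-1 cm^-2 Hz^-1
    return -2.5 * np.log10(fnu) - 48.60  # AB magnitude

Applying such a function at each epoch of a model SED time series would yield the light curve in the chosen band.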
Context. Type II supernovae offer a direct way of estimating distances via the expanding photosphere method, which is independent of the cosmic distance ladder. A Gaussian process-based method was recently introduced, allowing for a fast and precise modelling of spectral time series and placing accurate and computationally cheap Type II-based absolute distance determinations within reach.
Aims: The goal of this work is to assess the internal consistency of this new modelling technique coupled with the distance estimation in an empirical way, using the spectral time series of supernova siblings, that is, supernovae that exploded in the same host galaxy.
Methods: We used a recently developed spectral emulator code, trained on TARDIS radiative transfer models, which is capable of fast maximum-likelihood parameter estimation and spectral fitting. After calculating the relevant physical parameters of the supernovae, we applied the expanding photosphere method to estimate their distances. Finally, we tested the consistency of the obtained values by applying the formalism of Bayes factors.
Results: The distances to four different host galaxies were estimated based on two supernovae in each. The distance estimates are not only consistent within the errors for each of the supernova sibling pairs, but in the case of two hosts, they are precise to better than 5%. The analysis also showed that the main limiting factor of this estimation is the number and quality of spectra available for the individual objects, rather than the physical differences of the siblings.
Conclusions: Even though the literature data we used was not tailored to the requirements of our analysis, the agreement of the final estimates shows that the method is robust and is capable of inferring both precise and consistent distances. By using high-quality spectral time series, this method can provide precise distance estimates independent of the distance ladder, which are of high value for cosmology.
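For orientation (this relation is standard and not restated in the abstract above), the expanding photosphere method rests on combining the angular size of the photosphere with its expansion velocity,

\[
\theta = \frac{R_{\rm phot}}{D} \approx \frac{v_{\rm phot}\,(t - t_0)}{D}
\quad\Longrightarrow\quad
t = t_0 + D\,\frac{\theta}{v_{\rm phot}},
\]

so that a linear fit of the epoch $t$ against $\theta/v_{\rm phot}$ over several observations yields both the distance $D$ and the explosion time $t_0$ (dilution factors and extinction corrections are omitted in this schematic form).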
We explore the features of the interpolating gauge for QCD. This gauge, defined by Doust and by Baulieu and Zwanziger, interpolates between Feynman gauge or Lorenz gauge and Coulomb gauge. We argue that it could be useful for defining the splitting functions for a parton shower beyond order $\alpha_{\mathrm{s}}$ or for defining the infrared subtraction terms for higher-order perturbative calculations.
We present results for the static energy in (2+1+1)-flavor QCD over a wide range of lattice spacings and several quark masses, including the physical quark mass, with ensembles of lattice-gauge-field configurations made available by the MILC Collaboration. We obtain results for the static energy out to distances of nearly 1 fm, allowing us to perform a simultaneous determination of the scales r1 and r0, as well as the string tension σ. For the smallest three lattice spacings we also determine the scale r2. Our results for r0/r1 and r0√σ agree with published (2+1)-flavor results. However, our result for r1/r2 differs significantly from the value obtained in the (2+1)-flavor case, which is most likely due to the effect of the charm quark. We also report results for r0, r1, and r2 in fm, with the former two being slightly lower than published (2+1)-flavor results. We study in detail the effect of the charm quark on the static energy by comparing our results on the finest two lattices with the previously published (2+1)-flavor QCD results at similar lattice spacing. We find that for r > 0.2 fm our results on the static energy agree with the (2+1)-flavor result, implying the decoupling of the charm quark at these distances. For smaller distances, on the other hand, we find that the effect of the dynamical charm quark is noticeable. The lattice results agree well with the two-loop perturbative expression of the static energy incorporating finite charm mass effects. This is the first time that the decoupling of the charm quark is observed and quantitatively analyzed in lattice data of the static energy.
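For reference, the scales quoted above are conventionally defined through the static force F(r) = dV(r)/dr; assuming the usual conventions (not restated in the abstract), they read

\[
r^2 F(r)\big|_{r=r_0} = 1.65, \qquad
r^2 F(r)\big|_{r=r_1} = 1.0, \qquad
r^2 F(r)\big|_{r=r_2} = 0.5 .
\]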
Under some assumptions on the hierarchy of relevant energy scales, we compute the nonrelativistic QCD (NRQCD) long-distance matrix elements (LDMEs) for inclusive production of J/ψ, ψ(2S), and Υ states based on the potential NRQCD (pNRQCD) effective field theory. Based on the pNRQCD formalism, we obtain expressions for the LDMEs in terms of the quarkonium wavefunctions at the origin and universal gluonic correlators, which do not depend on the heavy quark flavor or the radial excitation. This greatly reduces the number of nonperturbative unknowns and substantially enhances the predictive power of the nonrelativistic effective field theory formalism. We obtain improved determinations of the LDMEs for J/ψ, ψ(2S), and Υ states thanks to the universality of the gluonic correlators, and obtain phenomenological results for cross sections and polarizations at large transverse momentum that agree well with measurements at the LHC.
In Paper I, we showed that clumps in high-redshift galaxies, having a high star formation rate density (ΣSFR), produce disks with two tracks in the [Fe/H]-[α/Fe] chemical space, similar to that of the Milky Way's (MW's) thin+thick disks. Here we investigate the effect of clumps on the bulge's chemistry. The chemistry of the MW's bulge is comprised of a single track with two density peaks separated by a trough. We show that the bulge chemistry of an N-body + smoothed particle hydrodynamics clumpy simulation also has a single track. Star formation within the bulge is itself in the high-ΣSFR clumpy mode, which ensures that the bulge's chemical track follows that of the thick disk at low [Fe/H] and then extends to high [Fe/H], where it peaks. The peak at low metallicity instead is comprised of a mixture of in situ stars and stars accreted via clumps. As a result, the trough between the peaks occurs at the end of the thick disk track. We find that the high-metallicity peak dominates near the mid-plane and declines in relative importance with height, as in the MW. The bulge is already rapidly rotating by the end of the clump epoch, with higher rotation at low [α/Fe]. Thus clumpy star formation is able to simultaneously explain the chemodynamic trends of the MW's bulge, thin+thick disks, and the splash.
MadMiner is a Python package that implements a powerful family of multivariate inference techniques that leverage matrix element information and machine learning. This multivariate approach neither requires the reduction of high-dimensional data to summary statistics nor any simplifications of the underlying physics or detector response. In this paper, we address some of the challenges arising from deploying MadMiner in a real-scale HEP analysis, with the goal of offering a new tool for HEP that is easily accessible. The proposed approach encapsulates a typical MadMiner pipeline into a parametrized yadage workflow described in YAML files. The general workflow is split into two yadage sub-workflows, one dealing with the physics simulations and the other with the ML inference. The workflow is then deployed using REANA, a reproducible research data analysis platform that provides flexibility, scalability, reusability, and reproducibility. To test the performance of our method, we performed scaling experiments for a MadMiner workflow on the National Energy Research Scientific Computing Center (NERSC) cluster with an HTCondor back-end. All the stages of the physics sub-workflow showed a linear dependence of resources and wall time on the number of events generated. This trend has allowed us to run a typical MadMiner workflow, consisting of 11M events, in 5 hours compared to days in the original study.
Basis transformations often involve Fierz and other relations that are only valid in D = 4 dimensions. In general D spacetime dimensions, however, evanescent operators have to be introduced in order to preserve such identities. Such evanescent operators contribute to one-loop basis transformations as well as to two-loop renormalization group running. We present a simple procedure for systematically changing basis at the one-loop level by obtaining the shifts due to evanescent operators. As an example, we apply this method to derive the one-loop basis transformation from the Buras, Misiak and Urban basis, useful for next-to-leading order QCD calculations, to the Jenkins, Manohar and Stoffer basis used in the matching to the standard model effective theory.
We present the first calculation of the hadroproduction of a W boson in association with a massive bottom (b) quark-antiquark pair at next-to-next-to-leading order (NNLO) in QCD perturbation theory. We exploit the hierarchy between the b-quark mass and the characteristic energy scale of the process to obtain a reliable analytic expression for the two-loop virtual amplitude with three massive legs, starting from the corresponding result available for massless b quarks. The use of massive b quarks avoids the ambiguities associated with the correct flavor assignment in massless calculations, paving the way to a more realistic comparison with experimental data. We present phenomenological results for proton-proton collisions at a center-of-mass energy $\sqrt{s} = 13.6$ TeV for inclusive $Wb\bar{b}$ production and within a fiducial region relevant for the associated production of a W boson and a Higgs boson decaying into a $b\bar{b}$ pair, for which $Wb\bar{b}$ production represents one of the most relevant backgrounds. We find that the NNLO corrections are substantial and that their inclusion is mandatory to obtain reliable predictions.
In view of the forthcoming High-Luminosity phase of the LHC, next-to-next-to-next-to-leading order (N3LO) calculations for the most phenomenologically relevant processes become necessary. In this work, we take the first step towards this goal for H+jet production by computing the one- and two-loop helicity amplitudes for the two contributing processes, $H \to ggg$ and $H \to q\bar{q}g$, in an effective theory with infinite top quark mass, to higher orders in the dimensional regulator. We decompose the amplitude into scalar form factors related to the helicity amplitudes and into a new basis of tensorial structures. The form factors receive contributions from Feynman integrals which were reduced to a novel canonical basis of master integrals. We derive and solve a set of differential equations for these integrals in terms of multiple polylogarithms (MPLs) of two variables up to transcendental weight six.
We generalize the next-to-leading order (NLO) QCD calculations for the decay rates of h → gg and h → γγ to the case of anomalous couplings of the Higgs boson. We demonstrate how this computation can be done in a consistent way within the framework of an electroweak chiral Lagrangian, based on a systematic power counting. It turns out that no additional coupling parameters arise at NLO in QCD beyond those already present at leading order. The impact of QCD is large for h → gg, and the uncertainties from QCD are significantly reduced at NLO. h → γγ is only mildly affected by QCD; here the NLO treatment practically eliminates the uncertainties. Consequently, our results will allow for an improved determination of anomalous Higgs couplings from these processes. The relation of our framework to a treatment in Standard Model effective field theory is also discussed.
Motivated by the improved results from the HPQCD lattice collaboration on the hadronic matrix elements entering $\Delta M_{s,d}$ in $B^0_{s,d}$-$\bar{B}^0_{s,d}$ mixing and the increase of the experimental branching ratio for $B_s \to \mu^+\mu^-$, we update our 2016 analysis of various flavour observables in four 331 models, M1, M3, M13 and M16, based on the gauge group $SU(3)_C \times SU(3)_L \times U(1)_X$. These four models, which are distinguished by their quantum numbers, are selected among the twenty-four 331 models through their consistency with the electroweak precision tests and simultaneously by the relation $C_9^{\rm NP} = -b\,C_{10}^{\rm NP}$ with $2 \le b \le 5$, which after the new result on $B_s \to \mu^+\mu^-$ from CMS is favoured over the popular relation $C_9^{\rm NP} = -C_{10}^{\rm NP}$ predicted by several leptoquark models. In this context we investigate in particular the dependence of various observables on $|V_{cb}|$, varying it in the broad range [0.0386, 0.043] that encompasses both its inclusive and exclusive determinations. Imposing the experimental constraints from $\varepsilon_K$, $\Delta M_s$, $\Delta M_d$ and the mixing-induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi\phi}$, we investigate for which values of $|V_{cb}|$ the four models can be made compatible with these data and what the impact on B and K branching ratios is. In particular we analyse NP contributions to the Wilson coefficients $C_9$ and $C_{10}$ and the decays $B_{s,d} \to \mu^+\mu^-$, $K^+ \to \pi^+\nu\bar{\nu}$ and $K_L \to \pi^0\nu\bar{\nu}$. This allows us to illustrate how the value of $|V_{cb}|$ determined together with other parameters of these models is infected by NP contributions, and to compare it with the one obtained recently under the assumption of the absence of NP in $\varepsilon_K$, $\Delta M_s$, $\Delta M_d$ and $S_{\psi K_S}$.
Context. The presence of radioactive 26Al, observed through its decay line at 1.8 MeV, reveals an ongoing process of nucleosynthesis in the Milky Way. Diffuse emission from its decay can be measured with gamma-ray telescopes in space. The intensity, line shape, and spatial distribution of the 26Al emission allow for studies of these nucleosynthesis sources. Thanks to the 1 Myr lifetime of 26Al, the line parameters trace massive-star feedback in the interstellar medium.
Aims: We aim to expand upon previous studies of the 26Al emission in the Milky Way, using all available gamma-ray data, including single and double events collected with SPI on INTEGRAL from 2003 until 2020.
Methods: We applied improved spectral response and background models, evaluated by tracing spectral details over the entire mission. The exposure for the Galactic 26Al emission was enhanced by using all event types measured within SPI. We redetermined the intensity of the Galactic 26Al emission across the entire sky through maximum likelihood fits of simulated and model-built sky distributions to SPI spectra for single and for double detector hits.
Results: We found an all-sky flux of (1.84±0.03)×10^-3 ph cm^-2 s^-1 in the 1.809 MeV line from 26Al, determined via fitting to sky distributions from previous observations with COMPTEL. Significant emission from higher latitudes indicates an origin from nearby massive-star groups and superbubbles, which is also supported by a bottom-up population synthesis model. The line centroid is found at (1809.83±0.04) keV, while the line broadening from source kinematics integrated over the sky is (0.62±0.3) keV (FWHM).
When strong gravitational lenses are to be used as an astrophysical or cosmological probe, models of their mass distributions are often needed. We present a new, time-efficient automation code for the uniform modeling of strongly lensed quasars with GLEE, a lens-modeling software for multiband data. By using the observed positions of the lensed quasars and the spatially extended surface brightness distribution of the host galaxy of the lensed quasar, we obtain a model of the mass distribution of the lens galaxy. We applied this uniform modeling pipeline to a sample of nine strongly lensed quasars for which images were obtained with the Wide Field Camera 3 of the Hubble Space Telescope. The models show well-reconstructed light components and a good alignment between mass and light centroids in most cases. We find that the automated modeling code significantly reduces the input time during the modeling process for the user. The time for preparing the required input files is reduced by a factor of 3 from ~3 h to about one hour. The active input time during the modeling process for the user is reduced by a factor of 10 from ~ 10 h to about one hour per lens system. This automated uniform modeling pipeline can efficiently produce uniform models of extensive lens-system samples that can be used for further cosmological analysis. A blind test that compared our results with those of an independent automated modeling pipeline based on the modeling software Lenstronomy revealed important lessons. Quantities such as Einstein radius, astrometry, mass flattening, and position angle are generally robustly determined. Other quantities, such as the radial slope of the mass density profile and predicted time delays, depend crucially on the quality of the data and on the accuracy with which the point spread function is reconstructed. Better data and/or a more detailed analysis are necessary to elevate our automated models to cosmography grade. Nevertheless, our pipeline enables the quick selection of lenses for follow-up and further modeling, which significantly speeds up the construction of cosmography-grade models. This important step forward will help us to take advantage of the increase in the number of lenses that is expected in the coming decade, which is an increase of several orders of magnitude.
The origin of the diffuse gamma-ray background (DGRB), the emission that remains after subtracting all individual sources from the observed gamma-ray sky, is unknown. The DGRB possibly encompasses contributions from different source populations such as star-forming galaxies, starburst galaxies, active galactic nuclei, gamma-ray bursts, or galaxy clusters. Here, we combine cosmological magnetohydrodynamical simulations of clusters of galaxies with the propagation of cosmic rays (CRs) using Monte Carlo simulations, in the redshift range z ≤ 5.0, and show that the integrated gamma-ray flux from clusters can contribute up to 100% of the DGRB flux observed by Fermi-LAT above 100 GeV, for CR spectral indices α = 1.5−2.5 and energy cutoffs Emax = 10^16−10^17 eV. The flux is dominated by clusters with masses 10^13 ≲ M/M⊙ ≲ 10^15 and redshifts z ≲ 0.3. Our results also predict the potential observation of high-energy gamma rays from clusters by experiments like the High Altitude Water Cherenkov (HAWC) observatory, the Large High Altitude Air Shower Observatory (LHAASO), and potentially the upcoming Cherenkov Telescope Array (CTA).
We describe a systematic approach to cast the differential equation for the l-loop equal-mass banana integral into an ε-factorised form. With the known boundary value at a specific point, we systematically obtain the term of any order j in the expansion in the dimensional regularisation parameter ε for any number of loops l. The approach is based on properties of Calabi-Yau operators, and in particular on self-duality.
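For orientation, an ε-factorised form means (generically, not specifically for the banana integrals) that the vector of master integrals J satisfies

\[
d\,J(x,\varepsilon) = \varepsilon\, A(x)\, J(x,\varepsilon),
\]

with the matrix of one-forms A(x) independent of ε, so that the solution follows order by order in ε as iterated integrals once a boundary value is known.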
We study the decay $J/\psi\to\pi^{+}\pi^{-}\pi^{0}$ within the framework of the Khuri-Treiman equations. We find that the BESIII experimental di-pion mass distribution in the $\rho(770)$-region is well reproduced with a once-subtracted $P$-wave amplitude. Furthermore, we show that $F$-wave contributions to the amplitude improve the description of the data in the $\pi\pi$ mass region around 1.5 GeV. We also present predictions for the $J/\psi\to\pi^{0}\gamma^{*}$ transition form factor.
The heavy quark diffusion coefficient is encoded in the spectral functions of the chromo-electric and the chromo-magnetic correlators, of which the latter describes the T/M contribution. We study these correlators at two different temperatures, $T=1.5T_c$ and $T=10^4T_c$, in the deconfined phase of SU(3) gauge theory. We use gradient flow for noise reduction. We perform both the continuum and the zero-flow-time limits to extract the heavy quark diffusion coefficient. Our results imply that the mass-suppressed effects in the heavy quark diffusion coefficient are 20% for bottom quarks and 34% for charm quarks at $T=1.5T_c$.
We present new Very Large Array observations, between 6.8 and 66 mm, of the edge-on Class I disk IRAS04302+2247. Observations at 6.8 mm and 9.2 mm lead to the detection of thermal emission from the disk, while shallow observations at the other wavelengths are used to correct for emission from other processes. The disk radial brightness profile transitions from broadly extended in previous Atacama Large Millimeter/submillimeter Array 0.9 mm and 2.1 mm observations to much more centrally brightened at 6.8 mm and 9.2 mm, which can be explained by optical depth effects. Radiative transfer modeling of the 0.9 mm, 2.1 mm, and 9.2 mm data suggests that the grains are smaller than 1 cm in the outer regions of the disk, allowing us to obtain the first lower limit for the scale height of grains emitting at millimeter wavelengths in a protoplanetary disk. We find that the millimeter dust scale height is between 1 au and 6 au at a radius of 100 au from the central star, while the gas scale height is estimated to be about 7 au, indicating a modest level of settling. The estimated dust height is intermediate between those of less evolved Class 0 sources, which are found to be vertically thick, and more evolved Class II sources, which show a significant level of settling. This suggests that we are witnessing an intermediate stage of dust settling.
During the last three decades the determination of the Unitarity Triangle (UT) was dominated by the measurements of its sides $R_b$ and $R_t$ through tree-level $B$ decays and the $\Delta M_d/\Delta M_s$ ratio, respectively, with some participation of the measurements of the angle $\beta$ through mixing-induced CP asymmetries like $S_{\psi K_S}$ and through $\varepsilon_K$. However, as pointed out already in 2002 by Fabrizio Parodi, Achille Stocchi and the present author, the most efficient strategy for a precise determination of the apex of the UT, that is $(\bar\varrho,\bar\eta)$, is to use the measurements of the angles $\beta$ and $\gamma$. The second best strategy would be the measurements of $R_b$ and $\gamma$. However, in view of the tensions between different determinations of $|V_{ub}|$ and $|V_{cb}|$, which enter $R_b$, the $(\beta,\gamma)$ strategy should be a clear winner once LHCb and Belle II improve the measurements of these two angles. In this note we recall our finding of 2002, which should finally be realized in this decade through precise measurements of both angles by these collaborations. In this context we present two very simple formulae for $\bar\varrho$ and $\bar\eta$ in terms of $\beta$ and $\gamma$ which could be derived by high-school students but, to my knowledge, never appeared in the literature on the UT, not even in our 2002 paper. We also emphasize the importance of precise measurements of both angles, which would allow powerful tests of the SM to be performed through numerous $|V_{cb}|$-independent correlations between $K$ and $B$ decay branching ratios $R_i(\beta,\gamma)$ recently derived by Elena Venturini and the present author. The simple findings presented here will appear in a subsection of a much longer contribution to the proceedings of KM50 later this year. I exhibit them here so that they are not lost in the latter.
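The two formulae are not reproduced in the abstract above, but the elementary geometry of the unitarity triangle (unit base, angle $\gamma$ at the origin, angle $\beta$ at $(1,0)$) yields, via the law of sines, relations of exactly this simple type:

\[
\bar\varrho = \frac{\sin\beta\,\cos\gamma}{\sin(\beta+\gamma)}, \qquad
\bar\eta = \frac{\sin\beta\,\sin\gamma}{\sin(\beta+\gamma)} .
\]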
[Abridged] Galaxy clusters are the most massive gravitationally-bound systems in the universe and are widely considered to be an effective cosmological probe. We propose the first Machine Learning method using galaxy cluster properties to derive unbiased constraints on a set of cosmological parameters, including Omega_m, sigma_8, Omega_b, and h_0. We train the machine learning model with mock catalogs including "measured" quantities from Magneticum multi-cosmology hydrodynamical simulations, like gas mass, gas bolometric luminosity, gas temperature, stellar mass, cluster radius, total mass, velocity dispersion, and redshift, and correctly predict all parameters with uncertainties of the order of ~14% for Omega_m, ~8% for sigma_8, ~6% for Omega_b, and ~3% for h_0. This first test is exceptionally promising, as it shows that machine learning can efficiently map the correlations in the multi-dimensional space of the observed quantities to the cosmological parameter space and narrow down the probability that a given sample belongs to a given cosmological parameter combination. In the future, these ML tools can be applied to cluster samples with multi-wavelength observations from surveys like CSST in the optical band, Euclid and Roman in the near-infrared band, and eROSITA in the X-ray band to constrain both the cosmology and the effect of the baryonic feedback.
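A minimal sketch of the kind of mapping described above, using a random-forest regressor; the feature columns, file names, and regressor choice are illustrative assumptions rather than the actual model trained on the Magneticum mocks:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical per-cluster observables (gas mass, L_X, T_gas, stellar mass,
# radius, total mass, velocity dispersion, redshift) and the cosmology
# (Omega_m, sigma_8, Omega_b, h_0) of the simulation each cluster came from.
X = np.load("mock_cluster_features.npy")   # shape (n_clusters, n_features)
y = np.load("mock_cluster_cosmology.npy")  # shape (n_clusters, 4)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)                # multi-output regression over the 4 parameters

scatter = np.std(model.predict(X_test) - y_test, axis=0)
print(dict(zip(["Omega_m", "sigma_8", "Omega_b", "h_0"], scatter)))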
We present the serendipitous discovery of a large double radio relic associated with the merging galaxy cluster PSZ2 G277.93+12.34 and a new odd radio circle, ORC J1027-4422, both found in deep MeerKAT 1.3 GHz wide-band data. The angular separation of the two arc-shaped cluster relics is 16 arcmin or 2.6 Mpc for a cluster redshift of z = 0.158. The thin southern relic, which shows a number of ridges/shocks including one possibly moving inwards, has a linear extent of 1.64 Mpc. In contrast, the northern relic is about twice as wide and twice as bright, but only has a largest linear size of 0.66 Mpc. Complementary SRG/eROSITA X-ray images reveal extended emission from hot intracluster gas between the two relics and around the narrow-angle tail (NAT) radio galaxy PMN J1033-4335 (z = 0.153) located just east of the northern relic. No radio halo associated with the PSZ2 cluster is detected. The radio morphologies of the NAT galaxy and the northern relic, which are also detected with the Australian Square Kilometre Array Pathfinder at 887.5 MHz, suggest both are moving in the same outward direction. The discovery of ORC J1027-4422 in a different part of the MeerKAT image makes it the fourth known single ORC. It has a diameter of 90" corresponding to 400 kpc at a tentative redshift of z = 0.3 and remains undetected in X-ray emission. We discuss similarities between galaxy and cluster mergers as the formation mechanisms for ORCs and radio relics, respectively.
First-principle simulations are at the heart of the high-energy physics research program. They link the vast data output of multi-purpose detectors with fundamental theory predictions and interpretation. This review illustrates a wide range of applications of modern machine learning to event generation and simulation-based inference, including conceptual developments driven by the specific requirements of particle physics. New ideas and tools developed at the interface of particle physics and machine learning will improve the speed and precision of forward simulations, handle the complexity of collision data, and enhance inference as an inverse simulation problem.
The integrated shear 3-point correlation function $\zeta_{\pm}$ measures the correlation between the local shear 2-point function $\xi_{\pm}$ and the 1-point shear aperture mass in patches of the sky. Unlike other higher-order statistics, $\zeta_{\pm}$ can be efficiently measured from cosmic shear data, and it admits accurate theory predictions on a wide range of scales as a function of cosmological and baryonic feedback parameters. Here, we develop and test a likelihood analysis pipeline for cosmological constraints using $\zeta_{\pm}$. We incorporate treatment of systematic effects from photometric redshift uncertainties, shear calibration bias and galaxy intrinsic alignments. We also develop an accurate neural-network emulator for fast theory predictions in MCMC parameter inference analyses. We test our pipeline using realistic cosmic shear maps based on $N$-body simulations with a DES Y3-like footprint, mask and source tomographic bins, finding unbiased parameter constraints. Relative to $\xi_{\pm}$-only, adding $\zeta_{\pm}$ can lead to $\approx 10-25\%$ improvements on the constraints of parameters like $A_s$ (or $\sigma_8$) and $w_0$. We find no evidence in $\xi_{\pm} + \zeta_{\pm}$ constraints of a significant mitigation of the impact of systematics. We also investigate the impact of the size of the apertures where $\zeta_{\pm}$ is measured, and of the strategy to estimate the covariance matrix ($N$-body vs. lognormal). Our analysis solidifies the strong potential of the $\zeta_{\pm}$ statistic and puts forward a pipeline that can be readily used to improve cosmological constraints using real cosmic shear data.
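A schematic estimator corresponding to the definition above, i.e. correlating the per-patch aperture mass with the per-patch local 2PCF (input formats and names are illustrative, not the paper's pipeline):

import numpy as np

def integrated_3pcf(map_patches, xi_patches):
    # map_patches: (n_patch,) aperture-mass estimates, one per sky patch
    # xi_patches:  (n_patch, n_theta) local xi_+/- measured inside each patch
    m = np.asarray(map_patches, dtype=float)
    xi = np.asarray(xi_patches, dtype=float)
    # subtract means so the result is the connected cross-correlation over patches
    dm = m - m.mean()
    dxi = xi - xi.mean(axis=0)
    return (dm[:, None] * dxi).mean(axis=0)   # zeta_+/- as a function of theta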
Recent observations have shown that the atmospheres of ultrahot Jupiters (UHJs) commonly possess temperature inversions, where the temperature increases with increasing altitude. Nonetheless, which opacity sources are responsible for the presence of these inversions remains largely observationally unconstrained. We used LBT/PEPSI to observe the atmosphere of the UHJ KELT-20 b in both transmission and emission in order to search for molecular agents which could be responsible for the temperature inversion. We validate our methodology by confirming a previous detection of Fe I in emission at 16.9σ. Our search for the inversion agents TiO, VO, FeH, and CaH results in non-detections. Using injection-recovery testing we set 4σ upper limits upon the volume mixing ratios for these constituents, as low as ~1 × 10^-9 for TiO. For TiO, VO, and CaH, our limits are much lower than expectations from an equilibrium chemical model, while we cannot set constraining limits on FeH with our data. We thus rule out TiO and CaH as the source of the temperature inversion in KELT-20 b, and VO only if the line lists are sufficiently accurate. *Based on data acquired with the Potsdam Echelle Polarimetric and Spectroscopic Instrument (PEPSI) using the Large Binocular Telescope (LBT) in Arizona.
The dark matter annihilation cross section can be amplified by orders of magnitude if the annihilation occurs into a narrow resonance, or if the dark-matter particles experience a long-range force before annihilation (Sommerfeld effect). We show that when both enhancements are present they factorize completely, that is, all long-distance non-factorizable effects cancel at leading order in the small-velocity and narrow-width expansion. We then investigate the viability of ``super-resonant'' annihilation from the coaction of both mechanisms in Standard Model Higgs portal and simplified MSSM-inspired dark-matter scenarios.
LiteBIRD, the Lite (Light) satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. The Japan Aerospace Exploration Agency (JAXA) selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with an expected launch in the late 2020s using JAXA's H3 rocket. LiteBIRD is planned to orbit the Sun-Earth Lagrangian point L2, where it will map the cosmic microwave background polarization over the entire sky for three years, with three telescopes in 15 frequency bands between 34 and 448 GHz, to achieve an unprecedented total sensitivity of $2.2\, \mu$K-arcmin, with a typical angular resolution of 0.5° at 100 GHz. The primary scientific objective of LiteBIRD is to search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. We provide an overview of the LiteBIRD project, including scientific objectives, mission and system requirements, operation concept, spacecraft and payload module design, expected scientific outcomes, potential design extensions, and synergies with other projects.
At the Laboratory for Rapid Space Missions, we develop CubeSat missions and ISS-based experiments with diverging requirements. Due to limited manpower and resources, we cannot develop new and specifically tailored software and hardware for each of these projects. Instead, we propose the use of a distributed approach with well-defined and statically checked components that can be reconfigured and reused for several missions. The DOSIS framework developed at the Technical University of Munich provides the features required for such a setup. We present the conceptual design of the framework and briefly introduce the first missions using the DOSIS hardware and software setup.
Earth and other rocky objects in the inner Solar system are depleted in carbon compared to objects in the outer Solar system, the Sun, or the ISM. It is believed that this is a result of the selective removal of refractory carbon from primordial circumstellar material. In this work, we study the irreversible release of carbon into the gaseous environment via photolysis and pyrolysis of refractory carbonaceous material during the disc phase of the early Solar system. We analytically solve the one-dimensional advection equation and derive an explicit expression that describes the depletion of carbonaceous material in solids under the influence of radial and vertical transport. We find that both depletion mechanisms individually fail to reproduce Solar system abundances under typical conditions. While radial transport only marginally restricts photodecomposition, it is the inefficient vertical transport that limits carbon depletion under these conditions. We show explicitly that an increase in the vertical mixing efficiency, and/or an increase in the directly irradiated disc volume, favours carbon depletion. Thermal decomposition requires a hot inner disc (>500 K) beyond 3 au to deplete the formation region of Earth and the chondrites. We find that FU Ori-type outbursts produce these conditions such that moderately refractory compounds are depleted. However, such outbursts likely do not deplete the most refractory carbonaceous compounds beyond the innermost disc region. Hence, the refractory carbon abundance at 1 au typically does not reach terrestrial levels. Nevertheless, under specific conditions, we find that photolysis and pyrolysis combined can reproduce Solar system abundances.
In this work, we study the annihilation of a pair of 't Hooft-Polyakov monopoles due to confinement by a string. We analyze the regime in which the scales of the monopoles and strings are comparable. We compute the spectrum of the emitted gravitational waves and find it to agree with the previously calculated pointlike case for wavelengths longer than the system width and before the collision. However, we observe that in a head-on collision the monopoles are never recreated; correspondingly, the string does not oscillate even once. Instead, the system decays into waves of Higgs and gauge fields. We explain this phenomenon by the loss of coherence in the annihilation process. Due to this, the entropy suppression makes the recreation of a monopole pair highly improbable. We argue that in a similar regime analogous behavior is expected for heavy quarks connected by a QCD string. There too, instead of re-stretching a long string after the first collapse, the system hadronizes and decays into a high multiplicity of mesons and glueballs. We discuss the implications of our results.
We propose to replace the exact amplitudes used in Monte Carlo event generators with trained machine learning regressors, with the aim of speeding up the evaluation of slow amplitudes. As a proof of concept, we study the process $gg \to ZZ$, whose leading-order amplitude is loop induced. We show that gradient boosting machines like XGBoost can predict the fully differential distributions with errors below 0.1%, and with predictions a factor of O(10^3) faster than the evaluation of the exact function. This is achieved with training times of ∼23 minutes and regressors of size ≲22 MB. We also find that XGBoost performs well over the entire phase space, while interpolation gives much larger errors in regions where the function is peaked. These results suggest a possible new avenue to speed up Monte Carlo event generators.
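A minimal sketch of the regression step described above with XGBoost (the file names, the log-transform of the target, and the hyperparameters are illustrative assumptions, not the paper's settings):

import numpy as np
import xgboost as xgb

# Hypothetical training data: phase-space points for gg -> ZZ and the
# corresponding exact loop-induced squared amplitudes.
X = np.load("phase_space_points.npy")        # shape (n_points, n_features)
y = np.log(np.load("exact_amplitudes.npy"))  # regress log|M|^2 to tame peaks

model = xgb.XGBRegressor(n_estimators=1000, max_depth=8, learning_rate=0.05)
model.fit(X, y)

# fast surrogate evaluation, e.g. inside an event-generator loop
amp_pred = np.exp(model.predict(X[:1000]))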
We study gravitational back-reaction within relational time formulations of quantum mechanics by considering two versions of time: a time coordinate, modelled as a global quantum degree of freedom, and the proper time of a given physical system, modelled via an internal degree of freedom serving as a local quantum "clock". We show that interactions between coordinate time and mass-energy in a global Wheeler-DeWitt-like constraint lead to gravitational time dilation. In the presence of a massive object, this agrees with the time dilation of a Schwarzschild metric at leading order in $G$. Furthermore, if two particles couple independently to the time coordinate, we show that the Newtonian gravitational interaction between those particles emerges in the low-energy limit. We also observe features of renormalization of high-energy divergences.
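For reference, the leading-order result alluded to here is the standard weak-field expansion of the Schwarzschild time dilation for a static clock at radius $r$ (standard notation, not taken from the paper):
\[ \frac{d\tau}{dt} = \sqrt{1 - \frac{2GM}{rc^2}} \approx 1 - \frac{GM}{rc^2} + \mathcal{O}(G^2), \]
so agreement "at leading order in $G$" means recovering the $-GM/(rc^2)$ correction to the clock rate.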
The wavelength dependence of the Kormendy relation (KR) is well characterised at low redshift but poorly studied at intermediate redshifts. The KR provides information on the evolution of the population of early-type galaxies (ETGs). Therefore, by studying it, we may shed light on the assembly processes of these objects and their size evolution. As studies at different redshifts are generally conducted in different rest-frame wavebands, it is important to investigate whether the KR is dependent on wavelength. Knowledge of such a dependence is fundamental to correctly interpreting the conclusions we might draw from these studies. We analyse the KRs of the three Hubble Frontier Fields clusters, Abell S1063 (z = 0.348), MACS J0416.1-2403 (z = 0.396), and MACS J1149.5+2223 (z = 0.542), as a function of wavelength. This is the first time the KR of ETGs has been explored consistently over such a large range of wavelengths at intermediate redshifts. We exploit very deep HST photometry, ranging from the observed B-band to the H-band, and MUSE integral field spectroscopy. We improve the structural parameter estimation we performed in a previous work by means of a newly developed Python package called morphofit. Applying it to cluster ETGs, we find that the KR slopes increase smoothly with wavelength from the optical to the near-infrared (NIR) bands in all three clusters, with the intercepts becoming fainter at lower redshifts due to the passive ageing of the ETG stellar populations. The slope trend is consistent with previous findings at lower redshifts. The slope increase with wavelength implies that smaller ETGs are more centrally concentrated than larger ETGs in the NIR relative to the optical regime. As different bands probe different stellar populations in galaxies, the slope increase also implies that smaller ETGs have stronger internal gradients than larger ETGs.
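For context, the Kormendy relation is conventionally written as a linear relation between the mean effective surface brightness and the effective radius (standard form; the symbols are not the paper's specific notation):
\[ \langle \mu \rangle_e = \alpha + \beta \log R_e, \]
where $\langle \mu \rangle_e$ is the mean surface brightness within the effective radius $R_e$. The wavelength dependence discussed above enters through the slope $\beta$ and the intercept $\alpha$ measured in each band.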
Modern wide-field galaxy surveys require parametric surface brightness decomposition codes that are accurate, demand little user intervention, and are highly parallelisable. We address this need by introducing morphofit, a highly parallelisable Python package for the estimation of galaxy structural parameters. The package builds on the widespread and reliable codes SExtractor and GALFIT. It has been optimised and tested in both low-density and crowded environments, where blending and diffuse light make the estimation of structural parameters particularly challenging. morphofit allows the user to fit each individual galaxy with multiple surface brightness components among those currently implemented in the code. Using simulated images of single Sérsic and bulge plus disk galaxy light profiles with different bulge-to-total luminosity (B/T) ratios, we show that morphofit recovers the input structural parameters of the simulated galaxies with good accuracy. We also compare its estimates against existing literature studies, finding consistency within the errors. We use the package in Tortorelli et al. 2023 to measure the structural parameters of cluster galaxies in order to study the wavelength dependence of the Kormendy relation of early-type galaxies. The package is available on GitHub (this https URL) and on the PyPI server (this https URL).
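For reference, the single Sérsic profile fitted by such decomposition codes has the standard form (standard notation, not specific to morphofit):
\[ I(R) = I_e \exp\left\{ -b_n \left[ \left( \frac{R}{R_e} \right)^{1/n} - 1 \right] \right\}, \]
where $R_e$ is the effective radius, $I_e$ the intensity at $R_e$, $n$ the Sérsic index, and $b_n$ a normalisation constant (well approximated by $b_n \approx 2n - 1/3$ for $n \gtrsim 1$). A bulge plus disk model combines a high-$n$ (or free-$n$) bulge with an $n = 1$ exponential disk, and the B/T ratio follows from the integrated light of the two components.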
Accurate knowledge of the redshift distributions of faint samples of galaxies selected by broad-band photometry is a prerequisite for future weak lensing experiments to deliver precision tests of our cosmological model. The most direct way to measure these redshift distributions is spectroscopic follow-up of representative galaxies. For this to be efficient and accurate, targets have to be selected such that they systematically cover a space defined by apparent colours in which there is little variation in redshift at any point. 4C3R2 will follow this strategy to observe over 100 000 galaxies selected by their KiDS-VIKING ugriZYJHKs photometry over a footprint identical to that of the WAVES survey, to constrain the colour-redshift relation with high multiplicity across two-thirds of the colour space of future Euclid and Rubin samples.
The singlet sector of the $O(N)$ $\phi^4$-model in AdS$_4$ at large $N$ gives rise to a (non-local) dual conformal field theory on the conformal boundary of AdS$_4$, which is a deformation of the generalized free field. We identify and compute an AdS$_4$ 3-point 1-loop fish diagram that controls the exact large-$N$ dimensions and operator product expansion (OPE) coefficients of all "double trace" operators as a function of the renormalized $\phi^4$-coupling. We find that the space of $\phi^4$-couplings is compact, with a boundary at the bulk Landau pole where the lowest OPE coefficient diverges.
We revisit stellar energy-loss bounds on the Yukawa couplings $g_{\rm B,L}$ of baryophilic and leptophilic scalars $\phi$. The white-dwarf luminosity function yields $g_{\rm B}\lesssim 7 \times 10^{-13}$ and $g_{\rm L}\lesssim 4 \times 10^{-16}$, based on bremsstrahlung from ${}^{12}{\rm C}$ and ${}^{16}{\rm O}$ collisions with electrons. In models with a Higgs portal, this also implies a bound on the scalar-Higgs mixing angle $\sin \theta \lesssim 2 \times 10^{-10}$. Our new bounds apply for $m_\phi\lesssim {\rm 1~keV}$ and are among the most restrictive ones, whereas for $m_\phi\lesssim 0.5\,{\rm eV}$ long-range force measurements dominate. Besides a detailed calculation of the bremsstrahlung rate for degenerate and semi-relativistic electrons, we prove with a simple argument that non-relativistic bremsstrahlung by the heavy partner is suppressed relative to that by the light one by their squared-mass ratio. This large reduction was overlooked in previous, much stronger bounds on $g_{\rm B}$. In an Appendix, we provide fitting formulas (few percent precision) for the bremsstrahlung emission of baryophilic and leptophilic scalars as well as axions for white-dwarf conditions, i.e., degenerate, semi-relativistic electrons and ion-ion correlations in the "liquid" phase.
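The suppression argument summarised above can be stated compactly (schematic notation assumed here for illustration): in a non-relativistic two-body collision, the scalar bremsstrahlung rate from the heavy partner relative to that from the light partner scales as
\[ \frac{\epsilon_{\rm heavy}}{\epsilon_{\rm light}} \sim \left( \frac{m_{\rm light}}{m_{\rm heavy}} \right)^{2}, \]
heuristically because the heavier particle is accelerated far less in the collision. For electron-nucleus collisions this amounts to many orders of magnitude, which is why earlier bounds on $g_{\rm B}$ that overlooked it were much too strong.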
Peptides have essential structural and catalytic functions in living organisms. The formation of peptides requires overcoming thermodynamic and kinetic barriers. In recent years, various formation scenarios that may have occurred during the origin of life have been investigated, including iron(III)-catalyzed condensations. However, iron(III) catalysts require elevated temperatures, and their catalytic activity in peptide-bond-forming reactions is often low. In an anoxic environment such as that of the early Earth, reduced iron compounds were likely abundant, both on the Earth's surface itself and as a major component of iron meteorites. In this work, we show that reduced iron activated by acetic acid efficiently mediates peptide formation. We recently demonstrated that, compared to water, liquid sulfur dioxide (SO2) is a superior reaction medium for peptide formation. We therefore investigated both solvents and observed up to four amino acid/peptide coupling steps in each. Reaction with diglycine (G2) formed 2.0 % triglycine (G3) and 7.6 % tetraglycine (G4) in 21 d. Addition of G3 and dialanine (A2) yielded 8.7 % G4. This is therefore an efficient and plausible route to the first peptides, which could serve as simple catalysts for further transformations in such environments.
We examine the influence of the quadrupole moment of a slowly rotating neutron star (NS) on the oscillations of a fluid accretion disk (torus) orbiting a compact object whose surrounding spacetime is described by the Hartle-Thorne geometry. We obtain and examine explicit formulae for the non-geodesic orbital epicyclic and precession frequencies, as well as simplified practical versions that allow for an expeditious application of the universal relations determining the NS properties. We demonstrate that the difference in the accretion disk precession frequencies for NSs of the same mass and angular momentum but different oblateness can reach tens of percent. Even larger differences can arise when NSs with the same mass and rotational frequency but different equations of state (EoS) are considered. In particular, the Lense-Thirring precession frequency in the innermost parts of the accretion region can differ by more than an order of magnitude across NSs with different EoS. Our results have clear implications for models of the variability of low-mass X-ray binaries (LMXBs).
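For orientation, the precession frequencies discussed above are conventionally built from the orbital ($\nu_\phi$), radial epicyclic ($\nu_r$), and vertical epicyclic ($\nu_\theta$) frequencies (standard definitions, not the paper's specific non-geodesic expressions):
\[ \nu_{\rm per} = \nu_\phi - \nu_r, \qquad \nu_{\rm nod} = \nu_\phi - \nu_\theta, \]
where $\nu_{\rm per}$ is the periastron precession frequency and $\nu_{\rm nod}$ the nodal (Lense-Thirring) precession frequency. The NS spin and quadrupole moment modify $\nu_r$ and $\nu_\theta$, and hence both precession rates.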
Enzyme-catalyzed replication of nucleic acid sequences is a prerequisite for the survival and evolution of biological entities. Before the advent of protein synthesis, genetic information was most likely stored in and replicated by RNA. However, experimental systems for sustained RNA-dependent RNA-replication are difficult to realise, in part due to the high thermodynamic stability of duplex products and the low chemical stability of catalytic RNAs. Using a derivative of a group I intron as a model for an RNA replicase, we show that heated air-water interfaces that are exposed to a plausible CO2-rich atmosphere enable sense and antisense RNA replication as well as template-dependent synthesis and catalysis of a functional ribozyme in a one-pot reaction. Both reactions are driven by autonomous oscillations in salt concentrations and pH, resulting from precipitation of acidified dew droplets, which transiently destabilise RNA duplexes. Our results suggest that an abundant Hadean microenvironment may have promoted both replication and synthesis of functional RNAs.
We extend the effective field theory for soft and collinear gravitons to interactions with fermionic matter fields. The full theory features a local Lorentz symmetry in addition to the usual diffeomorphisms, which requires incorporating the former into the soft-collinear gravity framework. The local Lorentz symmetry gives rise to Wilson lines in the effective theory that strongly resemble those of SCET for non-abelian gauge interactions, whereas the diffeomorphisms can be treated in the same fashion as in the case of scalar matter. The basic structure of soft-collinear gravity, which features a homogeneous soft background field that gives rise to a covariant derivative and multipole-expanded covariant Riemann-tensor interactions, remains unaltered and generalises in a natural way to fermion fields.