We perform the first measurement of the thermal and ionization state of the intergalactic medium (IGM) across 0.9 < z < 1.5 using 301 \lya absorption lines fitted from 12 HST STIS quasar spectra, with a total pathlength of \Delta z = 2.1. We employ a machine-learning-based inference method that uses joint b-N distributions obtained from \lyaf decomposition. Our results show that the HI photoionization rates, \Gamma, are in good agreement with recent UV background synthesis models, with \log (\Gamma/s^{-1}) = {-11.79}^{+0.18}_{-0.15}, {-11.98}^{+0.09}_{-0.09}, and {-12.32}^{+0.10}_{-0.12} at z = 1.4, 1.2, and 1, respectively. We obtain the IGM temperature at the mean density, T_0, and the adiabatic index, \gamma, as [\log (T_0/K), \gamma] = [{4.13}^{+0.12}_{-0.10}, {1.34}^{+0.10}_{-0.15}], [{3.79}^{+0.11}_{-0.11}, {1.70}^{+0.09}_{-0.09}], and [{4.12}^{+0.15}_{-0.25}, {1.34}^{+0.21}_{-0.26}] at z = 1.4, 1.2, and 1, respectively. Our measurements of T_0 at z = 1.4 and 1.2 are consistent with the expected trend from z < 3 temperature measurements, as well as with the theoretical expectation that, in the absence of any non-standard heating, the IGM should cool down after HeII reionization. In contrast, our T_0 measurement at z = 1 shows an unexpectedly high IGM temperature. However, because of the relatively large uncertainty of these measurements, of order \Delta T_0 ~ 5000 K and mostly driven by the limited redshift pathlength of the available data in these bins, we cannot definitively conclude whether the IGM cools down at z < 1.5. Lastly, we generate a mock dataset to test the constraining power of future measurements with larger datasets. The results demonstrate that, with a redshift pathlength of \Delta z ~ 2 per redshift bin, three times the current dataset, T_0 can be constrained to within 1500 K. Such precision would be sufficient to conclusively constrain the thermal history of the IGM at z < 1.5.
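For context, the quantities T_0 and \gamma quoted above refer to the standard power-law parameterization of the low-density IGM temperature-density relation,

$$ T(\Delta) = T_0\,\Delta^{\gamma-1}, \qquad \Delta \equiv \rho/\bar{\rho}, $$

so that T_0 is the temperature at the cosmic mean density and \gamma - 1 is the logarithmic slope constrained by the joint b-N distribution.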
The Dark Energy Spectroscopic Instrument (DESI) completed its five-month Survey Validation in May 2021. Spectra of stellar and extragalactic targets from Survey Validation constitute the first major data sample from the DESI survey. This paper describes the public release of those spectra, the catalogs of derived properties, and the intermediate data products. In total, the public release includes good-quality spectral information from 466,447 objects targeted as part of the Milky Way Survey, 428,758 as part of the Bright Galaxy Survey, 227,318 as part of the Luminous Red Galaxy sample, 437,664 as part of the Emission Line Galaxy sample, and 76,079 as part of the Quasar sample. In addition, the release includes spectral information from 137,148 objects that expand the scope beyond the primary samples as part of a series of secondary programs. Here, we describe the spectral data, data quality, data products, Large-Scale Structure science catalogs, access to the data, and references that provide relevant background to using these spectra.
We present a publicly available code to generate mock Lyman-α (\lya) forest data sets. The code is based on the Fluctuating Gunn-Peterson Approximation (FGPA) applied to Gaussian random fields and on the use of fast Fourier transforms (FFT). The output includes spectra of the \lya transmitted flux fraction, F, a quasar catalog, and a catalog of high-column-density systems. While these three elements have realistic correlations, additional code is then used to generate realistic quasar spectra, to add absorption by high-column-density systems and metals, and to simulate instrumental effects. Redshift space distortions (RSD) are implemented by including the large-scale velocity-gradient field in the FGPA, resulting in a correlation function of F that can be accurately predicted. One hundred realizations have been produced over the 14,000 deg² Dark Energy Spectroscopic Instrument (DESI) survey footprint with 100 quasars per deg², and they are being used for the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) and DESI surveys. The analysis of these realizations shows that the correlation of F follows the prediction within the accuracy of the eBOSS survey. The most time-consuming part of the production occurs before application of the FGPA, and the existing pre-FGPA forests can be used to easily produce new mock sets with modified redshift-dependent bias parameters or observational conditions.
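As a rough illustration of the FGPA step described above, and not the released code itself, the sketch below maps a Gaussian field to a \lya transmitted flux fraction via a lognormal optical-depth transformation; the amplitude and slope parameters are placeholders, and the velocity-gradient (RSD) term used in the actual mocks is omitted.

```python
import numpy as np

# Minimal FGPA sketch: map a Gaussian random field g to Lyman-alpha
# transmitted flux F = exp(-tau), with tau = a * exp(b * g) (lognormal variant).
# Parameters a and b are illustrative placeholders; in practice they are
# redshift dependent and tuned to reproduce the observed mean flux.
rng = np.random.default_rng(42)

n = 2**16                      # pixels along one line of sight
g = rng.standard_normal(n)     # Gaussian field (stand-in for the filtered density)
a, b = 0.3, 1.6                # placeholder amplitude and slope

tau = a * np.exp(b * g)        # FGPA optical depth (RSD velocity-gradient term omitted)
F = np.exp(-tau)               # transmitted flux fraction

print("mean transmitted flux:", F.mean())
```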
We present preliminary results of a partial-wave analysis of τ− → π−π−π+ντ in data from the Belle experiment at the KEKB e+e− collider. We demonstrate the presence of the a1(1420) and a1(1640) resonances in tauon decays and measure their masses and widths. We also present validation of our findings using a model-independent approach. Our results can improve modeling in simulation studies necessary for measuring the tauon electric and magnetic dipole moments and Michel parameters.
Experimental High Energy Physics has entered an era of precision measurements. However, measurements of many of the accessible processes assume that the final states' underlying kinematic distribution is the same as the Standard Model prediction. This assumption introduces an implicit model dependence into the measurement, making reinterpretation of the experimental analysis complicated without reanalysing the underlying data. We present a novel reweighting method to perform reinterpretation of particle physics measurements: the Standard Model templates are reweighted according to the kinematic signal distributions of alternative theoretical models prior to performing the statistical analysis. The generality of this method allows us to perform statistical inference in the space of theoretical parameters, assuming different kinematic distributions, according to a beyond-Standard-Model prediction. We implement our method as an extension to the pyhf software and interface it with the EOS software, which allows us to perform flavor physics phenomenology studies. Furthermore, we argue that, beyond the pyhf or HistFactory likelihood specification, only minimal information is necessary to make a likelihood model-agnostic and hence easily reinterpretable. We showcase that publishing such likelihoods is crucial for a full exploitation of experimental results.
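A minimal sketch of the underlying idea, reweighting a Standard Model signal template with per-event weights derived from an alternative kinematic shape before building a HistFactory-style likelihood with pyhf. This is illustrative only, not the pyhf extension or EOS interface described here; the event sample, densities, yields, and observed counts are all toy placeholders.

```python
import numpy as np
import pyhf

rng = np.random.default_rng(0)

# Placeholder Standard Model signal Monte Carlo events in a kinematic variable q2
q2 = rng.uniform(0.0, 10.0, size=50_000)

# Per-event weights: ratio of an alternative (BSM-like) shape to the SM shape.
# Both densities are normalized toy stand-ins for actual model predictions.
def sm_density(x):
    return np.ones_like(x) / 10.0

def bsm_density(x):
    return (1.0 + 0.1 * x) / 15.0

weights = bsm_density(q2) / sm_density(q2)

edges = np.linspace(0.0, 10.0, 6)
bsm_template, _ = np.histogram(q2, bins=edges, weights=weights)

# Scale the reweighted template to an arbitrary expected yield and build a pyhf model
signal = (50.0 * bsm_template / bsm_template.sum()).tolist()
bkg = [20.0] * 5
bkg_unc = [3.0] * 5

model = pyhf.simplemodels.uncorrelated_background(
    signal=signal, bkg=bkg, bkg_uncertainty=bkg_unc
)
data = [25.0, 28.0, 30.0, 32.0, 35.0] + model.config.auxdata  # toy observed counts

best_fit = pyhf.infer.mle.fit(data, model)
print("fitted signal strength mu:", best_fit[model.config.poi_index])
```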
Disc winds and planet-disc interactions are two crucial mechanisms that define the structure, evolution and dispersal of protoplanetary discs. While winds are capable of removing material from discs, eventually leading to their dispersal, massive planets can shape their disc by creating sub-structures such as gaps and spiral arms. We study the interplay between an X-ray photoevaporative disc wind and the substructures generated due to planet-disc interactions to determine how their mutual interactions affect the disc's and the planet's evolution. We perform three-dimensional hydrodynamic simulations of viscous (α = 6.9 × 10⁻⁴) discs that host a Jupiter-like planet and undergo X-ray photoevaporation. We trace the gas flows within the disc and wind and measure the accretion rate onto the planet, as well as the gravitational torque that is acting on it. Our results show that the planetary gap takes away the wind's pressure support, allowing wind material to fall back into the gap. This opens new pathways for material from the inner disc (and part of the outer disc) to be redistributed through the wind towards the gap. Consequently, the gap becomes shallower, and the flow of mass across the gap in both directions is significantly increased, as well as the planet's mass-accretion rate (by factors ≈5 and ≈2, respectively). Moreover, the wind-driven redistribution results in a denser inner disc and less dense outer disc, which, combined with the recycling of a significant portion of the inner wind, leads to longer lifetimes of the inner disc, contrary to the expectation in a planet-induced photoevaporation (PIPE) scenario that has been proposed in the past.
We extend the multireference covariant density-functional theory (MR-CDFT) by including fluctuations in quadrupole deformations and average isovector pairing gaps simultaneously for the nuclear matrix elements (NMEs) of neutrinoless double-beta (0νββ) decay in the candidate nuclei 76Ge, 82Se, 100Mo, 130Te, and 136Xe, assuming the exchange of either light or heavy neutrinos. The results indicate a linear correlation between the predicted NMEs and the isovector pairing strengths, as well as the excitation energies of the 2_1^+ and 4_1^+ states. By adjusting the pairing strengths based on the excitation energies of the 2_1^+ states, we calculate the NMEs for 0νββ decay, which are reduced by approximately 12% to 62% compared with the results obtained in the previous study by Song et al. [Phys. Rev. C 95, 024305 (2017)]. Additionally, upon introducing the average isovector pairing gap as an additional generator coordinate in the calculation, the NMEs increase by 56% to 218%.
GraphNeT is an open-source Python framework aimed at providing high-quality, user-friendly, end-to-end functionality for performing reconstruction tasks at neutrino telescopes using graph neural networks (GNNs). GraphNeT makes it fast and easy to train complex models that can provide event reconstruction with state-of-the-art performance, for arbitrary detector configurations, with inference times that are orders of magnitude faster than traditional reconstruction techniques. GNNs from GraphNeT are flexible enough to be applied to data from all neutrino telescopes, including future projects such as IceCube extensions or P-ONE. This means that GNN-based reconstruction can be used to provide state-of-the-art performance on most reconstruction tasks in neutrino telescopes, at real-time event rates, across experiments and physics analyses, with vast potential impact for neutrino and astro-particle physics.
We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to non-uniformity in random samples and to short-lived features in event time series. Under some conditions, this new test can outperform existing ones, such as the well-known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided, including a discussion of the parameterization of its distribution via asymptotic bootstrapping as well as a novel per-quantile error estimation of the empirical distribution. Two example applications are provided: using the test to boost the sensitivity in generic "bump hunting", and employing it to detect supernovae. The article is rounded off with an extended performance comparison to other, established goodness-of-fit tests.
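As an illustration of the general idea, not the exact statistic proposed in the article, the sketch below builds a toy spacings-based test (largest gap between ordered samples), calibrates its null distribution by Monte Carlo, and compares it with the Kolmogorov-Smirnov test on data with a short, localized deficit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def largest_gap_statistic(x):
    """Toy spacings statistic: the largest gap between ordered samples in [0, 1]."""
    u = np.sort(x)
    spacings = np.diff(np.concatenate(([0.0], u, [1.0])))
    return spacings.max()

# A sample with a short-lived localized deficit: a narrow band carved out of uniform data
sample = rng.uniform(size=40)
sample = sample[(sample < 0.48) | (sample > 0.55)]
m = len(sample)

# Null distribution of the toy statistic for the same sample size, via Monte Carlo
null_stats = np.array([largest_gap_statistic(rng.uniform(size=m)) for _ in range(20_000)])

gap_p = np.mean(null_stats >= largest_gap_statistic(sample))
ks_p = stats.kstest(sample, "uniform").pvalue

print(f"toy spacings-test p-value:  {gap_p:.3f}")
print(f"Kolmogorov-Smirnov p-value: {ks_p:.3f}")
```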
Event reconstruction is a central step in many particle physics experiments, turning detector observables into parameter estimates; for example, estimating the energy of an interaction given the sensor readout of a detector. The corresponding likelihood function is often intractable, and approximations need to be constructed. In our work, we first show how the full likelihood of a many-sensor detector can be broken apart into smaller terms, and second how we can train neural networks to approximate all terms solely based on forward simulation. Our technique results in a fast, flexible, and close-to-optimal surrogate model proportional to the likelihood that can be used with standard inference techniques, allowing for a consistent treatment of uncertainties. We illustrate our technique for parameter inference in neutrino telescopes based on maximum likelihood and Bayesian posterior sampling. Given its great flexibility, we also showcase our method for geometry optimization, enabling optimal detector designs to be learned. Lastly, we apply our method to a realistic simulation of a ton-scale water-based liquid scintillator detector.
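A deliberately simplified sketch of the factorized-surrogate idea, with placeholder names and an untrained network rather than the architecture of this work: the detector likelihood is written as a product over sensors, each per-sensor term is approximated by a small neural network taking the event parameters and that sensor's readout, and the summed network outputs are maximized over the parameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder per-sensor surrogate: maps (event parameters, sensor readout) to an
# unnormalized per-sensor log-likelihood term. In practice this network would be
# trained on forward simulations; here its weights are random, for illustration only.
surrogate = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

n_sensors = 20
readout = torch.randn(n_sensors, 1)          # toy sensor observations
theta = torch.zeros(2, requires_grad=True)   # event parameters to infer (e.g. energy, angle)

opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    # Broadcast the shared event parameters to every sensor and sum the per-sensor log terms
    inp = torch.cat([theta.expand(n_sensors, 2), readout], dim=1)
    neg_loglike = -surrogate(inp).sum()
    neg_loglike.backward()
    opt.step()

print("toy maximum-likelihood estimate:", theta.detach().numpy())
```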
The reconstruction of neutrino events in the IceCube experiment is crucial for many scientific analyses, including searches for cosmic neutrino sources. The Kaggle competition "IceCube -- Neutrinos in Deep Ice" was a public machine learning challenge designed to encourage the development of innovative solutions to improve the accuracy and efficiency of neutrino event reconstruction. Participants worked with a dataset of simulated neutrino events and were tasked with creating a suitable model to predict the direction vector of incoming neutrinos. From January to April 2023, hundreds of teams competed for a total of $50k prize money, which was awarded to the best-performing entries out of many thousands of submissions. In this contribution, I present some insights into the organization of this large outreach project and summarize some of the main findings, results, and takeaways.
The chiral anomaly, a fundamental property of QCD, relates the coupling of an odd number of Goldstone bosons to vector bosons, e.g. the coupling of three pions to one photon. This coupling can be measured experimentally in pion-photon scattering. We report on a precision measurement with the COMPASS experiment at CERN, where pion-photon scattering is mediated via the Primakoff effect. We also present improvements in monitoring beam stability and in controlling the DAQ of COMPASS.
Black holes are an essential building block of the baryonic structure in the Universe. Their masses range from a few times to a few billion times the mass of our Sun. Super-massive black holes (SMBHs), the largest form of compact objects, are believed to reside at the centre of all galaxies. Scaling relations between host properties and SMBH masses hint at a tight co-evolution between galaxies and their central black holes. In the deep nuclear gravitational potential, matter is accreted onto the SMBH. Under certain conditions, the viscous flow in the accretion disk releases extreme amounts of energy in the form of radiation. The galactic cores irradiating their environment under the effect of accretion are called active galactic nuclei (AGN). Over the last 20 years, quasars, the most luminous sub-species of AGN, have been discovered at ever-increasing distances. [...]
This work contains the first measurement of the anti-3He and anti-3H inelastic cross sections on matter, measured with the ALICE experiment at the LHC. It also evaluates the effect of the measurements of inelastic cross sections on matter on the propagation of antinuclei through the galaxy, and thus determines the transparency of the galaxy to antinuclei from different sources.
The goal of this work is to obtain a Hubble constant estimate through the study of the quadruply lensed, variable QSO SDSSJ1433+6007. To achieve this we combine multi-filter, archival $\textit{HST}$ data for lens modelling and a dedicated time delay monitoring campaign with the 2.1m Fraunhofer telescope at the $\textit{Wendelstein Observatory}$. The lens modelling is carried out with the public $\texttt{lenstronomy}$ Python package for each of the filters individually. Through this approach, we find that the data in one of the $\textit{HST}$ filters (F160W) contain a light contaminant which, had it remained undetected, would have severely biased the lensing potentials and thus our cosmological inference. After rejecting these data we obtain a combined posterior for the Fermat potential differences from the lens modelling in the remaining filters (F475X, F814W, F105W and F140W) with a precision of $\sim6\%$. The analysis of the $\textit{g'}$-band Wendelstein light curve data is carried out with a free-knot spline fitting method implemented in the public Python $\texttt{PyCS3}$ tools. The precision of the time delays between the QSO images ranges between 7.5 and 9.8$\%$, depending on the brightness of the images and their time delay. We then combine the posteriors for the Fermat potential differences and time delays. Assuming a flat $\Lambda$CDM cosmology, we infer a Hubble parameter of $H_0=76.6^{+7.7}_{-7.0}\frac{\mathrm{km}}{\mathrm{Mpc\;s}}$, reaching $9.6\%$ uncertainty for a single system.
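For reference (this is the standard time-delay cosmography relation, not a result specific to this work), the measured delays and Fermat potential differences constrain $H_0$ through

$$ \Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij}, \qquad D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}} \propto H_0^{-1}, $$

where $z_{\rm d}$ is the lens redshift and $D_{\rm d}$, $D_{\rm s}$, and $D_{\rm ds}$ are the angular diameter distances to the lens, to the source, and between lens and source.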
The static QCD force from the lattice can be used to extract $\Lambda_{\overline{\textrm{MS}}}$, which determines the running of the strong coupling. Usually, this is done with a numerical derivative of the static potential. However, this introduces additional systematic uncertainties; thus, we use another observable to measure the static force directly. This observable consists of a Wilson loop with a chromoelectric field insertion. We work in the pure SU(3) gauge theory. We use gradient flow to improve the signal-to-noise ratio and to address the field insertion. We extract $\Lambda_{\overline{\textrm{MS}}}^{n_f=0}$ from the data by exploring different methods to perform the zero flow time limit. We obtain the value $\sqrt{8t_0} \Lambda_{\overline{\textrm{MS}}}^{n_f=0} =0.629^{+22}_{-26}$, where $t_0$ is a flow time reference scale. We also obtain precise determinations of several scales: $r_0/r_1$, $\sqrt{8 t_0}/r_0$, $\sqrt{8 t_0}/r_1$, and we compare to the literature. The gradient flow appears to be a promising method for calculations of Wilson loops with chromoelectric and chromomagnetic insertions in quenched and unquenched configurations.
We show in experiments that a long, underdense, relativistic proton bunch propagating in plasma undergoes the oblique instability, which we observe as filamentation. We determine a threshold value for the ratio between the bunch transverse size and the plasma skin depth for the instability to occur. At the threshold, the outcome of the experiment alternates between filamentation and self-modulation instability (evidenced by longitudinal modulation into microbunches). Time-resolved images of the bunch density distribution reveal that filamentation grows to an observable level late along the bunch, confirming the spatio-temporal nature of the instability. We calculate the amplitude of the magnetic field generated in the plasma by the instability and show that the associated magnetic energy increases with plasma density.
The study of next-to-leading-power (NLP) corrections in soft emissions continues to attract interest both in QCD and in QED. Soft-photon spectra in particular provide a clean case study for the experimental verification of the Low-Burnett-Kroll (LBK) theorem. In this paper we study the consistency of the LBK theorem in the context of an ambiguity arising from momentum-conservation constraints in the computation of non-radiative amplitudes. We clarify that this ambiguity leads to various possible formulations of the LBK theorem, which are all equivalent up to power-suppressed effects (i.e. beyond the formal accuracy of the LBK theorem). We also propose a new formulation of the LBK theorem with a modified shifted kinematics which facilitates the numerical computation of non-radiative amplitudes with publicly available tools. Furthermore, we present numerical results for soft-photon spectra in the associated production of a muon pair with a photon, both in $e^+e^-$ annihilation and proton-proton collisions.
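Schematically, and in a common all-outgoing convention quoted here only for orientation (sign and charge conventions vary between formulations), the LBK theorem writes the radiative amplitude for a soft photon of momentum $k$ and polarization $\varepsilon$ in terms of the non-radiative amplitude and its first derivatives,

$$ \varepsilon_\mu \mathcal{M}^\mu(k) = \sum_i Q_i\left[\frac{\varepsilon\cdot p_i}{p_i\cdot k} + \frac{\varepsilon_\mu k_\nu J_i^{\mu\nu}}{p_i\cdot k}\right]\mathcal{M}_0 + \mathcal{O}(k), \qquad J_i^{\mu\nu} = p_i^{\mu}\frac{\partial}{\partial p_{i\,\nu}} - p_i^{\nu}\frac{\partial}{\partial p_{i\,\mu}}, $$

and the momentum-conservation ambiguity discussed above enters through how these derivatives act on a non-radiative amplitude that is defined only on the constrained phase space.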
The high-energy radiation emitted by young stars can have a strong influence on their rotational evolution at later stages. This is because internal photoevaporation is one of the major drivers of the dispersal of circumstellar disks, which surround all newly born low-mass stars during the first few million years of their evolution. Employing an internal EUV/X-ray photoevaporation model, we have derived a simple recipe for calculating realistic inner disk lifetimes of protoplanetary disks. This prescription was implemented into a magnetic-morphology-driven rotational evolution model and is used to investigate the impact of disk locking on the spin evolution of low-mass stars. We find that the length of the disk locking phase has a profound impact on the subsequent rotational evolution of a young star, and the implementation of realistic disk lifetimes leads to an improved agreement of model outcomes with observed rotation period distributions for open clusters of various ages. However, for both young star-forming regions used as initial conditions in our model, the strong bimodality in rotation periods that is observed in h Per could not be recovered; it is only recovered if the model is started from a double-peaked distribution with an initial disk fraction of 65%. At an age of only ~1 Myr, however, such a low disk fraction can only be achieved if an additional disk dispersal process, such as external photoevaporation, is invoked. These results therefore highlight the importance of including realistic disk dispersal mechanisms in rotational evolution models of young stars.
Understanding the nature of high-redshift dusty galaxies requires a comprehensive view of their interstellar medium (ISM) and molecular complexity. However, the molecular ISM at high redshifts is commonly studied using only a few species beyond 12C16O, limiting our understanding. In this paper, we present the results of deep 3 mm spectral line surveys using the NOrthern Extended Millimeter Array (NOEMA) targeting two strongly lensed dusty galaxies observed when the Universe was less than 1.8 Gyr old: APM 08279+5255, a quasar at redshift z = 3.911, and NCv1.143 (H-ATLAS J125632.7+233625), a z = 3.565 starburst galaxy. The spectral line surveys cover rest-frame frequencies from about 330 to 550 GHz for both galaxies. We report the detection of 38 and 25 emission lines in APM 08279+5255 and NCv1.143, respectively. These lines originate from 17 species, namely CO, 13CO, C18O, CN, CCH, HCN, HCO+, HNC, CS, C34S, H2O, H3O+, NO, N2H+, CH, c-C3H2, and the vibrationally excited HCN and neutral carbon. The spectra reveal the chemical richness and the complexity of the physical properties of the ISM. By comparing the spectra of the two sources and combining the analysis of the molecular gas excitation, we find that the physical properties and the chemical imprints of the ISM are different: the molecular gas is more excited in APM 08279+5255, which exhibits higher molecular gas temperatures and densities compared to NCv1.143; the molecular abundances in APM 08279+5255 are akin to the values of local active galactic nuclei (AGN), showing boosted relative abundances of the dense gas tracers that might be related to high-temperature chemistry and/or the X-ray-dominated regions, while NCv1.143 more closely resembles local starburst galaxies. The most significant differences between the two sources are found in H2O: the 448 GHz ortho-H2O(4_{23}-3_{30}) line is significantly brighter in APM 08279+5255, which is likely linked to the intense far-infrared radiation from the dust powered by the AGN. Our astrochemical model suggests that, at such high column densities, far-ultraviolet radiation is less important in regulating the ISM, while cosmic rays (and/or X-rays and shocks) are the key players in shaping the molecular abundances and the initial conditions of star formation. Both our observed CO isotopolog line ratios and the derived extreme ISM conditions (high gas temperatures, densities, and cosmic-ray ionization rates) suggest the presence of a top-heavy stellar initial mass function. From the ∼330-550 GHz continuum, we also find evidence of a nonthermal millimeter flux excess in APM 08279+5255 that might be related to the central supermassive black hole. Such deep spectral line surveys open a new window into the physics and chemistry of the ISM and the radiation field of galaxies in the early Universe.
The final data products of the tables derived from UVFIT are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/680/A95
We dedicate this paper to the memory of our coauthor and friend, Yu Gao, who passed away in May 2022.
Protoplanetary disks exhibit a vertical gradient in angular momentum, rendering them susceptible to the vertical shear instability (VSI). The most important condition for the onset of this mechanism is a short timescale of thermal relaxation (≲0.1 orbital timescales). Simulations of fully VSI-active disks are characterized by turbulent, vertically extended dust layers. This is in contradiction with recent observations of the outer regions of some protoplanetary disks, which appear highly settled. In this work, we demonstrate that the process of dust coagulation can diminish the cooling rate of the gas in the outer disk and quench the VSI activity. Our findings indicate that the turbulence strength is especially sensitive to variations in the fragmentation velocity of the grains. A small fragmentation velocity of ≈100 cm s⁻¹ results in a fully turbulent simulation, whereas a value of ≈400 cm s⁻¹ results in a laminar outer disk, consistent with observations. We show that VSI turbulence remains relatively unaffected by variations in the maximum particle size in the inner disk regions. However, we find that dust coagulation can significantly suppress the occurrence of VSI turbulence at larger distances from the central star.
Methyl cyanide (CH3CN) is one of the most abundant and widely spread interstellar complex organic molecules (iCOMs). Several studies found that, in hot corinos, methyl cyanide and methanol abundances are correlated, suggesting a chemical link, often interpreted as a synthesis of both on the interstellar grain surfaces. In this article, we present a revised network of the reactions forming methyl cyanide in the gas phase. We carried out an exhaustive review of the gas-phase CH3CN formation routes, proposed two new reactions, and performed new quantum mechanical calculations of several reactions. We found that 13 of the 15 reactions reported in the databases KIDA and UDfA have incorrect products and/or rate constants. The new corrected reaction network contains 10 reactions leading to methyl cyanide. We tested the relative importance of those reactions in forming CH3CN using our astrochemical model. We confirm that the radiative association of CH3+ and HCN, forming CH3CNH+, followed by the electron recombination of CH3CNH+, is the most important CH3CN formation route in both cold and warm environments, notwithstanding that we significantly corrected the rate constants and products of both reactions. The two newly proposed reactions play an important role in warm environments. Finally, we found a very good agreement between the predicted CH3CN abundances and those measured in cold (~10 K) and warm (~90 K) objects. Unexpectedly, we also found a chemical link between methanol and methyl cyanide via the CH$_{3}^{+}$ ion, which can explain the observed correlation between the CH3OH and CH3CN abundances measured in hot corinos.
Intracellular protein patterns are described by (nearly) mass-conserving reaction-diffusion systems. While these patterns initially form out of a homogeneous steady state due to the well-understood Turing instability, no general theory exists for the dynamics of fully nonlinear patterns. We develop a unifying theory for nonlinear wavelength-selection dynamics in (nearly) mass-conserving two-component reaction-diffusion systems independent of the specific mathematical model chosen. Previous work has shown that these systems support an extremely broad band of stable wavelengths, but the mechanism by which a specific wavelength is selected has remained unclear. We show that an interrupted coarsening process selects the wavelength at the threshold to stability. Based on the physical intuition that coarsening is driven by competition for mass and interrupted by weak source terms that break strict mass conservation, we develop a singular perturbation theory for the stability of stationary patterns. The resulting closed-form analytical expressions enable us to quantitatively predict the coarsening dynamics and the final pattern wavelength. We find excellent agreement with numerical results throughout the diffusion- and reaction-limited regimes of the dynamics, including the crossover region. Further, we show how, in these limits, the two-component reaction-diffusion systems map to generalized Cahn-Hilliard and conserved Allen-Cahn dynamics, therefore providing a link to these two fundamental scalar field theories. The systematic understanding of the length-scale dynamics of fully nonlinear patterns in two-component systems provided here builds the basis to reveal the mechanisms underlying wavelength selection in multicomponent systems with potentially several conservation laws.
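For orientation, the two scalar field theories referred to above take, in their standard dimensionless forms,

$$ \partial_t\phi = \nabla^2\big(f'(\phi) - \kappa\nabla^2\phi\big) \quad \text{(Cahn-Hilliard)}, \qquad \partial_t\phi = -\big(f'(\phi) - \kappa\nabla^2\phi\big) + \frac{1}{|\Omega|}\int_\Omega\big(f'(\phi) - \kappa\nabla^2\phi\big)\,\mathrm{d}\mathbf{r} \quad \text{(conserved Allen-Cahn)}, $$

where $f(\phi)$ is a double-well potential and the spatial average in the second equation enforces global mass conservation.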
The upcoming ByCycle project on the VISTA/4MOST multi-object spectrograph will offer new prospects of using a massive sample of ~1 million high spectral resolution (R = 20 000) background quasars to map the circumgalactic metal content of foreground galaxies (observed at R = 4000-7000), as traced by metal absorption. Such large surveys require specialized analysis methodologies. In the absence of early data, we instead produce synthetic 4MOST high-resolution fibre quasar spectra. To do so, we use the TNG50 cosmological magnetohydrodynamical simulation, combining photo-ionization post-processing and ray tracing, to capture Mg II (λ2796, λ2803) absorbers. We then use this sample to train a convolutional neural network (CNN) which searches for, and estimates the redshift of, Mg II absorbers within these spectra. For a test sample of quasar spectra with uniformly distributed properties ($\lambda _{\rm {Mg\, {\small II},2796}}$, $\rm {EW}_{\rm {Mg\, {\small II},2796}}^{\rm {rest}} = 0.05\!-\!5.15$ Å, $\rm {SNR} = 3\!-\!50$), the algorithm has a robust classification accuracy of 98.6 per cent and a mean wavelength accuracy of 6.9 Å. For high signal-to-noise (SNR) spectra ($\rm {SNR \gt 20}$), the algorithm robustly detects and localizes Mg II absorbers down to equivalent widths of $\rm {EW}_{\rm {Mg\, {\small II},2796}}^{\rm {rest}} = 0.05$ Å. For the lowest SNR spectra ($\rm {SNR=3}$), the CNN reliably recovers and localizes EW$_{\rm {Mg\, {\small II},2796}}^{\rm {rest}}$ ≥0.75 Å absorbers. This is more than sufficient for subsequent Voigt profile fitting to characterize the detected Mg II absorbers. We make the code publicly available through GitHub. Our work provides a proof-of-concept for future analyses of quasar spectra data sets numbering in the millions, soon to be delivered by the next generation of surveys.
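An illustrative, deliberately minimal 1D convolutional architecture for this kind of task, flagging whether a spectrum contains an Mg II absorber and regressing its position; this is a generic sketch rather than the network trained in the paper, and the input length and layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class AbsorberCNN(nn.Module):
    """Toy 1D CNN: flux array in, absorber probability and a location estimate out."""

    def __init__(self, n_pix: int = 4096):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        flat = 32 * (n_pix // 16)
        self.classify = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, 1))
        self.locate = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, flux):
        h = self.features(flux).flatten(start_dim=1)
        return torch.sigmoid(self.classify(h)), self.locate(h)

# Forward pass on a batch of toy spectra with shape (batch, channel, pixel)
model = AbsorberCNN()
spectra = torch.randn(8, 1, 4096)
prob, position = model(spectra)
print(prob.shape, position.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
```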
We study supermassive black hole (SMBH) binary eccentricity of equal-mass galaxy mergers in N-body simulations with the KETJU code, which combines the GADGET-4 fast multipole gravity solver with accurate regularized integration and post-Newtonian corrections around SMBHs. In simulations with realistic, high-eccentricity galactic merger orbits, the hard binary eccentricity is found to be a non-linear function of the deflection angle in the SMBH orbit during the final, nearly radial close encounter between the SMBHs before they form a bound binary. This mapping between the deflection angle and the binary eccentricity has no apparent resolution dependence in our simulations spanning the resolution range of 1 × 10⁵ to 8 × 10⁶ particles per galaxy. The mapping is also captured using a simple model with an analytical potential, indicating that it is driven by the interplay between a smooth asymmetric stellar background potential and dynamical friction acting on the SMBHs. Due to the non-linearity of this mapping, in eccentric major merger configurations, small, parsec-scale variations in the merger orbit can result in binary eccentricities varying in nearly the full possible range between e = 0 and e = 1. In idealized simulations, such variations are caused by finite resolution effects, and convergence of the binary eccentricity can be achieved with increasing resolution. However, in real galaxies, other mechanisms such as nuclear gas and substructure that perturb the merger orbit are likely to be significant enough for the binary eccentricity to be effectively random. Our results indicate that the distribution of these effectively random eccentricities can be studied using even moderate resolution simulations.
We present the RASS-MCMF catalogue of 8449 X-ray selected galaxy clusters over 25 000 deg2 of extragalactic sky. The accumulation of deep multiband optical imaging data, the development of the Multi-Component Matched Filter (MCMF) cluster confirmation algorithm, and the release of the DESI Legacy Survey DR10 catalogue makes it possible - for the first time, more than 30 yr after the launch of the ROSAT X-ray satellite - to identify the majority of the galaxy clusters detected in the second ROSAT All-Sky-Survey (RASS) source catalogue (2RXS). The resulting 90 per cent pure RASS-MCMF catalogue is the largest intracluster medium (ICM)-selected cluster sample to date. RASS-MCMF probes a large dynamic range in cluster mass spanning from galaxy groups to the most massive clusters. The cluster redshift distribution peaks at $z$ ~ 0.1 and extends to redshifts $z$ ~ 1. Out to $z$ ~ 0.4, the RASS-MCMF sample contains more clusters per redshift interval (dN/dz) than any other ICM-selected sample. In addition to the main sample, we present two subsamples with 6912 and 5506 clusters, exhibiting 95 per cent and 99 per cent purity, respectively. We forecast the utility of the sample for a cluster cosmological study, using realistic mock catalogues that incorporate most observational effects, including the X-ray exposure time and background variations, the existence likelihood selection and the impact of the optical cleaning with the algorithm MCMF. Using realistic priors on the observable-mass relation parameters from a DES-based weak lensing analysis, we estimate the constraining power of the RASS-MCMF×DES sample to be 0.026, 0.033, and 0.15 (1σ) on the parameters Ωm, σ8, and $w$, respectively.
We have obtained high-quality spectra of blue supergiant candidates in the dwarf irregular galaxy Leo A with the Low Resolution Imaging Spectrometer at the Keck I telescope. From the quantitative analysis of seven B8-A0 stars, we derive a mean metallicity [Z] = -1.35 ± 0.08, in excellent agreement with the gas-phase chemical abundance. From the stellar parameters and the flux-weighted gravity-luminosity relation (FGLR), we derive a spectroscopic distance modulus m - M = 24.77 ± 0.11 mag, significantly larger (~0.4 mag) than the value indicated by RR Lyrae and other stellar indicators. We explain the bulk of this discrepancy with blue loop stellar evolution at very low metallicity and show that the combination of metallicity effects and blue loop evolution amounts, in the case of Leo A, to an ~0.35 mag offset of the FGLR to fainter bolometric luminosities. We identify one outlier of low bolometric magnitude as a post-AGB star. Its metallicity is consistent with that of the young population, confirming the slow chemical enrichment of Leo A.
The simulation of particle physics data is a fundamental but computationally intensive ingredient for physics analysis at the Large Hadron Collider, where observational set-valued data is generated conditional on a set of incoming particles. To accelerate this task, we present a novel generative model based on a graph neural network and slot-attention components, which exceeds the performance of pre-existing baselines.
We present a systematic formalism based on a factorization theorem in soft-collinear effective theory to describe non-global observables at hadron colliders, such as gap-between-jets cross sections. The cross sections are factorized into convolutions of hard functions, capturing the dependence on the partonic center-of-mass energy √ŝ, and low-energy matrix elements, which are sensitive to the low scale Q0 ≪ √ŝ characteristic of the veto imposed on energetic emissions into the gap between the jets. The scale evolution of both objects is governed by a renormalization-group equation, which we derive at one-loop order. By solving the evolution equation for the hard functions for arbitrary 2 → M jet processes in the leading logarithmic approximation, we accomplish for the first time the all-order resummation of the so-called "super-leading logarithms" discovered in 2006, thereby solving an old problem of quantum field theory. We study the numerical size of the corresponding effects for different partonic scattering processes and explain why they are sizable for pp → 2 jets processes, but suppressed in H/Z and H/Z + jet production. The super-leading logarithms are given by an alternating series, whose individual terms can be much larger than the resummed result, even in very high orders of the loop expansion. Resummation is therefore essential to control these effects. We find that the asymptotic fall-off of the resummed series is much weaker than for standard Sudakov form factors.
The gamma-ray sky as seen by the Large Area Telescope (LAT) on board the Fermi satellite is a superposition of emissions from many processes. To study them, a rich toolkit of analysis methods for gamma-ray observations has been developed, most of which rely on emission templates to model foreground emissions. Here, we aim to complement these methods by presenting a template-free spatio-spectral imaging approach for the gamma-ray sky, based on a phenomenological modeling of its emission components. It is formulated in a Bayesian variational inference framework and allows a simultaneous reconstruction and decomposition of the sky into multiple emission components, enabled by a self-consistent inference of their spatial and spectral correlation structures. Additionally, we formulate the extension of our imaging approach to template-informed imaging, which adds emission templates to our component models while retaining the "data-drivenness" of the reconstruction. We demonstrate the performance of the presented approach on the ten-year Fermi LAT data set. With both template-free and template-informed imaging, we achieve a high quality of fit and show a good agreement of our diffuse emission reconstructions with the current diffuse emission model published by the Fermi Collaboration. We quantitatively analyze the obtained data-driven reconstructions and critically evaluate the performance of our models, highlighting strengths, weaknesses, and potential improvements. All reconstructions have been released as data products.
Nested sampling provides an estimate of the evidence of a Bayesian inference problem by probing the likelihood as a function of the enclosed prior volume. However, the lack of precise values of the enclosed prior mass of the samples introduces probing noise, which can hamper high-accuracy determinations of the evidence values as estimated from the likelihood-prior-volume function. We introduce an approach based on information field theory, a framework for non-parametric function reconstruction from data, that infers the likelihood-prior-volume function by exploiting its smoothness and thereby aims to improve the evidence calculation. Our method provides posterior samples of the likelihood-prior-volume function that translate into a quantification of the remaining sampling noise for the evidence estimate, or for any other quantity derived from the likelihood-prior-volume function.
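For reference, the quantity at stake is the standard nested sampling identity

$$ \mathcal{Z} = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta = \int_0^1 \mathcal{L}(X)\,\mathrm{d}X, \qquad X(\lambda) = \int_{\mathcal{L}(\theta)>\lambda}\pi(\theta)\,\mathrm{d}\theta, $$

where nested sampling delivers the ordered likelihood values exactly but only statistical knowledge of the corresponding enclosed prior volumes $X_i$; it is this uncertainty in the likelihood-prior-volume function that the proposed reconstruction addresses.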
We present the first results of the eXtreme UV Environments (XUE) James Webb Space Telescope (JWST) program, which focuses on the characterization of planet-forming disks in massive star-forming regions. These regions are likely representative of the environment in which most planetary systems formed. Understanding the impact of environment on planet formation is critical in order to gain insights into the diversity of the observed exoplanet populations. XUE targets 15 disks in three areas of NGC 6357, which hosts numerous massive OB stars, including some of the most massive stars in our Galaxy. Thanks to JWST, we can, for the first time, study the effect of external irradiation on the inner (<10 au), terrestrial-planet-forming regions of protoplanetary disks. In this study, we report on the detection of abundant water, CO, 12CO2, HCN, and C2H2 in the inner few au of XUE 1, a highly irradiated disk in NGC 6357. In addition, small, partially crystalline silicate dust is present at the disk surface. The derived column densities, the oxygen-dominated gas-phase chemistry, and the presence of silicate dust are surprisingly similar to those found in inner disks located in nearby, relatively isolated low-mass star-forming regions. Our findings imply that the inner regions of highly irradiated disks can retain similar physical and chemical conditions to disks in low-mass star-forming regions, thus broadening the range of environments with similar conditions for inner disk rocky planet formation to the most extreme star-forming regions in our Galaxy.
We study the molecular probability of the $X(3872)$ in the $D^0\bar{D}^{*0}$ and $D^+D^{*-}$ channels in several scenarios. One of them assumes that the state is purely due to a genuine nonmolecular component. However, this component gets unavoidably dressed by the meson components, to the point that, in the limit of zero binding, the $D^0\bar{D}^{*0}$ component becomes purely molecular. Yet, the small but finite binding allows for a nonmolecular state when the bare mass of the genuine state approaches the $D^0\bar{D}^{*0}$ threshold; in this case, however, the system develops a small scattering length and a huge effective range for this channel, in flagrant disagreement with present values of these magnitudes. Next, we discuss the possibility of having hybrid states stemming from the combined effect of a genuine state and a reasonable direct interaction between the meson components, where we find cases in which the scattering length and effective range are still compatible with data, but even then the molecular probability is as large as 95%. Finally, we perform the calculations when the binding stems purely from the direct interaction between the meson-meson components. In summary, we conclude that, while present data definitely rule out the possibility of a dominant nonmolecular component, the precise value of the molecular probability requires a more precise determination of the scattering length and effective range of the $D^0\bar{D}^{*0}$ channel, as well as the measurement of these magnitudes for the $D^+D^{*-}$ channel, which have not been determined experimentally so far.
Context. A number of He-rich hot subdwarf stars present high abundances of trans-iron elements, such as Sr, Y, Zr, and Pb. Diffusion processes are important in hot subdwarf stars, and it is generally believed that the high abundances of heavy elements in these peculiar stars are due to the action of radiative levitation. However, during the formation of He-rich hot subdwarf stars, hydrogen can be ingested into the convective zone driven by the He-core flash. It is known that episodes of protons being ingested into He-burning convective zones can lead to neutron-capture processes and the formation of heavy elements.
Aims: In this work, we explore, for the first time, whether neutron-capture processes can occur in late He-core flashes taking place in the cores of the progenitors of He-rich hot subdwarfs, and whether they could provide a self-synthesized origin for the heavy elements observed in some He-rich hot subdwarf stars.
Methods: We computed a detailed evolutionary model for a stripped red-giant star using a stellar evolution code with a nuclear network comprising 32 isotopes. Then we post-processed the stellar models in the phase of helium and hydrogen burning using a post-processing nucleosynthesis code with a nuclear network of 1190 species, which allowed us to follow the neutron-capture processes in detail.
Results: We find that neutron-capture processes occur in our model, with neutron densities reaching a value of ∼5 × 10¹² cm⁻³. We determined that the trans-iron elements are enhanced at the surface by 1 to 2 dex compared to the initial composition. Moreover, the relative abundance pattern [X_i/Fe] produced by neutron-capture processes closely resembles those observed in some He-rich hot subdwarf stars, hinting at a possible self-synthesized origin for the heavy elements in these stars.
Conclusions: We conclude that intermediate neutron-capture processes can occur during a proton ingestion event in the He-core flash of stripped red-giant stars. This mechanism offers a natural channel for the production of the heavy elements observed in certain He-rich hot subdwarf stars.
We report early-time ultraviolet (UV) and optical spectroscopy of the young, nearby Type II supernova (SN) 2022wsp obtained with the Hubble Space Telescope (HST)/STIS at about 10 and 20 days after the explosion. The SN 2022wsp UV spectra are compared to those of other well-observed Type II/IIP SNe, including the recently studied Type IIP SN 2021yja. Both SNe exhibit rapid cooling and similar evolution during early phases, indicating a common behavior among SNe II. Radiative-transfer modeling of the spectra of SN 2022wsp with the TARDIS code indicates a steep radial density profile in the outer layer of the ejecta, a solar metallicity, and a relatively high total extinction of E(B - V) = 0.35 mag. The early-time evolution of the photospheric velocity and temperature derived from the modeling agrees with the behavior observed in other previously studied cases. The strong suppression of hydrogen Balmer lines in the spectra suggests that interaction with a preexisting circumstellar environment may be occurring at early times. In the SN 2022wsp spectra, the absorption component of the Mg II P Cygni profile displays a double-trough feature on day +10 that disappears by day +20. The shape is well reproduced by the model without fine-tuning the parameters, suggesting that the secondary blueward dip is due to a metal transition that originates in the SN ejecta.
Quantum Chromodynamics, the theory of quarks and gluons, whose interactions can be described by a local SU(3) gauge symmetry with charges called "color quantum numbers", is reviewed; the goal of this review is to provide advanced Ph.D. students with a comprehensive handbook, helpful for their research. When QCD was "discovered" 50 years ago, the idea that quarks could exist, but not be observed, left most physicists unconvinced. Then, with the discovery of charmonium in 1974 and the explanation of its excited states using the Cornell potential, consisting of the sum of a Coulomb-like attraction and a long-range linear confining potential, the theory was suddenly widely accepted. This paradigm shift is now referred to as the November revolution. It had been anticipated by the observation of scaling in deep inelastic scattering, and was followed by the discovery of gluons in three-jet events. The parameters of QCD include the running coupling constant, α_s(Q²), which varies with the energy scale Q² characterising the interaction, and six quark masses. QCD cannot be solved analytically, at least not yet, and the large value of α_s at low momentum transfers limits perturbative calculations to the high-energy region where Q² ≫ Λ²_QCD ≃ (250 MeV)². Lattice QCD (LQCD), numerical calculations on a discretized space-time lattice, is discussed in detail, the dynamics of the QCD vacuum is visualized, and the expected spectra of mesons and baryons are displayed. Progress in lattice calculations of the structure of nucleons and of quantities related to the phase diagram of dense and hot (or cold) hadronic matter is reviewed. Methods and examples of how to calculate hadronic corrections to weak matrix elements on a lattice are outlined. The wide variety of analytical approximations currently in use, and the accuracy of these approximations, are reviewed. These methods range from the coupled relativistic Bethe-Salpeter and Dyson-Schwinger equations, which are formulated in both Minkowski and Euclidean space, to expansions of multi-quark states in a set of basis functions using light-front coordinates, to the AdS/QCD method that embeds four-dimensional QCD in a five-dimensional anti-de Sitter space, allowing confinement and spontaneous chiral symmetry breaking to be described in a novel way. Models that assume the number of colors is very large, i.e. make use of the large-Nc limit, give unique insights. Many other techniques that are tailored to specific problems, such as perturbative expansions for high-energy scattering or approximate calculations using the operator product expansion, are discussed. The very powerful effective field theory techniques that are successful for low-energy nuclear systems (chiral effective theory), or for non-relativistic systems involving heavy quarks, or the treatment of gluon exchanges between energetic, collinear partons encountered in jets, are discussed. The spectroscopy of mesons and baryons has played an important historical role in the development of QCD. The famous X, Y, Z states - and the discovery of pentaquarks - have revolutionized hadron spectroscopy; their status and interpretation are reviewed as well as recent progress in the identification of glueballs and hybrids in light-meson spectroscopy. These exotic states add to the spectrum of expected $q\bar{q}$ mesons and qqq baryons. The progress in understanding excitations of light and heavy baryons is discussed.
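For orientation, the familiar one-loop expression for the running coupling mentioned above reads

$$ \alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln\!\big(Q^2/\Lambda_{\rm QCD}^2\big)}, $$

valid for Q² ≫ Λ²_QCD, with n_f the number of active quark flavours; higher orders and quark-mass thresholds modify this expression in practice.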
The nucleon, as the lightest baryon, is discussed extensively: its form factors, its partonic structure, and the status of attempts to determine a three-dimensional picture of the parton distributions. An experimental program to study the phase diagram of QCD at high temperature and density started with fixed-target experiments in various laboratories in the second half of the 1980s, and then, in this century, with colliders. QCD thermodynamics at high temperature became accessible to LQCD, and numerical results on chiral and deconfinement transitions and properties of the deconfined and chirally restored form of strongly interacting matter, called the Quark-Gluon Plasma (QGP), have become very precise by now. These results can now be confronted with experimental data that are sensitive to the nature of the phase transition. There is clear evidence that the QGP phase is created. This phase of QCD matter can already be characterized by some properties that indicate that, within a temperature range of a few times the pseudocritical temperature, the medium behaves like a near-ideal liquid. Experimental observables are presented that demonstrate deconfinement. High- and ultrahigh-density QCD matter at moderate and low temperatures shows interesting features and new phases that are of astrophysical relevance. They are reviewed here, and some of the astrophysical implications are discussed. Perturbative QCD and methods to describe the different aspects of scattering processes are discussed. The primary parton-parton scattering in a collision is calculated in perturbative QCD with increasing complexity. The radiation of soft gluons can spoil the perturbative convergence; this can be cured by resummation techniques, which are also described here. Realistic descriptions of QCD scattering events need to model the cascade of quark and gluon splittings until hadron formation sets in, which is done by parton showers. The full event simulation can be performed with Monte Carlo event generators, which simulate the full chain from the hard interaction to the hadronic final states, including the modelling of non-perturbative components. The contribution of the LEP experiments (and of earlier collider experiments) to the study of jets is reviewed. Correlations between jets and the shape of jets allowed the collaborations to determine the "color factors" - invariants of the SU(3) color group governing the strength of quark-gluon and gluon-gluon interactions. The calculated jet production rates (using perturbative QCD) are shown to agree precisely with data, for jet energies spanning more than five orders of magnitude. The production of jets recoiling against a vector boson, W± or Z, is shown to be well understood. The discovery of the Higgs boson was certainly an important milestone in the development of high-energy physics. The couplings of the Higgs boson to massive vector bosons and fermions that have been measured so far support its interpretation as the mass-generating boson predicted by the Standard Model. The study of the Higgs boson recoiling against hadronic jets (with or without heavy flavors) or against vector bosons is also highlighted. Apart from the description of hard interactions taking place at high energies, the understanding of "soft QCD" is also very important. In this respect, Pomeron- and Odderon-exchange as well as soft and hard diffraction are discussed.
Weak decays of quarks and leptons, the quark mixing matrix, and the anomalous magnetic moment of the muon are processes governed by weak interactions. However, corrections from strong interactions are important, and these are reviewed. As the measured values are incompatible with (most of) the predictions, the question arises: are these discrepancies first hints of New Physics beyond the Standard Model? This volume concludes with a description of future facilities, or important upgrades of existing facilities, that will improve their luminosity by orders of magnitude. The best is yet to come!
Pseudospin symmetry (PSS) is a relativistic dynamical symmetry connected with the lower component of the Dirac spinor. Here, we investigate the conservation and breaking of PSS in single-nucleon resonant states, as an example, using the Green's function method, which provides a novel way to precisely describe not only the resonant energies and widths but also the spatial density distributions for both narrow and wide resonances. The PSS conservation and breaking are clearly displayed in the evolution of resonant parameters and density distributions with the potential depth: In the PSS limit, i.e., when the attractive scalar and repulsive vector potentials have the same magnitude but opposite sign, PSS is exactly conserved with strictly the same energy and width between the PS partners as well as identical density distributions of the lower components. As the potential depth increases, the PSS is broken gradually with energy and width splittings and a phase shift in the density distributions.
Strong nonrelativistic shocks are known to accelerate particles up to relativistic energies. However, for diffusive shock acceleration, electrons must have a highly suprathermal energy, implying the need for very efficient preacceleration. Most published studies consider shocks propagating through homogeneous plasma, which is an unrealistic assumption for astrophysical environments. Using 2D3V particle-in-cell simulations, we investigate electron acceleration and heating processes at nonrelativistic high-Mach-number shocks in electron-ion plasma with a turbulent upstream medium. For this purpose, slabs of plasma with compressive turbulence are simulated separately and then inserted into shock simulations, which require matching of the plasma slabs at the interface. Using a novel procedure of matching electromagnetic fields and currents, we perform simulations of perpendicular shocks setting different intensities of density fluctuations (≲10%) in the upstream region. The new simulation technique provides a framework for studying shocks propagating in turbulent media. We explore the impact of the fluctuations on electron heating, the dynamics of upstream electrons, and the driving of plasma instabilities. Our results indicate that while the presence of turbulence enhances variations in the upstream magnetic field, their levels remain too low to significantly influence the behavior of electrons at perpendicular shocks.
The origin of molecular evolution required the replication of short oligonucleotides to form longer polymers. Prebiotically plausible oligonucleotide pools tend to contain more of some nucleobases than others. It has been unclear whether this initial bias persists and how it affects replication. To investigate this, we examined the evolution of 12-mer biased short DNA pools using an enzymatic model system. This allowed us to study the long timescales involved in evolution, since it is not yet possible with currently investigated prebiotic replication chemistries. Our analysis using next-generation sequencing from different time points revealed that the initial nucleotide bias of the pool disappeared in the elongated pool after isothermal replication. In contrast, the nucleotide composition at each position in the elongated sequences remained biased and varied with both position and initial bias. Furthermore, we observed the emergence of highly periodic dimer and trimer motifs in the rapidly elongated sequences. This shift in nucleotide composition and the emergence of structure through templated replication could help explain how biased prebiotic pools could undergo molecular evolution and lead to complex functional nucleic acids.
We investigate cosmological correlators for conformally coupled $\phi^4$ theory in four-dimensional de Sitter space. These \textit{in-in} correlators differ from scattering amplitudes for massless particles in flat space due to the spacelike structure of future infinity in de Sitter. They also require a regularization which preserves de Sitter-invariance, which makes the flat space limit subtle to define at loop-level. Nevertheless we find that up to two loops, the \textit{in-in} correlators are structurally simpler than the wave function and have the same transcendentality as flat space amplitudes. Moreover, we show that their loop integrands can be recast in terms of flat space integrands and can be derived from a novel recursion relation.
We present the one-dimensional Lyα forest power spectrum measurement using the first data provided by the Dark Energy Spectroscopic Instrument (DESI). The data sample comprises 26,330 quasar spectra, at redshift z > 2.1, contained in the DESI Early Data Release and the first 2 months of the main survey. We employ a Fast Fourier Transform (FFT) estimator and compare the resulting power spectrum to an alternative likelihood-based method in a companion paper. We investigate methodological and instrumental contaminants associated with the new DESI instrument, applying techniques similar to previous Sloan Digital Sky Survey (SDSS) measurements. We use synthetic data based on a lognormal approximation to validate and correct our measurement. We compare our resulting power spectrum with previous SDSS and high-resolution measurements. Despite the relatively small statistics of this early sample, we successfully perform the FFT measurement, which is already competitive in terms of the covered scale range. At the end of the DESI survey, we expect a Lyα forest sample five times larger than that of SDSS, providing an unprecedentedly precise one-dimensional power spectrum measurement.
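As a rough illustration of what an FFT-based one-dimensional power spectrum estimator does (a minimal sketch for a single forest chunk; pixel masking, noise subtraction, and resolution corrections that a real pipeline such as DESI's must apply are omitted, and the function name is ours):

import numpy as np

def p1d_fft(flux, mean_flux, dv_kms):
    """Minimal 1D flux power spectrum estimator for one Lyα forest chunk.
    flux      : transmitted flux fraction F, uniformly sampled in velocity
    mean_flux : assumed mean flux <F> at the chunk redshift
    dv_kms    : pixel width in km/s
    Returns (k [s/km], P1D(k) [km/s])."""
    delta = flux / mean_flux - 1.0             # flux contrast field
    n = delta.size
    dk = np.fft.rfft(delta) * dv_kms           # discrete approximation of the Fourier transform
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv_kms)
    p1d = np.abs(dk) ** 2 / (n * dv_kms)       # P(k) = |delta(k)|^2 / L, with L = N * dv
    return k[1:], p1d[1:]                      # drop the k = 0 mode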
Spatial proton gradients provide energy in biological systems and are likely a driving force for prebiotic systems. Because protons diffuse quickly, however, such gradients are difficult to maintain in a steady state unless they are driven by other non-equilibria, such as thermal gradients. Here, we quantitatively predict the heat-flux-driven formation of pH gradients for the case of a simple acid-base reaction system. To this end, we (i) establish a theoretical framework that describes the spatial interplay of chemical reactions with thermal convection, thermophoresis, and electrostatic forces by a separation of timescales, and (ii) report quantitative measurements in a purpose-built microfluidic device. We show experimentally that the slope of such pH gradients undergoes pronounced amplitude changes in a concentration-dependent manner and can even be inverted. The predictions of the theoretical framework fully reflect these features and establish an understanding of how naturally occurring non-equilibrium environmental conditions can drive pH gradients.
We present a simple and promising new method to measure the expansion rate and the geometry of the universe that combines observations related to the time delays between the multiple images of time-varying sources, strongly lensed by galaxy clusters, and Type Ia supernovae, exploding in galaxies belonging to the same lens clusters. By means of two different statistical techniques that adopt realistic errors on the relevant quantities, we quantify the accuracy of the inferred cosmological parameter values. We show that the estimate of the Hubble constant is robust and competitive, and depends only mildly on the chosen cosmological model. Remarkably, the two probes separately produce confidence regions on the cosmological parameter planes that are oriented in complementary ways, thus providing in combination valuable information on the values of the other cosmological parameters. We conclude by illustrating the immediate observational feasibility of the proposed joint method in a well-studied lens galaxy cluster, with a relatively small investment of telescope time for monitoring from a 2 to 3 m class ground-based telescope.
In previous hydrodynamical simulations, we found a mechanism for nearly circular binary stars, such as Kepler-413, to trap two planets in a stable 1:1 resonance. Therefore, the stability of coorbital configurations becomes a relevant question for planet formation around binary stars. For this work, we investigated the coorbital planet stability using a Kepler-413 analogue as an example and then expanded the parameters to study a general n-body stability of planet pairs in eccentric horseshoe orbits around binaries. The stability was tested by evolving the planet orbits for 10^5 binary periods with varying initial semi-major axes and planet eccentricities. The unstable region of a single circumbinary planet is used as a comparison to the investigated coorbital configurations in this work. We confirm previous findings on the stability of single planets and find a first-order linear relation between the orbit eccentricity e_p and the pericentre to identify stable orbits for various binary configurations. Such a linear relation is also found for the stability of 1:1 resonant planets around binaries. Stable orbits for eccentric horseshoe configurations exist with a pericentre closer than seven binary separations and, in the case of Kepler-413, the pericentre of the first stable orbit can be approximated by r_c,peri = (2.90 e_p + 2.46) a_bin.
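To make the quoted fit concrete, the sketch below simply evaluates the innermost stable pericentre as a function of planet eccentricity (the coefficients are those given above; the binary separation used in the example is an assumed, roughly Kepler-413-like value):

def critical_pericentre(e_p, a_bin):
    # First stable pericentre for coorbital planet pairs, r_c,peri = (2.90 e_p + 2.46) a_bin,
    # returned in the same units as the binary separation a_bin.
    return (2.90 * e_p + 2.46) * a_bin

a_bin_au = 0.10  # assumed binary separation in au for a Kepler-413-like system
for e_p in (0.0, 0.2, 0.4):
    print(f"e_p = {e_p:.1f}: r_c,peri = {critical_pericentre(e_p, a_bin_au):.3f} au")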
We devise and demonstrate a method to search for non-gravitational couplings of ultralight dark matter to standard model particles using space-time separated atomic clocks and cavity-stabilized lasers. By making use of space-time separated sensors, which probe different values of an oscillating dark matter field, we can search for couplings that cancel in typical local experiments. We demonstrate this method using existing data from a frequency comparison of lasers stabilized to two optical cavities connected via a 2220 km fiber link [Nat. Commun. 13, 212 (2022)]. The absence of significant oscillations in the data results in constraints on the coupling of scalar dark matter to electrons, d_{m_e}, for masses between 10^{-19} eV and 2×10^{-15} eV. These are the first constraints on d_{m_e} alone in this mass range, and improve the dark matter constraints on any scalar-fermion coupling by up to two orders of magnitude.
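The kind of oscillation search described here can be pictured as a periodogram scan over the frequency range f = m c^2/h set by the dark matter mass (a toy sketch only: the data, sampling, frequency grid, and statistic below are placeholders, not the published analysis):

import numpy as np
from scipy.signal import lombscargle

# Toy stand-in for the fractional frequency difference between the two cavity-stabilized
# lasers; in the real analysis this would be the measured comparison data.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30 * 86400.0, 4000))   # ~30 days of unevenly sampled times [s]
y = 1e-17 * rng.standard_normal(t.size)             # white-noise placeholder signal

# Scalar dark matter of mass m oscillates at f = m c^2 / h; the angular-frequency grid
# below is an assumed range roughly corresponding to 1e-19 eV .. 2e-15 eV.
omega = np.logspace(-4, 0.7, 3000)                   # angular frequencies [rad/s]
power = lombscargle(t, y - y.mean(), omega, normalize=True)

# The absence of significant peaks is then converted into upper limits on d_{m_e}.
print("largest normalized periodogram power:", power.max())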
We aim at a direct measurement of the compactness of three galaxy-scale lenses in massive clusters, testing the accuracy of the scaling laws that describe the members in strong lensing (SL) models of galaxy clusters. We selected the multiply imaged sources MACS J0416.1−2403 ID14 (z=3.221), MACS J0416.1−2403 ID16 (z=2.095), and MACS J1206.2−0847 ID14 (z=3.753). Eight images were observed for the first SL system, and six for the latter two. We focused on the main deflector of each galaxy-scale SL system (identified as members 8971, 8785, and 3910, respectively), and modelled its total mass distribution with a truncated isothermal sphere. We accounted for the lensing effects of the remaining cluster components, and included the uncertainty on the cluster-scale mass distribution through a bootstrapping procedure. We measured truncation radius values of 6.1^{+2.3}_{-1.1} kpc, 4.0^{+0.6}_{-0.4} kpc, and 5.2^{+1.3}_{-1.1} kpc for members 8971, 8785, and 3910, respectively. Alternative non-truncated models with a higher number of free parameters do not lead to an improved description of the SL system. We measured the stellar-to-total mass fraction within the effective radius R_e for the three members, finding 0.51±0.21, 1.0±0.4, and 0.39±0.16, respectively. We find that a parameterisation of the properties of cluster galaxies in SL models based on power-law scaling relations with respect to the total luminosity cannot accurately describe their compactness over their full total mass range. Our results agree with modelling of the cluster members based on the Fundamental Plane relation. Finally, we report good agreement between our values of the stellar-to-total mass fraction within R_e and those of early-type galaxies from the SLACS Survey. Our work significantly extends the regime of the current samples of lens galaxies.
We study the excitation spectrum of light and strange mesons in diffractive scattering. We identify different hadron resonances through partial-wave analysis, which inherently relies on analysis models. Besides statistical uncertainties, the model dependence of the analysis introduces dominant systematic uncertainties. We discuss several of their sources for the $\pi^-\pi^-\pi^+$ and $K^0_S K^-$ final states and present methods to reduce them. We have developed a new approach that exploits a priori knowledge of signal continuity over adjacent final-state-mass bins to stably fit a large pool of partial waves to our data, allowing a clean identification of very small signals in our large data sets. For two-body final states of scalar particles, such as $K^0_S K^-$, mathematical ambiguities in the partial-wave decomposition lead to the same intensity distribution for different combinations of amplitude values. We discuss these ambiguities and present methods to resolve them or at least reduce the number of possible solutions. Resolving these issues will allow for a complementary analysis of the $a_J$-like resonance sector in these two final states.
We propose extensions of the anti-kt and Cambridge/Aachen hierarchical jet clustering algorithms that are designed to retain the exact jet kinematics of these algorithms, while providing an infrared-and-collinear-safe definition of jet flavor at any fixed order in perturbation theory. Central to our approach is a new technique called interleaved flavor neutralization (IFN), whereby the treatment of flavor is integrated with, but distinct from, the kinematic clustering. IFN allows flavor information to be meaningfully accessed at each stage of the clustering sequence, which enables a consistent assignment of flavor both to individual jets and to their substructure. We validate the IFN approach using a dedicated framework for fixed-order tests of infrared and collinear safety, which also reveals unanticipated issues in earlier approaches to flavored jet clustering. We briefly explore the phenomenological impact of IFN with anti-kt jets for benchmark tasks at the Large Hadron Collider.
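For background on the kinematic clustering that IFN is interleaved with (standard generalized-kt definitions, not new to this work), the pairwise and beam distances are

\[
d_{ij} = \min\!\left(k_{t,i}^{2p},\,k_{t,j}^{2p}\right)\frac{\Delta R_{ij}^{2}}{R^{2}},
\qquad
d_{iB} = k_{t,i}^{2p},
\qquad
\Delta R_{ij}^{2} = (y_i - y_j)^2 + (\phi_i - \phi_j)^2,
\]

with p = −1 for anti-kt and p = 0 for Cambridge/Aachen; IFN leaves these distances, and hence the jet kinematics, untouched while propagating flavor labels alongside the clustering sequence.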
RNA in extant biological systems is homochiral -- it consists exclusively of D-ribonucleotides rather than L-ribonucleotides. How the homochirality of RNA emerged is not known. Here, we use stochastic simulations to quantitatively explore the conditions for RNA homochirality to emerge in the prebiotic scenario of an `RNA reactor', in which RNA strands react in a non-equilibrium environment. These reactions include the hybridization, dehybridization, template-directed ligation, and cleavage of RNA strands. The RNA reactor is either closed, with a finite pool of ribonucleotide monomers of both chiralities (D and L), or the reactor is open, with a constant inflow of a racemic mixture of monomers. For the closed reactor, we also consider the interconversion between D- and L-monomers via a racemization reaction. We first show that template-free polymerization is unable to reach a high degree of homochirality, due to the lack of autocatalytic amplification. In contrast, in the presence of template-directed ligation, with base pairing and stacking between bases of the same chirality thermodynamically favored, a high degree of homochirality can arise and be maintained, provided that the non-equilibrium environment overcomes product inhibition, for instance via temperature cycling. Furthermore, if the experimentally observed kinetic stalling of ligation after chiral mismatches is also incorporated, the RNA reactor can evolve towards a fully homochiral state, in which one chirality is entirely lost. This is possible, because the kinetic stalling after chiral mismatches effectively implements a chiral cross-inhibition process. Taken together, our model supports a scenario, where the emergence of homochirality is assisted by template-directed ligation and polymerization in a non-equilibrium RNA reactor.
The chemical enrichment of dust and metals in the interstellar medium (ISM) of galaxies throughout cosmic time is one of the key driving processes of galaxy evolution. Here we study the evolution of the gas-phase metallicities, dust-to-gas (DTG), and dust-to-metal (DTM) ratios of 36 star-forming galaxies at 1.7<z<6.3 probed by gamma-ray bursts (GRBs). We compile all GRB-selected galaxies with intermediate (R=7000) to high (R>40,000) resolution spectroscopic data for which at least one refractory (e.g. Fe) and one volatile (e.g. S or Zn) element have been detected at S/N>3. This is to ensure that accurate abundances and dust depletion patterns can be obtained. We first derive the redshift evolution of the dust-corrected, absorption-line based gas-phase metallicity [M/H]_tot in these galaxies, for which we determine a linear relation with redshift [M/H]_tot(z) = (−0.21±0.04)z − (0.47±0.14). We then examine the DTG and DTM ratios as a function of redshift and through three orders of magnitude in metallicity, quantifying the relative dust abundance both through the direct line-of-sight visual extinction A_V and the derived depletion level. We use a novel method to derive the DTG and DTM mass ratios for each GRB sightline, summing up the mass of all the depleted elements in the dust phase. We find that the DTG and DTM mass ratios are both strongly correlated with the gas-phase metallicity and show a mild evolution with redshift as well. While these results are subject to a variety of caveats related to the physical environments and the narrow pencil-beam sightlines through the ISM probed by the GRBs, they provide strong implications for studies of dust masses to infer the gas and metal content of high-redshift galaxies, and particularly demonstrate the large offset from the average Galactic value in the low-metallicity, high-redshift regime.
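A direct numerical reading of the quoted linear fit (coefficients from the abstract; the error propagation below assumes independent Gaussian uncertainties, which is an illustration rather than the authors' procedure):

import numpy as np

def metallicity_vs_z(z):
    """Dust-corrected gas-phase metallicity [M/H]_tot(z) = (-0.21 +/- 0.04) z - (0.47 +/- 0.14)."""
    slope, slope_err = -0.21, 0.04
    intercept, intercept_err = -0.47, 0.14
    value = slope * z + intercept
    error = np.hypot(slope_err * z, intercept_err)   # assumes uncorrelated fit uncertainties
    return value, error

for z in (2.0, 4.0, 6.0):
    m, dm = metallicity_vs_z(z)
    print(f"z = {z}: [M/H]_tot = {m:+.2f} +/- {dm:.2f}")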
The onset of star formation is set by the collapse of filaments in the interstellar medium. From a theoretical point of view, an isolated cylindrical filament forms cores via the edge effect. Due to the self-gravity of a filament, the strong increase in acceleration at both ends leads to a pile-up of matter which collapses into cores. However, this effect is rarely observed. Most theoretical models consider a sharp density cut-off at the edge of the filament, whereas a smoother transition is more realistic and would also decrease the acceleration at the ends of the filament. We show that the edge effect can be significantly slowed down by a density gradient, although not completely avoided. However, this allows perturbations inside the filament to grow faster than the edge. We determine the critical density gradient for which the time-scales are equal and find it to be of the order of several times the filament radius. Hence, the density gradient at the ends of a filament is an essential parameter for fragmentation and the low rate of observed cases of the edge effect could be naturally explained by shallow gradients.
Turbulence in protoplanetary discs, when present, plays a critical role in transporting dust particles embedded in the gaseous disc component. When using a field description of dust dynamics, a diffusion approach is traditionally used to model this turbulent dust transport. However, it has been shown that classical turbulent diffusion models are not fully self-consistent. Several shortcomings exist, including the ambiguous nature of the diffused quantity and the non-conservation of angular momentum. Orbital effects are also neglected without an explicit prescription. In response to these inconsistencies, we present a novel Eulerian turbulent dust transport model for isotropic and homogeneous turbulence on the basis of a mean-field theory. Our model is based on density-weighted averaging applied to the pressureless fluid equations and uses appropriate turbulence closures. Our model yields novel dynamic equations for the turbulent dust mass flux and recovers existing turbulent transport models in special limiting cases, thus providing a more general and self-consistent description of turbulent particle transport. Importantly, our model ensures the conservation of global angular and linear momentum unconditionally and implicitly accounts for the effects of orbital dynamics in protoplanetary discs. Furthermore, our model correctly describes the vertical settling-diffusion equilibrium solutions for both small and large particles. Hence, this work presents a generalized Eulerian turbulent dust transport model, establishing a comprehensive framework for more detailed studies of turbulent dust transport in protoplanetary discs.
We present MGLENS, a large series of modified gravity lensing simulations tailored for cosmic shear data analyses and forecasts in which cosmological and modified gravity parameters are varied simultaneously. Based on the FORGE and BRIDGE N-body simulation suites presented in companion papers, we construct 100 × 5000 deg2 of mock Stage-IV lensing data from two 4D Latin hypercubes that sample cosmological and gravitational parameters in f(R) and nDGP gravity, respectively. These are then used to validate our inference analysis pipeline based on the lensing power spectrum, exploiting our implementation of these modified gravity models within the COSMOSIS cosmological inference package. Sampling this new likelihood, we find that cosmic shear can achieve 95 per cent CL constraints on the modified gravity parameters of $\log_{10}[f_{R_0}] < -4.77$ and $\log_{10}[H_0 r_c] > 0.09$, after marginalizing over intrinsic alignments of galaxies and including scales up to ℓ = 5000. We also investigate the impact of photometric uncertainty, scale cuts, and covariance matrices. We finally explore the consequences of analysing MGLENS data with the wrong gravity model, and report catastrophic biases for a number of possible scenarios. The Stage-IV MGLENS simulations, the FORGE and BRIDGE emulators and the COSMOSIS interface modules will be made publicly available upon journal acceptance.
The differential cross section for the quasi-free photoproduction reaction γn → K^0Σ^0 was measured at BGOOD at ELSA from threshold to a centre-of-mass energy of 2400 MeV. Close to threshold the results are consistent with existing data and are in agreement with partial wave analysis solutions over the full measured energy range, with a large coupling to the Δ(1900)1/2^− evident. This is the first dataset covering the K^∗ threshold region, where there are model predictions of dynamically generated vector meson-baryon resonance contributions.
Gaia BH1, the first quiescent black hole (BH) detected from Gaia data, poses a challenge to most binary evolution models: its current mass ratio is ≈0.1, and its orbital period seems to be too long for a post-common envelope system and too short for a non-interacting binary system. Here, we explore the hypothesis that Gaia BH1 formed through dynamical interactions in a young star cluster (YSC). We study the properties of BH-main sequence (MS) binaries formed in YSCs with initial mass 3 × 10^2 - 3 × 10^4 M⊙ at solar metallicity, by means of 3.5 × 10^4 direct N-body simulations coupled with binary population synthesis. For comparison, we also run a sample of isolated binary stars with the same binary population synthesis code and initial conditions used in the dynamical models. We find that BH-MS systems that form via dynamical exchanges populate the region corresponding to the main orbital properties of Gaia BH1 (period, eccentricity, and masses). In contrast, none of our isolated binary systems match the orbital period and MS mass of Gaia BH1. Our best-matching Gaia BH1-like system forms via repeated dynamical exchanges and collisions involving the BH progenitor star, before it undergoes core collapse. YSCs are at least two orders of magnitude more efficient in forming Gaia BH1-like systems than isolated binary evolution.
Radio relics are typically found to be arc-like regions of synchrotron emission in the outskirts of merging galaxy clusters, bowing out from the cluster center. In most cases they show synchrotron spectra that steepen toward the cluster center, indicating that they are caused by relativistic electrons being accelerated at outward traveling merger shocks. A number of radio relics break with this ideal picture and show morphologies that are bent the opposite way and show spectral index distributions that do not follow expectations from the ideal picture. We propose that these "wrong way" relics can form when an outward traveling shock wave is bent inward by an infalling galaxy cluster or group. We test this in an ultra-high-resolution zoom-in simulation of a massive galaxy cluster with an on-the-fly spectral cosmic-ray model. This allows us to study not only the synchrotron emission at colliding shocks, but also their synchrotron spectra to address the open question of relics with strongly varying spectral indices over the relic surface.
Photoevaporation from high-energy stellar radiation has been thought to drive the dispersal of protoplanetary discs. Different theoretical models have been proposed, but their predictions diverge in terms of the rate and modality at which discs lose their mass, with significant implications for the formation and evolution of planets. In this paper, we use disc population synthesis models to interpret recent observations of the lowest accreting protoplanetary discs, comparing predictions from EUV-driven, FUV-driven, and X-ray-driven photoevaporation models. We show that the recent observational data of stars with low accretion rates (low accretors) point to X-ray photoevaporation as the preferred mechanism driving the final stages of protoplanetary disc dispersal. We also show that the distribution of accretion rates predicted by the X-ray photoevaporation model is consistent with observations, while other dispersal models tested here are clearly ruled out.
We report high-quality Hα/CO imaging spectroscopy of nine massive (median stellar mass log(M/M⊙) = 10.65) disk galaxies on the star-forming main sequence (henceforth SFGs), near the peak of cosmic galaxy evolution (z ~ 1.1-2.5), taken with the ESO Very Large Telescope, IRAM-NOEMA, and the Atacama Large Millimeter/submillimeter Array. We fit the major axis position-velocity cuts with beam-convolved, forward models with a bulge, a turbulent rotating disk, and a dark matter (DM) halo. We include priors for stellar and molecular gas masses, optical light effective radii and inclinations, and DM masses from our previous rotation curve analysis of these galaxies. We then subtract the inferred 2D model-galaxy velocity and velocity dispersion maps from those of the observed galaxies. We investigate whether the residual velocity and velocity dispersion maps show indications for radial flows. We also carry out kinemetry, a model-independent tool for detecting radial flows. We find that all nine galaxies exhibit significant nontangential flows. In six SFGs, the inflow velocities (v_r ~ 30-90 km s^-1, 10%-30% of the rotational component) are along the minor axis of these galaxies. In two cases the inflow appears to be off the minor axis. The magnitudes of the radial motions are in broad agreement with the expectations from analytic models of gravitationally unstable, gas-rich disks. Gravitational torques due to clump and bar formation, or spiral arms, drive gas rapidly inward and result in the formation of central disks and large bulges. If this interpretation is correct, our observations imply that gas is transported into the central regions on ~10 dynamical timescales.
In this work we compare the predictions for the scattering length and effective range of the channels $K^0 \Sigma^+$, $K^+ \Sigma^0$, $K^+ \Lambda$, and $\eta p$, assuming the $N^*(1535)$ state to be either a molecular state of these channels or an original genuine state, made for instance from three quarks. Looking at very different scenarios, we conclude that the predictions of these two pictures are drastically different, to the point that we advocate measuring these magnitudes, which are accessible for instance through correlation functions, in order to gain much valuable information concerning the nature of this state.
Accurately understanding the equation of state (EOS) of high-density, zero-temperature quark matter plays an essential role in constraining the behavior of dense strongly interacting matter inside the cores of neutron stars. In this Letter, we study the weak-coupling expansion of the EOS of cold quark matter and derive the complete, gauge-invariant contributions from the long-wavelength, dynamically screened gluonic sector at next-to-next-to-next-to-leading order (N3LO) in the strong coupling constant α_s. This elevates the EOS result to the O(α_s^3 ln α_s) level, leaving only one unknown constant from the unscreened sector at N3LO, and places it on par with its high-temperature counterpart from 2003.
In this in silico study, we show that phase-separated active nematics form −1/2 defects, contrary to the current paradigm. We also observe and characterize lateral arc-like structures separating from nematic bands and moving in the transverse direction. Topological defects play a central role in the formation and organization of various biological systems. Historically, such nonequilibrium defects have been mainly studied in the context of homogeneous active nematics. Phase-separated systems, in turn, are known to form dense and dynamic nematic bands, but typically lack topological defects. In this paper, we use agent-based simulations of weakly aligning, self-propelled polymers and demonstrate that, contrary to the existing paradigm, phase-separated active nematics form −1/2 defects. Moreover, these defects, emerging due to interactions among dense nematic bands, constitute a novel second-order collective state. We investigate the morphology of defects in detail and find that their cores correspond to a strong increase in density, associated with a condensation of nematic fluxes. Unlike their analogs in homogeneous systems, such condensed defects form and decay in a different way and do not involve positively charged partners. We additionally observe and characterize lateral arc-like structures that separate from a band's bulk and move in the transverse direction. We show that the key control parameters defining the route from stable bands to the coexistence of dynamic lanes and defects are the total density of particles and their path persistence length. We introduce a hydrodynamic theory that qualitatively recapitulates all the main features of the agent-based model, and use it to show that the emergence of both defects and arcs can be attributed to the same anisotropic active fluxes. Finally, we present a way to artificially engineer and position defects, and speculate about experimental verification of the provided model.
Phase transitions in a non-perturbative regime can be studied by ab initio lattice field theory (LFT) methods. The status and future research directions for LFT investigations of Quantum Chromodynamics under extreme conditions are reviewed, including properties of hadrons and of the hypothesized QCD axion as inferred from QCD topology in different phases. We discuss phase transitions in strong interactions in an extended parameter space, and the possibility of model building for Dark Matter and Electro-Weak Symmetry Breaking. Methodological challenges are addressed as well, including new developments in Artificial Intelligence geared towards the identification of different phases and transitions.
JEM-EUSO is an international program for the development of space-based Ultra-High Energy Cosmic Ray observatories. The program consists of a series of missions which are either under development or in the data analysis phase. All instruments are based on a wide-field-of-view telescope, which operates in the near-UV range, designed to detect the fluorescence light emitted by extensive air showers in the atmosphere. We describe the simulation software ESAF in the framework of the JEM-EUSO program and explain the physical assumptions used. We present here the implementation of the JEM-EUSO, POEMMA, K-EUSO, TUS, Mini-EUSO, EUSO-SPB1 and EUSO-TA configurations in ESAF. For the first time ESAF simulation outputs are compared with experimental data.
Context. The general prediction that more than half of all cataclysmic variables (CVs) have evolved past the period minimum is in strong disagreement with observational surveys, which show that the relative number of these objects is just a few percent.
Aims: Here, we investigate whether a large number of post-period minimum CVs could detach because of the appearance of a strong white dwarf magnetic field potentially generated by a rotation- and crystallization-driven dynamo.
Methods: We used the MESA code to calculate evolutionary tracks of CVs incorporating the spin evolution and cooling as well as compressional heating of the white dwarf. If the conditions for the dynamo were met, we assumed that the emerging magnetic field of the white dwarf connects to that of the companion star and incorporated the corresponding synchronization torque, which transfers spin angular momentum to the orbit.
Results: We find that for CVs with donor masses exceeding ∼0.04 M⊙, magnetic fields are generated mostly if the white dwarfs start to crystallize before the onset of mass transfer. It is possible that a few white dwarf magnetic fields are generated in the period gap. For the remaining CVs, the conditions for the dynamo to work are met beyond the period minimum, when the accretion rate decreased significantly. Synchronization torques cause these systems to detach for several gigayears even if the magnetic field strength of the white dwarf is just one MG.
Conclusions: If the rotation- and crystallization-driven dynamo - which is currently the only mechanism that can explain several observational facts related to magnetism in CVs and their progenitors - or a similar temperature-dependent mechanism is responsible for the generation of magnetic fields in white dwarfs, most CVs that have evolved beyond the period minimum must detach for several gigayears at some point. This reduces the predicted number of semi-detached period bouncers by up to ∼60-80%.
We analyse in detail the QED corrections to the total decay width and the moments of the electron energy spectrum of the inclusive semi-leptonic B → X_c e ν decay. Our calculation includes short-distance electroweak corrections, the complete
Strong gravitational lensing is a powerful tool to provide constraints on galaxy mass distributions and cosmological parameters, such as the Hubble constant, H0. Nevertheless, inference of such parameters from images of lensing systems is not trivial as parameter degeneracies can limit the precision in the measured lens mass and cosmological results. External information on the mass of the lens, in the form of kinematic measurements, is needed to ensure a precise and unbiased inference. Traditionally, such kinematic information has been included in the inference after the image modeling, using spherical Jeans approximations to match the measured velocity dispersion integrated within an aperture. However, as spatially resolved kinematic measurements become available via IFU data, more sophisticated dynamical modeling is necessary. Such kinematic modeling is expensive, and constitutes a computational bottleneck that we aim to overcome with our Stellar Kinematics Neural Network (SKiNN). SKiNN emulates axisymmetric modeling using a neural network, quickly synthesizing from a given mass model a kinematic map that can be compared to the observations to evaluate a likelihood. With a joint lensing plus kinematic framework, this likelihood constrains the mass model at the same time as the imaging data. We show that SKiNN's emulation of a kinematic map is accurate to a considerably better precision than can be measured (better than 1% in almost all cases). Using SKiNN speeds up the likelihood evaluation by a factor of ~200. This speedup makes dynamical modeling economical, and enables lens modelers to make effective use of modern data quality in the JWST era.
NGC 7793, NGC 300, M 33, and NGC 2403 are four nearby undisturbed and bulgeless low-mass spiral galaxies whose morphology and stellar mass are similar. They are ideal laboratories for studying disc formation scenarios and the histories of stellar mass growth. We constructed a simple chemical evolution model by assuming that discs grow gradually with continuous metal-free gas infall and metal-enriched gas outflow. By means of the classical χ^2 method, applied to the model predictions, the best combination of free parameters capable of reproducing the corresponding present-day observations was determined, that is, the radial dependence of the infall timescale τ = 0.1 r/R_d + 3.4 Gyr (R_d is the disc scale length) and the gas outflow efficiency b_out = 0.2. The model results agree excellently with the general predictions of the inside-out growth scenario for the evolution of spiral galaxies. About 80% of the stellar mass of NGC 7793 was assembled within the last 8 Gyr, and 40% of the mass was assembled within the last 4 Gyr. By comparing the best-fitting model results of the three other galaxies, we obtain similar results: 72% (NGC 300), 66% (NGC 2403), and 79% (M 33) of the stellar mass were assembled within the last ∼8 Gyr (i.e. z = 1). These four disc galaxies simultaneously increased their sizes and stellar masses in time, and they grew in size at ∼0.30 times the rate at which they grew in mass. The scale lengths of these four discs are now 20%-25% larger than at z = 1. The stellar mass-metallicity relation and the metallicity gradients predicted by our best-fitting model, constrained by the observed metallicities from H II-region emission-line analysis, agree well with the metallicities measured from individual massive red and blue supergiant stars and from population synthesis of Sloan Digital Sky Survey galaxies.
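The radial dependence of the best-fitting infall timescale quoted above can be read directly as a function (coefficients from the abstract; the exponentially declining infall-rate form and the disc scale length used in the example are common assumptions for such models, not necessarily the authors' exact prescription):

import numpy as np

def infall_timescale_gyr(r_kpc, r_d_kpc):
    """Best-fitting infall timescale tau(r) = 0.1 * r/R_d + 3.4 Gyr (R_d: disc scale length)."""
    return 0.1 * r_kpc / r_d_kpc + 3.4

def infall_rate(t_gyr, r_kpc, r_d_kpc, norm=1.0):
    """Illustrative exponentially declining infall rate ~ exp(-t/tau)/tau; normalization arbitrary."""
    tau = infall_timescale_gyr(r_kpc, r_d_kpc)
    return norm * np.exp(-t_gyr / tau) / tau

r_d = 1.9  # assumed disc scale length in kpc for an NGC 300-like disc
for r in (1.0, 3.0, 6.0):
    print(f"r = {r} kpc: tau = {infall_timescale_gyr(r, r_d):.2f} Gyr")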
The inference of astrophysical and cosmological properties from the Lyman-$\alpha$ forest conventionally relies on summary statistics of the transmission field that carry useful but limited information. We present a deep learning framework for inference from the Lyman-$\alpha$ forest at the field level. This framework consists of a 1D residual convolutional neural network (ResNet) that extracts spectral features and performs regression on the thermal parameters of the IGM that characterize the power-law temperature-density relation. We train this supervised machinery using a large set of mock absorption spectra from Nyx hydrodynamic simulations at $z=2.2$ with a range of thermal parameter combinations (labels). We employ Bayesian optimization to find an optimal set of hyperparameters for our network, and then employ a committee of ten neural networks for increased statistical robustness of the network inference. In addition to the parameter point predictions, our machine also provides a self-consistent estimate of their covariance matrix, with which we construct a pipeline for inferring the posterior distribution of the parameters. We compare the results of our framework with the traditional approach based on summary statistics (the PDF and power spectrum of the transmission) in terms of the area of the 68% credibility regions, which we use as our figure of merit (FoM). In our study of the information content of perfect (noise- and systematics-free) Ly$\alpha$ forest spectral data sets, we find a significant tightening of the posterior constraints -- by factors of 5.65 and 1.71 in FoM over the power spectrum alone and jointly with the PDF, respectively -- which is the consequence of recovering the relevant information that is not carried by the classical summary statistics.
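A minimal sketch of the kind of 1D residual convolutional regressor described above (layer counts, channel widths, kernel sizes, and the two-parameter output are illustrative assumptions, not the authors' architecture):

import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    """Residual block: two 1D convolutions with a skip connection."""
    def __init__(self, channels, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))

class LyaThermalRegressor(nn.Module):
    """Maps a transmission skewer (1 x N pixels) to thermal parameters, e.g. (T0, gamma)."""
    def __init__(self, n_blocks=4, channels=32, n_params=2):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(channels, n_params))

    def forward(self, flux):                      # flux: (batch, 1, n_pixels)
        return self.head(self.blocks(self.stem(flux)))

model = LyaThermalRegressor()
print(model(torch.randn(8, 1, 1024)).shape)       # -> torch.Size([8, 2])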
Subsonic turbulence plays a major role in determining properties of the intracluster medium (ICM). We introduce a new meshless finite mass (MFM) implementation in OPENGADGET3 and apply it to this specific problem. To this end, we present a set of test cases to validate our implementation of the MFM framework in our code. These include, but are not limited to: the sound wave and Kepler disc as smooth situations to probe stability, Rayleigh-Taylor and Kelvin-Helmholtz instabilities as popular mixing instabilities, a blob test as a more complex example including both mixing and shocks, shock tubes with various Mach numbers, a Sedov blast wave, different tests including self-gravity such as gravitational freefall, a hydrostatic sphere, the Zeldovich pancake, and a 10^15 M⊙ galaxy cluster as a cosmological application. Advantages over smoothed particle hydrodynamics (SPH) include increased mixing and better convergence behaviour. We demonstrate that the MFM solver is robust, also in a cosmological context. We show evidence that the solver performs extraordinarily well when applied to decaying subsonic turbulence, a problem that is very difficult to handle for many methods. MFM captures the expected velocity power spectrum with high accuracy and shows good convergence behaviour. Using MFM or SPH within OPENGADGET3 leads to a comparable decay in turbulent energy due to numerical dissipation. When studying the energy decay for different initial turbulent energy fractions, we find that MFM performs well down to Mach numbers $\mathcal {M}\approx 0.01$. Finally, we show how important the slope limiter and the energy-entropy switch are for controlling the behaviour and evolution of the fluids.
We analyse synthetic 12CO, 13CO, and [C II] emission maps of molecular cloud (MC) simulations from the SILCC-Zoom project. We present radiation, magnetohydrodynamic zoom-in simulations of individual clouds, both with and without radiative stellar feedback, forming in a turbulent multiphase interstellar medium following on-the-fly the evolution of e.g. H2, CO, and C+. We introduce a novel post-processing routine based on CLOUDY which accounts for higher ionization states of carbon due to stellar radiation in H II regions. Synthetic emission maps of [C II] in and around feedback bubbles show that the bubbles are largely devoid of [C II], as recently found in observations, which we attribute to the further ionization of C+ into C2+. For both 12CO and 13CO, the cloud-averaged luminosity ratio, $L_{\rm CO}/L_{\rm [C\,II]}$, can neither be used as a reliable measure of the H2 mass fraction nor of the evolutionary stage of the clouds. We note a relation between the $I_{\rm CO}/I_{\rm [C\,II]}$ intensity ratio and the H2 mass fraction for individual pixels of our synthetic maps. The scatter, however, is too large to reliably infer the H2 mass fraction. Finally, the assumption of chemical equilibrium overestimates H2 and CO masses by up to 150 and 50 per cent, respectively, and $L_{\rm CO}$ by up to 60 per cent. The masses of H and C+ would be underestimated by 65 and 30 per cent, respectively, and $L_{\rm [C\,II]}$ by up to 35 per cent. Hence, the assumption of chemical equilibrium in MC simulations introduces intrinsic errors of a factor of 2 in chemical abundances, luminosities, and luminosity ratios.
Context. Photoevaporation and dust-trapping are individually considered to be important mechanisms in the evolution and morphology of protoplanetary disks. However, it is not yet clear what kind of observational features are expected when both processes operate simultaneously.
Aims: We studied how the presence (or absence) of early substructures, such as the gaps caused by planets, affects the evolution of the dust distribution and flux in the millimeter continuum of disks that are undergoing photoevaporative dispersal. We also tested if the predicted properties resemble those observed in the population of transition disks.
Methods: We used the numerical code Dustpy to simulate disk evolution considering gas accretion, dust growth, dust-trapping at substructures, and mass loss due to X-ray and EUV (XEUV) photoevaporation and dust entrainment. Then, we compared how the dust mass and millimeter flux evolve for different disk models.
Results: We find that, during photoevaporative dispersal, disks with primordial substructures retain more dust and are brighter in the millimeter continuum than disks without early substructures, regardless of the photoevaporative cavity size. Once the photoevaporative cavity opens, the estimated fluxes for the disk models that are initially structured are comparable to those found in the bright transition disk population (Fmm > 30 mJy), while the disk models that are initially smooth have fluxes comparable to the transition disks from the faint population (Fmm < 30 mJy), suggesting a link between each model and population.
Conclusions: Our models indicate that the efficiency of the dust trapping determines the millimeter flux of the disk, while the gas loss due to photoevaporation controls the formation and expansion of a cavity, decoupling the mechanisms responsible for each feature. In consequence, even a planet with a mass comparable to Saturn could trap enough dust to reproduce the millimeter emission of a bright transition disk, while its cavity size is independently driven by photoevaporative dispersal.
The canonical understanding of stellar convection has recently been put in doubt by helioseismic results and global 3D convection simulations. This "convective conundrum" is manifested by much higher velocity amplitudes at large scales in simulations than inferred from helioseismology, and by the difficulty of reproducing the solar differential rotation and dynamo with global 3D simulations. Here some aspects of this conundrum are discussed from the viewpoint of hydrodynamic Cartesian 3D simulations targeted at testing the rotational influence and surface forcing on deep convection. More specifically, the dominant scale of convection and the depths of the convection zone and of the weakly subadiabatic -- yet convecting -- Deardorff zone are discussed in detail.
Upcoming large galaxy surveys will subject the standard cosmological model, Lambda Cold Dark Matter, to new precision tests. These can be tightened considerably if theoretical models of galaxy formation are available that can predict galaxy clustering and galaxy-galaxy lensing on the full range of measurable scales, throughout volumes as large as those of the surveys, and with sufficient flexibility that uncertain aspects of the underlying astrophysics can be marginalized over. This, in particular, requires mock galaxy catalogues in large cosmological volumes that can be directly compared to observation, and can be optimized empirically by Monte Carlo Markov Chains or other similar schemes, thus eliminating or estimating parameters related to galaxy formation when constraining cosmology. Semi-analytic galaxy formation methods implemented on top of cosmological dark matter simulations offer a computationally efficient approach to construct physically based and flexibly parametrized galaxy formation models, and as such they are more potent than still faster, but purely empirical models. Here, we introduce an updated methodology for the semi-analytic L-GALAXIES code, allowing it to be applied to simulations of the new MillenniumTNG project, producing galaxies directly on fully continuous past lightcones, potentially over the full sky, out to high redshift, and for all galaxies more massive than $\sim 10^8\, {\rm M}_\odot$. We investigate the numerical convergence of the resulting predictions, and study the projected galaxy clustering signals of different samples. The new methodology can be viewed as an important step towards more faithful forward-modelling of observational data, helping to reduce systematic distortions in the comparison of theory to observations.
We address the issue of the compositeness of hadronic states and demonstrate that, starting with a genuine state of nonmolecular nature which nonetheless couples to some meson-meson component so as to be observable in that channel, if that state is responsible for a bound state appearing below the meson-meson threshold, it gets dressed with a meson cloud and becomes purely molecular in the limiting case of zero binding. We discuss the issue of the scales, and see that if the genuine state has a mass very close to threshold, the theorem holds, but the molecular probability goes to unity only in a very narrow range of energies close to threshold. The conclusion is that the value of the binding alone does not determine the compositeness of a state. However, in such extreme cases we see that the scattering length becomes progressively smaller and the effective range grows indefinitely. In other words, the binding energy does not determine the compositeness of a state, but the additional information of the scattering length and effective range can provide an answer. We also show that considering a direct attractive interaction between the mesons, in addition to having a genuine component, increases the compositeness of the state. Explicit calculations are done for the T_cc(3875) state, but are easily generalized to any hadronic system.
We describe an algorithm to organize Feynman integrals in terms of their infrared properties. Our approach builds upon the theory of Landau singularities, which we use to classify all configurations of loop momenta that can give rise to infrared divergences. We then construct bases of numerators for arbitrary Feynman integrals, which cancel all singularities and render the integrals finite. Through the same analysis, one can also classify so-called evanescent and evanescently finite Feynman integrals. These are integrals whose vanishing or finiteness relies on properties of dimensional regularization. To illustrate the use of these integrals, we display how to obtain a simpler form for the leading-color two-loop four-gluon scattering amplitude through the choice of a suitable basis of finite integrals. In particular, when all gluon helicities are equal, we show that with our basis the most complicated double-box integrals do not contribute to the finite remainder of the scattering amplitude.
Rotation matters for the life of a star. It causes a multitude of dynamical phenomena in the stellar interior during a star's evolution, and its effects accumulate until the star dies. All stars rotate at some level, but those born with a mass above about 1.3 times the mass of the Sun rotate rapidly during more than 90% of their nuclear lifetime. Internal rotation guides the angular momentum and chemical element transport throughout the stellar interior. These transport processes change over time as the star evolves. The cumulative effects of stellar rotation and its induced transport processes determine the helium content of the core by the time it exhausts its hydrogen. The amount of helium at that stage also guides the heavy element yields at the end of the star's life. A proper theory of stellar evolution, and any realistic model for the chemical enrichment of galaxies, must be based on observational calibrations of stellar rotation and of the induced transport processes. For some years now, asteroseismology has offered such calibrations for single and binary stars. We review the current status of asteroseismic modelling of rotating stars for different stellar mass regimes, in a way that is accessible to the non-expert. While doing so, we describe exciting opportunities sparked by asteroseismology for various domains in astrophysics, touching upon topics from exoplanetary science to galactic structure and evolution and on to gravitational wave physics. Along the way, we provide ample sneak previews of future 'industrialised' applications of asteroseismology to slow and rapid rotators, from the exploitation of combined Kepler, TESS, PLATO, Gaia, and spectroscopic surveys. We end the review with a list of take-away messages and achievements of asteroseismology that are of relevance for many fields of astrophysics.
For many years, various experiments have attempted to shed light on the nature of dark matter (DM). This work investigates the possibility of using CaWO4 crystals for the direct search of spin-dependent DM interactions using the isotope 17O with a nuclear spin of 5/2. Due to the low natural abundance of 0.038%, an enrichment of the CaWO4 crystals with 17O is developed during the crystal production process at the Technical University of Munich. Three CaWO4 crystals were enriched, and their 17O content was measured by nuclear magnetic resonance spectroscopy at the University of Leipzig. This paper presents the concept and first results of the 17O enrichment and discusses the possibility of using enriched crystals to increase the sensitivity for the spin-dependent DM search with CRESST.
We report the first simultaneous and independent measurements of the K$^{-}$p $\rightarrow \Sigma^0 \, \pi^{0}$ and K$^{-}$p $\rightarrow \Lambda \, \pi^{0}$ cross sections around 100 MeV/c kaon momentum. The kaon beam delivered by the DA$\Phi$NE collider was exploited to detect K$^-$ absorptions on Hydrogen atoms, populating the gas mixture of the KLOE drift chamber. The precision of the measurements ($\sigma_{K^- p \rightarrow \Sigma^0 \pi^0} = 42.8 \pm 1.5 (stat.) ^{+2.4}_{-2.0}(syst.) \ \mathrm{mb}$ and $\sigma_{K^- p \rightarrow \Lambda \pi^0} = 31.0 \pm 0.5 (stat.) ^{+1.2}_{-1.2}(syst.) \ \mathrm{mb}\,$) is the highest yet obtained in the low kaon momentum regime.
We present a novel cryogenic VUV spectrofluorometer designed for the characterization of wavelength shifters (WLS) crucial for experiments based on liquid argon (LAr) scintillation light detection. Wavelength shifters like 1,1,4,4-tetraphenyl-1,3-butadiene (TPB) or polyethylene naphthalate (PEN) are used in these experiments to shift the VUV scintillation light to the visible region. Precise knowledge of the optical properties of the WLS at liquid argon's temperature (87 K) and LAr scintillation wavelength (128 nm) is necessary to model and understand the detector response. The cryogenic VUV spectrofluorometer was commissioned to measure the emission spectra and relative wavelength shifting efficiency (WLSE) of samples between 300 K and 87 K for VUV (128 nm) and UV (310 nm) excitation. New mitigation techniques for surface effects on cold WLS were established. As part of this work, the TPB-based wavelength shifting reflector (WLSR) featured in the neutrinoless double-beta decay experiment LEGEND-200 was characterized. The wavelength shifting efficiency was observed to increase by (54 ± 5)% from room temperature (RT) to 87 K. PEN installed in LEGEND-200 was also characterized and a first measurement of the relative wavelength shifting efficiency and emission spectrum at RT and 87 K is presented. Surface effects from cooling were corrected by normalizing the measurement at VUV excitation with the respective ones at UV excitation. The WLSE of amorphous PEN was found to be enhanced by (52 ± 3)% at 87 K compared to RT.
We explore the impact of highly excited bound states on the evolution of number densities of new physics particles, specifically dark matter, in the early Universe. Focusing on dipole transitions within perturbative, unbroken gauge theories, we develop an efficient method for including around a million bound state formation and bound-to-bound transition processes. This enables us to examine partial-wave unitarity and accurately describe the freeze-out dynamics down to very low temperatures. In the non-Abelian case, we find that highly excited states can prevent the particles from freezing out, supporting a continuous depletion in the regime consistent with perturbativity and unitarity. We apply our formalism to a simplified dark matter model featuring a colored and electrically charged t-channel mediator. Our focus is on the regime of superWIMP production which is commonly characterized by a mediator freeze-out followed by its late decay into dark matter. In contrast, we find that excited states render mediator depletion efficient all the way until its decay, introducing a dependence of the dark matter density on the mediator lifetime as a novel feature. The impact of bound states on the viable dark matter mass can amount to an order of magnitude, relaxing constraints from Lyman-α observations.
We present the first measurements of Lyman-α (Lyα) forest correlations using early data from the Dark Energy Spectroscopic Instrument (DESI). We measure the auto-correlation of Lyα absorption using 88,509 quasars at z>2, and its cross-correlation with quasars using a further 147,899 tracer quasars at z≳1.77. Then, we fit these correlations using a 13-parameter model based on linear perturbation theory and find that it provides a good description of the data across a broad range of scales. We detect the BAO peak with a signal-to-noise ratio of 3.8σ, and show that our measurements of the auto- and cross-correlations are fully-consistent with previous measurements by the Extended Baryon Oscillation Spectroscopic Survey (eBOSS). Even though we only use here a small fraction of the final DESI dataset, our uncertainties are only a factor of 1.7 larger than those from the final eBOSS measurement. We validate the existing analysis methods of Lyα correlations in preparation for making a robust measurement of the BAO scale with the first year of DESI data.
Stars have always fascinated people, and already decades ago observations showed that low-mass stars originate from filamentary structures. Nevertheless, the formation process of these filaments, as well as their evolution and fragmentation into individual cores, is not yet sufficiently understood. As I will show in this thesis, which provides new insights into the dynamics, fragmentation, and collapse of filaments, determining, understanding, and comparing timescales plays a crucial role in this regard. [...]
Gravity is the driving force of star formation. Although gravity is caused by the presence of matter, its role in complex regions is still unsettled. One effective way to study the pattern of gravity is to compute the acceleration it exerts on the gas by producing gravitational acceleration maps. A practical way to estimate the acceleration is to compute it from 2D surface density maps, yet whether such maps are accurate has remained uncertain. Using numerical simulations, we confirm that acceleration maps a_2D(x, y) computed from the 2D surface density are good representations of the mass-weighted mean acceleration. Because distances are underestimated in projected maps, the magnitudes of the accelerations are overestimated, $|\mathbf {a}_{\rm 2D}(x,y)| \approx 2.3 \pm 1.8 \,\, |\mathbf {a}_{\rm 3D}^{\rm proj}(x,y)|$, where $\mathbf {a}_{\rm 3D}^{\rm proj}(x,y)$ is the mass-weighted projected gravitational acceleration, yet a_2D(x, y) and $\mathbf {a}_{\rm 3D}^{\rm proj}(x,y)$ stay aligned to within 20°. Significant deviations only occur in regions where multiple structures are present along the line of sight. The acceleration maps estimated from surface density thus provide good descriptions of the projection of the 3D acceleration field. We expect this technique to be useful in establishing the link between cloud morphology and star formation, and in understanding the link between gravity and other processes such as the magnetic field. A version of the code for calculating the surface density gravitational potential is available at github.com/zhenzhen-research/phi_2d.
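A brute-force sketch of how an acceleration map can be obtained from a surface density map by treating each pixel as a point mass in the plane (a simple direct summation for illustration only; the released code linked above presumably uses a more efficient scheme):

import numpy as np

def acceleration_from_surface_density(sigma, pixel_size, G=1.0, softening=None):
    """In-plane gravitational acceleration a_2D(x, y) sourced by a surface density map
    sigma (mass per unit area), with each pixel treated as a point mass at its centre.
    Returns (a_x, a_y) arrays with the same shape as sigma."""
    ny, nx = sigma.shape
    if softening is None:
        softening = 0.5 * pixel_size           # avoids the singular self-term
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x = x * pixel_size
    y = y * pixel_size
    mass = sigma * pixel_size**2
    ax = np.zeros_like(sigma, dtype=float)
    ay = np.zeros_like(sigma, dtype=float)
    for j in range(ny):                        # O(N^2) double loop: fine for small maps only
        for i in range(nx):
            dx = x - x[j, i]
            dy = y - y[j, i]
            inv_r3 = (dx**2 + dy**2 + softening**2) ** -1.5
            ax[j, i] = G * np.sum(mass * dx * inv_r3)
            ay[j, i] = G * np.sum(mass * dy * inv_r3)
    return ax, ay

# Example on a small random map (arbitrary units)
sigma = np.random.default_rng(2).random((32, 32))
ax, ay = acceleration_from_surface_density(sigma, pixel_size=1.0)
print(ax.shape, float(np.abs(ax).max()))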
We study the interactions between ’t Hooft-Polyakov magnetic monopoles and the domain walls formed by the same order parameter within an SU(2) gauge theory. We observe that the collision leads to the erasure of the magnetic monopoles, as suggested by Dvali et al. [Phys. Rev. Lett. 80, 2281 (1998)]. The domain wall represents a layer of vacuum with un-Higgsed SU(2) gauge symmetry. When the monopole enters the wall, it unwinds, and the magnetic charge spreads over the wall. We perform numerical simulations of the collision process and, in particular, analyze the angular distribution of the emitted electromagnetic radiation. As in the previous studies, we observe that erasure always occurs. Although not forbidden by any conservation laws, the monopole never passes through the wall. This is explained by entropy suppression. The erasure phenomenon has important implications for cosmology, as it sheds a very different light on the monopole abundance in postinflationary phase transitions and provides potentially observable imprints in the form of electromagnetic and gravitational radiation. The phenomenon also sheds light on fundamental aspects of gauge theories with coexisting phases, such as confining and Higgs phases.
We investigate the ten independent local form factors relevant to the b-baryon decay Λb→Λ ℓ+ℓ-, combining information from lattice QCD and dispersive bounds. We propose a novel parametrization of the form factors in terms of orthonormal polynomials that diagonalizes the form factor contributions to the dispersive bounds. This is a generalization of the unitarity bounds developed for meson-to-meson form factors. In contrast to ad hoc parametrizations of these form factors, our parametrization provides a degree of control of the form-factor uncertainties at large hadronic recoil. This is of phenomenological interest for theoretical predictions of, e.g., the Λb→Λ γ and Λb→Λ ℓ+ℓ- decay processes.
A forward modelling approach provides simple, fast, and realistic simulations of galaxy surveys without a complex underlying model. For this purpose, galaxy clustering needs to be simulated accurately, both to use clustering as a probe in its own right and to control systematics. We present a forward model to simulate galaxy surveys, in which we extend the Ultra-Fast Image Generator to include galaxy clustering. We use the distribution functions of the galaxy properties derived from a forward model adjusted to observations. This population model jointly describes the luminosity functions, sizes, ellipticities, SEDs, and apparent magnitudes. To simulate the positions of galaxies, we then use a two-parameter relation between galaxies and halos based on Subhalo Abundance Matching (SHAM). We simulate the halos and subhalos using the fast PINOCCHIO code, together with a method to extract the surviving subhalos from the merger history. Our simulations contain a red and a blue galaxy population, for which we build a SHAM model based on star formation quenching. For central galaxies, mass quenching is controlled with the parameter Mlimit, with blue galaxies residing in smaller halos. For satellite galaxies, environmental quenching is implemented with the parameter tquench, where blue galaxies occupy only recently merged subhalos. We build and test our model by comparing to imaging data from the Dark Energy Survey Year 1. To ensure completeness in our simulations, we consider the brightest galaxies with i<20. We find statistical agreement between our simulations and the data for two-point correlation functions on medium to large scales. Our model provides constraints on the two SHAM parameters Mlimit and tquench and offers great prospects for the quick generation of galaxy mock catalogues optimized to agree with observations.
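For readers unfamiliar with SHAM, the following minimal sketch shows the core rank-ordering step under the simplest possible assumptions (a strictly monotonic relation, no scatter); the red/blue split controlled by Mlimit and tquench described above is not modeled here.

```python
# Minimal SHAM sketch: rank (sub)halos by mass and galaxies by luminosity,
# then pair them one-to-one (no scatter, no quenching split).
import numpy as np

def abundance_match(halo_masses, galaxy_luminosities):
    """Assign galaxy luminosities to (sub)halos by rank ordering."""
    order = np.argsort(halo_masses)[::-1]            # most massive halo first
    lum_sorted = np.sort(galaxy_luminosities)[::-1]  # brightest galaxy first
    assigned = np.empty(len(halo_masses))
    assigned[order] = lum_sorted[: len(halo_masses)]
    return assigned

rng = np.random.default_rng(1)
m_halo = 10 ** rng.uniform(11, 14, size=1000)  # halo masses [Msun/h]
lum = 10 ** rng.uniform(9, 11, size=1000)      # galaxy luminosities [Lsun]
lum_of_halo = abundance_match(m_halo, lum)
```

In practice a scatter term and the two-population (quenching) parameters are added on top of this monotonic backbone.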
Much of what is known of the chemical composition of the universe is based on emission line spectra from star forming galaxies. Emission-based inferences are, nevertheless, model-dependent and they are dominated by light from luminous star forming regions. An alternative and sensitive probe of the metallicity of galaxies is through absorption lines imprinted on the luminous afterglow spectra of long gamma ray bursts (GRBs) from neutral material within their host galaxy. We present results from a JWST/NIRSpec programme to investigate for the first time the relation between the metallicity of neutral gas probed in absorption by GRB afterglows and the metallicity of the star forming regions for the same host galaxy sample. Using an initial sample of eight GRB host galaxies at z=2.1-4.7, we find a tight relation between absorption and emission line metallicities when using the recently proposed R̂ metallicity diagnostic (±0.2 dex). This agreement implies a relatively chemically-homogeneous multi-phase interstellar medium, and indicates that absorption and emission line probes can be directly compared. However, the relation is less clear when using other diagnostics, such as R23 and R3. We also find possible evidence of an elevated N/O ratio in the host galaxy of GRB090323 at z=4.7, consistent with what has been seen in other z>4 galaxies. Ultimate confirmation of an enhanced N/O ratio and of the relation between absorption and emission line metallicities will require a more direct determination of the emission line metallicity via the detection of temperature-sensitive auroral lines in our GRB host galaxy sample.
Life as we know it is built on complex and perfectly interlocking processes that have evolved over millions of years through evolutionary optimization processes. The emergence of life from nonliving matter and the evolution of such highly efficient systems therefore constitute an enormous synthetic and systems chemistry challenge. Advances in supramolecular and systems chemistry are opening new perspectives that provide insights into living and self-sustaining reaction networks as precursors for life. However, the ab initio synthesis of such a system requires the possibility of autonomous optimization of catalytic properties and, consequently, of an evolutionary system at the molecular level. In this Account, we present our discovery of the formation of substituted imidazolidine-4-thiones (photoredox) organocatalysts from simple prebiotic building blocks such as aldehydes and ketones under Strecker reaction conditions with ammonia and cyanides in the presence of hydrogen sulfide. The necessary aldehydes are formed from CO2 and hydrogen under prebiotically plausible meteoritic or volcanic iron-particle catalysis in the atmosphere of the early Earth. Remarkably, the investigated imidazolidine-4-thiones undergo spontaneous resolution by conglomerate crystallization, opening a pathway for symmetry breaking, chiral amplification, and enantioselective organocatalysis. These imidazolidine-4-thiones enable α-alkylations of aldehydes and ketones by photoredox organocatalysis. Therefore, these photoredox organocatalysts are able to modify their aldehyde building blocks, which leads in an evolutionary process to mutated second-generation and third-generation catalysts. In our experimental studies, we found that this mutation can occur not only by new formation of the imidazolidine core structure of the catalyst from modified aldehyde building blocks or by continuous supply from a pool of available building blocks but also by a dynamic exchange of the carbonyl moiety in ring position 2 of the imidazolidine moiety. Remarkably, it can be shown that by incorporating aldehyde building blocks from their environment, the imidazolidine-4-thiones are able to change and adapt to altering environmental conditions without undergoing the entire formation process. The selection of the mutated catalysts is then based on the different catalytic activities in the modification of the aldehyde building blocks and on the catalysis of subsequent processes that can lead to the formation of molecular reaction networks as progenitors for cellular processes. We were able to show that these imidazolidine-4-thiones not only enable α-alkylations but also facilitate other important transformations, such as the selective phosphorylation of nucleosides to nucleotides as a key step leading to the oligomerization to RNA and DNA. It can therefore be expected that evolutionary processes have already taken place on a small molecular level and have thus developed chemical tools that change over time, representing a hidden layer on the path to enzymatically catalyzed biochemical processes.
We present the dust properties of 125 bright Herschel galaxies selected from the z-GAL NOEMA spectroscopic redshift survey. All the galaxies have precise spectroscopic redshifts in the range 1.3 < z < 5.4. The large instantaneous bandwidth of NOEMA provides an exquisite sampling of the underlying dust continuum emission at 2 and 3 mm in the observed frame, with flux densities in at least four sidebands for each source. Together with the available Herschel 250, 350, and 500 μm and SCUBA-2 850 μm flux densities, the spectral energy distribution (SED) of each source can be analyzed from the far-infrared to the millimeter, with a fine sampling of the Rayleigh-Jeans tail. This wealth of data provides a solid basis to derive robust dust properties, in particular the dust emissivity index (β) and the dust temperature (Tdust). To demonstrate our ability to constrain the dust properties, we used a flux-generated mock catalog and analyzed the results under the assumption of optically thin and optically thick modified black body emission. The robustness of the SED sampling for the z-GAL sources is highlighted by the mock analysis, which showed high accuracy in estimating the continuum dust properties. These findings provided the basis for our detailed analysis of the z-GAL continuum data. We report a range of dust emissivities, β ∼ 1.5 − 3, estimated to high precision with relative uncertainties of 7%−15% and a sample average of 2.2 ± 0.3. We find dust temperatures varying from 20 to 50 K, with an average of Tdust ∼ 30 K in the optically thin case and Tdust ∼ 38 K in the optically thick case. For all the sources, we estimate the dust masses and apparent infrared luminosities (based on the optically thin approach). An inverse correlation is found between Tdust and β, with β ∝ Tdust^−0.69, similar to what is seen in the local Universe. Finally, we report an increasing trend in the dust temperature as a function of redshift, at a rate of 6.5 ± 0.5 K per unit redshift for this 500 μm-selected sample. Based on this study, future prospects are outlined to further explore the evolution of dust temperature across cosmic time.
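For reference, the optically thin fit described above is based on a modified black body of the form Sν ∝ ν^β Bν(Tdust). A minimal sketch, with arbitrary normalization and omitting the redshifting, CMB corrections, and optically thick variant used in the full analysis, is:

```python
# Optically thin modified black body (greybody), up to a normalization.
import numpy as np

H_PLANCK = 6.62607e-34  # J s
K_B = 1.380649e-23      # J / K
C_LIGHT = 2.99792458e8  # m / s

def planck(nu, temp):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(H_PLANCK * nu / (K_B * temp))

def modified_blackbody(nu, t_dust, beta, norm=1.0):
    """Optically thin greybody: S_nu proportional to nu^beta * B_nu(T_dust)."""
    return norm * nu**beta * planck(nu, t_dust)

# rest-frame SED sampled from 250 um to 3 mm for T_dust = 30 K, beta = 2.2
wavelengths = np.array([250e-6, 350e-6, 500e-6, 850e-6, 2e-3, 3e-3])  # metres
nu = C_LIGHT / wavelengths
sed = modified_blackbody(nu, t_dust=30.0, beta=2.2)
```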
Full Tables A.1 and B.1 are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/678/A27
We report on our study of the supernova (SN) 2022xxf based on observations obtained during the first four months of its evolution. The light curves (LCs) display two humps of similar maximum brightness separated by 75 days, unprecedented for a broad-lined (BL) Type Ic supernova (SN IcBL). SN 2022xxf is the most nearby SN IcBL to date (in NGC 3705, z = 0.0037, at a distance of about 20 Mpc). Optical and near-infrared photometry and spectroscopy were used to identify the energy source powering the LC. Nearly 50 epochs of high signal-to-noise ratio spectroscopy were obtained within 130 days, comprising an unparalleled dataset for a SN IcBL and one of the best-sampled SN datasets to date. The global spectral appearance and evolution of SN 2022xxf point to a typical SN Ic/IcBL, with broad features (up to ~14 000 km s−1) and a gradual transition from the photospheric to the nebular phase. However, narrow emission lines (corresponding to ~1000-2500 km s−1) are present in the spectra from the time of the second rise, suggesting slower-moving circumstellar material (CSM). These lines are subtle in comparison to the typical strong narrow lines of CSM-interacting SNe, for example Type IIn, Ibn, and Icn, but some are readily noticeable at late times, such as Mg I λ5170 and [O I] λ5577. Unusually, the near-infrared spectra show narrow line peaks in a number of features formed by ions of O and Mg. We infer the presence of CSM that is free of H and He. We propose that radiative energy from the ejecta-CSM interaction is a plausible explanation for the second LC hump. This interaction scenario is supported by the color evolution, which progresses to blue as the light curve evolves along the second hump, and by the slow second rise and subsequent rapid LC drop. SN 2022xxf may be related to an emerging class of CSM-interacting SNe Ic that show slow, peculiar LCs, blue colors, and subtle CSM interaction lines. The progenitor stars of these SNe likely experienced an episode of mass loss consisting of H/He-free material shortly prior to explosion.
Photometric and spectroscopic data are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/678/A209
Using the IRAM NOrthern Extended Millimetre Array (NOEMA), we conducted a Large Programme (z-GAL) to measure redshifts for 126 bright galaxies detected in the Herschel Astrophysical Large Area Survey (H-ATLAS), the HerMES Large Mode Survey (HeLMS), and the Herschel Stripe 82 (HerS) Survey. We report reliable spectroscopic redshifts for a total of 124 of the Herschel-selected galaxies. The redshifts are estimated from scans of the 3 mm and 2 mm bands (and, for one source, the 1 mm band), covering up to 31 GHz in each band, and are based on the detection of at least two emission lines. Together with the Pilot Programme, in which 11 sources had their spectroscopic redshifts measured, our survey has derived precise redshifts for 135 bright Herschel-selected galaxies, making it the largest sample of high-z galaxies with robust redshifts to date. Most of the detected emission lines are from 12CO (mainly J = 2-1 to 5-4), with some sources also seen in [CI] and H2O emission. The spectroscopic redshifts are in the range 0.8 < z < 6.55, with a median value of z = 2.56 ± 0.10, centred on the peak epoch of galaxy formation. The linewidths of the sources are large, with a mean full width at half maximum ΔV of 590 ± 25 km s−1, and 35% of the sources have widths of 700 km s−1 < ΔV < 1800 km s−1. Most of the sources are unresolved or barely resolved on scales of ∼2 to 3″ (or linear sizes of ∼15 − 25 kpc, unlensed). Some fields reveal double or multiple sources in line emission and the underlying dust continuum and, in some cases, sources at different redshifts. Taking these sources into account, there are, in total, 165 individual sources with robust spectroscopic redshifts, including lensed galaxies, binary systems, and over-densities. This paper presents an overview of the z-GAL survey and provides the observed properties of the emission lines, the derived spectroscopic redshifts, and a catalogue of the entire sample. The catalogue includes, for each source, the combined continuum and emission-line maps together with the spectra of each detected emission line. The data presented here will serve as a foundation for the other z-GAL papers in this series, reporting on the dust emission, the molecular and atomic gas properties, and a detailed analysis of the nature of the sources. Comparisons are made with other spectroscopic surveys of high-z galaxies, and future prospects, including dedicated follow-up observations based on these redshift measurements, are outlined.
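As an illustration of how two detected lines pin down a redshift, the hedged sketch below scans a redshift grid and keeps only values at which both observed frequencies coincide with rest frequencies of common lines; the line list, tolerance, and grid spacing are illustrative assumptions, not the z-GAL procedure.

```python
# Toy redshift search: match observed line frequencies against a short list
# of rest frequencies (12CO ladder and [CI]) over a grid of trial redshifts.
import numpy as np

REST_GHZ = {                       # rest frequencies in GHz
    "CO(2-1)": 230.538, "CO(3-2)": 345.796, "CO(4-3)": 461.041,
    "CO(5-4)": 576.268, "CI(1-0)": 492.161,
}

def candidate_redshifts(obs_ghz, tol=0.002, z_max=7.0):
    """Return (z, line identifications) for which all observed lines match."""
    matches = []
    for z in np.arange(0.0, z_max, 1e-4):
        ids = []
        for nu_obs in obs_ghz:
            hit = [name for name, nu0 in REST_GHZ.items()
                   if abs(nu0 / (1 + z) - nu_obs) < tol * nu_obs]
            if not hit:
                break
            ids.append(hit[0])
        else:
            matches.append((round(z, 4), tuple(ids)))
    return matches

# two lines detected at 97.0 and 161.6 GHz yield candidates clustered at z ~ 2.56
cands = candidate_redshifts([97.0, 161.6])
print(len(cands), cands[0], cands[-1])
```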
Context. Recent JWST observations of the Type Ia supernova (SN Ia) 2021aefx in the nebular phase have paved the way for late-time studies covering the full optical to mid-infrared (MIR) wavelength range, and with it the hope to better constrain SN Ia explosion mechanisms.
Aims: We investigate whether public SN Ia models covering a broad range of progenitor scenarios and explosion mechanisms (Chandrasekhar-mass, or MCh, delayed detonations, pulsationally assisted gravitationally confined detonations, sub-MCh double detonations, and violent mergers) can reproduce the full optical-MIR spectrum of SN 2021aefx at ∼270 days post explosion.
Methods: We considered spherically averaged 3D models available from the Heidelberg Supernova Model Archive with a 56Ni yield in the range 0.5-0.8 M⊙. We performed 1D steady-state non-local thermodynamic equilibrium simulations with the radiative-transfer code CMFGEN and compared the predicted spectra to SN 2021aefx.
Results: The models can explain the main features of SN 2021aefx over the full wavelength range. However, no single model, or mechanism, emerges as a preferred match, and the predicted spectra are similar to each other despite the very different explosion mechanisms. We discuss possible causes for the mismatch of the models, including ejecta asymmetries and ionisation effects. Our new calculations of the collisional strengths for Ni III have a major impact on the two prominent lines at 7.35 μm and 11.00 μm, and highlight the need for more accurate collisional data for forbidden transitions. Using updated atomic data, we identify a strong feature due to [Ca IV] 3.21 μm, attributed to [Ni I] in previous studies. We also provide a tentative identification of a forbidden line due to [Ne II] 12.81 μm, whose peaked profile indicates the presence of neon all the way to the innermost region of the ejecta, as predicted for instance in violent merger models. Contrary to previous claims, we show that the [Ar III] 8.99 μm line can be broader in sub-MCh models compared to near-MCh models. Last, the total luminosity in lines of Ni is found to correlate strongly with the stable nickel yield, although ionisation effects can bias the inferred abundance.
Conclusions: Our models suggest that key physical ingredients are missing from either the explosion models, or the radiative-transfer post-processing, or both. Nonetheless, they also show the potential of the near- and MIR to uncover new spectroscopic diagnostics of SN Ia explosion mechanisms.
Full Tables F.1 and F.2 are available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/678/A170
Liquid-liquid phase separation yields spherical droplets that eventually coarsen to one large, stable droplet governed by the principle of minimal free energy. In chemically fueled phase separation, the formation of phase-separating molecules is coupled to a fuel-driven, non-equilibrium reaction cycle. It thus yields dissipative structures sustained by a continuous fuel conversion. Such dissipative structures are ubiquitous in biology but are poorly understood as they are governed by non-equilibrium thermodynamics. Here, we bridge the gap between passive, close-to-equilibrium structures and active, dissipative structures with chemically fueled phase separation. We observe that spherical, active droplets can undergo a morphological transition into a liquid, spherical shell. We demonstrate that the mechanism is related to gradients of short-lived droplet material. We characterize how far out of equilibrium the spherical shell state is and the chemical power necessary to sustain it. Our work suggests alternative avenues for assembling complex stable morphologies, which cells might already exploit to form membraneless organelles.
We describe the formalism to analyze the mathematical ambiguities arising in partial-wave analysis of two spinless mesons produced with a linearly polarized photon beam. We show that partial waves are uniquely defined when all accessible observables are considered, for a wave set which includes S and D waves. The inclusion of higher partial waves does not affect our results, and we conclude that there are no mathematical ambiguities in partial-wave analysis of two mesons produced with a linearly polarized photon beam. We present Monte Carlo simulations to illustrate our results.
We analyze the impact of positivity conditions on static spherically symmetric deformations of the Schwarzschild space-time. The metric is taken to satisfy, at least asymptotically, the Einstein equation in the presence of a nontrivial stress-energy tensor, on which we impose various physicality conditions. We systematically study and compare the impact of these conditions on the space-time deformations. The universal nature of our findings applies to both classical and quantum metric deformations with and without event horizons. We further discuss minimal realizations of the asymptotic stress-energy tensor in terms of physical fields. Finally, we illustrate our results by discussing concrete models of quantum black holes.
An outstanding question is whether the α/Fe bimodality exists in disk galaxies other than the Milky Way. Here we present such a bimodality using our state-of-the-art galactic chemical evolution models, which can explain various observations of the Andromeda galaxy (M31) disks, namely elemental abundances both of planetary nebulae and of red giant branch stars recently observed with the James Webb Space Telescope. We find that in M31 a high-α thicker-disk population out to 30 kpc was formed by a more intense initial starburst than that in the Milky Way. We also find a young low-α thin disk within 14 kpc, formed during a secondary star formation episode that M31 underwent about 2-4.5 Gyr ago, probably triggered by a wet merger. In the outer disk, however, the planetary nebula observations indicate a slightly higher-α young (~2.5 Gyr) population at a given metallicity, possibly formed by secondary star formation from almost pristine gas. Therefore, an α/Fe bimodality is seen in the inner disk (≲14 kpc), while only a slight α/Fe offset of the young population is seen in the outer disk (≳18 kpc). The appearance of the α/Fe bimodality depends on the merging history at various galactocentric radii, and wide-field multiobject spectroscopy is required to unveil the full history of M31.
I highlight a few thoughts on the contribution of the so-called θ parameter to the dipole moments. It is well known that dipole moments can be generated by θ; in fact, the renowned strong CP problem was formulated as a result of the non-observation of such dipole moments. What is less known is that there is another parameter of the theory, θQED, which also becomes a physical and observable parameter of the system when certain conditions are met. This claim should be contrasted with the conventional (and rather naive) viewpoint that θQED is unphysical and unobservable. A specific manifestation of this phenomenon is the Witten effect, in which a magnetic monopole becomes a dyon with induced electric charge e′ = −e θQED/(2π). We argue that similar arguments suggest that the magnetic dipole moment μ of any microscopic configuration in a θQED background generates an induced electric dipole moment ⟨dind⟩ proportional to θQED, i.e., ⟨dind⟩ = −(θQED α/π) μ. We also argue that many CP-odd correlations, such as ⟨B⃗ext · E⃗⟩ = −(α θQED/π) B⃗ext², will be generated in the background of an external magnetic field B⃗ext as a result of the same physics.
How can a self-organized cellular function evolve, adapt to perturbations, and acquire new sub-functions? To make progress in answering these basic questions of evolutionary cell biology, we analyze, as a concrete example, the cell polarity machinery of Saccharomyces cerevisiae. This cellular module exhibits an intriguing resilience: it remains operational under genetic perturbations and recovers quickly and reproducibly from the deletion of one of its key components. Using a combination of modeling, conceptual theory, and experiments, we propose that multiple, redundant self-organization mechanisms coexist within the protein network underlying cell polarization and are responsible for the module's resilience and adaptability. Based on our mechanistic understanding of polarity establishment, we hypothesize that scaffold proteins, by introducing new connections in the existing network, can increase the redundancy of mechanisms and thus increase the evolvability of other network components. Moreover, our work gives a perspective on how a complex, redundant cellular module might have evolved from a more rudimentary ancestral form.
We explore the features of interpolating gauge for QCD. This gauge, defined by Doust and by Baulieu and Zwanziger, interpolates between Feynman gauge or Lorenz gauge and Coulomb gauge. We argue that it could be useful for defining the splitting functions for a parton shower beyond order αs or for defining the infrared subtraction terms for higher order perturbative calculations.
We discover analytic equations that can infer the value of Ωm from the positions and velocity moduli of halo and galaxy catalogs. The equations are derived by combining a tailored graph neural network (GNN) architecture with symbolic regression. We first train the GNN on dark matter halos from Gadget N-body simulations to perform field-level likelihood-free inference, and show that our model can infer Ωm with ~6% accuracy from halo catalogs of thousands of N-body simulations run with six different codes: Abacus, CUBEP3M, Gadget, Enzo, PKDGrav3, and Ramses. By applying symbolic regression to the different parts comprising the GNN, we derive equations that can predict Ωm from halo catalogs of simulations run with all of the above codes with accuracies similar to those of the GNN. We show that, by tuning a single free parameter, our equations can also infer the value of Ωm from galaxy catalogs of thousands of state-of-the-art hydrodynamic simulations of the CAMELS project, each with a different astrophysics model, run with five distinct codes that employ different subgrid physics: IllustrisTNG, SIMBA, Astrid, Magneticum, and SWIFT-EAGLE. Furthermore, the equations also perform well when tested on galaxy catalogs from simulations covering a vast region in parameter space that samples variations in 5 cosmological and 23 astrophysical parameters. We speculate that the equations may reflect the existence of a fundamental physics relation between the phase-space distribution of generic tracers and Ωm, one that is not affected by galaxy formation physics down to scales as small as 10 h−1 kpc.
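As a schematic of the general idea only (the aggregation function and calibration constants below are placeholders, not the equations derived in the paper), one can build a permutation-invariant summary of a halo catalog from neighbor distances and velocity moduli and map it to Ωm with a function calibrated on simulations:

```python
# Hedged toy illustration: a permutation-invariant summary statistic over a
# halo catalog (positions + velocity moduli), followed by a calibrated map
# to Omega_m. The functional forms here are placeholders.
import numpy as np

def halo_summary(pos, vel, r_link=5.0):
    """pos: (N, 3) comoving positions [Mpc/h]; vel: (N, 3) velocities [km/s]."""
    vmod = np.linalg.norm(vel, axis=1)
    per_halo = np.zeros(len(pos))
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < r_link)
        if nbr.any():
            # hypothetical aggregation: mean neighbor velocity modulus over distance
            per_halo[i] = np.mean(vmod[nbr] / d[nbr])
    return np.log10(1.0 + np.mean(per_halo))

def predict_omega_m(summary, a=0.05, b=0.1):
    """Toy calibration; a and b would be fit on a suite of simulations."""
    return a + b * summary

rng = np.random.default_rng(2)
pos = rng.uniform(0, 100, size=(2000, 3))
vel = rng.normal(0, 300, size=(2000, 3))
print(predict_omega_m(halo_summary(pos, vel)))
```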