Cosmological inference with large galaxy surveys requires theoretical models that combine precise predictions for large-scale structure with robust and flexible galaxy formation modelling throughout a sufficiently large cosmic volume. Here, we introduce the MILLENNIUMTNG (MTNG) project, which combines the hydrodynamical galaxy formation model of ILLUSTRISTNG with the large volume of the MILLENNIUM simulation. Our largest hydrodynamic simulation, covering $(500 \, h^{-1}{\rm Mpc})^3 \simeq (740\, {\rm Mpc})^3$, is complemented by a suite of dark-matter-only simulations with up to $4320^3$ dark matter particles (a mass resolution of $1.32\times 10^8 \, h^{-1}{\rm M}_\odot$) using the fixed-and-paired technique to reduce large-scale cosmic variance. The hydro simulation adds $4320^3$ gas cells, achieving a baryonic mass resolution of $2\times 10^7 \, h^{-1}{\rm M}_\odot$. High time-resolution merger trees and direct light-cone outputs facilitate the construction of a new generation of semi-analytic galaxy formation models that can be calibrated against both the hydro simulation and observations, and then applied to even larger volumes: MTNG includes a flagship simulation with 1.1 trillion dark matter particles and massive neutrinos in a volume of $(3000\, {\rm Mpc})^3$. In this introductory analysis we carry out convergence tests on basic measures of non-linear clustering, such as the matter power spectrum, the halo mass function, and halo clustering, and we compare simulation predictions to those from current cosmological emulators. We also use our simulations to study matter and halo statistics, such as halo bias and clustering at the baryonic acoustic oscillation scale. Finally, we measure the impact of baryonic physics on the matter and halo distributions.
We present the new public version of the KETJU supermassive black hole (SMBH) dynamics module, as implemented into GADGET-4. KETJU adds a small region around each SMBH where the dynamics of the SMBHs and stellar particles are integrated using an algorithmically regularized integrator instead of the leapfrog integrator with gravitational softening used by GADGET-4. This enables modelling SMBHs as point particles even during close interactions with stellar particles or other SMBHs, effectively removing the spatial resolution limitation caused by gravitational softening. KETJU also includes post-Newtonian (PN) corrections, which allow the dynamics of SMBH binaries to be followed to sub-parsec scales and down to tens of Schwarzschild radii. Systems with multiple SMBHs are also supported, and the code includes the leading non-linear cross terms that appear in the PN equations for such systems. We present tests of the code showing that it correctly captures, at sufficient mass resolution, SMBH sinking driven by dynamical friction and binary hardening driven by stellar scattering. We also present an example application demonstrating how the code can be applied to study the dynamics of SMBHs in mergers of multiple galaxies and the effect they have on the properties of the surrounding galaxy. We expect that the presented KETJU SMBH dynamics module can also be straightforwardly incorporated into other codes similar to GADGET-4, which would allow coupling small-scale SMBH dynamics to the rich variety of galactic physics models that exist in the literature.
We introduce a novel technique for constraining cosmological parameters and galaxy assembly bias using non-linear redshift-space clustering of galaxies. We scale cosmological N-body simulations and insert galaxies with the SubHalo Abundance Matching extended (SHAMe) empirical model to generate over 175 000 clustering measurements spanning all relevant cosmological and SHAMe parameter values. We then build an emulator capable of reproducing the projected galaxy correlation function as well as the monopole, quadrupole, and hexadecapole of the redshift-space correlation function for separations between $0.1\, h^{-1}\, {\rm Mpc}$ and $25\, h^{-1}\, {\rm Mpc}$. We test this approach by using the emulator and Markov chain Monte Carlo (MCMC) inference to jointly estimate cosmology and assembly bias parameters both for the MTNG740 hydrodynamic simulation and for a semi-analytical model (SAM) of galaxy formation built on the MTNG740-DM dark-matter-only simulation, obtaining unbiased results for all cosmological parameters. For instance, for MTNG740 and a galaxy number density of $n\sim 0.01\, h^{3}\, {\rm Mpc}^{-3}$, we obtain $\sigma _{8}=0.799^{+0.039}_{-0.044}$ and $\Omega _\mathrm{M}h^2= 0.138^{+ 0.025}_{- 0.018}$ (which are within 0.4σ and 0.2σ of the MTNG cosmology). For fixed Hubble parameter (h), the constraint becomes $\Omega _\mathrm{M}h^2= 0.137^{+ 0.011}_{- 0.012}$. Our method performs similarly well for the SAM and for other tested sample densities. We almost always recover the true amount of galaxy assembly bias within 1σ. The best constraints are obtained when scales smaller than $2\, h^{-1}\, {\rm Mpc}$ are included, as well as when at least the projected correlation function and the monopole are incorporated. These methods offer a powerful way to constrain cosmological parameters using galaxy surveys.
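To make the emulator-plus-MCMC workflow concrete, the following toy sketch trains a polynomial "emulator" on a grid of mock clustering measurements and samples the posterior with Metropolis-Hastings. The clustering function, parameter ranges, and noise level are all invented for illustration; this is not the SHAMe emulator or the MTNG scaling.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "simulation": a clustering amplitude as a function of (sigma8, Omega_m*h^2).
# The functional form is purely illustrative.
def toy_clustering(sigma8, omh2):
    return sigma8**2 * (omh2 / 0.14)**0.5

# Build a training grid, mimicking the precomputed clustering measurements
s8_grid = np.linspace(0.6, 1.0, 21)
om_grid = np.linspace(0.10, 0.18, 21)
S8, OM = np.meshgrid(s8_grid, om_grid)
X = np.column_stack([S8.ravel(), OM.ravel()])
y = toy_clustering(X[:, 0], X[:, 1])

# Polynomial design matrix serving as the "emulator"
def design(X):
    s, o = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(s), s, o, s*o, s**2, o**2,
                            s**2 * o, s**2 * o**2])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
emulate = lambda s8, om: design(np.array([[s8, om]])) @ coef

# Mock "data" with Gaussian noise, then a short Metropolis-Hastings chain
truth = (0.80, 0.14)
sigma_d = 0.01
data = toy_clustering(*truth) + rng.normal(0, sigma_d)

def log_post(theta):
    s8, om = theta
    if not (0.6 < s8 < 1.0 and 0.10 < om < 0.18):
        return -np.inf                      # flat prior box
    return -0.5 * ((emulate(s8, om)[0] - data) / sigma_d)**2

theta = np.array([0.7, 0.12])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.005])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])              # discard burn-in
print(chain.mean(axis=0))
```

With a single mock data point the two parameters stay degenerate along a curve, which mirrors why the abstract's constraints tighten when several statistics (projected correlation function plus multipoles) and smaller scales are combined.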
Cosmological simulations are an important theoretical pillar for understanding non-linear structure formation in our Universe and for relating it to observations on large scales. In several papers, we introduce our MillenniumTNG (MTNG) project that provides a comprehensive set of high-resolution, large-volume simulations of cosmic structure formation aiming to better understand physical processes on large scales and to help interpret upcoming large-scale galaxy surveys. We here focus on the full-physics box MTNG740 that computes a volume of $(740\, \mathrm{Mpc})^3$ with a baryonic mass resolution of $3.1\times 10^7\, \mathrm{M_\odot}$ using AREPO with 80.6 billion cells and the IllustrisTNG galaxy formation model. We verify that the galaxy properties produced by MTNG740 are consistent with those of the TNG simulations, as well as with more recent observations. We focus on galaxy clusters and analyse cluster scaling relations and radial profiles. We show that both are broadly consistent with various observational constraints. We demonstrate that the Sunyaev-Zel'dovich (SZ) signal on a deep light-cone is consistent with Planck limits. Finally, we compare MTNG740 clusters with galaxy clusters found in Planck and the SDSS DR8 redMaPPer richness catalogue in observational space, again finding very good agreement. However, simultaneously matching cluster masses, richness, and Compton-y requires us to assume that the SZ mass estimates for Planck clusters are underestimated by 0.2 dex on average. Due to its unprecedented volume for a high-resolution hydrodynamical calculation, the MTNG740 simulation offers rich possibilities to study baryons in galaxies, galaxy clusters, and in large-scale structure, and in particular their impact on upcoming large cosmological surveys.
Luminous red galaxies (LRGs) and blue star-forming emission-line galaxies (ELGs) are key tracers of large-scale structure used by cosmological surveys. Theoretical predictions for such data often rely on simplistic models of the galaxy-halo connection. In this work, we use the large, high-fidelity hydrodynamical simulation of the MillenniumTNG project (MTNG) to inform a new phenomenological approach for obtaining an accurate and flexible galaxy-halo model on small scales. Our aim is to study LRGs and ELGs at two distinct epochs, z = 1 and z = 0, and recover their clustering down to very small scales, $r \sim 0.1 \ h^{-1}\, {\rm Mpc}$, i.e. the one-halo regime, while a companion paper extends this to a two-halo model for larger distances. The occupation statistics of ELGs in MTNG show that (1) the satellite occupations exhibit a slightly super-Poisson distribution, contrary to commonly made assumptions, and (2) haloes containing at least one ELG satellite are twice as likely to host a central ELG. We propose simple recipes for modelling these effects, each of which calls for the addition of a single free parameter to simpler halo occupation models. To construct a reliable satellite population model, we explore the LRG and ELG satellite radial and velocity distributions and compare them with those of subhaloes and particles in the simulation. We find that ELGs are anisotropically distributed within haloes, which together with our occupation results provides strong evidence for cooperative galaxy formation (manifesting itself as one-halo galaxy conformity); i.e. galaxies with similar properties form in close proximity to each other. Our refined galaxy-halo model represents a useful improvement over commonly used analysis tools and can thus help increase the constraining power of large-scale structure surveys.
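The two occupation extensions described above can each be captured with a single extra parameter. The sketch below is a toy implementation, with all numerical values invented for illustration: satellite counts follow a negative binomial, whose variance exceeds the Poisson value for beta > 0, and the central probability is boosted in haloes hosting at least one satellite, mimicking one-halo conformity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy occupation model with two extensions motivated by the MTNG results:
# (1) super-Poisson satellite counts via a negative binomial with one extra
#     parameter beta (variance = mu*(1 + beta*mu); beta -> 0 is Poisson);
# (2) one-halo conformity: the central probability is multiplied by a
#     conformity factor in haloes with at least one satellite.
def sample_occupations(n_halo, mean_sat=0.5, beta=0.5, p_cen=0.2, conformity=2.0):
    mu = mean_sat
    r = 1.0 / beta
    p = r / (r + mu)                       # numpy's (n, p) parameterization
    n_sat = rng.negative_binomial(r, p, size=n_halo)
    boost = np.where(n_sat >= 1, conformity, 1.0)
    n_cen = (rng.uniform(size=n_halo) < np.clip(p_cen * boost, 0, 1)).astype(int)
    return n_cen, n_sat

n_cen, n_sat = sample_occupations(2_000_000)
print(n_sat.var() / n_sat.mean())                             # > 1: super-Poisson
print(n_cen[n_sat >= 1].mean() / n_cen[n_sat == 0].mean())    # ~ conformity factor
```

For these toy parameters the variance-to-mean ratio is 1 + beta*mu = 1.25, and haloes with an ELG satellite host a central twice as often, matching the qualitative behaviour reported in the abstract.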
We build a deep learning framework that connects the local formation process of dark matter haloes to halo bias. We train a convolutional neural network (CNN) to predict the final mass and concentration of dark matter haloes from the initial conditions. The CNN is then used as a surrogate model to derive the response of the haloes' mass and concentration to long-wavelength perturbations in the initial conditions, and consequently the halo bias parameters, following the 'response bias' definition. The CNN correctly predicts how the local properties of dark matter haloes respond to changes in the large-scale environment, despite no explicit knowledge of halo bias being provided during training. We show that the CNN recovers the known trends for the linear and second-order density bias parameters $b_1$ and $b_2$, as well as for the local primordial non-Gaussianity linear bias parameter $b_\phi$. The expected secondary assembly bias dependence on halo concentration is also recovered by the CNN: at fixed mass, halo concentration has only a mild impact on $b_1$, but a strong impact on $b_\phi$. Our framework opens a new window for discovering which physical aspects of a halo's Lagrangian patch determine assembly bias, which in turn can inform physical models of halo formation and bias.
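The 'response bias' definition used above amounts to differentiating the surrogate's predicted halo abundance with respect to a long-wavelength overdensity added to the initial conditions, $b_1 = \partial \ln n / \partial \delta_L$. The sketch below illustrates the numerical-derivative step with a trivial analytic stand-in for the trained CNN (the exponential abundance model is an assumption chosen so the answer is known, not the paper's network):

```python
import numpy as np

# Toy stand-in for the trained CNN surrogate: predicted abundance of haloes
# above some mass threshold as a function of a long-wavelength overdensity
# delta added to the initial conditions. For n ∝ exp(alpha*delta), the
# linear response bias is exactly alpha.
def surrogate_abundance(delta, alpha=1.2):
    return np.exp(alpha * delta)

# 'Response bias': b1 = d ln n / d delta, evaluated by central finite
# differences, as one would do with a differentiable (or numerically
# differentiated) CNN surrogate.
def response_b1(n_of_delta, eps=1e-3):
    return (np.log(n_of_delta(eps)) - np.log(n_of_delta(-eps))) / (2 * eps)

b1 = response_b1(surrogate_abundance)
print(b1)   # recovers alpha = 1.2 for this toy model
```

The same finite-difference response, applied per halo mass and concentration bin, is what yields the $b_1$, $b_2$, and $b_\phi$ trends discussed in the abstract.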
The dispersion of fast radio bursts (FRBs) is a measure of the large-scale electron distribution. It enables measurements of cosmological parameters, especially of the expansion rate and the cosmic baryon fraction. The number of events is expected to increase dramatically over the coming years, and of particular interest are bursts with identified host galaxy and therefore redshift information. In this paper, we explore the covariance matrix of the dispersion measure (DM) of FRBs induced by the large-scale structure, as bursts from a similar direction on the sky are correlated by long-wavelength modes of the electron distribution. We derive analytical expressions for the covariance matrix and examine the impact on parameter estimation from the FRB DM-redshift relation. The covariance also contains additional information that is missed by analysing the events individually. For future samples containing over ~300 FRBs with host identification over the full sky, the covariance needs to be taken into account for unbiased inference, and the effect increases dramatically for smaller patches of the sky. Forecasts must also account for these effects, since ignoring them yields overly optimistic parameter constraints. Our procedure can also be applied to the DM of the afterglows of gamma-ray bursts.
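To illustrate why ignoring the DM covariance yields overly optimistic constraints, here is a minimal numpy sketch. The linear DM-z relation, noise amplitudes, and constant off-diagonal correlation are all toy assumptions, not the analytic covariance derived in the paper; the point is the contrast between a generalized-least-squares fit using the full covariance and a naive diagonal analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy DM-redshift relation: DM(z) = A * z, with A carrying the cosmological
# information. A_true and all noise parameters are illustrative choices.
A_true = 1000.0          # pc cm^-3 per unit redshift
n = 100
z = rng.uniform(0.1, 1.0, n)

# Covariance: an uncorrelated host/noise term plus a correlated
# large-scale-structure term shared between sight lines (mocked here by a
# constant off-diagonal correlation rho).
sigma_host, sigma_lss, rho = 50.0, 30.0, 0.3
C = sigma_lss**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n)) \
    + sigma_host**2 * np.eye(n)

L = np.linalg.cholesky(C)
dm = A_true * z + L @ rng.standard_normal(n)   # correlated mock data

# GLS estimate (full covariance) vs naive inverse-variance (diagonal) estimate
Cinv = np.linalg.inv(C)
var_gls = 1.0 / (z @ Cinv @ z)
A_gls = var_gls * (z @ Cinv @ dm)

w = 1.0 / np.diag(C)
A_diag = (w * z) @ dm / ((w * z) @ z)
var_diag_claimed = 1.0 / ((w * z) @ z)   # error bar a diagonal analysis reports
print(A_gls, np.sqrt(var_gls), A_diag, np.sqrt(var_diag_claimed))
```

Because the off-diagonal terms are positive, the diagonal analysis both underperforms the GLS estimator and, worse, reports an error bar smaller than its actual scatter, which is precisely the "too optimistic" failure mode the abstract warns about.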
Thorne-Żytkow objects (TŻOs) are potential end products of the merger of a neutron star with a non-degenerate star. In this work, we have computed the first grid of evolutionary models of TŻOs with the MESA stellar evolution code. With these models, we predict several observational properties of TŻOs, including their surface temperatures and luminosities, pulsation periods, and nucleosynthetic products. We expand the range of possible TŻO solutions to cover $3.45 \lesssim \rm {\log \left(T_{eff}/K\right)}\lesssim 3.65$ and $4.85 \lesssim \rm {\log \left(L/L_{\odot }\right)}\lesssim 5.5$. Due to the much higher densities our TŻOs reach compared to previous models, if TŻOs form we expect them to be stable over a larger mass range than previously predicted, without exhibiting a gap in their mass distribution. Using the GYRE stellar pulsation code, we show that TŻOs should have fundamental pulsation periods of 1000-2000 d, and period ratios of ≈0.2-0.3. Models computed with a large, fully coupled nuclear network of 399 isotopes show a nucleosynthetic signal that differs from that previously predicted. We propose a new nucleosynthetic signal to determine a star's status as a TŻO: the isotopologues $\mathrm{^{44}Ti} \rm {O}_2$ and $\mathrm{^{44}Ti} \rm {O}$, which will have a shift in their spectral features as compared to stable titanium-containing molecules. We find that in the local Universe (~SMC metallicities and above) TŻOs show little heavy-metal enrichment, potentially explaining the difficulty in finding TŻOs to date.
Approximate methods to populate dark-matter haloes with galaxies are of great utility to galaxy surveys. However, the limitations of simple halo occupation distribution (HOD) models preclude a full use of small-scale galaxy clustering data and call for more sophisticated models. We study two galaxy populations, luminous red galaxies (LRGs) and star-forming emission-line galaxies (ELGs), at two epochs, z = 1 and z = 0, in the large-volume, high-resolution hydrodynamical simulation of the MillenniumTNG project. In a partner study we concentrated on the small-scale, one-halo regime down to $r \sim 0.1\, h^{-1}\,{\rm Mpc}$, while here we focus on modelling galaxy assembly bias in the two-halo regime, $r \gtrsim 1\, h^{-1}\,{\rm Mpc}$. Interestingly, the ELG signal exhibits scale dependence out to relatively large scales ($r \sim 20\, h^{-1}\,{\rm Mpc}$), implying that the linear bias approximation for this tracer is invalid on these scales, contrary to common assumptions. The 10-15 per cent discrepancy is only reconciled when we augment our halo occupation model with a dependence on extrinsic halo properties ('shear' being the best-performing one) rather than intrinsic ones (e.g. concentration, peak mass). We argue that this fact constitutes evidence for two-halo galaxy conformity. Including tertiary assembly bias (i.e. a property beyond mass and 'shear') is not an essential requirement for reconciling the galaxy assembly bias signal of LRGs, but the combination of external and internal properties is beneficial for recovering the ELG clustering. We find that centrals in low-mass haloes dominate the assembly bias signal of both populations. Finally, we explore the predictions of our model for higher-order statistics such as nearest-neighbour counts. The latter supply additional information about galaxy assembly bias and can be used to break degeneracies between halo model parameters.
Modern redshift surveys are tasked with mapping out the galaxy distribution over enormous distance scales. Existing hydrodynamical simulations, however, do not reach the volumes needed to match upcoming surveys. We present results for the clustering of galaxies using a new, large-volume hydrodynamical simulation as part of the MillenniumTNG (MTNG) project. With a computational volume that is ≈15 times larger than the next largest such simulation currently available, we show that MTNG is able to accurately reproduce the observed clustering of galaxies as a function of stellar mass. When separated by colour, there are some discrepancies with respect to the observed population, which can be attributed to the quenching of satellite galaxies in our model. We combine MTNG galaxies with those generated using a semi-analytic model to emulate the sample selection of luminous red galaxies (LRGs) and emission-line galaxies (ELGs) and show that, although the bias of these populations is approximately (but not exactly) constant on scales larger than ≈10 Mpc, there is significant scale-dependent bias on smaller scales. The amplitude of this effect varies between the two galaxy types and between the semi-analytic model and MTNG. We show that this is related to the distribution of haloes hosting LRGs and ELGs. Using mock SDSS-like catalogues generated on MTNG light-cones, we demonstrate the existence of prominent baryonic acoustic features in the large-scale galaxy clustering. We also demonstrate the presence of realistic redshift-space distortions in our mocks, finding excellent agreement with the multipoles of the redshift-space clustering measured in SDSS data.
A consistent power-counting prescription for the Standard Model Effective Field Theory requires more than the canonical dimension of operators, as it contains no information about the perturbative expansion of the underlying Quantum Field Theory at high energies. Although this has been noted in the literature for many years, a consistent quantitative approach remains to be completed. In this work, we present a solution for operators of canonical dimension six based on the notion of chiral dimensions. Our results are illustrated by explicit analytic calculations for two major examples at hadron colliders: the fusion of two gluons associated with the production of a top-quark pair, and the decay of a Higgs boson into two gluons or photons. We provide numerical studies for both processes to estimate hypothetical deviations from the Standard Model.
In the first part of the thesis, we investigate the intriguing erasure phenomenon that occurs when lower-dimensional objects encounter those of higher dimensions, with profound implications for cosmology and fundamental physics. The erasure process is explored in the context of topological defects, revealing novel insights into the interactions of cosmic strings and magnetic monopoles with domain walls. For one-dimensional objects like vortices or strings (e.g., cosmic, QCD flux, or fundamental strings), the encounter with defects like domain walls or D-branes results in erasure due to the loss of coherence in the collision process. Consequently, a new mechanism of string break-up emerges. We present numerical simulations confirming that vortices cannot cross a domain wall. We discuss entropy-based arguments describing the phenomenon, emphasizing its significance in various scenarios. In three-dimensional space, we consider the collision between magnetic monopoles and domain walls in an SU(2) gauge theory. It leads to monopole erasure, contributing to the phenomenology of post-inflationary phase transitions and providing a potential solution to the cosmological monopole problem.
Recent work has pointed out the potential existence of a tight relation between the cosmological parameter $\Omega_{\rm m}$, at fixed $\Omega_{\rm b}$, and the properties of individual galaxies in state-of-the-art cosmological hydrodynamic simulations. In this paper, we investigate whether such a relation also holds for galaxies from simulations run with a different code that makes use of a distinct subgrid physics model: Astrid. We find that in this case, too, neural networks are able to infer the value of $\Omega_{\rm m}$ with a ~10% precision from the properties of individual galaxies, while accounting for astrophysics uncertainties, as modeled in Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). This tight relationship is present at all considered redshifts, z ≤ 3, and the stellar mass, the stellar metallicity, and the maximum circular velocity are among the most important galaxy properties behind the relation. In order to use this method with real galaxies, one needs to quantify its robustness: the accuracy of the model when tested on galaxies generated by codes different from the one used for training. We quantify the robustness of the models by testing them on galaxies from four different codes: IllustrisTNG, SIMBA, Astrid, and Magneticum. We show that the models perform well on a large fraction of the galaxies, but fail dramatically on a small fraction of them. Removing these outliers significantly improves the accuracy of the models across simulation codes.
Most instruments for hyperspectral Earth observation rely on dispersive image acquisition via spatial scanning. In such systems, the Earth’s surface is scanned line by line while the satellite carrying the instrument moves over it. The spatial and spectral resolutions of the image acquisition are directly coupled via a slit aperture and are thus difficult to adjust independently. Spatio-spectral scanning systems, on the other hand, can acquire 2D, spectrally coded images with decoupled spatial and spectral resolutions. Despite this advantage, they have so far been given little attention in the literature. Simple architectures using variable filters were proposed, but come with significant caveats. As an alternative, we investigated the use of two dispersion stages for spatio-spectral scanning. We provide a theoretical treatment and show by basic experiments that a double-dispersive system provides robust and flexible image acquisition. Based on our results, we suggest a system concept for the implementation of a demonstrator on a small satellite.
We search for a B decay mode where one can find a peak for a $D\bar{D}$ bound state predicted in effective theories and in lattice QCD calculations, which has also been claimed from some reactions that show an accumulated strength in $D\bar{D}$ production at threshold. We find a good candidate in the $B^+ \to K^+ \eta\eta$ reaction, by looking at the $\eta\eta$ mass distribution. The reaction proceeds via a first step in which one has the $B^+ \to D_s^{*+} \bar{D}^0$ reaction, followed by the $D_s^{*+}$ decay to $D^0 K^+$ and a posterior fusion of $D^0 \bar{D}^0$ to $\eta\eta$, implemented through a triangle diagram that allows the $D^0 \bar{D}^0$ to be virtual and to produce the bound state. The choice of $\eta\eta$ to see the peak is based on results of calculations that find the $\eta\eta$ among the light pseudoscalar channels with stronger coupling to the $D\bar{D}$ bound state. We find a neat peak around the predicted mass of that state in the $\eta\eta$ mass distribution, with an integrated branching ratio for $B^+ \to K^+ (D\bar{D},\ \mathrm{bound})$, $(D\bar{D},\ \mathrm{bound}) \to \eta\eta$ of the order of $1.5\times 10^{-4}$, a large number for hadronic B decays, which should motivate its experimental search.
We propose a construction of generalized cuts of Feynman integrals as an operation on the domain of the Feynman parametric integral. A set of on-shell conditions removes the corresponding boundary components of the integration domain, in favor of including a boundary component from the second Symanzik polynomial. Hence integration domains are full-dimensional spaces with finite volumes, rather than being localized around poles. As initial applications, we give new formulations of maximal cuts, and we provide a simple derivation of a certain linear relation among cuts from the inclusion-exclusion principle.
We present UV and/or optical observations and models of SN 2023ixf, a type II supernova (SN) located in Messier 101 at 6.9 Mpc. Early-time (flash) spectroscopy of SN 2023ixf, obtained primarily at Lick Observatory, reveals emission lines of H I, He I/II, C IV, and N III/IV/V with a narrow core and broad, symmetric wings arising from the photoionization of dense, close-in circumstellar material (CSM) located around the progenitor star prior to shock breakout. These electron-scattering broadened line profiles persist for ~8 days with respect to first light, at which time Doppler-broadened features from the fastest SN ejecta form, suggesting a reduction in CSM density at $r \gtrsim 10^{15}$ cm. The early-time light curve of SN 2023ixf shows peak absolute magnitudes (e.g., $M_u = -18.6$ mag, $M_g = -18.4$ mag) that are ≳2 mag brighter than those of typical type II SNe, this photometric boost also being consistent with the shock power supplied by CSM interaction. Comparison of SN 2023ixf to a grid of light-curve and multiepoch spectral models from the non-LTE radiative transfer code CMFGEN and the radiation-hydrodynamics code HERACLES suggests dense, solar-metallicity CSM confined to $r = (0.5{-}1) \times 10^{15}$ cm, and a progenitor mass-loss rate of $\dot{M} = 10^{-2}\,M_\odot$ yr$^{-1}$. For the assumed progenitor wind velocity of $v_w = 50$ km s$^{-1}$, this corresponds to enhanced mass loss (i.e., a superwind phase) during the last ~3-6 yr before explosion.
Alkaline vents (AVs) are hypothesized to have been a setting for the emergence of life, by creating strong gradients across inorganic membranes within chimney structures. In the past, three-dimensional chimney structures were formed under laboratory conditions; however, no in situ visualization or testing of the gradients was possible. We develop a quasi–two-dimensional microfluidic model of AVs that allows spatiotemporal visualization of mineral precipitation in low-volume experiments. Upon injection of an alkaline fluid into an acidic, iron-rich solution, we observe a diverse set of precipitation morphologies, mainly controlled by flow rate and ion concentration. Using microscope imaging and pH-dependent dyes, we show that finger-like precipitates can facilitate formation and maintenance of microscale pH gradients and accumulation of dispersed particles in confined geometries. Our findings establish a model to investigate the potential of gradients across a semipermeable boundary for early compartmentalization, accumulation, and chemical reactions at the origins of life.
Nucleon effective masses in neutron-rich matter are studied with the relativistic Brueckner-Hartree-Fock (RBHF) theory in the full Dirac space. The neutron and proton effective masses for symmetric nuclear matter are 0.80 times the rest mass, which agrees well with the empirical values. In neutron-rich matter, the effective mass of the neutron is found to be larger than that of the proton, and the neutron-proton effective mass splitting at the empirical saturation density is predicted to be 0.187α, with α being the isospin asymmetry parameter. The result is compared to other ab initio calculations and is consistent with the constraints from nuclear reaction and structure measurements, such as nucleon-nucleus scattering, the giant resonances of $^{208}$Pb, and the Hugenholtz-Van Hove theorem with systematics of the nuclear symmetry energy and its slope. The predictions of the neutron-proton effective mass splitting from the RBHF theory in the full Dirac space might be helpful to constrain the isovector parameters in phenomenological density functionals.
We investigate the possibility that blazars in the Roma-BZCAT Multifrequency Catalogue of Blazars (5BZCAT) are sources of the high-energy astrophysical neutrinos detected by the IceCube Neutrino Observatory, as recently suggested by Buson et al. (2022a,b). Although we can reproduce their ∼4.6σ result, which applies to 7 years of neutrino data in the Southern sky, we find no significant correlation with 5BZCAT sources when extending the search to the Northern sky, where IceCube is most sensitive to astrophysical signals. To further test this scenario, we use a larger sample consisting of 10 years of neutrino data recently released by the IceCube collaboration, this time finding no significant correlation in either the Southern or the Northern sky. These results suggest that the strong correlation reported by Buson et al. (2022a,b) using 5BZCAT could be due to a statistical fluctuation and possibly the spatial and flux non-uniformities in the blazar sample. We perform some additional correlation tests using the more uniform, flux-limited, and blazar-dominated Radio Fundamental Catalogue (RFC) and find a ∼3.2σ equivalent p-value when correlating it with the 7-year Southern neutrino sky. However, this correlation disappears completely when extending the analysis to the Northern sky and when analyzing 10 years of all-sky neutrino data. Our findings support a scenario where the contribution of the whole blazar class to the IceCube signal is relevant but not dominant, in agreement with most previous studies.
The early release science results from JWST have yielded an unexpected abundance of high-redshift luminous galaxies that seems to be in tension with current theories of galaxy formation. However, it is currently difficult to draw definitive conclusions from these results as the sources have not yet been spectroscopically confirmed. It is in any case important to establish baseline predictions from current state-of-the-art galaxy formation models that can be compared and contrasted with these new measurements. In this work, we use the new large-volume ($L_\mathrm{box}\sim 740 \, \mathrm{cMpc}$) hydrodynamic simulation of the MillenniumTNG project, suitably scaled to match results from higher-resolution, smaller-volume simulations, to make predictions for the high-redshift (z ≳ 8) galaxy population and compare them to recent JWST observations. We show that the simulated galaxy population is broadly consistent with observations until z ~ 10. From z ≈ 10-12, the observations indicate a preference for a galaxy population that is largely dust-free, but is still consistent with the simulations. Beyond z ≳ 12, however, our simulation results underpredict the abundance of luminous galaxies and their star-formation rates by almost an order of magnitude. This indicates either an incomplete understanding of the new JWST data or a need for more sophisticated galaxy formation models that account for additional physical processes such as Population III stars, variable stellar initial mass functions, or even deviations from the standard ΛCDM model. We emphasize that any new process invoked to explain this tension should only significantly influence the galaxy population beyond z ≳ 10, while leaving the successful galaxy formation predictions of the fiducial model intact below this redshift.
Cosmological simulations predict that during the evolution of galaxies, the specific star formation rate continuously decreases. In a previous study we showed that generally this is not caused by the galaxies running out of cold gas but rather by a decrease in the fraction of gas capable of forming stars. To investigate the origin of this behavior, we use disk galaxies selected from the cosmological hydrodynamical simulation Magneticum Pathfinder and follow their evolution in time. We find that the mean density of the cold gas regions decreases with time. This is because, as the galaxies evolve, the star-forming regions move to larger galactic radii, where the gas density is lower. This supports the idea of inside-out growth of disk galaxies.
Milky Way Cepheid variables with accurate Hubble Space Telescope photometry have been established as standards for primary calibration of the cosmic distance ladder to achieve a percent-level determination of the Hubble constant ($H_0$). These 75 Cepheid standards are the fundamental sample for investigation of possible residual systematics in the local $H_0$ determination due to metallicity effects on their period-luminosity relations. We obtained new high-resolution (R ~ 81,000), high-signal-to-noise (S/N ~ 50-150) multiepoch spectra of 42 out of 75 Cepheid standards using the ESPaDOnS instrument at the 3.6 m Canada-France-Hawaii Telescope. Our spectroscopic metallicity measurements are in good agreement with the literature values, with systematic differences up to 0.1 dex due to different metallicity scales. We homogenized and updated the spectroscopic metallicities of all 75 Milky Way Cepheid standards and derived their multiwavelength ($GVIJHK_s$) period-luminosity-metallicity and period-Wesenheit-metallicity relations using the latest Gaia parallaxes. The metallicity coefficients of these empirically calibrated relations exhibit large uncertainties due to low statistics and a narrow metallicity range (Δ[Fe/H] = 0.6 dex). These metallicity coefficients are up to 3 times better constrained if we include Cepheids in the Large Magellanic Cloud, and range between -0.21 ± 0.07 and -0.43 ± 0.06 mag dex$^{-1}$. The updated spectroscopic metallicities of these Milky Way Cepheid standards were used in the Cepheid-supernovae distance ladder formalism to determine $H_0$ = 72.9 ± 1.0 km s$^{-1}$ Mpc$^{-1}$, suggesting little variation (~0.1 km s$^{-1}$ Mpc$^{-1}$) in the local $H_0$ measurements due to different Cepheid metallicity scales.
HD 235088 (TOI-1430) is a young star known to host a sub-Neptune-sized planet candidate. We validated the planetary nature of HD 235088 b with multiband photometry, refined its planetary parameters, and obtained a new age estimate of the host star, placing it at 600-800 Myr. Previous spectroscopic observations of a single transit detected an excess absorption of He I coincident in time with the planet candidate transit. Here, we confirm the presence of He I in the atmosphere of HD 235088 b with one transit observed with CARMENES. We also detected hints of variability in the strength of the helium signal, with an absorption of −0.91 ± 0.11%, which is slightly deeper (2σ) than the previous measurement. Furthermore, we simulated the He I signal with a spherically symmetric 1D hydrodynamic model, finding that the upper atmosphere of HD 235088 b escapes hydrodynamically with a significant mass loss rate of (1.5−5) × 10¹⁰ g s⁻¹ in a relatively cold outflow, with T = 3125 ± 375 K, in the photon-limited escape regime. HD 235088 b (Rp = 2.045 ± 0.075 R⊕) is the smallest planet found to date with a solid atmospheric detection - not just of He I but of any atom or molecule. This positions it as a benchmark planet for further analyses of evolving young sub-Neptune atmospheres.
Theoretical models indicate that photoevaporative and magnetothermal winds play a crucial role in the evolution and dispersal of protoplanetary disks and affect the formation of planetary systems. However, it is still unclear what wind-driving mechanism is dominant or if both are at work, perhaps at different stages of disk evolution. Recent spatially resolved observations by Fang et al. of the [O I] 6300 Å spectral line, a common disk wind tracer in TW Hya, revealed that about 80% of the emission is confined to the inner few astronomical units of the disk. In this work, we show that state-of-the-art X-ray-driven photoevaporation models can reproduce the compact emission and the line profile of the [O I] 6300 Å line. Furthermore, we show that the models also simultaneously reproduce the observed line luminosities and detailed spectral profiles of both the [O I] 6300 Å and the [Ne II] 12.8 μm lines. While MHD wind models can also reproduce the compact radial emission of the [O I] 6300 Å line, they fail to match the observed spectral profile of the [O I] 6300 Å line and underestimate the luminosity of the [Ne II] 12.8 μm line by a factor of 3. We conclude that, while we cannot exclude the presence of an MHD wind component, the bulk of the wind structure of TW Hya is predominantly shaped by a photoevaporative flow.
We present new astrometric and polarimetric observations of flares from Sgr A* obtained with GRAVITY, the near-infrared interferometer at ESO's Very Large Telescope Interferometer (VLTI), bringing the total sample of well-covered astrometric flares to four and polarimetric flares to six. Of all flares, two are well covered in both domains. All astrometric flares show clockwise motion in the plane of the sky with a period of around an hour, and the polarization vector rotates by one full loop in the same time. Given the apparent similarities of the flares, we present a common fit, taking into account the absence of strong Doppler boosting peaks in the light curves and the EHT-measured geometry. Our results are consistent with and significantly strengthen our model from 2018. First, we find that the combination of polarization period and measured flare radius of around nine gravitational radii (9 Rg ≈ 1.5 RISCO, where RISCO is the radius of the innermost stable circular orbit) is consistent with Keplerian orbital motion of hot spots in the innermost accretion zone. The mass inside the flares' radius is consistent with the 4.297 × 10⁶ M⊙ measured from stellar orbits at several thousand Rg. This finding and the diameter of the millimeter shadow of Sgr A* thus support a single black hole model. Second, the magnetic field configuration is predominantly poloidal (vertical), and the flares' orbital plane has a moderate inclination with respect to the plane of the sky, as shown by the non-detection of Doppler boosting and the fact that we observe one polarization loop per astrometric loop. Finally, both the position angle on the sky and the required magnetic field strength suggest that the accretion flow is fueled and controlled by the winds of the massive young stars of the clockwise stellar disk 1-5″ from Sgr A*, in agreement with recent simulations.
GRAVITY is developed in a collaboration by MPE, LESIA of Paris Observatory/CNRS/Sorbonne Université/Univ. Paris Diderot and IPAG of Université Grenoble Alpes/CNRS, MPIA, Univ. of Cologne, CENTRA - Centro de Astrofisica e Gravitação, and ESO.
Natural ecosystems, in particular on the microbial scale, are inhabited by a large number of species. The population size of each species is affected by interactions of individuals with each other and by spatial and temporal changes in environmental conditions, such as resource abundance. Here, we use a generic population dynamics model to study how, and under what conditions, a periodic temporal environmental variation can alter an ecosystem's composition and biodiversity. We demonstrate that using timescale separation allows one to qualitatively predict the long-term population dynamics of interacting species in varying environments. We show that the notion of Tilman's R* rule, a well-known principle that applies for constant environments, can be extended to periodically varying environments if the timescale of environmental changes (e.g., seasonal variations) is much faster than the timescale of population growth (doubling time in bacteria). When these timescales are similar, our analysis shows that a varying environment deters the system from reaching a steady state, and stable coexistence between multiple species becomes possible. Our results posit that biodiversity can in part be attributed to natural environmental variations.
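The timescale-separation argument above can be illustrated with a minimal consumer-resource sketch (a hypothetical toy, not the authors' model): two species compete for one periodically supplied resource, and when the environment oscillates much faster than the populations grow, the species with the lower R* = dK/(μ − d) excludes the other, as the time-averaged R* rule predicts.

```python
import math

# Illustrative (hypothetical) parameters: maximum growth rates, half-saturation
# constants, and mortality rates for two species on one abiotic resource.
MU = (1.0, 1.2)
K = (0.2, 0.5)
D = (0.1, 0.1)

def r_star(i):
    # Tilman's R*: resource level at which growth exactly balances mortality
    return D[i] * K[i] / (MU[i] - D[i])

def simulate(T=1000.0, dt=0.005, period=1.0):
    # Forward-Euler integration of dN_i/dt = N_i (mu_i R/(K_i+R) - d_i) and
    # dR/dt = S(t) - R - sum_i N_i mu_i R/(K_i+R), with periodic supply S(t).
    N = [0.1, 0.1]
    R = 1.0
    for s in range(int(T / dt)):
        t = s * dt
        supply = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))
        growth = [MU[i] * R / (K[i] + R) for i in range(2)]
        dR = supply - R - sum(N[i] * growth[i] for i in range(2))
        N = [max(N[i] * (1.0 + dt * (growth[i] - D[i])), 0.0) for i in range(2)]
        R = max(R + dt * dR, 0.0)
    return N, R

# Environment varies much faster (period 1) than the populations grow
# (1/d = 10): the species with the lower R* drives the other extinct.
N_final, R_final = simulate()
```

In this fast-driving limit the averaged R* rule holds; slowing the environmental period toward the population timescale is the regime where, per the analysis above, steady states are not reached and coexistence can emerge.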
Bayesian_pyhf is a Python package that allows for the parallel Bayesian and frequentist evaluation of multi-channel binned statistical models. The Python library pyhf is used to build such models according to the HistFactory framework and already includes many frequentist inference methodologies. The pyhf-built models are then used as the data-generating model for Bayesian inference and evaluated with the Python library PyMC. Based on Markov chain Monte Carlo (MCMC) methods, PyMC enables Bayesian modelling and, together with the arviz library, offers a wide range of Bayesian analysis tools.
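As a schematic of this kind of dual workflow (a stdlib-only sketch, not Bayesian_pyhf's or pyhf's actual API): a two-bin counting model with a signal strength μ and a constrained background nuisance γ, evaluated both via a crude grid-scan maximum likelihood (frequentist) and via a Metropolis sampler of the posterior (Bayesian). All yields and uncertainties below are invented for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical two-bin counting model: expected = mu*signal + gamma*background,
# with an auxiliary Gaussian constraint on the background scale gamma.
SIGNAL = [5.0, 10.0]
BACKGROUND = [50.0, 60.0]
OBSERVED = [57, 75]

def log_like(mu, gamma):
    ll = 0.0
    for s, b, n in zip(SIGNAL, BACKGROUND, OBSERVED):
        lam = mu * s + gamma * b
        if lam <= 0.0:
            return -math.inf
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    ll += -0.5 * ((gamma - 1.0) / 0.1) ** 2  # background constraint term
    return ll

# Frequentist: crude grid-scan maximum likelihood estimate of mu
best = max(((log_like(m / 100.0, g / 100.0), m / 100.0, g / 100.0)
            for m in range(0, 301) for g in range(70, 131)),
           key=lambda t: t[0])
mu_hat = best[1]

# Bayesian: Metropolis sampling of the posterior with flat priors (mu >= 0)
chain, cur = [], (1.0, 1.0)
cur_ll = log_like(*cur)
for _ in range(20000):
    prop = (cur[0] + random.gauss(0.0, 0.2), cur[1] + random.gauss(0.0, 0.05))
    if prop[0] >= 0.0:
        prop_ll = log_like(*prop)
        if math.log(random.random()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
    chain.append(cur)
post_mu = sum(c[0] for c in chain[5000:]) / len(chain[5000:])
```

Both routes interrogate the same likelihood, which is the essential point of running frequentist and Bayesian inference in parallel on one model definition; the real package delegates the model to pyhf and the sampling to PyMC.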
It has been recently proposed that at each infinite distance limit in the moduli space of quantum gravity a perturbative description emerges with fundamental degrees of freedom given by those infinite towers of states whose typical mass scale is parametrically not larger than the ultraviolet cutoff, identified with the species scale. This proposal is applied to the familiar ten-dimensional type IIA and IIB superstring theories, when considering the limit of infinite string coupling. For type IIB, the light towers of states are given by excitations of the D1-brane, as expected from self-duality. Instead, for type IIA at strong coupling, which is dual to M-theory on $S^1$, we make the observation that the emergent degrees of freedom are bound states of transversal M2- and M5-branes with Kaluza-Klein momentum along the circle. We speculate on the interpretation of the necessity of including all these states for a putative quantum formulation of M-theory.
The complexity of modern cosmic ray observatories and the rich data sets they capture often require a sophisticated software framework to support the simulation of physical processes, detector response, as well as reconstruction and analysis of real and simulated data. Here we present the EUSO-OffLine framework. The code base was originally developed by the Pierre Auger Collaboration, and portions of it have been adopted by other collaborations to suit their needs. We have extended this software to fulfill the requirements of the UHECR detectors and VHE neutrino detectors developed for JEM-EUSO. These pathfinder instruments constitute a program to chart the path to a future space-based mission like POEMMA. For completeness, we describe the overall structure of the framework developed by the Pierre Auger Collaboration and continue with a description of the JEM-EUSO simulation and reconstruction capabilities. The framework is written predominantly in modern C++ and incorporates third-party libraries chosen based on functionality and our best judgment regarding support and longevity. Modularity is a central notion in the framework design, a requirement for large collaborations in which many individuals contribute to a common code base and often want to compare different approaches to a given problem. For the same reason, the framework is designed to be highly configurable, which allows us to contend with a variety of JEM-EUSO missions and observation scenarios. We also discuss how we incorporate broad, industry-standard testing coverage, which is necessary to ensure quality and maintainability of a relatively large code base, and the tools we employ to support a multitude of computing platforms and enable fast, reliable installation of external packages. Finally, we provide a few examples of simulation and reconstruction applications using EUSO-OffLine.
One of the key limitations of large-scale structure surveys of the current and future generation, such as Euclid, LSST-Rubin or Roman, is the influence of feedback processes on the distribution of matter in the Universe. This effect, called baryonic feedback, modifies the matter power spectrum on non-linear scales much more strongly than any cosmological parameter of interest. Constraining these modifications is therefore key to unlocking the full potential of the upcoming surveys, and we propose to do so with the help of Fast Radio Bursts (FRBs). FRBs are short, astrophysical radio transients of extragalactic origin. Their burst signal is dispersed by the free electrons in the large-scale structure, leading to delayed arrival times at different frequencies characterised by the dispersion measure (DM). Since the dispersion measure is sensitive to the integrated line-of-sight electron density, it is a direct probe of the baryonic content of the Universe. We investigate how FRBs can break the degeneracies between cosmological and feedback parameters by correlating the observed dispersion measure with the weak gravitational lensing signal of a Euclid-like survey. In particular, we use a simple one-parameter model controlling baryonic feedback, but we expect similar findings for more complex models. Within this model we find that $\sim 10^4$ FRBs are sufficient to constrain the baryonic feedback 10 times better than cosmic shear alone. Breaking this degeneracy will tighten the constraints considerably; for example, we expect a factor of two improvement on the sum of neutrino masses.
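The frequency-dependent arrival delay underlying the dispersion measure scales as ν⁻². A small sketch with the standard (approximate) dispersion constant and purely illustrative DM and frequency values, not numbers taken from the abstract:

```python
# Approximate dispersion constant e^2/(2*pi*m_e*c) in s MHz^2 pc^-1 cm^3
K_DM = 4.1488e3

def dispersion_delay(dm, nu_lo_mhz, nu_hi_mhz):
    """Extra arrival delay (seconds) of the lower frequency relative to the
    higher one, for a dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (nu_lo_mhz ** -2 - nu_hi_mhz ** -2)

# Illustrative example: DM = 500 pc cm^-3 observed between 1200 and 1600 MHz
dt = dispersion_delay(500.0, 1200.0, 1600.0)  # roughly 0.6 s
```

It is this integral sensitivity to the line-of-sight electron column that makes the DM a direct probe of the cosmic baryon distribution.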
A large fraction of red-supergiant stars seem to be enshrouded by circumstellar material (CSM) at the time of explosion. Relative to explosions in a vacuum, this CSM causes both a luminosity boost at early times and the presence of symmetric emission lines with a narrow core and electron-scattering wings typical of type IIn supernovae (SNe). For this study, we performed radiation-hydrodynamics and radiative transfer calculations for a variety of CSM configurations (i.e., compact, extended, and detached) and documented the resulting ejecta and radiation properties. We find that models with a dense, compact, and massive CSM on the order of 0.5 M⊙ can match the early luminosity boost of type II-P SNe but fail to produce type IIn-like spectral signatures (also known as "flash features"). These only arise if the photon mean free path in the CSM is large enough (i.e., if the density is low enough) to allow for a radiative precursor through a long-lived (i.e., a day to a week), radially extended, unshocked, optically thick CSM. The greater radiative losses and kinetic-energy extraction in this case boost the luminosity even for modest CSM masses - this boost comes with a delay for a detached CSM. Inadequately assuming a high CSM density, in which case the shock travels essentially adiabatically, leads to overestimates of the CSM mass and the associated mass-loss rate. Our simulations also indicate that type IIn-like spectral signatures last as long as there is optically thick unshocked CSM. Constraining the CSM structure therefore requires a combination of light curves and spectra, rather than photometry alone. We emphasize that for a given total energy, the radiation excess fostered by the presence of CSM comes at the expense of kinetic energy, as evidenced by the disappearance of the fastest ejecta material and the accumulation of mass in a dense shell. Both effects can be constrained from spectra well after the interaction phase.
Type IIn supernovae occur when stellar explosions are surrounded by dense hydrogen-rich circumstellar matter. The dense circumstellar matter is likely formed by extreme mass loss from their progenitors shortly before they explode. The nature of Type IIn supernova progenitors and the mass-loss mechanism forming the dense circumstellar matter are still unknown. In this work, we investigate whether Type IIn supernova properties and their local environments are correlated. We use Type IIn supernovae with well-observed light curves and host-galaxy integral field spectroscopic data so that we can estimate both supernova and environmental properties. We find that Type IIn supernovae with a higher peak luminosity tend to occur in environments with lower metallicity and/or younger stellar populations. The circumstellar matter density around Type IIn supernovae is not significantly correlated with metallicity, so the mass-loss mechanism forming the dense circumstellar matter around Type IIn supernovae might be insensitive to metallicity.
We show that, in addition to the counting of canonical dimensions, a counting of loop orders is necessary to fully specify the power counting of the Standard Model Effective Field Theory (SMEFT). Using concrete examples, we demonstrate that considering the canonical dimensions of operators alone may lead to inconsistent results. Counting both canonical dimensions and loop orders establishes a clear hierarchy of the terms in SMEFT. In practice, this serves to identify, and focus on, the potentially dominating effects in any given high-energy process in a meaningful way. Additionally, this will lead to a consistent limitation of the free parameters in SMEFT applications.
Context. Several observations of the Local Universe point toward the existence of very prominent structures: massive galaxy clusters and local superclusters on the one hand, but also large local voids and underdensities on the other. However, it is highly nontrivial to connect such different observationally selected tracers to the underlying dark matter (DM) distribution.
Aims: Therefore, constructing mock catalogs of such observable tracers using cosmological hydrodynamics simulations is needed. These simulations have to follow galaxy formation physics and also have to be constrained to reproduce the Local Universe. Such constraints should be based on observables that directly probe the full underlying gravitational field, such as the observed peculiar velocity field, to provide an independent test of the robustness of these distinctive structures.
Methods: We used a 500 h⁻¹ Mpc constrained simulation of the Local Universe to investigate the anomalies in the local density field, as found in observations. Constructing the initial conditions based on peculiar velocities derived from the CosmicFlows-2 catalog makes the predictions of the simulations completely independent from the distribution of the observed tracer population, and following galaxy formation physics directly in the hydrodynamics simulations also allows the comparison to be based directly on the stellar masses of galaxies or the X-ray luminosity of clusters. We also used the 2668 h⁻¹ Mpc large cosmological box from the Magneticum simulations to evaluate the frequency of finding such anomalies in random patches within simulations.
Results: We demonstrate that halos and galaxies in our constrained simulation trace the local dark matter density field very differently. In particular, the simulation reproduces the observed 50% underdensity of galaxy clusters and groups within a sphere of ≈100 Mpc when applying the same mass or X-ray luminosity limit used in the observed cluster sample (CLASSIX); this is consistent with a ≈1.5σ feature. At the same time, the simulation reproduces the observed overdensity of massive galaxy clusters within the same sphere, which on its own also corresponds to a ≈1.5σ feature. Interestingly, we find that only 44 out of 15 635 random realizations (i.e., 0.28%) match both anomalies, making the Local Universe a ≈3σ environment. Finally, we compared a mock galaxy catalog with the observed distribution of galaxies in the Local Universe, finding a match to the observed factor of 2 overdensity at ∼16 Mpc as well as the observed 15% underdensity at ∼40 Mpc.
Conclusions: Constrained simulations of the Local Universe which reproduce the main features of the local density field open a new window for local field cosmology, where the imprint of the specific density field and the impact on the bias through the observational specific tracers can be investigated in detail.
In this paper, we investigate two-loop non-planar triangle Feynman integrals involving elliptic curves. In contrast to the Sunrise and Banana integral families, the triangle families involve non-trivial sub-sectors. We show that the methodology developed in the context of Banana integrals can also be extended to these cases and obtain ε-factorized differential equations for all sectors. The letters are combinations of modular forms on the corresponding elliptic curves and algebraic functions arising from the sub-sectors. With uniform transcendental boundary conditions, we express our results in terms of iterated integrals order-by-order in the dimensional regulator, which can be evaluated efficiently. Our method can be straightforwardly generalized to other elliptic integral families and has important applications to precision physics at current and future high-energy colliders.
Achieving autonomous motion is a central objective in designing artificial cells that mimic biological cells in form and function. Cellular motion often involves complex multiprotein machineries, which are challenging to reconstitute in vitro. Here we achieve persistent motion of cell-sized liposomes. These small artificial vesicles are driven by a direct mechanochemical feedback loop between the MinD and MinE protein systems of Escherichia coli and the liposome membrane. Membrane-binding Min proteins self-organize asymmetrically around the liposomes, which results in shape deformation and generates a mechanical force gradient leading to motion. The protein distribution responds to the deformed liposome shape through the inherent geometry sensitivity of the reaction-diffusion dynamics of the Min proteins. We show that such a mechanochemical feedback loop between liposome and Min proteins is sufficient to drive continuous motion. Our combined experimental and theoretical study provides a starting point for the future design of motility features in artificial cells.
We present the first simulations of core-collapse supernovae in axial symmetry with feedback from fast neutrino flavor conversion (FFC). Our schematic treatment of FFCs assumes instantaneous flavor equilibration under the constraint of lepton-number conservation individually for each flavor. Systematically varying the spatial domain where FFCs are assumed to occur, we find that they facilitate SN explosions in low-mass (9-12 M⊙) progenitors that otherwise explode with longer time delays, whereas FFCs weaken the tendency to explode of higher-mass (around 20 M⊙) progenitors.
Free-floating planets (FFPs) can result from dynamical scattering processes happening in the first few million years of a planetary system's life. Several models predict the possibility, for these isolated planetary-mass objects, to retain exomoons after their ejection. The tidal heating mechanism and the presence of an atmosphere with a relatively high optical thickness may support the formation and maintenance of oceans of liquid water on the surface of these satellites. In order to study the timescales over which liquid water can be maintained, we perform dynamical simulations of the ejection process and infer the resulting statistics of the population of surviving exomoons around FFPs. The subsequent tidal evolution of the moons' orbital parameters is a pivotal step to determine when the orbits will circularize, with a consequential decay of the tidal heating. We find that close-in ($a \lesssim 25$ RJ) Earth-mass moons with carbon dioxide-dominated atmospheres could retain liquid water on their surfaces for long timescales, depending on the mass of the atmospheric envelope and the surface pressure assumed. Massive atmospheres are needed to trap the heat produced by tidal friction that makes these moons habitable. For Earth-like pressure conditions (p₀ = 1 bar), satellites could sustain liquid water on their surfaces for up to 52 Myr. For higher surface pressures (10 and 100 bar), moons could be habitable for up to 276 Myr and 1.6 Gyr, respectively. Close-in satellites experience habitable conditions for long timescales, and during the ejection of the FFP they remain bound to the escaping planet, being less affected by the close encounter.
Experimental searches for pure glueball states have proven challenging and have so far yielded no results. This is believed to occur because glueballs mix with ordinary $q\bar{q}$ states with the same quantum numbers. We discuss an alternative mechanism: the formation of glueball-meson molecular states. We argue that the wave functions of already observed excited meson states may contain a significant component due to such molecular states. We discuss the phenomenology of glueball-meson molecules and comment on a possible charmless component of the XYZ states.
We study the effect of super-sample covariance (SSC) on the power spectrum and on higher-order statistics: the bispectrum, halo mass function, and void size function. We also investigate the effect of SSC on the cross covariance between the statistics. We consider both the matter and halo fields. Higher-order statistics of the large-scale structure contain additional cosmological information beyond the power spectrum and are a powerful tool to constrain cosmology. They are a promising probe for ongoing and upcoming high-precision cosmological surveys such as DESI, PFS, Rubin Observatory LSST, Euclid, SPHEREx, SKA, and the Roman Space Telescope. Cosmological simulations used in modeling and validating these statistics often have sizes that are much smaller than the observed Universe. Density fluctuations on scales larger than the simulation box, known as super-sample modes, are not captured by the simulations and in turn can lead to inaccuracies in the covariance matrix. We compare the covariance measured using simulation boxes containing super-sample modes to those without. We also compare with the separate universe approach. We find that while the power spectrum, bispectrum and halo mass function show significant scale- or mass-dependent SSC, the void size function shows relatively small SSC. We also find significant SSC contributions to the cross covariances between the different statistics, implying that future joint analyses will need to carefully take into consideration the effect of SSC. To enable further study of SSC, our simulations have been made publicly available.
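A toy numerical illustration of the mechanism (not the paper's measurement): if every band power in a realization responds coherently to a background mode δb, realizations that include such modes develop strong off-diagonal covariance that "boxed" realizations without them miss. The response coefficient and scatter amplitudes below are arbitrary.

```python
import random
import statistics

random.seed(1)

def mock_power(n_bins=4, ssc=False, n_real=4000):
    # Draw mock band powers; with ssc=True, every bin of a realization shares
    # a coherent response (factor 2.0, arbitrary) to a super-sample mode db.
    samples = []
    for _ in range(n_real):
        db = random.gauss(0.0, 0.05) if ssc else 0.0
        samples.append([(1.0 + 2.0 * db) * (1.0 + random.gauss(0.0, 0.02))
                        for _ in range(n_bins)])
    return samples

def cov(samples, i, j):
    # Sample covariance between band powers i and j
    xi = [s[i] for s in samples]
    xj = [s[j] for s in samples]
    mi, mj = statistics.fmean(xi), statistics.fmean(xj)
    return statistics.fmean([(a - mi) * (b - mj) for a, b in zip(xi, xj)])

cov_ssc = cov(mock_power(ssc=True), 0, 1)   # super-sample modes included
cov_box = cov(mock_power(ssc=False), 0, 1)  # modes missing, as in a small box
```

The same coherent-response logic is what couples different statistics measured in the same volume, which is why the cross covariances discussed above also pick up SSC contributions.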
Fast radio bursts (FRBs) are short astrophysical transients of extragalactic origin. Their burst signal is dispersed by the free electrons in the large-scale structure (LSS), leading to delayed arrival times at different frequencies. Another potential source of time delay is the well-known Shapiro delay, which measures the space-space and time-time metric perturbations along the line-of-sight. If photons of different frequencies follow different trajectories, i.e. if the universality of free fall guaranteed by the weak equivalence principle (WEP) is violated, they would experience an additional relative delay. This quantity, however, is not observable at the background level, as it is not gauge independent, which has led to confusion in previous papers. Instead, an imprint can be seen in the correlation between the time delays of different pulses. In this paper, we derive robust and consistent constraints from twelve localized FRBs on the violation of the WEP in the energy range between 4.6 and 6 meV. In contrast to a number of previous studies, we consider our signal to be not in the model, but in the covariance matrix of the likelihood. To do so, we calculate the covariance of the time delays induced by the free electrons in the LSS, the WEP-breaking terms, the Milky Way, and the host galaxy. By marginalizing over both the host galaxy contribution and the contribution from the free electrons, we find that the parametrized post-Newtonian parameter γ characterizing the WEP violation must be constant in this energy range to 1 part in 10¹³ at 68 per cent confidence. These are the tightest constraints to date on Δγ in this low-energy range.
We present the first nonlinear lattice simulation of an axion field coupled to a U(1) gauge field during inflation. We use it to fully characterize the statistics of the primordial curvature perturbation ζ. We find high-order statistics to be essential in describing the non-Gaussianity of ζ in the linear regime of the theory. On the contrary, non-Gaussianity is suppressed when the dynamics become nonlinear. This relaxes the bounds from overproduction of primordial black holes, allowing for an observable gravitational wave signal at pulsar timing array and interferometer scales. Our work establishes lattice simulations as a crucial tool to study the inflationary epoch and its predictions.
The intrinsic alignment (IA) of observed galaxy shapes with the underlying cosmic web is a source of contamination in weak lensing surveys. Sensitive methods to identify the IA signal will therefore need to be included in the upcoming weak lensing analysis pipelines. Hydrodynamical cosmological simulations allow us to directly measure the intrinsic ellipticities of galaxies, and thus provide a powerful approach to predict and understand the IA signal. Here we employ the novel, large-volume hydrodynamical simulation MTNG740, a product of the MillenniumTNG (MTNG) project, to study the IA of galaxies. We measure the projected correlation functions between the intrinsic shape/shear of galaxies and various tracers of large-scale structure, $w_{+g}$, $w_{+m}$, and $w_{++}$, over the radial range $r_{\rm p} \in [0.02, 200]\, h^{-1}{\rm Mpc}$ and at redshifts z = 0.0, 0.5, and 1.0. We detect IA signals with high signal-to-noise in correlations with the density field for both elliptical and spiral galaxies. We also find significant intrinsic shear-shear correlations for ellipticals. We further examine correlations of the intrinsic shape of galaxies with the local tidal field. Here we find a significant IA signal for elliptical galaxies assuming a linear model. We also detect a weak IA signal for spiral galaxies under a quadratic tidal torquing model. Lastly, we measure the alignment between central galaxies and their host dark-matter haloes, finding small to moderate misalignments between their principal axes that decline with halo mass.
We present five far- and near-ultraviolet spectra of the Type II plateau supernova SN 2022acko, obtained 5, 6, 7, 19, and 21 days after explosion, all observed with the Hubble Space Telescope/Space Telescope Imaging Spectrograph. The first three epochs are earlier than any Type II plateau supernova has been observed in the far-ultraviolet, revealing unprecedented characteristics. These three spectra are dominated by strong lines, primarily from metals, which contrasts with the featureless early optical spectra. The flux decreases over the initial time series as the ejecta cool and line blanketing takes effect. We model this unique data set with the non-local thermodynamic equilibrium radiation transport code CMFGEN, finding a good match to the explosion of a low-mass red supergiant with energy Ekin = 6 × 10⁵⁰ erg. With these models we identify, for the first time, the ions that dominate the early ultraviolet spectra. We present optical photometry and spectroscopy, showing that SN 2022acko has a peak absolute magnitude of V = −15.4 mag and a plateau length of ~115 days. The spectra closely resemble those of SN 2005cs and SN 2012A. Using the combined optical and ultraviolet spectra, we report the fraction of flux as a function of bluest wavelength on days 5, 7, and 19. We create a spectral time-series of Type II supernovae in the ultraviolet, demonstrating the rapid decline of flux over the first few weeks of evolution. Future observations of Type II supernovae are required to map out the landscape of exploding red supergiants, with and without circumstellar material, which is best revealed in high-quality ultraviolet spectra.
We compute the next-to-leading order (NLO) hard correction to the gluon self-energy tensor with arbitrary soft momenta in a hot and/or dense weakly coupled plasma in Quantum Chromodynamics. Our diagrammatic computations of the two-loop and power corrections are performed within the hard-thermal-loop (HTL) framework and in general covariant gauge, using the real-time formalism. We find that after renormalization our individual results are finite and gauge-dependent, and they reproduce previously computed results in Quantum Electrodynamics in the appropriate limit. Combining our results, we also recover a formerly known gauge-independent matching coefficient and associated screening mass in a specific kinematic limit. Our NLO results supersede leading-order HTL results from the 1980s and pave the way to an improved understanding of the bulk properties of deconfined matter, such as the equation of state.
LiteBIRD is a planned JAXA-led cosmic microwave background (CMB) B-mode satellite experiment aiming for launch in the late 2020s, with a primary goal of detecting the imprint of primordial inflationary gravitational waves. Its current baseline focal-plane configuration includes 15 frequency bands between 40 and 402 GHz, fulfilling the mission requirements to detect the amplitude of gravitational waves with the total uncertainty on the tensor-to-scalar ratio, δr, down to δr < 0.001. A key aspect of this performance is accurate astrophysical component separation, and the ability to remove polarized thermal dust emission is particularly important. In this paper we note that the CMB frequency spectrum falls off nearly exponentially above 300 GHz relative to the thermal dust spectral energy distribution, and a relatively minor high-frequency extension can therefore result in even lower uncertainties and better model reconstructions. Specifically, we compared the baseline design with five extended configurations, while varying the underlying dust modeling, in each of which the High-Frequency Telescope (HFT) frequency range was shifted logarithmically toward higher frequencies, with an upper cutoff ranging between 400 and 600 GHz. In each case, we measured the tensor-to-scalar ratio r uncertainty and bias using both parametric and minimum-variance component-separation algorithms. When the thermal dust sky model includes a spatially varying spectral index and temperature, we find that the statistical uncertainty on r after foreground cleaning may be reduced by as much as 30-50% by extending the upper limit of the frequency range from 400 to 600 GHz, with most of the improvement already gained at 500 GHz. We also note that a broader frequency range leads to higher residuals when fitting an incorrect dust model, but it also becomes easier to discriminate between models through the higher χ² sensitivity.
Even in the case in which the fitting procedure does not correspond to the underlying dust model in the sky, and when the highest frequency data cannot be modeled with sufficient fidelity and must be excluded from the analysis, the uncertainty on r increases by only about 5% for a 500 GHz configuration compared to the baseline.
In recent years, automatic classifiers of image cutouts (also called "stamps") have been shown to be key for fast supernova discovery. The Vera C. Rubin Observatory will distribute about ten million alerts with their respective stamps each night, enabling the discovery of approximately one million supernovae each year. A growing source of confusion for these classifiers is the presence of satellite glints, sequences of point-like sources produced by rotating satellites or debris. The currently planned Rubin stamps will have a size smaller than the typical separation between these point sources. Thus, a larger field-of-view stamp could enable the automatic identification of these sources. However, the distribution of larger stamps would be limited by network bandwidth restrictions. We evaluate the impact of using image stamps of different angular sizes and resolutions for the fast classification of events (active galactic nuclei, asteroids, bogus, satellites, supernovae, and variable stars), using data from the Zwicky Transient Facility. We compare four scenarios: three with the same number of pixels (small field of view with high resolution, large field of view with low resolution, and a multiscale proposal) and a scenario with the full stamp that has a larger field of view and higher resolution. Compared to small field-of-view stamps, our multiscale strategy reduces misclassifications of satellites as asteroids or supernovae, performing on par with high-resolution stamps that are 15 times heavier. We encourage Rubin and its Science Collaborations to consider the benefits of implementing multiscale stamps as a possible update to the alert specification.
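The multiscale strategy compared above can be sketched as combining a native-resolution centre crop with a block-averaged wide-field view at the same pixel budget. This is a schematic illustration only; the sizes and the toy image are assumptions, not the ZTF or Rubin alert stamp formats.

```python
# Schematic multiscale-stamp construction: keep the sharp centre of a cutout
# and add a low-resolution rendering of the full field with the same number
# of pixels. All sizes are illustrative.

def center_crop(img, size):
    """Return the central size x size region of a square image (list of rows)."""
    off = (len(img) - size) // 2
    return [row[off:off + size] for row in img[off:off + size]]

def block_mean_downsample(img, factor):
    """Downsample a square image by averaging factor x factor pixel blocks."""
    m = len(img) // factor
    return [[sum(img[i * factor + di][j * factor + dj]
                 for di in range(factor) for dj in range(factor)) / factor ** 2
             for j in range(m)]
            for i in range(m)]

def multiscale_stamp(img, inner=16):
    """Two channels with equal pixel budget: a sharp centre crop and a
    low-resolution view of the full field."""
    factor = len(img) // inner
    return center_crop(img, inner), block_mean_downsample(img, factor)

# toy 64x64 "cutout" with pixel value i + j
img = [[float(i + j) for j in range(64)] for i in range(64)]
sharp, wide = multiscale_stamp(img)  # two 16x16 channels
```

Both channels can then be stacked as input to a classifier, so the wide view captures the separation between satellite-glint point sources while the crop preserves detail on the central source.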
Mergers of galaxy clusters are promising probes of dark matter (DM) physics. For example, an offset between the DM component and the galaxy distribution can constrain DM self-interactions. We investigate the role of the intracluster medium (ICM) and its influence on DM-galaxy offsets in self-interacting dark matter models. To this end, we employ Smoothed Particle Hydrodynamics + N-body simulations to study idealized setups of equal- and unequal-mass mergers with head-on collisions. Our simulations show that the ICM hardly affects the offsets arising shortly after the first pericentre passage compared to DM-only simulations. Later on, however, e.g. at the first apocentre, the offsets can be amplified by the presence of the ICM. Furthermore, we find that cross-sections small enough not to be excluded by measurements of the core sizes of relaxed galaxy clusters have a chance to produce observable offsets. We also find that different DM models affect not only the DM distribution but also the galaxy and ICM distributions, including the ICM temperature. Potentially, the position of the shock fronts, combined with the brightest cluster galaxies, provides further clues to the properties of DM. Overall, our results demonstrate that mergers of galaxy clusters at stages around the first apocentre passage could be more interesting in terms of DM physics than those shortly after the first pericentre passage. This may motivate further studies of mergers at later evolutionary stages.
In recent years, differential equations have become the method of choice to compute multi-loop Feynman integrals. Whenever they can be cast into canonical form, their solution in terms of special functions is straightforward. Recently, progress has been made in understanding the precise canonical form for Feynman integrals involving elliptic polylogarithms. In this article, we make use of an algorithmic approach that proves powerful for finding canonical forms in these cases. To illustrate the method, we reproduce several known canonical forms from the literature and present examples where a canonical form is deduced for the first time. Together with this article, we also release an update for INITIAL, a publicly available Mathematica implementation of the algorithm.
The radioactive isotopes 44Ti and 56Ni are important products of explosive nucleosynthesis, which play a key role in supernova (SN) diagnostics and have been detected in several nearby young SN remnants. However, most SN models based on non-rotating single stars predict yields of 44Ti that are much lower than the values inferred from observations. We present, for the first time, the nucleosynthesis yields from a self-consistent three-dimensional (3D) SN simulation of an approximately 19 Msun progenitor star that reaches an explosion energy comparable to that of SN 1987A and that covers the evolution of the neutrino-driven explosion until more than 7 seconds after core bounce. We find a significant enhancement of the Ti/Fe yield compared to recent spherically symmetric (1D) models and demonstrate that the long-time evolution is crucial to understand the efficient production of 44Ti due to the non-monotonic temperature and density histories of ejected mass elements. Additionally, we identify characteristic signatures of the nucleosynthesis in proton-rich ejecta, in particular high yields of 45Sc and 64Zn.
Acceleration processes that occur in astrophysical plasmas produce the cosmic rays observed on Earth. To study particle acceleration, fully kinetic particle-in-cell (PIC) simulations are often used, as they can unveil the microphysics of energization processes. Tracing individual particles in PIC simulations is particularly useful in this regard. However, by-eye inspection of particle trajectories introduces a high level of bias and uncertainty in pinpointing the specific acceleration mechanisms that affect particles. Here we present a new approach that uses neural networks to aid the analysis of individual particle data. We demonstrate this approach on test data consisting of 252,000 electrons traced in a PIC simulation of a non-relativistic high Mach number perpendicular shock, in which we observe the two-stream electrostatic Buneman instability pre-accelerate a portion of the electrons to nonthermal energies. We perform classification, regression and anomaly detection using a Convolutional Neural Network. We show that regardless of how noisy and imbalanced the datasets are, the regression and classification models predict the final energies of particles with high accuracy, whereas anomaly detection is able to discern between energetic and non-energetic particles. The proposed methodology may considerably simplify particle classification in large-scale PIC and hybrid kinetic simulations.
We propose to apply several gradient estimation techniques to enable the differentiation of programs with discrete randomness in High Energy Physics. Such programs are common in High Energy Physics due to the presence of branching processes and clustering-based analysis. Thus differentiating such programs can open the way for gradient based optimization in the context of detector design optimization, simulator tuning, or data analysis and reconstruction optimization. We discuss several possible gradient estimation strategies, including the recent Stochastic AD method, and compare them in simplified detector design experiments. In doing so we develop, to the best of our knowledge, the first fully differentiable branching program.
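Among standard gradient estimation strategies for programs with discrete randomness, the score-function (REINFORCE) estimator is the simplest. The toy sketch below is illustrative only, not the paper's detector-design setup: it estimates d/dp E[f(X)] for a Bernoulli variable X by weighting f(X) with the score d/dp log P(X; p).

```python
# Score-function (REINFORCE) gradient estimator for a discrete random program.
import random

def score_function_grad(f, p, n_samples=200_000, seed=0):
    """Monte Carlo estimate of d/dp E[f(X)] for X ~ Bernoulli(p)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = 1 if rng.random() < p else 0
        # d/dp log P(x; p) for a Bernoulli distribution
        score = x / p - (1 - x) / (1 - p)
        total += f(x) * score
    return total / n_samples

# For f(x) = 3x, E[f] = 3p, so the exact gradient is 3.
g = score_function_grad(lambda x: 3.0 * x, p=0.3)
```

More elaborate schemes, such as the Stochastic AD method mentioned above, aim to reduce the notoriously high variance of this baseline estimator.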
Supernovae are an important source of energy in the interstellar medium. Young supernova remnants have their peak emission in the X-ray region, making them interesting objects for X-ray observations. In particular, the supernova remnant SN1006 is of great interest due to its historical record, proximity and brightness. It has therefore been studied by several X-ray telescopes. Improving the X-ray imaging of this and other remnants is important but challenging, as it requires addressing a spatially varying instrument response in order to achieve a high signal-to-noise ratio. Here, we use Chandra observations to demonstrate the capabilities of Bayesian image reconstruction using information field theory. Our objective is to reconstruct denoised, deconvolved and spatio-spectrally resolved images from X-ray observations and to decompose the emission into different morphologies, namely diffuse and point-like. Further, we aim to fuse data from different detectors and pointings into a mosaic and to quantify the uncertainty of our result. Utilizing prior knowledge on the spatial and spectral correlation structure of the two components, diffuse emission and point sources, the presented method allows the effective decomposition of the signal into these components. In order to accelerate the imaging process, we introduce a multi-step approach, in which the spatial reconstruction obtained for a single energy range is used to derive an informed starting point for the full spatio-spectral reconstruction. The method is applied to 11 Chandra observations of SN1006 from 2008 and 2012, providing a detailed, denoised and decomposed view of the remnant. In particular, the separated view of the diffuse emission should provide new insights into its complex small-scale structures in the center of the remnant and at the shock front profiles.
We present the first cosmological constraints derived from the analysis of the void size function. This work relies on the final Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) data set, a large spectroscopic galaxy catalog, ideal for the identification of cosmic voids. We extract a sample of voids from the distribution of galaxies, and we apply a cleaning procedure aimed at reaching high levels of purity and completeness. We model the void size function by means of an extension of the popular volume-conserving model, based on two additional nuisance parameters. Relying on mock catalogs specifically designed to reproduce the BOSS DR12 galaxy sample, we calibrate the extended size function model parameters and validate the methodology. We then apply a Bayesian analysis to constrain the Lambda cold dark matter (ΛCDM) model and one of its simplest extensions, featuring a constant dark energy equation of state parameter, w. Following a conservative approach, we put constraints on the total matter density parameter and the amplitude of density fluctuations, finding Ωm = 0.29 ± 0.06 and $\sigma_8 = 0.79^{+0.09}_{-0.08}$. Testing the alternative scenario, we derive w = -1.1 ± 0.2, in agreement with the ΛCDM model. These results are independent and complementary to those derived from standard cosmological probes, opening up new ways to identify the origin of potential tensions in the current cosmological paradigm.
The inner kiloparsec regions surrounding sub-Eddington (luminosity less than $10^{-3}$ in Eddington units, $L_{\rm Edd}$) supermassive black holes (BHs) often show a characteristic network of dust filaments that terminate in a nuclear spiral in the central parsecs. Here we study the role and fate of these filaments in one of the least accreting BHs known, M31 ($10^{-7}\,L_{\rm Edd}$), using hydrodynamical simulations. The evolution of a streamer of gas particles moving under the barred potential of M31 is followed from kiloparsec distance to the central parsecs. After an exploratory study of initial conditions, a compelling fit to the observed dust/ionized gas morphologies and line-of-sight velocities in the inner hundreds of parsecs is produced. After several million years of streamer evolution, during which friction, thermal dissipation, and self-collisions have taken place, the gas settles into a disk tens of parsecs wide. This is fed by numerous filaments that arise from an outer circumnuclear ring and spiral toward the center. The final configuration is tightly constrained by a critical input mass in the streamer of several $10^3\,M_\odot$ (at an injection rate of $10^{-4}\,M_\odot\,{\rm yr}^{-1}$); values above or below this lead to filament fragmentation or dispersion, respectively, which are not observed. The creation of a hot gas atmosphere in the region of $\sim 10^6$ K is key to the development of a nuclear spiral during the simulation. The final inflow rate at 1 pc from the center is $\sim 1.7\times 10^{-7}\,M_\odot\,{\rm yr}^{-1}$, consistent with the quiescent state of the M31 BH.
The circumgalactic medium (CGM) plays a crucial role in galaxy evolution as it fuels star formation, retains metals ejected from the galaxies, and hosts gas flows in and out of galaxies. For Milky Way-type and more-massive galaxies, the bulk of the CGM is in hot phases best accessible at X-ray wavelengths. However, our understanding of the CGM remains largely unconstrained due to its tenuous nature. A promising way to probe the CGM is via X-ray absorption studies. Traditional absorption studies utilize bright background quasars, but this method probes the CGM in a pencil beam, and, due to the rarity of bright quasars, the galaxy population available for study is limited. Large-area, high spectral resolution X-ray microcalorimeters offer a new approach to exploring the CGM in emission and absorption. Here, we demonstrate that the cumulative X-ray emission from cosmic X-ray background sources can probe the CGM in absorption. We construct column density maps of major X-ray ions from the Magneticum simulation and build realistic mock images of nine galaxies to explore the detectability of X-ray absorption lines arising from the large-scale CGM. We conclude that the O VII absorption line is detectable around individual massive galaxies at the 3σ-6σ confidence level. For Milky Way-type galaxies, the O VII and O VIII absorption lines are detectable at the ~ 6σ and ~ 3σ levels even beyond the virial radius when coadding data from multiple galaxies. This approach complements emission studies, does not require additional exposures, and will allow for probing the baryon budget and the CGM at the largest scales.
While the role of local interactions in nonequilibrium phase transitions is well studied, a fundamental understanding of the effects of long-range interactions is lacking. We study the critical dynamics of reproducing agents subject to autochemotactic interactions and limited resources. A renormalization group analysis reveals distinct scaling regimes for fast (attractive or repulsive) interactions; for slow signal transduction, the dynamics is dominated by a diffusive fixed point. Furthermore, we present a correction to the Keller-Segel nonlinearity emerging close to the extinction threshold and a novel nonlinear mechanism that stabilizes the continuous transition against the emergence of a characteristic length scale due to a chemotactic collapse.
We introduce the star formation and supernova (SN) feedback model of the SATIN (Simulating AGNs Through ISM with Non-Equilibrium Effects) project to simulate the evolution of the star forming multiphase interstellar medium (ISM) of entire disc galaxies. This galaxy-wide implementation of a successful ISM feedback model tested in small box simulations naturally covers an order of magnitude in gas surface density, shear and radial motions. It is implemented in the adaptive mesh refinement code RAMSES at a peak resolution of 9 pc. New stars are represented by star cluster (sink) particles with individual SN delay times for massive stars. With SN feedback, cooling, and gravity, the galactic ISM develops a three-phase structure. The star formation rates naturally follow observed scaling relations for the local Milky Way gas surface density. SNe drive additional turbulence in the warm ($300 < T < 10^4$ K) gas and increase the kinetic energy of the cold gas, cooling out of the warm phase. The majority of the gas leaving the galactic ISM is warm and hot with mass loading factors of 3 ≤ η ≤ 10 up to h = 5 kpc away from the galaxy. While the hot gas is leaving the system, the warm and cold gas falls back onto the disc in a galactic fountain flow. The inclusion of other stellar feedback processes from massive stars seems to be needed to reduce the rate at which stars form at higher surface densities and to increase/decrease the amount of warm/cold gas.
Astrophysical shocks create cosmic rays by accelerating charged particles to relativistic speeds. However, the relative contribution of various types of shocks to the cosmic ray spectrum is still the subject of ongoing debate. Numerical studies have shown that in the non-relativistic regime, oblique shocks are capable of accelerating cosmic rays, depending on the Alfvénic Mach number of the shock. We now seek to extend this study into the mildly relativistic regime. In this case, the dependence of the ion reflection rate on the shock obliquity differs from that in the non-relativistic regime. Faster relativistic shocks are effectively perpendicular for the majority of shock obliquity angles; their ability to initialize efficient diffusive shock acceleration (DSA) is therefore limited. We define the ion injection rate using fully kinetic PIC simulations in which we follow the formation of the shock and determine the fraction of ions that becomes involved in the formation of the shock precursor in the mildly relativistic regime, covering a Lorentz factor range from 1 to 3. With this result, we then use a combined PIC-MHD method to model the large-scale evolution of the shock with an ion injection recipe that depends on the local shock obliquity. This methodology accounts for the influence of self-generated or pre-existing upstream turbulence on the shock obliquity, which allows us to study substantially larger and longer simulations than classical hybrid techniques.
We return to interpreting the historical SN 1987A neutrino data from a modern perspective. To this end, we construct a suite of spherically symmetric supernova models with the Prometheus-Vertex code, using four different equations of state and five choices of final baryonic neutron-star (NS) mass in the 1.36-1.93 M$_\odot$ range. Our models include muons and proto-neutron star (PNS) convection by a mixing-length approximation. The time-integrated signals of our 1.44 M$_\odot$ models agree reasonably well with the combined data of the four relevant experiments, IMB, Kam-II, BUST, and LSD, but the high-threshold IMB detector alone favors a NS mass of 1.7-1.8 M$_\odot$, whereas Kam-II alone prefers a mass around 1.4 M$_\odot$. The cumulative energy distributions in these two detectors are well matched by models for such NS masses, and the previous tension between predicted mean neutrino energies and the combined measurements is gone, with and without flavor swap. Generally, our predicted signals do not strongly depend on assumptions about flavor mixing, because the PNS flux spectra depend only weakly on antineutrino flavor. While our models show compatibility with the events detected during the first seconds, PNS convection and nucleon correlations in the neutrino opacities lead to short PNS cooling times of 5-9 s, in conflict with the late event bunches in Kam-II and BUST after 8-9 s, which are also difficult to explain by background. Speculative interpretations include the onset of fallback of transiently ejected material onto the NS, a late phase transition in the nuclear medium, e.g., from hadronic to quark matter, or other effects that add to the standard PNS cooling emission and either stretch the signal or provide a late source of energy. More research, including systematic 3D simulations, is needed to assess these open issues.
We derive a minimal basis of kernels furnishing the perturbative expansion of the density contrast and velocity divergence in powers of the initial density field that is applicable to cosmological models with arbitrary expansion history, thereby relaxing the commonly adopted Einstein-de-Sitter (EdS) approximation. For this class of cosmological models, the non-linear kernels are at every order given by a sum of terms, each of which factorizes into a time-dependent growth factor and a wavenumber-dependent basis function. We show how to reduce the set of basis functions to a minimal amount, and give explicit expressions up to order $n=5$. We find that for this minimal basis choice, each basis function individually displays the expected scaling behaviour due to momentum conservation, being non-trivial at $n\geq 4$. This is a highly desirable property for numerical evaluation of loop corrections. In addition, it allows us to match the density field to an effective field theory (EFT) description for cosmologies with an arbitrary expansion history, which we explicitly derive at order four. We evaluate the differences to the EdS approximation for $\Lambda$CDM and $w_0w_a$CDM, paying special attention to the irreducible cosmology dependence that cannot be absorbed into EFT terms for the one-loop bispectrum. Finally, we provide algebraic recursion relations for a special generalization of the EdS approximation that retains its simplicity and is relevant for mixed hot and cold dark matter models.
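For orientation, the well-known second-order density kernel of the EdS approximation, which the minimal basis of growth factors and wavenumber-dependent basis functions generalizes to arbitrary expansion histories, reads

```latex
F_2(\mathbf{k}_1,\mathbf{k}_2)
  = \frac{5}{7}
  + \frac{\mathbf{k}_1\cdot\mathbf{k}_2}{2\,k_1 k_2}
    \left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)
  + \frac{2}{7}\left(\frac{\mathbf{k}_1\cdot\mathbf{k}_2}{k_1 k_2}\right)^2
```

In the EdS case a single overall growth factor multiplies the kernel at each order; relaxing that assumption attaches an independent growth factor to each wavenumber-dependent basis function, as described above.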
Non-thermal emission from relativistic electrons gives insight into the strength and morphology of intra-cluster magnetic fields, as well as providing powerful tracers of structure formation shocks. Emission caused by Cosmic Ray (CR) protons, on the other hand, still challenges current observations and thereby tests models of proton acceleration at intra-cluster shocks. Large-scale simulations including the effects of CRs have been difficult to achieve; they have mainly been reduced to simulating an overall energy budget or to tracing CR populations in post-processing of simulation output, and have often been done for either protons or electrons alone. We use an efficient on-the-fly Fokker-Planck solver to evolve distributions of CR protons and electrons within every resolution element of our simulation. The solver accounts for CR acceleration at intra-cluster shocks, based on results of recent PIC simulations, re-acceleration due to shocks and MHD turbulence, adiabatic changes and radiative losses of electrons. We apply this model to zoom simulations of galaxy clusters, recently used to show the evolution of the small-scale turbulent dynamo on cluster scales. For these simulations we use a spectral resolution of 48 bins over 6 orders of magnitude in momentum for electrons and 12 bins over 6 orders of magnitude in momentum for protons. We present preliminary results on a possible formation mechanism for Wrong Way Radio Relics in our simulation.
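The spectral discretization described above, logarithmic momentum bins spanning six decades, can be set up as follows; the momentum boundaries (in arbitrary units) are illustrative assumptions, only the bin counts and dynamic range come from the text.

```python
# Logarithmically spaced momentum-bin edges for a piecewise spectral solver.
import math

def log_bins(p_min, p_max, n_bins):
    """Return n_bins + 1 logarithmically spaced bin edges in [p_min, p_max]."""
    step = (math.log10(p_max) - math.log10(p_min)) / n_bins
    return [10 ** (math.log10(p_min) + i * step) for i in range(n_bins + 1)]

# 6 orders of magnitude in momentum, as in the simulations above
electron_edges = log_bins(1e-1, 1e5, 48)  # 48 bins for electrons
proton_edges = log_bins(1e-1, 1e5, 12)    # 12 bins for protons
```

Each electron bin then spans a constant factor of 10^(6/48) ≈ 1.33 in momentum, which is what lets a Fokker-Planck solver resolve both radiative cooling cutoffs and re-acceleration features.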
Light (anti-)nuclei are a powerful tool both in collider physics and astrophysics. In searches for new and exotic physics, the expected small astrophysical backgrounds at low energies make these antinuclei ideal probes for, e.g., dark matter. At the same time, their composite structure and small binding energies imply that they can be used in collider experiments to probe the hadronization process and two-particle correlations. For the proper interpretation of such experimental studies, an improved theoretical understanding of (anti-)nuclei production in specific kinematic regions and detector setups is needed. In this work, we develop a coalescence framework for (anti-)deuteron production which accounts for both the emission volume and momentum correlations on an event-by-event basis: while momentum correlations can be provided by event generators, such as PYTHIA, the emission volume has to be derived from semiclassical considerations. Moreover, this framework goes beyond the equal-time approximation, which until now has often been assumed in femtoscopy experiments and (anti-)nucleus production models for small interacting systems. Using PYTHIA 8 as an event generator, we find that the equal-time approximation leads to an error of $\mathcal{O}(10\%)$ in low-energy processes like Υ decays, while the errors are negligible at CERN Large Hadron Collider energies. The framework introduced in this work paves the way for tuning event generators to (anti-)nuclei measurements.
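The basic event-by-event coalescence idea, pairing nucleons whose momenta are close, can be sketched as below. This is a deliberately simplified illustration: a sharp momentum cut with greedy pairing, no emission-volume weighting, and implicitly the equal-time approximation that the framework above goes beyond; the cut value is a hypothetical number.

```python
# Toy momentum-space coalescence: form a "deuteron" from a proton-neutron
# pair whose relative momentum lies below a fixed cut.
import math

P_COAL = 0.2  # coalescence momentum cut (GeV), illustrative value only

def rel_momentum(p1, p2):
    """Magnitude of the 3-momentum difference between two particles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def count_deuterons(protons, neutrons, p_cut=P_COAL):
    """Greedily pair each proton with at most one unused neutron
    whose momentum difference is below the cut."""
    used = set()
    n_d = 0
    for pp in protons:
        for i, pn in enumerate(neutrons):
            if i not in used and rel_momentum(pp, pn) < p_cut:
                used.add(i)
                n_d += 1
                break
    return n_d

# one close pair and one far pair -> exactly one deuteron candidate
protons = [(0.10, 0.0, 0.0), (1.0, 0.0, 0.0)]
neutrons = [(0.15, 0.0, 0.0), (2.0, 0.0, 0.0)]
```

In a realistic implementation the sharp cut is replaced by a Wigner-function weight and the pair's spacetime emission points enter the yield, which is where the event generator's momentum correlations and the semiclassical emission volume combine.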
Within the framework proposed by Caron-Huot and Wilhelm, we give a recipe for computing infrared anomalous dimensions purely on-shell, efficiently up to two loops in any massless theory. After introducing the general formalism and reviewing the one-loop recipe, we extract a practical formula that relates two-loop infrared anomalous dimensions to certain two- and three-particle phase space integrals with tree-level form factors of conserved operators. We finally provide several examples of the use of the two-loop formula and comment on some of its formal aspects, especially the cancellation of 'one-loop squared' spurious terms. The present version of the paper is augmented with a detailed treatment of the structure of infrared divergences in massless theories of scalars and fermions up to two loops. In the calculation we encounter divergent phase space integrals and show in detail how these cancel among each other as required by the finiteness of the anomalous dimension. As a non-trivial check of the method, we also perform the computation with a standard diagrammatic approach, finding perfect agreement.
Modern spectroscopic surveys are mapping the Universe in an unprecedented way. In view of this, cosmic voids constitute promising cosmological laboratories. There are two primary statistics in void studies: (i) the void size function, which quantifies their abundance, and (ii) the void-galaxy cross-correlation function, which characterises the density and velocity fields in their surroundings. Nevertheless, in order to design reliable cosmological tests based on these statistics, a complete description of the effects of geometrical (Alcock-Paczynski effect) and dynamical (Kaiser effect) distortions is necessary. Observational measurements show prominent anisotropic patterns that lead to biased cosmological constraints if they are not properly modelled. I will present a theoretical framework to address this problem based on a cosmological and dynamical analysis of the mapping of voids between real and redshift space. In addition, I will present a new fiducial-free cosmological test based on two perpendicular projections of the correlation function, which allows us to effectively break degeneracies in the model parameter space and to significantly reduce the number of mock catalogues needed to estimate covariances.
Experiments on free neutron beta decay can probe the weak interaction structure for tensor and scalar contributions. Such contributions can be measured as a shift in the electron energy distribution. This thesis focuses on determining systematic uncertainties and corrections in the measurement with Perkeo III in 2019/20. I present the data analysis of this measurement, including the corrections, to estimate systematic uncertainties, test hypotheses about their causes, and develop new analysis tools.
We present a new description of the cosmological evolution of the primordial magnetic field under the condition that it is non-helical and that its energy density is larger than the kinetic energy density. We argue that the evolution can be described by four different regimes, according to whether the decay dynamics is linear or not, and whether the dominant dissipation term is the shear viscosity or the drag force. Using this classification and the conservation of the Hosking integral, we present analytic models to adequately interpret the results of various numerical simulations of field evolution with a variety of initial conditions. We find that, contrary to conventional wisdom, the decay of the field is generally slow, exhibiting inverse transfer, because of the conservation of the Hosking integral. Using the description proposed here, one can trace the intermediate evolution history of the magnetic field and clarify whether each process governing its evolution is frozen or not. Its applicability to early cosmology is important, since primordial magnetic fields are sometimes constrained to be quite weak, and multiple regimes, including the frozen regime, matter for such weak fields.
We investigate the ability of human 'expert' classifiers to identify strong gravitational lens candidates in Dark Energy Survey-like imaging. We recruited a total of 55 people who completed more than 25 per cent of the project. During the classification task, we presented 1489 images to the participants. The sample contains a variety of data, including lens simulations, real lenses, non-lens examples, and unlabelled data. We find that experts are extremely good at finding bright, well-resolved Einstein rings, while arcs with g-band signal to noise less than ~25 or Einstein radii less than ~1.2 times the seeing are rarely recovered. Very few non-lenses are scored highly. There is substantial variation in the performance of individual classifiers, but it does not appear to depend on the classifier's experience, confidence or academic position. These variations can be mitigated with a team of 6 or more independent classifiers. Our results give confidence that humans are a reliable pruning step for lens candidates, providing pure and quantifiably complete samples for follow-up studies.
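The variance-mitigation effect of a team of independent classifiers can be illustrated with a toy simulation: averaging k independent noisy scores shrinks the score scatter by roughly 1/sqrt(k). The Gaussian noise model and all numbers below are illustrative assumptions, not the measured classifier statistics of the study.

```python
# Toy model: individual classifier scores scatter around a true score;
# a team of k = 6 independent classifiers averages that scatter down.
import random
import statistics

def noisy_score(true_score, sigma, rng):
    """One classifier's score: Gaussian noise, clipped to [0, 1]."""
    return min(1.0, max(0.0, rng.gauss(true_score, sigma)))

rng = random.Random(42)
true_score, sigma, k = 0.7, 0.15, 6

single = [noisy_score(true_score, sigma, rng) for _ in range(5000)]
team = [statistics.mean(noisy_score(true_score, sigma, rng) for _ in range(k))
        for _ in range(5000)]
```

With k = 6 the team-averaged scatter drops to roughly 40% of a single classifier's, which is the sense in which a team of 6 or more mitigates individual variation.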
The RadMap Telescope is a new radiation-monitoring instrument operating in the U.S. Orbital Segment (USOS) of the International Space Station (ISS). The instrument was commissioned in May 2023 and will rotate through four locations inside American, European, and Japanese modules over a period of about six months. In some locations, it will take data alongside operational, validated detectors for a cross-check of measurements. RadMap's central detector is a finely segmented tracking calorimeter that records detailed depth-dose data relevant to studies of the radiation exposure of the ISS crew. It is also able to record particle-dependent energy spectra of cosmic-ray nuclei with energies up to several hundred MeV per nucleon. A unique feature of the detector is its ability to track nuclei with omnidirectional sensitivity at an angular resolution of two degrees. In this contribution, we present the design and capabilities of the RadMap Telescope and give an overview of the instrument's commissioning on the ISS.
We show that final state interactions (FSI) within a CPT-invariant two-channel framework can enhance the charge-parity (CP) violation difference between $D^0\to\pi^-\pi^+$ and $D^0\to K^-K^+$ decays up to the current experimental value. This result relies upon (i) the dominant tree-level diagram, (ii) the well-known experimental values for the $D^0\to\pi^-\pi^+$ and $D^0\to K^-K^+$ branching ratios, and (iii) the $\pi\pi\to\pi\pi$ and $\pi\pi\to K\bar{K}$ scattering data to extract the strong phase difference and inelasticity. Based on well-grounded theoretical properties, we find the sign and bulk value of the $\Delta A_{CP}$ and $A_{CP}(D^0\to\pi^-\pi^+)$ recently observed by the LHCb Collaboration.
Mechanisms of nucleic acid accumulation were likely critical to life's emergence in the ferruginous oceans of the early Earth. How exactly prebiotic geological settings accumulated nucleic acids from dilute aqueous solutions is poorly understood. As a possible solution to this concentration problem, we simulated the conditions of prebiotic low-temperature alkaline hydrothermal vents in co-precipitation experiments to investigate the potential of ferruginous chemical gardens to accumulate nucleic acids via sorption. The injection of an alkaline solution into an artificial ferruginous solution under anoxic conditions (O2 < 0.01% of present atmospheric levels) and at ambient temperatures caused the precipitation of amakinite ("white rust"), which quickly converted to chloride-containing fougerite ("green rust"). RNA was only extractable from the ferruginous solution in the presence of a phosphate buffer, suggesting that RNA in solution was bound to Fe2+ ions. During chimney formation, this iron-bound RNA rapidly accumulated in the white and green rust chimney structure from the surrounding ferruginous solution, at the fastest rates in the initial white rust phase and at correspondingly slower rates in the subsequent green rust phase. This represents a new mechanism for nucleic acid accumulation in the ferruginous oceans of the early Earth, in addition to wet-dry cycles, and may have helped to concentrate RNA in a dilute prebiotic ocean.
Early dust grain growth in protostellar envelopes infalling onto young discs has been suggested in recent studies, supporting the hypothesis that dust particles start to agglomerate already during the Class 0/I phase of young stellar objects (YSOs). If this early evolution were confirmed, it would impact the usually assumed initial conditions of planet formation, where only particles with sizes ≲ 0.25 μm are usually considered for protostellar envelopes. We aim to determine the maximum grain size of the dust population in the envelope of the Class 0/I protostar L1527 IRS, located in the Taurus star-forming region (140 pc). We use Atacama Large Millimeter/submillimeter Array (ALMA) and Atacama Compact Array (ACA) archival data and present new observations, in an effort both to enhance the signal-to-noise ratio of the faint extended continuum emission and to properly account for the compact emission from the inner disc. Using observations performed in four wavelength bands and extending the spatial range of previous studies, we aim to place tight constraints on the spectral (α) and dust emissivity (β) indices in the envelope of L1527 IRS. We find a rather flat α ∼ 3.0 profile in the range 50-2000 au. Accounting for the envelope temperature profile, we derive values for the dust emissivity index, 0.9 < β < 1.6, and reveal a tentative, positive outward gradient. This could be interpreted as a distribution of mainly ISM-like grains at 2000 au, gradually progressing to (sub-)millimetre-sized dust grains in the inner envelope, where at R = 300 au, β = 1.1 ± 0.1. Our study supports a variation of the dust properties in the envelope of L1527 IRS. We discuss how this can be the result of in-situ grain growth, dust differential collapse from the parent core, or upward transport of large grains from the disc.
Complex coacervation describes the liquid-liquid phase separation of oppositely charged polymers. Active coacervates are droplets in which the affinity of one of the electrolytes is regulated by chemical reactions. These droplets are particularly interesting because they are tightly regulated by reaction kinetics. For example, they serve as a model for membraneless organelles that are also often regulated by biochemical transformations such as post-translational modifications. They are also a great protocell model and could be used to synthesize life: they spontaneously emerge in response to reagents, compete, and decay when all nutrients have been consumed. However, the role of the unreactive building blocks, e.g., the polymeric compounds, is poorly understood. Here, we show the important role of the chemically innocent, unreactive polyanion of our chemically fueled coacervation droplets. We show that the polyanion drastically influences the resulting droplets' life cycle without influencing the chemical reaction cycle: the droplets are either very dynamic or show a delayed dissolution. Additionally, we derive a mechanistic understanding of our observations and show how additives and rational polymer design help to create the desired coacervate emulsion life cycles.
Galaxy-scale strong lenses in galaxy clusters provide a unique tool to investigate their inner mass distribution and the sub-halo density profiles in the low-mass regime, which can be compared with the predictions from cosmological simulations. We search for galaxy-galaxy strong-lensing systems in HST multi-band imaging of galaxy cluster cores from the CLASH and HFF programs by exploring the classification capabilities of deep learning techniques. Convolutional neural networks are trained utilising highly-realistic simulations of galaxy-scale strong lenses injected into the HST cluster fields around cluster members. To this aim, we take advantage of extensive spectroscopic information on member galaxies in 16 clusters and the accurate knowledge of the deflection fields in half of these from high-precision strong lensing models. Using observationally-based distributions, we sample magnitudes, redshifts and sizes of the background galaxy population. By placing these sources within the secondary caustics associated with cluster galaxies, we build a sample of ~3000 galaxy-galaxy strong lenses which preserve the full complexity of real multi-colour data and produce a wide diversity of strong lensing configurations. We study two deep learning networks processing a large sample of image cutouts in three HST/ACS bands, and we quantify their classification performance using several standard metrics. We find that both networks achieve a very good trade-off between purity and completeness (85%-95%), as well as good stability with fluctuations within 2%-4%. We characterise the limited number of false negatives and false positives in terms of the physical properties of the background sources and cluster members. We also demonstrate the neural networks' high degree of generalisation by applying our method to HST observations of 12 clusters with previously known galaxy-scale lensing systems.
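The purity and completeness quoted above are the standard classification metrics (precision and recall). As a minimal illustration, with made-up labels rather than the actual network outputs, they can be computed directly from the true/false positive and negative counts:

```python
import numpy as np

def purity_completeness(y_true, y_pred):
    """Purity = TP/(TP+FP); completeness = TP/(TP+FN)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # real lenses classified as lenses
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives (missed lenses)
    return tp / (tp + fp), tp / (tp + fn)

# toy labels: 1 = galaxy-galaxy lens, 0 = non-lens cutout
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, c = purity_completeness(y_true, y_pred)
# here p = 0.75 and c = 0.75
```

A high-purity selection minimizes contamination of the lens sample, while high completeness matters when building statistically representative lens catalogs; the quoted 85%-95% trade-off balances the two.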
This thesis focuses on two main research topics. Firstly, it explores the strong interaction in the proton-deuteron (p-d) system. This is done by measuring two-body correlations with the femtoscopy technique for p-d pairs in pp collisions at the LHC. The measured correlation is sensitive to the dynamics of three nucleons and can be explained only by full three-body calculations. Secondly, it investigates the production of deuterons in pp collisions employing the coalescence model.
The constraining power promised by future large-scale structure (LSS) surveys has driven the development of ever better techniques for extracting cosmological information from those datasets. The expected increase in the number of modes within reach of the theory offers an improvement of a few orders of magnitude with respect to the cosmic microwave background (CMB). This extra information is hidden within the non-linear structures of the LSS, and the different physical processes at play must be modelled very carefully in order to deal responsibly with the upcoming datasets. Consequently, the main goal of this thesis was to push the development and understanding of such theoretical models for the clustering of the large-scale structure. [...]
The Cherenkov effect describes the creation of well-defined photon signatures by charged particles traversing a medium faster than the speed of light in the medium. If the momentum of a charged particle is known, the Cherenkov effect allows its identification. Particle identification is one of the primary reasons for using the Cherenkov effect in large detector systems built for high energy physics. Detector systems such as LHCb at the Large Hadron Collider at CERN and Belle II at KEKB employ Cherenkov detectors for particle identification. In reverse, this thesis aims to reconstruct a known particle's momentum by measuring its Cherenkov cone. Such a detector, called an inverted RICH, has potential applications in beam diagnostics for high energy physics. This work presents the development, construction, and characterization of a prototype detector for an inverted RICH. The detector uses a lithium fluoride (LiF) crystal (diameter 50 mm, thickness 20 mm) with a high refractive index in the UV. The Cherenkov photons created are converted to electrons in a cesium iodide (CsI) photocathode after being transmitted through a chromium (Cr) layer. The signal is detected by a 10×10 cm² resistive-strip Micromegas. A high voltage guides the Cherenkov electrons through the drift region of the Micromegas while the charged particle creates primary electrons inside the gas-filled detector. In this thesis, different radiator and photocathode materials have been studied and explored using the Geant4 simulation toolkit. LiF and MgF₂ were the most suited radiators for initial studies due to their sizeable refractive index, which leads to a large photon yield. CsI was the most suitable photocathode candidate due to its high peak quantum efficiency of 9%. Also, the CsI photocathode is easier to use with gaseous detectors compared to, e.g., bialkali. [...]
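The relation underlying the emission geometry is the Cherenkov condition cos θ_c = 1/(nβ), which also sets the velocity threshold β > 1/n. A short sketch, using an illustrative UV refractive index for LiF (the exact value is wavelength-dependent and is an assumption here):

```python
import math

def cherenkov_angle_deg(n, beta):
    """Cherenkov emission angle: cos(theta_c) = 1/(n*beta), defined for n*beta > 1."""
    if n * beta <= 1.0:
        raise ValueError("below Cherenkov threshold")
    return math.degrees(math.acos(1.0 / (n * beta)))

n_lif = 1.42                    # illustrative UV refractive index for LiF
beta_threshold = 1.0 / n_lif    # emission requires beta > 1/n
theta = cherenkov_angle_deg(n_lif, beta=0.999)   # near-relativistic particle
```

The large refractive index of LiF or MgF₂ both lowers the velocity threshold and widens the cone, which is why these radiators give a large photon yield.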
We apply the cobordism hypothesis with singularities to the case of affine Rozansky--Witten models, providing a construction of extended TQFTs that includes all line and surface defects. On a technical level, this amounts to proving that the associated homotopy 2-category is pivotal, and to systematically employing its 3-dimensional graphical calculus. This in particular allows us to explicitly calculate state spaces for surfaces with arbitrary defect networks. As specific examples we discuss symmetry defects which can be used to model non-trivial background gauge fields, as well as boundary conditions.
The building of planetary systems is controlled by the gas and dust dynamics of protoplanetary disks. While the gas is simultaneously accreted onto the central star and dissipated away by winds, dust grains aggregate and collapse to form planetesimals and eventually planets. This dust and gas dynamics involves instabilities, turbulence and complex non-linear interactions which ultimately control the observational appearance and the secular evolution of these disks. This chapter is dedicated to the most recent developments in our understanding of the dynamics of gaseous and dusty disks, covering hydrodynamic and magnetohydrodynamic turbulence, gas-dust instabilities, dust clumping and disk winds. We show how these physical processes have been tested from observations and highlight standing questions that should be addressed in the future.
The field of planet formation is in an exciting era, where recent observations of disks around low- to intermediate-mass stars made with state-of-the-art interferometers and high-contrast optical and IR facilities have revealed a diversity of substructures, some possibly planet-related. It is therefore important to understand the physical and chemical nature of the protoplanetary building blocks, as well as their spatial distribution, to better understand planet formation. Since PPVI, the field has seen tremendous improvements in observational capabilities, enabling both surveys of large samples of disks and high-resolution imaging studies of a few bright disks. Improvements in data quality and sample size have, however, opened up many fundamental questions about properties such as the mass budget of disks, its spatial distribution, and its radial extent. Moreover, the vertical structure of disks has been studied in greater detail with spatially resolved observations, providing new insights on vertical layering and temperature stratification, yet also giving rise to questions about other properties, such as material transport and viscosity. Each one of these properties (disk mass, surface density distribution, outer radius, vertical extent, temperature structure, and transport) is of fundamental interest, as they collectively set the stage for disk evolution and the corresponding planet formation theories. In this chapter, we review our understanding of the fundamental properties of disks, including the relevant observational techniques to probe their nature, modeling methods, and the respective caveats. Finally, we discuss the implications for theories of disk evolution and planet formation, underlining what new questions have arisen as our observational facilities have improved.
We study the decay $J/\psi\to\pi^+\pi^-\pi^0$ within the framework of the Khuri-Treiman equations. We find that the BESIII experimental dipion mass distribution in the $\rho(770)$ region is well reproduced with a once-subtracted $P$-wave amplitude. Furthermore, we show that $F$-wave contributions to the amplitude improve the description of the data in the $\pi\pi$ mass region around 1.5 GeV. We also present predictions for the $J/\psi\to\pi^0\gamma^*$ transition form factor.
In order to predict the cosmological abundance of dark matter, an estimation of particle rates in an expanding thermal environment is needed. For thermal dark matter, the non-relativistic regime sets the stage for the freeze-out of the dark matter energy density. We compute transition widths as well as annihilation, bound-state formation, and dissociation cross sections of dark matter fermion pairs in the unifying framework of non-relativistic effective field theories at finite temperature, with the thermal bath modeling the thermodynamical behaviour of the early universe. We reproduce and extend some known results for the paradigmatic case of a dark fermion species coupled to dark gauge bosons. The effective field theory framework allows us to highlight their range of validity and consistency, and to identify some possible improvements.
In this manuscript, we elaborate on a procedure to derive ϵ-factorised differential equations for multi-scale, multi-loop classes of Feynman integrals that evaluate to special functions beyond multiple polylogarithms. We demonstrate the applicability of our approach to diverse classes of problems by working out ϵ-factorised differential equations for single- and multi-scale problems of increasing complexity. To start, we reconsider the well-studied equal-mass two-loop sunrise case, and then move on to study other elliptic two-, three-, and four-point problems depending on multiple different scales. Finally, we showcase how the same approach allows us to obtain ϵ-factorised differential equations also for Feynman integrals that involve geometries beyond a single elliptic curve.
We consider electrically neutral complex-vector particles V below the GeV mass scale that, from a low-energy perspective, couple to the photon via higher-dimensional form factor interactions. We derive the ensuing astrophysical constraints by considering the anomalous energy loss from the Sun, Horizontal Branch and Red Giant stars, as well as from SN1987A, that arises from vector pair production in these environments. Under the assumption that the dark states V constitute dark matter, the bounds are then complemented by direct and indirect detection as well as cosmological limits. The relic density from freeze-out and freeze-in mechanisms is also computed. On the basis of a UV-complete model that realizes the considered effective couplings, we also discuss the naturalness of the constrained parameter space, and provide an analysis of the zero-mass limit of V.
We study a neural network framework for the numerical evaluation of Feynman loop integrals that are fundamental building blocks for perturbative computations of physical observables in gauge and gravity theories. We show that such a machine learning approach improves the convergence of the Monte Carlo algorithm for high-precision evaluation of multi-dimensional integrals compared to traditional algorithms. In particular, we use a neural network to improve the importance sampling. For a set of representative integrals appearing in the computation of the conservative dynamics for a compact binary system in General Relativity, we perform a quantitative comparison between the Monte Carlo integrators VEGAS and i-flow, an integrator based on neural network sampling.
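The principle behind neural importance sampling can be illustrated without a network: a proposal density adapted to the integrand's peak reduces the Monte Carlo error at fixed sample size, and a trained flow automates the choice of proposal. A toy sketch with a hand-picked (rather than learned) proposal and a sharply peaked stand-in integrand, not an actual Feynman integral:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Sharply peaked 1D target, a stand-in for a poorly behaved loop integrand.
def f(x):
    return 1.0 / (x**2 + 0.01)
# exact value of the integral over [0, 1]: 10 * arctan(10) ≈ 14.711

# Plain uniform-sampling Monte Carlo
vals_uni = f(rng.uniform(0.0, 1.0, N))
est_uni, err_uni = vals_uni.mean(), vals_uni.std() / np.sqrt(N)

# Importance sampling: a trained flow would learn the proposal q(x);
# here we hand-pick q(x) = 1 / ((x + 0.1) * ln 11), which also peaks at x = 0.
u = rng.uniform(0.0, 1.0, N)
x_imp = 0.1 * 11.0**u - 0.1                    # inverse-CDF sampling from q
w = f(x_imp) * (x_imp + 0.1) * np.log(11.0)    # weights f(x) / q(x)
est_imp, err_imp = w.mean(), w.std() / np.sqrt(N)
# err_imp comes out a few times smaller than err_uni for the same N
```

The same variance-reduction logic carries over to the multi-dimensional integrals in the paper, where VEGAS adapts a factorized grid and i-flow learns a normalizing-flow proposal.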
We demonstrate the importance of quantum jumps in the nonequilibrium evolution of bottomonium states in the quark-gluon plasma. Based on nonrelativistic effective field theory and the open quantum system framework, we evolve the density matrix of color singlet and octet pairs. We show that quantum regeneration of singlet states from octet configurations is necessary to understand experimental results for the suppression of both bottomonium ground and excited states. The values of the heavy-quarkonium transport coefficients used are consistent with recent lattice QCD determinations.
Galaxy clusters in the Universe occupy the important position of nodes of the cosmic web. They are connected to one another by filaments, elongated structures composed of dark matter, galaxies, and gas. The connection of galaxy clusters to filaments is important, as it is related to the process of matter accretion onto the former. For this reason, investigating the connections to the cosmic web of massive clusters, especially well-known ones for which a lot of information is available, is a hot topic in astrophysics. In a previous work, we performed an analysis of the filament connections of the Coma cluster of galaxies, as detected from the observed galaxy distribution. In this work, to interpret our observations in an evolutionary context, we resort to a numerical simulation whose initial conditions are constrained to reproduce the local Universe, including the region of the Coma cluster. We detect the filaments connected to the simulated Coma cluster and perform an accurate comparison with the cosmic web configuration we detect in observations. We analyse the halos' spatial and velocity distributions close to the filaments in the cluster outskirts. We conclude that, although not significantly larger than the average, the flux of matter accreting onto the simulated Coma cluster is significantly more collimated close to the filaments compared to the general isotropic accretion flux. This paper is the first example of such a result and the first installment in a series of publications which will explore the build-up of the Coma cluster system in connection to the filaments of the cosmic web as a function of redshift.
Galaxy evolution is an important topic, and our physical understanding must be complete in order to establish a correct picture. This includes a thorough treatment of feedback. The effects of thermal-mechanical and radiative feedback have been widely considered; however, cosmic rays (CRs) are also powerful energy carriers in galactic ecosystems. Resolving the capability of CRs to operate as a feedback agent is therefore essential to advance our understanding of the processes regulating galaxies. The effects of CRs are yet to be fully understood, and their complex multi-channel feedback mechanisms operating across the hierarchy of galaxy structures pose a significant technical challenge. This review examines the role of CRs in galaxies, from the scale of molecular clouds to the circumgalactic medium. We provide an overview of their interaction processes, their implications for galaxy evolution, and their observable signatures, and we discuss their capability to modify the thermal and hydrodynamic configuration of galactic ecosystems. We present recent advancements in our understanding of CR processes and the interpretation of their signatures, highlight where technical challenges and unresolved questions persist, and discuss how these may be addressed with upcoming opportunities.
Context. Galaxy clusters are the most massive bound objects in the recent history of the universe; the number density of galaxy clusters as a function of mass and redshift is a sensitive function of the cosmological parameters. To use clusters for cosmological parameter studies, it is necessary to determine their masses as accurately as possible, which is typically done via scaling relations between mass and observables.
Aims: X-ray observables can be biased by a number of effects, including multiphase gas and projection effects, especially when cluster temperatures and luminosities are estimated from single-model fits to all of the emission within an overdensity radius such as $r_{500c}$. Using simulated galaxy clusters from a realistic cosmological simulation, our aim is to determine the importance of these biases in the context of Spectrum-Roentgen-Gamma/eROSITA observations of clusters.
Methods: We extracted clusters from the Box2_hr simulation of the Magneticum suite and simulated synthetic eROSITA observations of these clusters, using PHOX to generate the photons and the end-to-end simulator SIXTE to trace them through the optics and simulate the detection process. We fitted the spectra from these observations and compared the fitted temperatures and luminosities to the quantities derived from the simulations. We then fitted an intrinsically scattered $L_X-T$ scaling relation to these measurements following a Bayesian approach that fully takes into account the selection effects and the mass function.
Results: The largest biases on the estimated temperatures and luminosities of the clusters come from the inadequacy of single-temperature model fits to represent emission from multiphase gas, and from cluster emission within the projected $r_{500c}$ along the line of sight but outside the spherical $r_{500c}$. We find that the biases on temperature and luminosity due to the projection of emission from other clusters within $r_{500c}$ are comparatively small. We find that eROSITA-like measurements of Magneticum clusters follow an $L_X-T$ scaling relation with a broadly consistent but slightly shallower slope compared to literature values. We also find that the intrinsic scatter of $L_X$ at given $T$ is lower than in recent observational results in which the selection effects are fully considered.
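Fitting an intrinsically scattered scaling relation of the kind described in the Methods can be sketched by maximum likelihood, where the intrinsic scatter adds in quadrature to the measurement errors. The sketch below uses synthetic data and omits the selection-effect and mass-function modelling of the full Bayesian analysis:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic clusters: log L_X = a + b * log T plus intrinsic scatter
# (parameter values are illustrative, not the paper's fit results).
a_true, b_true, sig_true = 0.5, 2.5, 0.15
logT = rng.uniform(0.2, 1.0, 200)
logL = a_true + b_true * logT + rng.normal(0.0, sig_true, logT.size)
err = np.full_like(logL, 0.05)             # measurement error on log L_X
logL_obs = logL + rng.normal(0.0, err)

def neg_log_like(p):
    a, b, log_sig = p
    var = err**2 + np.exp(log_sig)**2      # measurement + intrinsic scatter
    resid = logL_obs - (a + b * logT)
    return 0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

res = minimize(neg_log_like, x0=[0.0, 2.0, np.log(0.1)], method="Nelder-Mead")
a_fit, b_fit, sig_fit = res.x[0], res.x[1], np.exp(res.x[2])
```

Ignoring the selection function when the sample is flux-limited would bias both the slope and the recovered intrinsic scatter, which is why the paper folds selection effects and the mass function into the likelihood.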
Context. Observational studies carried out to calibrate the masses of galaxy clusters often use mass-richness relations to interpret galaxy number counts.
Aims: Here, we aim to study the impact of modelling the richness-mass relation with cosmological parameters on mock mass calibrations.
Methods: We build a Gaussian process regression emulator of high-mass satellite abundance normalisation and log-slope based on cosmological parameters Ωm, Ωb, σ8, h0, and redshift z. We train our emulator using Magneticum hydrodynamic simulations that span different cosmologies for a given set of feedback scheme parameters.
Results: We find that the normalisation depends, albeit weakly, on cosmological parameters, especially on Ωm and Ωb, and that including this dependence in mock observations increases their constraining power by 10%. On the other hand, the log-slope is ≈1 in every setup, and the emulator does not predict it with significant accuracy. We also show that the cosmology dependence of the satellite abundance differs between full-physics, dark-matter-only, and non-radiative simulations.
Conclusions: Mass-calibration studies would benefit from modelling the mass-richness relation with cosmological parameters, especially given the cosmology dependence of the satellite abundance.
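A Gaussian process emulator of the kind described in the Methods can be sketched in a few lines with a squared-exponential kernel. The training set and cosmology dependence below are synthetic stand-ins (made-up coefficients), not Magneticum measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set over (Omega_m, Omega_b, sigma_8, h_0, z)
lo = np.array([0.2, 0.03, 0.7, 0.6, 0.0])
hi = np.array([0.4, 0.06, 0.9, 0.8, 1.0])
X = rng.uniform(lo, hi, size=(200, 5))

def toy_norm(X):
    """Weak, made-up cosmology dependence of the richness-mass normalisation."""
    Om, Ob, s8, h0, z = X.T
    return 1.0 + 0.3 * Om - 0.5 * Ob + 0.1 * s8 - 0.05 * z

sigma_n = 0.005                       # simulated measurement noise
y = toy_norm(X) + rng.normal(0.0, sigma_n, len(X))

def rbf(A, B, ls):
    """Squared-exponential kernel with per-dimension length scales."""
    d2 = (((A[:, None, :] - B[None, :, :]) / ls)**2).sum(axis=2)
    return np.exp(-0.5 * d2)

ls = hi - lo                          # one length scale per parameter
K = rbf(X, X, ls) + sigma_n**2 * np.eye(len(X))
alpha = np.linalg.solve(K, y - y.mean())

# Emulator prediction at a new cosmology
X_test = np.array([[0.3, 0.045, 0.8, 0.7, 0.5]])
pred = y.mean() + rbf(X_test, X, ls) @ alpha
```

In practice the length scales and noise level would be optimized against the marginal likelihood, and the emulator trained on the normalisation and log-slope measured from the simulation suite.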
We investigate the nucleosynthesis and kilonova properties of binary neutron star (NS) merger models that lead to intermediate remnant lifetimes of ~0.1-1 s until black hole (BH) formation and describe all components of the material ejected during the dynamical merger phase, NS remnant evolution, and final viscous disintegration of the BH torus after gravitational collapse. To this end, we employ a combination of hydrodynamics, nucleosynthesis, and radiative transfer tools to achieve a consistent end-to-end modeling of the system and its observables. We adopt a novel version of the Shakura-Sunyaev scheme allowing the approximate turbulent viscosity inside the NS remnant to vary independently of the surrounding disk. We find that asymmetric progenitors lead to shorter remnant lifetimes and enhanced ejecta masses, although the viscosity affects the absolute values of these characteristics. The integrated production of lanthanides and heavier elements in such binary systems is subsolar, suggesting that the considered scenarios contribute in a subdominant fashion to r-process enrichment. One reason is that BH tori formed after delayed collapse exhibit less neutron-rich conditions than typically found, and often assumed in previous BH torus models, for early BH formation. The outflows in our models feature strong anisotropy as a result of the lanthanide-poor polar neutrino-driven wind pushing aside lanthanide-rich dynamical ejecta. Considering the complexity of the models, the estimated kilonova light curves show promising agreement with AT 2017gfo after times of several days, while the remaining inconsistencies at early times could possibly be overcome in binary configurations with a more dominant neutrino-driven wind relative to the dynamical ejecta.
Majoron-like bosons would emerge from a supernova (SN) core by neutrino coalescence of the form $\nu\nu\to\phi$ and $\bar\nu\bar\nu\to\phi$ with 100-MeV-range energies. Subsequent decays to (anti)neutrinos of all flavors provide a flux component with energies much larger than those of the usual flux from the "neutrino sphere." The absence of 100-MeV-range events in the Kamiokande-II and Irvine-Michigan-Brookhaven signal of SN 1987A implies that less than 1% of the total energy was thus emitted and provides the strongest constraint on the Majoron-neutrino coupling, $g \lesssim 10^{-9}\,\mathrm{MeV}/m_\phi$ for $100\,\mathrm{eV} \lesssim m_\phi \lesssim 100\,\mathrm{MeV}$. It is straightforward to extend our new argument to other hypothetical feebly interacting particles.
We discuss the recent progress that has been made towards the computation of three-loop non-planar master integrals relevant to next-to-next-to-next-to-leading-order (N$^3$LO) corrections to processes such as H+jet production at the LHC. We describe the analytic structure of these integrals, as well as several technical issues regarding their analytic computation using canonical differential equations. Finally, we comment on the remaining steps towards the computation of all relevant three-loop topologies and their application to amplitude calculations.
The effective field theory of large-scale structure allows for a consistent perturbative bias expansion of the rest-frame galaxy density field. In this work, we present a systematic approach to renormalize galaxy bias and stochastic parameters using a finite cutoff scale $\Lambda$. We derive the differential equations of the Wilson-Polchinski renormalization group that describe the evolution of the finite-scale bias parameters with $\Lambda$, analogous to the $\beta$-function running in QFT. We further provide the connection between the finite-cutoff scheme and the renormalization procedure for $n$-point functions that has been used as standard in the literature so far; some inconsistencies in the treatment of renormalized bias in current EFT analyses are pointed out as well. The fixed-cutoff scheme allows us to predict, in a principled way, the finite part of loop contributions which is due to perturbative modes and which, in the standard renormalization approach, is absorbed into counterterms. We expect that this will allow for the robust extraction of (a yet-to-be-determined amount of) additional cosmological information from galaxy clustering, both when using field-level techniques and $n$-point functions.
New-generation direct searches for low-mass dark matter feature detection thresholds at energies well below 100 eV, much lower than the energies of commonly used X-ray calibration sources. This requires new calibration sources with sub-keV energies. When searching for nuclear recoil signals, the calibration source should ideally cause monoenergetic nuclear recoils in the relevant energy range. Recently, a new calibration method was proposed, based on radiative neutron capture on 182W with subsequent deexcitation via single-γ emission, which leads to a nuclear recoil peak at 112 eV. The CRESST-III dark matter search has operated several CaWO4-based detector modules with detection thresholds below 100 eV in the past years. We report the observation of a peak around the expected energy of 112 eV in the data of three different detector modules, recorded while they were irradiated with neutrons from different AmBe calibration sources. We compare the properties of the observed peaks with Geant4 simulations and assess the prospects of using this peak for the energy calibration of CRESST-III detectors.
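The 112 eV peak follows from the two-body kinematics of single-γ emission: the recoiling nucleus carries E_R ≈ E_γ²/(2Mc²). A short check with approximate input values (the neutron separation energy of 183W and its atomic mass, both assumptions quoted to limited precision):

```python
# Recoil energy of a nucleus after emitting a single gamma following
# neutron capture: E_R ≈ E_gamma^2 / (2 M c^2).
# Approximate inputs for n + 182W -> 183W + gamma:
E_gamma_eV = 6.19e6          # single-gamma deexcitation energy (~ S_n of 183W)
u_eV = 931.494e6             # atomic mass unit in eV/c^2
M_eV = 182.95 * u_eV         # mass of the recoiling 183W nucleus

E_recoil = E_gamma_eV**2 / (2.0 * M_eV)
# lands close to the 112 eV nuclear recoil peak
```

Because the γ escapes the crystal while the recoil is fully contained, the detector sees a monoenergetic nuclear recoil line, exactly what is needed for a sub-keV calibration.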
The task of reconstructing particles from low-level detector response data to predict the set of final-state particles in collision events represents a set-to-set prediction task requiring the use of multiple features and their correlations in the input data. We deploy three separate set-to-set neural network architectures to reconstruct particles in events containing a single jet in a fully simulated calorimeter. Performance is evaluated in terms of particle reconstruction quality, property regression, and jet-level metrics. The results demonstrate that such a high-dimensional end-to-end approach succeeds in surpassing basic parametric approaches in disentangling individual neutral particles inside of jets and in optimizing the use of complementary detector information. In particular, the performance comparison favors a novel architecture based on learning hypergraph structure, HGPflow, which benefits from a physically interpretable approach to particle reconstruction.
The singlet sector of the O(N) $\phi^4$ model in AdS$_4$ at large $N$ gives rise to a dual conformal field theory on the conformal boundary of AdS$_4$, which is a deformation of the generalized free field. We identify and compute an AdS$_4$ three-point one-loop fish diagram that controls the exact large-$N$ dimensions and operator product expansion (OPE) coefficients for all "double trace" operators as a function of the renormalized $\phi^4$ couplings. We find that the space of $\phi^4$ couplings is compact, with a boundary at the bulk Landau pole. The dual CFT is unitary only in an interval of negative couplings bounded by the Landau pole, where the lowest OPE coefficient diverges.
A modular representation for the semileptonic decays of baryons originating from spin-polarized and correlated baryon-antibaryon pairs is derived. The complete spin information of the decaying baryon is propagated to the daughter baryon via a real-valued matrix. It allows us to obtain joint differential distributions in sequential processes involving the semileptonic decay in a straightforward way. The formalism is suitable for the extraction of the semileptonic form factors in experiments where strange baryon-antibaryon pairs are produced in electron-positron annihilation or in charmonia decays. We give examples such as the complete angular distributions in the $e^+e^-\to\Lambda\bar\Lambda$ process, where $\Lambda\to p e^-\bar\nu_e$ and $\bar\Lambda\to\bar p\pi^+$. The formalism can also be used to describe the distributions in semileptonic decays of charm and bottom baryons. Using the same principles, modules describing electromagnetic and neutral-current weak baryon decay processes involving a charged lepton-antilepton pair can be obtained. As an example, we provide the decay matrix for the Dalitz transition between two spin-$1/2$ baryons.
We study the relation between the metallicities of ionized and atomic gas in star-forming galaxies at z = 0-3 using the Evolution and Assembly of GaLaxies and their Environments (EAGLE) cosmological, hydrodynamical simulations. This is done by constructing a dense grid of sight lines through the simulated galaxies and obtaining the star formation rate- and H I column density-weighted metallicities, $Z_{\rm SFR}$ and $Z_{\rm HI}$, for each sight line as proxies for the metallicities of ionized and atomic gas, respectively. We find $Z_{\rm SFR} \gtrsim Z_{\rm HI}$ for almost all sight lines, with their difference generally increasing with decreasing metallicity. The stellar masses of galaxies do not have a significant effect on this trend, but the positions of the sight lines with respect to the galaxy centers play an important role: the difference between the two metallicities decreases when moving toward the galaxy centers, and saturates to a minimum value in the central regions of galaxies, irrespective of redshift and stellar mass. This implies that the mixing of the two gas phases is most efficient in the central regions of galaxies, where sight lines generally have high column densities of H I. However, a high H I column density alone does not guarantee a small difference between the two metallicities. In galaxy outskirts, the inefficiency of the mixing of star-forming gas with H I seems to dominate over the dilution of heavy elements in H I through mixing with the pristine gas. We find good agreement between the available observational data and the $Z_{\rm SFR}$-$Z_{\rm HI}$ relation predicted by the EAGLE simulations, although observed regions with a nuclear starburst mode of star formation appear not to follow the same relation.
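The per-sight-line weighting described above can be sketched as follows, with toy cell values standing in for the gas cells a sight line intersects (the actual EAGLE analysis derives SFR and H I columns from the simulation outputs):

```python
import numpy as np

def sightline_metallicities(Z_cells, sfr_cells, NHI_cells):
    """SFR- and HI-column-weighted metallicities along one sight line.
    Each argument is an array over the gas cells the sight line intersects."""
    Z_cells = np.asarray(Z_cells, dtype=float)
    Z_sfr = np.average(Z_cells, weights=sfr_cells)   # proxy for ionized gas
    Z_hi = np.average(Z_cells, weights=NHI_cells)    # proxy for atomic gas
    return Z_sfr, Z_hi

# toy sight line: metal-rich star-forming inner cells, metal-poor outskirts
Z = [0.02, 0.015, 0.004]       # cell metallicities (mass fractions)
sfr = [1.0, 0.5, 0.0]          # star formation rate per cell
NHI = [1e20, 5e20, 1e21]       # H I column contribution per cell
Z_sfr, Z_hi = sightline_metallicities(Z, sfr, NHI)
# Z_sfr exceeds Z_hi, as found for almost all EAGLE sight lines
```

Because star formation traces the dense, enriched gas while H I extends into metal-poor outer material, this weighting naturally produces $Z_{\rm SFR} \gtrsim Z_{\rm HI}$ whenever the two phases are imperfectly mixed.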