Earth and other rocky objects in the inner Solar system are depleted in carbon compared to objects in the outer Solar system, the Sun, or the ISM. This is believed to result from the selective removal of refractory carbon from primordial circumstellar material. In this work, we study the irreversible release of carbon into the gaseous environment via photolysis and pyrolysis of refractory carbonaceous material during the disc phase of the early Solar system. We analytically solve the one-dimensional advection equation and derive an explicit expression that describes the depletion of carbonaceous material in solids under the influence of radial and vertical transport. We find that both depletion mechanisms individually fail to reproduce Solar system abundances under typical conditions. While radial transport only marginally restricts photodecomposition, it is the inefficient vertical transport that limits carbon depletion under these conditions. We show explicitly that an increase in the vertical mixing efficiency, and/or an increase in the directly irradiated disc volume, favours carbon depletion. Thermal decomposition requires a hot inner disc (>500 K) beyond 3 au to deplete the formation region of Earth and chondrites. We find that FU Ori-type outbursts produce these conditions, such that moderately refractory compounds are depleted. However, such outbursts likely do not deplete the most refractory carbonaceous compounds beyond the innermost disc region. Hence, the refractory carbon abundance at 1 au typically does not reach terrestrial levels. Nevertheless, under specific conditions, we find that photolysis and pyrolysis combined can reproduce Solar system abundances.
Many essential building blocks of life, including amino acids, sugars, and nucleosides, require
aldehydes for prebiotic synthesis. Pathways for their formation under early Earth conditions
are therefore of great importance. We investigated the formation of aldehydes by an
experimental simulation of primordial early Earth conditions, in line with the metal-sulfur
world theory in an acetylene-containing atmosphere. We describe a pH-driven, intrinsically
autoregulatory environment that concentrates acetaldehyde and other higher molecular
weight aldehydes. We demonstrate that acetaldehyde is rapidly formed from acetylene over a
nickel sulfide catalyst in an aqueous solution, followed by sequential reactions progressively
increasing the molecular diversity and complexity of the reaction mixture. Interestingly,
through inherent pH changes, the evolution of this complex matrix leads to auto-stabilization
of de novo synthesized aldehydes and alters the subsequent synthesis of relevant
biomolecules rather than yielding uncontrolled polymerization products. Our results emphasize the
impact of progressively generated compounds on the overall reaction conditions and
strengthen the role of acetylene in forming essential building blocks that are fundamental for
the emergence of terrestrial life.
We discuss the potential of the multi-tracer technique to improve observational constraints of the local primordial non-Gaussianity (PNG) parameter $f_{\rm NL}$ from the galaxy power spectrum. For two galaxy samples $A$ and $B$, we show the constraining power is $\propto |b_1^B b_\phi^A - b_1^A b_\phi^B|$, where $b_1$ and $b_\phi$ are the linear and PNG galaxy bias parameters. This allows for significantly improved constraints compared to the traditional expectation $\propto |b_1^A - b_1^B|$ based on naive universality-like relations where $b_\phi \propto b_1$. Using IllustrisTNG galaxy simulation data, we find that different equal galaxy number splits of the full sample lead to different $|b_1^B b_\phi^A - b_1^A b_\phi^B|$, and thus have different constraining power. Of all of the strategies explored, splitting by $g-r$ color is the most promising, more than doubling the significance of detecting $f_{\rm NL}b_\phi \neq 0$. Importantly, since these are constraints on $f_{\rm NL}b_\phi$ and not $f_{\rm NL}$, they do not require priors on the $b_\phi(b_1)$ relation. For direct constraints on $f_{\rm NL}$, we show that multi-tracer constraints can be significantly more robust than single-tracer to $b_\phi$ misspecifications and uncertainties; this relaxes the precision and accuracy requirements for $b_\phi$ priors. Our results present new opportunities to improve our chances to detect and robustly constrain $f_{\rm NL}$, and strongly motivate galaxy formation simulation campaigns to calibrate the $b_\phi(b_1)$ relation.
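The quoted scaling can be illustrated with a short numerical sketch: the multi-tracer figure of merit $|b_1^B b_\phi^A - b_1^A b_\phi^B|$ can greatly exceed the universality-based expectation $|b_1^A - b_1^B|$ when the two samples have dissimilar $b_\phi$. The bias values below are hypothetical, not taken from the paper or from IllustrisTNG.

```python
def multitracer_fom(b1_A, bphi_A, b1_B, bphi_B):
    """Multi-tracer constraining power scales as |b1^B bphi^A - b1^A bphi^B|."""
    return abs(b1_B * bphi_A - b1_A * bphi_B)

def universality_fom(b1_A, b1_B):
    """Traditional expectation based on universality-like relations, |b1^A - b1^B|."""
    return abs(b1_A - b1_B)

# Hypothetical bias parameters for two samples, e.g. split by g-r colour:
b1_A, bphi_A = 1.2, 3.0
b1_B, bphi_B = 2.0, -1.0

fom_mt = multitracer_fom(b1_A, bphi_A, b1_B, bphi_B)  # 7.2
fom_un = universality_fom(b1_A, b1_B)                 # 0.8
```

With these (hypothetical) values the multi-tracer figure of merit is an order of magnitude larger than the universality-based one, which is the effect the text describes.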
We use the BOSS DR12 galaxy power spectrum to constrain compensated isocurvature perturbations (CIP), which are opposite-sign primordial baryon and dark matter perturbations that leave the total matter density unchanged. Long-wavelength CIP $\sigma(\vec{x})$ enter the galaxy density contrast as $\delta_g(\vec{x}) \supset b_\sigma\sigma(\vec{x})$, with $b_\sigma$ the linear CIP galaxy bias parameter. We parameterize the CIP spectra as $P_{\sigma\sigma} = A^2P_{\mathcal{R}\mathcal{R}}$ and $P_{\sigma\mathcal{R}} = \xi\sqrt{P_{\sigma\sigma}P_{\mathcal{R}\mathcal{R}}}$, where $A$ is the CIP amplitude and $\xi$ is the correlation with the curvature perturbations $\mathcal{R}$. We find a significance of detection of $Ab_\sigma \neq 0$ of $1.8\sigma$ for correlated ($\xi = 1$) and $3.7\sigma$ for uncorrelated ($\xi = 0$) CIP. Large-scale data systematics have a bigger impact for uncorrelated CIP, which may explain the large significance of detection. The constraints on $A$ depend on the assumed priors for the $b_\sigma$ parameter, which we estimate using separate universe simulations. Assuming $b_\sigma$ values representative of all halos we find $\sigma_A = 145$ for correlated CIP and $\sigma_{|A|} = 475$ for uncorrelated CIP. Our strongest uncorrelated CIP constraint is for $b_\sigma$ representative of the $33\%$ most concentrated halos, $\sigma_{|A|} = 197$, which is better than the current CMB bounds $|A| \lesssim 360$. We also discuss the impact of the local primordial non-Gaussianity parameter $f_{\rm NL}$ in CIP constraints. Our results demonstrate the power of galaxy data to place tight constraints on CIP, and motivate works to understand better the impact of data systematics, as well as to determine theory priors for $b_\sigma$.
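The CIP spectrum parameterization above is simple enough to sketch directly. The curvature spectrum amplitude below is a stand-in value for illustration, not fitted to BOSS data.

```python
import numpy as np

def cip_spectra(P_RR, A, xi):
    """P_ss = A^2 P_RR and P_sR = xi * sqrt(P_ss * P_RR),
    following the parameterization described in the text."""
    P_ss = A**2 * P_RR
    P_sR = xi * np.sqrt(P_ss * P_RR)
    return P_ss, P_sR

P_RR = 2.1e-9 * np.ones(5)                            # toy curvature spectrum (illustrative)
P_ss_c, P_sR_c = cip_spectra(P_RR, A=100.0, xi=1.0)   # fully correlated CIP
P_ss_u, P_sR_u = cip_spectra(P_RR, A=100.0, xi=0.0)   # uncorrelated CIP
```

For $\xi = 1$ the cross-spectrum reduces to $A\,P_{\mathcal{R}\mathcal{R}}$, while for $\xi = 0$ the CIP field carries power but is statistically independent of the curvature perturbations.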
Context: Several observations of the local Universe (LU) point towards the existence of very prominent structures: the presence of massive galaxy clusters and local superclusters on the one hand, but also large local voids and under-densities on the other. However, it is highly non-trivial to connect such observationally selected tracers to the underlying dark matter (DM) distribution. Methods (abridged): We used a 500 Mpc/h constrained simulation of the LU, with initial conditions based on peculiar velocities derived from the CosmicFlows-2 catalogue, and followed galaxy formation physics directly in the hydrodynamical simulations to base the comparison on the stellar masses of galaxies or the X-ray luminosity of clusters. We also used the 2668 Mpc/h cosmological box from the Magneticum simulations to evaluate the frequency of finding such anomalies in random patches within simulations. Results: We demonstrate that haloes and galaxies in our constrained simulation trace the local DM density field very differently. The simulation reproduces the observed 50% under-density of galaxy clusters and groups within a sphere of ~100 Mpc when applying the same mass or X-ray luminosity limit used in the observed cluster sample (CLASSIX), which is consistent with a ~1.5$\sigma$ feature. At the same time, the simulation reproduces the observed over-density of massive galaxy clusters within the same sphere, which on its own also corresponds to a ~1.5$\sigma$ feature. Interestingly, we find that only 44 out of 15635 random realizations (i.e. 0.28%) match both anomalies, making the LU a ~3$\sigma$ environment. We finally compared a mock galaxy catalogue with the observed distribution of galaxies in the LU, also finding a match to the observed factor-of-two over-density at ~16 Mpc as well as the observed 15% under-density at ~40 Mpc distance.
Free-floating planets (FFPs) can result from dynamical scattering processes happening in the first few million years of a planetary system's life. Several models predict that these isolated planetary-mass objects can retain exomoons after their ejection. The tidal heating mechanism and the presence of an atmosphere with a relatively high optical thickness may support the formation and maintenance of oceans of liquid water on the surface of these satellites. In order to study the timescales over which liquid water can be maintained, we perform dynamical simulations of the ejection process and infer the resulting statistics of the population of surviving exomoons around free-floating planets. The subsequent tidal evolution of the moons' orbital parameters is a pivotal step to determine when the orbits will circularize, with a consequent decay of the tidal heating. We find that close-in ($a \lesssim 25 $R$_{\rm J}$) Earth-mass moons with CO$_2$-dominated atmospheres could retain liquid water on their surfaces for long timescales, depending on the mass of the atmospheric envelope and the surface pressure assumed. Massive atmospheres are needed to trap the heat produced by tidal friction and make these moons habitable. For Earth-like pressure conditions ($p_0$ = 1 bar), satellites could sustain liquid water on their surfaces for up to 52 Myr. For higher surface pressures (10 and 100 bar), moons could be habitable for up to 276 Myr and 1.6 Gyr, respectively. Close-in satellites experience habitable conditions for long timescales, and during the ejection of the FFP they remain bound to the escaping planet, being less affected by the close encounter.
This work presents the results from extending the long-term monitoring program of stellar motions within the Galactic Center to include stars with separations of 2-7 arcsec from the compact radio source Sgr A*. In comparison to the well-studied inner 2 arcsec, a longer time baseline is required to study these stars. With 17 years of data, a sufficient number of positions along the orbits of these outer stars can now be measured. This was achieved by designing a source finder to track the positions of ∼2000 stars in NACO/VLT adaptive-optics-assisted images of the Galactic Center from 2002 to 2019. Of the studied stars, 54 exhibit significant accelerations toward Sgr A*, most of which have separations between 2 and 3 arcsec from the black hole. A further 20 of these stars have measurable radial velocities from SINFONI/VLT stellar spectra, which allows for the calculation of their orbital elements, thus increasing the number of known orbits in the Galactic Center by ∼40%. With orbits, we can consider which structural features within the Galactic Center nuclear star cluster these stars belong to. Most of the stars have orbital solutions that are consistent with the known clockwise rotating disk feature. Further, by employing Monte Carlo sampling for stars without radial velocity measurements, we show that many stars have a subset of possible orbits that are consistent with one of the known disk features within the Galactic Center.
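The Monte Carlo treatment of missing phase-space coordinates can be sketched as follows: for a star with only a projected position and on-sky velocity, the unmeasured line-of-sight offset and velocity are drawn from assumed priors, and each draw is classified as bound or unbound via its specific orbital energy around Sgr A*. All numbers below (black hole mass, priors, measured values) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

GM = 4.301e-3 * 4.0e6          # G*M in pc (km/s)^2, assuming M_SgrA* = 4e6 Msun
R_proj = 0.1                   # projected separation in pc (illustrative)
v_proj = 150.0                 # measured on-sky speed in km/s (illustrative)

n = 100_000
z = rng.uniform(-0.5, 0.5, n)          # line-of-sight offset in pc (assumed prior)
v_z = rng.uniform(-500.0, 500.0, n)    # line-of-sight velocity in km/s (assumed prior)

r = np.hypot(R_proj, z)                # 3D distance from Sgr A* for each draw
v2 = v_proj**2 + v_z**2                # squared 3D speed for each draw
energy = 0.5 * v2 - GM / r             # specific orbital energy; < 0 means bound
bound_fraction = np.mean(energy < 0.0)
```

Each bound draw yields a candidate orbit whose elements could then be compared against the known disk features, which is the spirit of the analysis described above.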
We analyse $Z^\prime$ contributions to FCNC processes at the one-loop level. In analogy to the CKM matrix we introduce two $3\times3$ unitary matrices $\hat\Delta_d(Z^\prime)$ and $\hat\Delta_u(Z^\prime)$ which are also hermitian. They govern the flavour interactions mediated by $Z^\prime$ between down-quarks and up-quarks, respectively, with $\hat\Delta_d(Z^\prime)=\hat\Delta_u(Z^\prime)\equiv \hat\Delta_L(Z^\prime)$ for left-handed currents due to the unbroken $\text{SU(2)}_L$ gauge symmetry. This assures the suppression of these contributions to all $Z^\prime$ mediated FCNC processes at the one-loop level. As, in contrast to the GIM mechanism, one-loop $Z^\prime$ contributions to flavour observables in $K$ and $B_{s,d}$ systems are governed by down-quark masses, they are ${\cal O}(m^2_b/M^2_{Z^\prime})$ and negligible. With the ${\cal O}(m^2_t/M^2_{Z^\prime})$ suppression they are likely negligible also in the $D$ system. We present an explicit parametrization of $\hat\Delta_L(Z^\prime)$ in terms of two mixing angles and two complex phases that distinguishes it profoundly from the CKM matrix. This framework can be generalized to purely leptonic decays with matrices analogous to the PMNS matrix but profoundly different from it. Interestingly, the breakdown of flavour universality between the first two generations and the third one, both for quark and lepton couplings to $Z^\prime$, is identified as a consequence of $\hat\Delta_L(Z^\prime)$ being hermitian. The importance of the unitarity for both $\hat\Delta_L(Z^\prime)$ and the CKM matrix in the light of the Cabibbo anomaly is emphasized.
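One way to see how a $3\times3$ matrix can be simultaneously unitary and hermitian while carrying two mixing angles and two phases is the reflection construction below, $\hat\Delta = 2 v v^\dagger - \mathbb{1}$ with $v$ a complex unit vector. This is an illustrative construction, not necessarily the explicit parametrization given in the paper.

```python
import numpy as np

def delta_L(theta1, theta2, phi1, phi2):
    """A 3x3 matrix that is both hermitian and unitary, built as the
    reflection 2 v v^dagger - 1 about a complex unit vector v carrying
    two mixing angles and two phases (illustrative construction)."""
    v = np.array([
        np.cos(theta1),
        np.sin(theta1) * np.cos(theta2) * np.exp(1j * phi1),
        np.sin(theta1) * np.sin(theta2) * np.exp(1j * phi2),
    ])
    return 2.0 * np.outer(v, v.conj()) - np.eye(3)

D = delta_L(0.3, 1.1, 0.7, -0.4)
# Hermiticity and unitarity can be verified numerically:
assert np.allclose(D, D.conj().T)
assert np.allclose(D @ D.conj().T, np.eye(3))
```

Because $v$ has unit norm, $P = v v^\dagger$ is a projector, so $\hat\Delta^2 = \mathbb{1}$; hermiticity and unitarity then hold simultaneously, in contrast to the CKM matrix, which is unitary but not hermitian.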
$Z^\prime$ models belong to the ones that can most easily explain the anomalies in $b\to s \mu^+\mu^-$ transitions. However, such an explanation by a single $Z^\prime$ gauge boson, as done in the literature, is severely constrained by $B^0_s-\bar B_s^0$ mixing. Also the recent finding that the mass differences $\Delta M_s$, $\Delta M_d$, the CP-violating parameter $\varepsilon_K$, and the mixing-induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi \phi}$ can be simultaneously well described within the SM without new physics (NP) contributions is a challenge for $Z^\prime$ models with a single $Z^\prime$ contributing at tree level to quark mixing. We point out that including a second $Z^\prime$ in the model makes it possible to eliminate simultaneously the tree-level contributions to the five $\Delta F=2$ observables used in the determination of the CKM parameters while leaving room for NP in $\Delta M_K$ and $\Delta M_D$. The latter can be removed at the price of infecting $\Delta M_s$ or $\Delta M_d$ with NP, which is presently disfavoured. This pattern is transparently seen using the new mixing matrix for $Z^\prime$ interactions with quarks. This strategy allows significant tree-level contributions to $K$, $B_s$ and $B_d$ decays, thereby making it possible to explain the existing anomalies in $b\to s\mu^+\mu^-$ transitions and the anticipated anomaly in the ratio $\varepsilon'/\varepsilon$ much more easily than in $Z^\prime$-Single scenarios. The proposed $Z^\prime$-Tandem mechanism bears some similarities to the GIM mechanism for the suppression of FCNCs in the SM, with the role of the charm quark played here by the second $Z^\prime$. However, it differs from the latter profoundly in that only NP contributions to quark mixing are eliminated at tree level. We briefly discuss the implied flavour patterns in $K$ and $B$ decay observables in this NP scenario.
We demonstrate the importance of quantum jumps in the nonequilibrium evolution of bottomonium states in the quark-gluon plasma. Based on nonrelativistic effective field theory and the open quantum system framework, we evolve the density matrix of color singlet and octet pairs. We show that quantum regeneration of singlet states from octet configurations is necessary to understand experimental results for the suppression of both bottomonium ground and excited states. The values of the heavy-quarkonium transport coefficients used are consistent with recent lattice QCD determinations.
Resistive strip Micromegas (MICRO-MEsh GAseous Structure) detectors provide high spatial resolution for the reconstruction of Minimum Ionizing Particles (MIPs) such as muons, even at square-meter sizes. Micromegas detectors consist of three parallel planar structures: a cathode, a grounded mesh and a segmented anode structure form the detector. Square-meter sizes challenge the high-voltage stability during operation, especially when using the common gas mixture of Ar:CO2 (93:7 vol%) with its low quencher content. To improve the HV stability and to enhance discharge quenching, different gas mixtures have been investigated. A very promising one has a 2% admixture of isobutane, forming the ternary gas Ar:CO2:iC4H10 (93:5:2 vol%). Long-term irradiation studies of both gas mixtures, interleaved with cosmic muon tracking efficiency measurements, have been performed by irradiation with neutrons and gammas from a 10 GBq Am-Be source over a period of two years. The comparison shows a gain increase under Ar:CO2:iC4H10 and a considerably improved HV-stable operation of the detector. Each of the two gas mixtures is examined for performance deterioration, with a focus on pulse height and changes in efficiency.
We report a comprehensive study of the cyanopolyyne chemistry in the prototypical prestellar core L1544. Using the 100m Robert C. Byrd Green Bank Telescope (GBT), we observe 3 emission lines of HC$_3$N, 9 lines of HC$_5$N, 5 lines of HC$_7$N, and 9 lines of HC$_9$N. HC$_9$N is detected for the first time towards the source. The high spectral resolution ($\sim$ 0.05 km s$^{-1}$) reveals double-peaked spectral line profiles, with the redshifted peak a factor of 3-5 brighter. Resolved maps of the core in other molecular tracers indicate that the southern region is redshifted. Therefore, the bulk of the cyanopolyyne emission is likely associated with the southern region of the core, where free carbon atoms are available to form long chains, thanks to the more efficient illumination by the interstellar radiation field. We perform a simultaneous modelling of the HC$_5$N, HC$_7$N, and HC$_9$N lines to investigate the origin of the emission. To enable this analysis, we performed new calculations of the collisional coefficients. The simultaneous fitting indicates a gas kinetic temperature of 5--12 K, a source size of 80$\arcsec$, and a gas density larger than 100 cm$^{-3}$. The HC$_5$N:HC$_7$N:HC$_9$N abundance ratios measured in L1544 are about 1:6:4. We compare our observations with those towards the well-studied starless core TMC-1 and with the available measurements in different star-forming regions. The comparison suggests that a complex carbon chain chemistry is active in other sources and is related to the presence of free gaseous carbon. Finally, we discuss the possible formation and destruction routes in the light of the new observations.
For Majorana fermions the anapole moment is the only allowed electromagnetic multipole moment. In this work we calculate the anapole moment induced at one loop by the Yukawa and gauge interactions of a Majorana fermion, using the pinch technique to ensure the finiteness and gauge invariance of the result. As an archetypal example of a Majorana fermion, we calculate the anapole moment of the lightest neutralino in the Minimal Supersymmetric Standard Model, specifically in the bino, wino and higgsino limits. Finally, we briefly discuss the implications of the anapole moment for the direct detection of dark matter in the form of Majorana fermions.
We study the relation between the metallicities of ionised and neutral gas in star-forming galaxies at z=1-3 using the EAGLE cosmological, hydrodynamical simulations. This is done by constructing a dense grid of sightlines through the simulated galaxies and obtaining the star formation rate- and HI column density-weighted metallicities, Z_{SFR} and Z_{HI}, for each sightline as proxies for the metallicities of ionised and neutral gas, respectively. We find Z_{SFR} > Z_{HI} for almost all sightlines, with their difference generally increasing with decreasing metallicity. The stellar masses of galaxies do not have a significant effect on this trend, but the positions of the sightlines with respect to the galaxy centres play an important role: the difference between the two metallicities decreases when moving towards the galaxy centres, and saturates to a minimum value in the central regions of galaxies, irrespective of redshift and stellar mass. This implies that the mixing of the two gas phases is most efficient in the central regions of galaxies where sightlines generally have high column densities of HI. However, a high HI column density alone does not guarantee a small difference between the two metallicities. In galaxy outskirts, the inefficiency of the mixing of star-forming gas with HI seems to dominate over the dilution of heavy elements in HI through mixing with the pristine gas. We find good agreement between the limited amount of available observational data and the Z_{SFR}-Z_{HI} relation predicted by the EAGLE simulations, but more data is required for stringent tests.
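The sightline-weighted metallicities can be sketched with a toy example; the numbers below are illustrative, not EAGLE data. SFR weighting emphasises dense star-forming cells, which tend to be more enriched, so $Z_{\rm SFR} > Z_{\rm HI}$ falls out naturally:

```python
import numpy as np

def weighted_metallicity(Z_cells, weights):
    """Weighted mean metallicity of the cells along one sightline."""
    return np.sum(weights * Z_cells) / np.sum(weights)

# Toy sightline through a galaxy (illustrative numbers, not EAGLE data):
Z    = np.array([0.5, 1.0, 2.0])   # cell metallicities in solar units
sfr  = np.array([0.0, 0.1, 0.9])   # SFR weights -> ionised gas, dense enriched cells
N_HI = np.array([1.0, 1.0, 1.0])   # HI column-density weights -> neutral gas

Z_SFR = weighted_metallicity(Z, sfr)    # weighted toward the enriched cells
Z_HI  = weighted_metallicity(Z, N_HI)   # the plain HI-weighted mean
```

Here the SFR-weighted value exceeds the HI-weighted one because the star-forming weight concentrates on the most enriched cell, mirroring the $Z_{\rm SFR} > Z_{\rm HI}$ trend found for almost all simulated sightlines.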
The Standard Model (SM) does not by definition contain any new-physics (NP) contributions to any observable, but it contains four CKM parameters which are not predicted by this model. We point out that if these four parameters are determined in a global fit which includes processes that are infected by NP, and therefore by sources outside the SM, the resulting so-called SM contributions to rare decay branching ratios cannot be considered genuine SM contributions. On the other hand, genuine SM predictions, free from the CKM dependence, can be obtained for suitable ratios of the $K$ and $B$ rare decay branching ratios to $\Delta M_s$, $\Delta M_d$ and $|\varepsilon_K|$, all calculated within the SM. These three observables contain by now only small hadronic uncertainties and are already well measured, so that rather precise SM predictions for the ratios in question can be obtained. In this context, a rapid test of NP infection in the $\Delta F=2$ sector is provided by a $|V_{cb}|$-$\gamma$ plot that involves $\Delta M_s$, $\Delta M_d$, $|\varepsilon_K|$ and the mixing-induced CP asymmetry $S_{\psi K_S}$. As this test turns out to be negative with the present hadronic matrix elements, assuming negligible NP infection in the $\Delta F=2$ sector and setting the values of these four observables to the experimental ones allows us to obtain SM predictions for all $K$ and $B$ rare decay branching ratios that are the most accurate to date and, as a byproduct, to obtain the full CKM matrix on the basis of $\Delta F=2$ transitions alone. Using this strategy we obtain SM predictions for 26 branching ratios of rare semileptonic and leptonic $K$ and $B$ decays with a $\mu^+\mu^-$ or $\nu\bar\nu$ pair in the final state. Most interesting turn out to be the anomalies in the low $q^2$ bin in $B^+\to K^+\mu^+\mu^-$ ($5.1\sigma$) and $B_s\to\phi\mu^+\mu^-$ ($4.8\sigma$).
Strong gravitational lensing and microlensing of supernovae (SNe) are emerging as a new probe of cosmology and astrophysics in recent years. We provide an overview of this nascent research field, starting with a summary of the first discoveries of strongly lensed SNe. We describe the use of the time delays between multiple SN images as a way to measure cosmological distances and thus constrain cosmological parameters, particularly the Hubble constant, whose value is currently under heated debate. New methods for measuring the time delays in lensed SNe have been developed, and the sample of lensed SNe from the upcoming Rubin Observatory Legacy Survey of Space and Time (LSST) is expected to provide competitive cosmological constraints. Lensed SNe are also powerful astrophysical probes. We review the use of lensed SNe to constrain SN progenitors, acquire high-z SN spectra through lensing magnifications, infer SN sizes via microlensing, and measure properties of dust in galaxies. The current challenge in the field is the rarity of lensed SNe and the difficulty of finding them. We describe various methods and ongoing efforts to find these spectacular explosions, forecast the properties of the expected sample of lensed SNe from upcoming surveys, particularly the LSST, and summarize the observational follow-up requirements to enable the various scientific studies. We anticipate the upcoming years to be exciting, with a boom in lensed SN discoveries.
The mechanisms that maintain turbulence in the interstellar medium (ISM) are still not identified. This work investigates how we can distinguish between two fundamental driving mechanisms: the accumulated effect of stellar feedback versus the energy injection from galactic scales. We perform a series of numerical simulations describing a stratified star-forming ISM subject to self-consistent stellar feedback. Large-scale external turbulent driving, of various intensities, is added to mimic galactic driving mechanisms. We analyse the resulting column density maps with a technique called Multi-scale non-Gaussian segmentation, which separates the coherent structures and the Gaussian background. This effectively discriminates between the various simulations and is a promising method to understand the ISM structure. In particular, the power spectrum of the coherent structures flattens above 60 pc when turbulence is driven only by stellar feedback. When large-scale driving is applied, the turn-over shifts to larger scales. A systematic comparison with the Large Magellanic Cloud (LMC) is then performed. Only 1 out of 25 regions has a coherent power spectrum that is consistent with the feedback-only simulation. A detailed study of the turn-over scale leads us to conclude that regular stellar feedback is not enough to explain the observed ISM structure on scales larger than 60 pc. Extreme feedback in the form of supergiant shells likely plays an important role but cannot explain all the regions of the LMC. If we assume ISM structure is generated by turbulence, another large-scale driving mechanism is needed to explain the entirety of the observations.
Planets are born from the gas and dust discs surrounding young stars. Energetic radiation from the central star can drive thermal outflows from the disc atmospheres, strongly affecting the evolution of the discs and the nascent planetary system. In this context, several numerical models of varying complexity have been developed to study the process of disc photoevaporation driven by the central star. We describe the numerical techniques, the results and the predictive power of current models, and identify observational tests to constrain them.
We use gradient flow to compute the static force based on a Wilson loop with a chromoelectric field insertion. The result can be compared on one hand to the static force from the numerical derivative of the lattice static energy, and on the other hand to the perturbative calculation, allowing a precise extraction of the $\Lambda_0$ parameter. This study may open the way to gradient flow calculations of correlators of chromoelectric and chromomagnetic fields, which typically arise in the nonrelativistic effective field theory factorization.
We present limits on the spin-independent interaction cross section of dark matter particles with silicon nuclei, derived from data taken with a cryogenic calorimeter with 0.35 g target mass operated in the CRESST-III experiment. A baseline nuclear recoil energy resolution of $(1.36\pm 0.05)$ eV$_{\text{nr}}$, currently the lowest reported for macroscopic particle detectors, and a corresponding energy threshold of $(10.0\pm 0.2)$ eV$_{\text{nr}}$ have been achieved, improving the sensitivity to light dark matter particles with masses below 160 MeV/c$^2$ by a factor of up to 20 compared to previous results. We characterize the observed low energy excess, and we exclude noise triggers and radioactive contaminations on the crystal surfaces as dominant contributions.
In many astrophysical applications, the cost of solving a chemical network represented by a system of ordinary differential equations (ODEs) grows significantly with the size of the network and can often represent a substantial computational bottleneck, particularly in coupled chemo-dynamical models. Although standard numerical techniques and complex solutions tailored to thermochemistry can somewhat reduce the cost, machine learning algorithms have more recently begun to attack this challenge via data-driven dimensional reduction techniques. In this work, we present a new class of methods that take advantage of machine learning techniques to reduce complex data sets (autoencoders), the optimization of multiparameter systems (standard backpropagation), and the robustness of well-established ODE solvers to explicitly incorporate time dependence. This new method allows us to find a compressed and simplified version of a large chemical network in a semiautomated fashion that can be solved with a standard ODE solver, while also enabling interpretability of the compressed, latent network. As a proof of concept, we tested the method on an astrophysically relevant chemical network with 29 species and 224 reactions, obtaining a reduced but representative network with only 5 species and 12 reactions, and an increase in speed by a factor of 65.
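The encode-evolve-decode idea can be illustrated with a deliberately simple linear stand-in: project a 29-species abundance vector into a 5-dimensional latent space, integrate a small latent ODE, and decode back. A real autoencoder is nonlinear and trained by backpropagation; the matrices and latent dynamics here are arbitrary placeholders, not the trained model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_species, n_latent = 29, 5

# Linear stand-ins for the trained encoder/decoder (assumed, not learned):
encode = rng.normal(size=(n_latent, n_species)) / np.sqrt(n_species)
decode = np.linalg.pinv(encode)        # decoder approximated by the pseudo-inverse
A_lat = -0.5 * np.eye(n_latent)        # assumed latent dynamics: simple decay

x0 = rng.uniform(0.0, 1.0, n_species)  # initial chemical abundances (arbitrary)
z0 = encode @ x0                       # compress 29 species -> 5 latent variables
z = z0.copy()

dt, n_steps = 0.01, 1000               # forward Euler stands in for an ODE solver
for _ in range(n_steps):
    z = z + dt * (A_lat @ z)

x_final = decode @ z                   # decode back to the full species space
```

The computational win comes from integrating the 5-dimensional latent system instead of the full 29-species network; in the actual method the latent ODE is itself a small, interpretable chemical network.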
We generalize the next-to-leading order QCD calculations for the decay rates of $h\to gg$ and $h\to\gamma\gamma$ to the case of anomalous couplings of the Higgs boson. We demonstrate how this computation can be done in a consistent way within the framework of an electroweak chiral Lagrangian, based on a systematic power counting. It turns out that no additional coupling parameters arise at NLO in QCD beyond those already present at leading order. The impact of QCD is large for $h\to gg$ and the uncertainties from QCD are significantly reduced at NLO. $h\to\gamma\gamma$ is only mildly affected by QCD; here the NLO treatment practically eliminates the uncertainties. Consequently, our results will allow for an improved determination of anomalous Higgs couplings from these processes. The relation of our framework to a treatment in Standard Model effective field theory is also discussed.
We introduce a PYTHON package that provides simple and unified access to a collection of datasets from fundamental physics research—including particle physics, astroparticle physics, and hadron and nuclear physics—for supervised machine learning studies. The datasets contain hadronic top quarks, cosmic-ray-induced air showers, phase transitions in hadronic matter, and generator-level histories. While public datasets from multiple fundamental physics disciplines already exist, the common interface and the provided reference models simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. We discuss the design and structure and outline how additional datasets can be submitted for inclusion. As a showcase application, we present a simple yet flexible graph-based neural network architecture that can easily be applied to a wide range of supervised learning tasks. We show that our approach reaches performance close to that of dedicated methods on all datasets. To simplify adaptation to various problems, we provide easy-to-follow instructions on how graph-based representations of data structures relevant for fundamental physics can be constructed, and provide code implementations for several of them. Implementations are also provided for our proposed method and all reference algorithms.
We present MUSE spectroscopy, Megacam imaging, and Chandra X-ray data for SPT-CL J0307-6225, a $z = 0.58$ major merging galaxy cluster with a large BCG-SZ centroid separation and a highly disturbed X-ray morphology. The galaxy density distribution shows two main overdensities with separations of 0.144 and 0.017 arcmin to their respective BCGs. We characterize the central regions of the two colliding structures, namely 0307-6225N and 0307-6225S, finding velocity-derived masses of $M_{200,\rm N} = (2.44 \pm 1.41) \times 10^{14}\,$M$_\odot$ and $M_{200,\rm S} = (3.16 \pm 1.88) \times 10^{14}\,$M$_\odot$, with a line-of-sight velocity difference of $|\Delta v| = 342$ km s$^{-1}$. The total dynamically derived mass is consistent with the SZ-derived mass of $(7.63 \pm 1.36) \times 10^{14}\, h_{70}^{-1}\,$M$_\odot$. We model the merger using the Monte Carlo Merger Analysis Code, estimating a merging angle of 36$^{+14}_{-12}$° with respect to the plane of the sky. Comparing with simulations of a merging system with a mass ratio of 1:3, we find that the best scenario is that of an ongoing merger that began 0.96$^{+0.31}_{-0.18}$ Gyr ago. We also characterize the galaxy population using the Hδ and [O II] λ3727 Å lines. We find that most of the emission-line galaxies belong to 0307-6225S, close to the X-ray peak position, with a third of them corresponding to red-cluster-sequence galaxies and the rest to blue galaxies with velocities consistent with recent periods of accretion. Moreover, we suggest that 0307-6225S suffered a previous merger, evidenced by the two equally bright BCGs at the centre with a velocity difference of ~674 km s$^{-1}$.
Simulations of idealized star-forming filaments of finite length typically show core growth that is dominated by two cores, one forming at each end of the filament. These end cores form due to the strongly increasing acceleration at the filament ends, which leads to a sweep-up of material during the filament's collapse along its axis. As this growth mode is typically faster than any other core formation mode in a filament, the end cores usually dominate in mass and density over other cores forming inside the filament. However, observations of star-forming filaments do not show this prevalence of cores at the filament ends. We explore a possible mechanism to slow the growth of the end cores using numerical simulations of simultaneous filament and embedded core formation, in our case a radially accreting filament forming in a finite converging flow. While such a set-up still leads to end cores, they soon begin to move inwards, and a density gradient forms outside the cores through the continued accumulation of material. As a result, the outermost cores are no longer located at the exact ends of the filament, and the density gradient softens the inward gravitational acceleration of the cores. Therefore, the two end cores do not grow as fast as expected and thus do not dominate over other core formation modes in the filament.
Small grains play an essential role in astrophysical processes such as chemistry, radiative transfer, and gas/dust dynamics. The population of small grains is mainly maintained by fragmentation in grain-grain collisions. An accurate treatment of dust fragmentation is therefore required in numerical modelling. However, current algorithms for solving the fragmentation equation suffer from overdiffusion under the conditions of 3D simulations. To tackle this challenge, we developed a discontinuous Galerkin scheme to efficiently solve the non-linear fragmentation equation with a limited number of dust bins.
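For context, the fragmentation equation removes mass from large grains and redistributes it into fragments. The minimal bin-based explicit solver below is illustrative only — it is not the discontinuous Galerkin scheme of the paper, and it assumes a simple mass-halving fragment distribution — but it shows the structural property any such scheme must preserve, namely mass conservation:

```python
import numpy as np

# Illustrative linear fragmentation cascade on a logarithmic mass grid
# m_i = 2^i: each breakup splits a grain of bin i into two equal halves
# in bin i-1, which conserves mass exactly (assumed toy kernel).
nbins = 10
m = 2.0 ** np.arange(nbins)           # bin masses
n = np.zeros(nbins)
n[-1] = 1.0                           # start with all grains in the top bin
rate = np.ones(nbins)
rate[0] = 0.0                         # smallest grains no longer fragment

dt, nsteps = 0.01, 2000
for _ in range(nsteps):
    loss = rate * n                   # grains destroyed per unit time
    gain = np.zeros(nbins)
    gain[:-1] = 2.0 * loss[1:]        # two half-mass fragments per breakup
    n = n + dt * (gain - loss)        # explicit Euler update

total_mass = (n * m).sum()            # stays at the initial value m[-1] = 512
```

With few bins, naive schemes like this one smear the distribution (the overdiffusion mentioned above); higher-order representations within each bin are one way to suppress it.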
The Sunyaev-Zeldovich (SZ) effect is a powerful tool in modern cosmology. With future observations promising ever-improving SZ measurements, the relativistic corrections to the SZ signals from galaxy groups and clusters are increasingly relevant. As such, it is important to understand the differences between three temperature measures: (a) the average relativistic SZ (rSZ) temperature, (b) the mass-weighted temperature relevant for the thermal SZ (tSZ) effect, and (c) the X-ray spectroscopic temperature. In this work, we compare these cluster temperatures, as predicted by the BAHAMAS & MACSIS, ILLUSTRISTNG, MAGNETICUM, and THE THREE HUNDRED PROJECT simulations. Despite the wide range of simulation parameters, we find the SZ temperatures are consistent across the simulations. We estimate a $\simeq 10{{\ \rm per\ cent}}$ level rSZ correction for clusters with Y ≃ 10$^{-4}$ Mpc$^{-2}$. Our analysis confirms a systematic offset between the three temperature measures, with the rSZ temperature $\simeq 20{{\ \rm per\ cent}}$ larger than the other measures and diverging further at higher redshifts. We demonstrate that these measures depart from simple self-similar evolution and explore how they vary with the defined radius of haloes. We investigate how different feedback prescriptions and resolutions affect the observed temperatures, and find the SZ temperatures to be rather insensitive to these details. The agreement between simulations indicates an exciting avenue for observational and theoretical exploration in determining the extent of relativistic SZ corrections. We provide multiple simulation-based fits to the scaling relations for use in future SZ modelling.
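The offset between temperature measures can be illustrated with toy weightings: an SZ-flux-weighted (y-weighted) temperature is mathematically guaranteed to exceed the mass-weighted one whenever the gas is multi-temperature (by the Cauchy-Schwarz inequality). A sketch with made-up particle data, not the simulation analysis of the paper:

```python
import numpy as np

# Toy gas "particles" with scattered temperatures (illustrative values).
rng = np.random.default_rng(0)
mass = rng.uniform(0.5, 1.5, 1000)                    # particle masses (arb. units)
temp = rng.lognormal(mean=1.0, sigma=0.3, size=1000)  # temperatures (keV)

# Mass-weighted temperature, relevant for the tSZ effect:
T_mass_weighted = np.sum(mass * temp) / np.sum(mass)

# y-weighting (~ mass * T) up-weights hot gas, so it comes out higher,
# mirroring the systematic offset of the rSZ temperature:
T_y_weighted = np.sum(mass * temp ** 2) / np.sum(mass * temp)
```

The size of the gap grows with the spread of the temperature distribution, which is one way to see why the offset diverges in more thermally complex (e.g. higher-redshift, merging) haloes.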
Disc winds and planet formation are considered to be two of the most important mechanisms that drive the evolution and dispersal of protoplanetary discs and in turn define the environment in which planets form and evolve. While both have been studied extensively in the past, we combine them into one model by performing three-dimensional radiation-hydrodynamic simulations of giant-planet-hosting discs that are undergoing X-ray photoevaporation, with the goal of analysing the interactions between both mechanisms. In order to study the effect on observational diagnostics, we produce synthetic observations of commonly used wind-tracing forbidden emission lines with detailed radiative transfer and photoionization calculations. We find that a sufficiently massive giant planet carves a gap in the gas disc that is deep enough to affect the structure and kinematics of the pressure-driven photoevaporative wind significantly. This effect can be strong enough to be visible in the synthetic high-resolution observations of some of our wind diagnostic lines, such as the [O I] 6300 Å or [S II] 6730 Å lines. When the disc is observed at inclinations around 40° and higher, the spectral line profiles may exhibit a peak in the redshifted part of the spectrum, which cannot easily be explained by simple wind models alone. Moreover, massive planets can induce asymmetric substructures within the disc and the photoevaporative wind, giving rise to temporal variations of the line profiles that can be strong enough to be observable on time-scales of less than a quarter of the planet's orbital period.
We explore the potential of our novel triaxial modelling machinery in recovering the viewing angles, the shape, and the orbit distribution of galaxies by using a high-resolution N-body merger simulation. Our modelling technique includes several recent advancements. (i) Our new triaxial deprojection algorithm shape3d is able to significantly shrink the range of possible orientations of a triaxial galaxy and therefore to constrain its shape relying only on photometric information. It also allows us to probe degeneracies, i.e. to recover different deprojections at the same assumed orientation. With this method we can constrain the intrinsic shape of the N-body simulation, i.e. the axis ratios p = b/a and q = c/a, with Δp and Δq ≲ 0.1 using only photometric information. The typical accuracy of the viewing-angle reconstruction is 15°-20°. (ii) Our new triaxial Schwarzschild code smart exploits the full kinematic information contained in the entire non-parametric line-of-sight velocity distributions along with a 5D orbital sampling in phase space. (iii) We use a new generalized Akaike information criterion AICp to optimize the smoothing and to select the best-fitting model, avoiding potential biases in purely χ2-based approaches. With our deprojected densities, we recover the correct orbital structure and anisotropy parameter β with Δβ ≲ 0.1. These results are valid regardless of the tested orientation of the simulation and suggest that, despite the known intrinsic photometric and kinematic degeneracies, the advanced methods described above make it possible to recover the shape and the orbital structure of triaxial bodies with unprecedented accuracy.
Several tentative associations between high-energy neutrinos and astrophysical sources have been recently reported, but a conclusive identification of these potential neutrino emitters remains challenging. We explore the use of Monte Carlo simulations of source populations to gain deeper insight into the physical implications of proposed individual source-neutrino associations. In particular, we focus on the IC170922A-TXS 0506+056 observation. Assuming a null model, we find a 7.6% chance of mistakenly identifying coincidences between γ-ray flares from blazars and neutrino alerts in 10-year surveys. We confirm that a blazar-neutrino connection based on the γ-ray flux is required to find a low chance coincidence probability and, therefore, a significant IC170922A-TXS 0506+056 association. We then assume this blazar-neutrino connection for the whole population and find that the ratio of neutrino to γ-ray fluxes must be ≲10−2 in order not to overproduce the total number of neutrino alerts seen by IceCube. For the IC170922A-TXS 0506+056 association to make sense, we must either accept this low flux ratio or suppose that only some rare sub-population of blazars is capable of high-energy neutrino production. For example, if we consider neutrino production only in blazar flares, we expect a flux ratio between 10−3 and 10−1 to be consistent with a single coincident observation of a neutrino alert and a flaring γ-ray blazar. These constraints should be interpreted in the context of the likelihood models used to find the IC170922A-TXS 0506+056 association, which assume a fixed power-law neutrino spectrum of E−2.13 for all blazars.
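The chance-coincidence logic can be sketched with a toy Monte Carlo: given an assumed positional-match probability and flare duty cycle (illustrative numbers, not those used in the paper), the probability of at least one spurious alert-flare coincidence follows directly:

```python
import numpy as np

# Toy Monte Carlo estimate of the chance probability of a spurious
# alert-flare coincidence. All rates below are assumed for illustration;
# they are NOT the values of the population simulation in the paper.
rng = np.random.default_rng(42)

n_alerts = 50          # neutrino alerts over a 10-year survey (assumed)
p_positional = 0.01    # chance an alert direction overlaps some blazar (assumed)
duty_cycle = 0.1       # fraction of time a matched blazar is flaring (assumed)
n_trials = 100_000     # simulated surveys

# A coincidence requires the alert to be positionally matched AND the
# matched blazar to be flaring at the alert time.
p_single = p_positional * duty_cycle
coincidences = rng.binomial(n_alerts, p_single, size=n_trials)
p_at_least_one = (coincidences >= 1).mean()
# Analytic cross-check: 1 - (1 - p_single)**n_alerts ≈ 0.049
```

A full population study replaces the fixed probabilities with distributions of blazar fluxes, flare histories, and detector response, but the bookkeeping is the same.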
Recently, two new families of non-linear massive electrodynamics have been proposed: Proca-Nuevo and Extended Proca-Nuevo. We explicitly show that both families are irremediably ghostful in two dimensions. Our calculations indicate the need to revisit the classical consistency of (Extended) Proca-Nuevo in higher dimensions before these settings can be regarded as ghostfree.
Neutron stars (NSs) and black holes (BHs) are born when the final collapse of the stellar core terminates the lives of stars more massive than about 9 Msun. This can trigger the powerful ejection of a large fraction of the star's material in a core-collapse supernova (CCSN), whose extreme luminosity is energized by the decay of radioactive isotopes such as 56Ni and 56Co. When evolving in close binary systems, the compact relics of such infernal catastrophes spiral towards each other on orbits gradually decaying by gravitational-wave emission. Ultimately, the violent collision of the two components forms a more massive, rapidly spinning remnant, again accompanied by the ejection of considerable amounts of matter. These merger events can be observed by high-energy bursts of gamma rays with afterglows and electromagnetic transients called kilonovae, which radiate the energy released in radioactive decays of freshly assembled rapid neutron-capture elements. By means of their mass ejection and the nuclear and neutrino reactions taking place in the ejecta, both CCSNe and compact object mergers (COMs) are prominent sites of heavy-element nucleosynthesis and play a central role in the cosmic cycle of matter and the chemical enrichment history of galaxies. The nuclear equation of state (EoS) of NS matter, from neutron-rich to proton-dominated conditions and with temperatures ranging from about zero to ~100 MeV, is a crucial ingredient in these astrophysical phenomena. It determines their dynamical processes, their remnant properties even at the level of deciding between NS or BH, and the properties of the associated emission of neutrinos, whose interactions govern the thermodynamic conditions and the neutron-to-proton ratio for nucleosynthesis reactions in the innermost ejecta. This chapter discusses corresponding EoS dependent effects of relevance in CCSNe as well as COMs. (slightly abridged)
The recently developed B-Mesogenesis scenario predicts decays of B mesons into a baryon and a hypothetical dark antibaryon Ψ. We suggest a method to calculate the amplitude of the simplest exclusive decay mode B+ → pΨ. Considering two models of B-Mesogenesis, we obtain the B → p hadronic matrix elements by applying QCD light-cone sum rules with the proton light-cone distribution amplitudes. We estimate the B+ → pΨ decay width as a function of the mass and effective coupling of the dark antibaryon.
We investigate the formation and evolution of 'primordial' dusty rings occurring in the inner regions of protoplanetary discs, with the help of long-term, coupled dust-gas, magnetohydrodynamic simulations. The simulations are global and start from the collapse phase of the parent cloud core, while the dead zone is calculated via an adaptive α formulation by taking into account the local ionization balance. The evolution of the dusty component includes its growth and back-reaction onto the gas. Previously, using simulations with only a gas component, we showed that dynamical rings form at the inner edge of the dead zone. We find that when dust evolution, as well as magnetic field evolution in the flux-freezing limit, are included, the dusty rings formed are more numerous and span a larger radial extent in the inner disc, while the dead zone is more robust and persists for a much longer time. We show that these dynamical rings concentrate enough dust mass to become streaming unstable, which should result in rapid planetesimal formation even in the embedded phases of the system. The episodic outbursts caused by the magnetorotational instability have a significant impact on the evolution of the rings. The outbursts drain the inner disc of grown dust; however, the period between bursts is sufficiently long for planetesimal growth via the streaming instability. The dust mass contained within the rings is large enough to ultimately produce planetary systems via the core accretion scenario. Low-mass systems rarely undergo outbursts, and thus the conditions around such stars can be especially conducive to planet formation.
The Mini-EUSO telescope was launched to the International Space Station on August 22nd, 2019 to observe from the ISS orbit (∼400 km altitude) various phenomena occurring in the Earth's atmosphere through a UV-transparent window located in the Russian Zvezda Module. Mini-EUSO is based on a set of two Fresnel lenses of 25 cm diameter each and a focal plane of 48 × 48 pixels, for a total field of view of 44°. Until July 2021, Mini-EUSO performed a total of 41 data acquisition sessions, obtaining UV images of the Earth in the 290-430 nm band with temporal and spatial resolution on the ground of 2.5 μs and 6.3 × 6.3 km$^2$, respectively. The data acquisition was performed with a 2.5 μs sampling rate, using a dedicated trigger looking for signals with a typical duration of tens of μs.
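The quoted numbers are mutually consistent: a 44° field of view shared by 48 pixels per side, projected from the ~400 km ISS altitude, gives a per-pixel ground footprint of about 6.4 km:

```python
import math

# Cross-check of the quoted Mini-EUSO ground resolution from the other
# numbers in the abstract (small-angle projection, nadir-pointing).
altitude_km = 400.0
fov_deg = 44.0
pixels_per_side = 48

pixel_fov_deg = fov_deg / pixels_per_side            # ~0.92 deg per pixel
footprint_km = 2 * altitude_km * math.tan(math.radians(pixel_fov_deg / 2))
# footprint_km ≈ 6.4 km, consistent with the quoted 6.3 × 6.3 km^2 pixel
```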
In the present paper the analysis of the performance of the 2.5 μs trigger logic is presented, with a focus on the method used for the analysis and the categories of triggered events. The expected functioning of the trigger logic has been confirmed, with the trigger rate on spurious events remaining within the requirements in nominal background conditions. The trigger logic detected several different phenomena, including lightning strikes, elves, ground-based flashers, and events with EAS-like characteristics.
We report the detection of the ground-state rotational emission of ammonia, ortho-NH3 (J$_K$ = 1$_0$ → 0$_0$), in a gravitationally lensed, intrinsically hyperluminous star-bursting galaxy at z = 2.6. The integrated line profile is consistent with other molecular and atomic emission lines which have resolved kinematics well modelled by a 5 kpc-diameter rotating disc. This implies that the gas responsible for the NH3 emission broadly traces the global molecular reservoir, but is likely distributed in pockets of high density (n ≳ 5 × 10$^4$ cm$^{-3}$). With a luminosity of 2.8 × 10$^6$ L⊙, the NH3 emission represents 2.5 × 10$^{-7}$ of the total infrared luminosity of the galaxy, comparable to the ratio observed in the Kleinmann-Low nebula in Orion and consistent with sites of massive star formation in the Milky Way. If $L_{\rm NH_3}/L_{\rm IR}$ serves as a proxy for the 'mode' of star formation, this hints that the nature of star formation in extreme starbursts in the early Universe is similar to that of Galactic star-forming regions, with a large fraction of the cold interstellar medium in this state, plausibly driven by a storm of violent disc instabilities in the gas-dominated disc. This supports the 'full of Orions' picture of star formation in the most extreme galaxies seen close to the peak epoch of stellar mass assembly.
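The quoted line-to-infrared ratio implies a total infrared luminosity in the hyperluminous regime, as a quick check shows:

```python
# Consistency check of the quoted luminosities: if the NH3 line carries
# 2.5e-7 of the total infrared output, the implied L_IR places the
# source above the hyperluminous threshold of 1e13 L_sun.
L_NH3 = 2.8e6            # ammonia line luminosity [L_sun]
ratio = 2.5e-7           # quoted L_NH3 / L_IR
L_IR = L_NH3 / ratio     # ≈ 1.1e13 L_sun
```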
With the advent of high-cadence, all-sky automated surveys, supernovae (SNe) are now discovered closer than ever to their dates of explosion. However, young pre-maximum-light follow-up spectra of Type Ic supernovae (SNe Ic), probably arising from the most stripped massive stars, remain rare despite their importance. In this paper we present a set of 49 optical spectra observed with the Las Cumbres Observatory through the Global Supernova Project for 6 SNe Ic, including a total of 17 pre-maximum spectra, of which 8 are observed more than a week before V-band maximum light. This dataset increases the total number of publicly available pre-maximum-light SN Ic spectra by 25%, and we provide publicly available SNID templates that will significantly aid in the fast identification of young SNe Ic in the future. We present detailed analysis of these spectra, including Fe II 5169 velocity measurements, O I 7774 line strengths, and continuum shapes. We compare our results to published samples of stripped supernovae in the literature and find one SN in our sample that stands out. SN 2019ewu has a unique combination of features for a SN Ic: an extremely blue continuum, high absorption velocities, a P Cygni-shaped feature almost 2 weeks before maximum light that TARDIS radiative transfer modeling attributes to C II rather than H$\alpha$, and a weak or non-existent O I 7774 absorption feature until maximum light.
Comparing Galactic chemical evolution models to the observed elemental abundances in the Milky Way, we show that neutron star mergers can be a leading r-process site only if such mergers have very short delay times and/or beneficial masses of the compact objects at low metallicities. Namely, black hole-neutron star mergers, depending on the black-hole spins, can play an important role in the early chemical enrichment of the Milky Way. We also show that none of the binary population synthesis models used in this paper, i.e., COMPAS, StarTrack, Brussels, ComBinE, and BPASS, can currently reproduce the elemental abundance observations. The predictions are problematic not only for neutron star mergers, but also for Type Ia supernovae, which may point to shortcomings in binary evolution models.
Planet formation is a multi-scale process in which the coagulation of $\mathrm{\mu m}$-sized dust grains in protoplanetary disks is strongly influenced by hydrodynamic processes on scales of astronomical units ($\approx 1.5\times 10^8 \,\mathrm{km}$). Studies are therefore dependent on subgrid models to emulate the microphysics of dust coagulation on top of a large-scale hydrodynamic simulation. Numerical simulations that include the relevant physical effects are complex and computationally expensive. Here, we present a fast and accurate learned effective model for dust coagulation, trained on data from high-resolution numerical coagulation simulations. Our model captures details of the dust coagulation process that were so far not tractable with other dust coagulation prescriptions of similar computational efficiency.
Gravitational time delays provide a powerful one-step measurement of H0, independent of all other probes. One key ingredient in time delay cosmography is high-accuracy lens models. These are currently expensive to obtain, both in terms of computing and investigator time (10$^{5-6}$ CPU hours and ~0.5-1 year, respectively). Major improvements in modeling speed are therefore necessary to exploit the large number of lenses that are forecast to be discovered over the current decade. In order to bypass this roadblock, we develop an automated modeling pipeline and apply it to a sample of 31 lens systems observed by the Hubble Space Telescope in multiple bands. Our automated pipeline can derive models for 30/31 lenses with a few hours of human time and <100 CPU hours of computing time for a typical system. For each lens, we provide measurements of key parameters and predictions of magnification as well as time delays for the multiple images. We characterize the cosmography-readiness of our models using the stability of differences in Fermat potential (proportional to time delay) with respect to modeling choices. We find that for 10/30 lenses our models are cosmography grade or nearly cosmography grade (<3 per cent and 3-5 per cent variations, respectively). For 6/30 lenses the models are close to cosmography grade (5-10 per cent). These results utilize informative priors and will need to be confirmed by further analysis. However, they are also likely to improve by extending the pipeline modeling sequence and options. In conclusion, we show that uniform cosmography-grade modeling of large strong lens samples is within reach.
We take a major step towards computing $D$-dimensional one-loop amplitudes in general gauge theories, compatible with the principles of unitarity and the color-kinematics duality. For $n$-point amplitudes with either supersymmetry multiplets or generic non-supersymmetric matter in the loop, simple all-multiplicity expressions are obtained for the maximal cuts of kinematic numerators of $n$-gon diagrams. At $n=6,7$ points with maximal supersymmetry, we extend the cubic-diagram numerators to encode all contact terms, and thus solve the long-standing problem of \emph{simultaneously} realizing the following properties: color-kinematics duality, manifest locality, optimal power counting of loop momenta, quadratic rather than linearized Feynman propagators, compatibility with double copy as well as all graph symmetries. Color-kinematics dual representations with similar properties are presented in the half-maximally supersymmetric case at $n=4,5$ points. The resulting gauge-theory integrands and their supergravity counterparts obtained from the double copy are checked to reproduce the expected ultraviolet divergences.
Models of planetary core growth by either planetesimal or pebble accretion are traditionally disconnected from the models of dust evolution and formation of the first gravitationally bound planetesimals. The state-of-the-art models typically start with massive planetary cores already present. We aim to study the formation and growth of planetary cores in a pressure bump, motivated by the annular structures observed in protoplanetary disks, starting with sub-micron-sized dust grains. We connect the models of dust coagulation and drift, planetesimal formation in the streaming instability, gravitational interactions between planetesimals, pebble accretion, and planet migration into one uniform framework. We find that planetesimals forming early at the massive end of the size distribution grow quickly, dominantly by pebble accretion. These few massive bodies grow on timescales of ~100,000 years and stir the planetesimals formed later, preventing the emergence of further planetary cores. Additionally, a migration trap occurs, allowing for retention of the growing cores. Pressure bumps are favourable locations for the emergence and rapid growth of planetary cores by pebble accretion, as the dust density and grain size are increased and the pebble accretion onset mass is reduced compared to a smooth-disk model.
We extend the publicly available quantumfdtd code, originally intended for solving the time-independent three-dimensional Schrödinger equation via the finite-difference time-domain (FDTD) method and for extracting the ground, first, and second excited states. We (a) include the case of the relativistic Schrödinger equation and (b) add two optimized FFT-based kinetic energy terms for the non-relativistic case. All three new kinetic terms are computed using the Fast Fourier Transform (FFT). We release the resulting code as version 3 of quantumfdtd. Additionally, the code now supports arbitrary external file-based potentials and the option to project out distinct parity eigenstates from the solutions. Our primary target is quark models used for phenomenological descriptions of QCD bound states, which are described by the three-dimensional Schrödinger equation; however, the code is applicable to any field where solving either the non-relativistic or the relativistic three-dimensional Schrödinger equation is required.
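As a one-dimensional illustration of the FFT-based kinetic-term idea (a sketch only — quantumfdtd itself solves the 3D problem and is not written in Python), imaginary-time evolution with a split-step Fourier method relaxes a trial wavefunction to the ground state; for the harmonic oscillator the energy should converge to 0.5 in natural units:

```python
import numpy as np

# Imaginary-time split-step Fourier relaxation to the ground state of a
# 1D harmonic oscillator (hbar = m = 1). The kinetic term is applied
# exactly in momentum space via FFT, mirroring the FFT-based kinetic
# terms described above; the potential is applied in position space.
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                       # harmonic potential

dt = 0.01
psi = np.exp(-(x - 1.0) ** 2)        # arbitrary trial state
for _ in range(5000):
    psi = psi * np.exp(-0.5 * dt * V)                       # half potential step
    psi = np.fft.ifft(np.exp(-dt * k**2 / 2) * np.fft.fft(psi))  # kinetic step
    psi = psi * np.exp(-0.5 * dt * V)                       # half potential step
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))  # renormalize

# Ground-state energy <T> + <V>; should approach the exact value 0.5.
dx = L / N
T = np.sum(np.conj(psi) * np.fft.ifft(k**2 / 2 * np.fft.fft(psi))).real * dx
E = T + np.sum(np.abs(psi) ** 2 * V) * dx
```

Excited states can then be obtained by projecting out the converged lower states between iterations, analogous to the code's extraction of the first and second excited states.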
We describe a new table-top electrostatic storage ring concept for $30$ keV polarized ions under the frozen-spin condition. The device will ultimately be capable of measuring magnetic fields with a resolution of 10$^{-21}$ T with sub-mHz bandwidth. With the possibility to store different kinds of ions or ionic molecules and access to prepare and probe states of the systems using lasers and SQUIDs, it can be used to search for electric dipole moments (EDMs) of electrons and nucleons, as well as axion-like particle dark matter and dark photon dark matter. Its sensitivity potential stems from several hours of storage time, comparably long spin coherence times, and the possibility to trap up to 10$^9$ particles in bunches with possibly different state preparations for differential measurements. As a dark matter experiment, it is most sensitive in the mass range of 10$^{-10}$ to 10$^{-19}$ eV, where it can potentially probe couplings orders of magnitude below current and proposed laboratory experiments.
Supernovae (SNe) that have been multiply-imaged by gravitational lensing are rare and powerful probes for cosmology. Each detection is an opportunity to develop the critical tools and methodologies needed as the sample of lensed SNe increases by orders of magnitude with the upcoming Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope. The latest such discovery is of the quadruply-imaged Type Ia SN 2022qmx (aka "SN Zwicky"; Goobar et al. 2022) at z = 0.3544. SN Zwicky was discovered by the Zwicky Transient Facility (ZTF) in spatially unresolved data. Here we present follow-up Hubble Space Telescope observations of SN Zwicky, the first from the multi-cycle "LensWatch" program (www.lenswatch.org). We measure photometry for each of the four images of SN Zwicky, which are resolved in three WFC3/UVIS filters (F475W, F625W, F814W) but unresolved with WFC3/IR F160W, and produce an analysis of the lensing system using a variety of independent lens modeling methods. We find consistency between time delays estimated with the single epoch of HST photometry and the lens model predictions constrained through the multiple image positions, with both inferring time delays of <1 day. Our lens models converge to an Einstein radius of 0.168$^{+0.009}_{-0.005}$ arcsec, the smallest yet seen in a lensed SN. The "standard candle" nature of SN Zwicky provides magnification estimates independent of the lens modeling that are brighter by ~1.5 mag and ~0.8 mag for two of the four images, suggesting significant microlensing and/or additional substructure beyond the flexibility of our image-position mass models.
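The magnitude offsets quoted for the standard-candle magnification test translate into flux ratios via the usual Pogson relation:

```python
# Converting the quoted magnitude offsets into flux ratios: images that
# are brighter by ~1.5 and ~0.8 mag than the macro-model prediction
# correspond to extra (de)magnification factors of roughly 4x and 2x.
def mag_to_flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference (Pogson)."""
    return 10 ** (delta_mag / 2.5)

ratio_a = mag_to_flux_ratio(1.5)   # ≈ 3.98
ratio_b = mag_to_flux_ratio(0.8)   # ≈ 2.09
```

Factors of a few in flux are well within the range microlensing by stars in the lens galaxy can produce, which is why it is the favoured explanation above.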
We study the effects of light QCD axions on the stellar configuration of white dwarfs. At finite baryon density, the non-derivative coupling of the axion to nucleons displaces the axion from its in-vacuum minimum which implies a reduction of the nucleon mass. This dramatically alters the composition of stellar remnants. In particular, the modifications of the mass-radius relationship of white dwarfs allow us to probe large regions of unexplored axion parameter space without requiring it to be a significant fraction of dark matter.
The last two decades have witnessed the discovery of a myriad of new and unexpected hadrons. The future holds more surprises for us, thanks to new-generation experiments. Understanding the signals and determining the properties of the states requires a parallel theoretical effort. To make full use of available and forthcoming data, a careful amplitude modeling is required, together with a sound treatment of the statistical uncertainties, and a systematic survey of the model dependencies. We review the contributions made by the Joint Physics Analysis Center to the field of hadron spectroscopy.
We present a calculation of all matching coefficients for $N$-jettiness beam functions at next-to-next-to-next-to-leading order (N$^3$LO) in perturbative quantum chromodynamics (QCD). Our computation is performed starting from the respective collinear splitting kernels, which we integrate using the axial gauge. We use reverse unitarity to map the relevant phase-space integrals to loop integrals, which allows us to employ multi-loop techniques including integration-by-parts identities and differential equations. We find a canonical basis and use an algorithm to establish non-trivial partial fraction relations among the resulting master integrals, which allows us to reduce their number substantially. By use of regularity conditions, we express all necessary boundary constants in terms of an independent set, which we compute by direct integration of the corresponding integrals in the soft limit. In this way, we provide an entirely independent calculation of the matching coefficients which were previously computed in arXiv:2006.03056.
The CRESST experiment employs cryogenic calorimeters for the sensitive measurement of nuclear recoils induced by dark matter particles. The recorded signals need to undergo a careful cleaning process to avoid wrongly reconstructed recoil energies caused by pile-up and read-out artefacts. We frame this process as a time series classification task and propose to automate it with neural networks. With a data set of over one million labeled records from 68 detectors, recorded between 2013 and 2019 by CRESST, we test the capability of four commonly used neural network architectures to learn the data cleaning task. Our best performing model achieves a balanced accuracy of 0.932 on our test set. We show on an exemplary detector that about half of the wrongly predicted events are in fact wrongly labeled events, and a large share of the remaining ones have a context-dependent ground truth. We furthermore evaluate the recall and selectivity of our classifiers with simulated data. The results confirm that the trained classifiers are well suited for the data cleaning task.
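Balanced accuracy, the metric quoted above, is the unweighted mean of per-class recalls, which avoids rewarding a classifier that simply favours the majority class. A minimal implementation with toy labels (illustrative only, not CRESST data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls -- robust under class imbalance,
    e.g. when clean records vastly outnumber artefact-contaminated ones."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy example: 90 "clean" records (label 0), 10 artefacts (label 1).
y_true = np.array([0] * 90 + [1] * 10)
y_pred = y_true.copy()
y_pred[:9] = 1            # misclassify 9 clean records
y_pred[90:95] = 0         # miss 5 artefacts
# recall(0) = 81/90 = 0.9, recall(1) = 5/10 = 0.5 -> balanced accuracy 0.7
```

Plain accuracy on the same toy labels would be 0.86, flattering the classifier despite it missing half the artefacts; the balanced score exposes that.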
The origin of the elements and isotopes of cosmic material is a critical aspect of understanding the evolution of the universe. Nucleosynthesis typically requires physical conditions of high temperature and density. These are found in the Big Bang, in the interiors of stars, and in explosions with their compressional shocks and high neutrino and neutron fluxes. Many different tools are available to disentangle the composition of cosmic matter: material of extraterrestrial origin such as cosmic rays, meteorites, stardust grains, and lunar and terrestrial sediments, as well as astronomical observations across the electromagnetic spectrum. Understanding cosmic abundances and their evolution requires combining such measurements with astrophysical and nuclear theory and laboratory experiments, and exploiting additional cosmic messengers such as neutrinos and gravitational waves. Recent years have seen significant progress in almost all of these fields, which is presented in this review.
The Sun and the solar system are our reference system for abundances of elements and isotopes. Many direct and indirect methods are employed to establish a refined abundance record from the time when the Sun and the Earth were formed. Indications for nucleosynthesis in the local environment when the Sun was formed are derived from meteoritic material and the inclusion of radioactive atoms in deep-sea sediments. Spectroscopy at many wavelengths and the neutrino flux from the hydrogen fusion processes in the Sun have established a refined model of how nuclear energy production shapes stars. Models are required to explore nuclear fusion of heavier elements. These stellar evolution calculations have been confirmed by observations of nucleosynthesis products in the ejecta of stars and supernovae, as captured by stardust grains and by characteristic lines in spectra seen from these objects. One of the successes has been to directly observe γ rays from radioactive material synthesised in stellar explosions, which fully supports the astrophysical models. Another has been the observation of the radioactive afterglow and characteristic heavy-element spectrum from a neutron-star merger, confirming the neutron-rich environments encountered in such rare explosions. The ejecta material captured by Earth over millions of years in sediments and identified through characteristic radio-isotopes suggests that nearby nucleosynthesis occurred in recent history, with further indications for sites of specific nucleosynthesis. Together with stardust and diffuse γ rays from radioactive ejecta, these help to piece together how cosmic materials are transported in interstellar space and recycled into and between generations of stars. Our description of cosmic compositional evolution needs such observational support, as it rests on several assumptions that appear challenged by the recent recognition that violent events are common during the evolution of a galaxy.
This overview presents the flow of cosmic matter and the various sites of nucleosynthesis, as understood from combining many techniques and observations, towards the current knowledge of how the universe is enriched with elements.
The disappearance of the accretion disc in low-luminosity active galactic nuclei (LLAGN) leaves behind a faint optical nuclear continuum whose nature has been largely debated, mainly due to serious observational limitations in the IR to UV range. We combine multi-wavelength sub-arcsecond resolution observations -- able to isolate the genuine nuclear continuum -- with nebular lines in the mid-IR, to indirectly probe the shape of the extreme UV continuum. We found that 8 of the nearest prototype LLAGN are compatible with pure compact jet emission (self-absorbed synchrotron plus the associated self-Compton component) over more than ten orders of magnitude in frequency. When compared with typical radio galaxies, the LLAGN continua show two peculiarities: $i)$ a very steep spectral slope in the IR-to-optical/UV range ($-3.7 < \alpha_0 < -1.3$; $F_\nu \propto \nu^{\alpha_0}$); and $ii)$ a very high turnover frequency ($0.2-30\, \rm{THz}$; $1.3\,\rm{mm}-10\,\rm{\mu m}$). These attributes can be explained if the synchrotron continuum is mainly dominated by thermalised particles at the jet base or corona with considerably high temperatures, whereas only a small fraction of the energy ($\sim 20\%$) would be distributed along the high-energy power-law tail of accelerated particles. On the other hand, the nebular gas excitation in LLAGN is in agreement with photo-ionisation from inverse Compton radiation ($\alpha_{\rm x} \sim -0.7$), which would dominate the nuclear continuum shortwards of $\sim 3000$ Å. Our results suggest that the LLAGN continuum can be dominated at all wavelengths by undeveloped jets, powered by a thermalised particle distribution, similar to the behaviour observed in compact jets of quiescent black hole X-ray binaries. This has important implications in the context of galaxy evolution, since LLAGN may represent a major but underestimated source of kinetic feedback in galaxies.
Polarization of the cosmic microwave background (CMB) can probe new parity-violating physics such as cosmic birefringence (CB), which requires exquisite control over instrumental systematics. The non-idealities of the half-wave plate (HWP) represent a source of systematics when used as a polarization modulator. We study their impact on the CMB angular power spectra, which is partially degenerate with CB and miscalibration of the polarization angle. We use full-sky beam convolution simulations including HWP to generate mock noiseless time-ordered data, process them through a bin averaging map-maker, and calculate the power spectra including $TB$ and $EB$ correlations. We also derive analytical formulae which accurately model the observed spectra. For our choice of HWP parameters, the HWP-induced angle amounts to a few degrees, which could be misinterpreted as CB. Accurate knowledge of the HWP is required to mitigate this. Our simulation and analytical formulae will be useful for deriving requirements for the accuracy of HWP calibration.
The large total infrared (TIR) luminosities ($L_{\rm TIR} \gtrsim 10^{12}~L_\odot$) observed in $z \sim 6$ quasars are generally converted into high star formation rates ($SFR \gtrsim 10^2~M_\odot$ yr$^{-1}$) of their host galaxies. However, these estimates rely on the assumption that dust heating is dominated by stellar radiation, neglecting the contribution from the central Active Galactic Nuclei (AGN). We test the validity of this assumption by combining cosmological hydrodynamic simulations with radiative transfer calculations. We find that, when AGN radiation is included in the simulations, the mass (luminosity)-weighted dust temperature in the host galaxies increases from $T\approx 50$ K ($T \approx 70$ K) to $T\approx 80$ K ($T\approx 200$ K), suggesting that AGN effectively heat the bulk of dust in the host galaxy. We compute the AGN-host galaxy $SFR$ from the synthetic spectral energy distribution by using standard $SFR - L_{\rm TIR}$ relations, and compare the results with the "true" values in the simulations. We find that the $SFR$ is overestimated by a factor of $\approx 3$ ($\gtrsim 10$) for AGN bolometric luminosities of $L_{\rm bol} \approx 10^{12}~L_\odot$ ($\gtrsim 10^{13}~ L_\odot$), implying that the star formation rates of $z\sim 6$ quasars can be overestimated by over an order of magnitude.
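As a reading aid, the bias described above can be sketched with a toy calculation. The linear SFR–$L_{\rm TIR}$ coefficient below is a commonly quoted Kennicutt-type scaling, and the luminosity split between AGN and stellar heating is purely illustrative; neither value is taken from the simulations.

```python
# Toy illustration of how AGN dust heating inflates SFR estimates from L_TIR.
# The linear SFR-L_TIR coefficient (Kennicutt 1998, Salpeter IMF) is a commonly
# quoted scaling; the luminosity numbers are illustrative assumptions.

KENNICUTT = 1.7e-10  # SFR [Msun/yr] per L_TIR [Lsun]

def sfr_from_tir(l_tir_lsun):
    """Infer a star formation rate from a total-IR luminosity in Lsun."""
    return KENNICUTT * l_tir_lsun

# Suppose the AGN powers most of the dust heating, so only a fraction of
# L_TIR actually traces star formation (illustrative numbers):
l_tir_observed = 1.0e12       # total observed IR luminosity [Lsun]
stellar_fraction = 1.0 / 3.0  # share of L_TIR powered by stars

sfr_naive = sfr_from_tir(l_tir_observed)                     # attributes all of L_TIR to stars
sfr_true = sfr_from_tir(stellar_fraction * l_tir_observed)   # stellar-heated part only

print(f"naive SFR = {sfr_naive:.0f} Msun/yr")
print(f"true SFR  = {sfr_true:.0f} Msun/yr")
print(f"overestimate factor = {sfr_naive / sfr_true:.1f}")
```

With a stellar fraction of one third, the naive estimate is inflated by the inverse of that fraction, mirroring the factor-of-a-few bias quoted above.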
We present MGLenS, a large series of modified gravity lensing simulations tailored for cosmic shear data analyses and forecasts in which cosmological and modified gravity parameters are varied simultaneously. Based on the FORGE and BRIDGE $N$-body simulation suites presented in companion papers, we construct 500,000 deg$^2$ of mock Stage-IV lensing data, sampling a pair of 4-dimensional volumes designed for the training of emulators. We validate the accuracy of MGLenS with inference analyses based on the lensing power spectrum exploiting our implementation of $f(R)$ and nDGP theoretical predictions within the cosmoSIS cosmological inference package. A Fisher analysis reveals that the vast majority of the constraining power from such a survey comes from the highest redshift galaxies alone. We further find from a full likelihood sampling that cosmic shear can achieve 95% CL constraints on the modified gravity parameters of log$_{10}\left[ f_{R_0}\right] < -5.24$ and log$_{10}\left[ H_0 r_c\right] > -0.05$, after marginalising over intrinsic alignments of galaxies and including scales up to $\ell=5000$. Such a survey setup could in fact detect with more than $3\sigma$ confidence $f(R)$ values larger than $3 \times 10^{-6}$ and $H_0 r_c$ smaller than 1.0. Scale cuts at $\ell=3000$ reduce the degeneracy breaking between $S_8$ and the modified gravity parameters, while photometric redshift uncertainty seems to play a subdominant role in our error budget. We finally explore the consequences of analysing data with the wrong gravity model, and report the catastrophic biases for a number of possible scenarios. The Stage-IV MGLenS simulations, the FORGE and BRIDGE emulators and the cosmoSIS interface modules will be made publicly available upon journal acceptance.
It has been suggested that a trail of diffuse galaxies, including two dark matter deficient galaxies (DMDGs), in the vicinity of NGC1052 formed because of a high-speed collision between two gas-rich dwarf galaxies, one bound to NGC1052 and the other one on an unbound orbit. The collision compresses the gas reservoirs of the colliding galaxies, which in turn triggers a burst of star formation. In contrast, the dark matter and pre-existing stars in the progenitor galaxies pass through unaffected. Since the high pressures in the compressed gas are conducive to the formation of massive globular clusters (GCs), this scenario can explain the formation of DMDGs with large populations of massive GCs, consistent with the observations of NGC1052-DF2 (DF2) and NGC1052-DF4. A potential difficulty with this `mini bullet cluster' scenario is that the observed spatial distributions of GCs in DMDGs are extended. GCs experience dynamical friction causing their orbits to decay with time. Consequently, their distribution at formation should have been even more extended than that observed at present. Using a semi-analytic model, we show that the observed positions and velocities of the GCs in DF2 imply that they must have formed at a radial distance of 5-10 kpc from the center of DF2. However, as we demonstrate, the scenario is difficult to reconcile with observations: the strong tidal forces from NGC1052 strip the extendedly distributed GCs from DF2, so that 33-59 massive GCs must have formed in the collision to explain the numbers observed at present.
Photo-nuclear reactions of light nuclei below a mass of $A=60$ are studied experimentally and theoretically by the PANDORA (Photo-Absorption of Nuclei and Decay Observation for Reactions in Astrophysics) project. Two experimental methods, virtual-photon excitation by proton scattering and real-photon absorption by a high-brilliance gamma-ray beam produced by laser Compton scattering, will be applied to measure the photo-absorption cross sections and the decay branching ratio of each decay channel as a function of the photon energy. Several nuclear models, e.g. anti-symmetrized molecular dynamics, mean-field type models, a large-scale shell model, and ab initio models, will be employed to predict the photo-nuclear reactions. The uncertainty in the model predictions will be evaluated from the discrepancies between the model predictions and the experimental data. The data and the predictions will be implemented in the general reaction calculation code TALYS. The results will be applied to the simulation of the photo-disintegration process of ultra-high-energy cosmic rays in intergalactic propagation.
Planets are born from the gas and dust discs surrounding young stars. Energetic radiation from the central star can drive thermal outflows from the disc atmospheres, strongly affecting the evolution of the discs and the nascent planetary system. In this context, several numerical models of varying complexity have been developed to study the process of disc photoevaporation driven by the central star. We describe the numerical techniques, the results, and the predictive power of current models, and identify observational tests to constrain them.
We present the first systematic follow-up of Planck Sunyaev-Zeldovich effect (SZE) selected candidates down to signal-to-noise (S/N) of 3 over the 5000 deg$^2$ covered by the Dark Energy Survey. Using the MCMF cluster confirmation algorithm, we identify optical counterparts, determine photometric redshifts and richnesses, and assign a parameter, $f_{\rm cont}$, that reflects the probability that each SZE-optical pairing represents a random superposition of physically unassociated systems rather than a real cluster. The new MADPSZ cluster catalogue consists of 1092 MCMF confirmed clusters and has a purity of 85%. We present the properties of subsamples of the MADPSZ catalogue that have purities ranging from 90% to 97.5%, depending on the adopted $f_{\rm cont}$ threshold. $M_{500}$ halo mass estimates, redshifts, richnesses, and optical centers are presented for all MADPSZ clusters. The MADPSZ catalogue adds 828 previously unknown Planck identified clusters over the DES footprint and provides redshifts for an additional 50 previously published Planck selected clusters with S/N>4.5. Using the subsample with spectroscopic redshifts, we demonstrate excellent cluster photo-$z$ performance with an RMS scatter in $\Delta z/(1+z)$ of 0.47%. Our MCMF based analysis allows us to infer the contamination fraction of the initial S/N>3 Planck selected candidate list, which is 50%. We present a method of estimating the completeness of the MADPSZ cluster sample and $f_{\rm cont}$ selected subsamples. In comparison to the previously published Planck cluster catalogues, this new S/N $>$ 3 MCMF confirmed cluster catalogue populates the lower mass regime at all redshifts and includes clusters up to z$\sim$1.3.
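The role of $f_{\rm cont}$ in defining subsamples of different purity can be illustrated with a toy selection. The sketch assumes, as a simplification and not the exact MCMF estimator, that $f_{\rm cont}$ approximates the chance-superposition probability of each candidate, so that the purity of a subsample can be estimated as one minus its mean; the candidate list below is synthetic.

```python
# Toy sketch of f_cont-based subsample selection. Assumption (not the exact
# MCMF implementation): f_cont approximates the probability that a candidate
# is a chance superposition, so purity ~ 1 - mean(f_cont) of the subsample.
import random

random.seed(42)
# Synthetic candidate list: mostly real clusters (low f_cont), some chance matches.
f_cont = [random.betavariate(1, 9) for _ in range(1000)]

def subsample_purity(values, threshold):
    """Keep candidates with f_cont below threshold; return size and est. purity."""
    selected = [v for v in values if v < threshold]
    if not selected:
        return 0, 1.0
    return len(selected), 1.0 - sum(selected) / len(selected)

# Stricter thresholds trade sample size for purity:
for thr in (0.3, 0.2, 0.1, 0.05):
    n, purity = subsample_purity(f_cont, thr)
    print(f"f_cont < {thr:4.2f}: {n:4d} candidates, estimated purity {purity:.3f}")
```

Lowering the threshold shrinks the subsample while raising its estimated purity, which is the trade-off behind the 90%-97.5% purity subsamples quoted above.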
In this paper we focus on scattering amplitudes in maximally supersymmetric Yang-Mills theory and define a long sought-after geometry, the loop momentum amplituhedron, which we conjecture to encode tree and (the integrands of) loop amplitudes in spinor helicity variables. Motivated by the structure of amplitude singularities, we define an extended positive space, which enhances the Grassmannian space featuring at tree level, and a map which associates to each of its points tree-level kinematic variables and loop momenta. The image of this map is the loop momentum amplituhedron. Importantly, our formulation provides a global definition of the loop momenta. We conjecture that for all multiplicities and helicity sectors, there exists a canonical logarithmic differential form defined on this space, and provide its explicit form in a few examples.
The cosmological constant and its phenomenology remain among the greatest puzzles in theoretical physics. We review how modifications of Einstein's general relativity could alleviate the different problems associated with it that result from the interplay of classical gravity and quantum field theory. We introduce a modern and concise language to describe the problems associated with its phenomenology, and inspect no-go theorems and their loopholes to motivate the approaches discussed here. Constrained gravity approaches exploit minimal departures from general relativity; massive gravity introduces mass to the graviton; Horndeski theories lead to the breaking of translational invariance of the vacuum; and models with extra dimensions change the symmetries of the vacuum. We also review screening mechanisms that have to be present in some of these theories if they aim to recover the success of general relativity on small scales as well. Finally, we summarise the statuses of these models in their attempt to solve the different cosmological constant problems while being able to account for current astrophysical and cosmological observations.
Energetic jets that traverse the quark-gluon plasma created in heavy-ion collisions serve as excellent probes to study this new state of deconfined QCD matter. Presently, however, our ability to achieve a crisp theoretical interpretation of the growing number of jet observables measured in experiments is hampered by the presence of selection biases. The aim of this work is to minimise the selection biases associated with the modification of the quark- vs. gluon-initiated jet fraction in order to assess the presence of other medium-induced effects, namely color decoherence, by exploring the rapidity dependence of jet substructure observables. So far, all jet substructure measurements at mid-rapidity have shown that heavy-ion jets are narrower than vacuum jets. We show both analytically and with Monte Carlo simulations that if the narrowing effect persists at forward rapidities, where the quark-initiated jet fraction is greatly increased, this could serve as an unambiguous experimental observation of color decoherence dynamics in heavy-ion collisions.
Context. The understanding of the accretion process has a central role in the understanding of star and planet formation.
Aims: We aim to test how accretion variability influences previous correlation analyses of the relation between X-ray activity and accretion rates, which is important for understanding the evolution of circumstellar disks and disk photoevaporation.
Methods: We monitored accreting stars in the Orion Nebula Cluster from November 24, 2014, until February 17, 2019, for 42 epochs with the Wendelstein Wide Field Imager in the Sloan Digital Sky Survey u'g'r' filters on the 2 m Fraunhofer Telescope on Mount Wendelstein. Mass accretion rates were determined from the measured ultraviolet excess. The influence of the mass accretion rate variability on the relation between X-ray luminosities and mass accretion rates was analyzed statistically.
Results: We find a typical interquartile range of ∼0.3 dex for the mass accretion rate variability on timescales from weeks to ∼2 yr. The variability likely has no significant influence on a correlation analysis of the X-ray luminosity and the mass accretion rate observed at different times, provided the sample size is large enough.
Conclusions: The observed anticorrelation between the X-ray luminosity and the mass accretion rate predicted by models of photoevaporation-starved accretion is likely not due to a bias introduced by different observing times.
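A minimal Monte Carlo sketch of the statistical argument above: adding variability with a ∼0.3 dex interquartile range to one axis of a logarithmic (anti)correlation barely degrades the recovered correlation when the sample is large. The slope, intrinsic scatter, and sample size below are illustrative assumptions, not values from the paper.

```python
# Monte Carlo sketch: does ~0.3 dex (IQR) accretion variability wash out an
# underlying Lx - Mdot anticorrelation? Slope, scatter, and sample size are
# illustrative assumptions.
import math
import random

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n = 500  # a "large enough" sample
log_lx = [random.gauss(30.0, 0.6) for _ in range(n)]
# Underlying anticorrelation: slope -0.5 dex/dex, 0.3 dex intrinsic scatter.
log_mdot_true = [-0.5 * (lx - 30.0) - 8.0 + random.gauss(0, 0.3) for lx in log_lx]

# An IQR of 0.3 dex corresponds to a Gaussian sigma of ~0.3/1.349 dex.
sigma_var = 0.3 / 1.349
log_mdot_obs = [m + random.gauss(0, sigma_var) for m in log_mdot_true]

r_true = pearson(log_lx, log_mdot_true)
r_obs = pearson(log_lx, log_mdot_obs)
print(f"correlation without variability: {r_true:+.2f}")
print(f"correlation with variability:    {r_obs:+.2f}")
```

The recovered coefficient weakens only mildly, consistent with the conclusion that the observed anticorrelation is not an artifact of non-simultaneous observations.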
Full Tables 1-3 and reduced data are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/666/A55
Heavy QCD axions are well-motivated extensions of the QCD axion that address the quality problem while still solving the strong CP problem. Owing to the gluon coupling, critical for solving the strong CP problem, these axions can be produced in significant numbers in beam dump and collider environments for axion decay constants as large as a PeV, relevant for addressing the axion quality problem. In addition, if these axions have leptonic couplings, they can undergo long-lived decays into lepton pairs; in a broad class of axion models, decays into muons dominate for axion masses above the dimuon threshold and below the GeV scale. Considering existing constraints, primarily from rare meson decays, we demonstrate that current and future neutrino facilities and long-lived particle searches have the potential to probe significant parts of the heavy QCD axion parameter space via dimuon final states.
We cross-correlate positions of galaxies measured in data from the first three years of the Dark Energy Survey with Compton-$y$-maps generated using data from the South Pole Telescope (SPT) and the {\it Planck} mission. We model this cross-correlation measurement together with the galaxy auto-correlation to constrain the distribution of gas in the Universe. We measure the hydrostatic mass bias or, equivalently, the mean halo bias-weighted electron pressure $\langle b_{h}P_{e}\rangle$, using large-scale information. We find $\langle b_{h}P_{e}\rangle$ to be $[0.16^{+0.03}_{-0.04},0.28^{+0.04}_{-0.05},0.45^{+0.06}_{-0.10},0.54^{+0.08}_{-0.07},0.61^{+0.08}_{-0.06},0.63^{+0.07}_{-0.08}]$ meV cm$^{-3}$ at redshifts $z \sim [0.30, 0.46, 0.62,0.77, 0.89, 0.97]$. These values are consistent with previous work where measurements exist in the redshift range. We also constrain the mean gas profile using small-scale information, enabled by the high resolution of the SPT data. We compare our measurements to different parametrized profiles based on the cosmo-OWLS hydrodynamical simulations. We find that our data are consistent with the simulation that assumes an AGN heating temperature of $10^{8.5}$K but are incompatible with the model that assumes an AGN heating temperature of $10^{8.0}$K. These comparisons indicate that the data prefer a higher value of electron pressure than the simulations within $r_{500c}$ of the galaxies' halos.
Wide, deep, blind continuum surveys at submillimetre/millimetre (submm/mm) wavelengths are required to provide a full inventory of the dusty, distant Universe. However, conducting such surveys to the necessary depth, with sub-arcsec angular resolution, is prohibitively time-consuming, even for the most advanced submm/mm telescopes. Here, we report the most recent results from the ALMACAL project, which exploits the 'free' calibration data from the Atacama Large Millimetre/submillimetre Array (ALMA) to map the lines of sight towards and beyond the ALMA calibrators. ALMACAL has now covered 1,001 calibrators, with a total sky coverage of around 0.3 deg$^2$, distributed across the sky accessible from the Atacama desert, and has accumulated more than 1,000 h of integration. The depth reached by combining multiple visits to each field makes ALMACAL capable of searching for faint, dusty, star-forming galaxies (DSFGs), with detections at multiple frequencies to constrain the emission mechanism. Based on the most up-to-date ALMACAL database, we report the detection of 186 DSFGs with flux densities down to $S_{\rm 870\mu m} \sim 0.2$ mJy, comparable with existing ALMA large surveys but less susceptible to cosmic variance. We report the number counts at five wavelengths between 870 μm and 3 mm, in ALMA bands 3, 4, 5, 6 and 7, providing a benchmark for models of galaxy formation and evolution. By integrating the observed number counts and the best-fitting functions, we also present the resolved fraction of the cosmic infrared background (CIB) and the CIB spectral shape. Combining existing surveys, ALMA has currently resolved about half of the CIB in the submm/mm regime.
We analyse the full shape of anisotropic clustering measurements from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) quasar sample together with the combined galaxy sample from the Baryon Oscillation Spectroscopic Survey (BOSS). We obtain constraints on the cosmological parameters independent of the Hubble parameter $h$ for the extensions of the $\Lambda$CDM models, focusing on cosmologies with free dark energy equation of state parameter $w$. We combine the clustering constraints with those from the latest CMB data from Planck to obtain joint constraints for these cosmologies for $w$ and the additional extension parameters - its time evolution $w_{\rm{a}}$, the physical curvature density $\omega_{K}$ and the neutrino mass sum $\sum m_{\nu}$. Our joint constraints are consistent with the flat $\Lambda$CDM cosmological model within 68\% confidence limits. We demonstrate that the Planck data are able to place tight constraints on the clustering amplitude today, $\sigma_{12}$, in cosmologies with varying $w$, and present the first constraints on the clustering amplitude for such cosmologies, which is found to be slightly higher than the $\Lambda$CDM value. Additionally, we show that when we vary $w$, allow for non-flat cosmologies, and use the physical curvature density, Planck prefers a curved universe at $4\sigma$ significance, which is $\sim2\sigma$ higher than when using the relative curvature density $\Omega_{\rm{K}}$. Finally, when $w$ is varied freely, clustering provides only a modest improvement (of 0.021 eV) on the upper limit of $\sum m_{\nu}$.
Context. Observations of the supernova remnant (SNR) Cassiopeia A (Cas A) show significant asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star.
Aims: We investigate whether a past interaction of Cas A with a massive asymmetric shell of the circumstellar medium can account for the observed asymmetries of the reverse shock.
Methods: We performed three-dimensional (3D) (magneto)-hydrodynamic simulations that describe the remnant evolution from the SN explosion to its interaction with a massive circumstellar shell. The initial conditions (soon after the shock breakout at the stellar surface) are provided by a 3D neutrino-driven SN model whose morphology closely resembles Cas A and the SNR simulations cover ≈2000 yr of evolution. We explored the parameter space of the shell, searching for a set of parameters able to produce an inward-moving reverse shock in the western hemisphere of the remnant at the age of ≈350 yr, analogous to that observed in Cas A.
Results: The interaction of the remnant with the shell can produce asymmetries resembling those observed in the reverse shock if the shell was asymmetric with the densest portion in the (blueshifted) nearside to the northwest (NW). According to our favored model, the shell was thin (thickness σ ≈ 0.02 pc) with a radius r_sh ≈ 1.5 pc from the center of the explosion. The reverse shock shows the following asymmetries at the age of Cas A: (i) it moves inward in the observer frame in the NW region, while it moves outward in most other regions; (ii) the geometric center of the reverse shock is offset to the NW by ≈0.1 pc from the geometric center of the forward shock; and (iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km s−1) than in other regions (below 2000 km s−1).
Conclusions: The large-scale asymmetries observed in the reverse shock of Cas A can be interpreted as signatures of the interaction of the remnant with an asymmetric dense circumstellar shell that occurred between ≈180 and ≈240 yr after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred between 10⁴ and 10⁵ yr prior to core collapse. We estimate a total mass of the shell of the order of 2 M⊙.
We present BIFROST, an extended version of the GPU-accelerated hierarchical fourth-order forward symplectic integrator code FROST. BIFROST (BInaries in FROST) can efficiently evolve collisional stellar systems with arbitrary binary fractions up to $f_\mathrm{bin}=100\%$ by using secular and regularised integration for binaries, triples, multiple systems or small clusters around black holes within the fourth-order forward integrator framework. Post-Newtonian (PN) terms up to order PN3.5 are included in the equations of motion of compact subsystems, with optional three-body and spin-dependent terms. PN1.0 terms for interactions with black holes are computed everywhere in the simulation domain. The code has several merger criteria (gravitational-wave inspirals, tidal disruption events and stellar and compact object collisions), with the addition of relativistic recoil kicks for compact object mergers. We show that for systems with $N$ particles the scaling of the code remains good up to $N_\mathrm{GPU} \sim 40\times N / 10^6$ GPUs, and that increasing the binary fraction up to 100 per cent hardly increases the code running time (by less than a factor of $\sim 1.5$). We also validate the numerical accuracy of BIFROST by presenting a number of star cluster simulations, the most extreme ones including a core collapse and the merger of two intermediate-mass black holes with a relativistic recoil kick.
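The quoted scaling relation can be turned into a back-of-the-envelope helper; the function below simply evaluates $N_\mathrm{GPU} \sim 40\times N/10^6$ as stated above and is a reading aid, not part of the BIFROST code.

```python
# Evaluate the quoted BIFROST scaling limit N_GPU ~ 40 x N / 10^6:
# the largest GPU count for which good parallel scaling is reported,
# as a function of the particle number N. Illustrative helper only.

def max_efficient_gpus(n_particles):
    """Largest GPU count with good scaling, per the quoted relation."""
    return 40 * n_particles / 1e6

for n in (1e6, 1e7, 1e8):
    print(f"N = {n:.0e}: good scaling up to ~{max_efficient_gpus(n):.0f} GPUs")
```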
Galaxy clusters and cosmic voids are the most extreme objects of our Universe in terms of mass and size, tracing two opposite sides of the large-scale matter density field. By studying their abundance as a function of their mass and radius, respectively, i.e. the halo mass function (HMF) and void size function (VSF), it is possible to achieve fundamental constraints on the cosmological model. While the HMF has already been extensively exploited, providing robust constraints on the main cosmological model parameters (e.g. $\Omega_{\rm m}$, $\sigma_8$ and $S_8$), the VSF is still emerging as a viable and effective cosmological probe. Given the expected complementarity of these statistics, in this work we aim to estimate the constraining power of their combination. To achieve this goal, we exploit realistic mock samples of galaxy clusters and voids extracted from state-of-the-art large hydrodynamical simulations, in the redshift range $0.2 \leq z \leq 1$. We perform an accurate calibration of the free parameters of the HMF and VSF models, needed to take into account the differences between the types of mass tracers used in this work and those considered in previous literature analyses. Then, we obtain constraints on $\Omega_{\rm m}$ and $\sigma_8$ by performing a Bayesian Markov Chain Monte Carlo analysis. We find that cluster and void counts represent powerful independent and complementary probes to test the cosmological framework. In particular, we find that the constraining power of the HMF on $\Omega_{\rm m}$ and $\sigma_8$ improves drastically with the VSF contribution, increasing the precision of the $S_8$ constraint by about $60\%$.
Disc winds and planet formation are considered to be two of the most important mechanisms that drive the evolution and dispersal of protoplanetary discs and in turn define the environment in which planets form and evolve. While both have been studied extensively in the past, we combine them into one model by performing three-dimensional radiation-hydrodynamic simulations of giant planet hosting discs that are undergoing X-ray photo-evaporation, with the goal of analysing the interactions between both mechanisms. In order to study the effect on observational diagnostics, we produce synthetic observations of commonly used wind-tracing forbidden emission lines with detailed radiative transfer and photo-ionisation calculations. We find that a sufficiently massive giant planet carves a gap in the gas disc that is deep enough to affect the structure and kinematics of the pressure-driven photo-evaporative wind significantly. This effect can be strong enough to be visible in the synthetic high-resolution observations of some of our wind diagnostic lines, such as the [OI] 6300 Å or [SII] 6730 Å lines. When the disc is observed at inclinations around 40° and higher, the spectral line profiles may exhibit a peak in the redshifted part of the spectrum, which cannot easily be explained by simple wind models alone. Moreover, massive planets can induce asymmetric substructures within the disc and the photo-evaporative wind, giving rise to temporal variations of the line profiles that can be strong enough to be observable on timescales of less than a quarter of the planet's orbital period.
This chapter reviews the construction of ``soft-collinear gravity'', the effective field theory which describes the interaction of collinear and soft gravitons with matter (and themselves), to all orders in the soft-collinear power expansion, focusing on the essential concepts. Among them are an emergent soft background gauge symmetry, which lives on the light-like trajectories of energetic particles and allows for a manifestly gauge-invariant representation of the interactions in terms of a soft covariant derivative and the soft Riemann tensor, and a systematic treatment of collinear interactions, which are absent at leading power in gravity. The gravitational soft theorems are derived from soft-collinear gravity at the Lagrangian level. The symmetries of the effective theory provide a transparent explanation of why soft graviton emission is universal to sub-sub-leading power but gauge boson emission is not, and suggest a physical interpretation of the form of the universal soft factors in terms of the charges corresponding to the soft symmetries. The power counting of soft-collinear gravity further provides an understanding of the structure of loop corrections to the soft theorems.
We study the broadband emission of the TeV blazar Mrk501 using multi-wavelength (MWL) observations from 2017 to 2020 performed with a multitude of instruments, involving, among others, MAGIC, Fermi-LAT, NuSTAR, Swift, GASP-WEBT, and OVRO. During this period, Mrk501 showed an extremely low broadband activity, which may help to unravel its baseline emission. Despite the low activity, significant flux variations are detected at all wavebands, with the highest variations occurring at X-rays and VHE $\gamma$-rays. A significant correlation (>3$\sigma$) between X-rays and VHE $\gamma$-rays is measured, supporting leptonic scenarios to explain the variable parts of the spectral energy distribution (SED), also during low activity states. Extending our data set to 12 years (from 2008 to 2020), we find significant correlations between X-rays and HE $\gamma$-rays, indicating, for the first time, a common physical origin driving the variability between these two bands. We additionally find a correlation between HE $\gamma$-rays and radio, with the radio emission lagging the HE $\gamma$-ray emission by more than 100 days. This is consistent with the $\gamma$-ray emission zone being located upstream of the radio-bright regions of the Mrk501 jet. Furthermore, Mrk501 showed a historically low activity in both X-rays and VHE $\gamma$-rays from mid-2017 to mid-2019, with a stable VHE flux (>2 TeV) of 5% of the emission of the Crab Nebula. The broadband SED of this 2-year-long low state, the potential baseline emission of Mrk501, can be adequately characterized with a one-zone leptonic model, and with (lepto)-hadronic models that fulfill the neutrino flux constraints from IceCube. We explore the time evolution of the SED towards the historically low state, revealing that the stable baseline emission may be ascribed to a standing shock, and the variable emission to an additional expanding or traveling shock.
We use the Magneticum suite of state-of-the-art hydrodynamical simulations to identify cosmic voids based on the watershed technique and investigate their most fundamental properties across different resolutions in mass and scale. This encompasses the distributions of void sizes, shapes, and content, as well as their radial density and velocity profiles traced by the distribution of cold dark matter particles and halos. We also study the impact of various tracer properties, such as their sparsity and mass, and the influence of void merging on these summary statistics. Our results reveal that all of the analyzed void properties are physically related to each other and describe universal characteristics that are largely independent of tracer type and resolution. Most notably, we find that the motion of tracers around void centers is perfectly consistent with linear dynamics, both for individual, as well as stacked voids. Despite the large range of scales accessible in our simulations, we are unable to identify the occurrence of nonlinear dynamics even inside voids of only a few Mpc in size. This suggests voids to be among the most pristine probes of cosmology down to scales that are commonly referred to as highly nonlinear in the field of large-scale structure.
Using the decomposition of the $D$-dimensional space-time into parallel and perpendicular subspaces, we study and prove a connection between Landau and leading singularities for $N$-point one-loop Feynman integrals by applying the multi-dimensional theory of residues. We show that if $D=N$ and $D=N+1$, the leading singularity corresponds to the inverse of the square root of the leading Landau singularity of the first and second type, respectively. We make use of this result to systematically provide differential equations of Feynman integrals in canonical form, and to extend the connection between these singularities to the multi-loop level by exploiting the loop-by-loop approach. Illustrative examples with the calculation of Landau and leading singularities are provided to supplement our results.
We present non-linear solutions of Vlasov Perturbation Theory (VPT), describing gravitational clustering of collisionless dark matter with dispersion and higher cumulants induced by orbit crossing. We show that VPT can be cast into a form that is formally analogous to standard perturbation theory (SPT), but including additional perturbation variables, non-linear interactions, and a more complex propagation. VPT non-linear kernels have a crucial decoupling property: for fixed total momentum, the kernels become strongly suppressed when any of the individual momenta cross the dispersion scale into the non-linear regime. This screening of UV modes allows us to compute non-linear corrections to power spectra even for cosmologies with very blue power-law input spectra, for which SPT diverges. We compare predictions for the density and velocity divergence power spectra as well as the bispectrum at one-loop order to N-body results in a scaling universe with spectral indices $-1\leq n_s\leq +2$. We find good agreement up to the non-linear scale for all cases, with a reach that increases with the spectral index $n_s$. We discuss the generation of vorticity as well as vector and tensor modes of the velocity dispersion, showing that neglecting vorticity when including dispersion would lead to a violation of momentum conservation. We verify momentum conservation when including vorticity, and compute the vorticity power spectrum at two-loop order, which is necessary to recover the correct large-scale limit with slope $n_w=2$. Comparison to our N-body measurements confirms the cross-over from $k^4$ to $k^2$ scaling on large scales. Our results provide a proof-of-principle that perturbative techniques for dark matter clustering can be systematically improved based on the known underlying collisionless dynamics.
The polarization of the cosmic microwave background (CMB) can be used to search for parity-violating processes like that predicted by a Chern-Simons coupling to a light pseudoscalar field. Such an interaction rotates $E$ modes into $B$ modes in the observed CMB signal through an effect known as cosmic birefringence. Even though isotropic birefringence can be confused with the rotation produced by a miscalibration of the detectors' polarization angles, the degeneracy between the two effects is broken when Galactic foreground emission is used as a calibrator. In this work, we use realistic simulations of the High-Frequency Instrument of the Planck mission to test the impact that Galactic foreground emission and instrumental systematics have on recent birefringence measurements obtained through this technique. Our results demonstrate the robustness of the methodology against the miscalibration of polarization angles and other systematic effects, like intensity-to-polarization leakage, beam leakage, or cross-polarization effects. However, our estimator is sensitive to the $EB$ correlation of polarized foreground emission. Here we propose to correct the bias induced by dust $EB$ by modeling the foreground signal with templates produced in Bayesian component-separation analyses that fit parametric models to CMB data. Given the limitations of currently available dust templates like that of the Commander sky model, high-precision CMB data and a characterization of dust beyond the modified-blackbody paradigm are needed to obtain a definitive measurement of cosmic birefringence in the future.
The standard perturbation theory (SPT) approach to gravitational clustering is based on a fluid approximation of the underlying Vlasov-Poisson dynamics, taking only the zeroth and first cumulant of the phase-space distribution function into account (density and velocity fields). This assumption breaks down when dark matter particle orbits cross and leads to well-known problems, e.g. an anomalously large backreaction of small-scale modes onto larger scales that compromises predictivity. We extend SPT by incorporating second and higher cumulants generated by orbit crossing. For collisionless matter, their equations of motion are completely fixed by the Vlasov-Poisson system, and thus we refer to this approach as Vlasov Perturbation Theory (VPT). Even cumulants develop a background value, and they enter the hierarchy of coupled equations for the fluctuations. The background values are in turn sourced by power spectra of the fluctuations. The latter can be brought into a form that is formally analogous to SPT, but with an extended set of variables and linear as well as non-linear terms, that we derive explicitly. In this paper, we focus on linear solutions, which are far richer than in SPT, showing that modes that cross the dispersion scale set by the second cumulant are highly suppressed. We derive stability conditions on the background values of even cumulants from the requirement that exponential instabilities be absent. We also compute the expected magnitude of averaged higher cumulants for various halo models and show that they satisfy the stability conditions. Finally, we derive self-consistent solutions of perturbations and background values for a scaling universe and study the convergence of the cumulant expansion. The VPT framework provides a conceptually straightforward and deterministic extension of SPT that accounts for the decoupling of small-scale modes.
We perform an effective field theory analysis to correlate the charged lepton flavor violating processes $\ell_i\to\ell_j\gamma\gamma$ and $\ell_i\to\ell_j\gamma$. Using the current upper bounds on the rate for $\ell_i\to\ell_j\gamma$, we derive model-independent upper limits on the rates for $\ell_i\to\ell_j\gamma\gamma$. Our indirect limits are about three orders of magnitude stronger than the direct bounds from current searches for $\mu\to e\gamma\gamma$, and four orders of magnitude better than current bounds for $\tau\to\ell\gamma\gamma$. We also stress the relevance of Belle II or a Super Tau Charm Facility to discover the rare decay $\tau\to\ell\gamma\gamma$.
We present the first systematic follow-up of Planck Sunyaev-Zeldovich effect (SZE) selected candidates down to a signal-to-noise ratio (S/N) of 3 over the 5000 deg$^2$ covered by the Dark Energy Survey. Using the MCMF cluster confirmation algorithm, we identify optical counterparts, determine photometric redshifts and richnesses, and assign a parameter, $f_{\rm cont}$, that reflects the probability that each SZE-optical pairing represents a real cluster rather than a random superposition of physically unassociated systems. The new MADPSZ cluster catalogue consists of 1092 MCMF-confirmed clusters and has a purity of 85%. We present the properties of subsamples of the MADPSZ catalogue that have purities ranging from 90% to 97.5%, depending on the adopted $f_{\rm cont}$ threshold. $M_{500}$ halo mass estimates, redshifts, richnesses, and optical centers are presented for all MADPSZ clusters. The MADPSZ catalogue adds 828 previously unknown Planck-identified clusters over the DES footprint and provides redshifts for an additional 50 previously published Planck-selected clusters with S/N>4.5. Using the subsample with spectroscopic redshifts, we demonstrate excellent cluster photo-$z$ performance, with an RMS scatter in $\Delta z/(1+z)$ of 0.47%. Our MCMF-based analysis allows us to infer the contamination fraction of the initial S/N>3 Planck-selected candidate list, which is 50%. We present a method for estimating the completeness of the MADPSZ cluster sample and its $f_{\rm cont}$-selected subsamples. In comparison to the previously published Planck cluster catalogues, this new S/N$>$3 MCMF-confirmed cluster catalogue populates the lower-mass regime at all redshifts and includes clusters up to $z\sim1.3$.
The dark matter halo sparsity, i.e. the ratio between spherical halo masses enclosing two different overdensities, provides a non-parametric proxy of the halo mass distribution that has been shown to be a sensitive probe of the cosmological imprint encoded in the mass profile of haloes hosting galaxy clusters. Mass estimates at several overdensities would allow for multiple sparsity measurements, which can potentially retrieve the entirety of the cosmological information imprinted on the halo profile. Here, we investigate the impact of multiple sparsity measurements on the cosmological model parameter inference. For this purpose, we analyse N-body halo catalogues from the Raygal and M2Csims simulations and evaluate the correlations among six different sparsities built from spherical-overdensity halo masses at $\Delta$ = 200, 500, 1000, and 2500 (in units of the critical density). Remarkably, sparsities associated with distinct halo mass shells are not highly correlated. This is not the case for sparsities obtained using halo masses estimated from the Navarro-Frenk-White (NFW) best-fitting profile, which artificially induces order-one correlations among the different sparsities. This implies that there is additional information in the mass profile beyond the NFW parametrization and that it can be exploited with multiple sparsities. In particular, from a likelihood analysis of synthetic average sparsity data, we show that cosmological parameter constraints improve significantly as the number of sparsity combinations increases, though the constraints saturate beyond four sparsity estimates. We forecast constraints for the CHEX-MATE cluster sample and find that systematic mass bias errors mildly impact the parameter inference, though more studies are needed in this direction.
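To make the sparsity statistic concrete, the following minimal Python sketch (our own illustration, not code from the paper; masses are spherical-overdensity masses defined with respect to the critical density) shows how an NFW profile ties every sparsity to a single concentration parameter, which is exactly why NFW-derived masses artificially correlate the different sparsities:

```python
import math

def mu(x):
    # dimensionless NFW enclosed-mass profile: M(<r) is proportional to mu(r / r_s)
    return math.log(1.0 + x) - x / (1.0 + x)

def sparsity(c200, delta):
    """s_{200,Delta} = M_200 / M_Delta for an NFW halo of concentration c200,
    with Delta > 200 defined with respect to the critical density."""
    # solve mu(x) = (Delta/200) * (x/c200)^3 * mu(c200) for x = r_Delta / r_s
    target = lambda x: mu(x) - (delta / 200.0) * (x / c200) ** 3 * mu(c200)
    lo, hi = 1e-6, c200          # target(lo) > 0 and target(hi) < 0 for delta > 200
    for _ in range(80):          # plain bisection, no external dependencies
        mid = 0.5 * (lo + hi)
        if target(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return mu(c200) / mu(0.5 * (lo + hi))
```

For a typical cluster concentration $c_{200}=4$ the sketch gives $s_{200,500}\approx 1.4$; since every $s_{\Delta_1,\Delta_2}$ follows from $c_{200}$ alone, NFW-based mass estimates cannot contribute independent sparsity information, in line with the abstract's point.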
Cosmological inference with large galaxy surveys requires theoretical models that combine precise predictions for large-scale structure with robust and flexible galaxy formation modelling throughout a sufficiently large cosmic volume. Here, we introduce the MillenniumTNG (MTNG) project, which combines the hydrodynamical galaxy formation model of IllustrisTNG with the large volume of the Millennium simulation. Our largest hydrodynamic simulation, covering (500 Mpc/h)^3 = (740 Mpc)^3, is complemented by a suite of dark-matter-only simulations with up to 4320^3 dark matter particles (a mass resolution of 1.32 x 10^8 Msun/h) using the fixed-and-paired technique to reduce large-scale cosmic variance. The hydro simulation adds 4320^3 gas cells, achieving a baryonic mass resolution of 2 x 10^7 Msun/h. High time-resolution merger trees and direct lightcone outputs facilitate the construction of a new generation of semi-analytic galaxy formation models that can be calibrated against both the hydro simulation and observations, and then applied to even larger volumes - MTNG includes a flagship simulation with 1.1 trillion dark matter particles and massive neutrinos in a volume of (3000 Mpc)^3. In this introductory analysis we carry out convergence tests on basic measures of non-linear clustering such as the matter power spectrum, the halo mass function and halo clustering, and we compare simulation predictions to those from current cosmological emulators. We also use our simulations to study matter and halo statistics, such as halo bias and clustering at the baryonic acoustic oscillation scale. Finally, we measure the impact of baryonic physics on the matter and halo distributions.
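The quoted dark matter particle mass follows directly from the box size and particle count via the mean matter density; a quick arithmetic sketch (the value of Omega_m below is our assumption for illustration, it is not stated in the abstract):

```python
RHO_CRIT = 2.775e11   # critical density of the universe [h^2 Msun / Mpc^3]
OMEGA_M = 0.3089      # assumed matter density parameter (Planck-like; not given above)

def particle_mass(box_mpc_h, n_per_side):
    # mean matter mass per particle [Msun/h] for a cubic box of side box_mpc_h [Mpc/h]
    return OMEGA_M * RHO_CRIT * box_mpc_h**3 / n_per_side**3

m = particle_mass(500.0, 4320)   # close to the quoted 1.32 x 10^8 Msun/h
```

With this Omega_m, a (500 Mpc/h)^3 box sampled by 4320^3 particles indeed yields roughly 1.3 x 10^8 Msun/h per particle, consistent with the resolution quoted above.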
We introduce a novel technique for constraining cosmological parameters and galaxy assembly bias using non-linear redshift-space clustering of galaxies. We scale cosmological N-body simulations and insert galaxies with the SubHalo Abundance Matching extended (SHAMe) empirical model to generate over 175,000 clustering measurements spanning all relevant cosmological and SHAMe parameter values. We then build an emulator capable of reproducing the projected galaxy correlation function as well as the redshift-space monopole, quadrupole, and hexadecapole for separations between $0.1\,h^{-1}{\rm Mpc}$ and $25\,h^{-1}{\rm Mpc}$. We test this approach by using the emulator and Markov chain Monte Carlo (MCMC) inference to jointly estimate cosmology and assembly bias parameters both for the MTNG740 hydrodynamic simulation and for a semi-analytical galaxy formation model (SAM) built on the MTNG740-DM dark-matter-only simulation, obtaining unbiased results for all cosmological parameters. For instance, for MTNG740 and a galaxy number density of $n\sim 0.01 h^{3}{\rm Mpc}^{-3}$, we obtain $\sigma_{8}=0.799^{+0.039}_{-0.044}$ ($\sigma_{8,{\rm MTNG}} =$ 0.8159), and $\Omega_\mathrm{M}h^2= 0.138^{+ 0.025}_{- 0.018}$ ($\Omega_{\mathrm{M}} h^2_{\rm MTNG} =$ 0.142). For a fixed Hubble parameter ($h$), the constraint becomes $\Omega_\mathrm{M}h^2= 0.137^{+ 0.011}_{- 0.012}$. Our method performs similarly well for the SAM and for the other sample densities tested. We almost always recover the true amount of galaxy assembly bias within one sigma. The best constraints are obtained when scales smaller than $2\,h^{-1}{\rm Mpc}$ are included, as well as when at least the projected correlation function and the monopole are incorporated. These methods offer a powerful way to constrain cosmological parameters using galaxy surveys.
Endpoint divergences in the convolution integrals appearing in next-to-leading-power factorization theorems prevent a straightforward application of standard methods to resum large logarithmic power-suppressed corrections in collider physics. We study the power-suppressed configuration of the thrust distribution in the two-jet region, where a gluon-initiated jet recoils against a quark-antiquark pair. With the aid of operatorial endpoint factorization conditions, we derive a factorization formula where the individual terms are free from endpoint divergences and can be written in terms of renormalized hard, (anti-)collinear, and soft functions in four dimensions. This framework enables us to perform the first resummation of endpoint-divergent SCET$_{\rm I}$ observables at leading-logarithmic accuracy using exclusively renormalization-group methods.
Approximate methods to populate dark matter halos with galaxies are of great utility to large galaxy surveys. However, the limitations of simple halo occupation distribution (HOD) models preclude a full use of small-scale galaxy clustering data and call for more sophisticated models. We study two galaxy populations, luminous red galaxies (LRGs) and star-forming emission-line galaxies (ELGs), at two epochs, $z=1$ and $z=0$, in the large-volume, high-resolution hydrodynamical simulation of the MillenniumTNG project. In a partner study we concentrated on the small-scale, one-halo regime down to $r\sim 0.1{\rm Mpc}/h$, while here we focus on modeling galaxy assembly bias in the two-halo regime, $r\gtrsim 1{\rm Mpc}/h$. Interestingly, the ELG signal exhibits scale dependence out to relatively large scales ($r\sim 20{\rm Mpc}/h$), implying that the linear bias approximation for this tracer is invalid on these scales, contrary to common assumptions. The 10-15\% discrepancy present in the standard halo model prescription is only reconciled when we augment our halo occupation model with a dependence on extrinsic halo properties ("shear" being the best-performing one) rather than intrinsic ones (e.g., concentration, peak mass). We argue that this fact constitutes evidence for two-halo galaxy conformity. Including tertiary assembly bias (i.e. a property beyond mass and "shear") is not an essential requirement for reconciling the galaxy assembly bias signal of LRGs, but the combination of external and internal properties is beneficial for recovering the ELG clustering. We find that centrals in low-mass haloes dominate the assembly bias signal of both populations. Finally, we explore the predictions of our model for higher-order statistics such as nearest-neighbor counts. The latter supply additional information about galaxy assembly bias and can be used to break degeneracies between halo model parameters.
We propose a parametrization of the leading B-meson light-cone distribution amplitude (LCDA) in heavy-quark effective theory (HQET). In position space, it uses a conformal transformation that yields a systematic Taylor expansion and an integral bound, which enables control of the truncation error. Our parametrization further produces compact analytical expressions for a variety of derived quantities. At a given reference scale, our momentum-space parametrization corresponds to an expansion in associated Laguerre polynomials, which turn into confluent hypergeometric functions ${}_1F_1$ under renormalization-group evolution at one-loop accuracy. Our approach thus allows a straightforward and transparent implementation of a variety of phenomenological constraints, regardless of their origin. Moreover, we can include theoretical information on the Taylor coefficients by using the local operator product expansion. We showcase the versatility of the parametrization in a series of phenomenological pseudo-fits.
Although highly energetic radiation from flares is a potential threat to exoplanet atmospheres and may lead to surface sterilization, it might also supply, in the case of low-mass stars, the extra energy needed to trigger and sustain prebiotic chemistry. We investigate two flares on TRAPPIST-1, an ultra-cool dwarf star that hosts seven exoplanets, of which three lie within its habitable zone. The flares are detected in all four passbands of MuSCAT2, allowing a determination of their temperatures and bolometric energies. We analyzed the light curves obtained with the MuSCAT1 and MuSCAT2 instruments between 2016 and 2021 in the $g,r,i,z_\mathrm{s}$ filters. We conducted an automated flare search and visually confirmed possible flare events. We studied the temperature evolution, the global temperature, and the peak temperature of both flares. For the first time, we infer effective blackbody temperatures of flares that occurred on TRAPPIST-1. The blackbody temperatures for the two TRAPPIST-1 flares derived from the SED are $T_\mathrm{SED} = 7940_{-390}^{+430}$ K and $T_\mathrm{SED} = 6030_{-270}^{+300}$ K. The flare blackbody temperatures at the peak, calculated from the peak SED, are $T_\mathrm{SEDp} = 13620_{-1220}^{+1520}$ K and $T_\mathrm{SEDp} = 8290_{-550}^{+660}$ K. We show that for the ultra-cool M dwarf TRAPPIST-1 the flare blackbody temperatures associated with the total continuum emission are lower than, and not consistent with, the usually adopted assumption of 9000-10000 K. This could imply different and faster cooling mechanisms. Further multi-color observations are needed to investigate whether or not our observations are a general characteristic of ultra-cool M dwarfs. This would have significant implications for the habitability of exoplanets around these stars, because the UV surface flux is likely to be overestimated by models with higher flare temperatures.
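As a rough illustration of how multi-band photometry constrains a flare blackbody temperature, the following sketch inverts the ratio of Planck fluxes in two bands for $T$ (the band wavelengths and the simple bisection inversion are our illustrative assumptions, not the MuSCAT fitting procedure itself):

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck constant, speed of light, Boltzmann constant (SI)

def planck(lam, temp):
    # blackbody spectral radiance B_lambda(T) [W m^-3 sr^-1]
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp))

def color_temperature(ratio, lam_blue, lam_red, t_lo=2000.0, t_hi=40000.0):
    """Recover T from a measured blue/red flux ratio by bisection.
    The ratio is monotonically increasing in T when lam_blue < lam_red."""
    for _ in range(60):
        t_mid = 0.5 * (t_lo + t_hi)
        if planck(lam_blue, t_mid) / planck(lam_red, t_mid) < ratio:
            t_lo = t_mid   # too cool: blue/red ratio still below the measured one
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

With illustrative g- and i-band effective wavelengths of 475 nm and 763 nm, a self-consistency check (generate the ratio for a chosen temperature, then invert it) recovers the input temperature, which is the essence of fitting a blackbody to simultaneous multi-filter photometry.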
We propose an analogue of spin fields for the relativistic RNS particle in 4 dimensions, in order to describe Ramond-Ramond states as "two-particle" excitations on the world line. On a natural representation space we identify a differential whose cohomology agrees with the RR-field equations. We then discuss the non-linear theory encoded in deformations of the latter by background fields. We also formulate a sigma model for this spin field from which we recover the RNS formulation by imposing suitable constraints.
Context. Millimeter astronomy provides valuable information on the birthplaces of planetary systems. In order to compare theoretical models with observations, the dust component has to be carefully calculated.
Aims: Here, we aim to study the effects of dust entrainment in photoevaporative winds, as well as the ejection and drag of dust caused by radiation from the central star.
Methods: We improved and extended the existing implementation of a two-population dust and pebble description in the global Bern/Heidelberg planet formation and evolution model. Modern prescriptions for photoevaporative winds were used and we accounted for settling and advection of dust when calculating entrainment rates. In order to prepare for future population studies with varying conditions, we explored a wide range of disk, photoevaporation, and dust parameters.
Results: If dust can grow to pebble sizes, that is, if grains are resistant to fragmentation or turbulence is weak, drift dominates and the entrained mass is small, though larger than under the assumption of no vertical advection of grains with the gas flow. For fragile dust shattering at velocities of 1 m s$^{-1}$, as indicated by laboratory experiments, an order of magnitude more dust is entrained, and entrainment becomes the main dust-removal process. Radiation pressure effects disperse massive, dusty disks on timescales of a few hundred Myr.
Conclusions: These results highlight the importance of dust entrainment in winds as a solid-mass removal process. Furthermore, this model extension lays the foundations for future statistical studies of the formation of planets in their birth environment.
Feeding with gas in streams is predicted to be an important galaxy growth mechanism. Using an idealised setup, we study the impact of stream feeding (at a rate of 10$^7$ M$_{\odot}$ Myr$^{-1}$) on the star formation and outflows of disc galaxies with $\sim$10$^{11}$ M$_{\odot}$ baryonic mass. The magneto-hydrodynamical simulations are carried out with the PIERNIK code and include star formation, feedback from supernovae, and cosmic ray advection and diffusion. We find that stream accretion enhances galactic star formation. Lower angular momentum streams result in more compact discs, higher star formation rates, and stronger outflows. In agreement with previous studies, models including cosmic rays launch stronger outflows that travel much further into the galactic halo. Cosmic-ray-supported outflows are also cooler than those driven by supernovae alone. With cosmic rays, star formation is suppressed and the thermal pressure is reduced. We find evidence for two distinct outflow phases. The warm outflows have high angular momentum and stay close to the galactic disc, while the hot outflow phase has low angular momentum and escapes from the centre deep into the halo. Cosmic rays can therefore have a strong impact on galaxy evolution by removing low angular momentum, possibly metal-enriched gas from the disc and injecting it into the circumgalactic medium.
Wide, deep, blind continuum surveys at submillimetre/millimetre (submm/mm) wavelengths are required to provide a full inventory of the dusty, distant Universe. However, conducting such surveys to the necessary depth, with sub-arcsec angular resolution, is prohibitively time-consuming, even for the most advanced submm/mm telescopes. Here, we report the most recent results from the ALMACAL project, which exploits the 'free' calibration data from the Atacama Large Millimetre/submillimetre Array (ALMA) to map the lines of sight towards and beyond the ALMA calibrators. ALMACAL has now covered 1,001 calibrators, with a total sky coverage of around 0.3 deg$^2$ distributed across the sky accessible from the Atacama desert, and has accumulated more than 1,000 h of integration. The depth reached by combining multiple visits to each field makes ALMACAL capable of searching for faint, dusty, star-forming galaxies (DSFGs), with detections at multiple frequencies to constrain the emission mechanism. Based on the most up-to-date ALMACAL database, we report the detection of 186 DSFGs with flux densities down to $S_{870\mu \rm m} \sim 0.2$ mJy, comparable with existing ALMA large surveys but less susceptible to cosmic variance. We report the number counts at five wavelengths between 870 $\mu$m and 3 mm, in ALMA bands 3, 4, 5, 6 and 7, providing a benchmark for models of galaxy formation and evolution. By integrating the observed number counts and the best-fitting functions, we also present the resolved fraction of the cosmic infrared background (CIB) and the CIB spectral shape. Combining existing surveys, ALMA has currently resolved about half of the CIB in the submm/mm regime.
Cosmic voids are promising cosmological laboratories for studying the dark energy phenomenon and alternative gravity theories. They are receiving special attention nowadays in view of the new generation of galaxy spectroscopic surveys, which cover an unprecedented volume and redshift range. There are two primary statistics in void studies: (i) the void size function, which characterises the abundance of voids, and (ii) the void-galaxy cross-correlation function, which contains information about the density and velocity fields in these regions. However, a complete description of the effects of geometrical (Alcock-Paczynski effect, AP) and dynamical (Kaiser effect, RSD) distortions around voids is necessary in order to design reliable cosmological tests based on these statistics. Observational measurements show prominent anisotropic patterns that lead to biased cosmological constraints if they are not properly modelled. This thesis addresses this problem by presenting a theoretical and statistical framework, based on dynamical and cosmological foundations, capable of describing all the underlying effects involved: the expansion effect (t-RSD), the off-centring effect (v-RSD), the AP-volume effect, and the ellipticity effect (e-RSD). These effects can be understood by studying the mapping of voids between real and redshift space. In this way, we lay the foundations for a proper modelling of the aforementioned statistics. In addition, we present a new cosmological test based on two perpendicular projections of the correlation function. The method is fiducial-cosmology free, which allows us to effectively break any possible degeneracy between the cosmological parameters involved. Moreover, it allows us to significantly reduce the number of mock catalogues needed to estimate covariances.
Context: Recent observations with the Atacama Large Millimeter Array (ALMA) have shown that the large dust aggregates observed at millimeter wavelengths settle to the midplane into a remarkably thin layer. Aims: We intend to find out if the geometric thinness of these layers is evidence against the vertical shear instability (VSI) operating in these disks. Methods: We performed hydrodynamic simulations of a protoplanetary disk with a locally isothermal equation of state, and let the VSI fully develop. We sprinkled dust particles and followed their motion as they got stirred up by the VSI. We determined for which grain size the layer becomes geometrically thin enough to be consistent with ALMA observations. We then verified if, with these grain sizes, it is still possible to generate a moderately optically thick layer at millimeter wavelengths, as observations appear to indicate. Results: We found that even very large dust aggregates with Stokes numbers close to unity get stirred up to relatively large heights above the midplane by the VSI, which is in conflict with the observed geometric thinness. For grains so large that the Stokes number exceeds unity, the layer can be made to remain thin, but we show that it is hard to make dust layers optically thick at ALMA wavelengths (e.g., $\tau(1.3\,{\rm mm})\geq 1$) with such large dust aggregates. Conclusions: We conclude that protoplanetary disks with geometrically thin midplane dust layers cannot be VSI unstable, at least not down to the disk midplane. Explanations for the inhibition of the VSI include a reduced dust-to-gas ratio of the small dust grains that are responsible for the radiative cooling of the disk. A reduction of small grains by a factor of between 10 and 100 is sufficient to quench the VSI. Such a reduction is plausible in dust growth models, and still consistent with observations at optical and infrared wavelengths.
Under some assumptions on the hierarchy of relevant energy scales, we compute the nonrelativistic QCD (NRQCD) long-distance matrix elements (LDMEs) for inclusive production of $J/\psi$, $\psi(2S)$, and $\Upsilon$ states based on the potential NRQCD (pNRQCD) effective field theory. Based on the pNRQCD formalism, we obtain expressions for the LDMEs in terms of the quarkonium wavefunctions at the origin and universal gluonic correlators, which do not depend on the heavy quark flavor or the radial excitation. This greatly reduces the number of nonperturbative unknowns and substantially enhances the predictive power of the nonrelativistic effective field theory formalism. We obtain improved determinations of the LDMEs for $J/\psi$, $\psi(2S)$, and $\Upsilon$ states thanks to the universality of the gluonic correlators, and obtain phenomenological results for cross sections and polarizations at large transverse momentum that agree well with measurements at the LHC.
Multiply imaged time-variable sources can be used to measure absolute distances as a function of redshifts and thus determine cosmological parameters, chiefly the Hubble Constant H$_0$. In the two decades up to 2020, through a number of observational and conceptual breakthroughs, this so-called time-delay cosmography has reached a precision sufficient to be an important independent voice in the current ``Hubble tension'' debate between early- and late-universe determinations of H$_0$. The 2020s promise to deliver major advances in time-delay cosmography, owing to the large number of lenses to be discovered by new and upcoming surveys and the vastly improved capabilities for follow-up and analysis. In this review -- after a brief summary of the foundations of the method and recent advances -- we outline the opportunities for the decade and the challenges that will need to be overcome in order to meet the goal of the determination of H$_0$ from time-delay cosmography with 1\% precision and accuracy.
Luminous red galaxies (LRGs) and blue star-forming emission-line galaxies (ELGs) are key tracers of large-scale structure used by cosmological surveys. Theoretical predictions for such data are often done via simplistic models for the galaxy-halo connection. In this work, we use the large, high-fidelity hydrodynamical simulation of the MillenniumTNG project (MTNG) to inform a new phenomenological approach for obtaining an accurate and flexible galaxy-halo model on small scales. Our aim is to study LRGs and ELGs at two distinct epochs, $z = 1$ and $z = 0$, and recover their clustering down to very small scales, $r \sim 0.1 \ {\rm Mpc}/h$, i.e. the one-halo regime, while a companion paper extends this to a two-halo model for larger distances. The occupation statistics of ELGs in MTNG inform us that: (1) the satellite occupations exhibit a slightly super-Poisson distribution, contrary to commonly made assumptions, and (2) that haloes containing at least one ELG satellite are twice as likely to host a central ELG. We propose simple recipes for modeling these effects, each of which calls for the addition of a single free parameter to simpler halo occupation models. To construct a reliable satellite population model, we explore the LRG and ELG satellite radial and velocity distributions and compare them with those of subhalos and particles in the simulation. We find that ELGs are anisotropically distributed within halos, which together with our occupation results provides strong evidence for cooperative galaxy formation (manifesting itself as one-halo galaxy conformity); i.e.~galaxies with similar properties form in close proximity to each other. Our refined galaxy-halo model represents a useful improvement of commonly used analysis tools and thus can be of help to increase the constraining power of large-scale structure surveys.
Upcoming large galaxy surveys will subject the standard cosmological model, $\Lambda$CDM, to new precision tests. These can be tightened considerably if theoretical models of galaxy formation are available that can predict galaxy clustering and galaxy-galaxy lensing on the full range of measurable scales throughout volumes as large as those of the surveys and with sufficient flexibility that uncertain aspects of the underlying astrophysics can be marginalised over. This, in particular, requires mock galaxy catalogues in large cosmological volumes that can be directly compared to observation, and can be optimised empirically by Markov Chain Monte Carlo or other similar schemes to eliminate or estimate astrophysical parameters related to galaxy formation when constraining cosmology. Semi-analytic galaxy formation methods implemented on top of cosmological dark matter simulations offer a computationally efficient approach to constructing physically based and flexibly parametrised galaxy formation models, and as such they are more powerful than the still faster, but purely empirical, models. Here we introduce an updated methodology for the semi-analytic L-GALAXIES code, allowing it to be applied to simulations of the new MillenniumTNG project, producing galaxies directly on fully continuous past lightcones, potentially over the full sky, out to high redshift, and for all galaxies more massive than $\sim 10^8\,{\rm M}_\odot$. We investigate the numerical convergence of the resulting predictions, and study the projected galaxy clustering signals of different samples. The new methodology can be viewed as an important step towards more faithful forward-modelling of observational data, helping to reduce systematic distortions in the comparison of theory to observations.
Cosmological simulations are an important theoretical pillar for understanding nonlinear structure formation in our Universe and for relating it to observations on large scales. In several papers, we introduce our MillenniumTNG (MTNG) project, which provides a comprehensive set of high-resolution, large-volume simulations of cosmic structure formation aiming to better understand physical processes on large scales and to help interpret upcoming large-scale galaxy surveys. We here focus on the full-physics box MTNG740, which simulates a volume of $(740\,\mathrm{Mpc})^3$ with a baryonic mass resolution of $3.1\times~10^7\,\mathrm{M_\odot}$ using \textsc{arepo} with $80.6$~billion cells and the IllustrisTNG galaxy formation model. We verify that the galaxy properties produced by MTNG740 are consistent with those of the TNG simulations, as well as with more recent observations. We focus on galaxy clusters and analyse cluster scaling relations and radial profiles. We show that both are broadly consistent with various observational constraints. We demonstrate that the Sunyaev-Zel'dovich (SZ) signal on a deep lightcone is consistent with Planck limits. Finally, we compare MTNG740 clusters with galaxy clusters found in Planck and the SDSS-8 RedMaPPer richness catalogue in observational space, finding very good agreement as well. However, {\it simultaneously} matching cluster masses, richness, and Compton-$y$ requires us to assume that the SZ mass estimates for Planck clusters are underestimated by $0.2$~dex on average. Thanks to its unprecedented volume for a high-resolution hydrodynamical calculation, the MTNG740 simulation offers rich possibilities to study baryons in galaxies, galaxy clusters, and in large-scale structure, and in particular their impact on upcoming large cosmological surveys.
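The quoted $0.2$~dex offset is a logarithmic quantity; converting it to a linear mass correction is a one-liner (a trivial snippet of our own, added only to make the unit concrete):

```python
# "dex" is a factor in log10 units: a 0.2 dex underestimate means the true
# masses are larger than the SZ estimates by 10**0.2, i.e. roughly 58 per cent.
factor = 10 ** 0.2
print(round(factor, 2))  # ~1.58
```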
Modern redshift surveys are tasked with mapping out the galaxy distribution over enormous distance scales. Existing hydrodynamical simulations, however, do not reach the volumes needed to match upcoming surveys. We present results for the clustering of galaxies using a new, large-volume hydrodynamical simulation as part of the MillenniumTNG (MTNG) project. With a computational volume that is $\approx15$ times larger than the next largest such simulation currently available, we show that MTNG is able to accurately reproduce the observed clustering of galaxies as a function of stellar mass. When galaxies are separated by colour, some discrepancies with respect to the observed population appear, which can be attributed to the quenching of satellite galaxies in our model. We combine MTNG galaxies with those generated using a semi-analytic model to emulate the sample selection of luminous red galaxies (LRGs) and emission line galaxies (ELGs), and show that although the bias of these populations is approximately (but not exactly) constant on scales larger than $\approx10$ Mpc, there is significant scale-dependent bias on smaller scales. The amplitude of this effect varies between the two galaxy types, and also between the semi-analytic model and MTNG. We show that this is related to the distribution of haloes hosting LRGs and ELGs. Using mock SDSS-like catalogues generated on MTNG lightcones, we demonstrate the existence of prominent baryonic acoustic features in the large-scale galaxy clustering. We also demonstrate the presence of realistic redshift space distortions in our mocks, finding excellent agreement with the multipoles of the redshift-space clustering measured in SDSS data.
We report the discovery and characterization of two small transiting planets orbiting the bright M3.0V star TOI-1468 (LSPM J0106+1913), whose transit signals were detected in the photometric time series in three sectors of the TESS mission. We confirm the planetary nature of both of them using precise radial velocity measurements from the CARMENES and MAROON-X spectrographs, and supplement them with ground-based transit photometry. A joint analysis of all these data reveals that the shorter-period planet, TOI-1468 b ($P_b = 1.88$ d), has a planetary mass of $M_b = 3.21 \pm 0.24\,M_\oplus$ and a radius of $R_b = 1.280^{+0.038}_{-0.039}\,R_\oplus$, resulting in a density of $\rho_b = 8.39^{+1.05}_{-0.92}$ g cm$^{-3}$, which is consistent with a mostly rocky composition. For the outer planet, TOI-1468 c ($P_c = 15.53$ d), we derive a mass of $M_c = 6.64^{+0.67}_{-0.68}\,M_\oplus$, a radius of $R_c = 2.06 \pm 0.04\,R_\oplus$, and a bulk density of $\rho_c = 2.00^{+0.21}_{-0.19}$ g cm$^{-3}$, which corresponds to a rocky core composition with an H/He gas envelope. These planets are located on opposite sides of the radius valley, making our system an interesting discovery, as there are only a handful of other systems with the same properties. This discovery can further help determine a more precise location of the radius valley for small planets around M dwarfs and, therefore, shed more light on planet formation and evolution scenarios.
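The quoted bulk density follows directly from the mass and radius via $\rho = M / (\tfrac{4}{3}\pi R^3)$. A minimal consistency sketch (our own check, not code from the paper) reproduces the value for TOI-1468 b within the stated uncertainties:

```python
# Consistency check (our own, not from the paper): bulk density from
# rho = M / (4/3 * pi * R**3), with mass in Earth masses and radius in
# Earth radii converted to cgs units.
import math

M_EARTH_G = 5.972e27   # Earth mass [g]
R_EARTH_CM = 6.371e8   # Earth radius [cm]

def bulk_density(mass_mearth, radius_rearth):
    """Bulk density in g/cm^3 from mass [M_earth] and radius [R_earth]."""
    mass = mass_mearth * M_EARTH_G
    radius = radius_rearth * R_EARTH_CM
    return mass / (4.0 / 3.0 * math.pi * radius ** 3)

rho_b = bulk_density(3.21, 1.280)
print(round(rho_b, 2))  # ~8.4 g/cm^3, consistent with the quoted 8.39 within errors
```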
Radial velocities and photometry are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/666/A155