Previous studies have shown that dark matter-deficient galaxies (DMDG) such as NGC 1052-DF2 (hereafter DF2) can result from tidal stripping. An important question, though, is whether such a stripping scenario can explain DF2's large specific frequency of globular clusters (GCs). After all, tidal stripping and shocking preferentially remove matter from the outskirts. We examine this using idealized, high-resolution simulations of a regular dark matter-dominated galaxy that is accreted on to a massive halo. As long as the initial (pre-infall) dark matter halo of the satellite is cored, which is consistent with predictions of cosmological, hydrodynamical simulations, the tidal remnant can be made to resemble DF2 in all its properties, including its GC population. The required orbit has a pericentre at the 8.3 percentile of the distribution for subhaloes at infall, and thus is not particularly extreme. On this orbit the satellite loses 98.5 (30) per cent of its original dark matter (stellar) mass, and thus evolves into a DMDG. The fraction of GCs that is stripped off depends on the initial radial distribution. If, at infall, the median projected radius of the GC population is roughly two times that of the stars, consistent with observations of isolated galaxies, only ~20 per cent of the GCs are stripped off. This is less than for the stars, which is due to dynamical friction counteracting the tidal stirring. We predict that, if indeed DF2 was crafted by strong tides, its stellar outskirts should have a very shallow metallicity gradient.
Using bona fide black hole (BH) mass estimates from reverberation mapping and the line ratio [Si VI] 1.963$\rm{\mu m}$/Brγ$_{\rm broad}$ as a tracer of the AGN ionizing continuum, we find a novel BH-mass scaling relation of the form log(M$_{\rm BH}$) = (6.40 ± 0.17) - (1.99 ± 0.37) × log([Si VI]/Brγ$_{\rm broad}$), with a dispersion of 0.47 dex, over the BH mass interval 10$^6$-10$^8$ M⊙. Following the geometrically thin accretion disc approximation and after surveying a basic parameter space for coronal-line production, we believe one of the main drivers of the relation is the effective temperature of the disc, which is effectively sampled by the [Si VI] 1.963$\rm{\mu m}$ coronal line for the range of BH masses considered. By means of CLOUDY photoionization models, the observed anticorrelation appears to be formally in line with the thin-disc prediction T$_{\rm disc}$ ∝ M$_{\rm BH}^{-1/4}$.
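As an illustration, the quoted scaling relation can be evaluated directly. The helper below is a hypothetical sketch that simply inverts the published fit; the coefficients come from the abstract, while the function name and test value are assumptions:

```python
import math

# log10(M_BH / M_sun) = (6.40 +/- 0.17) - (1.99 +/- 0.37) * log10([Si VI]/Br-gamma_broad)
# Coefficients from the abstract; intrinsic dispersion ~0.47 dex is not modelled here.
def bh_mass_from_line_ratio(si_vi_over_brgamma):
    """Return the BH mass in solar masses implied by the
    [Si VI] 1.963 um / Br-gamma(broad) flux ratio."""
    log_mbh = 6.40 - 1.99 * math.log10(si_vi_over_brgamma)
    return 10.0 ** log_mbh

# A line ratio of 1 gives log(M_BH) = 6.40 by construction, i.e. ~2.5e6 M_sun.
print(f"{bh_mass_from_line_ratio(1.0):.3e}")
```

Note the 0.47 dex scatter means individual masses carry a factor ~3 uncertainty even before the coefficient errors are propagated.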
Basis transformations often involve Fierz and other relations that are only valid in $D=4$ dimensions. In general $D$ space-time dimensions, however, evanescent operators have to be introduced in order to preserve such identities. These evanescent operators contribute to one-loop basis transformations as well as to two-loop renormalization group running. We present a simple procedure for systematically changing basis at the one-loop level by obtaining the shifts due to evanescent operators. As an example, we apply this method to derive the one-loop basis transformation from the BMU basis, useful for NLO QCD calculations, to the JMS basis used in the matching to the SMEFT.
The diffusive epidemic process is a paradigmatic example of an absorbing state phase transition in which healthy and infected individuals spread with different diffusion constants. Using stochastic activity spreading simulations in combination with finite-size scaling analyses we reveal two qualitatively different processes that characterize the critical dynamics: subdiffusive propagation of infection clusters and diffusive fluctuations in the healthy population. This suggests the presence of a strong-coupling regime and sheds new light on a long-standing debate about the theoretical classification of the system.
We present results on star cluster properties from a series of high-resolution smoothed particle hydrodynamics (SPH) simulations of isolated dwarf galaxies, carried out as part of the GRIFFIN project. The simulations, at sub-parsec spatial resolution and with a minimum particle mass of 4 M⊙, incorporate non-equilibrium heating, cooling, and chemistry processes, and realize individual massive stars. The feedback channels of massive stars that are followed include a spatially and temporally variable interstellar radiation field, radiation input by photo-ionization, and supernova explosions. Varying the star formation efficiency per free-fall time in the range ϵff = 0.2-50 per cent changes neither the star formation rates nor the outflow rates. While the environmental densities at star formation change significantly with ϵff, the ambient densities of supernovae are independent of ϵff, indicating a decoupling of the two processes. At low ϵff, gas collapses further before star formation, resulting in the formation of more massive and increasingly bound star clusters, which are typically not destroyed. With increasing ϵff, there is a trend towards shallower cluster mass functions, and the cluster formation efficiency Γ for young bound clusters decreases from 50 per cent to ∼1 per cent, showing evidence for cluster disruption. However, none of our simulations form low-mass (<103 M⊙) clusters with structural properties in perfect agreement with observations. Traditional star formation models used in galaxy formation simulations based on local free-fall times may therefore be unable to capture star cluster properties without significant fine-tuning.
LiteBIRD, the Lite (Light) satellite for the study of B-mode polarization and Inflation from cosmic background Radiation Detection, is a space mission for primordial cosmology and fundamental physics. The Japan Aerospace Exploration Agency (JAXA) selected LiteBIRD in May 2019 as a strategic large-class (L-class) mission, with an expected launch in the late 2020s using JAXA's H3 rocket. LiteBIRD is planned to orbit the Sun-Earth Lagrangian point L2, where it will map the cosmic microwave background (CMB) polarization over the entire sky for three years, with three telescopes in 15 frequency bands between 34 and 448 GHz, to achieve an unprecedented total sensitivity of 2.2$\mu$K-arcmin, with a typical angular resolution of 0.5$^\circ$ at 100 GHz. The primary scientific objective of LiteBIRD is to search for the signal from cosmic inflation, either making a discovery or ruling out well-motivated inflationary models. The measurements of LiteBIRD will also provide us with insight into the quantum nature of gravity and other new physics beyond the standard models of particle physics and cosmology. We provide an overview of the LiteBIRD project, including scientific objectives, mission and system requirements, operation concept, spacecraft and payload module design, expected scientific outcomes, potential design extensions and synergies with other projects.
The Hubble constant (H0) is one of the fundamental parameters in cosmology, but there is a heated debate around the > 4σ tension between the local Cepheid distance ladder and early-Universe measurements. Strongly lensed Type Ia supernovae (LSNe Ia) offer an independent and direct way to measure H0, for which a time-delay measurement between the multiple supernova (SN) images is required. In this work, we present two machine learning approaches for measuring time delays in LSNe Ia, namely, a fully connected neural network (FCNN) and a random forest (RF). For the training of the FCNN and the RF, we simulate mock LSNe Ia from theoretical SN Ia models that include observational noise and microlensing. We test the generalizability of the machine learning models by using a final test set based on empirical LSN Ia light curves not used in the training process, and we find that only the RF provides a low enough bias to achieve precision cosmology; the RF is therefore preferred over our FCNN approach for applications to real systems. For the RF with single-band photometry in the i band, we obtain an accuracy better than 1% in all investigated cases for time delays longer than 15 days, assuming follow-up observations with a 5σ point-source depth of 24.7, a two-day cadence with a few random gaps, and a detection of the LSNe Ia 8 to 10 days before peak in the observer frame. In terms of precision, we can achieve an approximately 1.5-day uncertainty in the i band for a typical source redshift of ∼0.8 under the same assumptions. To improve the measurement, we find that using three bands, where we train an RF for each band separately and combine them afterward, helps to reduce the uncertainty to ∼1.0 day. The dominant source of uncertainty is the observational noise, and the depth is therefore an especially important factor when follow-up observations are triggered. We have publicly released the microlensed spectra and light curves used in this work.
https://github.com/shsuyu/HOLISMOKES-public/tree/main/HOLISMOKES_VII
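As a rough illustration of the RF idea described above (not the authors' pipeline), the sketch below trains a scikit-learn random forest to regress a time delay from a pair of mock light curves. The Gaussian light-curve shape, the 2-day cadence, the noise level, and the delay range are all assumptions chosen only for the demonstration:

```python
# Minimal sketch: estimate the delay between two noisy images of a
# transient from their sampled fluxes, using a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 2.0)  # days since detection, 2-day cadence

def mock_pair(delay):
    """Two noisy images of a Gaussian-shaped light curve,
    the second image delayed by `delay` days."""
    lc = lambda tt: np.exp(-0.5 * ((tt - 20.0) / 8.0) ** 2)
    noise = 0.02  # assumed fractional photometric noise
    img1 = lc(t) + rng.normal(0.0, noise, t.size)
    img2 = lc(t - delay) + rng.normal(0.0, noise, t.size)
    return np.concatenate([img1, img2])

# Training set: random delays with their sampled light-curve pairs.
delays = rng.uniform(5.0, 30.0, 2000)
X = np.array([mock_pair(d) for d in delays])

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, delays)

# Recover the delay of an unseen pair.
true = 17.0
pred = rf.predict(mock_pair(true)[None, :])[0]
print(f"true delay {true:.1f} d, RF estimate {pred:.1f} d")
```

The real analysis additionally folds in microlensing, realistic SN Ia spectral models, and multi-band photometry; this toy only shows why a regression forest on sampled fluxes can pick up a delay at all.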
Context. The dynamics of the intracluster medium (ICM) is affected by turbulence driven by several processes, such as mergers, accretion and feedback from active galactic nuclei.
Aims: X-ray surface brightness fluctuations have been used to constrain turbulence in galaxy clusters. Here, we use simulations to further investigate the relation between gas density and turbulent velocity fluctuations, with a focus on the effect of the stratification of the ICM.
Methods: In this work, we studied the turbulence driven by hierarchical accretion by analysing a sample of galaxy clusters simulated with the cosmological code ENZO. We used a fixed scale filtering approach to disentangle laminar from turbulent flows.
Results: In dynamically perturbed galaxy clusters, we find a relation between the root mean square of density and velocity fluctuations, albeit with a different slope than previously reported. The Richardson number, a parameter representing the ratio between turbulence and buoyancy, shows a strong dependence on the filtering scale. However, we could not detect any strong relation between the Richardson number and the logarithmic density fluctuations, in contrast to results from recent, more idealised simulations. In particular, we find a strong effect from radial accretion, which appears to be the main driver of the gas fluctuations. The ubiquitous radial bias in the dynamics of the ICM suggests that homogeneity and isotropy are not always valid assumptions, even if the turbulent spectra follow Kolmogorov's scaling. Finally, we find that the slopes of the velocity and density spectra are independent of cluster-centric radius.
Recent wide-area surveys have enabled us to study the Milky Way in unprecedented detail. Its inner regions, hidden behind dust and gas, have been partially unveiled with the arrival of near-infrared (IR) photometric and spectroscopic data sets. Among recent discoveries is a population of low-mass globular clusters, long known to be missing, especially towards the Galactic bulge. In this work, five new low-luminosity globular clusters located towards the bulge area are presented. They were discovered by searching for groups in the multidimensional space of coordinates, colours, and proper motions from the Gaia EDR3 catalogue, and later confirmed with deeper near-IR photometry from the VVV survey. The clusters show well-defined red giant branches and, in some cases, horizontal branches, with their members forming a dynamically coherent structure in proper motion space. Four of them were confirmed by spectroscopic follow-up with the MUSE instrument on the ESO VLT. Photometric parameters were derived and, when available, metallicities, radial velocities, and orbits were determined. The new clusters Gran 1 and 5 are bulge globular clusters, while Gran 2, 3 and 4 present halo-like properties. Preliminary orbits indicate that Gran 1 might be related to the Main Progenitor, or the so-called 'low-energy' group, while Gran 2, 3 and 5 appear to follow the Gaia-Enceladus/Sausage structure. This study demonstrates that Gaia proper motions, combined with spectroscopic follow-up and colour-magnitude diagrams, are required to confirm the nature of cluster candidates towards the inner Galaxy. High stellar crowding and differential extinction may hide other low-luminosity clusters.
Cosmology requires new physics beyond the Standard Model of elementary particles and fields. What is the fundamental physics behind dark matter and dark energy? What generated the initial fluctuations in the early Universe? Polarised light of the cosmic microwave background (CMB) may hold the key to answers. In this article, we discuss two new developments in this research area. First, if the physics behind dark matter and dark energy violates parity symmetry, their coupling to photons rotates the plane of linear polarisation as the CMB photons travel more than 13 billion years. This effect is known as `cosmic birefringence': space filled with dark matter and dark energy behaves as if it were a birefringent material, like a crystal. A tantalising hint of such a signal has been found with a statistical significance of $3\sigma$. Next, the period of accelerated expansion in the very early Universe, called `cosmic inflation', produced a stochastic background of primordial gravitational waves (GW). What generated these GW? The leading idea is vacuum fluctuations in spacetime, but matter fields could also produce a significant amplitude of primordial GW. Finding their origin using CMB polarisation opens a new window into the physics behind inflation. These new scientific targets may influence how data from future CMB experiments are collected, calibrated, and analysed.
Recent cosmological analyses rely on the ability to accurately sample from high-dimensional posterior distributions. A variety of algorithms have been applied in the field, but justification of the particular sampler choice and settings is often lacking. Here we investigate three such samplers to motivate and validate the algorithm and settings used for the Dark Energy Survey (DES) analyses of the first 3 years (Y3) of data from combined measurements of weak lensing and galaxy clustering. We employ the full DES Year 1 likelihood alongside a much faster approximate likelihood, which enables us to assess the outcomes from each sampler choice and demonstrate the robustness of our full results. We find that the ellipsoidal nested sampling algorithm $\texttt{MultiNest}$ reports inconsistent estimates of the Bayesian evidence and somewhat narrower parameter credible intervals than the sliced nested sampling implemented in $\texttt{PolyChord}$. We compare the findings from $\texttt{MultiNest}$ and $\texttt{PolyChord}$ with parameter inference from the Metropolis-Hastings algorithm, finding good agreement. We determine that $\texttt{PolyChord}$ provides a good balance of speed and robustness, and recommend different settings for testing purposes and final chains for analyses with DES Y3 data. Our methodology can readily be reproduced to obtain suitable sampler settings for future surveys.
Quantum coherence is one of the most striking features of quantum mechanics, rooted in the superposition principle. Recently, it has been demonstrated that it is possible to harvest quantum coherence from a coherent scalar field. In order to explore a new method of detecting axion dark matter, we consider a point-like Unruh-DeWitt detector coupled to the axion field and quantify a coherence measure of the detector. We show that the detector can harvest quantum coherence from the axion dark matter. To be more precise, we consider a two-level electron system in an atom as the detector. In this case, we obtain the coherence measure $C = 2.2 \times 10^{-6}\,\gamma\,(T/1\,\mathrm{s})$, where $T$ is the observation time and $\gamma$ the Lorentz factor. At the same time, the axion mass $m_a$ we can probe is determined by the energy gap of the detector.
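To make the quoted scaling concrete, the snippet below evaluates $C = 2.2\times10^{-6}\,\gamma\,(T/1\,\mathrm{s})$ for an illustrative observation time; the helper name and the choice of a one-day integration with $\gamma \sim 1$ are assumptions, not values from the paper:

```python
# Back-of-the-envelope evaluation of the coherence measure quoted above.
def coherence_measure(T_seconds, gamma=1.0):
    """C = 2.2e-6 * gamma * (T / 1 s), with T in seconds."""
    return 2.2e-6 * gamma * T_seconds

# One day of observation with a non-relativistic flow (gamma ~ 1):
print(f"C ≈ {coherence_measure(86400.0):.2f}")  # ≈ 0.19
```

The linear growth with $T$ and $\gamma$ is the point of interest: longer coherent integration directly increases the harvested coherence.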
We present a demonstration of the in-flight polarization angle calibration for the JAXA/ISAS second strategic large class mission, LiteBIRD, and estimate its impact on the measurement of the tensor-to-scalar ratio parameter, r, using simulated data. We generate a set of simulated sky maps with CMB and polarized foreground emission, and inject instrumental noise and polarization angle offsets to the 22 (partially overlapping) LiteBIRD frequency channels. Our in-flight angle calibration relies on nulling the EB cross correlation of the polarized signal in each channel. This calibration step has been carried out by two independent groups with a blind analysis, allowing an accuracy of the order of a few arc-minutes to be reached on the estimate of the angle offsets. Both the corrected and uncorrected multi-frequency maps are propagated through the foreground cleaning step, with the goal of computing clean CMB maps. We employ two component separation algorithms, the Bayesian-Separation of Components and Residuals Estimate Tool (B-SeCRET), and the Needlet Internal Linear Combination (NILC). We find that the recovered CMB maps obtained with algorithms that do not make any assumptions about the foreground properties, such as NILC, are only mildly affected by the angle miscalibration. However, polarization angle offsets strongly bias results obtained with the parametric fitting method. Once the miscalibration angles are corrected by EB nulling prior to the component separation, both component separation algorithms result in an unbiased estimation of the r parameter. While this work is motivated by the conceptual design study for LiteBIRD, its framework can be broadly applied to any CMB polarization experiment. In particular, the combination of simulation plus blind analysis provides a robust forecast by taking into account not only detector sensitivity but also systematic effects.
We use hydrodynamical separate universe simulations with the IllustrisTNG model to predict the local primordial non-Gaussianity (PNG) bias parameters $b_\phi$ and $b_{\phi\delta}$, which enter at leading order in the galaxy power spectrum and bispectrum. This is the first time that $b_{\phi\delta}$ is measured from either gravity-only or galaxy formation simulations. For dark matter halos, the popular assumption of universality overpredicts the $b_{\phi\delta}(b_1)$ relation in the range $1 \lesssim b_1 \lesssim 3$ by up to $\Delta b_{\phi\delta} \sim 3$ ($b_1$ is the linear density bias). The adequacy of the universality relation is worse for the simulated galaxies, with the relations $b_\phi(b_1)$ and $b_{\phi\delta}(b_1)$ being generically redshift-dependent and very sensitive to how galaxies are selected (we test total, stellar and black hole mass, black hole mass accretion rate and color). The uncertainties on $b_\phi$ and $b_{\phi\delta}$ have a direct, often overlooked impact on the constraints of the local PNG parameter $f_{\rm NL}$, which we study and discuss. For a survey with $V = 100\ {\rm Gpc}^3/h^3$ at $z=1$, uncertainties $\Delta b_\phi \lesssim 1$ and $\Delta b_{\phi\delta} \lesssim 5$ around values close to the fiducial can yield relatively unbiased constraints on $f_{\rm NL}$ using power spectrum and bispectrum data. We also show why priors on galaxy bias are useful even in analyses that fit for the products $f_{\rm NL} b_\phi$ and $f_{\rm NL} b_{\phi\delta}$. The strategies we discuss to deal with galaxy bias uncertainties can be straightforwardly implemented in existing $f_{\rm NL}$ constraint analyses (we provide fits for some of the bias relations). Our results motivate more work with galaxy formation simulations to refine our understanding of $b_\phi$ and $b_{\phi\delta}$ towards improved constraints on $f_{\rm NL}$.
We present MG-GLAM, a code developed for the very fast production of full N-body cosmological simulations in modified gravity (MG) models. We describe the implementation, numerical tests and first results of a large suite of cosmological simulations for two broad classes of MG models with derivative coupling terms - the Vainshtein- and Kmouflage-type models - which respectively feature the Vainshtein and Kmouflage screening mechanisms. Derived from the parallel particle-mesh code GLAM, MG-GLAM incorporates an efficient multigrid relaxation technique to solve the characteristic nonlinear partial differential equations of these models. For Kmouflage, we propose a new algorithm for the relaxation solver, and run the first simulations of the model to understand its cosmological behaviour. In a companion paper, we describe versions of this code developed for conformally coupled MG models, including several variants of f(R) gravity, the symmetron model and coupled quintessence. Altogether, MG-GLAM has so far implemented prototypes for most MG models of interest, and is broad and versatile. The code is highly optimised, with a tremendous (over two orders of magnitude) speedup when comparing its running time with earlier N-body codes, while still giving accurate predictions of the matter power spectrum and dark matter halo abundance. MG-GLAM is ideal for the generation of the large numbers of MG simulations needed to construct mock galaxy catalogues and accurate emulators for ongoing and future galaxy surveys.
We use a recent census of the Milky Way (MW) satellite galaxy population to constrain the lifetime of particle dark matter (DM). We consider two-body decaying dark matter (DDM) in which a heavy DM particle decays with lifetime $\tau$ comparable to the age of the Universe to a lighter DM particle (with mass splitting $\epsilon$) and to a dark radiation species. These decays impart a characteristic "kick velocity," $V_{\mathrm{kick}}=\epsilon c$, on the DM daughter particles, significantly depleting the DM content of low-mass subhalos and making them more susceptible to tidal disruption. We fit the suppression of the present-day DDM subhalo mass function (SHMF) as a function of $\tau$ and $V_{\mathrm{kick}}$ using a suite of high-resolution zoom-in simulations of MW-mass halos, and we validate this model on new DDM simulations of systems specifically chosen to resemble the MW. We implement our DDM SHMF predictions in a forward model that incorporates inhomogeneities in the spatial distribution and detectability of MW satellites and uncertainties in the mapping between galaxies and DM halos, the properties of the MW system, and the disruption of subhalos by the MW disk using an empirical model for the galaxy--halo connection. By comparing to the observed MW satellite population, we conservatively exclude DDM models with $\tau < 18\ \mathrm{Gyr}$ ($29\ \mathrm{Gyr}$) for $V_{\mathrm{kick}}=20\ \mathrm{km}\, \mathrm{s}^{-1}$ ($40\ \mathrm{km}\, \mathrm{s}^{-1}$) at $95\%$ confidence. These constraints are among the most stringent and robust small-scale structure limits on the DM particle lifetime and strongly disfavor DDM models that have been proposed to alleviate the Hubble and $S_8$ tensions.
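The kick-velocity relation quoted above, $V_{\mathrm{kick}} = \epsilon c$, maps the mass splitting directly to the daughter particle's recoil speed. The sketch below illustrates this; the specific $\epsilon$ values are chosen only to reproduce the 20 and 40 km/s benchmarks in the abstract and are not fits from the paper:

```python
# V_kick = epsilon * c for two-body decaying dark matter:
# the mass splitting epsilon sets the daughter particle's recoil speed.
C_KM_S = 299_792.458  # speed of light in km/s

def kick_velocity_km_s(epsilon):
    """Recoil speed of the DM daughter particle for mass splitting epsilon."""
    return epsilon * C_KM_S

# Illustrative splittings matching the abstract's 20 and 40 km/s benchmarks:
for eps in (6.67e-5, 1.33e-4):
    print(f"epsilon = {eps:.2e}  ->  V_kick ≈ {kick_velocity_km_s(eps):.0f} km/s")
```

Since subhalo internal velocities at these mass scales are of the same order, kicks of tens of km/s are enough to unbind or loosen low-mass subhalos, which is why the satellite census constrains $\tau$.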
Several marginally significant associations between high-energy neutrinos and potential astrophysical sources have been recently reported, but a conclusive identification of these sources remains challenging. We explore the use of Monte Carlo simulations to gain deeper insight into the implications of, in particular, the IC170922A-TXS 0506+056 observation. Assuming a null model, we find a 7.6% chance to mistakenly identify coincidences between flaring blazars and neutrino alerts in 10-year surveys. We confirm that a blazar-neutrino connection based on the ${\gamma}$-ray flux is required to find a low chance coincidence probability and, therefore, a significant IC170922A-TXS 0506+056 association. We then assume this blazar-neutrino connection for the whole population and find that the ratio of neutrino to ${\gamma}$-ray fluxes must be $\lesssim 10^{-2}$ in order not to overproduce the total number of neutrino alerts seen by IceCube. For the IC170922A-TXS 0506+056 association to make sense, we must either accept this low flux ratio or suppose that only some rare sub-population of blazars is capable of high-energy neutrino production. For example, if we consider neutrino production only in blazar flares, we expect a flux ratio between $10^{-3}$ and $10^{-1}$ to be consistent with a single coincident observation of a neutrino alert and flaring blazar. These conclusions are robust with respect to the uncertainties in our modelling assumptions.
We present a fast and precise method to approximate the physics model of the Karlsruhe Tritium Neutrino (KATRIN) experiment using a neural network. KATRIN is designed to measure the effective electron anti-neutrino mass $m_\nu $ using the kinematics of $\upbeta $-decay with a sensitivity of 200 meV at 90% confidence level. To achieve this goal, a highly accurate model prediction with relative errors below the $10^{-4}$-level is required. Using the regular numerical model for the analysis of the final KATRIN dataset is computationally extremely costly or requires approximations to decrease the computation time. Our solution to reduce the computational requirements is to train a neural network to learn the predicted $\upbeta $-spectrum and its dependence on all relevant input parameters. This results in a speed-up of the calculation by about three orders of magnitude, while meeting the stringent accuracy requirements of KATRIN.
CRESST is one of the most prominent direct detection experiments for dark matter particles with sub-GeV/c$^2$ mass. One of the advantages of the CRESST experiment is the possibility to include a large variety of nuclides in the target material used to probe dark matter interactions. In this work, we discuss in particular the interactions of dark matter particles with protons and neutrons of $^{6}$Li. This is now possible thanks to new calculations of the nuclear matrix elements of this specific Li isotope. To show the potential of using this particular nuclide for probing dark matter interactions, we used data collected previously by a CRESST prototype based on LiAlO$_2$ and operated in an above-ground test facility at the Max-Planck-Institut für Physik in Munich, Germany. In particular, the inclusion of $^{6}$Li in the limit calculation drastically improves the result obtained for spin-dependent interactions with neutrons in the whole mass range. The improvement is significant, greater than two orders of magnitude for dark matter masses below 1 GeV/c$^2$, compared to the limit previously published with the same data.
As part of the cosmology analysis using Type Ia Supernovae (SN Ia) in the Dark Energy Survey (DES), we present photometrically identified SN Ia samples using multiband light curves and host galaxy redshifts. For this analysis, we use the photometric classification framework SuperNNova, trained on realistic DES-like simulations. For reliable classification, we process the DES SN programme (DES-SN) data and introduce improvements to the classifier architecture, obtaining classification accuracies of more than 98 per cent on simulations. This is the first SN classification to make use of ensemble methods, resulting in more robust samples. Using photometry, host galaxy redshifts, and a classification probability requirement, we identify 1863 SNe Ia, from which we select 1484 cosmology-grade SNe Ia spanning the redshift range 0.07 < z < 1.14. We find good agreement between the light-curve properties of the photometrically selected sample and simulations. Additionally, we create similar SN Ia samples using two types of Bayesian Neural Network classifiers that provide uncertainties on the classification probabilities. We test the feasibility of using these uncertainties as indicators for out-of-distribution candidates and model confidence. Finally, we discuss the implications of photometric samples and classification methods for future surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time.
We describe a systematic approach for the evaluation of Witten diagrams for multi-loop scattering amplitudes of a conformally coupled scalar $\phi^4$-theory in Euclidean AdS$_4$, by recasting the Witten diagrams as flat space Feynman integrals. We derive closed form expressions for the anomalous dimensions for all double-trace operators up to the second order in the coupling constant. We explain the relation between the flat space unitarity methods and the discontinuities of the short distance expansion on the boundary of Witten diagrams.
The integrated shear 3-point correlation function ζ_± is a higher-order statistic of the cosmic shear field that describes the modulation of the 2-point correlation function ξ_± by long-wavelength features in the field. Here, we introduce a new theoretical model to calculate ζ_± that is accurate on small angular scales and that allows baryonic feedback effects to be taken into account. Our model builds on the realization that the small-scale ζ_± is dominated by the non-linear matter bispectrum in the squeezed limit, which can be evaluated accurately using the non-linear matter power spectrum and its first-order response functions to density and tidal field perturbations. We demonstrate the accuracy of our model by showing that it reproduces the small-scale ζ_± measured in simulated cosmic shear maps. The impact of baryonic feedback enters effectively only through the corresponding impact on the non-linear matter power spectrum, thereby making it possible to account for these astrophysical effects on ζ_± similarly to how they are currently accounted for on ξ_±. Using a simple idealized Fisher matrix forecast for a DES-like survey we find that, compared to ξ_±, a combined ξ_± & ζ_± analysis can lead to improvements of order 20-40 per cent on the constraints of cosmological parameters such as σ_8 or the dark energy equation of state parameter w_0. We find similar levels of improvement on the constraints of the baryonic feedback parameters, which strengthens the prospects for cosmic shear data to obtain tight constraints not only on cosmology but also on astrophysical feedback models. These encouraging results motivate future work on the integrated shear 3-point correlation function towards applications to real survey data.
This is the second part of a thorough investigation of the redshift-space effects that affect void properties and the impact they have on cosmological tests. Here, we focus on the void-galaxy cross-correlation function, specifically, on the projected versions that we developed in a previous work. The pillar of the analysis is the one-to-one relationship between real and redshift-space voids above the shot-noise level identified with a spherical void finder. Under this mapping, void properties are affected by three effects: (i) a systematic expansion as a consequence of the distortions induced by galaxy dynamics, (ii) the Alcock-Paczynski volume effect, which manifests as an overall expansion or contraction depending on the fiducial cosmology, and (iii) a systematic off-centring along the line of sight as a consequence of the distortions induced by void dynamics. We found that correlations are also affected by an additional source of distortions: the ellipticity of voids. This is the first time that distortions due to the off-centring and ellipticity effects are detected and quantified. With a simplified test, we verified that the Gaussian streaming model is still robust provided all these effects are taken into account, laying the foundations for improvements in current models in order to obtain unbiased cosmological constraints from spectroscopic surveys. Besides this practical importance, this analysis also encodes key information about the structure and dynamics of the Universe at the largest scales. Furthermore, some of the effects constitute cosmological probes by themselves, as is the case of the void ellipticity.
Context. X-ray- and extreme-ultraviolet- (together: XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T-Tauri stars may crucially impact disk evolution, affecting both gas and dust distributions.
Aims: We constrain the dust densities in a typical XEUV-driven outflow, and determine whether these winds can be observed at μm-wavelengths.
Methods: We used dust trajectories modelled atop a 2D hydrodynamical gas model of a protoplanetary disk irradiated by a central T-Tauri star. With these and two different prescriptions for the dust distribution in the underlying disk, we constructed wind density maps for individual grain sizes. We used the dust density distributions obtained to synthesise observations in scattered and polarised light.
Results: For an XEUV-driven outflow around an M* = 0.7 M⊙ T-Tauri star with LX = 2 × 1030 erg s−1, we find a dust mass-loss rate Ṁdust ≲ 4.1 × 10−11 M⊙ yr−1 for an optimistic estimate of dust densities in the wind (compared to Ṁgas ≈ 3.7 × 10−8 M⊙ yr−1). The synthesised scattered-light images suggest a distinct chimney structure emerging at intensities I∕Imax < 10−4.5 (10−3.5) at λobs = 1.6 (0.4) μm, while the features in the polarised-light images are even fainter. Observations synthesised from our model do not exhibit clear features for SPHERE IRDIS, but show a faint wind signature for JWST NIRCam under optimal conditions.
Conclusions: Unambiguous detections of photoevaporative XEUV winds launched from primordial disks are at least challenging with current instrumentation; this provides a possible explanation as to why disk winds are not routinely detected in scattered or polarised light. Our calculations show that disk scale heights retrieved from scattered-light observations should be only marginally affected by the presence of an XEUV wind.
Decays of the neutral and long-lived η and η′ mesons provide a unique, flavor-conserving laboratory to test low-energy Quantum Chromodynamics and search for new physics beyond the Standard Model. They have drawn world-wide attention in recent years and have inspired broad experimental programs in different high-intensity-frontier centers. New experimental data will offer critical inputs to precisely determine the light quark mass ratios, η-η′ mixing parameters, and hadronic contributions to the anomalous magnetic moment of the muon. At the same time, they will provide a sensitive probe to test potential new physics. This includes searches for hidden photons, light Higgs scalars, and axion-like particles that are complementary to worldwide efforts to detect new light particles below the GeV mass scale, as well as tests of discrete symmetry violation. In this review, we give an update on theoretical developments, discuss the experimental opportunities, and identify future research needed in this field.
Although galactic outflows play a key role in galaxy evolution, the exact mechanism by which they are driven is still far from understood, and our picture of the associated feedback processes therefore remains plagued by many enigmas. In this work, we present a simple toy model that can provide insight into how non-axisymmetric instabilities in galaxies (bars, spiral arms, warps) can drive radial flows that amplify the magnetic field locally and exponentially, to at least two orders of magnitude beyond the equipartition value, on a timescale of a few 100 Myr. Our predictions show that the process can lead to galactic outflows in barred spiral galaxies with a mass-loading factor η ≈ 0.1, in agreement with our numerical simulations. Moreover, our outflow mechanism could contribute to an understanding of the large fraction of barred spiral galaxies that show signs of galactic outflows in the CHANG-ES survey. Extending our model under the assumption of equipartition between magnetic and turbulent energy shows the importance of such processes in high-redshift galaxies. Simple estimates for the star formation rate in our model, together with cross-correlated masses from the star-forming main sequence at redshifts z ~ 2, allow us to estimate the outflow rate and mass-loading factors driven by non-axisymmetric instabilities and a subsequent radial inflow dynamo, giving mass-loading factors of η ≈ 0.1 for galaxies in the range of M⋆ = 10⁹-10¹² M⊙, in good agreement with recent results of SINFONI and KMOS 3D.
Several marginally significant associations between high-energy neutrinos and potential astrophysical sources have been recently reported, but a conclusive identification of these sources remains challenging. We explore the use of Monte Carlo simulations to gain deeper insight into the implications of, in particular, the IC170922A-TXS 0506+056 observation. Assuming a null model, we find a 7.6% chance of mistakenly identifying coincidences between flaring blazars and neutrino alerts in 10-year surveys. We confirm that a blazar-neutrino connection based on the ${\gamma}$-ray flux is required to find a low chance coincidence probability and, therefore, a significant IC170922A-TXS 0506+056 association. We then assume this blazar-neutrino connection for the whole population and find that the ratio of neutrino to ${\gamma}$-ray fluxes must be $\lesssim 10^{-2}$ in order not to overproduce the total number of neutrino alerts seen by IceCube. For the IC170922A-TXS 0506+056 association to make sense, we must either accept this low flux ratio or suppose that only some rare sub-population of blazars is capable of high-energy neutrino production. For example, if we consider neutrino production only in blazar flares, we expect a flux ratio between $10^{-3}$ and $10^{-1}$ to be consistent with a single coincident observation of a neutrino alert and flaring blazar. These conclusions are robust with respect to the uncertainties in our modelling assumptions.
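The null-model chance-coincidence estimate described above can be sketched with a toy Monte Carlo. All numbers below (spatial-match probability, flare duty cycle, alert and blazar counts) are illustrative assumptions, not values from the study:

```python
import random

def chance_coincidence(n_trials=20000, n_alerts=10, n_blazars=1000,
                       p_spatial=2e-5, duty_cycle=0.05, seed=1):
    """Toy null model: neutrino alerts land at random sky positions with no
    physical link to blazars. Estimate the probability that at least one
    alert in a survey coincides with a *flaring* blazar purely by chance.

    p_spatial : chance that one alert's error region contains one given
                blazar (illustrative assumption).
    duty_cycle: fraction of time a blazar spends flaring (illustrative).
    """
    rng = random.Random(seed)
    p_alert = p_spatial * n_blazars  # prob. of any spatial match per alert
    hits = 0
    for _ in range(n_trials):
        for _ in range(n_alerts):
            # spatial match first, then check whether that blazar is flaring
            if rng.random() < p_alert and rng.random() < duty_cycle:
                hits += 1
                break  # one coincidence is enough to count this survey
    return hits / n_trials
```

For the toy numbers above, the analytic expectation is 1 − (1 − p_alert · duty_cycle)^n_alerts ≈ 1%, and the Monte Carlo estimate scatters around that value.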
We investigate the asymptotia of decelerating and spatially flat FLRW spacetimes at future null infinity. We find that the asymptotic algebra of diffeomorphisms can be enlarged to a one-parameter deformation of the recently discovered Weyl-BMS algebra for asymptotically flat spacetimes by relaxing the boundary conditions. The deformation parameter is related to the equation of state of the fluid. We then study the equations of motion for asymptotically FLRW spacetimes with finite fluxes and show that the dynamics is fully constrained by the stress-tensor of the source. Finally, we propose an expression for the charges which are associated with the cosmological supertranslations and whose evolution equation features a novel contribution arising from the Hubble flow.
We construct extended TQFTs associated to Rozansky--Witten models with target manifolds $T^*\mathbb{C}^n$. The starting point of the construction is the 3-category whose objects are such Rozansky--Witten models, and whose morphisms are defects of all codimensions. By truncation, we obtain a (non-semisimple) 2-category $\mathcal{C}$ of bulk theories, surface defects, and isomorphism classes of line defects. Through a systematic application of the cobordism hypothesis we construct a unique extended oriented 2-dimensional TQFT valued in $\mathcal{C}$ for every affine Rozansky--Witten model. By evaluating this TQFT on closed surfaces we obtain the infinite-dimensional state spaces (graded by flavour and R-charges) of the initial 3-dimensional theory. Furthermore, we explicitly compute the commutative Frobenius algebras that classify the restrictions of the extended theories to circles and bordisms between them.
We carry out a dedicated search for strong-lens systems with high-redshift lens galaxies with the goal of extending strong lensing-assisted galaxy evolutionary studies to earlier cosmic time. Two strong-lens classifiers are constructed from a deep residual network and trained with datasets of different lens redshift and brightness distributions. We classify a sample of 5,356,628 pre-selected objects from the Wide layer fields in the second public data release of the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) by applying the two classifiers to their HSC $gri$-filter cutouts. Cutting off at thresholds that correspond to a false-positive rate of $10^{-3}$ on our test set, the two classifiers identify 5,468 and 6,119 strong-lens candidates. Visually inspecting the cutouts of those candidates results in 735 grade-A/B strong-lens candidates in total, of which 277 candidates are discovered for the first time. This is the single largest set of galaxy-scale strong-lens candidates discovered with HSC data to date, and nearly half of them (331/735) contain lens galaxies with photometric redshifts above 0.6. Our discoveries will serve as a valuable target list for ongoing and scheduled spectroscopic surveys such as the Dark Energy Spectroscopic Instrument, the Subaru Prime Focus Spectrograph project, and the Maunakea Spectroscopic Explorer.
We present a fast and precise method to approximate the physics model of the Karlsruhe Tritium Neutrino (KATRIN) experiment using a neural network. KATRIN is designed to measure the effective electron anti-neutrino mass using the kinematics of beta-decay with a sensitivity of 200 meV at 90% confidence level. To achieve this goal, a highly accurate model prediction with relative errors below the $10^{-4}$ level is required. Using the regular numerical model for the analysis of the final KATRIN dataset is computationally extremely costly or requires approximations to decrease the computation time. Our solution to reduce the computational requirements is to train a neural network to learn the predicted beta-spectrum and its dependence on all relevant input parameters. This results in a speed-up of the calculation by about three orders of magnitude, while meeting the stringent accuracy requirements of KATRIN.
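The surrogate idea can be illustrated with a minimal sketch: train a small network to emulate an expensive spectrum model, then evaluate the cheap network in its place. The spectrum shape, network size, and training settings below are toy assumptions, not the KATRIN model:

```python
import numpy as np

# Toy stand-in for an expensive spectrum model. The real KATRIN model
# integrates the beta spectrum numerically; here a cheap analytic
# placeholder keeps the sketch self-contained.
def slow_model(E):
    E0 = 1.0                          # schematic endpoint (arbitrary units)
    eps = np.clip(E0 - E, 0.0, None)  # distance below the endpoint
    return eps**2                     # schematic spectrum shape

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))  # energies
y = slow_model(X[:, 0])[:, None]           # "expensive" evaluations

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 1.0, (64, 1)) * 0.1; b2 = np.zeros(1)
lr, n = 0.05, len(X)
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    err = (H @ W2 + b2) - y                # prediction residual
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)         # backprop through tanh
    gW1 = X.T @ dH / n; gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(E):
    """Cheap emulator of slow_model after training."""
    H = np.tanh(np.atleast_2d(E).T @ W1 + b1)
    return (H @ W2 + b2).ravel()
```

In the real application the network is trained on the full set of relevant input parameters (not just the energy), so that the slow model need only be evaluated while generating the training set.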
Atmospheres of highly irradiated gas giant planets host a large variety of atomic and ionic species. Here we observe the thermal emission spectra of the two ultra-hot Jupiters WASP-33b and KELT-20b/MASCARA-2b in the near-infrared wavelength range with CARMENES. Via high-resolution Doppler spectroscopy, we searched for neutral silicon (Si) in their dayside atmospheres. We detect the Si spectral signature of both planets via cross-correlation with model spectra. Detection levels of 4.8σ and 5.4σ, respectively, are observed when assuming a solar atmospheric composition. This is the first detection of Si in exoplanet atmospheres. The presence of Si is an important finding due to its fundamental role in cloud formation and, hence, for the planetary energy balance. Since the spectral lines are detected in emission, our results also confirm the presence of an inverted temperature profile in the dayside atmospheres of both planets.
Study Analysis Group 21 (SAG21) of the Exoplanet Exploration Program Analysis Group (ExoPAG) was organized to study the effect of stellar contamination on space-based transmission spectroscopy, a method for studying exoplanetary atmospheres by measuring the wavelength-dependent radius of a planet as it transits its star. Transmission spectroscopy relies on a precise understanding of the spectrum of the star being occulted. However, stars are not homogeneous, constant light sources but have temporally evolving photospheres and chromospheres with inhomogeneities like spots, faculae, and plages. This SAG has brought together an interdisciplinary team of more than 100 scientists, with observers and theorists from the heliophysics, stellar astrophysics, planetary science, and exoplanetary atmosphere research communities, to study the current needs that can be addressed in this context to make the most of transit studies from current NASA facilities like HST and JWST. The analysis produced 14 findings, which fall into three Science Themes encompassing (1) how the Sun is used as our best laboratory to calibrate our understanding of stellar heterogeneities ("The Sun as the Stellar Benchmark"), (2) how stars other than the Sun extend our knowledge of heterogeneities ("Surface Heterogeneities of Other Stars") and (3) how to incorporate information gathered for the Sun and other stars into transit studies ("Mapping Stellar Knowledge to Transit Studies").
Black hole (BH) accretion discs formed in compact-object mergers or collapsars may be major sites of the rapid-neutron-capture (r-)process, but the conditions determining the electron fraction (Ye) remain uncertain given the complexity of neutrino transfer and angular-momentum transport. After discussing relevant weak-interaction regimes, we study the role of neutrino absorption for shaping Ye using an extensive set of simulations performed with two-moment neutrino transport and repeated without neutrino absorption. We vary the torus mass, BH mass and spin, and examine the impact of rest-mass and weak-magnetism corrections in the neutrino rates. We also test the dependence on the angular-momentum transport treatment by comparing axisymmetric models using the standard α-viscosity with viscous models assuming constant viscous length-scales (lt) and with 3D magnetohydrodynamic (MHD) simulations. Finally, we discuss the nucleosynthesis yields and basic kilonova properties. We find that absorption pushes Ye towards ~0.5 outside the torus, while inside the torus it increases the equilibrium value $Y_\mathrm{e}^{\mathrm{eq}}$ by ~0.05-0.2. Correspondingly, a substantial ejecta fraction is pushed above Ye = 0.25, leading to a reduced lanthanide fraction and a brighter, earlier, and bluer kilonova than without absorption. More compact tori with higher neutrino optical depth, τ, tend to have lower $Y_\mathrm{e}^{\mathrm{eq}}$ up to τ ~ 1-10, above which absorption becomes strong enough to reverse this trend. Disc ejecta are less (more) neutron rich when employing an lt = const. viscosity (MHD treatment). The solar-like abundance pattern found for our MHD model marginally supports collapsar discs as major r-process sites, although a strong r-process may be limited to phases of high mass-infall rates, $\dot{M} \gtrsim 2\times 10^{-2}$ M⊙ s−1.
We study stellar population and structural properties of massive $\log(M_{\star} / M_{\odot}) > 11$ galaxies at $z\sim2.7$ in the Magneticum (box 3) and IllustrisTNG (TNG100, TNG300) hydrodynamical simulations. We find stellar mass functions broadly consistent with observations, with no scarcity of massive, quiescent galaxies at $z\sim2.7$, but with a higher quiescent galaxy fraction at high masses in IllustrisTNG. Average ages of simulated quiescent galaxies are between 0.8 and 1.0 Gyr, older by a factor $\sim2$ than observed in spectroscopically-confirmed quiescent galaxies at similar redshift. Besides being potentially indicative of issues with star-formation recipes in simulations, this discrepancy might also be partly explained by limitations in the estimation of observed ages. We investigate the purity of simulated UVJ rest-frame color-selected massive quiescent samples with photometric uncertainties typical of deep surveys (e.g., COSMOS). We find evidence for significant contamination (up to 60 percent) by dusty star-forming galaxies in the UVJ region that is typically populated by older quiescent sources. Furthermore, simulations suggest that the completeness of UVJ-selected quiescent samples at this redshift may be reduced by 30 percent due to a high fraction of young quiescent galaxies not entering the UVJ quiescent region. Massive, quiescent galaxies in simulations have on average lower angular momenta and higher projected axis ratios and concentrations than star-forming counterparts. Average sizes of simulated quiescent galaxies are relatively close to observed ones, and broadly consistent within the uncertainties. The average size ratio of quiescent and star-forming galaxies in the probed mass range is formally consistent with observations, although this result is partly affected by poor statistics.
Context. The mass of protoplanetary disks is arguably one of their most important quantities shaping their evolution toward planetary systems, but it remains a challenge to determine this quantity. Using the high spatial resolution now available on telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA), recent studies derived a relation between the disk surface density and the location of the "dust lines". This is a new concept in the field, linking the disk size at different continuum wavelengths with the radial distribution of grain populations of different sizes.
Aims: We aim to use a dust evolution model to test the dependence of the dust line location on disk gas mass. In particular, we are interested in the reliability of the method for disks showing radial substructures, as recent high-resolution observations revealed.
Methods: We performed dust evolution calculations, which included perturbations to the gas surface density with different amplitudes at different radii, to investigate their effect on the global drift timescale of dust grains. These models were then used to calibrate the relation between the dust grain drift timescale and the disk gas mass. We investigated under which condition the dust line location is a good mass estimator and tested how different stellar and disk properties (disk mass, stellar mass, disk age, and dust-to-gas ratio) affect the dust line properties. Finally, we show the applicability of this method to disks such as TW Hya and AS 209 that have been observed at high angular resolution with ALMA and show pronounced disk structures.
Results: Our models without pressure bumps confirm a strong dependence of the dust line location on the disk gas mass and its applicability as a reliable mass estimator. The other disk properties do not significantly affect the dust line location, except for the age of the system, which is the major source of uncertainty for this mass estimator. A population of synthetic disks was used to calibrate an analytic relation between the dust line location and the disk mass for smooth disks, finding that previous mass estimates based on dust lines overestimate disk masses by about one order of magnitude. Radial pressure bumps can alter the location of the dust line by up to ~10 au, while its location is mainly determined by the disk mass. Therefore, an accurate mass estimation requires a proper evaluation of the effect of bumps. However, when radial substructures act as traps for dust grains, the relation between the dust line location and disk mass becomes weaker, and other mass estimators need to be adopted.
Conclusions: Our models show that the determination of the dust line location is a promising approach to estimating the masses of protoplanetary disks, but the exact relation between the dust line location and disk mass depends on the structure of the particular disk. We calibrated the relation for disks without evidence of radial structures, while for more complex structures we ran a simple dust evolution model. However, this method fails when there is evidence of strong dust traps. It is possible to identify when dust evolution is dominated by traps, indicating when the method should be applied with caution.
We compute the QCD static force and potential using gradient flow at next-to-leading order in the strong coupling. The static force is the spatial derivative of the static potential: it encodes the QCD interaction at both short and long distances. While, on the one hand, the static force has the advantage of being free of the O(ΛQCD) renormalon affecting the static potential when computed in perturbation theory, on the other hand its direct lattice QCD computation suffers from poor convergence. The convergence can be improved by using gradient flow, where the gauge fields in the operator definition of a given quantity are replaced by flowed fields at flow time t, which effectively smear the gauge fields over a distance of order √t, while they reduce to the QCD fields in the limit t → 0. Based on our next-to-leading order calculation, we explore the properties of the static force for arbitrary values of t, as well as in the t → 0 limit, which may be useful for lattice QCD studies.
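Schematically, the relations stated above can be written as (a summary of the definitions in the abstract; conventions and prefactors are not specified here):

```latex
F(r) = \frac{\mathrm{d}V(r)}{\mathrm{d}r},
\qquad
\lim_{t \to 0} F(r, t) = F(r),
```

where $F(r,t)$ denotes the force built from flowed fields at flow time $t$, which smear the gauge fields over a distance of order $\sqrt{t}$.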
The origin of life on Earth involves the early appearance of an information-containing molecule such as RNA. The basic building blocks of RNA could have been delivered by carbon-rich meteorites, or produced in situ by processes beginning with the synthesis of hydrogen cyanide (HCN) in the early Earth's atmosphere. Here, we construct a robust physical and non-equilibrium chemical model of the early Earth atmosphere. The atmosphere is supplied with hydrogen from impact degassing of meteorites, sourced with water evaporated from the oceans, carbon dioxide from volcanoes, and methane from undersea hydrothermal vents, and in which lightning and external UV-driven chemistry produce HCN. This allows us to calculate the rain-out of HCN into warm little ponds (WLPs). We then use a comprehensive sources and sinks numerical model to compute the resulting abundances of nucleobases, ribose, and nucleotide precursors such as 2-aminooxazole resulting from aqueous and UV-driven chemistry within them. We find that at 4.4 bya (billion years ago), adenine concentrations in ponds on habitable surfaces are limited to 0.05 $\mu$M in the absence of seepage. These concentrations can be maintained for over 100 Myr. Meteorite delivery of adenine to WLPs can provide boosts in concentration by 2-3 orders of magnitude, but these boosts deplete within months by UV photodissociation, seepage, and hydrolysis. The early evolution of the atmosphere is dominated by the decrease of hydrogen due to falling impact rates and atmospheric escape, and the rise of oxygenated species such as OH from H2O photolysis. Our work points to an early origin of RNA on Earth within ~200 Myr of the Moon-forming impact.
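The sources-and-sinks balance described above can be sketched as a one-zone model: a constant supply term (rain-out or meteorite delivery) against first-order loss channels (seepage, UV photodissociation, hydrolysis). The rate constants below are placeholders, not the paper's values:

```python
import math

def pond_concentration(source, k_seep, k_photo, k_hydro, t, c0=0.0):
    """One-zone warm-little-pond model,
        dC/dt = S - (k_seep + k_photo + k_hydro) * C,
    solved analytically. Units are arbitrary; all rates are illustrative.
    A steady state C_eq = S / k_total is approached on a timescale
    1/k_total, and a delivery spike (large c0) decays on the same timescale.
    """
    k = k_seep + k_photo + k_hydro
    c_eq = source / k
    return c_eq + (c0 - c_eq) * math.exp(-k * t)
```

A meteoritic spike that boosts c0 by orders of magnitude relaxes back to C_eq within a few 1/k_total, mirroring the behaviour in which delivery boosts deplete quickly when the sink rates are fast.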
We obtain Proca field theory from the quantisation of the N = 2 supersymmetric worldline upon supplementing the graded BRST-algebra with an extra multiplet of oscillators. The linearised theory describes the BV-extended spectrum of Proca theory, together with a Stückelberg field. When coupling the theory to background fields we derive the Proca equations, arising as consistency conditions in the BRST procedure. We also explore non-abelian modifications, complexified vector fields as well as coupling to a dilaton field. We propose a cubic action on the space of BRST-operators which reproduces the known Proca action.
The hot and dense core formed in the collapse of a massive star is a powerful source of hypothetical feebly-interacting particles such as sterile neutrinos, dark photons, axion-like particles (ALPs), and others. Radiative decays such as $a\to2\gamma$ deposit this energy in the surrounding material if the mean free path is less than the radius of the progenitor star. For the first time, we use a supernova (SN) population with particularly low explosion energies as the most sensitive calorimeters to constrain this possibility. These SNe are observationally identified as low-luminosity events with low ejecta velocities and low masses of ejected $^{56}$Ni. Their low energies limit the energy deposition from particle decays to less than about 0.1 B, where $1~{\rm B~(bethe)}=10^{51}~{\rm erg}$. For 1-500 MeV-mass ALPs, this generic argument excludes ALP-photon couplings $G_{a\gamma\gamma}$ in the $10^{-10}$-$10^{-8}~{\rm GeV}^{-1}$ range.
The statistical models used to derive the results of experimental analyses are of incredible scientific value and are essential information for analysis preservation and reuse. In this paper, we make the scientific case for systematically publishing the full statistical models and discuss the technical developments that make this practical. By means of a variety of physics cases -- including parton distribution functions, Higgs boson measurements, effective field theory interpretations, direct searches for new physics, heavy flavor physics, direct dark matter detection, world averages, and beyond the Standard Model global fits -- we illustrate how detailed information on the statistical modelling can enhance the short- and long-term impact of experimental results.
We study the production of very light elements (Z < 20) in the dynamical and spiral-wave wind ejecta of binary neutron star mergers by combining detailed nucleosynthesis calculations with the outcome of numerical relativity merger simulations. All our models are targeted to GW170817 and include neutrino radiation. We explore different finite-temperature, composition-dependent nuclear equations of state, and binary mass ratios, and find that hydrogen and helium are the most abundant light elements. For both elements, the decay of free neutrons is the driving nuclear reaction. In particular, ~0.5-2 × 10⁻⁶ M⊙ of hydrogen are produced in the fast expanding tail of the dynamical ejecta, while ~1.5-11 × 10⁻⁶ M⊙ of helium are synthesized in the bulk of the dynamical ejecta, usually in association with heavy r-process elements. By computing synthetic spectra, we find that the possibility of detecting hydrogen and helium features in kilonova spectra is very unlikely for fiducial masses and luminosities, even when including nonlocal thermodynamic equilibrium effects. The latter could be crucial to observe helium lines a few days after merger for faint kilonovae or for luminous kilonovae ejecting large masses of helium. Finally, we compute the amount of strontium synthesized in the dynamical and spiral-wave wind ejecta, and find that it is consistent with (or even larger than, in the case of a long-lived remnant) the one required to explain early spectral features in the kilonova of GW170817.
The integrated shear 3-point correlation function $\zeta_{\pm}$ is a higher-order statistic of the cosmic shear field that describes the modulation of the 2-point correlation function $\xi_{\pm}$ by long-wavelength features in the field. Here, we introduce a new theoretical model to calculate $\zeta_{\pm}$ that is accurate on small angular scales, and that allows us to take baryonic feedback effects into account. Our model builds on the realization that the small-scale $\zeta_{\pm}$ is dominated by the nonlinear matter bispectrum in the squeezed limit, which can be evaluated accurately using the nonlinear matter power spectrum and its first-order response functions to density and tidal field perturbations. We demonstrate the accuracy of our model by showing that it reproduces the small-scale $\zeta_{\pm}$ measured in simulated cosmic shear maps. The impact of baryonic feedback enters effectively only through the corresponding impact on the nonlinear matter power spectrum, thereby permitting us to account for these astrophysical effects on $\zeta_{\pm}$ similarly to how they are currently accounted for on $\xi_{\pm}$. Using a simple idealized Fisher matrix forecast for a DES-like survey we find that, compared to $\xi_{\pm}$, a combined $\xi_{\pm}\ \&\ \zeta_{\pm}$ analysis can lead to improvements of order $20-40\%$ on the constraints of cosmological parameters such as $\sigma_8$ or the dark energy equation of state parameter $w_0$. We find similar levels of improvement on the constraints of the baryonic feedback parameters, which strengthens the prospects for cosmic shear data to obtain tight constraints not only on cosmology but also on astrophysical feedback models. These are encouraging results that motivate future works on the integrated shear 3-point correlation function towards applications to real survey data.
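The way adding a second statistic tightens marginalized constraints can be illustrated with a toy Fisher-matrix combination. The matrix entries below are invented for illustration (treating the two probes as independent) and are not the paper's forecast:

```python
import numpy as np

# Toy Fisher matrices for two parameters, e.g. (sigma8, w0).
# Entries are made-up numbers chosen only to illustrate the mechanics.
F_xi = np.array([[40.0, 8.0],
                 [ 8.0, 5.0]])    # xi_pm-only information
F_zeta = np.array([[15.0, -3.0],
                   [-3.0,  4.0]])  # additional zeta_pm information

def marg_errors(F):
    """Marginalized 1-sigma errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

err_xi = marg_errors(F_xi)
# For independent probes, Fisher information simply adds.
err_comb = marg_errors(F_xi + F_zeta)
improvement = 1 - err_comb / err_xi  # fractional tightening per parameter
```

With these toy numbers the combined analysis tightens both marginalized errors by roughly 30-40%, the same mechanism behind the quoted improvements.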
Major mergers between galaxy clusters can produce large turbulent and bulk flow velocities in the intra-cluster medium and thus imprint diagnostic features in X-ray spectral emission lines from heavy ions. As demonstrated by Hitomi in observations of the Perseus cluster, measurements of gas velocities in clusters from high-resolution X-ray spectra will be achievable with upcoming X-ray calorimeters like those on board XRISM, Athena, or a Lynx-like mission. We investigate this possibility for interesting locations across a major cluster merger from a hydrodynamical simulation, via X-ray synthetic spectra with a few eV energy resolution. We observe the system from directions perpendicular to the plane of the merger and along the merger axis. In these extreme geometrical configurations, we find clear non-Gaussian shapes of the iron He-like Kα line at 6.7 keV. The velocity dispersion predicted from the simulations can be retrieved for the brightest 100 ks pointings with XRISM Resolve, despite some discrepancy related to the complex non-Gaussian line shapes. Measurements in faint regions, however, require high S/N, and the larger collecting area of the Athena X-IFU calorimeter is thus needed. With the latter, we also investigate the gas temperature and velocity gradient across the merger bow shock edge, from 20"-wide annuli extracted from a single 1 Ms X-IFU pointing. We find best-fit temperature and velocity dispersion values that are consistent with predictions from the simulations within 1σ, but the uncertainties on the inferred velocity dispersion are too large to place any stringent constraints on the shallow gradient downstream of the shock. We also present simulated images of the thermal and kinetic Sunyaev-Zeldovich effects, using the above viewing configurations, and compare the results at angular resolutions appropriate for future observatories such as CMB-S4 and AtLAST.
The immediate vicinity of an active supermassive black hole—with its event horizon, photon ring, accretion disk and relativistic jets—is an appropriate place to study physics under extreme conditions, particularly general relativity and magnetohydrodynamics. Observing the dynamics of such compact astrophysical objects provides insights into their inner workings, and the recent observations of M87* by the Event Horizon Telescope [1-6] using very-long-baseline interferometry techniques allow us to investigate the dynamical processes of M87* on timescales of days. Compared with most radio interferometers, very-long-baseline interferometry networks typically have fewer antennas and low signal-to-noise ratios. Furthermore, the source is variable, prohibiting integration over time to improve signal-to-noise ratio. Here, we present an imaging algorithm [7,8] that copes with the data scarcity and temporal evolution, while providing an uncertainty quantification. Our algorithm views the imaging task as a Bayesian inference problem of a time-varying brightness, exploits the correlation structure in time and reconstructs (2 + 1 + 1)-dimensional time-variable and spectrally resolved images. We apply this method to the Event Horizon Telescope observations of M87* [9] and validate our approach on synthetic data. The time- and frequency-resolved reconstruction of M87* confirms variable structures on the emission ring and indicates extended and time-variable emission structures outside the ring itself.
For decades we have known that the Sun lies within the Local Bubble, a cavity of low-density, high-temperature plasma surrounded by a shell of cold, neutral gas and dust [1-3]. However, the precise shape and extent of this shell [4,5], the impetus and timescale for its formation [6,7], and its relationship to nearby star formation [8] have remained uncertain, largely due to low-resolution models of the local interstellar medium. Here we report an analysis of the three-dimensional positions, shapes and motions of dense gas and young stars within 200 pc of the Sun, using new spatial [9-11] and dynamical constraints [12]. We find that nearly all of the star-forming complexes in the solar vicinity lie on the surface of the Local Bubble and that their young stars show outward expansion mainly perpendicular to the bubble's surface. Tracebacks of these young stars' motions support a picture in which the origin of the Local Bubble was a burst of stellar birth and then death (supernovae) taking place near the bubble's centre beginning approximately 14 Myr ago. The expansion of the Local Bubble created by the supernovae swept up the ambient interstellar medium into an extended shell that has now fragmented and collapsed into the most prominent nearby molecular clouds, in turn providing robust observational support for the theory of supernova-driven star formation.
The detection of the accelerated expansion of the Universe has been one of the major breakthroughs in modern cosmology. Several cosmological probes (CMB, SNe Ia, BAO) have been studied in depth to better understand the nature of the mechanism driving this acceleration, and they are currently being pushed to their limits, obtaining remarkable constraints that allowed us to shape the standard cosmological model. In parallel to that, however, the percent precision achieved has recently revealed apparent tensions between measurements obtained from different methods. These are either indicating some unaccounted systematic effects, or are pointing toward new physics. Following the development of CMB, SNe, and BAO cosmology, it is critical to extend our selection of cosmological probes. Novel probes can be exploited to validate results, control or mitigate systematic effects, and, most importantly, to increase the accuracy and robustness of our results. This review is meant to provide a state-of-the-art benchmark of the latest advances in emerging beyond-standard cosmological probes. We present how several different methods can become a key resource for observational cosmology. In particular, we review cosmic chronometers, quasars, gamma-ray bursts, standard sirens, lensing time-delay with galaxies and clusters, cosmic voids, neutral hydrogen intensity mapping, surface brightness fluctuations, secular redshift drift, and clustering of standard candles. The review describes the method, systematics, and results of each probe in a homogeneous way, giving the reader a clear picture of the available innovative methods that have been introduced in recent years and how to apply them. The review also discusses the potential synergies and complementarities between the various probes, exploring how they will contribute to the future of modern cosmology.
The intrinsic alignments of galaxies, i.e. the correlation between galaxy shapes and their environment, are a major source of contamination for weak gravitational lensing surveys. Most studies of intrinsic alignments have so far focused on measuring and modelling the correlations of luminous red galaxies with galaxy positions or with the filaments of the cosmic web. In this work, we investigate alignments around cosmic voids. We measure the intrinsic alignments of luminous red galaxies detected by the Sloan Digital Sky Survey around a sample of voids constructed from those same tracers, with radii in the ranges 20-30, 30-40, and 40-50 h^-1 Mpc and in the redshift range z = 0.4-0.8. We present fits to the measurements based on a linear model at large scales, and on a new model based on the void density profile inside the void and in its neighbourhood. We constrain the free scaling amplitude of our model at small scales, finding no significant alignment at 1σ for either sample. At large scales, we observe a deviation from the null hypothesis of 2σ for voids with radii between 20 and 30 h^-1 Mpc and of 1.5σ for voids with radii between 30 and 40 h^-1 Mpc, and we constrain the amplitude of the model on these scales. We find no significant deviation at 1σ for larger voids. Our work is a first attempt at detecting intrinsic alignments of galaxy shapes around voids and provides a useful framework for their mitigation in future void lensing studies.
We carried out 3D dust + gas radiative hydrodynamic simulations of forming planets. We investigated a parameter grid of a Neptune-mass, a Saturn-mass, a Jupiter-mass, and a five-Jupiter-mass planet at 5.2, 30, and 50 au from their star. We found that the meridional circulation (Szulágyi et al. 2014; Fung & Chiang 2016) drives a strong vertical flow for the dust as well; hence the dust does not settle into the midplane, even for millimeter-sized grains. The meridional circulation delivers dust and gas vertically onto the circumplanetary region, efficiently bridging the gap. The Hill-sphere accretion rates for the dust are ~10^-8-10^-10 M_Jup yr^-1, increasing with planet mass. For the gas component, the rates are 10^-6-10^-8 M_Jup yr^-1. The difference between the dust- and gas-accretion rates shrinks with decreasing planetary mass. In the vicinity of the planet, millimeter-sized grains are trapped more easily than the gas, which means the circumplanetary disk might be enriched in solids in comparison to the circumstellar disk. We calculated the local dust-to-gas ratio (DTG) everywhere in the circumstellar disk and identified the altitudes above the midplane where the DTG equals 1, 0.1, 0.01, and 0.001. The larger the planetary mass, the more millimeter-sized dust is delivered and the larger the fraction of the dust disk lifted by the planet. The stirring of millimeter-sized dust is negligible for planets of Neptune mass or below, but significant for planets above Saturn mass.
The early Earth, 4 billion years ago, offered scarce conditions for the emergence of life. After the formation of the oceans, it was most likely difficult to extract the essential ionic building blocks of life, such as phosphate or salts, from the existing geomaterial in sufficiently high concentrations and suitable mixing ratios. We show how ubiquitous heat fluxes through rock fractures provide a physical solution to this problem: thermal convection and thermophoresis together are able to separate calcium from phosphorus and thus turn ubiquitous but otherwise inert apatite into a phosphate source. Furthermore, the mixing ratio of different salts is modified according to their thermophoretic properties, providing a suitable non-equilibrium environment for the first prebiotic reactions.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 364653263 - TRR 235 (CRC 235). Funding from the Volkswagen Initiative 'Life? - A Fresh Scientific Approach to the Basic Principles of Life', from the Simons Foundation, and from Germany's Excellence Strategy EXC-2094-390783311 is gratefully acknowledged. We are grateful for funding by the European Research Council (ERC Starting Grant RiboLife, no. 802000) and by the MaxSynBio consortium, which is jointly funded by the Federal Ministry of Education and Research of Germany and the Max Planck Society. We acknowledge the support of ERC ADV 2018 Grant 834225 (EAVESDROP) and financial support through an ERC-2017-ADG grant from the European Research Council. The work is supported by the Center for Nanoscience Munich (CeNS).
The Extreme Universe Space Observatory - Super Pressure Balloon (EUSO-SPB2) mission will fly two custom telescopes to measure the Čerenkov and fluorescence emission of extensive air showers from cosmic rays at the PeV and EeV scales, and to search for tau neutrinos. Telescope integration and laboratory calibration will be performed in Colorado. To estimate the point spread function and efficiency of the integrated telescopes, a test-beam system that delivers a 1-meter-diameter parallel beam of light is being fabricated. End-to-end tests of the fully integrated instruments will be carried out in a field campaign at dark sites in the Utah desert. The EUSO-SPB2 optics and laboratory tests will be presented at the APS.
This work was partially supported by Basic Science Interdisciplinary Research Projects of RIKEN and JSPS KAKENHI Grants (22340063, 23340081, and 24244042), by the Italian Ministry of Foreign Affairs and International Cooperation, by the Italian Space Agency through the ASI INFN agreements n. 2017-8-H.0 and n. 2021-8-HH.0, by NASA awards 11-APRA-0058, 16-APROBES16-0023, 17-APRA17-0066, NNX17AJ82G, NNX13AH54G, 80NSSC18K0246, 80NSSC18K0473, 80NSSC19K0626, and 80NSSC18K0464 in the USA, by the French space agency CNES, by the Deutsches Zentrum für Luft- und Raumfahrt, by the Helmholtz Alliance for Astroparticle Physics funded by the Initiative and Networking Fund of the Helmholtz Association (Germany), by the Slovak Academy of Sciences (MVTS JEM-EUSO), by the National Science Centre in Poland (Grants 2017/27/B/ST9/02162 and 2020/37/B/ST9/01821), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094-390783311, and by Mexican funding agencies.
We reemphasize the strong dependence of the branching ratios $B(K^+\to\pi^+\nu\bar\nu)$ and $B(K_L\to\pi^0\nu\bar\nu)$ on $|V_{cb}|$, which is stronger than in rare $B$ decays, in particular for $K_L\to\pi^0\nu\bar\nu$. The persistent tension between inclusive and exclusive determinations of $|V_{cb}|$ thereby weakens the power of these theoretically clean decays in the search for new physics (NP). We demonstrate how this uncertainty can be practically removed by considering, within the SM, suitable ratios of the two branching ratios with each other and with other observables, such as the branching ratios for $K_S\to\mu^+\mu^-$, $B_{s,d}\to\mu^+\mu^-$ and $B\to K(K^*)\nu\bar\nu$. We use as basic CKM parameters $V_{us}$, $|V_{cb}|$ and the angles $\beta$ and $\gamma$ in the unitarity triangle (UT). This avoids the use of the problematic $|V_{ub}|$. A ratio involving $B(K^+\to\pi^+\nu\bar\nu)$ and $B(B_s\to\mu^+\mu^-)$, while being $|V_{cb}|$-independent, exhibits a sizable dependence on the angle $\gamma$. It should be of interest for several experimental groups in the coming years. We point out that the $|V_{cb}|$-independent ratio of $B(B^+\to K^+\nu\bar\nu)$ and $B(B_s\to\mu^+\mu^-)$ from Belle II and LHCb signals a $1.8\sigma$ tension with its SM value. As a complementary test of the Standard Model, we propose to extract $|V_{cb}|$ from different observables as a function of $\beta$ and $\gamma$. We illustrate this with $\epsilon_K$, $\Delta M_d$ and $\Delta M_s$, finding tensions between these three determinations of $|V_{cb}|$ within the SM. From $\Delta M_s$ and $S_{\psi K_S}$ alone we find $|V_{cb}|=41.8(6)\times 10^{-3}$ and $|V_{ub}|=3.65(12)\times 10^{-3}$. We stress the importance of a precise measurement of $\gamma$. We obtain the most precise SM predictions to date for the considered branching ratios of rare $K$ and $B$ decays.
In this contribution, I review some of the latest advances in calculational techniques in theoretical particle physics. I focus, in particular, on their application to the calculation of highly non-trivial scattering processes, which are relevant for precision phenomenology studies at the Large Hadron Collider at CERN.
We compute NRQCD long-distance matrix elements that appear in the inclusive production cross sections of P-wave heavy quarkonia in the framework of potential NRQCD. The formalism developed in this work applies to strongly coupled charmonia and bottomonia. This makes possible the determination of color-octet NRQCD long-distance matrix elements without relying on measured cross section data, which has not been possible so far. We obtain results for inclusive production cross sections of χcJ and χbJ at the LHC, which are in good agreement with measurements.
Gamma rays from nuclear processes such as radioactive decay and de-excitation are among the most direct tools to witness the production and existence of specific nuclei and isotopes in and near cosmic nucleosynthesis sites. With space-borne instruments such as NuSTAR and SPI/INTEGRAL, and with experimental techniques to handle the substantial instrumental background from cosmic-ray activation of the spacecraft and instrument, unique results have been obtained, from diffuse emission of nuclei and positrons in the interstellar surroundings of sources, as well as from observations of cosmic explosions and their radioactive afterglows. These observations witness non-sphericity in supernova explosions and a flow of nucleosynthesis ejecta through superbubbles as common source environments. Next-generation experiments awaiting space missions promise a next level of observational nuclear astrophysics.
We use separate universe simulations with the IllustrisTNG galaxy formation model to predict the local PNG bias parameters bΦ and bΦδ of atomic neutral hydrogen, H I. These parameters and their relation to the linear density bias parameter b1 play a key role in observational constraints of the local PNG parameter fNL using the H I power spectrum and bispectrum. Our results show that the popular calculation based on the universality of the halo mass function overpredicts the bΦ(b1) and bΦδ(b1) relations measured in the simulations. In particular, our results show that at z ≲ 1 the H I power spectrum is more sensitive to fNL than previously thought (bΦ is more negative), but is less sensitive at other epochs (bΦ is less positive). We discuss how this can be explained by the competition of physical effects: large-scale gravitational potentials with local PNG (i) accelerate the conversion of hydrogen to heavy elements by star formation, (ii) enhance the effects of baryonic feedback that ejects the gas to regions more exposed to ionizing radiation, and (iii) promote the formation of denser structures that shield the H I more efficiently. Our numerical results can be used to revise existing forecast studies on fNL using 21 cm line-intensity mapping data. Despite this first step towards predictions for the local PNG bias parameters of H I, we emphasize that more work is needed to assess their sensitivity to the assumed galaxy formation physics and H I modelling strategy.
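For context, the universality-based calculation that the simulations test corresponds to the standard relation bΦ = 2 δc (b1 − 1), with δc ≈ 1.686 the spherical-collapse threshold. A minimal sketch of that baseline prediction (not the paper's simulation pipeline):

```python
# Universality-of-the-mass-function prediction for the local-PNG bias
# parameter: b_Phi = 2 * delta_c * (b1 - 1), delta_c ~ 1.686.
# This is the baseline relation the abstract reports as an overprediction
# for H I; the simulation-calibrated values differ from it.
DELTA_C = 1.686  # linear spherical-collapse threshold

def b_phi_universality(b1):
    """Local-PNG bias b_Phi from the linear density bias b1."""
    return 2.0 * DELTA_C * (b1 - 1.0)

print(b_phi_universality(1.0))  # unbiased tracer -> 0.0
print(b_phi_universality(2.0))
```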
The scalar field theory of cosmological inflation constitutes nowadays one of the preferred scenarios for the physics of the early universe. In this paper we aim at studying the inflationary universe making use of a numerical lattice simulation. Various lattice codes have been written in the last decades and have been used extensively to understand the reheating phase of the universe, but they have never been used to study the inflationary phase itself far from the end of inflation (i.e. about 50 e-folds before the end of inflation). In this paper we use a lattice simulation to reproduce the well-known results of some simple models of single-field inflation, particularly for the scalar field perturbation. The main model that we consider is standard slow-roll inflation with a harmonic potential for the inflaton field. We explore the technical aspects that need to be accounted for in order to reproduce with precision the nearly scale-invariant power spectrum of inflaton perturbations. We also consider the case of a step potential, and show that the simulation is able to correctly reproduce the oscillatory features in the power spectrum of this model. Even if a lattice simulation is not needed in these cases, which are well within the regime of validity of linear perturbation theory, this work sets the basis for future studies of more complicated models of inflation with lattice simulations.
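As a linear-theory reference point for the harmonic-potential model above, the standard slow-roll result for V = m²φ²/2 gives a spectral index n_s ≈ 1 − 2/N at N e-folds before the end of inflation (a textbook result, not taken from this paper's simulation):

```python
# Slow-roll spectral index for the quadratic potential V = m^2 phi^2 / 2.
# In reduced Planck units: eps = eta = 2/phi^2, phi_N^2 ~ 4N, so
# n_s = 1 - 6*eps + 2*eta = 1 - 8/phi^2 = 1 - 2/N.
def ns_quadratic(N):
    """Spectral index N e-folds before the end of inflation (quadratic model)."""
    return 1.0 - 2.0 / N

print(ns_quadratic(50))  # 0.96: nearly, but not exactly, scale invariant
```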
Analysis of large galaxy surveys requires confidence in the robustness of numerical simulation methods. The simulations are used to construct mock galaxy catalogues to validate data analysis pipelines and identify potential systematics. We compare three N-body simulation codes, abacus, gadget-2, and swift, to investigate the regimes in which their results agree. We run N-body simulations at three different mass resolutions, 6.25 × 10^8, 2.11 × 10^9, and 5.00 × 10^9 h^−1 M_⊙, matching phases to reduce the noise within the comparisons. We find that systematic errors in the halo clustering between different codes are smaller than the Dark Energy Spectroscopic Instrument (DESI) statistical error for s > 20 h^−1 Mpc in the correlation function in redshift space. Through the resolution comparison we find that simulations run with a mass resolution of 2.1 × 10^9 h^−1 M_⊙ are sufficiently converged for systematic effects in the halo clustering to be smaller than the DESI statistical error at scales larger than 20 h^−1 Mpc. These findings show that the simulations are robust for extracting cosmological information from large scales, which is the key goal of the DESI survey. Comparing matter power spectra, we find that the codes agree to within 1 per cent for k ≤ 10 h Mpc^−1. We also run a comparison of three initial-condition generation codes and find good agreement. In addition, we include a quasi-N-body code, FastPM, since we plan to use it for certain DESI analyses. The impact of the halo definition and of the galaxy–halo relation will be presented in a follow-up study.
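The per-cent-level power-spectrum agreement described above amounts to checking the maximum fractional deviation between two codes below a wavenumber cutoff. A minimal sketch with placeholder arrays (not simulation output):

```python
import numpy as np

# Hypothetical check of a "1 per cent agreement for k <= k_max" criterion
# between two matter power spectra measured on the same k grid.
def max_frac_diff(k, pk_a, pk_b, k_max=10.0):
    """Maximum |P_a/P_b - 1| over modes with k <= k_max."""
    mask = k <= k_max
    return np.max(np.abs(pk_a[mask] / pk_b[mask] - 1.0))

# Placeholder spectra: code B deviates from code A by a few tenths of a
# per cent at low k and by 5 per cent on the smallest (excluded) scale.
k = np.array([0.1, 1.0, 5.0, 10.0, 20.0])        # h/Mpc
pk_a = np.array([1000.0, 100.0, 10.0, 1.0, 0.1])  # arbitrary units
pk_b = pk_a * np.array([1.002, 0.999, 1.005, 0.997, 1.05])
print(max_frac_diff(k, pk_a, pk_b) < 0.01)  # True: <1% agreement for k <= 10
```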
We provide the first combined cosmological analysis of South Pole Telescope (SPT) and Planck cluster catalogs. The aim is to provide an independent calibration for Planck scaling relations, exploiting the cosmological constraining power of the SPT-SZ cluster catalog and its dedicated weak lensing (WL) and X-ray follow-up observations. We build a new version of the Planck cluster likelihood. In the $\nu \Lambda$CDM scenario, focusing on the mass slope and mass bias of Planck scaling relations, we find $\alpha_{\text{SZ}} = 1.49 _{-0.10}^{+0.07}$ and $(1-b)_{\text{SZ}} = 0.69 _{-0.14}^{+0.07}$, respectively. The results for the mass slope show a $\sim 4 \, \sigma$ departure from the self-similar evolution, $\alpha_{\text{SZ}} \sim 1.8$. This shift is mainly driven by the matter density value preferred by SPT data, $\Omega_m = 0.30 \pm 0.03$, lower than the one obtained by Planck data alone, $\Omega_m = 0.37 _{-0.06}^{+0.02}$. The mass bias constraints are consistent both with the outcomes of hydrodynamical simulations and external WL calibrations, $(1-b) \sim 0.8$, and with the results required by the Planck cosmic microwave background cosmology, $(1-b) \sim 0.6$. From this analysis, we obtain a new catalog of Planck cluster masses $M_{500}$. We estimate the relation between the published Planck-derived $M_{\text{SZ}}$ masses and our derived masses as a measured mass bias. We analyse the mass, redshift, and detection-noise dependence of this quantity, finding an increasing trend towards high redshift and low mass. These results mimic the effect of a departure from self-similarity in cluster evolution, showing different dependencies in the low-mass/high-mass and low-z/high-z regimes.
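The mass bias quoted above enters through the standard convention M_SZ = (1 − b) M_true, so a catalogued SZ mass is de-biased by dividing by (1 − b). A minimal sketch (the cluster mass used below is illustrative, not from the catalog):

```python
# Standard hydrostatic mass-bias convention: M_SZ = (1 - b) * M_true,
# so the true mass is recovered as M_true = M_SZ / (1 - b).
def true_mass(m_sz, one_minus_b):
    """De-bias an SZ-calibrated cluster mass given (1 - b)."""
    return m_sz / one_minus_b

# With (1-b) = 0.69, as found in this analysis, an illustrative catalogued
# mass of 5e14 Msun corresponds to a larger true mass.
print(true_mass(5e14, 0.69))
```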
The multihadron decays $\Lambda_b^0 \to D^+ p \pi^- \pi^-$ and $\Lambda_b^0 \to D^{*+} p \pi^- \pi^-$ are observed in data corresponding to an integrated luminosity of 3 fb$^{-1}$, collected in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV by the LHCb detector. Using the decay $\Lambda_b^0 \to \Lambda_c^+ \pi^+ \pi^- \pi^-$ as a normalisation channel, the ratio of branching fractions is measured to be
$$\frac{\mathcal{B}(\Lambda_b^0\to D^{+}p\pi^{-}\pi^{-})}{\mathcal{B}(\Lambda_b^0\to \Lambda_c^{+}\pi^{+}\pi^{-}\pi^{-})}\times \frac{\mathcal{B}(D^{+}\to K^{-}\pi^{+}\pi^{+})}{\mathcal{B}(\Lambda_c^{+}\to pK^{-}\pi^{+})}=(5.35\pm 0.21\pm 0.16)\,\%,$$
where the first uncertainty is statistical and the second systematic. The ratio of branching fractions for the $\Lambda_b^0 \to D^{*+}p\pi^{-}\pi^{-}$ and $\Lambda_b^0 \to D^{+}p\pi^{-}\pi^{-}$ decays is found to be
$$\frac{\mathcal{B}(\Lambda_b^0\to D^{*+}p\pi^{-}\pi^{-})}{\mathcal{B}(\Lambda_b^0\to D^{+}p\pi^{-}\pi^{-})}\times \bigl(\mathcal{B}(D^{*+}\to D^{+}\pi^0)+\mathcal{B}(D^{*+}\to D^{+}\gamma)\bigr)=(61.3\pm 4.3\pm 4.0)\,\%.$$
A data sample collected with the LHCb detector corresponding to an integrated luminosity of 9 fb$^{-1}$ is used to measure eleven $CP$ violation observables in $B^\pm\to Dh^\pm$ decays, where $h$ is either a kaon or a pion. The neutral $D$ meson decay is reconstructed in the three-body final states: $K^\pm\pi^\mp\pi^0$; $\pi^+\pi^-\pi^0$; $K^+K^-\pi^0$ and the suppressed $\pi^\pm K^\mp\pi^0$ combination. The mode where a large $CP$ asymmetry is expected, $B^\pm\to [\pi^\pm K^\mp\pi^0]_DK^\pm$, is observed with a significance greater than seven standard deviations. The ratio of the partial width of this mode relative to that of the favoured mode, $B^\pm\to [K^\pm\pi^\mp\pi^0]_D K^\pm$, is $R_{{\rm ADS}(K)} = (1.27\pm0.16\pm0.02)\times 10^{-2}$. Evidence for a large $CP$ asymmetry is also seen: $A_{{\rm ADS}(K)} = -0.38\pm0.12\pm0.02$. Constraints on the CKM angle $\gamma$ are calculated from the eleven reported observables.
We compute the three-loop helicity amplitudes for the scattering of four gluons in QCD. We employ projectors in the ’t Hooft-Veltman scheme and construct the amplitudes from a minimal set of physical building blocks, which allows us to keep the computational complexity under control. We obtain relatively compact results that can be expressed in terms of harmonic polylogarithms. In addition, we consider the Regge limit of our amplitude and extract the gluon Regge trajectory in full three-loop QCD. This is the last missing ingredient required for studying single-Reggeon exchanges at next-to-next-to-leading logarithmic accuracy.
The last two decades have witnessed the discovery of a myriad of new and unexpected hadrons. The future holds more surprises for us, thanks to new-generation experiments. Understanding the signals and determining the properties of the states requires a parallel theoretical effort. To make full use of available and forthcoming data, careful amplitude modeling is required, together with a sound treatment of statistical uncertainties and a systematic survey of model dependencies. We review the contributions made by the Joint Physics Analysis Center to the field of hadron spectroscopy.
We present the v1.0 release of CLMM, an open-source PYTHON library for the estimation of the weak lensing masses of clusters of galaxies. CLMM is designed as a stand-alone toolkit of building blocks to enable end-to-end analysis-pipeline validation for upcoming cluster cosmology analyses, such as those that will be performed by the Vera C. Rubin Observatory Legacy Survey of Space and Time Dark Energy Science Collaboration (LSST-DESC). Its purpose is to serve as a flexible, easy-to-install, and easy-to-use interface for both weak lensing simulators and observers; it can be applied to real and mock data to study the systematics affecting weak lensing mass reconstruction. At the core of CLMM are routines to model the weak lensing shear signal given the underlying mass distribution of galaxy clusters, and a set of data operations to prepare the corresponding data vectors. The theoretical predictions rely on existing software, used as backends in the code, that has been thoroughly tested and cross-checked. Combined, the theoretical predictions and data can be used to constrain the mass distribution of galaxy clusters, as demonstrated in a suite of example Jupyter notebooks shipped with the software and also available in the extensive online documentation.
Simulations of idealised star-forming filaments of finite length typically show core growth dominated by two cores forming at the filament ends. These end cores form due to a strongly increasing acceleration at the filament ends, which leads to a sweep-up of material as the filament collapses along its axis. As this growth mode is typically faster than any other core-formation mode in a filament, the end cores usually dominate in mass and density over other cores forming inside the filament. However, observations of star-forming filaments often do not show this prevalence of cores at the filament edges. We use numerical simulations of accreting filaments forming in a finite converging flow to explore a possible mechanism that suppresses the end cores. While such a setup still leads to end cores that soon begin to move inwards, the continued accumulation of material outside of them makes a key difference: their positions now lie within the larger filamentary structure and not at its edges. This softens their inward gravitational acceleration, as they become embedded in new material further out. As a result, these two cores do not grow as fast as expected for the edge effect and thus do not dominate over other core-formation modes in the filament.
Catalytic particles are spatially organized in a number of biological systems across different length scales, from enzyme complexes to metabolically coupled cells. Despite operating on different scales, these systems all feature localized reactions involving partially hindered diffusive transport, which is determined by the collective arrangement of the catalysts. Yet it remains largely unexplored how different arrangements affect the interplay between the reaction and transport dynamics, which ultimately determines the flux through the reaction pathway. Here we show that two fundamental trade-offs arise, the first between efficient inter-catalyst transport and the depletion of substrate, and the second between steric confinement of intermediate products and the accessibility of catalysts to substrate. We use a model reaction pathway to characterize the general design principles for the arrangement of catalysts that emerge from the interplay of these trade-offs. We find that the question of optimal catalyst arrangements generalizes the well-known Thomson problem of electrostatics.
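The Thomson problem mentioned in the closing sentence above — minimizing the Coulomb energy of N point charges constrained to a sphere — can be sketched with projected gradient descent; the particle number, step size, and iteration count below are arbitrary choices for illustration:

```python
import numpy as np

def coulomb_energy(x):
    """Total pairwise Coulomb energy sum(1/d_ij) of unit charges at points x."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.sum(1.0 / d[iu])

def relax(n=8, steps=200, lr=0.01, seed=0):
    """Reduce the Coulomb energy of n charges on the unit sphere by
    projected gradient descent (move along the net repulsive force,
    then re-project onto the sphere)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                        # no self-interaction
        force = np.sum(diff / d[..., None] ** 3, axis=1)   # repulsive Coulomb force
        x += lr * force
        x /= np.linalg.norm(x, axis=1, keepdims=True)      # project back to sphere
    return x

x0 = relax(steps=0)  # random initial configuration
xf = relax()         # relaxed configuration, same seed
print(coulomb_energy(xf) < coulomb_energy(x0))  # relaxation lowers the energy
```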
We adapt the dual-null foliation to the functional Schrödinger representation of quantum field theory and study the behavior of quantum probes in plane-wave space-times near the null singularity. A comparison between the Einstein-Rosen and the Brinkmann patch, where the latter extends beyond the first, shows a seeming tension that can be resolved by comparing the configuration spaces. Our analysis concludes that Einstein-Rosen space-times support exclusively configurations with nonempty gravitational memory that are focused to a set of measure zero in the focal plane with respect to a Brinkmann observer. To conclude, we provide a rough framework to estimate the qualitative influence of backreactions on these results.
A systematic global investigation of differential charge radii has been performed within the CDFT framework for the first time. Theoretical results obtained with conventional covariant energy density functionals and the separable pairing interaction of Tian et al. [Phys. Lett. B 676, 44 (2009); doi:10.1016/j.physletb.2009.04.067] are compared with experimental differential charge radii in the regions of the nuclear chart in which the available experimental data cross the neutron shell closures at N = 28, 50, 82, and 126. The analysis of absolute differential radii of different isotopic chains and their relative properties indicates clearly that such properties are reasonably well described in model calculations in the cases when the mean-field approximation is justified. However, while the observed clusterization of differential charge radii of different isotopic chains is well described above the N = 50 and N = 126 shell closures, it is more difficult to reproduce it above the N = 28 and N = 82 shell closures because of possible deficiencies in the underlying single-particle structure. The impact of the latter has been evaluated for spherical shapes, and it was shown that the relative energies of the single-particle states and the patterns of their occupation with increasing neutron number have an appreciable impact on the evolution of the δ⟨r²⟩^{N,N'} values. These factors also limit the predictive power of model calculations in the regions of high densities of single-particle states of different origin. It is shown that the kinks in the charge radii at neutron shell closures are due to the underlying single-particle structure and due to a weakening or collapse of pairing at these closures.
The regions of the nuclear chart in which the correlations beyond mean field are expected to have an impact on charge radii are indicated; the analysis shows that the assignment of a calculated excited prolate minimum to the experimental ground state allows us to understand the trends of the evolution of differential charge radii with neutron number in many cases of shape coexistence even at the mean-field level. It is usually assumed that pairing is a dominant contributor to odd-even staggering (OES) in charge radii. Our analysis paints a more complicated picture. It suggests a new mechanism in which the fragmentation of the single-particle content of the ground state in odd-mass nuclei due to particle-vibration coupling provides a significant contribution to OES in charge radii.
We investigate the impact of gas accretion in streams on the evolution of disc galaxies, using magnetohydrodynamic simulations that include advection and anisotropic diffusion of cosmic rays (CRs) generated by supernovae as the only source of feedback. Stream accretion has been suggested as an important galaxy growth mechanism in cosmological simulations, and we vary the streams' orientation and angular momentum in idealized setups. We find that accretion streams trigger the formation of galactic rings and enhanced star formation. The star formation rates, and consequently the CR-driven outflow rates, are higher for low angular momentum accretion streams, which also result in more compact, lower angular momentum discs. The CR-generated outflows show a characteristic structure. At low outflow velocities (<50 km s^-1), the angular momentum distribution is similar to that of the disc and the gas is in a fountain flow. Gas at high outflow velocities (>200 km s^-1), penetrating deep into the halo, has close to zero angular momentum and originates from the centres of the galaxies. As the mass-loading factors of the CR-driven outflows are of the order of unity and higher, we conclude that this process is important for the removal of low angular momentum gas from evolving disc galaxies and for the transport of potentially metal-enriched material from galactic centres far into the galactic haloes.
As ever more sensitive experiments are made in the quest for primordial CMB B modes, the number of potentially significant astrophysical contaminants grows as well. Thermal emission from interplanetary dust, for example, has been detected by the Planck satellite. While the polarization fraction of this zodiacal, or interplanetary, dust emission (IPDE) is expected to be low, it is bright enough to be detected in total power. Here, estimates are made of the magnitude of the effect as it might be seen by the LiteBIRD satellite. The COBE IPDE model from Kelsall et al. (1998) is combined with a model of the LiteBIRD experiment's scanning strategy to estimate potential contamination of the CMB in both total power and polarization power spectra. LiteBIRD should detect IPDE in temperature across all of its bands, from 40 through 402 GHz, and should improve limits on the polarization fraction of IPDE at the higher end of this frequency range. If the polarization fraction of IPDE is of order 1%, the current limit from ISO/CAM measurements in the mid-infrared, it may induce large-scale polarization B modes comparable to cosmological models with an r of order 0.001. In this case, the polarized IPDE would also need to be modeled and removed. As a CMB foreground, IPDE will always be subdominant to Galactic emission, though because it is caused by emission from grains closer to us, it appears variable as the Earth travels around the Sun, and may thereby complicate the data analysis somewhat. But with an understanding of some of the symmetries of the emission and some flexibility in the data processing, it should not be the primary impediment to the CMB polarization measurement.
The finite-temperature linear response theory based on the finite-temperature relativistic Hartree-Bogoliubov (FT-RHB) model is developed in the charge-exchange channel to study the temperature evolution of spin-isospin excitations. Calculations are performed self-consistently with the relativistic point-coupling interactions DD-PC1 and DD-PCX. In the charge-exchange channel, the pairing interaction can be split into isovector (T = 1) and isoscalar (T = 0) parts. For the isovector component, the same separable form of the Gogny D1S pairing interaction is used both for the ground-state calculation and for the residual interaction, while the strength of the isoscalar pairing in the residual interaction is determined by comparison with experimental data on Gamow-Teller resonance (GTR) and isobaric analog resonance (IAR) centroid energy differences in even-even tin isotopes. The temperature effects are introduced by treating Bogoliubov quasiparticles within a grand-canonical ensemble. Thus, unlike the conventional formulation of the quasiparticle random-phase approximation (QRPA) based on the Bardeen-Cooper-Schrieffer (BCS) basis, our model is formulated within the Hartree-Fock-Bogoliubov (HFB) quasiparticle basis. Implementing a relativistic point-coupling interaction and a separable pairing force allows for the reduction of complicated two-body residual-interaction matrix elements, which considerably decreases the dimension of the problem in coordinate space. The main advantage of this method is that it avoids the diagonalization of a large QRPA matrix, especially at finite temperature, where the size of the configuration space is significantly increased. The implemented linear response code is used to study the temperature evolution of the IAR, GTR, and spin-dipole resonance (SDR) in even-even tin isotopes in the temperature range T = 0-1.5 MeV.
The flat stellar density cores of massive elliptical galaxies form rapidly due to sinking supermassive black holes (SMBHs) in gas-poor galaxy mergers. After the SMBHs form a bound binary, gravitational slingshot interactions with nearby stars drive the core regions towards a tangentially biased stellar velocity distribution. We use collisionless galaxy merger simulations with accurate collisional orbit integration around the central SMBHs to demonstrate that the removal of stars from the centre by slingshot kicks accounts for the entire change in velocity anisotropy. The rate of strong (unbinding) kicks is constant over several hundred Myr at $\sim 3 \ \mathrm{M}_\odot\, \rm yr^{-1}$ for our most massive SMBH binary (M_BH = 1.7 × 10^10 M⊙). Using a frequency-based orbit classification scheme (box, x-tube, z-tube, rosette), we demonstrate that slingshot kicks mostly affect box orbits with small pericentre distances, leading to a velocity anisotropy of β ≲ -0.6 within several hundred Myr, as observed in massive ellipticals with large cores. We show how different SMBH masses affect the orbital structure of the merger remnants and present a kinematic tomography connecting orbit families to integral-field kinematic features. Our direct orbit classification agrees remarkably well with a modern triaxial Schwarzschild analysis applied to simulated mock kinematic maps.
The strong X-ray irradiation from young solar-type stars may play a crucial role in the thermodynamics and chemistry of circumstellar discs, driving their evolution in the last stages of disc dispersal as well as shaping the atmospheres of newborn planets. In this paper, we study the influence of stellar mass on circumstellar disc mass-loss rates due to X-ray irradiation, extending our previous study of the mass-loss rate's dependence on the X-ray luminosity and spectral hardness. We focus on stars with masses between 0.1 and 1 M⊙, which are the main targets of current and future missions to find potentially habitable planets. We find a linear relationship between the mass-loss rates and the stellar masses when the X-ray luminosity is scaled with the stellar mass. This linear increase persists even when the X-ray luminosity is kept fixed, because the lower disc aspect ratio allows the X-ray irradiation to reach larger radii. We provide new analytical relations for the mass-loss rates and profiles of photoevaporative winds as a function of the stellar mass that can be used in disc and planet population synthesis models. Our photoevaporative models correctly predict the observed trend of inner-disc lifetime as a function of stellar mass, with an increased steepness for stars less massive than 0.3 M⊙, indicating that X-ray photoevaporation is a good candidate to explain the observed disc dispersal process.
Young solar-type stars are known to be strong X-ray emitters, and their X-ray spectra have been widely studied. X-rays from the central star may play a crucial role in the thermodynamics and chemistry of the circumstellar material as well as in the atmospheric evolution of young planets. In this paper, we present model spectra based on spectral parameters derived from observations of young stars in the Orion nebula cluster from the Chandra Orion Ultradeep Project (COUP). The spectra are then used to calculate new photoevaporation prescriptions that can be used in disc and planet population synthesis models. Our models clearly show that disc wind mass-loss rates are controlled by the stellar luminosity in the soft ($100\, \mathrm{eV}$ to $1\, \mathrm{keV}$) X-ray band. New analytical relations are provided for the mass-loss rates and profiles of photoevaporative winds as a function of the luminosity in the soft X-ray band. The agreement between observed and predicted transition disc statistics is moderately improved when using the new spectra, but the observed population of strongly accreting large-cavity discs still cannot be reproduced by these models. Furthermore, our models predict a population of non-accreting transition discs that are not observed. This highlights the importance of considering the depletion of millimetre-sized dust grains from the outer disc, which is a likely reason why such discs have not been detected yet.
We study the Polyakov loop as well as correlators of the real and imaginary parts of the Polyakov loop in 2+1 flavor QCD at finite temperature. We use hypercubic (HYP) smearing to improve the signal in the lattice calculations and to obtain reliable results for the correlators at large distances. From the large-distance behavior of the correlators we estimate the chromo-electric screening length to be (0.38-0.44)/T. Furthermore, we show that the short-distance distortions due to HYP smearing do not affect the physics of interest.
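For illustration, a screening length of this kind is typically obtained from the exponential large-distance falloff of a correlator. A minimal sketch with synthetic data and a hypothetical screening mass (not the actual lattice analysis):

```python
import numpy as np

# Minimal sketch (not the actual lattice analysis): extract a screening mass m
# from the assumed large-distance behaviour C(r) ~ A * exp(-m*r) / r.
def screening_mass(r, C):
    # log(r*C) = log(A) - m*r is linear in r; fit a straight line.
    slope, intercept = np.polyfit(r, np.log(r * C), 1)
    return -slope

# Synthetic correlator with a hypothetical screening mass m_true = 2.5
# (in units of the temperature T).
r = np.linspace(1.0, 5.0, 40)
m_true = 2.5
C = 0.7 * np.exp(-m_true * r) / r

m_fit = screening_mass(r, C)
print(f"extracted screening mass: {m_fit:.3f}")
```

The inverse of the fitted mass gives the screening length; in the noisy lattice case the fit window and error analysis are of course the hard part.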
The formation of peptide bonds is one of the most important biochemical reaction steps. Without the development of structurally and catalytically active polymers, there would be no life on our planet. However, the formation of large, complex oligomer systems is prevented by the high thermodynamic barrier of peptide condensation in aqueous solution. Liquid sulphur dioxide proves to be a superior alternative for copper-catalyzed peptide condensations. In contrast to water, sulphur dioxide activates the amino acids, leading to the incorporation of all 20 proteinogenic amino acids into proteins. Strikingly, even extremely low initial reactant concentrations of only 50 mM are sufficient for extensive peptide formation, yielding up to 2.9% of dialanine in 7 days. The fact that the reactions proceed at room temperature, together with the successful use of the Hadean mineral covellite (CuS) as a catalyst, suggests a volcanic environment for the formation of the peptide world on early Earth.
We study and model the properties of galaxy clusters in the normal-branch Dvali-Gabadadze-Porrati (nDGP) model of gravity, which is representative of a wide class of theories that exhibit the Vainshtein screening mechanism. Using the first cosmological simulations that incorporate both full baryonic physics and nDGP, we find that, despite being efficiently screened within clusters, the fifth force can raise the temperature of the intracluster gas, affecting the scaling relations between the cluster mass and three observable mass proxies: the gas temperature, the Compton Y-parameter of the Sunyaev-Zel'dovich effect, and the X-ray analogue of the Y-parameter. Therefore, unless properly accounted for, this could lead to biased measurements of the cluster mass in tests that use cluster observations, such as cluster number counts, to probe gravity. Using a suite of dark-matter-only simulations, which span a wide range of box sizes and resolutions, and which feature very different strengths of the fifth force, we also calibrate general fitting formulae that can reproduce the nDGP halo concentration at per cent accuracy for 0 ≤ z ≤ 1, and the halo mass function with ${\lesssim}3{{\ \rm per\ cent}}$ accuracy at 0 ≤ z ≤ 1 (increasing to ${\lesssim}5{{\ \rm per\ cent}}$ for 1 ≤ z ≤ 2), over a halo mass range spanning four orders of magnitude. Our model for the concentration can be used to convert between halo masses defined at different overdensities and to predict statistics such as the non-linear matter power spectrum. The results of this work will form part of a framework for unbiased constraints on gravity using data from ongoing and upcoming cluster surveys.
As an important step towards a complete next-to-leading-order (NLO) QCD analysis of the ratio ε'/ε within the Standard Model Effective Field Theory (SMEFT), we present for the first time the NLO master formula for the BSM part of this ratio, expressed in terms of the Wilson coefficients of all contributing operators evaluated at the electroweak scale. To this end we use the common Weak Effective Theory (WET) basis (the so-called JMS basis), for which tree-level and one-loop matching to the SMEFT are already known. The relevant hadronic matrix elements of BSM operators at the electroweak scale are taken from the Dual QCD approach and the SM ones from lattice QCD. The master formula includes the renormalization group evolution and quark-flavour threshold effects at NLO in QCD from the hadronic scales, at which these matrix elements have been calculated, up to the electroweak scale.
We present a follow-up analysis examining the dynamics and structures of 41 massive, large star-forming galaxies at z ~ 0.67 - 2.45 using both ionized and molecular gas kinematics. We fit the galaxy dynamics with models consisting of a bulge, a thick, turbulent disk, and an NFW dark matter halo, using code that fully forward-models the kinematics, including all observational and instrumental effects. We explore the parameter space using Markov Chain Monte Carlo (MCMC) sampling, including priors based on stellar and gas masses and disk sizes. We fit the full sample using extracted 1D kinematic profiles. For a subset of 14 well-resolved galaxies, we also fit the 2D kinematics. The MCMC approach robustly confirms the results from least-squares fitting presented in Paper I: the sample galaxies tend to be baryon-rich on galactic scales (within one effective radius). The 1D and 2D MCMC results are also in good agreement for the subset, demonstrating that much of the galaxy dynamical information is captured along the major axis. The 2D kinematics are more affected by the presence of noncircular motions, which we illustrate by constructing a toy model with constant inflow for one galaxy that exhibits residual signatures consistent with radial motions. This analysis, together with results from Paper I and other studies, strengthens the finding that massive, star-forming galaxies at z ~ 1 - 2 are baryon-dominated on galactic scales, with lower dark matter fractions toward higher baryonic surface densities. Finally, we present details of the kinematic fitting code used in this analysis.
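As a schematic of the prior-informed MCMC approach described above, the following toy Metropolis-Hastings sampler fits the amplitude of an arctangent rotation curve with a Gaussian prior. All numbers and the one-parameter model are hypothetical, chosen only for illustration; they are not the actual fitting code or data:

```python
import numpy as np

# Toy Metropolis-Hastings sketch of prior-informed kinematic fitting
# (illustrative only; hypothetical model and numbers).
rng = np.random.default_rng(0)

# "Observed" 1D velocity profile: v(r) = v0 * arctan(r/rt) plus noise.
r = np.linspace(0.5, 10, 20)
v0_true, rt, sigma = 200.0, 2.0, 5.0
v_obs = v0_true * np.arctan(r / rt) + rng.normal(0, sigma, r.size)

def log_post(v0):
    model = v0 * np.arctan(r / rt)
    loglike = -0.5 * np.sum((v_obs - model) ** 2 / sigma**2)
    # Gaussian prior on v0, standing in for e.g. a stellar-mass-based prior.
    logprior = -0.5 * ((v0 - 180.0) / 50.0) ** 2
    return loglike + logprior

chain, v0 = [], 150.0
lp = log_post(v0)
for _ in range(5000):
    prop = v0 + rng.normal(0, 5.0)        # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        v0, lp = prop, lp_prop            # accept
    chain.append(v0)

v0_est = np.mean(chain[1000:])            # posterior mean after burn-in
print(f"posterior mean v0 = {v0_est:.1f}")
```

In the actual analysis the model has many more parameters (bulge, disc, halo, instrumental effects) and sampling is done with a dedicated MCMC code; the sketch only shows how likelihood and prior combine in the posterior.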
Accreting supermassive binary black holes (SMBBHs) are potential multimessenger sources because they emit both gravitational-wave and electromagnetic (EM) radiation. Past work has shown that their EM output may be periodically modulated by an asymmetric density distribution in the circumbinary disk, often called an "overdensity" or "lump"; this modulation could possibly be used to identify a source as a binary. We explore the sensitivity of the overdensity to the SMBBH mass ratio and to the magnetic flux through the accretion disk. We find that the relative amplitude of the overdensity and its associated EM periodic signal both degrade with diminishing mass ratio, vanishing altogether somewhere between 1:2 and 1:5. Greater magnetization also weakens the lump and any modulation of the light output. We develop a model describing how lump formation results from internal stress degrading faster in the lump region than it can be rejuvenated through accretion inflow; the model predicts a threshold value of the specific internal stress below which lump formation should occur, a criterion that all our lump-forming simulations satisfy. Thus, detection of such a modulation would provide a constraint on both the mass ratio and the magnetic flux piercing the accretion flow.
The question of what determines the width of Kuiper belt analogues (exoKuiper belts) is an open one. If solved, this understanding would provide valuable insights into the architecture, dynamics, and formation of exoplanetary systems. Recent observations by ALMA have revealed an apparent paradox in this field: the presence of radially narrow belts in protoplanetary discs, which are likely the birthplaces of planetesimals, and exoKuiper belts nearly four times as wide in mature systems. If the parent planetesimals of this type of debris disc indeed form in these narrow protoplanetary rings via the streaming instability where dust is trapped, we propose that this width dichotomy could naturally arise if these dust traps form planetesimals whilst migrating radially, e.g. as caused by a migrating planet. Using the dust evolution software DUSTPY, we find that if the initial protoplanetary disc and trap conditions favour planetesimal formation, dust can still effectively accumulate and form planetesimals as the trap moves. This leads to a positive correlation between the inward radial speed and the final planetesimal belt width, forming belts up to ~100 AU wide over 10 Myr of evolution. We show that although planetesimal formation is most efficient in low-viscosity (α = 10$^{-4}$) discs with steep dust traps to trigger the streaming instability, the large widths of most observed planetesimal belts constrain α to values ≥ 4 × 10$^{-4}$ at tens of AU, since otherwise the traps cannot migrate far enough. Additionally, the large spread in the widths and radii of exoKuiper belts could be due to different trap migration speeds (or protoplanetary disc lifetimes) and different starting locations, respectively. Our work serves as a first step to link exoKuiper belts and rings in protoplanetary discs.
The bispectrum is the leading non-Gaussian statistic in large-scale structure, carrying valuable information on cosmology that is complementary to the power spectrum. To access this information, we need to model the bispectrum in the weakly nonlinear regime. In this work we present the first two-loop, i.e. next-to-next-to-leading order, perturbative description of the bispectrum within an effective field theory (EFT) framework. Using an analytic expansion of the perturbative kernels up to $F_6$ we derive a renormalized bispectrum that is demonstrated to be independent of the UV cutoff. We show that the EFT parameters associated with the four independent second-order EFT operators known from the one-loop bispectrum are sufficient to absorb the UV sensitivity of the two-loop contributions in the double-hard region. In addition, we employ a simplified treatment of the single-hard region, introducing one extra EFT parameter at two-loop order. We compare our results to $N$-body simulations using the realization-based grid perturbation theory method and find good agreement within the expected range, as well as consistent values for the EFT parameters. The two-loop terms start to become relevant at $k \approx 0.07\,h\,\mathrm{Mpc}^{-1}$. The range of wave numbers with per cent level agreement, independently of the shape, extends from 0.08 to 0.15 $h\,\mathrm{Mpc}^{-1}$ when going from one to two loops at $z = 0$. In addition, we quantify the impact of using exact instead of Einstein-de Sitter kernels for the one-loop bispectrum, and discuss to what extent their impact can be absorbed into a shift of the EFT parameters.
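For orientation, the tree-level (leading-order) expression that these loop corrections extend is the standard perturbation theory result with the Einstein-de Sitter second-order kernel:

```latex
B^{\rm tree}(k_1,k_2,k_3) = 2\,F_2(\mathbf{k}_1,\mathbf{k}_2)\,P(k_1)\,P(k_2)
  + \text{2 cyclic permutations},
\qquad
F_2(\mathbf{k}_1,\mathbf{k}_2) = \frac{5}{7}
  + \frac{\mu}{2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right)
  + \frac{2}{7}\,\mu^2 ,
```

where $\mu = \hat{\mathbf{k}}_1 \cdot \hat{\mathbf{k}}_2$ and $P(k)$ is the linear power spectrum; the one- and two-loop corrections add contributions built from higher-order kernels (up to $F_6$ at two loops) together with the EFT counterterms.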
Shortly after its discovery, General Relativity (GR) was applied to predict the behavior of our Universe on the largest scales, and later became the foundation of modern cosmology. Its validity has been verified on a range of scales and environments, from the Solar system to merging black holes. However, experimental confirmations of GR on cosmological scales have so far lacked the accuracy one would hope for: its applications on those scales are largely based on extrapolation, and its validity there is sometimes questioned in the shadow of the discovery of the unexpected cosmic acceleration. Future astronomical instruments surveying the distribution and evolution of galaxies over substantial portions of the observable Universe, such as the Dark Energy Spectroscopic Instrument (DESI), will be able to measure the fingerprints of gravity, and their statistical power will allow strong constraints on alternatives to GR.
EOS is an open-source software package for a variety of computational tasks in flavor physics. Its use cases include theory predictions within and beyond the Standard Model of particle physics, Bayesian inference of theory parameters from experimental and theoretical likelihoods, and simulation of pseudo events for a number of signal processes. EOS ensures high-performance computations through a C++ back-end and ease of use through a Python front-end. To achieve this flexibility, EOS enables the user to select from a variety of implementations of the relevant decay processes and hadronic matrix elements at run time. In this article, we describe the general structure of the software framework and provide basic examples. Further details and in-depth interactive examples are provided as part of the EOS online documentation.
Neutrino telescopes are unrivaled tools to explore the Universe at its most extreme. The current generation of telescopes has shown that very-high-energy neutrinos are produced in the cosmos, has provided hints of their possible origins, and has demonstrated that these neutrinos can be used to probe our understanding of particle physics at otherwise inaccessible regimes. The fluxes, however, are low, which means newer, larger telescopes are needed. Here we present the Pacific Ocean Neutrino Experiment (P-ONE), a proposal to build a multi-cubic-kilometer neutrino telescope off the coast of Canada. The idea builds on the experience accumulated by previous sea-water missions and on the technical expertise of Ocean Networks Canada, which would facilitate deploying such a large infrastructure. The design and physics potential of the first stage and of a full-scale P-ONE are discussed.
Searches for rare $ {B}_s^0 $ and B$^{0}$ decays into four muons are performed using proton-proton collision data recorded by the LHCb experiment, corresponding to an integrated luminosity of 9 fb$^{-1}$. Direct decays and decays via light scalar and J/ψ resonances are considered. No evidence for the six decays searched for is found, and upper limits at the 95% confidence level on their branching fractions, ranging between 1.8 × 10$^{-10}$ and 2.6 × 10$^{-9}$, are set.
We compute the QCD static force and potential using gradient flow at next-to-leading order in the strong coupling. The static force is the spatial derivative of the static potential: it encodes the QCD interaction at both short and long distances. While on the one hand the static force has the advantage of being free of the O(Λ$_{QCD}$) renormalon that affects the static potential when computed in perturbation theory, on the other hand its direct lattice QCD computation suffers from poor convergence. The convergence can be improved by using gradient flow, where the gauge fields in the operator definition of a given quantity are replaced by flowed fields at flow time t, which effectively smear the gauge fields over a distance of order $ \sqrt{t} $, while they reduce to the QCD fields in the limit t → 0. Based on our next-to-leading order calculation, we explore the properties of the static force for arbitrary values of t, as well as in the t → 0 limit, which may be useful for lattice QCD studies.
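Concretely, the relation between force and potential and its leading-order perturbative form read (standard definitions, not specific to the flowed case):

```latex
F(r) = \frac{\mathrm{d}V(r)}{\mathrm{d}r},
\qquad
V(r) = -\,C_F\,\frac{\alpha_s}{r} + \mathcal{O}(\alpha_s^2)
\quad\Longrightarrow\quad
F(r) = C_F\,\frac{\alpha_s}{r^2} + \mathcal{O}(\alpha_s^2),
```

so any $r$-independent constant in $V(r)$, including the piece affected by the $O(\Lambda_{\rm QCD})$ renormalon, drops out of $F(r)$.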
The experimental detection of coherent elastic neutrino-nucleus scattering (CE$\nu$NS) allows the investigation of neutrinos and neutrino sources with all-flavor sensitivity. Given its large neutron content and stability, Pb is a very appealing choice as a target element. However, the presence of the radioisotope $^{210}$Pb (T$_{1/2}\sim$22 yrs) makes natural Pb unsuitable for low-background, low-energy event searches. This limitation can be overcome by employing Pb of archaeological origin, in which several half-lives of $^{210}$Pb have gone by. We present results of a cryogenic measurement of a 15 g PbWO$_4$ crystal, grown with archaeological Pb (older than $\sim$2000 yrs), that achieved a sub-keV nuclear recoil detection threshold. A ton-scale experiment employing such material, with a detection threshold for nuclear recoils of just 1 keV, would probe the entire Milky Way for supernovae, with equal sensitivity for all neutrino flavors, allowing the study of the core of such exceptional events.
We compute NRQCD long-distance matrix elements that appear in the inclusive production cross sections of $P$-wave heavy quarkonia in the framework of potential NRQCD. The formalism developed in this work applies to strongly coupled charmonia and bottomonia. This makes possible the determination of color-octet NRQCD long-distance matrix elements without relying on measured cross section data, which has not been possible so far. We obtain results for inclusive production cross sections of $\chi_{cJ}$ and $\chi_{bJ}$ at the LHC, which are in good agreement with measurements.
The exploration of the universe has recently entered a new era thanks to the multi-messenger paradigm, characterized by a continuous increase in the quantity and quality of experimental data that is obtained by the detection of the various cosmic messengers (photons, neutrinos, cosmic rays and gravitational waves) from numerous origins. These messengers give us information about their sources in the universe and about the properties of the intergalactic medium. Moreover, multi-messenger astronomy opens up the possibility to search for phenomenological signatures of quantum gravity. On the one hand, the most energetic events allow us to test our physical theories at energy regimes which are not directly accessible in accelerators; on the other hand, tiny effects in the propagation of very high energy particles could be amplified by cosmological distances. After decades of merely theoretical investigations, the possibility of obtaining phenomenological indications of Planck-scale effects is a revolutionary step in the quest for a quantum theory of gravity, but it requires cooperation between different communities of physicists (both theoretical and experimental). This review, prepared within the COST Action CA18108 “Quantum gravity phenomenology in the multi-messenger approach”, is aimed at promoting this cooperation by giving a state-of-the-art account of the interdisciplinary expertise that is needed in the effective search of quantum gravity footprints in the production, propagation and detection of cosmic messengers.
We search for the signature of shocks in stacked gas pressure profiles of galaxy clusters using data from the South Pole Telescope (SPT). Specifically, we stack the recently released Compton-y maps from the 2500 deg$^2$ SPT-SZ survey on the locations of clusters identified in that same dataset. The sample contains 516 clusters with mean mass $\langle M_{\rm 200m}\rangle = 10^{14.9}\,{\rm M}_\odot$ and redshift $\langle z\rangle = 0.55$. We analyze in parallel a set of zoom-in hydrodynamical simulations from The Three Hundred project. The SPT-SZ data show two features: (i) a pressure deficit at $R/R_{\rm 200m} = 1.08 \pm 0.09$, measured at $3.1\sigma$ significance and not observed in the simulations; and (ii) a sharp decrease in pressure at $R/R_{\rm 200m} = 4.58 \pm 1.24$ at $2.0\sigma$ significance. The pressure deficit is qualitatively consistent with a shock-induced thermal non-equilibrium between electrons and ions, and the second feature is consistent with accretion shocks seen in previous studies. We split the cluster sample by redshift and mass, and find both features exist in all cases. There are also no significant differences in the features along and across the cluster major axis, whose orientation roughly points towards filamentary structure. As a consistency test, we also analyze clusters from the Planck and Atacama Cosmology Telescope Polarimeter surveys and find quantitatively similar features in the pressure profiles. Finally, we compare the accretion shock radius ($R_{\rm sh,acc}$) with existing measurements of the splashback radius ($R_{\rm sp}$) for SPT-SZ and constrain the lower limit of the ratio, $R_{\rm sh,acc}/R_{\rm sp} > 2.16 \pm 0.59$.
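As an illustration of how such features can be located, a sharp steepening in a stacked profile shows up as a minimum of the logarithmic derivative. A minimal sketch on a synthetic profile (invented numbers, not the SPT measurement):

```python
import numpy as np

# Sketch: locate a sharp steepening in a stacked radial pressure profile
# via the logarithmic derivative (synthetic profile, hypothetical numbers).
r = np.geomspace(0.1, 6.0, 200)           # radius in units of R200m
base = r ** -2.0                           # smooth power-law pressure profile
# Hypothetical localized pressure deficit near r ~ 1.1 R200m:
dip = 1.0 - 0.3 * np.exp(-0.5 * ((r - 1.1) / 0.1) ** 2)
P = base * dip

lnr, lnP = np.log(r), np.log(P)
dlnP = np.gradient(lnP, lnr)               # logarithmic slope d ln P / d ln r
r_feature = r[np.argmin(dlnP)]             # radius of steepest slope
print(f"steepest slope at r/R200m = {r_feature:.2f}")
```

In the real analysis the profile is a noisy stack with a covariance, so the feature location and its significance come from a fit rather than a simple argmin.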
Using proton-proton collision data, corresponding to an integrated luminosity of 9 fb$^{-1}$ collected with the LHCb detector, seven decay modes of the $ {\mathrm{B}}_{\mathrm{c}}^{+} $ meson into a J/ψ or ψ(2S) meson and three charged hadrons, kaons or pions, are studied. The decays $ {\mathrm{B}}_{\mathrm{c}}^{+} $ → (ψ(2S) → J/ψπ$^{+}$π$^{-}$)π$^{+}$, $ {\mathrm{B}}_{\mathrm{c}}^{+} $ → ψ(2S)π$^{+}$π$^{-}$π$^{+}$, $ {\mathrm{B}}_{\mathrm{c}}^{+} $ → J/ψK$^{+}$π$^{-}$π$^{+}$ and $ {\mathrm{B}}_{\mathrm{c}}^{+} $ → J/ψK$^{+}$K$^{-}$K$^{+}$ are observed for the first time, and evidence for the $ {\mathrm{B}}_{\mathrm{c}}^{+} $ → ψ(2S)K$^{+}$K$^{-}$π$^{+}$ decay is found, where the J/ψ and ψ(2S) mesons are reconstructed in their dimuon decay modes. The ratios of branching fractions between the different $ {\mathrm{B}}_{\mathrm{c}}^{+} $ decays are reported, as well as the fractions of the decays proceeding via intermediate resonances. The results largely support the factorisation approach used for the theoretical description of the studied decays.
We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to non-uniformity in random samples and to short-lived features in event time series. Under some conditions, this new test can outperform existing ones, such as the well-known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided, including an illustration and examples, together with a parameterization of its distribution based on simulation.
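As an illustration of the general idea behind spacings-based tests, the sketch below uses Greenwood's statistic, a classic member of this family (not necessarily the statistic introduced here): clustered data leave a few large spacings whose squared sum exceeds the uniform expectation.

```python
import numpy as np

# Illustrative spacings-based statistic: Greenwood's statistic
# (a classic example of the genre, not the statistic of the paper).
def greenwood(samples):
    u = np.sort(samples)
    # Spacings of the ordered sample on [0, 1], including the boundary gaps.
    d = np.diff(np.concatenate(([0.0], u, [1.0])))
    return np.sum(d ** 2)

rng = np.random.default_rng(1)
n_trials = 200

# Uniform null samples versus samples with a short-lived "feature":
# half the points clustered in a narrow interval leave a few large
# spacings elsewhere, which inflates the sum of squared spacings.
g_null = np.mean([greenwood(rng.uniform(size=20)) for _ in range(n_trials)])
g_feat = np.mean([
    greenwood(np.concatenate([rng.uniform(size=10),
                              0.5 + 0.01 * rng.uniform(size=10)]))
    for _ in range(n_trials)
])
print(g_null, g_feat)
```

Under the null, the expected value of Greenwood's statistic for n points is 2/(n+2); significance is usually assessed against its simulated distribution, as also done for the new statistic described above.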
We present a photometric analysis of star and star cluster (SC) formation in a high-resolution simulation of a dwarf galaxy starburst that allows the formation of individual stars to be followed. Previous work demonstrated that the properties of the SCs formed in the simulation are in good agreement with observations. In this paper, we create mock spectral energy distributions and broad-band photometric images using the radiative transfer code SKIRT 9. We test several observational star formation rate (SFR) tracers and find that $24$ $\mu$m, total infrared and H$\alpha$ trace the underlying SFR during the (post)starburst phase, while UV tracers yield a more accurate picture of star formation during quiescent phases prior to and after the merger. We then place the simulated galaxy at distances of $10$ and $50$ Mpc and use aperture photometry at Hubble Space Telescope resolution to analyse the simulated SC population. During the starburst phase, a hierarchically forming set of SCs leads to inaccurate source separation because of crowding. This results in estimated SC mass function slopes that are up to $\sim0.3$ shallower than the true slope of $\sim-1.9$ to $-2$ found for the bound clusters identified from the particle data in the simulation. The masses of the largest clusters are overestimated by a factor of up to $2.9$ due to unresolved clusters within the apertures. The aperture-based analysis also produces a relation between cluster formation efficiency and SFR surface density that is slightly flatter than that recovered from bound clusters. The differences are strongest in quiescent star-forming environments.
Self-organized pattern formation is vital for many biological processes. Reaction-diffusion models have advanced our understanding of how biological systems develop spatial structures, starting from homogeneity. However, biological processes inherently involve multiple spatial and temporal scales and transition from one pattern to another over time, rather than progressing from homogeneity to a pattern. To deal with such multiscale systems, coarse-graining methods are needed that allow the dynamics to be reduced to the relevant degrees of freedom at large scales, but without losing information about the patterns at the small scales. Here, we present a semi-phenomenological approach which exploits mass conservation in pattern formation and enables the reconstruction of information about patterns from the large-scale dynamics. The basic idea is to partition the domain into distinct regions (coarse-grain) and determine instantaneous dispersion relations in each region, which ultimately inform about local pattern-forming instabilities. We illustrate our approach by studying the Min system, a paradigmatic model for protein pattern formation. By performing simulations, we first show that the Min system produces multiscale patterns in a spatially heterogeneous geometry. This prediction is confirmed experimentally by in vitro reconstitution of the Min system. Using a recently developed theoretical framework for mass-conserving reaction-diffusion systems, we show that the spatiotemporal evolution of the total protein densities on large scales reliably predicts the pattern-forming dynamics. Our approach provides an alternative and versatile theoretical framework for complex systems where analytical coarse-graining methods are not applicable, and can in principle be applied to a wide range of systems with an underlying conservation law.
We report on the status of the analysis of the static energy in $2+1+1$-flavor QCD. The static energy is obtained by measuring Wilson line correlators in Coulomb gauge using the HISQ action, yielding the scales $r_{0}/a$, $r_{1}/a$, $r_{2}/a$, their ratios, and the string tension $\sigma r_{i}^{2}$. We put emphasis on possible effects due to the dynamical charm quark by comparing the lattice results to continuum results of the static energy with and without a massive flavor at two-loop accuracy. We employ gauge-field ensembles from the HotQCD and MILC Collaborations.
We compute the leading corrections to the differential cross section for top-pair production via gluon fusion due to third-generation dimension-six operators at leading order in QCD. The Standard Model fields are assumed to couple only weakly to the hypothetical new sector. A systematic approach then suggests treating single insertions of the operator class containing gluon field strength tensors on the same footing as the explicitly loop-suppressed contributions from four-fermion operators. This is in particular the case for the chromomagnetic operator $Q_{uG}$ and the purely bosonic operators $Q_{G}$ and $Q_{\varphi G}$. All leading-order dimension-six contributions are consequently suppressed by a loop factor $1/16\pi^{2}$.
The interstellar medium is characterized by an intricate filamentary network that exhibits complex structures. These show a variety of different shapes (e.g. junctions, rings, etc.), deviating strongly from the usually assumed cylindrical shape. A possible formation mechanism is the merging of filaments, which we analyse in this study. Indeed, the proximity of filaments in networks suggests that mergers are rather likely. As the merger has to be faster than the end-dominated collapse of the filaments along their major axes, we expect three possible outcomes: (a) the filaments collapse before a merger can happen, (b) the merged filamentary complex already shows signs of cores at the edges, or (c) the filaments merge into a structure which is not end-dominated. We develop an analytic formula for the merging and core-formation time-scales at the edge and validate our model via hydrodynamical simulations with the adaptive-mesh-refinement code RAMSES. This allows us to predict the outcome of a filament merger, given the initial conditions: the initial distance, the respective line masses of the filaments, and their relative velocities.
The construction of catalogues of a particular type of galaxy can be complicated by interlopers contaminating the sample. In spectroscopic galaxy surveys this can be due to the misclassification of an emission line; for example, in the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) low-redshift [O II] emitters may make up a few per cent of the observed Ly α emitter (LAE) sample. The presence of contaminants affects the measured correlation functions and power spectra. Previous attempts to deal with this using the cross-correlation function have assumed sources at a fixed redshift, or have not modelled evolution within the adopted redshift bins. However, in spectroscopic surveys like HETDEX, where the contamination fraction is likely to be redshift dependent, the observed clustering of misclassified sources will appear to evolve strongly due to projection effects, even if their true clustering does not. We present a practical method for accounting for the presence of contaminants with redshift-dependent contamination fractions and projected clustering. We show using mock catalogues that our method, unlike existing approaches, yields unbiased clustering measurements from the upcoming HETDEX survey in scenarios with redshift-dependent contamination fractions within the redshift bins used. We show our method returns autocorrelation functions with systematic biases much smaller than the statistical noise for samples with contamination fractions at least as high as 7 per cent. We also present and test a method for fitting for the redshift-dependent interloper fraction using the LAE-[O II] galaxy cross-correlation function, which gives less biased results than assuming a single interloper fraction for the whole sample.
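To see how an interloper fraction mixes clustering signals, consider the standard decomposition of the observed autocorrelation of a contaminated sample. The sketch below assumes a constant fraction f and invented power-law clustering, whereas the method presented above handles redshift-dependent fractions:

```python
import numpy as np

# Sketch of how a (known, constant) interloper fraction f mixes correlation
# functions, and how the target autocorrelation can be recovered.
# Illustrative only: the actual method treats a redshift-DEPENDENT f.
def observed_w(w_aa, w_bb, w_ab, f):
    # a = target population (e.g. LAEs), b = interlopers (e.g. [O II]).
    return (1 - f) ** 2 * w_aa + 2 * f * (1 - f) * w_ab + f ** 2 * w_bb

def recover_w_aa(w_obs, w_bb, w_ab, f):
    # Invert the mixing for the target autocorrelation.
    return (w_obs - 2 * f * (1 - f) * w_ab - f ** 2 * w_bb) / (1 - f) ** 2

r = np.linspace(1, 50, 25)        # separation (arbitrary units)
w_aa = (r / 8.0) ** -1.8          # hypothetical target clustering
w_bb = (r / 5.0) ** -1.6          # hypothetical projected interloper clustering
w_ab = np.zeros_like(r)           # assume no physical cross-correlation
f = 0.07                          # hypothetical contamination fraction

w_obs = observed_w(w_aa, w_bb, w_ab, f)
w_rec = recover_w_aa(w_obs, w_bb, w_ab, f)
print(np.max(np.abs(w_rec - w_aa)))   # exact inversion up to round-off
```

When f varies with redshift, the interloper term is projected through the survey window, which is why the observed clustering of misclassified sources appears to evolve even when their true clustering does not.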
A short review of existing efforts to understand charge radii and related indicators on a global scale within the covariant density functional theory (CDFT) is presented. Using major classes of covariant energy density functionals (CEDFs), the global accuracy of the description of experimental absolute and differential charge radii within the CDFT framework has been established. This assessment is supplemented by an evaluation of theoretical statistical and systematic uncertainties in the description of charge radii. New results on the accuracy of the description of differential charge radii in deformed actinides and light superheavy nuclei are presented and the role of octupole deformation in their reproduction is evaluated. Novel mechanisms leading to odd-even staggering in charge radii are discussed. Finally, we analyze the role of self-consistency effects in an accurate description of differential charge radii.