Cosmic-ray antideuterons could be key to the discovery of exotic phenomena in our Galaxy, such as dark-matter annihilation or primordial black hole evaporation. Unfortunately, theoretical predictions of the antideuteron flux at Earth are plagued with uncertainties from the mechanisms of antideuteron production and propagation in the Galaxy. We present the most up-to-date calculation of the antideuteron fluxes from cosmic-ray collisions with the interstellar medium and from exotic processes. We include for the first time the antideuteron inelastic interaction cross section recently measured by the ALICE collaboration to account for the loss of antideuterons during propagation. In order to bracket the uncertainty in the expected fluxes, we consider several state-of-the-art models of antideuteron production and of cosmic-ray propagation.
In nuclear collisions the incident protons generate a Coulomb field which acts on produced charged particles. The impact of these interactions on charged pion transverse-mass and rapidity spectra, as well as on pion-pion momentum correlations, is investigated in Au+Au collisions at $\sqrt{s_{NN}}$ = 2.4 GeV. We show that the low-$m_t$ part of the data ($m_t < 0.2$ GeV/c$^2$) can be well described by a Coulomb-modified Boltzmann distribution that also takes into account changes of the Coulomb field during the expansion of the fireball. The observed centrality dependence of the fitted mean Coulomb potential deviates strongly from a $A_{part}^{2/3}$ scaling, indicating that, next to the fireball, the non-interacting charged spectators have to be taken into account. For the most central collisions, the Coulomb modifications of the HBT source radii are found to be consistent with the potential extracted from the single-pion transverse-mass distributions. This finding suggests that the region of homogeneity obtained from two-pion correlations coincides with the region in which the pions freeze out. Using the inferred mean-square radius of the charge distribution at freeze-out, we have deduced a baryon density, in fair agreement with values obtained from statistical hadronization model fits to the particle yields.
We report our analysis of the static energy in (2+1+1)-flavor QCD over a wide range of lattice spacings and several quark masses. We obtain results for the static energy out to distances of nearly 1 fm, allowing us to perform a simultaneous determination of the lattice scales $r_2$, $r_1$, and $r_0$, as well as the string tension $\sigma$. While our results for ${r_0}/{r_1}$ and $r_0\sqrt{\sigma}$ agree with published (2+1)-flavor results, our result for ${r_1}/{r_2}$ differs significantly from the value obtained in the (2+1)-flavor case, likely due to the effect of the charm quark. We study in detail the effect of the charm quark on the static energy by comparing our results on the finest lattices with the previously published (2+1)-flavor QCD results at similar lattice spacing. The lattice results agree well with the two-loop perturbative expression of the static energy incorporating finite charm mass effects.
Mildly relativistic perpendicular, collisionless multiple-ion gamma-ray burst shocks are analyzed using 2D3V particle-in-cell simulations. A characteristic feature of multiple-ion shocks is alternating maxima of the α-particle and proton densities, at least in the early downstream. Turbulence, shock-drift acceleration, and evidence of stochastic acceleration are observed. We performed simulations with both in-plane (B_y) and out-of-plane (B_z) magnetic fields, as well as in a shock setup with φ = 45°, and saw multiple differences: while with B_z the highest-energy particles mostly gain energy at the beginning of the shock, with B_y particles continue gaining energy and do not appear to have reached their final energy level. A larger magnetization σ leads to more high-energy particles in our simulations. One important quantity for astronomers is the electron acceleration efficiency ϵ_e, which is measurable through synchrotron radiation. This quantity hardly changes when the fraction of α particles is varied while keeping σ constant. It is, however, noteworthy that ϵ_e differs strongly between in-plane and out-of-plane magnetic fields. Regarding the proton and α-particle acceleration efficiencies, ϵ_p and ϵ_α, the energy of α particles always decreases when passing the shock into the downstream, whereas the energy of protons can increase if α particles account for the majority of the ions.
Context. In recent years, large (sub-)millimetre surveys of protoplanetary disks in different star-forming regions have well constrained the demographics of disks, such as their millimetre luminosities, spectral indices, and disk radii. Additionally, several high-resolution observations have revealed an abundance of substructures in the disks' dust continuum. The most prominent are ring-like structures, which are likely caused by pressure bumps trapping dust particles. The origins and characteristics of these pressure bumps, nevertheless, need to be further investigated.
Aims: The purpose of this work is to study how dynamic pressure bumps affect observational properties of protoplanetary disks. We further aim to differentiate between the planetary- versus zonal flow-origin of pressure bumps.
Methods: We perform one-dimensional gas and dust evolution simulations, setting up models with varying pressure bump features, including their amplitude and location, growth time, and number of bumps. We subsequently run radiative transfer calculations to obtain synthetic images, from which we derive the observable quantities.
Results: We find that the outermost pressure bump determines the disk's dust size across different millimetre wavelengths and confirm that the observed dust masses of disks with optically thick inner bumps (<40 au) are underestimated by up to an order of magnitude. Our modelled dust traps need to form early (<0.1 Myr), fast (on viscous timescales), and must be long lived (>Myr) to obtain the observed high millimetre luminosities and low spectral indices of disks. While the planetary bump models can reproduce these observables irrespective of the opacity prescription, the highest opacities are needed for the dynamic bump model, which mimics zonal flows in disks, to be in line with observations.
Conclusions: Our findings favour the planetary- over the zonal flow-origin of pressure bumps and support the idea that planet formation already occurs in early class 0-1 stages of circumstellar disks. The determination of the disk's effective size through its outermost pressure bump also delivers a possible answer to why disks in recent low-resolution surveys appear to have the same sizes across different millimetre wavelengths.
Planets are born from the gas and dust discs surrounding young stars. Energetic radiation from the central star can drive thermal outflows from the discs' atmospheres, strongly affecting the evolution of the discs and the nascent planetary system. In this context, several numerical models of varying complexity have been developed to study the process of disc photoevaporation driven by the central star. We describe the numerical techniques, the results, and the predictive power of current models, and identify observational tests to constrain them.
We study the inner structure of the group-scale lens CASSOWARY 31 (CSWA 31) by adopting both strong lensing and dynamical modeling. CSWA 31 is a peculiar lens system. The brightest group galaxy (BGG) is an ultra-massive elliptical galaxy at z = 0.683 with a weighted mean velocity dispersion of σ = 432 ± 31 km s^{−1}. It is surrounded by group members and several lensed arcs probing up to ≃150 kpc in projection. Our results significantly improve on previous analyses of CSWA 31 thanks to the new HST imaging and MUSE integral-field spectroscopy. From the secure identification of five sets of multiple images and measurements of the spatially resolved stellar kinematics of the BGG, we conduct a detailed analysis of the multi-scale mass distribution using various modeling approaches, in both the single and multiple lens-plane scenarios. Our best-fit mass models reproduce the positions of multiple images and provide robust reconstructions for two background galaxies at z = 1.4869 and z = 2.763. Despite small variations related to the different sets of input constraints, the relative contributions from the BGG and group-scale halo are remarkably consistent in our three reference models, demonstrating the self-consistency between strong lensing analyses based on image position and extended image modeling. We find that the ultra-massive BGG dominates the projected total mass profiles within 20 kpc, while the group-scale halo dominates at larger radii. The total projected mass enclosed within R_{eff} = 27.2 kpc is 1.10^{+0.02}_{−0.04} × 10^{13} M⊙. We find that CSWA 31 is a peculiar fossil group, strongly dark-matter dominated toward the central region, and with a projected total mass profile similar to higher-mass cluster-scale halos. The total mass-density slope within the effective radius is shallower than isothermal, consistent with previous analyses of early-type galaxies in overdense environments.
Full Table B.1 is only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/668/A162
Cytoskeletal networks form complex intracellular structures. Here we investigate a minimal model for filament-motor mixtures in which motors act as depolymerases and thereby regulate filament length. Combining agent-based simulations and hydrodynamic equations, we show that resource-limited length regulation drives the formation of filament clusters despite the absence of mechanical interactions between filaments. Even though the orientation of individual filaments remains fixed, a collective filament orientation emerges in the clusters, aligned orthogonal to their interfaces.
We compute the three-loop helicity amplitudes for qq̄ → gg and its crossed partonic channels in massless QCD. Our analytical results provide a non-trivial check of the color quadrupole contribution to the infrared poles for external states in different color representations. At high energies, the qg → qg amplitude exhibits the factorized form predicted by Regge theory and confirms previous results for the gluon Regge trajectory extracted from qq' → qq' and gg → gg scattering.
Context. Models of planetary core growth by either planetesimal or pebble accretion are traditionally disconnected from the models of dust evolution and formation of the first gravitationally bound planetesimals. State-of-the-art models typically start with massive planetary cores already present.
Aims: We aim to study the formation and growth of planetary cores in a pressure bump, motivated by the annular structures observed in protoplanetary disks, starting with submicron-sized dust grains.
Methods: We connect the models of dust coagulation and drift, planetesimal formation in the streaming instability, gravitational interactions between planetesimals, pebble accretion, and planet migration into one uniform framework.
Results: We find that planetesimals forming early at the massive end of the size distribution grow quickly, predominantly by pebble accretion. These few massive bodies grow on timescales of ~100 000 yr and stir the planetesimals that form later, preventing the emergence of further planetary cores. Additionally, a migration trap occurs, allowing for retention of the growing cores.
Conclusions: Pressure bumps are favourable locations for the emergence and rapid growth of planetary cores by pebble accretion as the dust density and grain size are increased and the pebble accretion onset mass is reduced compared to a smooth-disc model.
Multiply imaged time-variable sources can be used to measure absolute distances as a function of redshifts and thus determine cosmological parameters, chiefly the Hubble Constant H0. In the two decades up to 2020, through a number of observational and conceptual breakthroughs, this so-called time-delay cosmography has reached a precision sufficient to be an important independent voice in the current "Hubble tension" debate between early- and late-universe determinations of H0. The 2020s promise to deliver major advances in time-delay cosmography, owing to the large number of lenses to be discovered by new and upcoming surveys and the vastly improved capabilities for follow-up and analysis. In this review, after a brief summary of the foundations of the method and recent advances, we outline the opportunities for the decade and the challenges that will need to be overcome in order to meet the goal of the determination of H0 from time-delay cosmography with 1% precision and accuracy.
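For readers unfamiliar with the method, the central relation of time-delay cosmography can be summarized as follows (standard lensing notation, not drawn from this abstract):

```latex
\Delta t = \frac{D_{\Delta t}}{c}\,\Delta\phi ,
\qquad
D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \;\propto\; \frac{1}{H_0},
```

where Δφ is the Fermat-potential difference between two images and D_d, D_s, D_ds are the angular diameter distances to the deflector, to the source, and between the two. Measuring the time delay Δt and modelling Δφ from the lens mass distribution therefore yields the time-delay distance D_Δt and hence H0.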
We use gradient flow to compute the static force based on a Wilson loop with a chromoelectric field insertion. The result can be compared on one hand to the static force from the numerical derivative of the lattice static energy, and on the other hand to the perturbative calculation, allowing a precise extraction of the $\Lambda_0$ parameter. This study may open the way to gradient flow calculations of correlators of chromoelectric and chromomagnetic fields, which typically arise in the nonrelativistic effective field theory factorization.
We investigate the main tensions within the current standard model of cosmology from the perspective of the void size function in BOSS DR12 data. For this purpose, we present the first cosmological constraints on the parameters $S_8\equiv \sigma_8\sqrt{\Omega_{\rm m}/0.3}$ and $H_0$ obtained from voids as a stand-alone probe. We rely on an extension of the popular volume-conserving model for the void size function, tailored to the application to data, including geometric and dynamic distortions. We calibrate the two nuisance parameters of this model with the official BOSS collaboration mock catalogs and propagate their uncertainty through the statistical analysis of the BOSS void number counts. We focus our analysis on the $\Omega_{\rm m}$--$\sigma_8$ and $\Omega_{\rm m}$--$H_0$ parameter planes and derive the marginalized constraints $S_8 = 0.78^{+0.16}_{-0.14}$ and $H_0=65.2^{+4.5}_{-3.6}$ $\mathrm{km} \ \mathrm{s}^{-1} \ \mathrm{Mpc}^{-1}$. Our estimate of $S_8$ is fully compatible with constraints from the literature, while our $H_0$ value is in mild disagreement, at slightly more than $1\sigma$, with recent local distance ladder measurements based on type Ia supernovae. Our results open up a new viewing angle on the rising cosmological tensions and are expected to improve notably in precision when jointly analyzed with independent probes.
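As an aside, the $S_8$ definition quoted above is a one-line computation. The sketch below uses illustrative placeholder values for $\sigma_8$ and $\Omega_{\rm m}$ (they are not the paper's posterior values):

```python
import math

# Placeholder values chosen for illustration only, not from the BOSS analysis.
sigma8 = 0.78
omega_m = 0.30

# S8 = sigma8 * sqrt(Omega_m / 0.3); for Omega_m = 0.3 it reduces to sigma8.
s8 = sigma8 * math.sqrt(omega_m / 0.3)
print(f"{s8:.2f}")  # prints 0.78
```

The normalization by 0.3 is what makes $S_8$ nearly independent of $\Omega_{\rm m}$ along the weak-lensing degeneracy direction.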
Context. Recent observations with the Atacama Large Millimeter Array (ALMA) have shown that the large dust aggregates observed at millimeter wavelengths settle to the midplane into a remarkably thin layer. This sets strong limits on the strength of the turbulence and other gas motions in these disks.
Aims: We intend to find out if the geometric thinness of these layers is evidence against the vertical shear instability (VSI) operating in these disks. We aim to verify if a dust layer consisting of large enough dust aggregates could remain geometrically thin enough to be consistent with the latest observations of these dust layers, even if the disk is unstable to the VSI. If this is falsified, then the observed flatness of these dust layers proves that these disks are stable against the VSI, even out to the large radii at which these dust layers are observed.
Methods: We performed hydrodynamic simulations of a protoplanetary disk with a locally isothermal equation of state, and let the VSI fully develop. We sprinkled dust particles with a given grain size at random positions near the midplane and followed their motion as they got stirred up by the VSI, assuming no feedback onto the gas. We repeated the experiment for different grain sizes and determined for which grain size the layer becomes thin enough to be consistent with ALMA observations. We then verified if, with these grain sizes, it is still possible (given the constraints of dust opacity and gravitational stability) to generate a moderately optically thick layer at millimeter wavelengths, as observations appear to indicate.
Results: We found that even very large dust aggregates with Stokes numbers close to unity get stirred up to relatively large heights above the midplane by the VSI, which is in conflict with the observed geometric thinness. For grains so large that the Stokes number exceeds unity, the layer can be made to remain thin, but we show that it is hard to make dust layers optically thick at ALMA wavelengths (e.g., τ_{1.3mm} ≳ 1) with such large dust aggregates.
Conclusions: We conclude that protoplanetary disks with geometrically thin midplane dust layers cannot be VSI unstable, at least not down to the disk midplane. Explanations for the inhibition of the VSI out to several hundreds of au include a high dust-to-gas ratio of the midplane layer, a modest background turbulence, and/or a reduced dust-to-gas ratio of the small dust grains that are responsible for the radiative cooling of the disk. A reduction of small grains by a factor of between 10 and 100 is sufficient to quench the VSI. Such a reduction is plausible in dust growth models, and still consistent with observations at optical and infrared wavelengths.
In the past few years, the Event Horizon Telescope (EHT) has provided the first-ever event-horizon-scale images of the supermassive black holes (BHs) M87* and Sagittarius A* (Sgr A*). The next-generation EHT (ngEHT) project is an extension of the EHT array that promises larger angular resolution and higher sensitivity to the dim, extended flux around the central ring-like structure, possibly connecting the accretion flow and the jet. The ngEHT Analysis Challenges aim to understand the science extractability from synthetic images and movies so as to inform the ngEHT array design and analysis algorithm development. In this work, we examine the numerical fluid simulations used to construct the source models in the challenge set, which currently target M87* and Sgr A*. We have a rich set of models encompassing steady-state radiatively inefficient accretion flows with time-dependent shearing hotspots, as well as radiative and non-radiative general relativistic magneto-hydrodynamic simulations that incorporate electron heating and cooling. We find that the models exhibit remarkably similar temporal and spatial properties, except for the electron temperature, since radiative losses substantially cool down electrons near the BH and the jet sheath. We restrict ourselves to standard torus accretion flows and leave larger explorations of alternative accretion models to future work.
Context. An excess of galaxy-galaxy strong lensing (GGSL) in galaxy clusters compared to expectations from the Λ cold-dark-matter (CDM) cosmological model has recently been reported. Theoretical estimates of the GGSL probability are based on the analysis of numerical hydrodynamical simulations in ΛCDM cosmology.
Aims: We quantify the impact of the numerical resolution and active galactic nucleus (AGN) feedback scheme adopted in cosmological simulations on the predicted GGSL probability, and determine if varying these simulation properties can alleviate the gap with observations.
Methods: We analyze cluster-size halos (M_{200} > 5 × 10^{14} M⊙) simulated with different mass and force resolutions and implementing several independent AGN feedback schemes. Our analysis focuses on galaxies with Einstein radii in the range 0.″5 ≤ θ_E ≤ 3″.
Results: We find that improving the mass resolution by factors of 10 and 25, while using the same galaxy formation model that includes AGN feedback, does not affect the GGSL probability. We find similar results regarding the choice of gravitational softening. On the contrary, adopting an AGN feedback scheme that is less efficient at suppressing gas cooling and star formation leads to an increase in the GGSL probability by a factor of between 3 and 6. However, such simulations form overly massive galaxies whose contribution to the lensing cross section would be significant, but their Einstein radii are too large to be consistent with the observations. The primary contributors to the observed GGSL cross sections are galaxies with smaller masses that are compact enough to become critical for lensing. The population with these required characteristics appears to be absent from simulations.
Conclusions: Based on these results, we reaffirm the tension between observations of GGSL and theoretical expectations in the framework of the ΛCDM cosmological model. The GGSL probability is sensitive to the galaxy formation model implemented in the simulations. Still, all the tested models have difficulty simultaneously reproducing the stellar mass function and the internal structure of galaxies.
The existence of a nucleon-$\phi$ (N-$\phi$) bound state has been the subject of theoretical and experimental investigations for decades. In this letter, a re-analysis of the p-$\phi$ correlation measured at the LHC is presented, using as input recent lattice calculations of the N-$\phi$ interaction in the spin 3/2 channel obtained by the HAL QCD collaboration. A constrained fit of the experimental data allows us to determine the spin 1/2 channel of the p-$\phi$ interaction, with evidence for the formation of a p-$\phi$ bound state. The scattering length and effective range extracted for the spin 1/2 channel are $f_0^{(1/2)}=(-1.47^{+0.44}_{-0.37}(\mathrm{stat.})^{+0.14}_{-0.17}(\mathrm{syst.})+i\cdot0.00^{+0.26}_{-0.00}(\mathrm{stat.})^{+0.15}_{-0.00}(\mathrm{syst.}))$ fm and $d_0^{(1/2)}=(0.37^{+0.07}_{-0.08}(\mathrm{stat.})^{+0.03}_{-0.03}(\mathrm{syst.})+i\cdot~0.00^{+0.00}_{-0.02}(\mathrm{stat.})^{+0.00}_{-0.01}(\mathrm{syst.}))$ fm, respectively. The corresponding binding energy is estimated to be in the range $14.7-56.6$ MeV. This is the first experimental evidence of a p-$\phi$ bound state.
We present limits on the spin-independent interaction cross section of dark matter particles with silicon nuclei, derived from data taken with a cryogenic calorimeter with 0.35 g target mass operated in the CRESST-III experiment. A baseline nuclear recoil energy resolution of $(1.36\pm 0.05)$ eV$_{\text{nr}}$, currently the lowest reported for macroscopic particle detectors, and a corresponding energy threshold of $(10.0\pm 0.2)$ eV$_{\text{nr}}$ have been achieved, improving the sensitivity to light dark matter particles with masses below 160 MeV/c$^2$ by a factor of up to 20 compared to previous results. We characterize the observed low energy excess, and we exclude noise triggers and radioactive contaminations on the crystal surfaces as dominant contributions.
In many astrophysical applications, the cost of solving a chemical network represented by a system of ordinary differential equations (ODEs) grows significantly with the size of the network and can often represent a significant computational bottleneck, particularly in coupled chemo-dynamical models. Although standard numerical techniques and complex solutions tailored to thermochemistry can somewhat reduce the cost, machine learning algorithms have more recently begun to attack this challenge via data-driven dimensionality reduction techniques. In this work, we present a new class of methods that take advantage of machine learning techniques to reduce complex data sets (autoencoders), the optimization of multiparameter systems (standard backpropagation), and the robustness of well-established ODE solvers to explicitly incorporate time dependence. This new method allows us to find a compressed and simplified version of a large chemical network in a semiautomated fashion that can be solved with a standard ODE solver, while also enabling interpretability of the compressed, latent network. As a proof of concept, we tested the method on an astrophysically relevant chemical network with 29 species and 224 reactions, obtaining a reduced but representative network with only 5 species and 12 reactions, and an increase in speed by a factor of 65.
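To make the starting point concrete, here is a minimal sketch of the kind of stiff ODE system such chemical networks reduce to, using a hypothetical two-species toy network (rate coefficients invented for illustration, not taken from the 29-species network) solved with a standard implicit solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network: 2 H -> H2 (rate k1) and H2 -> 2 H (rate k2).
# Rate coefficients are illustrative placeholders only.
k1, k2 = 1e-3, 1e-1

def rhs(t, y):
    n_h, n_h2 = y
    form = k1 * n_h * n_h   # H2 formation consumes two H atoms
    dest = k2 * n_h2        # H2 destruction releases two H atoms
    return [-2.0 * form + 2.0 * dest, form - dest]

# BDF is a standard choice for stiff astrochemical networks.
sol = solve_ivp(rhs, (0.0, 1e3), [1.0, 0.0], method="BDF",
                rtol=1e-8, atol=1e-12)

# Total hydrogen nuclei n_H + 2 n_H2 is conserved by construction;
# checking it is a quick sanity test of the integration.
total = sol.y[0] + 2.0 * sol.y[1]
print(abs(total[-1] - 1.0) < 1e-6)
```

A real network couples dozens of such species through hundreds of reactions, which is why the right-hand-side evaluation and the implicit solves dominate the cost that the autoencoder-based compression aims to reduce.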
Context. Disk winds are an important mechanism for accretion and disk evolution around young stars. The accreting intermediate-mass T Tauri star RY Tau has an active jet and a previously known disk wind. Archival optical and new near-infrared observations of the RY Tau system show two horn-like components stretching out as a cone from RY Tau. Scattered light from the disk around RY Tau is visible in the near-infrared, but not seen at optical wavelengths. In the near-infrared, dark wedges separate the horns from the disk, indicating that we may see the scattered light from a disk wind.
Aims: We aim to test the hypothesis that a dusty disk wind could be responsible for the optical effect in which the disk around RY Tau is hidden in the I band, but visible in the H band. This could be the first detection of a dusty disk wind in scattered light. We also want to constrain the grain size and dust mass in the wind and the wind-launching region.
Methods: We used archival Atacama Large Millimetre Array (ALMA) and Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) I-band observations combined with newly acquired SPHERE H-band observations and the available literature to build a simple geometric model of the RY Tau disk and disk wind. We used the Monte Carlo radiative transfer code MCMax3D to create comparable synthetic observations that test the effect of a dusty wind on the optical effect in the observations. We constrained the grain size and dust mass needed in the disk wind to reproduce the effect from the observations.
Results: A model geometrically reminiscent of a dusty disk wind with small micron- to sub-micron-sized grains elevated above the disk can reproduce the optical effect seen in the observations. The mass in the obscuring component of the wind is constrained to 1 × 10^{−9} M⊙ ≤ M ≤ 5 × 10^{−8} M⊙, which corresponds to a mass-loss rate in the wind of about 1 × 10^{−8} M⊙ yr^{−1}.
Conclusions: A simple model of a disk wind with micron- to sub-micron-sized grains elevated above the disk is able to prevent stellar radiation from scattering in the disk at optical wavelengths while allowing photons to reach the disk in the near-infrared. Estimates of the mass-loss rate are consistent with previously presented theoretical models and point towards the idea that a magneto-hydrodynamic-type wind is the more likely scenario.
We critically reconsider the argument based on 't Hooft anomaly matching that aims at proving chiral symmetry breaking in QCD-like theories with $N_c>2$ colors and $N_f$ flavors of vectorlike quarks in the fundamental representation. The main line of reasoning relies on a property of the solutions of the anomaly matching and persistent mass equations called $N_f$-independence. The validity of $N_f$-independence was assumed based on qualitative arguments, but it was never proven rigorously. We provide a detailed proof and clarify under which (dynamical) conditions it holds. Our result is valid for a generic spectrum of massless composite fermions including baryons and exotics. We then present a novel argument that does not require any dynamical assumption and is based on downlifting solutions to smaller values of $N_f$. When applied to QCD ($N_c=3$), our theorem implies that chiral symmetry must be spontaneously broken for $3\leq N_f<N_f^{CFT}$, where $N_f^{CFT}$ is the lower edge of the conformal window. A second argument is also presented based on continuity, which assumes the absence of phase transitions when the quark masses are sent to infinity. When applied to QCD, this result explains why chiral symmetry is broken for $N_f=2$, even though integer solutions of the equations exist in this case. Explicit examples and a numerical analysis are presented in a companion paper.
Magnetars are isolated young neutron stars characterised by the most intense magnetic fields known in the Universe, which power a wide variety of high-energy emissions, from giant flares to fast radio bursts. The origin of their magnetic field is still a challenging question. In situ magnetic field amplification by dynamo action could potentially generate ultra-strong magnetic fields in fast-rotating progenitors. However, it is unclear whether the fraction of progenitors harbouring fast core rotation is sufficient to explain the entire magnetar population. To address this point, we propose a new scenario for magnetar formation involving a slowly rotating progenitor, in which a slowly rotating proto-neutron star is spun up by the supernova fallback. We argue that this can trigger the development of the Tayler-Spruit dynamo while other dynamo processes are disfavoured. Using the findings of previous studies of this dynamo and simulation results characterising the supernova fallback, we derive equations modelling the coupled evolution of the proto-neutron star rotation and magnetic field. Their time integration for different accreted masses is successfully compared with analytical estimates of the amplification timescales and the saturation value of the magnetic field. We find that the magnetic field is amplified within 20–40 s after the core bounce, and that the radial magnetic field saturates at intensities between ∼10^{13} and 10^{15} G, therefore spanning the full range of magnetar dipolar magnetic fields. The toroidal magnetic field is predicted to be a factor of 10–100 stronger, lying between ∼10^{15} and 3 × 10^{16} G. We also compare the saturation mechanisms proposed respectively by H.C. Spruit and J. Fuller, showing that magnetar-like magnetic fields can be generated for a neutron star spun up to rotation periods of ≲8 ms and ≲28 ms, corresponding to accreted masses of ≳4 × 10^{−2} M⊙ and ≳1.1 × 10^{−2} M⊙, respectively.
Therefore, our results suggest that magnetars can be formed from slow-rotating progenitors for accreted masses compatible with recent supernova simulations and leading to plausible initial rotation periods of the proto-neutron star.
A subfraction of dark matter or new particles trapped inside celestial objects can significantly alter their macroscopic properties. We investigate the new physics imprint on celestial objects by using a generic framework to solve the Tolman-Oppenheimer-Volkoff (TOV) equations for up to two fluids. We test the impact of populations of new particles on celestial objects, including the sensitivity to self-interaction sizes, new particle mass, and net population mass. Applying our setup to neutron stars and boson stars, we find rich phenomenology for a range of these parameters, including the creation of extended atmospheres. These atmospheres are detectable by their impact on the tidal Love number, which can be measured at upcoming gravitational wave experiments such as Advanced LIGO, the Einstein Telescope, and LISA. We release our calculation framework as a publicly available code, allowing the TOV equations to be generically solved for arbitrary new physics models in novel and admixed celestial objects.
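For reference, the single-fluid TOV equations that such a framework generalizes to two fluids read, in units $G=c=1$ (standard textbook form, not quoted from the paper):

```latex
\frac{dp}{dr} = -\,\frac{\left[\varepsilon(r)+p(r)\right]\left[m(r)+4\pi r^{3}\,p(r)\right]}{r\left[r-2m(r)\right]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon(r),
```

where ε is the energy density and p the pressure. In the two-fluid case, each fluid i obeys its own pressure equation with p → p_i and ε → ε_i in the prefactor, while m(r) is the total enclosed mass of both fluids, since the components interact only gravitationally.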
We introduce a PYTHON package that provides simple and unified access to a collection of datasets from fundamental physics research—including particle physics, astroparticle physics, and hadron and nuclear physics—for supervised machine learning studies. The datasets contain hadronic top quarks, cosmic-ray-induced air showers, phase transitions in hadronic matter, and generator-level histories. While public datasets from multiple fundamental physics disciplines already exist, the common interface and provided reference models simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. We discuss the design and structure of the package and outline how additional datasets can be submitted for inclusion. As a showcase application, we present a simple yet flexible graph-based neural network architecture that can easily be applied to a wide range of supervised learning tasks. We show that our approach reaches performance close to dedicated methods on all datasets. To simplify adaptation for various problems, we provide easy-to-follow instructions on how graph-based representations of data structures relevant for fundamental physics can be constructed, and provide code implementations for several of them. Implementations are also provided for our proposed method and all reference algorithms.
Simulations of idealized star-forming filaments of finite length typically show core growth that is dominated by two cores, one forming at each end of the filament. These end cores form because of the strongly increasing acceleration at the filament ends, which sweeps up material as the filament collapses along its axis. As this growth mode is typically faster than any other core-formation mode in a filament, the end cores usually dominate in mass and density over the other cores forming inside the filament. However, observations of star-forming filaments do not show this prevalence of cores at the filament ends. We explore a possible mechanism to slow the growth of the end cores using numerical simulations of simultaneous filament and embedded core formation; in our case, a radially accreting filament forms in a finite converging flow. While such a set-up still leads to end cores, they soon begin to move inwards, and a density gradient builds up outside the cores through the continued accumulation of material. As a result, the outermost cores are no longer located at the exact ends of the filament, and the density gradient softens the inward gravitational acceleration of the cores. Therefore, the two end cores do not grow as fast as expected and thus do not dominate over other core-formation modes in the filament.
Small grains play an essential role in astrophysical processes such as chemistry, radiative transfer, and gas/dust dynamics. The population of small grains is mainly maintained by fragmentation due to colliding grains, so an accurate treatment of dust fragmentation is required in numerical modelling. However, current algorithms for solving the fragmentation equation suffer from overdiffusion under the conditions of 3D simulations. To tackle this challenge, we developed a discontinuous Galerkin scheme that solves the non-linear fragmentation equation efficiently with a limited number of dust bins.
Disc winds and planet formation are considered to be two of the most important mechanisms driving the evolution and dispersal of protoplanetary discs, which in turn define the environment in which planets form and evolve. While both have been studied extensively in the past, we combine them into one model by performing three-dimensional radiation-hydrodynamic simulations of giant-planet-hosting discs undergoing X-ray photoevaporation, with the goal of analysing the interactions between the two mechanisms. To study the effect on observational diagnostics, we produce synthetic observations of commonly used wind-tracing forbidden emission lines with detailed radiative transfer and photoionization calculations. We find that a sufficiently massive giant planet carves a gap in the gas disc that is deep enough to significantly affect the structure and kinematics of the pressure-driven photoevaporative wind. This effect can be strong enough to be visible in synthetic high-resolution observations of some of our wind-diagnostic lines, such as the [O I] 6300 Å or [S II] 6730 Å lines. When the disc is observed at inclinations of around 40° and higher, the spectral line profiles may exhibit a peak in the redshifted part of the spectrum that cannot easily be explained by simple wind models alone. Moreover, massive planets can induce asymmetric substructures within the disc and the photoevaporative wind, giving rise to temporal variations of the line profiles that can be strong enough to be observable on time-scales of less than a quarter of the planet's orbital period.
We explore the potential of our novel triaxial modelling machinery in recovering the viewing angles, the shape, and the orbit distribution of galaxies by using a high-resolution N-body merger simulation. Our modelling technique includes several recent advancements. (i) Our new triaxial deprojection algorithm shape3d is able to significantly shrink the range of possible orientations of a triaxial galaxy and therefore to constrain its shape relying only on photometric information. It also allows us to probe degeneracies, i.e. to recover different deprojections at the same assumed orientation. With this method we can constrain the intrinsic shape of the N-body simulation, i.e. the axis ratios p = b/a and q = c/a, with Δp and Δq ≲ 0.1 using only photometric information. The typical accuracy of the viewing-angle reconstruction is 15°-20°. (ii) Our new triaxial Schwarzschild code smart exploits the full kinematic information contained in the entire non-parametric line-of-sight velocity distributions, along with a 5D orbital sampling in phase space. (iii) We use a new generalized Akaike information criterion AICp to optimize the smoothing and to select the best-fitting model, avoiding potential biases of purely χ2-based approaches. With our deprojected densities, we recover the correct orbital structure and anisotropy parameter β with Δβ ≲ 0.1. These results are valid regardless of the tested orientation of the simulation and suggest that, despite the known intrinsic photometric and kinematic degeneracies, the advanced methods described above make it possible to recover the shape and the orbital structure of triaxial bodies with unprecedented accuracy.
Motivated by the discrepancy between Bayesian and frequentist upper limits on the tensor-to-scalar ratio parameter r found by the SPIDER collaboration, we investigate whether a similar trend is also present in the latest Planck and BICEP/Keck Array data. We derive a new upper bound on r using the frequentist profile likelihood method. We vary all the relevant cosmological parameters of the ΛCDM model, as well as the nuisance parameters. Unlike the Bayesian analysis using Markov Chain Monte Carlo (MCMC), our analysis is independent of the choice of priors. Using Planck Public Release 4, BICEP/Keck Array 2018, Planck cosmic microwave background lensing, and baryon acoustic oscillation data, we find an upper limit of r < 0.037 at 95% Confidence Level (C.L.), similar to the Bayesian MCMC result of r < 0.038 for a flat prior on r and a conditioned Planck lowlEB covariance matrix.
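As an illustration of the profiling step, the toy below builds a hypothetical Gaussian -2 ln L with one nuisance parameter, minimises the nuisance out on a grid, and reads off the one-sided 95% CL bound where the profiled curve rises by Δχ² = 2.71 above its minimum; none of the numbers correspond to the actual Planck/BICEP analysis:

```python
def profile_upper_limit(nll, r_grid, a_grid, dchi2=2.71):
    """Upper limit on a bounded parameter r >= 0 from the profile
    likelihood: for each r the nuisance parameter a is minimised out on
    a grid, and the one-sided 95% CL bound is where the profiled
    -2 ln L exceeds its global minimum by dchi2. Toy sketch only."""
    prof = [min(nll(r, a) for a in a_grid) for r in r_grid]
    nll_min = min(prof)
    r_best = r_grid[prof.index(nll_min)]
    for r, v in zip(r_grid, prof):
        if r > r_best and v - nll_min > dchi2:
            return r
    return None

# hypothetical -2 ln L with best fit r = 0.01 and a correlated nuisance a
def toy_nll(r, a):
    return ((r - 0.01 + 0.5 * a) / 0.01) ** 2 + (a / 0.02) ** 2

r_grid = [i * 1e-4 for i in range(1000)]
a_grid = [-0.1 + i * 2e-3 for i in range(101)]
limit = profile_upper_limit(toy_nll, r_grid, a_grid)
```

For this toy likelihood the profiled curve is 5000 (r - 0.01)², so the bound lands near r ≈ 0.033; by construction no prior enters anywhere, which is the point of contrast with the Bayesian MCMC result.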
Gamma-ray bursts (GRBs) are the most luminous transients in the universe and are utilized as probes of early stars, gravitational-wave counterparts, and collisionless shock physics. Although polarimetric studies of GRBs in individual wavebands have characterized intriguing properties of the prompt emission and afterglow, no coordinated multi-wavelength polarization measurements have yet been performed. Here we report the first coordinated, simultaneous polarimetry in the optical and radio bands for the afterglow associated with the typical long GRB 191221B. Our observations successfully caught the radio emission, which is not affected by synchrotron self-absorption, and show that the emission is depolarized in the radio band compared with the optical one. Our simultaneous polarization-angle measurement and temporal polarization monitoring indicate the existence of cool electrons that increase the estimate of the jet kinetic energy by a factor of more than 4 for this GRB afterglow. Further coordinated multi-wavelength polarimetric campaigns would improve our understanding of the total jet energies and magnetic field configurations in the emission regions of various types of GRBs, which are required to comprehend the mass scales of their progenitor systems and the physics of collisionless shocks.
Recent cosmological analyses combining large-scale structure and weak lensing measurements, usually referred to as 3$\times$2pt, have had to discard a substantial amount of signal-to-noise from small scales due to our inability to precisely model non-linearities and baryonic effects. Galaxy-galaxy lensing, the position-shear correlation between lens and source galaxies, is one of the three two-point correlation functions included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale $\theta$ or physical scale $R$ carry information from all scales below that, forcing the scale cuts applied to real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently, a few independent efforts have aimed to mitigate the non-locality of the galaxy-galaxy lensing signal. Here we compare the different methods, including the Y transformation described in Park et al. (2021), the point-mass marginalization methodology presented in MacCrann et al. (2020), and the Annular Differential Surface Density statistic described in Baldauf et al. (2010). We carry out the comparison at the level of cosmological constraints in a noiseless simulated combined galaxy clustering and galaxy-galaxy lensing analysis. We find that all the estimators perform equivalently for a Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1-like setup. This is because all the estimators project out the mode responsible for the non-local nature of the galaxy-galaxy lensing measurements, which we identify as $1/R^2$. We finally apply all the estimators to DES Y3 data and confirm that they give consistent results.
Recently, two new families of non-linear massive electrodynamics have been proposed: Proca-Nuevo and Extended Proca-Nuevo. We explicitly show that both families are irremediably ghostful in two dimensions. Our calculations indicate the need to revisit the classical consistency of (Extended) Proca-Nuevo in higher dimensions before these settings can be regarded as ghostfree.
The evolution of the Kelvin-Helmholtz instability (KHI) is widely used to assess the performance of numerical methods. We employ this instability to test both the smoothed particle hydrodynamics (SPH) and the meshless finite mass (MFM) implementations in OPENGADGET3. We quantify the accuracy of SPH and MFM in reproducing the linear growth of the KHI with different numerical and physical set-ups. Among them, we consider (i) numerically induced viscosity and (ii) physically motivated Braginskii viscosity, and compare their effects on the growth of the KHI. We find that the changes in the inferred numerical viscosity when varying nuisance parameters, such as the set-up or the number of neighbours in our SPH code, are comparable to the differences obtained when using different hydrodynamical solvers, i.e. MFM. SPH reproduces the expected reduction of the growth rate in the presence of physical viscosity and recovers well the threshold level of physical viscosity needed to fully suppress the instability. In the case of galaxy clusters with a virial temperature of 3 × 10$^7$ K, this level corresponds to a suppression factor of ≈$10^{-3}$ of the classical Braginskii value. The intrinsic numerical viscosity of our SPH implementation in such an environment is inferred to be at least an order of magnitude smaller (i.e. ≈$10^{-4}$), reassuring us that modern SPH methods are suitable for studying the effect of physical viscosity in galaxy clusters.
We present the first results of a comprehensive supernova (SN) radiative-transfer (RT) code-comparison initiative (StaNdaRT), in which the emission from the same set of standardised test models is simulated by currently used RT codes. We ran a total of ten codes on a set of four benchmark ejecta models of Type Ia SNe. We consider two sub-Chandrasekhar-mass (Mtot = 1.0 M⊙) toy models with analytic density and composition profiles and two Chandrasekhar-mass delayed-detonation models that are outcomes of hydrodynamical simulations. We adopt spherical symmetry for all four models. The results of the different codes, including the light curves, spectra, and the evolution of several physical properties as a function of radius and time, are provided in electronic form in a standard format via a public repository. We also include the detailed test-model profiles, several Python scripts for accessing and presenting the input and output files, and the code used to generate the toy models studied here. In this paper, we describe the test models, radiative-transfer codes, and output formats in detail, and provide access to the repository. We present example results of several key diagnostic features.
The detection of the accelerated expansion of the Universe has been one of the major breakthroughs in modern cosmology. Several cosmological probes (Cosmic Microwave Background, Type Ia Supernovae, Baryon Acoustic Oscillations) have been studied in depth to better understand the nature of the mechanism driving this acceleration, and they are currently being pushed to their limits, obtaining remarkable constraints that have allowed us to shape the standard cosmological model. In parallel, however, the percent-level precision achieved has recently revealed apparent tensions between measurements obtained with different methods. These either indicate unaccounted-for systematic effects or point toward new physics. Following the development of CMB, SNe, and BAO cosmology, it is critical to extend our selection of cosmological probes. Novel probes can be exploited to validate results, control or mitigate systematic effects, and, most importantly, increase the accuracy and robustness of our results. This review is meant to provide a state-of-the-art benchmark of the latest advances in emerging "beyond-standard" cosmological probes. We present how several different methods can become a key resource for observational cosmology. In particular, we review cosmic chronometers, quasars, gamma-ray bursts, standard sirens, lensing time-delay with galaxies and clusters, cosmic voids, neutral hydrogen intensity mapping, surface brightness fluctuations, stellar ages of the oldest objects, secular redshift drift, and clustering of standard candles. The review describes the method, systematics, and results of each probe in a homogeneous way, giving the reader a clear picture of the available innovative methods that have been introduced in recent years and how to apply them. The review also discusses the potential synergies and complementarities between the various probes, exploring how they will contribute to the future of modern cosmology.
Several tentative associations between high-energy neutrinos and astrophysical sources have been recently reported, but a conclusive identification of these potential neutrino emitters remains challenging. We explore the use of Monte Carlo simulations of source populations to gain deeper insight into the physical implications of proposed individual source-neutrino associations. In particular, we focus on the IC170922A-TXS 0506+056 observation. Assuming a null model, we find a 7.6% chance of mistakenly identifying coincidences between γ-ray flares from blazars and neutrino alerts in 10-year surveys. We confirm that a blazar-neutrino connection based on the γ-ray flux is required to find a low chance-coincidence probability and, therefore, a significant IC170922A-TXS 0506+056 association. We then assume this blazar-neutrino connection for the whole population and find that the ratio of neutrino to γ-ray fluxes must be ≲$10^{-2}$ in order not to overproduce the total number of neutrino alerts seen by IceCube. For the IC170922A-TXS 0506+056 association to make sense, we must either accept this low flux ratio or suppose that only some rare sub-population of blazars is capable of high-energy neutrino production. For example, if we consider neutrino production only in blazar flares, we expect a flux ratio between $10^{-3}$ and $10^{-1}$ to be consistent with a single coincident observation of a neutrino alert and a flaring γ-ray blazar. These constraints should be interpreted in the context of the likelihood models used to find the IC170922A-TXS 0506+056 association, which assume a fixed power-law neutrino spectrum of $E^{-2.13}$ for all blazars.
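The null-model coincidence rate can be illustrated with a toy Monte Carlo; the alert count and flare duty cycle below are invented placeholders rather than the values used for the IceCube and Fermi samples:

```python
import random

def chance_coincidence_prob(n_alerts=10, flare_duty_cycle=0.05,
                            n_trials=20000, seed=1):
    """Probability that, under the null model (no physical connection),
    at least one of n_alerts neutrino alerts lands by chance inside a
    gamma-ray flare of its spatially matched blazar, when flares cover
    a fraction flare_duty_cycle of the survey livetime. Toy numbers."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < flare_duty_cycle for _ in range(n_alerts))
        for _ in range(n_trials)
    )
    return hits / n_trials
```

For these toy numbers the analytic expectation is 1 − (1 − 0.05)¹⁰ ≈ 0.40; the 7.6% quoted in the abstract arises from the specific alert sample, source catalogue, and flare definition of the actual analysis.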
Aims: Stellar flares emit thermal and nonthermal radiation in the X-ray and ultraviolet (UV) regimes. Although highly energetic radiation from flares is a potential threat to exoplanet atmospheres and may lead to surface sterilization, it might also provide the extra energy needed to trigger and sustain prebiotic chemistry on planets around low-mass stars. Although the flare temperature partly constrains the UV continuum emission, few efforts have been made to determine it for ultra-cool M-dwarfs. We investigate two flares on TRAPPIST-1, an ultra-cool dwarf star that hosts seven exoplanets, three of which lie within its habitable zone. The flares are detected in all four passbands of the MuSCAT2 instrument, allowing a determination of their temperatures and bolometric energies.
Methods: We analyzed the light curves obtained by the MuSCAT1 (multicolor simultaneous camera for studying atmospheres of transiting exoplanets) and MuSCAT2 instruments between 2016 and 2021 in the g, r, i, and zs filters. We conducted an automated flare search and visually confirmed possible flare events. The black body temperatures were inferred directly from the spectral energy distribution (SED) by extrapolating the filter-specific fluxes. We studied the temperature evolution, the global temperature, and the peak temperature of both flares.
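A one-parameter SED fit of this kind can be sketched as follows; the band wavelengths and the temperature grid are illustrative stand-ins, not the MuSCAT calibration:

```python
import math

def planck_lambda(wl_nm, T):
    """Planck spectral radiance B_lambda (arbitrary units) at wavelength
    wl_nm [nm] and temperature T [K]."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    wl = wl_nm * 1e-9
    return 1.0 / (wl**5 * (math.exp(h * c / (wl * kB * T)) - 1.0))

def fit_blackbody_T(wavelengths_nm, fluxes, T_grid):
    """Least-squares black-body temperature from band fluxes: for each
    trial T the best overall scale factor is solved analytically, and
    the T with the smallest residual is returned. Sketch only; the band
    centres roughly mimic g, r, i, zs."""
    best_T, best_chi2 = None, float("inf")
    for T in T_grid:
        model = [planck_lambda(w, T) for w in wavelengths_nm]
        scale = sum(f * m for f, m in zip(fluxes, model)) / sum(m * m for m in model)
        chi2 = sum((f - scale * m) ** 2 for f, m in zip(fluxes, model))
        if chi2 < best_chi2:
            best_T, best_chi2 = T, chi2
    return best_T
```

With noiseless synthetic fluxes generated from an 8000 K black body the grid search recovers the input temperature exactly, which is a useful sanity check before applying it to real, noisy band photometry.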
Results: White-light M-dwarf flares are frequently described in the literature by a black body with a temperature of 9000-10 000 K. For the first time, we infer effective black body temperatures of flares that occurred on TRAPPIST-1. The black body temperatures for the two TRAPPIST-1 flares derived from the SED are $T_\mathrm{SED} = 7940^{+430}_{-390}$ K and $T_\mathrm{SED} = 6030^{+300}_{-270}$ K. The flare black body temperatures at the peak, calculated from the peak SED, are $T_\mathrm{SEDp} = 13\,620^{+1520}_{-1220}$ K and $T_\mathrm{SEDp} = 8290^{+660}_{-550}$ K. We update the flare frequency distribution of TRAPPIST-1 and discuss the impact of lower black body temperatures on exoplanet habitability.
Conclusions: We show that for the ultra-cool M-dwarf TRAPPIST-1, the flare black body temperatures associated with the total continuum emission are lower than, and inconsistent with, the assumption of 9000-10 000 K usually adopted in the context of exoplanet research. For the peak emission, both flares seem consistent with the typical range of 9000-14 000 K. This could imply different and faster cooling mechanisms. Further multi-color observations are needed to investigate whether our observations reflect a general characteristic of ultra-cool M-dwarfs. This would have significant implications for the habitability of exoplanets around these stars, because the UV surface flux is likely to be overestimated by models with higher flare temperatures.
The photometry of the two flares in g, r, i, and zs filters is only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/668/A111
We explore the possible phases of a condensed dark matter (DM) candidate taken to be in the form of a fermion with a Yukawa coupling to a scalar particle, at zero temperature but finite density. This theory depends essentially on only four parameters: the Yukawa coupling, the fermion mass, the scalar mediator mass, and the DM density. At low fermion densities we delimit the Bardeen-Cooper-Schrieffer (BCS), Bose-Einstein condensate (BEC), and crossover phases as a function of the model parameters using the notion of scattering length. We further study the BCS phase by consistently including emergent effects such as the scalar-density condensate and superfluid gaps. Within the mean-field approximation, we derive a consistent set of gap equations, retaining their momentum dependence and valid in both the nonrelativistic and relativistic regimes. We present numerical solutions to the set of gap equations, in particular for mediator masses smaller and larger than the DM mass. Finally, we discuss the equation of state and possible astrophysical implications for asymmetric DM.
Context. Winds in protoplanetary disks play an important role in their evolution and dispersal. However, the physical process that is actually driving the winds is still unclear (i.e. magnetically versus thermally driven), and can only be understood by directly confronting theoretical models with observational data.
Aims: We aim to interpret observational data for molecular hydrogen and atomic oxygen lines that show kinematic disk-wind signatures in order to investigate whether or not purely thermally driven winds are consistent with the data.
Methods: We use hydrodynamic photoevaporative disk-wind models and post-process them with a thermochemical model to produce synthetic observables for the spectral lines o-H$_2$ 1-0 S(1) at 2.12 µm and [O I] $^1$D$_2$-$^3$P$_2$ at 0.63 µm, and directly compare the results to a sample of observations.
Results: We find that our photoevaporative disk-wind model is consistent with the observed signatures of the blueshifted narrow low-velocity component (NLVC) - which is usually associated with slow disk winds - for both tracers. Only for one out of seven targets that show blueshifted NLVCs does the photoevaporative model fail to explain the observed line kinematics. Our results also indicate that interpreting spectral line profiles using simple methods, such as the thin-disk approximation, to determine the line emitting region is not appropriate for the majority of cases and can yield misleading conclusions. This is due to the complexity of the line excitation, wind dynamics, and the impact of the actual physical location of the line-emitting regions on the line profiles.
Conclusions: The photoevaporative disk-wind models are largely consistent with the studied observational data set, but it is not possible to clearly discriminate between different wind-driving mechanisms. Further improvements to the models are necessary, such as consistent modelling of the dynamics and chemistry, and detailed modelling of individual targets (i.e. disk structure) would be beneficial. Furthermore, a direct comparison of magnetically driven disk-wind models to the observational data set is necessary in order to determine whether or not spatially unresolved observations of multiple wind tracers are sufficient to discriminate between theoretical models.
Neutron stars (NSs) and black holes (BHs) are born when the final collapse of the stellar core terminates the lives of stars more massive than about 9 Msun. This can trigger the powerful ejection of a large fraction of the star's material in a core-collapse supernova (CCSN), whose extreme luminosity is energized by the decay of radioactive isotopes such as 56Ni and 56Co. When evolving in close binary systems, the compact relics of such infernal catastrophes spiral towards each other on orbits gradually decaying by gravitational-wave emission. Ultimately, the violent collision of the two components forms a more massive, rapidly spinning remnant, again accompanied by the ejection of considerable amounts of matter. These merger events can be observed by high-energy bursts of gamma rays with afterglows and electromagnetic transients called kilonovae, which radiate the energy released in radioactive decays of freshly assembled rapid neutron-capture elements. By means of their mass ejection and the nuclear and neutrino reactions taking place in the ejecta, both CCSNe and compact object mergers (COMs) are prominent sites of heavy-element nucleosynthesis and play a central role in the cosmic cycle of matter and the chemical enrichment history of galaxies. The nuclear equation of state (EoS) of NS matter, from neutron-rich to proton-dominated conditions and with temperatures ranging from about zero to ~100 MeV, is a crucial ingredient in these astrophysical phenomena. It determines their dynamical processes, their remnant properties even at the level of deciding between NS or BH, and the properties of the associated emission of neutrinos, whose interactions govern the thermodynamic conditions and the neutron-to-proton ratio for nucleosynthesis reactions in the innermost ejecta. This chapter discusses corresponding EoS dependent effects of relevance in CCSNe as well as COMs. (slightly abridged)
In the context of the ESO-VLT Multi-Instrument Kinematic Survey (MIKiS) of Galactic globular clusters, here we present the line-of-sight velocity dispersion profile of NGC 6440, a massive globular cluster located in the Galactic bulge. By combining the data acquired with four different spectrographs, we obtained the radial velocity of a sample of $\sim 1800$ individual stars distributed over the entire cluster extension, from $\sim$0.1$"$ to 778$"$ from the center. Using a properly selected sample of member stars with the most reliable radial velocity measurements, we derived the velocity dispersion profile out to 250$"$ from the center. The profile is well described by the same King model that best fits the projected star density distribution, with a constant inner plateau (at ${\sigma}_0 \sim $ 12 km s$^{-1}$) and no evidence of a central cusp or other significant deviations. Our data allowed us to study the presence of rotation only in the innermost regions of the cluster (r < 5$"$), revealing a well-defined pattern of ordered rotation with a position angle of the rotation axis of $\sim$132 $\pm$ 2° and an amplitude of $\sim$3 km s$^{-1}$ (corresponding to Vrot/${\sigma}_0 \sim$ 0.3). In addition, a flattening of the system qualitatively consistent with the rotation signal has been detected in the central region.
It has been suggested that a trail of diffuse galaxies, including two dark-matter-deficient galaxies (DMDGs), in the vicinity of NGC 1052 formed because of a high-speed collision between two gas-rich dwarf galaxies, one bound to NGC 1052 and the other on an unbound orbit. The collision compresses the gas reservoirs of the colliding galaxies, which in turn triggers a burst of star formation, while the dark matter and preexisting stars in the progenitor galaxies pass through the collision unimpeded. Since the high pressures in the compressed gas are conducive to the formation of massive globular clusters (GCs), this scenario can explain the formation of DMDGs with large populations of massive GCs, consistent with the observations of NGC 1052-DF2 (DF2) and NGC 1052-DF4. A potential difficulty with this "mini bullet cluster" scenario is that the observed spatial distributions of GCs in DMDGs are extended. GCs experience dynamical friction, causing their orbits to decay with time; consequently, their distribution at formation should have been even more extended than that observed at present. Using a semianalytic model, we show that the observed positions and velocities of the GCs in DF2 imply that they must have formed at radial distances of 5-10 kpc from the center of DF2. However, as we demonstrate, such an extended initial distribution is difficult to reconcile with the strong tidal forces from NGC 1052, which strip the outer GCs from DF2 and thus require 33-59 massive GCs to have formed in the collision to explain the observations.
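The orbital decay invoked here is Chandrasekhar dynamical friction; a standard order-of-magnitude estimate of the decay time for a GC on a circular orbit in an isothermal host (the cluster mass, circular velocity, and Coulomb logarithm below are illustrative, not fitted to DF2) is:

```python
def dynamical_friction_time(r_kpc, m_gc=1.0e6, v_c=20.0, ln_lambda=5.0):
    """Chandrasekhar orbital decay time in Gyr for a globular cluster of
    mass m_gc [Msun] on a circular orbit of radius r_kpc [kpc] in an
    isothermal host with circular velocity v_c [km/s]:
        t_df ~ 1.17 r^2 v_c / (ln(Lambda) G m_gc).
    All parameter values are illustrative."""
    G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun
    t_internal = 1.17 * r_kpc**2 * v_c / (ln_lambda * G * m_gc)  # in kpc/(km/s)
    return t_internal * 0.978  # 1 kpc/(km/s) = 0.978 Gyr
```

The quadratic scaling with radius is the crux of the argument: GCs observed today at small radii decay quickly, so matching the present-day distribution forces the formation radii well outside the observed positions.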
Gravitational-wave detections are enabling measurements of the rate of coalescences of binaries composed of two compact objects—neutron stars and/or black holes. The coalescence rate of binaries containing neutron stars is further constrained by electromagnetic observations, including Galactic radio binary pulsars and short gamma-ray bursts. Meanwhile, increasingly sophisticated models of compact objects merging through a variety of evolutionary channels produce a range of theoretically predicted rates. Rapid improvements in instrument sensitivity, along with plans for new and improved surveys, make this an opportune time to summarise the existing observational and theoretical knowledge of compact-binary coalescence rates.
The analysis of cosmological galaxy surveys requires realistic simulations for their interpretation. Forward modelling is a powerful method to simulate galaxy clustering without the need for an underlying complex model. This approach requires fast cosmological simulations with high resolution and large volume, able to resolve the small dark matter halos associated with individual galaxies. In this work, we present fast halo and subhalo clustering simulations based on the Lagrangian perturbation theory code PINOCCHIO, which generates halos and merger trees. The subhalo progenitors are extracted from the merger history, and the survival of subhalos is modelled. We introduce a new fitting function for the subhalo merger time, which includes a redshift dependence of the fitting parameters. The spatial distribution of subhalos within their hosts is modelled using a number density profile. We compare our simulations with the halo finder ROCKSTAR applied to the full N-body code GADGET-2. The subhalo velocity function and the correlation function of halos and subhalos are in good agreement. We investigate the effect of the chosen number density profile on the resulting subhalo clustering. Our simulation is approximate yet realistic, and significantly faster than a full N-body simulation combined with a halo finder. These fast halo and subhalo clustering simulations offer good prospects for galaxy forward models using subhalo abundance matching.
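Placing subhalos according to a number density profile amounts to inverse-transform sampling of its cumulative form; the sketch below assumes an NFW shape with an illustrative concentration, which may differ from the profile actually adopted in the paper:

```python
import math
import random

def sample_nfw_radii(n, r_vir=1.0, c=10.0, seed=2):
    """Draw n subhalo radii from an NFW-like number density profile
    n(r) ~ 1 / [(r/rs) (1 + r/rs)^2] inside r_vir, via inverse-transform
    sampling of the cumulative profile M(<x) ~ ln(1+x) - x/(1+x), with
    x = r/rs. Concentration c and r_vir are illustrative placeholders."""
    rng = random.Random(seed)
    rs = r_vir / c

    def m_enc(x):  # NFW cumulative profile (up to a constant)
        return math.log(1.0 + x) - x / (1.0 + x)

    m_tot = m_enc(c)
    radii = []
    for _ in range(n):
        u = rng.random() * m_tot
        lo, hi = 0.0, c
        for _ in range(60):  # bisection solve of m_enc(x) = u
            mid = 0.5 * (lo + hi)
            if m_enc(mid) < u:
                lo = mid
            else:
                hi = mid
        radii.append(0.5 * (lo + hi) * rs)
    return radii
```

Swapping `m_enc` for a different cumulative profile is all that is needed to test how the chosen number density profile propagates into the subhalo correlation function.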
The recently developed B-Mesogenesis scenario predicts decays of B mesons into a baryon and a hypothetical dark antibaryon Ψ. We suggest a method to calculate the amplitude of the simplest exclusive decay mode B+ → pΨ. Considering two models of B-Mesogenesis, we obtain the B → p hadronic matrix elements by applying QCD light-cone sum rules with the proton light-cone distribution amplitudes. We estimate the B+ → pΨ decay width as a function of the mass and the effective coupling of the dark antibaryon.
We investigate the formation and evolution of 'primordial' dusty rings occurring in the inner regions of protoplanetary discs, with the help of long-term, coupled dust-gas, magnetohydrodynamic simulations. The simulations are global and start from the collapse phase of the parent cloud core, while the dead zone is calculated via an adaptive α formulation that takes into account the local ionization balance. The evolution of the dusty component includes its growth and back-reaction onto the gas. Previously, using simulations with only a gas component, we showed that dynamical rings form at the inner edge of the dead zone. We find that when dust evolution, as well as magnetic field evolution in the flux-freezing limit, is included, the dusty rings formed are more numerous and span a larger radial extent in the inner disc, while the dead zone is more robust and persists for a much longer time. We show that these dynamical rings concentrate enough dust mass to become streaming unstable, which should result in rapid planetesimal formation even in the embedded phases of the system. The episodic outbursts caused by the magnetorotational instability have a significant impact on the evolution of the rings. The outbursts drain the inner disc of grown dust; however, the period between bursts is sufficiently long for planetesimal growth via the streaming instability. The dust mass contained within the rings is large enough to ultimately produce planetary systems within the core accretion scenario. Low-mass systems rarely undergo outbursts, and thus the conditions around such stars can be especially conducive to planet formation.
The Mini-EUSO telescope was launched to the International Space Station on August 22, 2019, to observe from the ISS orbit (∼400 km altitude) various phenomena occurring in the Earth's atmosphere through a UV-transparent window located in the Russian Zvezda Module. Mini-EUSO is based on a set of two Fresnel lenses of 25 cm diameter each and a focal plane of 48 × 48 pixels, for a total field of view of 44°. Until July 2021, Mini-EUSO performed a total of 41 data acquisition sessions, obtaining UV images of the Earth in the 290-430 nm band with a temporal and spatial resolution on the ground of 2.5 μs and 6.3 × 6.3 km$^2$, respectively. The data acquisition was performed with a 2.5 μs sampling rate, using a dedicated trigger looking for signals with a typical duration of tens of μs.
In the present paper, the analysis of the performance of the 2.5 μs trigger logic is presented, with a focus on the method used for the analysis and on the categories of triggered events. The expected functioning of the trigger logic has been confirmed, with the trigger rate on spurious events remaining within the requirements under nominal background conditions. The trigger logic detected several different phenomena, including lightning strikes, elves, ground-based flashers, and events with EAS-like characteristics.
The holographic principle suggests that the Hilbert space of quantum gravity is locally finite-dimensional. Motivated by this point of view, and by its application to the observable Universe, we introduce a set of numerical and conceptual tools to describe scalar fields with finite-dimensional Hilbert spaces and to study their behaviour in expanding cosmological backgrounds. These tools include accurate approximations to compute the vacuum energy of a field mode k as a function of the dimension d_k of the mode's Hilbert space, as well as a parametric model for how that dimension varies with |k|. We show that the maximum entropy of our construction momentarily scales like the boundary area of the observable Universe for some values of the parameters of that model, and we find that the maximum entropy generally follows a sub-volume scaling as long as d_k decreases with |k|. We also demonstrate that the vacuum energy density of the finite-dimensional field is dynamical and decays between two constant epochs in our fiducial construction. These results rely on a number of non-trivial modelling choices, but our general framework may serve as a starting point for future investigations of the impact of the finite-dimensionality of Hilbert space on cosmological physics.
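As a toy illustration of how a mode's vacuum energy depends on its Hilbert-space dimension, one can build a generic finite-dimensional harmonic oscillator from a centered discrete Fourier transform (this is a standard textbook construction, not the approximation scheme used in the paper) and check that the ground-state energy approaches the continuum value 1/2 (in units of ℏω) as the dimension grows:

```python
import numpy as np

def vacuum_energy(d):
    """Ground-state energy of a d-dimensional harmonic-oscillator mode
    built from a centered discrete Fourier transform (hbar = omega = 1)."""
    n = np.arange(d)
    c = (d - 1) / 2.0
    x = np.sqrt(2 * np.pi / d) * (n - c)      # lattice of position eigenvalues
    X = np.diag(x)
    # centered DFT: unitary map between the position and momentum eigenbases
    F = np.exp(-2j * np.pi * np.outer(n - c, n - c) / d) / np.sqrt(d)
    P = F.conj().T @ X @ F                    # momentum operator, same spectrum as X
    H = 0.5 * (P.conj().T @ P + X @ X)
    return np.linalg.eigvalsh(H)[0].real

print([round(vacuum_energy(d), 3) for d in (4, 8, 16, 32)])
```

For moderate dimensions the ground-state energy is already very close to 1/2; shrinking the dimension makes it deviate from the continuum value, which is the qualitative effect the paper's approximations quantify.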
We present multiple results on the production of loosely bound molecules in bottomonium annihilations and e⁺e⁻ collisions at √s = 10.58 GeV. We perform the first comprehensive test of several models of deuteron production against all the existing data in this energy region. We fit the free parameters of the models to reproduce the observed cross sections, and we predict the deuteron production spectrum and the cross section for the e⁺e⁻ → dd̄ + X process both at the ϒ(1,2,3S) resonances and at √s = 10.58 GeV. The predicted spectra show differences but are all compatible within the uncertainties of the existing data. These differences could be resolved if larger data sets are collected by the Belle II experiment. Fixing the source-size parameter to reproduce the deuteron data, we then predict the production rates for the H dibaryon and the hypertriton in this energy region using a simple coalescence model. Our prediction for the H-dibaryon production rate is below the limits set by the direct search at the Belle experiment, but within the range accessible to the Belle II experiment. The systematic effect due to the Monte Carlo modeling of quark and gluon fragmentation into baryons is reduced by deriving a new tuning of the PYTHIA 8 Monte Carlo generator using the available measurements of single- and double-particle spectra in ϒ decays.
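The simple coalescence picture mentioned above relates the invariant deuteron yield to the square of the nucleon yield at half the deuteron momentum through a coalescence parameter B₂. A minimal sketch, with an illustrative B₂ and spectrum shape (neither is the paper's fitted value):

```python
import numpy as np

B2 = 2e-4                       # hypothetical coalescence parameter (GeV^2/c^3)

def proton_invariant_yield(p):
    """Hypothetical exponential invariant proton spectrum, p in GeV/c."""
    return np.exp(-p / 0.6)

def deuteron_invariant_yield(p_d):
    """Coalescence ansatz: E_d dN/d^3p_d = B2 * (E_p dN/d^3p_p)^2 at p_p = p_d/2."""
    return B2 * proton_invariant_yield(p_d / 2.0) ** 2

print(deuteron_invariant_yield(1.0))
```

The quadratic dependence on the nucleon spectrum is what makes the source-size (equivalently B₂) calibration against deuteron data essential before extrapolating to the H dibaryon or the hypertriton.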
The Euclid mission - with its spectroscopic galaxy survey covering a sky area over 15 000 deg² in the redshift range 0.9 < z < 1.8 - will provide a sample of tens of thousands of cosmic voids. This paper thoroughly explores for the first time the constraining power of the void size function on the properties of dark energy (DE) using a survey mock catalogue, the official Euclid Flagship simulation. We identified voids in the Flagship light-cone, which closely matches the features of the upcoming Euclid spectroscopic data set. We modelled the void size function with a state-of-the-art methodology: we relied on the volume-conserving (Vdn) model, a modification of the popular Sheth & van de Weygaert model for void number counts, extended by means of a linear function of the large-scale galaxy bias. We found excellent agreement between model predictions and the measured mock void number counts. We computed updated forecasts for the Euclid mission on DE from the void size function and provided reliable void number estimates to serve as a basis for further forecasts of cosmological applications using voids. We analysed two different cosmological models for DE: the first described by a constant DE equation-of-state parameter, w, and the second by a dynamical equation of state with coefficients w0 and wa. We forecast 1σ errors on w lower than 10%, and we estimated an expected figure of merit (FoM) for the dynamical DE scenario of FoM_{w0,wa} = 17 when considering only the neutrino mass as an additional free parameter of the model. The analysis is based on conservative assumptions to ensure full robustness and is a pathfinder for future enhancements of the technique. Our results showcase the impressive constraining power of the void size function from the Euclid spectroscopic sample, both as a stand-alone probe and in combination with other Euclid cosmological probes.
This paper is published on behalf of the Euclid Consortium.
The peak-patch algorithm is used to identify the densest minicluster seeds in the initial axion density field simulated from string decay. The fate of these dense seeds is found by tracking their subsequent gravitational collapse in cosmological N-body simulations. We find that miniclusters at late times are well described by Navarro-Frenk-White profiles, although for around 80% of simulated miniclusters a single power-law density profile ρ(r) ∝ r^(−2.9) is an equally good fit due to the unresolved scale radius. Under the assumption that all miniclusters with an unresolved scale radius are described by a power-law plus axion-star density profile, we identify a significant number of miniclusters that might be dense enough to give rise to gravitational microlensing if the axion mass is 0.2 meV ≲ m_a ≲ 3 meV. Higher-resolution simulations resolving the inner structure and axion-star formation are necessary to explore this possibility further.
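A slope near −2.9 is what one expects when a single power law is fitted to an NFW profile whose scale radius sits below the resolution limit, since the outer NFW slope tends to −3. A quick numerical check (illustrative radii, not the simulation data):

```python
import numpy as np

rs, rho0 = 1.0, 1.0
r = np.logspace(1, 2, 50)                  # radii 10-100 r_s: scale radius "unresolved"
rho = rho0 / ((r / rs) * (1 + r / rs) ** 2)  # NFW density profile
slope, lognorm = np.polyfit(np.log(r), np.log(rho), 1)
print(f"best-fit power-law slope: {slope:.2f}")
```

The local logarithmic slope of NFW is −1 − 2r/(r + r_s), which runs from about −2.8 at 10 r_s to −3.0 at 100 r_s, so the least-squares fit lands close to −2.9.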
Polarization modulator units (PMUs) represent a critical and powerful component of CMB polarization experiments, suppressing the 1/f noise component and mitigating systematic uncertainties induced by detector gain drifts and beam asymmetries. The LiteBIRD mission (expected launch in the late 2020s) will be equipped with three PMUs, one for each of its three telescopes, and aims at detecting the primordial gravitational waves with a sensitivity of δr < 0.001. Each PMU is based on a continuously rotating transmissive half-wave plate held by a superconducting magnetic bearing in the 5 K environment. To achieve and monitor the rotation, a number of subsystems are needed: a clamp-and-release system and motor coils for the rotation; an optical encoder and capacitive, Hall, and temperature sensors to monitor its dynamic stability. In this contribution, we present a preliminary thermal design of the harness configuration for the PMUs of the mid- and high-frequency telescopes. The design is based both on the stringent system constraint on the total thermal budget available for the PMUs (≲4 mW at 5 K) and on the requirements of the different subsystems: coil currents (up to 10 mA), optical fibers for the encoder readout, and a 25 MHz bias signal for the temperature and levitation monitors.
We provide a simple computation to estimate the probability of a given hierarchy between two scales. In particular, we work in a model endowed with a gauge symmetry and two scalar doublets. We start from a scale-invariant classical Lagrangian but, by taking into account the Coleman-Weinberg mechanism, obtain masses for the gauge bosons and the scalars. This approach typically yields a light (L) and a heavy (H) sector related to the two different vacuum expectation values of the two scalars. We compute the size of the hypervolume of the parameter space of the model associated with an interval of mass ratios between these two sectors. We define the probability as proportional to this size and conclude that the probabilities of very large hierarchies are not negligible in the type of models studied in this work.
In compact astrophysical objects, the neutrino density can be so high that neutrino-neutrino refraction leads to fast flavor conversion of the kind ν_e ν̄_e ↔ ν_x ν̄_x with x = μ, τ, depending on the neutrino angle distribution. Previously, we have shown that in a homogeneous, axisymmetric two-flavor system these collective solutions evolve in analogy to a gyroscopic pendulum. In flavor space, its deviation from the weak-interaction direction is quantified by a variable cos ϑ that moves between +1 and cos ϑ_min, the latter following from a linear mode analysis. As a next step, we include collisional damping of flavor coherence, assuming a common damping rate Γ for all modes. Empirically, we find that the damped pendular motion reaches an asymptotic level of pair conversion f = A + (1 − A) cos ϑ_min (numerically A ≃ 0.370) that depends neither on the details of the angular distribution (beyond fixing cos ϑ_min), nor on the initial seed, nor on Γ. On the other hand, even a small asymmetry between the neutrino and antineutrino damping rates strongly changes this picture and can even enable flavor instabilities in otherwise stable systems.
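The empirical asymptote quoted above is a one-line formula to evaluate; for instance, a maximally unstable angular distribution with cos ϑ_min = −1 settles at f = 2A − 1:

```python
A = 0.370                         # empirical constant quoted in the abstract

def asymptotic_level(cos_theta_min):
    """Asymptotic pair-conversion level f = A + (1 - A) * cos(theta_min)."""
    return A + (1 - A) * cos_theta_min

print(asymptotic_level(-1.0))     # maximally unstable case: f = 2A - 1
```

For cos ϑ_min = +1 (no instability) the formula gives f = 1, i.e., no net conversion away from the initial configuration.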
In the past decades, numerous experiments have emerged to unveil the nature of dark matter, one of the most discussed open questions in modern particle physics. Among them, the Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) experiment, located at the Laboratori Nazionali del Gran Sasso, operates scintillating crystals as cryogenic phonon detectors. In this work, we present first results from the operation in CRESST-III of two detector modules, both with 10.46 g LiAlO₂ targets. The lithium content of the crystal comprises ⁶Li, with an odd number of both protons and neutrons, and ⁷Li, with an odd number of protons. By considering both isotopes of lithium as well as ²⁷Al, we set the currently strongest upper limits on the cross section for spin-dependent interactions of dark matter with protons and neutrons in the mass region between 0.25 and 1.5 GeV/c².
We report the detection of the ground-state rotational emission of ammonia, ortho-NH₃ (J_K = 1₀ → 0₀), in a gravitationally lensed, intrinsically hyperluminous starbursting galaxy at z = 2.6. The integrated line profile is consistent with other molecular and atomic emission lines which have resolved kinematics well modelled by a 5 kpc-diameter rotating disc. This implies that the gas responsible for the NH₃ emission broadly traces the global molecular reservoir, but is likely distributed in pockets of high density (n ≳ 5 × 10⁴ cm⁻³). With a luminosity of 2.8 × 10⁶ L⊙, the NH₃ emission represents 2.5 × 10⁻⁷ of the total infrared luminosity of the galaxy, comparable to the ratio observed in the Kleinmann-Low nebula in Orion and consistent with sites of massive star formation in the Milky Way. If $L_{\rm NH_3}/L_{\rm IR}$ serves as a proxy for the 'mode' of star formation, this hints that the nature of star formation in extreme starbursts in the early Universe is similar to that of Galactic star-forming regions, with a large fraction of the cold interstellar medium in this state, plausibly driven by a storm of violent disc instabilities in the gas-dominated disc. This supports the 'full of Orions' picture of star formation in the most extreme galaxies seen close to the peak epoch of stellar mass assembly.
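The two numbers quoted above imply a total infrared luminosity of order 10¹³ L⊙, placing the galaxy in the hyperluminous regime; a quick sanity check (pure arithmetic on the quoted values):

```python
L_NH3 = 2.8e6           # NH3 line luminosity in units of L_sun
ratio = 2.5e-7          # quoted L_NH3 / L_IR ratio
L_IR = L_NH3 / ratio    # implied total infrared luminosity in L_sun
print(f"L_IR = {L_IR:.2e} L_sun")   # prints: L_IR = 1.12e+13 L_sun
```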
We report three searches for high-energy neutrino emission from astrophysical objects using data recorded with IceCube between 2011 and 2020. Improvements over previous work include new neutrino reconstruction and data calibration methods. In one search, the positions of 110 a priori selected gamma-ray sources were analyzed individually for a possible surplus of neutrinos over the atmospheric and cosmic background expectations. We found an excess of 79 (+22/−20) neutrinos associated with the nearby active galaxy NGC 1068 at a significance of 4.2σ. The excess, which is spatially consistent with the direction of the strongest clustering of neutrinos in the northern sky, is interpreted as direct evidence of TeV neutrino emission from a nearby active galaxy. The inferred flux exceeds the potential TeV gamma-ray flux by at least one order of magnitude.
Nucleotides play a fundamental role in organisms, from adenosine triphosphate (ATP), the body's main source of energy, to cofactors of enzymatic reactions (e.g. coenzyme A), to nucleoside monophosphates as essential building blocks of deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Although nucleotides play such an elemental role, no pathway for the selective formation of nucleoside 5′-monophosphates has been reported to date. Here, we demonstrate a selective reaction pathway for 5′-monophosphorylation of all canonical purine and pyrimidine bases under exceptionally mild, prebiotically relevant conditions in water and without using a condensing agent. The pivotal reaction step involves activated imidazolidine-4-thione phosphates. The selective formation of non-cyclic monophosphorylated nucleosides represents a novel and unique route to nucleotides and opens exciting perspectives in the study of the origins of life.
In nature, organophosphates provide key functions such as information storage and transport, structural tasks, and energy transfer. Since condensations are unfavourable in water and nucleophilic attack at phosphate is kinetically inhibited, various abiogenesis hypotheses for the formation of organophosphates are discussed. Recently, the application of phosphites as phosphorylating agents has shown promising results. However, elevated temperatures and additional reaction steps are required to obtain organophosphates. Here we show that liquid sulfur dioxide, which acts as both solvent and oxidant, enables efficient organophosphate formation. Phosphorous acid yields up to 32.6% 5′-nucleoside monophosphate and 3.6% 5′-nucleoside diphosphate, along with nucleoside triphosphates and dinucleotides, in a single reaction step at room temperature. In addition to the phosphorylation of organic compounds, we observed diserine formation. We therefore suggest volcanic environments as reaction sites for biopolymer formation on the early Earth. Because sulfur dioxide is easily recycled, the reaction is also of interest for synthetic chemistry.
Planet formation is a multi-scale process in which the coagulation of $\mathrm{\mu m}$-sized dust grains in protoplanetary disks is strongly influenced by hydrodynamic processes on scales of astronomical units ($\approx 1.5\times 10^8 \,\mathrm{km}$). Studies therefore depend on subgrid models to emulate the microphysics of dust coagulation on top of a large-scale hydrodynamic simulation. Numerical simulations which include the relevant physical effects are complex and computationally expensive. Here, we present a fast and accurate learned effective model for dust coagulation, trained on data from high-resolution numerical coagulation simulations. Our model captures details of the dust coagulation process that were so far not tractable with other dust coagulation prescriptions of similar computational efficiency.
Asymmetric dark matter could, under certain conditions, form compact star-like objects, which can be searched for either through gravitational lensing or through observation of gravitational waves from binaries involving such compact objects. In this paper we analyze possible signatures of such dark stars made of asymmetric dark matter with a portal to the Standard Model. We argue that compact dark stars could capture protons and electrons from the interstellar medium, which would then accumulate in the core of the dark star, forming a very hot gas that emits X-rays or $\gamma$-rays. For dark matter parameters compatible with current laboratory constraints, compact dark stars could be sufficiently luminous to be detected at Earth as point sources in the X-ray or $\gamma$-ray sky.
The last two decades have witnessed the discovery of a myriad of new and unexpected hadrons. The future holds more surprises for us, thanks to new-generation experiments. Understanding the signals and determining the properties of the states require a parallel theoretical effort. To make full use of available and forthcoming data, careful amplitude modeling is required, together with a sound treatment of the statistical uncertainties and a systematic survey of the model dependencies. We review the contributions made by the Joint Physics Analysis Center to the field of hadron spectroscopy.
The origins of the elements and isotopes of cosmic material are a critical aspect of understanding the evolution of the universe. Nucleosynthesis typically requires physical conditions of high temperature and density. These are found in the Big Bang, in the interiors of stars, and in explosions, with their compressional shocks and high neutrino and neutron fluxes. Many different tools are available to disentangle the composition of cosmic matter: material of extraterrestrial origin such as cosmic rays, meteorites, stardust grains, and lunar and terrestrial sediments, as well as astronomical observations across the electromagnetic spectrum. Understanding cosmic abundances and their evolution requires combining such measurements with astrophysical and nuclear theory and laboratory experiments, and exploiting additional cosmic messengers such as neutrinos and gravitational waves. Recent years have seen significant progress in almost all of these fields; this progress is presented in this review.
The Sun and the solar system are our reference system for the abundances of elements and isotopes. Many direct and indirect methods are employed to establish a refined abundance record from the time when the Sun and the Earth were formed. Indications of nucleosynthesis in the local environment when the Sun was formed are derived from meteoritic material and from the inclusion of radioactive atoms in deep-sea sediments. Spectroscopy at many wavelengths and the neutrino flux from the hydrogen fusion processes in the Sun have established a refined model of how nuclear energy production shapes stars. Models are required to explore the nuclear fusion of heavier elements. These stellar evolution calculations have been confirmed by observations of nucleosynthesis products in the ejecta of stars and supernovae, as captured by stardust grains and by characteristic lines in the spectra of these objects. One of the successes has been the direct observation of γ rays from radioactive material synthesised in stellar explosions, which fully supports the astrophysical models. Another has been the observation of a radioactive afterglow and a characteristic heavy-element spectrum from a neutron-star merger, confirming the neutron-rich environments encountered in such rare explosions. Ejecta material captured by Earth over millions of years in sediments and identified through characteristic radio-isotopes suggests that nearby nucleosynthesis occurred in recent history, with further indications for the sites of specific nucleosynthesis processes. Together with stardust and diffuse γ rays from radioactive ejecta, these findings help to piece together how cosmic materials are transported in interstellar space and recycled into and between generations of stars. Our description of cosmic compositional evolution needs such observational support, as it rests on several assumptions that appear challenged by the recent recognition that violent events are common during the evolution of a galaxy.
This overview presents the flow of cosmic matter and the various sites of nucleosynthesis, as understood from combining many techniques and observations, towards the current knowledge of how the universe is enriched with elements.
The emergence of collective motion among interacting, self-propelled agents is a central paradigm in non-equilibrium physics. Examples of such active matter range from swimming bacteria and cytoskeletal motility assays to synthetic self-propelled colloids and swarming microrobots. Remarkably, the aggregation capabilities of many of these systems rely on a theme as fundamental as it is ubiquitous in nature: communication. Despite its eminent importance, the role of communication in the collective organization of active systems is not yet fully understood. Here we report on the multi-scale self-organization of interacting self-propelled agents that locally process information transmitted by chemical signals. We show that this communication capacity dramatically expands their ability to form complex structures, allowing them to self-organize through a series of collective dynamical states at multiple hierarchical levels. Our findings provide insights into the role of self-sustained signal processing for self-organization in biological systems and open routes to applications using chemically driven colloids or microrobots.
We present the first quantitative spectral analysis of blue supergiant stars in the nearby galaxy NGC 2403. Out of a sample of 47 targets observed with the LRIS spectrograph at the Keck I telescope, we have extracted 16 B- and A-type supergiants for which we have data of sufficient quality to carry out a comparison with model spectra of evolved massive stars and infer the stellar parameters. The radial metallicity gradient of NGC 2403 that we derive has a slope of −0.14 (±0.05) dex ${r}_{e}^{-1}$, in accordance with the analysis of H II region oxygen abundances. We present evidence that the stellar metallicities we obtain in extragalactic systems generally agree with the nebular abundances based on the analysis of the auroral lines, over more than one order of magnitude in metallicity. Adopting the known relation between stellar parameters and intrinsic luminosity, we find a distance modulus μ = 27.38 ± 0.08 mag. While this can be brought into agreement with Cepheid-based determinations, it is 0.14 mag short of the value measured from the tip of the red giant branch. We update the mass-metallicity relation secured from chemical abundance studies of stars in resolved star-forming galaxies.
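The quoted distance modulus translates into a metric distance via d = 10^((μ − 25)/5) Mpc, and the 0.14 mag offset from the tip-of-the-red-giant-branch value corresponds to a distance difference of about 7% (arithmetic on the values quoted above):

```python
mu = 27.38
d_mpc = 10 ** ((mu - 25.0) / 5.0)   # distance in Mpc from the distance modulus
offset = 0.14                        # mag difference to the TRGB modulus
ratio = 10 ** (offset / 5.0)         # corresponding ratio of distances
print(f"d = {d_mpc:.2f} Mpc, TRGB distance larger by {100 * (ratio - 1):.1f}%")
```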
The importance of alternative methods for measuring the Hubble constant, such as time-delay cosmography, is highlighted by the recent Hubble tension. It is paramount to thoroughly investigate and rule out systematic biases in all measurement methods before we can accept new physics as the source of this tension. In this study, we perform a check for systematic biases in the lens-modelling procedure of time-delay cosmography by comparing independent and blind time-delay predictions of the system WGD 2038−4008 from two teams using two different software packages: GLEE and LENSTRONOMY. The predicted time delays from the two teams incorporate the stellar kinematics of the deflector and the external convergence from line-of-sight structures. The unblinded time-delay predictions from the two teams agree within 1.2σ, implying that, once the time delay is measured, the inferred Hubble constant will also be mutually consistent. However, there is a ∼4σ discrepancy between the power-law model slope and external shear, which is significant at the level of the lens models before the stellar kinematics and the external convergence are incorporated. We identify the difference in the reconstructed point spread function (PSF) as the source of this discrepancy. When the same reconstructed PSF was used by both teams, we achieved excellent agreement, within ∼0.6σ, indicating that potential systematics stemming from source-reconstruction algorithms and investigator choices are well under control. We recommend that future studies supersample the PSF as needed and marginalize over multiple algorithms or realizations of the PSF reconstruction to mitigate the associated systematics. A future study will measure the time delays of the system WGD 2038−4008 and infer the Hubble constant based on our mass models.
The measurement of the absolute neutrino mass scale from cosmological large-scale clustering data is one of the key science goals of the Euclid mission. Such a measurement relies on precise modelling of the impact of neutrinos on structure formation, which can be studied with $N$-body simulations. Here we present the results from a major code comparison effort to establish the maturity and reliability of numerical methods for treating massive neutrinos. The comparison includes eleven full $N$-body implementations (not all of them independent), two $N$-body schemes with approximate time integration, and four additional codes that directly predict or emulate the matter power spectrum. Using a common set of initial data we quantify the relative agreement on the nonlinear power spectrum of cold dark matter and baryons and, for the $N$-body codes, also the relative agreement on the bispectrum, halo mass function, and halo bias. We find that the different numerical implementations produce fully consistent results. We can therefore be confident that we can model the impact of massive neutrinos at the sub-percent level in the most common summary statistics. We also provide a code validation pipeline for future reference.
We demonstrate how to use persistent homology for cosmological parameter inference in a tomographic cosmic shear survey. We obtain the first cosmological parameter constraints from persistent homology by applying our method to the first-year data of the Dark Energy Survey. To obtain these constraints, we analyse the topological structure of the matter distribution by extracting persistence diagrams from signal-to-noise maps of aperture masses. This presents a natural extension to the widely used peak count statistics. Extracting the persistence diagrams from the cosmo-SLICS, a suite of N-body simulations with variable cosmological parameters, we interpolate the signal using Gaussian processes and marginalise over the most relevant systematic effects, including intrinsic alignments and baryonic effects. For the structure growth parameter, we find S8 = 0.747 (+0.025/−0.031), which is in full agreement with other late-time probes. We also constrain the intrinsic alignment parameter to A = 1.54 ± 0.52, which constitutes a detection of the intrinsic alignment effect at almost 3σ.
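The 0-dimensional part of such persistence diagrams (each peak is born at its height and dies when its superlevel-set component merges into one born at a higher peak) can be computed with a short union-find pass over the map. A self-contained sketch of this standard algorithm, not the paper's pipeline:

```python
import numpy as np

def superlevel_persistence_0d(img):
    """0-dim persistence of the superlevel filtration of a 2D map.
    Peaks are born at their height and die (elder rule) when their
    component merges into one born at a higher peak."""
    h, w = img.shape
    vals = img.ravel()
    order = np.argsort(vals)[::-1]             # pixels from highest to lowest
    parent = np.full(h * w, -1, dtype=int)     # -1: pixel not yet in filtration
    birth = np.empty(h * w)
    diagram = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for idx in order:
        parent[idx] = idx
        birth[idx] = vals[idx]
        y, x = divmod(idx, w)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and parent[ny * w + nx] != -1:
                a, b = find(idx), find(ny * w + nx)
                if a != b:
                    if birth[a] < birth[b]:    # keep the elder component as a
                        a, b = b, a
                    if birth[b] > vals[idx]:   # record positive-persistence pairs
                        diagram.append((float(birth[b]), float(vals[idx])))
                    parent[b] = a
    for r in {find(i) for i in range(h * w)}:  # surviving components never die
        diagram.append((float(birth[r]), float('-inf')))
    return diagram

# two peaks of height 3 and 2 on a flat background: the lower peak
# dies at the merge level 0; the higher one is an essential feature
snmap = np.array([[0., 0., 0., 0., 0.],
                  [0., 3., 0., 2., 0.],
                  [0., 0., 0., 0., 0.]])
print(superlevel_persistence_0d(snmap))   # -> [(2.0, 0.0), (3.0, -inf)]
```

Keeping only the births of finite pairs recovers the usual peak-count statistic, which is why persistence diagrams are described above as its natural extension.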
Thermal bombs are a widely used method to artificially trigger explosions of core-collapse supernovae (CCSNe) to determine their nucleosynthesis or ejecta and remnant properties. Recently, their use in spherically symmetric (1D) hydrodynamic simulations led to the result that ⁵⁶,⁵⁷Ni and ⁴⁴Ti are massively underproduced compared to observational estimates for Supernova 1987A, if the explosions are slow, i.e., if the explosion mechanism of CCSNe releases the explosion energy on long timescales. It was concluded that rapid explosions are required to match observed abundances, i.e., the explosion mechanism must provide the CCSN energy nearly instantaneously on timescales of some ten to order 100 ms. This result, if valid, would disfavor the neutrino-heating mechanism, which releases the CCSN energy on timescales of seconds. Here, we demonstrate by 1D hydrodynamic simulations and nucleosynthetic post-processing that these conclusions are a consequence of disregarding the initial collapse of the stellar core in the thermal-bomb modelling before the bomb releases the explosion energy. We demonstrate that the anti-correlation of ⁵⁶Ni yield and energy-injection timescale vanishes when the initial collapse is included, and that it can even be reversed, i.e., more ⁵⁶Ni is made by slower explosions, when the collapse proceeds to small radii similar to those where neutrino heating takes place in CCSNe. We also show that the ⁵⁶Ni production in thermal-bomb explosions is sensitive to the chosen mass cut and that a fixed mass layer or fixed volume for the energy deposition causes only secondary differences. Moreover, we propose a most appropriate setup for thermal bombs.
Detection of a gravitational-wave signal of non-astrophysical origin would be a landmark discovery, potentially providing a significant clue to some of our most basic, big-picture scientific questions about the Universe. In this white paper, we survey the leading early-Universe mechanisms that may produce a detectable signal -- including inflation, phase transitions, topological defects, as well as primordial black holes -- and highlight the connections to fundamental physics. We review the complementarity with collider searches for new physics, and multimessenger probes of the large-scale structure of the Universe.
The simulation of particle physics data is a fundamental but computationally intensive ingredient for physics analysis at the Large Hadron Collider, where observational set-valued data is generated conditional on a set of incoming particles. To accelerate this task, we present a novel generative model based on a graph neural network and slot-attention components, which exceeds the performance of pre-existing baselines.
The observed homogeneity and spatial flatness of the Universe suggest that there was a period of accelerated expansion just after the Big Bang, called inflation. In the standard picture, this expansion is driven by the inflaton, a scalar field beyond the standard model of particle physics. If other fields are present during this epoch, they can leave sizable traces on inflationary observables that might be revealed using upcoming experiments. Studying the phenomenological consequences of such fields often requires going beyond perturbation theory due to the nonlinear physics involved in several non-minimal inflationary scenarios. [...]
In this thesis, we construct the soft-collinear Lagrangian for gravity systematically beyond leading power in the power-counting parameter and provide a set of minimal building blocks for the N-jet operators. We find that the effective theory is covariant with respect to an emergent soft background field that is obscured in the full theory. The emission of a soft gluon and graviton from a non-radiative process is investigated and an operatorial version of the soft theorem is obtained.
Living systems on Earth are homochiral: for every chiral species they contain, one of the two possible enantiomers is present at a much higher fraction than its mirrored counterpart. Homochirality has puzzled scientists ever since the discovery of chirality by Pasteur, because the mechanism of its emergence remains unsolved, as does the question of whether homochirality is a prerequisite or a consequence of life. In this thesis, we propose two physical scenarios in which homochirality could have emerged prior to or alongside life. We first show that large and complex chiral chemical networks undergo a symmetry-breaking transition from a racemic state to a homochiral one as the number of chiral compounds they contain becomes large. This robust mechanism relies on properties of large random matrices and requires only a few constraints on the chemical network. It is illustrated with a generalization of the famous Frank model to a large number of chemical species. We also quantify how abundant chiral molecules are in nature through an analysis of molecular databanks, which shows a threshold above which chiral compounds dominate achiral ones. In a second part, we present a scenario based on template-directed ligation of biopolymers such as RNA, which involves the extension of RNA polymers by ligation with other oligomers or monomers compatible with base pairing. This process exhibits autocatalysis and chiral inhibition, two key ingredients for a symmetry-breaking transition leading to a homochiral state. Using detailed stochastic simulations of template-directed ligation of chiral polymeric systems, we investigate the propensity of systems inoculated initially with a racemic mixture of RNA monomers to evolve towards a homochiral polymer system in the presence of racemization reactions.
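The amplification mechanism can be illustrated with the original two-species Frank model (autocatalysis plus mutual chiral inhibition); the rate constants, feed concentration, and initial imbalance below are illustrative choices, not values from the thesis. A tiny initial enantiomeric excess is amplified to a fully homochiral state:

```python
# Frank model with a fixed achiral feed A: autocatalysis A + X -> 2X for
# X in {L, D}, plus mutual inhibition L + D -> inert products.
k_auto, k_inhib, A = 1.0, 1.0, 1.0
L, D = 1.0 + 1e-3, 1.0 - 1e-3          # tiny initial enantiomeric excess
dt = 1e-3
for _ in range(16000):                  # explicit Euler integration
    dL = k_auto * A * L - k_inhib * L * D
    dD = k_auto * A * D - k_inhib * L * D
    L = max(L + dt * dL, 0.0)           # floor keeps concentrations physical
    D = max(D + dt * dD, 0.0)
ee = (L - D) / (L + D)                  # enantiomeric excess, driven to +-1
print(round(ee, 4))
```

The racemic state ee = 0 is an unstable fixed point: the difference L − D grows exponentially while mutual inhibition eliminates the minority enantiomer, so even a statistical fluctuation in the initial condition selects one handedness.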
Two kinds of reactors and their different conditions are studied in this work: closed out-of-equilibrium reactors with a conserved number of RNA monomers, and open reactors in which species are degraded over time and some are chemostated. In both cases, temperature cycles or dry-wet cycles are assumed to be present. We find that closed systems reach full homochirality in the presence of racemization reactions due to chiral stalling, which slows ligation when strands of opposite chirality are paired close to the ligation site. Remarkably, the homochirality transition helps the system reach a longer average polymer length, which is typically difficult to achieve in non-enzymatic polymerization. Open-reactor simulations reach only partial and transient enantiomeric excesses, but without the need for racemization reactions. The work presented in this thesis thus focuses on the amplification of a small initial enantiomeric excess generated by a particular physical or chemical phenomenon, or simply by statistical fluctuations.
When water interacts with porous rocks, its wetting and surface-tension properties create large numbers of air bubbles. To probe their relevance as a setting for the emergence of life, we used microfluidics to create foams stabilized with lipids; a thermal gradient provided a persistent non-equilibrium setting. The foam's large surface area triggers capillary flows and wet-dry reactions that accumulate, aggregate, and oligomerize RNA, offering a compelling habitat for RNA-based early life by providing wet and dry conditions in direct proximity. Lipids were screened for their ability to stabilize the foams; the prebiotically more plausible myristic acid stabilized foams over many hours. The capillary flow created by evaporation at the water-air interface provided an attractive force for molecule localization and selection by molecule size. For example, self-binding oligonucleotide sequences accumulated and formed micrometer-sized aggregates, which were shuttled between gas bubbles. The wet-dry cycles at the foam-bubble interfaces triggered a non-enzymatic RNA oligomerization from 2',3'-cyclic CMP and GMP that, despite the small dry reaction volume, outperformed the corresponding bulk dry reaction. These characteristics make heated foams an interesting, localized setting for early molecular evolution.
We consider the full set of master integrals with internal top- and W-propagators contributing to the three-loop Higgs self-energy diagrams of order $\mathcal{O}(\alpha^2\alpha_s)$. We split the master integrals into a system relevant to the Feynman diagrams proportional to the product of Yukawa couplings $y_b y_t$ and the complement. For both systems we define master integrals of uniform weight, such that the associated differential equation is in $\varepsilon$-factorised form. The occurring square roots are rationalised and all master integrals are expressible in multiple polylogarithms.
We present the first systematic follow-up of Planck Sunyaev-Zeldovich effect (SZE) selected candidates down to signal-to-noise (S/N) of 3 over the 5000 deg$^2$ covered by the Dark Energy Survey. Using the MCMF cluster confirmation algorithm, we identify optical counterparts, determine photometric redshifts and richnesses, and assign a parameter, $f_{\rm cont}$, that reflects the probability that each SZE-optical pairing represents a real cluster rather than a random superposition of physically unassociated systems. The new MADPSZ cluster catalogue consists of 1092 MCMF confirmed clusters and has a purity of 85%. We present the properties of subsamples of the MADPSZ catalogue that have purities ranging from 90% to 97.5%, depending on the adopted $f_{\rm cont}$ threshold. $M_{500}$ halo mass estimates, redshifts, richnesses, and optical centers are presented for all MADPSZ clusters. The MADPSZ catalogue adds 828 previously unknown Planck identified clusters over the DES footprint and provides redshifts for an additional 50 previously published Planck selected clusters with S/N>4.5. Using the subsample with spectroscopic redshifts, we demonstrate excellent cluster photo-$z$ performance with an RMS scatter in $\Delta z/(1+z)$ of 0.47%. Our MCMF based analysis allows us to infer the contamination fraction of the initial S/N>3 Planck selected candidate list, which is 50%. We present a method of estimating the completeness of the MADPSZ cluster sample and $f_{\rm cont}$ selected subsamples. In comparison to the previously published Planck cluster catalogues, this new S/N $>$ 3 MCMF confirmed cluster catalogue populates the lower mass regime at all redshifts and includes clusters up to z$\sim$1.3.
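The photo-$z$ performance metric quoted above, the RMS scatter of $\Delta z/(1+z)$, can be sketched in a few lines. The function name and the toy redshifts below are illustrative, not taken from the catalogue:

```python
import numpy as np

def photoz_rms_scatter(z_phot, z_spec):
    """RMS of (z_phot - z_spec) / (1 + z_spec)."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return np.sqrt(np.mean(dz**2))

# toy data: four clusters with small photo-z offsets
z_spec = np.array([0.2, 0.5, 0.8, 1.1])
z_phot = z_spec + np.array([0.005, -0.006, 0.004, -0.008])
print(photoz_rms_scatter(z_phot, z_spec))  # a few tenths of a percent
```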
Context. Understanding the accretion process plays a central role in understanding star and planet formation.
Aims: We aim to test how accretion variability influences previous correlation analyses of the relation between X-ray activity and accretion rates, which is important for understanding the evolution of circumstellar disks and disk photoevaporation.
Methods: We monitored accreting stars in the Orion Nebula Cluster from November 24, 2014, until February 17, 2019, for 42 epochs with the Wendelstein Wide Field Imager in the Sloan Digital Sky Survey u'g'r' filters on the 2 m Fraunhofer Telescope on Mount Wendelstein. Mass accretion rates were determined from the measured ultraviolet excess. The influence of the mass accretion rate variability on the relation between X-ray luminosities and mass accretion rates was analyzed statistically.
Results: We find a typical interquartile range of ∼0.3 dex for the mass accretion rate variability on timescales from weeks to ∼2 yr. The variability likely has no significant influence on a correlation analysis of the X-ray luminosity and the mass accretion rate observed at different times when the sample size is large enough.
Conclusions: The observed anticorrelation between the X-ray luminosity and the mass accretion rate predicted by models of photoevaporation-starved accretion is likely not due to a bias introduced by different observing times.
Full Tables 1-3 and reduced data are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/666/A55
Context. Observations of the supernova remnant (SNR) Cassiopeia A (Cas A) show significant asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star.
Aims: We investigate whether a past interaction of Cas A with a massive asymmetric shell of the circumstellar medium can account for the observed asymmetries of the reverse shock.
Methods: We performed three-dimensional (3D) (magneto)-hydrodynamic simulations that describe the remnant evolution from the SN explosion to its interaction with a massive circumstellar shell. The initial conditions (soon after the shock breakout at the stellar surface) are provided by a 3D neutrino-driven SN model whose morphology closely resembles Cas A and the SNR simulations cover ≈2000 yr of evolution. We explored the parameter space of the shell, searching for a set of parameters able to produce an inward-moving reverse shock in the western hemisphere of the remnant at the age of ≈350 yr, analogous to that observed in Cas A.
Results: The interaction of the remnant with the shell can produce asymmetries resembling those observed in the reverse shock if the shell was asymmetric with the densest portion in the (blueshifted) nearside to the northwest (NW). According to our preferred model, the shell was thin (thickness σ ≈ 0.02 pc) with a radius $r_{\rm sh}$ ≈ 1.5 pc from the center of the explosion. The reverse shock shows the following asymmetries at the age of Cas A: (i) it moves inward in the observer frame in the NW region, while it moves outward in most other regions; (ii) the geometric center of the reverse shock is offset to the NW by ≈0.1 pc from the geometric center of the forward shock; and (iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km s−1) than in other regions (below 2000 km s−1).
Conclusions: The large-scale asymmetries observed in the reverse shock of Cas A can be interpreted as signatures of the interaction of the remnant with an asymmetric dense circumstellar shell that occurred between ≈180 and ≈240 yr after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred between $10^4$ and $10^5$ yr prior to core-collapse. We estimate a total mass of the shell of the order of 2 M⊙.
We use the Magneticum suite of state-of-the-art hydrodynamical simulations to identify cosmic voids based on the watershed technique and investigate their most fundamental properties across different resolutions in mass and scale. This encompasses the distributions of void sizes, shapes, and content, as well as their radial density and velocity profiles traced by the distribution of cold dark matter particles and halos. We also study the impact of various tracer properties, such as their sparsity and mass, and the influence of void merging on these summary statistics. Our results reveal that all of the analyzed void properties are physically related to each other and describe universal characteristics that are largely independent of tracer type and resolution. Most notably, we find that the motion of tracers around void centers is perfectly consistent with linear dynamics, both for individual and for stacked voids. Despite the large range of scales accessible in our simulations, we are unable to identify the occurrence of nonlinear dynamics even inside voids of only a few Mpc in size. This suggests that voids are among the most pristine probes of cosmology down to scales that are commonly referred to as highly nonlinear in the field of large-scale structure.
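The linear-dynamics statement above can be illustrated with the standard linear-theory prediction for the mean radial velocity around a void center, $v(r) = -\tfrac{1}{3} f H r \Delta(r)$, where $\Delta(r)$ is the integrated density contrast within radius $r$ and $f$ the linear growth rate. A minimal sketch, with purely illustrative values for the growth rate, Hubble constant, and profile (none taken from the simulations):

```python
import numpy as np

def linear_velocity_profile(r_mpc, delta_cumulative, f=0.5, H=70.0):
    """Linear-theory mean radial velocity in km/s.

    r_mpc            : radius in Mpc
    delta_cumulative : integrated density contrast Delta(r) within r
    f, H             : growth rate and Hubble constant (km/s/Mpc), illustrative
    """
    return -(f * H * np.asarray(r_mpc) / 3.0) * np.asarray(delta_cumulative)

r = np.array([2.0, 5.0, 10.0])            # Mpc
Delta = np.array([-0.8, -0.5, -0.2])      # underdense interior -> outflow, v > 0
print(linear_velocity_profile(r, Delta))
```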
The polarization of the cosmic microwave background (CMB) can be used to search for parity-violating processes like that predicted by a Chern-Simons coupling to a light pseudoscalar field. Such an interaction rotates $E$ modes into $B$ modes in the observed CMB signal by an effect known as cosmic birefringence. Even though isotropic birefringence can be confused with the rotation produced by a miscalibration of the detectors' polarization angles, the degeneracy between both effects is broken when Galactic foreground emission is used as a calibrator. In this work, we use realistic simulations of the High-Frequency Instrument of the Planck mission to test the impact that Galactic foreground emission and instrumental systematics have on the recent birefringence measurements obtained through this technique. Our results demonstrate the robustness of the methodology against the miscalibration of polarization angles and other systematic effects, like intensity-to-polarization leakage, beam leakage, or cross-polarization effects. However, our estimator is sensitive to the $EB$ correlation of polarized foreground emission. Here we propose to correct the bias induced by dust $EB$ by modeling the foreground signal with templates produced in Bayesian component-separation analyses that fit parametric models to CMB data. Acknowledging the limitations of currently available dust templates like that of the Commander sky model, high-precision CMB data and a characterization of dust beyond the modified blackbody paradigm are needed to obtain a definitive measurement of cosmic birefringence in the future.
The dark matter halo sparsity, i.e. the ratio between spherical halo masses enclosing two different overdensities, provides a non-parametric proxy of the halo mass distribution that has been shown to be a sensitive probe of the cosmological imprint encoded in the mass profile of haloes hosting galaxy clusters. Mass estimates at several overdensities would allow for multiple sparsity measurements, which can potentially retrieve the entirety of the cosmological information imprinted on the halo profile. Here, we investigate the impact of multiple sparsity measurements on the cosmological model parameter inference. For this purpose, we analyse N-body halo catalogues from the Raygal and M2Csims simulations and evaluate the correlations among six different sparsities from spherical overdensity halo masses at Δ = 200, 500, 1000, and 2500 (in units of the critical density). Remarkably, sparsities associated with distinct halo mass shells are not highly correlated. This is not the case for sparsities obtained using halo masses estimated from the Navarro-Frenk-White (NFW) best-fitting profile, which artificially induces order-one correlations among different sparsities. This implies that there is additional information in the mass profile beyond the NFW parametrization and that it can be exploited with multiple sparsities. In particular, from a likelihood analysis of synthetic average sparsity data, we show that cosmological parameter constraints significantly improve when increasing the number of sparsity combinations, though the constraints saturate beyond four sparsity estimates. We forecast constraints for the CHEX-MATE cluster sample and find that systematic mass bias errors mildly impact the parameter inference, though more studies are needed in this direction.
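As a minimal illustration of the sparsity statistic itself (toy halo masses, not taken from the Raygal or M2Csims catalogues):

```python
import numpy as np

def sparsity(m_outer, m_inner):
    """Halo sparsity s_{D1,D2} = M_{D1} / M_{D2} for overdensities D1 < D2.

    Since mass grows with radius, s >= 1 for realistic halo profiles.
    """
    return np.asarray(m_outer) / np.asarray(m_inner)

# toy spherical-overdensity masses (in 1e14 Msun) at Delta = 200, 500, 1000, 2500
m200, m500, m1000, m2500 = 5.0, 3.4, 2.4, 1.3
for (d1, m1), (d2, m2) in [((200, m200), (500, m500)),
                           ((200, m200), (1000, m1000)),
                           ((500, m500), (2500, m2500))]:
    print(f"s_{d1},{d2} = {sparsity(m1, m2):.2f}")
```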
In this paper we present the complete expressions of the lepton and neutron electric dipole moments (EDMs) in the Standard Model Effective Field Theory (SMEFT), up to 1-loop and dimension-6 level and including both renormalization group running contributions and finite corrections. The latter play a fundamental role in the cases of operators that do not renormalize the dipoles, but there are also classes of operators for which they provide an important fraction, 10-20%, of the total 1-loop contribution, if the new physics scale is around Λ = 5 TeV. We present the full set of bounds on each individual Wilson coefficient contributing to the EDMs using both the current experimental constraints, as well as those from future experiments, which are expected to improve by at least an order of magnitude.
We present our lens mass model of SMACS J0723.3−7327, the first strong gravitational lens observed by the James Webb Space Telescope (JWST). We use data from the Hubble Space Telescope and the Multi Unit Spectroscopic Explorer (MUSE) to build our `pre-JWST' lens model and then refine it with newly available JWST near-infrared imaging in our JWST model. To reproduce the positions of all multiple lensed images with good accuracy, the adopted mass parameterisation consists of one cluster-scale component, accounting mainly for the dark matter distribution, the galaxy cluster members, and an external shear component. The pre-JWST model has, as constraints, 19 multiple images from six background sources, of which four have secure spectroscopic redshift measurements from this work. The JWST model has more than twice the number of constraints: 30 additional multiple images from another 11 lensed sources. Both models can reproduce the multiple image positions very well, with a $\delta_{\rm rms}$ of 0.″39 and 0.″51 for the pre-JWST and JWST models, respectively. The total mass estimates within a radius of 128 kpc (roughly the Einstein radius) are $7.9_{-0.2}^{+0.3} \times 10^{13}\,M_\odot$ and $8.7_{-0.2}^{+0.2} \times 10^{13}\,M_\odot$ for the pre-JWST and JWST models, respectively. We predict with our mass models the redshifts of the newly detected JWST sources, which is crucial information, especially for systems without spectroscopic measurements, for further studies and follow-up observations. Interestingly, one family detected with JWST is found to be at a very high redshift, z > 7.5 (68% confidence level), and with one image that has a lensing magnification of $|\mu| = 9.5_{-0.8}^{+0.9}$, making it an interesting case for future studies. The lens models, including magnification maps and redshifts estimated from the model, are made publicly available, along with the full spectroscopic redshift catalogue from MUSE.
The MUSE redshift catalogue (Table A.1) and lens model files are available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/666/L9
Despite the extensive success of quantum field theories (QFTs) in particle and solid state physics there are still unsolved conceptual problems, in particular regarding the underlying mathematical foundations. In recent years, research has focused on special cases like topological QFTs (TFTs) where mathematically rigorous descriptions in the language of category theory have been found. Two of these descriptions, namely those using bordisms and higher categories, are also capable of describing defects including boundaries, interfaces between different TFTs, and point insertions. Translating examples of defect TFTs from a physics description to a rigorous mathematical model is, however, a challenging problem. A multifaceted example is given by the affine Rozansky-Witten model, which from a physics point of view is a topologically twisted supersymmetric 3D N=4 QFT. On the mathematics side, it features a description in terms of a higher category RW which covers many aspects of this model, in particular regarding its defects. For example, previous fundamental analysis of RW has shown that its two-dimensional defects are closely related to the topological Landau-Ginzburg model which forms a well-studied 2D defect TFT described by the bicategory LG. However, many aspects of the tricategory RW have not yet been studied in detail.
How are molecules linked to each other in complex systems? In a proof-of-concept study, we developed the method mol2net (https://zenodo.org/record/7025094) to generate and analyze the molecular network of complex astrochemical data (from high-resolution Orbitrap MS1 analysis of H2O:CH3OH:NH3 interstellar ice analogs) in a data-driven and unsupervised manner, without any prior knowledge about chemical reactions. The molecular network is clustered according to the initial NH3 content and reveals HCN, NH3, and H2O as spatially resolved key transformations. In comparison with the PubChem database, four subsets were annotated: (i) saturated C-backbone molecules without N, (ii) saturated N-backbone molecules, (iii) unsaturated C-backbone molecules without N, and (iv) unsaturated N-backbone molecules. These findings were validated against previous results (e.g., by identifying the two major graph components as the previously described N-poor and N-rich molecular groups) but add information about subclustering, key transformations, and molecular structures, thereby significantly refining the structural characterization of large complex organic molecules in interstellar ice analogs.
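The data-driven network construction can be sketched as follows: exact masses from an MS1 peak list become nodes, and two nodes are connected when their mass difference matches one of the key transformations (H2O, NH3, HCN). This is a hypothetical toy version, not the mol2net implementation; the peak masses and tolerance below are illustrative:

```python
# Monoisotopic mass differences (Da) for the key transformations named above.
TRANSFORMATIONS = {"H2O": 18.0106, "NH3": 17.0265, "HCN": 27.0109}

def build_network(masses, tol=0.002):
    """Connect peak masses whose difference matches a known transformation."""
    edges = []
    for i, mi in enumerate(masses):
        for j in range(i + 1, len(masses)):
            mj = masses[j]
            for name, dm in TRANSFORMATIONS.items():
                if abs(abs(mi - mj) - dm) < tol:
                    edges.append((mi, mj, name))
    return edges

# toy MS1 peak masses (Da)
peaks = [60.0324, 77.0589, 78.0429, 105.0538]
for edge in build_network(peaks):
    print(edge)  # e.g. (60.0324, 77.0589, 'NH3')
```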
We propose a parametrization of the leading B-meson light-cone distribution amplitude (LCDA) in heavy-quark effective theory (HQET). In position space, it uses a conformal transformation that yields a systematic Taylor expansion and an integral bound, which enables control of the truncation error. Our parametrization further produces compact analytical expressions for a variety of derived quantities. At a given reference scale, our momentum-space parametrization corresponds to an expansion in associated Laguerre polynomials, which turn into confluent hypergeometric functions ${}_1F_1$ under renormalization-group evolution at one-loop accuracy. Our approach thus allows a straightforward and transparent implementation of a variety of phenomenological constraints, regardless of their origin. Moreover, we can include theoretical information on the Taylor coefficients by using the local operator product expansion. We showcase the versatility of the parametrization in a series of phenomenological pseudo-fits.
We propose an analogue of spin fields for the relativistic RNS-particle in 4 dimensions, in order to describe Ramond-Ramond states as "two-particle" excitations on the world line. On a natural representation space we identify a differential whose cohomology agrees with the RR-field equations. We then discuss the non-linear theory encoded in deformations of the latter by background fields. We also formulate a sigma model for this spin field from which we recover the RNS-formulation by imposing suitable constraints.
Cosmological simulations are an important theoretical pillar for understanding nonlinear structure formation in our Universe and for relating it to observations on large scales. In several papers, we introduce our MillenniumTNG (MTNG) project, which provides a comprehensive set of high-resolution, large-volume simulations of cosmic structure formation aiming to better understand physical processes on large scales and to help interpret upcoming large-scale galaxy surveys. Here we focus on the full-physics box MTNG740, which simulates a volume of $(740\,\mathrm{Mpc})^3$ with a baryonic mass resolution of $3.1\times~10^7\,\mathrm{M_\odot}$ using \textsc{arepo} with $80.6$~billion cells and the IllustrisTNG galaxy formation model. We verify that the galaxy properties produced by MTNG740 are consistent with the TNG simulations, as well as with more recent observations. We focus on galaxy clusters and analyse cluster scaling relations and radial profiles. We show that both are broadly consistent with various observational constraints. We demonstrate that the SZ-signal on a deep lightcone is consistent with Planck limits. Finally, we compare MTNG740 clusters with galaxy clusters found in Planck and the SDSS-8 RedMaPPer richness catalogue in observational space, finding very good agreement as well. However, {\it simultaneously} matching cluster masses, richness, and Compton-$y$ requires us to assume that the SZ mass estimates for Planck clusters are underestimated by $0.2$~dex on average. Thanks to its unprecedented volume for a high-resolution hydrodynamical calculation, the MTNG740 simulation offers rich possibilities to study baryons in galaxies, galaxy clusters, and large-scale structure, and in particular their impact on upcoming large cosmological surveys.
Context. Millimeter astronomy provides valuable information on the birthplaces of planetary systems. In order to compare theoretical models with observations, the dust component has to be carefully calculated.
Aims: Here, we aim to study the effects of dust entrainment in photoevaporative winds, and the ejection and drag of dust due to the effects caused by radiation from the central star.
Methods: We improved and extended the existing implementation of a two-population dust and pebble description in the global Bern/Heidelberg planet formation and evolution model. Modern prescriptions for photoevaporative winds were used and we accounted for settling and advection of dust when calculating entrainment rates. In order to prepare for future population studies with varying conditions, we explored a wide range of disk, photoevaporation, and dust parameters.
Results: If dust can grow to pebble sizes, that is, if grains are resistant to fragmentation or turbulence is weak, drift dominates and the entrained mass is small, though larger than under the assumption of no vertical advection of grains with the gas flow. For fragile dust shattering at velocities of 1 m s−1, as indicated by laboratory experiments, an order of magnitude more dust is entrained, and entrainment becomes the main dust-removal process. Radiation-pressure effects disperse massive, dusty disks on timescales of a few hundred Myr.
Conclusions: These results highlight the importance of dust entrainment in winds as a solid-mass removal process. Furthermore, this model extension lays the foundations for future statistical studies of the formation of planets in their birth environment.
The cosmological constant and its phenomenology remain among the greatest puzzles in theoretical physics. We review how modifications of Einstein’s general relativity could alleviate the different problems associated with it that result from the interplay of classical gravity and quantum field theory. We introduce a modern and concise language to describe the problems associated with its phenomenology, and inspect no-go theorems and their loopholes to motivate the approaches discussed here. Constrained gravity approaches exploit minimal departures from general relativity; massive gravity introduces mass to the graviton; Horndeski theories lead to the breaking of translational invariance of the vacuum; and models with extra dimensions change the symmetries of the vacuum. We also review screening mechanisms that have to be present in some of these theories if they aim to recover the success of general relativity on small scales as well. Finally, we summarize the statuses of these models in their attempts to solve the different cosmological constant problems while being able to account for current astrophysical and cosmological observations.
We present SKiLLS, a suite of multi-band image simulations for the weak lensing analysis of the complete Kilo-Degree Survey (KiDS), dubbed KiDS-Legacy analysis. The resulting catalogues enable joint shear and redshift calibration, enhancing the realism and hence accuracy over previous efforts. To create a large volume of simulated galaxies with faithful properties and to a sufficient depth, we integrated cosmological simulations with high-quality imaging observations. We also improved the realism of simulated images by allowing the point spread function (PSF) to differ between CCD images, including stellar density variations and varying noise levels between pointings. Using realistic variable shear fields, we accounted for the impact of blended systems at different redshifts. Although the overall correction is minor, we found a clear redshift-bias correlation in the blending-only variable shear simulations, indicating the non-trivial impact of this higher-order blending effect. We also explored the impact of the PSF modelling errors and found a small yet noticeable effect on the shear bias. Finally, we conducted a series of sensitivity tests, including changing the input galaxy properties. We conclude that our fiducial shape measurement algorithm, lensfit, is robust within the requirements of lensing analyses with KiDS. As for future weak lensing surveys with tighter requirements, we suggest further investments in understanding the impact of blends at different redshifts, improving the PSF modelling algorithm and developing the shape measurement method to be less sensitive to the galaxy properties.
We report the discovery and characterization of two small transiting planets orbiting the bright M3.0V star TOI-1468 (LSPM J0106+1913), whose transit signals were detected in the photometric time series in three sectors of the TESS mission. We confirm the planetary nature of both of them using precise radial velocity measurements from the CARMENES and MAROON-X spectrographs, and supplement them with ground-based transit photometry. A joint analysis of all these data reveals that the shorter-period planet, TOI-1468 b (Pb = 1.88 d), has a planetary mass of Mb = 3.21 ± 0.24 M⊕ and a radius of Rb = $1.280_{-0.039}^{+0.038}$ R⊕, resulting in a density of ρb = $8.39_{-0.92}^{+1.05}$ g cm−3, which is consistent with a mostly rocky composition. For the outer planet, TOI-1468 c (Pc = 15.53 d), we derive a mass of Mc = $6.64_{-0.68}^{+0.67}$ M⊕, a radius of Rc = 2.06 ± 0.04 R⊕, and a bulk density of ρc = $2.00_{-0.19}^{+0.21}$ g cm−3, which corresponds to a rocky core composition with a H/He gas envelope. These planets are located on opposite sides of the radius valley, making our system an interesting discovery as there are only a handful of other systems with the same properties. This discovery can further help determine a more precise location of the radius valley for small planets around M dwarfs and, therefore, shed more light on planet formation and evolution scenarios.
Radial velocities and photometry are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/666/A155
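As a quick consistency check of the bulk density quoted above for TOI-1468 b, $\rho = M / (\tfrac{4}{3}\pi R^3)$ can be evaluated directly from the mass and radius in Earth units. The constants are standard values and the function name is our own:

```python
import math

M_EARTH_G = 5.972e27   # Earth mass in g
R_EARTH_CM = 6.371e8   # Earth radius in cm

def bulk_density(mass_mearth, radius_rearth):
    """Bulk density in g cm^-3 from mass (Earth masses) and radius (Earth radii)."""
    m = mass_mearth * M_EARTH_G
    r = radius_rearth * R_EARTH_CM
    return m / ((4.0 / 3.0) * math.pi * r**3)

# TOI-1468 b: M = 3.21 Mearth, R = 1.280 Rearth -> ~8.4 g cm^-3,
# consistent (within the quoted uncertainties) with the published 8.39 g cm^-3
print(bulk_density(3.21, 1.280))
```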
This work demonstrates that nontopological solitons with large global charges and masses, even above the Planck scale, can form in the early universe and dominate the dark matter abundance. In solitosynthesis, solitons prefer to grow as large as possible under equilibrium dynamics when an initial global charge asymmetry is present. Their abundance is set by when soliton formation via particle fusion freezes out, and their charges are set by the time it takes to accumulate free particles. This work improves the estimation of both quantities, and in particular shows that much larger-charged solitons form than previously thought. The results are estimated analytically and validated numerically by solving the coupled Boltzmann equations. Without solitosynthesis, phase transitions can still form solitons from particles left inside false-vacuum pockets and determine their present-day abundance and properties. Even with zero charge asymmetry, solitons formed in this way can have very large charges on account of statistical fluctuations in the numbers of (anti)particles inside each pocket.
Unveiling the microscopic origins of quantum phases dominated by the interplay of spin and motional degrees of freedom constitutes one of the central challenges in strongly correlated many-body physics. When holes move through an antiferromagnetic spin background, they displace the positions of spins, which in turn induces effective frustration in the magnetic environment. However, a concrete characterization of this effect in a quantum many-body system is still an unsolved problem. Here we introduce a Hamiltonian reconstruction scheme that allows for a precise quantification of hole-motion-induced frustration. In particular, we access non-local correlation functions through projective measurements of the many-body state, from which effective spin-Hamiltonians can be recovered after detaching the magnetic background from dominant charge fluctuations. The scheme is applied to systems of mixed dimensionality, where holes are restricted to move in one dimension (1D), but $\mathrm{SU}(2)$ superexchange is two-dimensional (2D). We demonstrate that hole motion drives the spin background into a highly frustrated spin liquid regime, reminiscent of Anderson's resonating valence bond paradigm in doped cuprates. We exemplify the direct applicability of the reconstruction scheme to ultracold atom experiments by recovering effective spin-Hamiltonians of experimentally obtained 1D Fermi-Hubbard snapshots. Our method can be generalized to fully 2D systems, enabling an unprecedented microscopic perspective on the doped Hubbard model.
Nature uses dynamic, molecular self-assembly to create cellular architectures that adapt to their environment. For example, a guanosine triphosphate (GTP)-driven reaction cycle activates and deactivates tubulin for dynamic assembly into microtubules. Inspired by dynamic self-assembly in biology, recent studies have developed synthetic analogs of assemblies regulated by chemically fueled reaction cycles. A challenge in these studies is to control the interplay between rapid disassembly and kinetic trapping of building blocks, known as dynamic instabilities. In this work, we show how molecular design can tune the tendency of molecules to remain trapped in their assembly and how that design alters the dynamics of the emerging assemblies. Our work offers design rules for approaching dynamic instabilities in chemically fueled assemblies to create new adaptive nanotechnologies.
Self-organization is a ubiquitous and fundamental process that underlies all living systems. In cellular organisms, many vital processes, such as cell division and growth, are spatially and temporally regulated by proteins -- the building blocks of life. To achieve this, proteins self-organize and form spatiotemporal patterns. In general, protein patterns respond to a variety of internal and external stimuli, such as cell shape or inhomogeneities in protein activity. As a result, the dynamics of intracellular pattern formation generally span multiple spatial and temporal scales. This thesis addresses the underlying mechanisms that lead to the formation of heterogeneous patterns. The main themes of this work are organized into three parts, which are summarized below. [...]
The search for exploitable deposits of water and other volatiles at the Moon's poles has intensified considerably in recent years, owing to the renewed strong interest in lunar exploration. With the return of humans to the lunar surface on the horizon, the use of locally available resources to support long-term and sustainable exploration programs, encompassing both robotic and crewed elements, has moved into the focus of public and private actors alike. Our current knowledge about the distribution and concentration of water and other volatiles in lunar rocks and regolith is, however, too limited to assess the feasibility and economic viability of resource-extraction efforts. On a more fundamental level, we currently lack sufficiently detailed data to fully understand the origins of lunar water and its migration to the polar regions. In this paper, we present LUVMI-X, a mission concept resulting from a recently concluded design study that is intended to address the shortage of in situ data on lunar volatiles. Its central element is a compact rover equipped with complementary instrumentation capable of investigating both the surface and the shallow subsurface of illuminated and shadowed areas at the lunar south pole. We describe the rover and instrument design, the mission's operational concept, and a preliminary landing-site analysis. We also discuss how LUVMI-X fits into the diverse landscape of lunar missions under development.
Dark matter (DM) with self-interactions is a promising solution for the small-scale problems of the standard cosmological model. Here we perform the first cosmological simulation of frequent DM self-interactions, corresponding to small-angle DM scatterings. The focus of our analysis lies in finding and understanding differences from the traditionally assumed rare (large-angle) DM self-scatterings. For this purpose, we compute the distribution of DM densities, the matter power spectrum, the two-point correlation function, and the halo and subhalo mass functions. Furthermore, we investigate the density profiles of the DM haloes and their shapes. We find that, overall, large-angle and small-angle scatterings behave fairly similarly, with a few exceptions. In particular, the number of satellites is considerably suppressed for frequent compared to rare self-interactions with the same cross-section. Overall, we observe that while differences between the two cases may be difficult to establish using a single measure, the degeneracy may be broken through a combination of multiple ones. For instance, combining satellite counts with halo density or shape profiles could allow discriminating between rare and frequent self-interactions. As a by-product of our analysis, we provide the first upper limits on the cross-section for frequent self-interactions.
Aims: We show how to increase the accuracy of estimates of the two-point correlation function without sacrificing efficiency.
Methods: We quantify the error of the pair counts and of the Landy & Szalay estimator by comparing them with exact reference values. The standard method, using random point sets, is compared to geometrically motivated estimators and to estimators using quasi-Monte Carlo integration.
Results: In the standard method, the error scales proportionally to 1/√Nr, with Nr being the number of random points. In our improved methods, the error scales almost proportionally to 1/Nq, where Nq is the number of points drawn from a low-discrepancy sequence. We compare the run times of the new estimators to those of the standard estimator at the same level of accuracy. For the considered case, we always see a speedup, ranging from 50% up to a factor of several thousand. We also discuss how to apply these improved estimators to incompletely sampled galaxy catalogues.
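To make the quantity under discussion concrete, the following is a minimal brute-force sketch of the standard Landy & Szalay estimator, ξ(r) = (DD − 2DR + RR)/RR with normalized pair counts; it is illustrative only (all function names are our own), and the improved methods described above would replace the uniform random set with a low-discrepancy sequence and exact or quasi-Monte Carlo integrals for the RR and DR terms.

```python
import numpy as np

def cross_pair_counts(a, b, bins):
    # Histogram of separations between every point of a and every point of b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=bins)[0]

def auto_pair_counts(a, bins):
    # Histogram of separations over distinct pairs within one set
    # (each pair counted once, self-pairs excluded).
    i, j = np.triu_indices(len(a), k=1)
    d = np.linalg.norm(a[i] - a[j], axis=-1)
    return np.histogram(d, bins=bins)[0]

def landy_szalay(data, rand, bins):
    # xi(r) = (DD - 2 DR + RR) / RR, with pair counts normalized by the
    # total number of pairs of each kind.
    nd, nr = len(data), len(rand)
    dd = auto_pair_counts(data, bins) / (nd * (nd - 1) / 2)
    rr = auto_pair_counts(rand, bins) / (nr * (nr - 1) / 2)
    dr = cross_pair_counts(data, rand, bins) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

# For an unclustered (uniform) data set, xi should scatter around zero,
# with the scatter set by the finite number of random points.
rng = np.random.default_rng(0)
data = rng.uniform(size=(200, 3))
rand = rng.uniform(size=(400, 3))
bins = np.linspace(0.05, 0.5, 6)
xi = landy_szalay(data, rand, bins)
```

The O(N²) pair counting shown here is what makes large random sets expensive in practice, which is precisely the cost that a smaller low-discrepancy point set with 1/Nq error scaling reduces.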