Accurate characterization of the polarized dust emission from our Galaxy will be decisive in the quest for the cosmic microwave background (CMB) primordial B-modes. Incomplete modeling of its potentially complex spectral properties could lead to biases in the CMB polarization analyses and to a spurious measurement of the tensor-to-scalar ratio r. This is particularly crucial for future surveys like the LiteBIRD satellite, whose goal is to constrain the faint primordial signal left over by inflation with an accuracy on the tensor-to-scalar ratio r of the order of $10^{-3}$. Variations of the dust properties along and between lines of sight lead to unavoidable distortions of the spectral energy distribution (SED) that cannot be easily anticipated by standard component-separation methods. This issue can be tackled using a moment expansion of the dust SED, an innovative parametrization method imposing minimal assumptions on the sky complexity. In the present paper, we apply this formalism to the B-mode cross-angular power spectra computed from simulated LiteBIRD polarization data at frequencies between 100 and 402 GHz that contain CMB, dust, and instrumental noise. The spatial variation of the dust spectral parameters (spectral index β and temperature T) in our simulations leads to significant biases on r (∼21 $\sigma_r$) if not properly taken into account. Performing the moment expansion in β, as in previous studies, reduces the bias but does not lead to sufficiently reliable estimates of r. We introduce, for the first time, the expansion of the cross-angular power spectra SED in both β and T, showing that, at the sensitivity of LiteBIRD, the SED complexity due to temperature variations needs to be taken into account in order to prevent analysis biases on r. Thanks to this expansion, and despite the existing correlations between some of the dust moments and the CMB signal, which are responsible for a rise in the error on r, we can measure an unbiased value of the tensor-to-scalar ratio with a dispersion as low as $\sigma_r = 8.8 \times 10^{-4}$.
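For orientation, a minimal schematic of the formalism (the pivot values $\bar\beta$ and $\bar T$ are illustrative, and the analysis in the paper expands the B-mode cross-angular power spectra rather than the map-level SED): the dust emission is modelled as a modified blackbody,
$$ I_\nu(\beta, T) = A \left(\frac{\nu}{\nu_0}\right)^{\beta} B_\nu(T), $$
and its first-order moment (Taylor) expansion in both spectral parameters reads
$$ I_\nu(\beta, T) \simeq I_\nu(\bar\beta, \bar T)\left[ 1 + (\beta - \bar\beta)\,\ln\frac{\nu}{\nu_0} + (T - \bar T)\left.\frac{\partial \ln B_\nu(T)}{\partial T}\right|_{\bar T} + \dots \right], $$
with higher-order moments capturing the SED distortions induced by averaging spectral parameters along and between lines of sight.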
We present analytic results for the two tennis-court integral families relevant to 2 → 2 scattering processes involving one massive external particle and massless propagators in terms of Goncharov polylogarithms of up to transcendental weight six. We also present analytic results for physical kinematics for the ladder-box family and the two tennis-court families in terms of real-valued polylogarithmic functions, making our solutions well-suited for phenomenological applications.
Context. At present, there are strong indications that white dwarf (WD) stars with masses well below the Chandrasekhar limit (MCh ≈ 1.4 M⊙) contribute a significant fraction of SN Ia progenitors. The relative fraction of stable iron-group elements synthesized in the explosion has been suggested as a possible discriminant between MCh and sub-MCh events. In particular, it is thought that the higher-density ejecta of MCh WDs, which favours the synthesis of stable isotopes of nickel, results in prominent [Ni II] lines in late-time spectra (≳150 d past explosion).
Aims: We study the explosive nucleosynthesis of stable nickel in SNe Ia resulting from MCh and sub-MCh progenitors. We explore the potential for lines of [Ni II] in the optical and near-infrared (at 7378 Å and 1.94 μm) in late-time spectra to serve as a diagnostic of the exploding WD mass.
Methods: We reviewed stable Ni yields across a large variety of published SN Ia models. Using 1D MCh delayed-detonation and sub-MCh detonation models, we studied the synthesis of stable Ni isotopes (in particular, 58Ni) and investigated the formation of [Ni II] lines using non-local thermodynamic equilibrium radiative-transfer simulations with the CMFGEN code.
Results: We confirm that stable Ni production is generally more efficient in MCh explosions at solar metallicity (typically 0.02-0.08 M⊙ for the 58Ni isotope), but we note that the 58Ni yield in sub-MCh events systematically exceeds 0.01 M⊙ for WDs that are more massive than one solar mass. We find that the radiative proton-capture reaction 57Co(p, γ)58Ni is the dominant production mode for 58Ni in both MCh and sub-MCh models, while the α-capture reaction on 54Fe has a negligible impact on the final 58Ni yield. More importantly, we demonstrate that the lack of [Ni II] lines in late-time spectra of sub-MCh events is not always due to an under-abundance of stable Ni; rather, it results from the higher ionization of Ni in the inner ejecta. Conversely, the strong [Ni II] lines predicted in our 1D MCh models are completely suppressed when 56Ni is sufficiently mixed with the innermost layers, which are rich in stable iron-group elements.
Conclusions: [Ni II] lines in late-time SN Ia spectra have a complex dependency on the abundance of stable Ni, which limits their use in distinguishing between MCh and sub-MCh progenitors. However, we argue that a low-luminosity SN Ia displaying strong [Ni II] lines would most likely result from a Chandrasekhar-mass progenitor.
We construct a relativistic chiral nucleon-nucleon interaction up to next-to-next-to-leading order in covariant baryon chiral perturbation theory. We show that a good description of the np phase shifts up to Tlab = 200 MeV and even higher can be achieved with a χ²/d.o.f. less than 1. Both the next-to-leading-order and the next-to-next-to-leading-order results describe the phase shifts equally well up to Tlab = 200 MeV, but for higher energies the latter behaves better, showing satisfactory convergence. The relativistic chiral potential provides the most essential input for relativistic ab initio studies of nuclear structure and reactions, an input that has been needed for almost two decades.
The outflows from neutrino-cooled black hole accretion disks formed in neutron-star mergers or cores of collapsing stars are expected to be neutron-rich enough to explain a large fraction of elements created by the rapid neutron-capture process, but their precise chemical composition remains elusive. Here, we investigate the role of fast neutrino flavor conversion, motivated by the findings of our post-processing analysis that shows evidence of electron-neutrino lepton-number crossings deep inside the disk, hence suggesting possibly nontrivial effects due to neutrino flavor mixing. We implement a parametric, dynamically self-consistent treatment of fast conversion in time-dependent simulations and examine the impact on the disk and its outflows. By activating the otherwise inefficient emission of heavy-lepton neutrinos, fast conversions enhance the disk cooling rates and reduce the absorption rates of electron-type neutrinos, causing a reduction of the electron fraction in the disk by 0.03-0.06 and in the ejected material by 0.01-0.03. The rapid neutron-capture process yields are enhanced by typically no more than a factor of two, rendering the overall impact of fast conversions modest. The kilonova is prolonged as a net result of increased lanthanide opacities and enhanced radioactive heating rates. We observe only mild sensitivity to the disk mass, the condition for the onset of flavor conversion, and the considered cases of flavor mixing. Remarkably, parametric models of flavor mixing that conserve the lepton numbers per family result in an overall smaller impact than models invoking three-flavor equipartition, often assumed in previous works.
Starting from first principles, we study radiative transfer by new feebly-interacting bosons (FIBs) such as axions, axion-like particles (ALPs), dark photons, and others. Our key simplification is to include only boson emission or absorption (including decay), but not scattering between different modes of the radiation field. Based on a given distribution of temperature and FIB absorption rate in a star, we derive explicit volume-integral expressions for the boson luminosity, reaching from the free-streaming to the strong-trapping limit. The latter is seen explicitly to correspond to quasi-thermal emission from a "FIB sphere" according to the Stefan-Boltzmann law. Our results supersede expressions and approximations found in the recent literature on FIB emission from a supernova core and, for radiatively unstable FIBs, provide explicit expressions for the nonlocal ("ballistic") transfer of energy recently discussed in horizontal-branch stars.
We introduce an open-source package called QTraj that solves the Lindblad equation for heavy-quarkonium dynamics using the quantum trajectories algorithm. The package allows users to simulate the suppression of heavy-quarkonium states using externally-supplied input from 3+1D hydrodynamics simulations. The code uses a split-step pseudo-spectral method for updating the wave-function between jumps, which is implemented using the open-source multi-threaded FFTW3 package. This allows one to have manifestly unitary evolution when using real-valued potentials. In this paper, we provide detailed documentation of QTraj 1.0, installation instructions, and present various tests and benchmarks of the code.
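As a rough illustration of the split-step pseudo-spectral update used between quantum jumps (a minimal 1D Python/NumPy sketch with a placeholder harmonic potential; the actual QTraj code is a 3D, multi-threaded implementation built on FFTW3 and uses the in-medium quarkonium potential):

```python
import numpy as np

# Minimal 1D split-step pseudo-spectral evolution between quantum jumps.
# Illustrative only: a harmonic potential stands in for the heavy-quarkonium
# potential, and hbar = m = 1 units are assumed.
N, L, dt, steps = 512, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

V = 0.5 * x**2                        # placeholder real-valued potential
psi = np.exp(-x**2)                   # initial wave function
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

half_V = np.exp(-0.5j * V * dt)       # half-step potential propagator
kin = np.exp(-0.5j * k**2 * dt)       # full-step kinetic propagator (p^2/2m)

for _ in range(steps):
    psi = half_V * psi                               # V/2 step in position space
    psi = np.fft.ifft(kin * np.fft.fft(psi))         # kinetic step in momentum space
    psi = half_V * psi                               # V/2 step in position space

# For a real-valued potential this update is manifestly unitary:
print(np.sum(np.abs(psi)**2) * (L / N))              # ~1.0
```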
We analyze the flavor violating muon decay μ → eχ, where χ is a massive gauge boson, with emphasis on the regime where χ is ultralight. We first study this process from an effective field theory standpoint in terms of form factors. We then present two explicit models where μ → eχ is generated at tree level and at the one-loop level. We also comment on the prospects of observing the process μ → eχ in view of the current limits on μ → 3e from the SINDRUM collaboration.
Knowing the Galactic 3D dust distribution is relevant for understanding many processes in the interstellar medium and for correcting many astronomical observations for dust absorption and emission. Here, we aim for a 3D reconstruction of the Galactic dust distribution with an increase in the number of meaningful resolution elements by orders of magnitude with respect to previous reconstructions, while taking advantage of the dust's spatial correlations to inform the dust map. We use iterative grid refinement to define a log-normal process in spherical coordinates. This log-normal process assumes a fixed correlation structure, which was inferred in an earlier reconstruction of Galactic dust. Our map is informed by 111 million data points, combining data from PANSTARRS, 2MASS, Gaia DR2 and ALLWISE. The log-normal process is discretized to 122 billion degrees of freedom, a factor of 400 more than our previous map. We derive the most probable posterior map and an uncertainty estimate using natural gradient descent and the Fisher-Laplace approximation. The dust reconstruction covers a quarter of the volume of our Galaxy, with a maximum coordinate distance of $16\,\text{kpc}$, and meaningful information can be found up to distances of $4\,$kpc, still improving upon our earlier map by a factor of 5 in maximal distance, of 900 in volume, and of about 18 in angular grid resolution. Unfortunately, the maximum posterior approach chosen to make the reconstruction computationally affordable introduces artifacts and reduces the accuracy of our uncertainty estimate. Despite the apparent limitations of the presented 3D dust map, a good part of the reconstructed structures are confirmed by independent maser observations. Thus, the map is a step towards reliable 3D Galactic cartography and can already serve for a number of tasks, if used with care.
In this note, as an input to the Snowmass studies, we provide a broad-brush picture of the physics output of future colliders as a function of their center-of-mass energies and luminosities. Instead of relying on precise projections of physics reach, which are lacking in many cases, we mainly focus on simple benchmarks of physics yields, such as the number of Higgs bosons produced. More detailed considerations are given for lepton colliders, since there have been various recent proposals. A brief summary for hadron colliders, based on a simple scaling estimate of their physics reach, is also included.
Context. The Carina Nebula complex (CNC) is one of the most massive and active star-forming regions in our Galaxy and it contains several large young star clusters. The distances of the individual clusters and their physical connection were poorly known up to now, with strongly discrepant results reported in the literature.
Aims: We want to determine reliable distances of the young stellar clusters in the central Carina Nebula region (in particular, Tr 14, 15, and 16) and the prominent clusters NGC 3324 and NGC 3293 in the northwestern periphery of the CNC.
Methods: We analyzed the parallaxes in Gaia EDR3 for a comprehensive sample of 237 spectroscopically identified OB stars, as well as for 9562 X-ray-selected young stars throughout the complex. We also performed an astrometric analysis to identify members of the young cluster vdBH 99, which is located in the foreground of the northwestern part of the Carina Nebula.
Results: We find that the distances of the investigated clusters in the CNC are equal to within ≤2% and yield very consistent most likely mean distances of $2.36^{+0.05}_{-0.05}$ kpc for the OB star sample and $2.34^{+0.05}_{-0.06}$ kpc for the sample of X-ray-selected young stars.
Conclusions: Our results show that the clusters in the CNC constitute a coherent star-forming region, in particular with regard to NGC 3324 and NGC 3293 at the northwestern periphery, which are (within ≤2%) at the same distance as the central Carina Nebula. For the foreground cluster vdBH 99, we find a mean distance of $441^{+2}_{-2}$ pc and an age of ≃60 Myr. We quantified the contamination of X-ray-selected samples of Carina Nebula stars based on members of this foreground cluster.
Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/660/A11
We report the robust detection of coherent, localized deviations from Keplerian rotation possibly associated with the presence of two giant planets embedded in the disk around HD 163296. The analysis is performed using the DISCMINER channel map modeling framework on 12CO J = 2-1 DSHARP data. Our technique retrieves not only the orbital radius but also the azimuth of the planets. One of the candidate planets, detected at R = 94 ± 6 au, ϕ = 50° ± 3° (P94), is near the center of one of the gaps in dust continuum emission and is consistent with a planet mass of 1 MJup. The other planet, located at R = 261 ± 4 au, ϕ = 57° ± 1° (P261), is in the region where a velocity kink was previously observed in 12CO channel maps. We also provide a simultaneous description of the height and temperature of the upper and lower emitting surfaces of the disk, and propose the line width as a solid observable to track gas substructure. Using azimuthally averaged line width profiles, we detect gas gaps at R = 38, 88, and 136 au, closely matching the location of their dust and kinematical counterparts. Furthermore, we observe strong azimuthal asymmetries in line widths around the gas gap at R = 88 au, possibly linked to turbulent motions driven by the P94 planet. Our results confirm not only that the DISCMINER is capable of finding localized, otherwise unseen velocity perturbations thanks to its robust statistical framework, but also that it is well suited for studies of the gas properties and vertical structure of protoplanetary disks.
The evaluation of the effective-range parameters for the $T_{cc}^+$ state in the LHCb model is examined. The finite width of the $D^*$ leads to a shift of the expansion point into the complex plane to match the analytic properties of the expanded amplitude. We perform an analytic continuation of the three-body scattering amplitude to the complex plane in the vicinity of the branch point and develop a robust procedure for computing the expansion coefficients. The results yield a nearly real scattering length and two contributions to the effective range which have not been accounted for before.
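For reference, the standard single-channel effective-range expansion being generalized here is
$$ k\cot\delta_0(k) = -\frac{1}{a_0} + \frac{1}{2}\,r_0\,k^2 + \mathcal{O}(k^4), $$
with scattering length $a_0$ and effective range $r_0$; the point emphasized above is that the finite $D^*$ width moves the expansion point $k^2 = 0$ to a complex value, so the coefficients must be extracted from an amplitude continued into the complex plane. (Schematic only; the normalization conventions follow standard scattering theory rather than necessarily those of the paper.)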
We study the connection of matter density and its tracers from the PDF perspective. One aspect of this connection is the conditional expectation value ⟨δtracer|δm⟩ when averaging both tracer and matter density over some scale. We present a new way to incorporate a Lagrangian bias expansion of this expectation value into standard frameworks for modelling the PDF of density fluctuations and counts-in-cells statistics. Using N-body simulations and mock galaxy catalogs, we confirm the accuracy of this expansion and compare it to the more commonly used Eulerian parametrization. For halos hosting typical luminous red galaxies, the Lagrangian model provides a significantly better description of ⟨δtracer|δm⟩ at second order in perturbations. A second aspect of the matter-tracer connection is shot noise, i.e. the scatter of tracer density around ⟨δtracer|δm⟩. It is well known that this noise can be significantly non-Poissonian, and we validate the performance of a more general, two-parameter shot-noise model for different tracers and simulations. Both parts of our analysis are meant to pave the way for forthcoming applications to survey data.
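For concreteness, the Eulerian parametrization referred to above expands the conditional mean at second order as
$$ \langle \delta_{\rm tracer} \,|\, \delta_m \rangle = b_1\,\delta_m + \frac{b_2}{2}\left(\delta_m^2 - \langle\delta_m^2\rangle\right) + \dots, $$
whereas the Lagrangian variant applies an analogous bias series to the initial (Lagrangian) density field before mapping it to the evolved (Eulerian) one. This is only a schematic reminder of the two parametrizations, not the paper's full PDF model.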
From Hubble Frontier Fields photometry and data from the Multi Unit Spectroscopic Explorer on the Very Large Telescope, we build the Fundamental Plane (FP) relation for the early-type galaxies of the cluster Abell S1063. We use this relation to develop an improved strong lensing model of the total mass distribution of the cluster, determining the velocity dispersions of all 222 cluster members included in the model from their measured structural parameters. Fixing the hot gas component from X-ray data, the mass density distributions of the diffuse dark matter haloes are optimised by comparing the observed and model-predicted positions of 55 multiple images of 20 background sources, distributed over the redshift range 0.73-6.11. We determine the uncertainties on the model parameters with Monte Carlo Markov chains. Compared to previous works, our model allows for the inclusion of a scatter on the relation between the total mass and the velocity dispersion of cluster members, which also shows a shallower slope. We notice a lower statistical uncertainty on the values of some parameters of the diffuse mass component of the cluster, such as its core radius. Thanks to a new estimate of the stellar mass of all members, we measure the cumulative projected mass profiles out to a radius of 350 kpc for all baryonic and dark matter components of the cluster. At the outermost radius, we find a baryon fraction of 0.147±0.002. We compare the sub-haloes as described by our model with recent hydrodynamical cosmological simulations. We find good agreement in terms of the stellar over total mass fraction. On the other hand, we report some discrepancies in terms of the maximum circular velocity, which is an indication of their compactness, and the sub-halo mass function in the central cluster regions.
Dew is a common form of water that deposits from saturated air onto colder surfaces. Although presumably common on primordial Earth, its potential involvement in early replication at the origin of life has not been investigated in detail. Here we report that it can drive the first stages of Darwinian evolution for DNA and RNA, first by periodically denaturing their structures at low temperatures and second by promoting the replication of long strands over short, faster-replicating ones. Our experiments mimicked a partially water-filled primordial rock pore in the probable CO2 atmosphere of Hadean Earth. Under heat flow, water continuously evaporated and recondensed as acidic dew droplets that created the humidity, salt and pH cycles that match many prebiotic replication chemistries. In low-salt and low-pH regimes, the strands melted at 30 K below the bulk melting temperature, whereas longer sequences preferentially accumulated at the droplet interface. Under enzymatic replication mimicking a sped-up RNA world, long sequences of more than 1,000 nucleotides emerged. The replication was biased by the melting conditions of the dew, and the initial short ATGC strands evolved into long AT-rich sequences with a repetitive and structured nucleotide composition.
The phenomenological success of inflation models with an axion and SU(2) gauge fields relies crucially on controlling the backreaction from particle production. Most previous studies only demanded that the backreaction terms in the equations of motion for the axion and gauge fields be small, on the basis of order-of-magnitude estimates. In this paper, we solve the equations of motion with backreaction for a wide range of parameters of the spectator axion-SU(2) model. First, we find a new slow-roll solution of the axion-SU(2) system in the absence of backreaction. Next, we obtain accurate conditions for stable slow-roll solutions in the presence of backreaction. Finally, we show that the amplitude of primordial gravitational waves sourced by the gauge fields can exceed that of quantum vacuum fluctuations in spacetime by a large factor, without backreaction spoiling the slow-roll dynamics. Imposing additional constraints on the power spectra of scalar and tensor modes measured at CMB scales, we find that the sourced contribution can be more than ten times the vacuum one. Imposing a further constraint from scalar modes non-linearly sourced by tensor modes, the two contributions can still be comparable.
In recent years there has been a rapidly growing body of experimental evidence for the existence of exotic, multiquark hadrons, i.e. mesons that contain additional quarks beyond the usual quark-antiquark pair, and baryons that consist of more than three quarks. In all cases with robust evidence they contain at least one heavy quark Q = c or b, the majority including two heavy quarks. Two key theoretical questions have been triggered by these discoveries: (a) how are quarks organized inside these multiquark states -- as compact objects with all quarks within one confinement volume, interacting via color forces, perhaps with an important role played by diquarks, or as deuteron-like hadronic molecules, bound by light-meson exchange? (b) what other multiquark states should we expect? The two questions are tightly intertwined. Each of the interpretations provides a natural explanation of parts of the data, but neither explains all of the data. It is quite possible that both kinds of structures appear in Nature. It may also be the case that certain states are superpositions of the compact and molecular configurations. This Whitepaper brings together contributions from many leading practitioners in the field, representing a wide spectrum of theoretical interpretations. We discuss the importance of future experimental and phenomenological work, which will lead to a better understanding of multiquark phenomena in QCD.
We present the second public data release (DR2) from the DECam Local Volume Exploration survey (DELVE). DELVE DR2 combines new DECam observations with archival DECam data from the Dark Energy Survey, the DECam Legacy Survey, and other DECam community programs. DELVE DR2 consists of ~160,000 exposures that cover >21,000 deg^2 of the high Galactic latitude (|b| > 10 deg) sky in four broadband optical/near-infrared filters (g, r, i, z). DELVE DR2 provides point-source and automatic aperture photometry for ~2.5 billion astronomical sources with a median 5σ point-source depth of g=24.3, r=23.9, i=23.5, and z=22.8 mag. A region of ~17,000 deg^2 has been imaged in all four filters, providing four-band photometric measurements for ~618 million astronomical sources. DELVE DR2 covers more than four times the area of the previous DELVE data release and contains roughly five times as many astronomical objects. DELVE DR2 is publicly available via the NOIRLab Astro Data Lab science platform.
The CMB lensing signal from cosmic voids and superclusters probes the growth of structure in the low-redshift cosmic web. In this analysis, we cross-correlated the Planck CMB lensing map with voids detected in the Dark Energy Survey Year 3 (Y3) data set ($\sim$5,000 deg$^{2}$), extending previous measurements using Y1 catalogues ($\sim$1,300 deg$^{2}$). Given the increased statistical power compared to Y1 data, we report a $6.6\sigma$ detection of negative CMB convergence ($\kappa$) imprints using approximately 3,600 voids detected from a redMaGiC luminous red galaxy sample. However, the measured signal is lower than expected from the MICE N-body simulation that is based on the $\Lambda$CDM model (parameters $\Omega_{\rm m} = 0.25$, $\sigma_8 = 0.8$). The discrepancy is associated mostly with the void centre region. Considering the full void lensing profile, we fit an amplitude $A_{\kappa}=\kappa_{\rm DES}/\kappa_{\rm MICE}$ to a simulation-based template with fixed shape and found a moderate $2\sigma$ deviation in the signal with $A_{\kappa}\approx0.79\pm0.12$. We also examined the WebSky simulation that is based on a Planck 2018 $\Lambda$CDM cosmology, but the results were even less consistent given the slightly higher matter density fluctuations than in MICE. We then identified superclusters in the DES and the MICE catalogues, and detected their imprints at the $8.4\sigma$ level; again with a lower-than-expected $A_{\kappa}=0.84\pm0.10$ amplitude. The combination of voids and superclusters yields a $10.3\sigma$ detection with an $A_{\kappa}=0.82\pm0.08$ constraint on the CMB lensing amplitude, thus the overall signal is $2.3\sigma$ weaker than expected from MICE.
We highlight the need for the development of comprehensive amplitude analysis methods to further our understanding of hadron spectroscopy. Reaction amplitudes constrained by first principles of $S$-matrix theory and by QCD phenomenology are needed to extract robust interpretations of the data from experiments and from lattice calculations.
Despite efforts over several decades, direct-detection experiments have not yet led to the discovery of the dark matter (DM) particle. This has led to increasing interest in alternatives to the Lambda CDM (LCDM) paradigm and alternative DM scenarios (including fuzzy DM, warm DM, self-interacting DM, etc.). In many of these scenarios, DM particles cannot be detected directly, and constraints on their properties can only be arrived at using astrophysical observations. The Dark Energy Spectroscopic Instrument (DESI) is currently one of the most powerful instruments for wide-field surveys. The synergy of DESI with ESA's Gaia satellite and future observing facilities will yield datasets of unprecedented size and coverage that will enable constraints on DM over a wide range of physical and mass scales and across redshifts. DESI will obtain spectra of the Lyman-alpha forest out to z~5 by detecting about 1 million QSO spectra that will put constraints on the clustering of the low-density intergalactic gas and DM halos at high redshift. DESI will obtain radial velocities of 10 million stars in the Milky Way (MW) and Local Group satellites, enabling us to constrain their global DM distributions, as well as the DM distribution on smaller scales. The paradigm of cosmological structure formation has been extensively tested with simulations. However, the majority of simulations to date have focused on collisionless CDM. Simulations with alternatives to CDM have recently been gaining ground but are still in their infancy. While there are numerous publicly available large-box and zoom-in simulations in the LCDM framework, there are no comparable publicly available WDM, SIDM, or FDM simulations. DOE support for a public simulation suite will enable a more cohesive community effort to compare observations from DESI (and other surveys) with numerical predictions and will greatly impact DM science.
This paper will review the origins, development, and examples of new versions of Micro-Pattern Gas Detectors. The goal for MPGD development was the creation of detectors that could cost-effectively cover large areas while offering excellent position and timing resolution, and the ability to operate at high incident particle rates. The early MPGD developments culminated in the formation of the RD51 collaboration which has become the critical organization for the promotion of MPGDs and all aspects of their production, characterization, simulation, and uses in an expanding array of experimental configurations. For the Snowmass 2021 study, a number of Letters of Interest were received that illustrate ongoing developments and expansion of the use of MPGDs. In this paper, we highlight high precision timing, high rate application, trigger capability expansion of the SRS readout system, and a structure designed for low ion backflow.
Flavor violating processes in the lepton sector have highly suppressed branching ratios in the standard model. Thus, observation of lepton flavor violation (LFV) constitutes a clear indication of physics beyond the standard model (BSM). We review new physics searches in the processes that violate the conservation of lepton (muon) flavor by two units with muonia and muonium–antimuonium oscillations.
We revisit the theory of background fields constructed on the BRST algebra of a spinning particle with $\mathcal{N}=4$ worldline supersymmetry, whose spectrum contains the graviton but no other fields. On a generic background, the closure of the BRST algebra implies the vacuum Einstein equations with a cosmological constant that is undetermined. On the other hand, in the "vacuum" background with no metric, the cohomology is given by a collection of free scalar and vector fields. Only certain combinations of linear excitations, necessarily involving a vector field, can be extended beyond the linear level, with the vector field inducing an Einstein metric.
We study the renormalization group of generic effective field theories that include gravity. We follow the on-shell amplitude approach, which provides a simple and efficient method to extract anomalous dimensions avoiding complications from gauge redundancies. As an invaluable tool we introduce a modified helicity $\tilde{h}$ under which gravitons carry one unit instead of two. With this modified helicity we easily explain old and uncover new non-renormalization theorems for theories including gravitons. We provide complete results for the one-loop gravitational renormalization of a generic minimally coupled gauge theory with scalars and fermions and all orders in MPl, as well as for the renormalization of dimension-six operators including at least one graviton, all up to four external particles.
Dark matter (DM) self-interactions have been proposed to solve problems on small length scales within the standard cold DM cosmology. Here, we investigate the effects of DM self-interactions in merging systems of galaxies and galaxy clusters with equal and unequal mass ratios. We perform N-body DM-only simulations of idealized setups to study the effects of DM self-interactions that are elastic and velocity-independent. We go beyond the commonly adopted assumption of large-angle (rare) DM scatterings, paying attention to the impact of small-angle (frequent) scatterings on astrophysical observables and related quantities. Specifically, we focus on DM-galaxy offsets, galaxy-galaxy distances, halo shapes, morphology, and the phase-space distribution. Moreover, we compare two methods to identify peaks: one based on the gravitational potential and one based on isodensity contours. We find that the results are sensitive to the peak finding method, which poses a challenge for the analysis of merging systems in simulations and observations, especially for minor mergers. Large DM-galaxy offsets can occur in minor mergers, especially with frequent self-interactions. The subhalo tends to dissolve quickly for these cases. While clusters in late merger phases lead to potentially large differences between rare and frequent scatterings, we believe that these differences are non-trivial to extract from observations. We therefore study the galaxy/star populations which remain distinct even after the DM haloes have coalesced. We find that these collisionless tracers behave differently for rare and frequent scatterings, potentially giving a handle to learn about the micro-physics of DM.
We consider and derive the gravitational soft theorem up to the sub-subleading power from the perspective of effective Lagrangians. The emergent soft gauge symmetries of the effective Lagrangian provide a transparent explanation of why soft graviton emission is universal to sub-subleading power, but gauge boson emission is not. They also suggest a physical interpretation of the form of the soft factors in terms of the charges related to the soft transformations and the kinematics of the multipole expansion. The derivation is done directly at Lagrangian level, resulting in an operatorial form of the soft theorems. In order to highlight the differences and similarities of the gauge-theory and gravitational soft theorems, we include an extensive discussion of soft gauge-boson emission from scalar, fermionic and vector matter at subleading power.
We present zELDA (redshift Estimator for Line profiles of Distant Lyman Alpha emitters), an open-source code to fit Lyman α (Ly α) line profiles. The main motivation is to provide the community with an easy-to-use and fast tool to analyse Ly α line profiles uniformly, in order to improve the understanding of Ly α emitting galaxies. zELDA is based on line profiles of the commonly used 'shell-model' pre-computed with the full Monte Carlo radiative transfer code LyaRT. Via interpolation between these spectra and the addition of noise, we assemble a suite of realistic Ly α spectra which we use to train a deep neural network. We show that the neural network can predict the model parameters to high accuracy (e.g. ≲ 0.34 dex in H I column density for R ~ 12 000) and thus allows for a significant speedup over existing fitting methods. As a proof of concept, we demonstrate the potential of zELDA by fitting 97 observed Ly α line profiles from the LASD data base. Comparing the fitted values with the measured systemic redshifts of these sources, we find that Ly α determines their rest-frame Ly α wavelength with a remarkably good accuracy of ~0.3 Å ($\sim 75\,\, {\rm km\, s}^{-1}$). Comparing the predicted outflow properties with the observed Ly α luminosities and equivalent widths, we find several possible trends. For example, we find an anticorrelation between the Ly α luminosity and the outflow neutral hydrogen column density, which might be explained by the radiative transfer process within galaxies.
Measurements of exoplanetary orbital obliquity angles for different classes of planets are an essential tool in testing various planet formation theories. Measurements for transiting planets on relatively large orbital periods (P > 10 d) present a rather difficult observational challenge. Here we present the obliquity measurement for the warm sub-Saturn planet HD 332231 b, which was discovered through Transiting Exoplanet Survey Satellite photometry of sectors 14 and 15, on a relatively large orbital period (18.7 d). Through a joint analysis of previously obtained spectroscopic data and our newly obtained CARMENES transit observations, we estimated the spin-orbit misalignment angle, λ, to be $-42.0^{+11.3}_{-10.6}$ deg, which challenges Laplacian ideals of planet formation. Through the addition of these new radial velocity data points obtained with CARMENES, we also derived marginal improvements on other orbital and bulk parameters for the planet, as compared to previously published values. We showed the robustness of the obliquity measurement through model comparison with an aligned orbit. Finally, we demonstrated the inability of the obtained data to probe any possible extended atmosphere of the planet, due to a lack of precision, and placed the atmosphere in the context of a parameter detection space.
SN 2020cxd is a representative of the family of low-energy, underluminous Type IIP supernovae (SNe), whose observations and analysis were recently reported by Yang et al. (2021). Here we re-evaluate the observational data for the diagnostic SN properties by employing the hydrodynamic explosion model of a 9 M⊙ red supergiant progenitor with an iron core and a pre-collapse mass of 8.75 M⊙. The explosion of the star was obtained by the neutrino-driven mechanism in a fully self-consistent simulation in three dimensions (3D). Multi-band light curves and photospheric velocities for the plateau phase are computed with the one-dimensional radiation-hydrodynamics code STELLA, applied to the spherically averaged 3D explosion model as well as to radial profiles along different directions of the 3D model. We find that the overall evolution of the bolometric light curve, the duration of the plateau phase, and the basic properties of the multi-band emission can be well reproduced by our SN model with its explosion energy of only 0.7 × 10^50 erg and an ejecta mass of 7.4 M⊙. These values are considerably lower than the previously reported numbers, but they are compatible with those needed to explain the fundamental observational properties of the prototype low-luminosity SN 2005cs. Because of the good compatibility of our photospheric velocities with line velocities determined for SN 2005cs, we conclude that the line velocities of SN 2020cxd are probably overestimated by up to a factor of about 3. The evolution of the line velocities of SN 2005cs compared to photospheric velocities in different explosion directions might point to intrinsic asymmetries in the SN ejecta.
Feynman diagrams constitute one of the essential ingredients for making precision predictions for collider experiments. Yet, while the simplest Feynman diagrams can be evaluated in terms of multiple polylogarithms -- whose properties as special functions are well understood -- more complex diagrams often involve integrals over complicated algebraic manifolds. Such diagrams already contribute at NNLO to the self-energy of the electron, $t \bar{t}$ production, $\gamma \gamma$ production, and Higgs decay, and appear at two loops in the planar limit of maximally supersymmetric Yang-Mills theory. This makes the study of these more complicated types of integrals of phenomenological as well as conceptual importance. In this white paper contribution to the Snowmass community planning exercise, we provide an overview of the state of research on Feynman diagrams that involve special functions beyond multiple polylogarithms, and highlight a number of research directions that constitute essential avenues for future investigation.
P-type point contact (PPC) HPGe detectors are a leading technology for rare event searches due to their excellent energy resolution, low thresholds, and multi-site event rejection capabilities. We have characterized a PPC detector's response to α particles incident on the sensitive passivated and p+ surfaces, a previously poorly-understood source of background. The detector studied is identical to those in the MAJORANA DEMONSTRATOR experiment, a search for neutrinoless double-beta decay (0νββ) in 76Ge. α decays on most of the passivated surface exhibit significant energy loss due to charge trapping, with waveforms exhibiting a delayed charge recovery (DCR) signature caused by the slow collection of a fraction of the trapped charge. The DCR is found to be complementary to existing methods of α identification, reliably identifying α background events on the passivated surface of the detector. We demonstrate effective rejection of all surface α events (to within statistical uncertainty) with a loss of only 0.2% of bulk events by combining the DCR discriminator with previously-used methods. The DCR discriminator has been used to reduce the background rate in the 0νββ region-of-interest window by an order of magnitude in the MAJORANA DEMONSTRATOR and will be used in the upcoming LEGEND-200 experiment.
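As a toy illustration of how a delayed-charge-recovery parameter can be constructed from a digitized waveform (the window lengths, thresholds, and function names below are hypothetical placeholders, not the tuned values of the MAJORANA DEMONSTRATOR analysis):

```python
import numpy as np

def dcr_slope(waveform, rise_frac=0.97, tail_samples=500):
    """Estimate the residual slope of the waveform tail after charge collection.

    A bulk event has an essentially flat tail (after pole-zero correction),
    whereas slow recovery of charge trapped on the passivated surface gives
    the tail a positive residual slope.  The fraction and window length here
    are illustrative placeholders only.
    """
    amp = waveform.max()
    start = np.argmax(waveform > rise_frac * amp)   # ~end of the rising edge
    tail = waveform[start:start + tail_samples]
    t = np.arange(tail.size)
    slope, _ = np.polyfit(t, tail, 1)               # linear fit to the tail
    return slope

# Events with dcr_slope(...) above a chosen cut value would be flagged as
# candidate passivated-surface alpha events.
```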
In order to solve the time-independent three-dimensional Schrödinger equation, one can transform the time-dependent Schrödinger equation to imaginary time and use a parallelized iterative method to obtain the full three-dimensional eigenstates and eigenvalues on very large lattices. In the case of the non-relativistic Schrödinger equation, there exists a publicly available code called quantumfdtd which implements this algorithm. In this paper, we (a) extend the quantumfdtd code to include the case of the relativistic Schrödinger equation and (b) add two optimized Fast Fourier Transform (FFT) based kinetic energy terms for non-relativistic cases. The new kinetic energy terms (two non-relativistic and one relativistic) are computed using the parallelized FFT algorithm provided by the FFTW3 library. The resulting quantumfdtd v3 code, which is publicly released with this paper, is backwards compatible with version 2, supporting explicit finite-difference schemes in addition to the new FFT-based schemes. Finally, we (c) extend the original code so that it supports arbitrary external file-based potentials and the option to project out distinct parity eigenstates from the solutions. Herein, we provide details of the quantumfdtd v3 implementation, comparisons and tests of the three new kinetic energy terms, and code documentation.
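To convey the idea of the FFT-based kinetic terms and the imaginary-time iteration described above, here is a minimal 1D NumPy sketch with a placeholder harmonic potential (the released quantumfdtd v3 is a parallelized 3D code built on FFTW, so this is illustrative only):

```python
import numpy as np

N, L, m, dtau = 256, 20.0, 1.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
p = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

V = 0.5 * x**2                                   # placeholder external potential

T_nonrel = p**2 / (2 * m)                        # non-relativistic kinetic energy
T_rel = np.sqrt(p**2 + m**2) - m                 # relativistic kinetic energy

def ground_state_energy(T, n_iter=20000):
    """Project onto the ground state by imaginary-time split-step evolution."""
    psi = np.exp(-x**2)
    expV = np.exp(-0.5 * V * dtau)
    expT = np.exp(-T * dtau)
    for _ in range(n_iter):
        psi = expV * psi
        psi = np.fft.ifft(expT * np.fft.fft(psi))   # kinetic step via FFT
        psi = expV * psi
        psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * (L / N))  # renormalize
    # energy estimate <psi|H|psi> with the kinetic operator applied in momentum space
    Tpsi = np.fft.ifft(T * np.fft.fft(psi))
    return np.real(np.sum(np.conj(psi) * (Tpsi + V * psi)) * (L / N))

print(ground_state_energy(T_nonrel), ground_state_energy(T_rel))
```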
We investigate the deformations and rigidity of boundary Heisenberg-like algebras. In particular, we focus on the Heisenberg and Heisenberg ⊕ witt algebras, which arise as symmetry algebras in three-dimensional gravity theories. As a result of the deformation procedure we find a large class of algebras. While some of these algebras are new, some of them have already been obtained as asymptotic and boundary symmetry algebras, supporting the idea that symmetry algebras associated to diverse boundary conditions and spacetime loci are algebraically interconnected through deformation of algebras. The deformation/contraction relationships between the new algebras are investigated. In addition, it is shown that the deformation procedure reaches new algebras inaccessible to the Sugawara construction. As a byproduct of our analysis, we obtain that Heisenberg ⊕ witt and the asymptotic symmetry algebra Weyl-bms3 are not connected via a single deformation but in a more subtle way.
We have obtained deep 1 and 3 mm spectral-line scans towards a candidate z ≳ 5 ALMA-identified AzTEC submillimetre galaxy (SMG) in the Subaru/XMM-Newton Deep Field (or UKIDSS UDS), ASXDF1100.053.1, using the NOrthern Extended Millimeter Array (NOEMA), aiming to obtain its spectroscopic redshift. ASXDF1100.053.1 is an unlensed optically dark millimetre-bright SMG with S1100 μm = 3.5 mJy and KAB > 25.7 (2σ), which was expected to lie at z = 5-7 based on its radio-submillimetre photometric redshift. Our NOEMA spectral scan detected line emission due to 12CO(J = 5-4) and (J = 6-5), providing a robust spectroscopic redshift, zCO = 5.2383 ± 0.0005. Energy-coupled spectral energy distribution modelling from optical to radio wavelengths indicates an infrared luminosity LIR = $8.3^{+1.5}_{-1.4} \times 10^{12}$ L⊙, a star formation rate SFR = $630^{+260}_{-380}$ M⊙ yr$^{-1}$, a dust mass Md = $4.4^{+0.4}_{-0.3} \times 10^{8}$ M⊙, a stellar mass Mstellar = $3.5^{+3.6}_{-1.4} \times 10^{11}$ M⊙, and a dust temperature Td = $37.4^{+2.3}_{-1.8}$ K. The CO luminosity allows us to estimate a gas mass Mgas = $(3.1 \pm 0.3) \times 10^{10}$ M⊙, suggesting a gas-to-dust mass ratio of around 70, fairly typical for z ∼ 2 SMGs. ASXDF1100.053.1 has ALMA continuum size Re = $1.0^{+0.2}_{-0.1}$ kpc, so its surface infrared luminosity density ΣIR is $1.2^{+0.1}_{-0.2} \times 10^{12}$ L⊙ kpc$^{-2}$. These physical properties indicate that ASXDF1100.053.1 is a massive dusty star-forming galaxy with an unusually compact starburst. It lies close to the star-forming main sequence at z ∼ 5, with low Mgas/Mstellar = 0.09, SFR/SFRMS(RSB) = 0.6, and a gas-depletion time τdep of ≈50 Myr, modulo assumptions about the stellar initial mass function in such objects. ASXDF1100.053.1 has extreme values of Mgas/Mstellar, RSB, and τdep compared to SMGs at z ∼ 2-4, and those of ASXDF1100.053.1 are the smallest among SMGs at z > 5. ASXDF1100.053.1 is likely a late-stage dusty starburst prior to passivisation. The number of z = 5.1-5.3 unlensed SMGs now suggests a number density dN/dz = 30.4 ± 19.0 deg$^{-2}$, barely consistent with the latest cosmological simulations.
The He I λ10833 Å triplet is a powerful tool for characterising the upper atmosphere of exoplanets and tracing possible mass loss. Here, we analysed one transit of GJ 1214 b observed with the CARMENES high-resolution spectrograph to study its atmosphere via transmission spectroscopy around the He I triplet. Although previous studies using lower-resolution instruments have reported non-detections of He I in the atmosphere of GJ 1214 b, we report here the first potential detection. We reconcile the conflicting results by arguing that previous transit observations did not present good opportunities for the detection of He I, due to telluric H2O absorption and OH emission contamination. We simulated those earlier observations and show evidence that the planetary signal was contaminated. From our single non-telluric-contaminated transit, we determined an excess absorption of $2.10^{+0.45}_{-0.50}$% (4.6σ) with a full width at half maximum (FWHM) of $1.30^{+0.30}_{-0.25}$ Å. The detection of He I is statistically significant at the 4.6σ level, but the repeatability of the detection could not be confirmed due to the availability of only one transit. By applying a hydrodynamical model and assuming an H/He composition of 98/2, we found that GJ 1214 b would undergo hydrodynamic escape in the photon-limited regime, losing its primary atmosphere with a mass-loss rate of $(1.5{-}18) \times 10^{10}$ g s$^{-1}$ and an outflow temperature in the range of 2900-4400 K. Further high-resolution follow-up observations of GJ 1214 b are needed to confirm and fully characterise the detection of an extended atmosphere surrounding the planet. If confirmed, this would be strong evidence that this planet has a primordial atmosphere accreted from the original planetary nebula. Despite previous intensive observations from space- and ground-based observatories, our He I excess absorption is the first tentative detection of a chemical species in the atmosphere of this benchmark sub-Neptune planet.
We construct "soft-collinear gravity", the effective field theory which describes the interaction of collinear and soft gravitons with matter (and themselves), to all orders in the soft-collinear power expansion. Despite the absence of collinear divergences in gravity at leading power, the construction exhibits remarkable similarities with soft-collinear effective theory of QCD (gauge fields). It reveals an emergent soft background gauge symmetry, which allows for a manifestly gauge-invariant representation of the interactions in terms of a soft covariant derivative, the soft Riemann tensor, and a covariant generalisation of the collinear light-cone gauge metric field. The gauge symmetries control both the unsuppressed collinear field components and the inherent inhomogeneity in λ of the invariant objects to all orders, resulting in a consistent expansion.
In this work, we consider the case of a strongly coupled dark/hidden sector, which extends the Standard Model (SM) by adding an additional non-Abelian gauge group. These extensions generally contain matter fields, much like the SM quarks, and gauge fields similar to the SM gluons. We focus on the exploration of such sectors where the dark particles are produced at the LHC through a portal and undergo rapid hadronization within the dark sector before decaying back, at least in part and potentially with sizeable lifetimes, to SM particles, giving a range of possibly spectacular signatures such as emerging or semi-visible jets. Other, non-QCD-like scenarios leading to soft unclustered energy patterns or glueballs are also discussed. After a review of the theory, existing benchmarks and constraints, this work addresses how to build consistent benchmarks from the underlying physical parameters and present new developments for the pythia Hidden Valley module, along with jet substructure studies. Finally, a series of improved search strategies is presented in order to pave the way for a better exploration of the dark showers at the LHC.
The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because training a neural network equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos: an example implementation following this paradigm of a fully differentiable high-energy physics workflow, capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. Doing this results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.
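A heavily simplified sketch of this paradigm, using automatic differentiation to tune a single analysis parameter against an expected-sensitivity proxy (the observable, soft selection, and figure of merit below are illustrative stand-ins, not the neos/pyhf pipeline):

```python
import jax
import jax.numpy as jnp

# Toy "analysis": one learnable cut parameter theta shapes a soft selection,
# and we ascend the gradient of an (approximate) expected significance.
sig = jax.random.normal(jax.random.PRNGKey(0), (5000,)) + 1.0   # signal-like observable
bkg = jax.random.normal(jax.random.PRNGKey(1), (5000,)) - 1.0   # background-like observable

def expected_significance(theta, temperature=0.1):
    # A sigmoid gives a differentiable ("soft") event selection instead of a hard cut.
    s = jnp.sum(jax.nn.sigmoid((sig - theta) / temperature)) * 0.01  # scaled yields
    b = jnp.sum(jax.nn.sigmoid((bkg - theta) / temperature)) * 0.01
    return s / jnp.sqrt(b + 1e-3)               # crude s/sqrt(b) figure of merit

grad_fn = jax.grad(expected_significance)
theta = 0.0
for _ in range(200):
    theta = theta + 0.05 * grad_fn(theta)       # gradient ascent on expected sensitivity

print(theta, expected_significance(theta))
```

In the real workflow the "free parameter" is a neural-network summary statistic and the figure of merit is the full profile-likelihood-based expected sensitivity, including systematic uncertainties, but the optimisation loop has the same shape.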
Solutions to the vacuum Einstein field equations with a cosmological constant, such as de Sitter space and anti-de Sitter space, are basic ingredients in different cosmological and theoretical developments. It is also well known that complex structures admit metrics of this type. The most famous example is the complex projective space endowed with the Fubini-Study metric. In this work, we perform a systematic study of Einstein complex geometries derived from a logarithmic Kähler potential. Depending on the different contributions to the argument of this logarithmic term, we distinguish among direct, inverted, and hybrid coordinates. They are directly related to the signature of the metric and determine the maximum domain of the complex space where the geometry can be defined.
We make the case for the systematic, reliable preservation of event-wise data, derived data products, and executable analysis code. This preservation enables the analyses' long-term future reuse, in order to maximise the scientific impact of publicly funded particle-physics experiments. We cover the needs of both the experimental and theoretical particle physics communities, and outline the goals and benefits that are uniquely enabled by analysis recasting and reinterpretation. We also discuss technical challenges and infrastructure needs, as well as sociological challenges and changes, and give summary recommendations to the particle-physics community.
The non-relativistic effective theory of dark matter-nucleon interactions depends on 28 coupling strengths for dark matter spin up to 1/2. Due to the vast parameter space of the effective theory, most experiments searching for dark matter interpret their results assuming that only one of the coupling strengths is non-zero. On the other hand, dark matter models generically lead in the non-relativistic limit to several interactions which interfere with one another; therefore, the published limits cannot be straightforwardly applied to model predictions. We present a method to determine a rigorous upper limit on the dark matter-nucleon interaction strength that includes all possible interferences among operators. We illustrate the method by deriving model-independent upper limits on the interaction strengths from the null search results of XENON1T, PICO-60, and IceCube. For some interactions, the limits on the coupling strengths are relaxed by more than one order of magnitude. We also present a method that allows one to combine the results from different experiments, thus exploiting the synergy between different targets in exploring the parameter space of dark matter-nucleon interactions.
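A schematic of why interference matters and how an interference-aware bound differs from single-operator limits (a toy two-operator example with made-up numbers; the paper's statistical treatment is more complete):

```python
import numpy as np

# Toy model: the expected number of events is a quadratic form in the
# coupling vector c, N(c) = c^T M c, with off-diagonal entries encoding
# interference between operators.  All numbers are illustrative only.
M = np.array([[4.0, -1.8],
              [-1.8, 1.0]])            # events per unit coupling-squared

N_limit = 10.0                         # hypothetical upper limit on the event count

# Single-operator analysis: bound each coupling assuming the other vanishes.
single_op_limits = np.sqrt(N_limit / np.diag(M))

# Interference-aware analysis: the weakest direction in coupling space is set
# by the smallest eigenvalue of M, so the bound on the overall interaction
# strength |c| that holds for *any* coupling direction is correspondingly weaker.
lam_min = np.linalg.eigvalsh(M).min()
conservative_limit = np.sqrt(N_limit / lam_min)

print(single_op_limits, conservative_limit)
```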
Recent experimental results in $B$ physics from Belle, BaBar and LHCb suggest new physics (NP) in the weak $b\to c$ charged-current and the $b\to s$ neutral-current processes. Here we focus on the charged-current case and specifically on the decay modes $B\to D^{*+}\ell^- \bar{\nu}$ with $\ell = e, \mu,$ and $\tau$. The world averages of the ratios $R_D$ and $R_{D^*}$ currently differ from the Standard Model (SM) by $3.4\sigma$, while $\Delta A_{FB} = A_{FB}(B\to D^{*} \mu\nu) - A_{FB} (B\to D^{*} e \nu)$ is found to be $4.1\sigma$ away from the SM prediction in an analysis of 2019 Belle data. These intriguing results suggest an urgent need for improved simulation and analysis techniques in $B\to D^{*+}\ell^- \bar{\nu}$ decays. Here we describe a Monte Carlo Event-generator tool based on EVTGEN developed to allow simulation of the NP signatures in $B\to D^*\ell^- \nu$, which arise due to the interference between the SM and NP amplitudes. As a demonstration of the proposed approach, we exhibit some examples of NP couplings that are consistent with current data and could explain the $\Delta A_{FB}$ anomaly in $B\to D^*\ell^- \nu$ while remaining consistent with other constraints. We show that the $\Delta$-type observables such as $\Delta A_{FB}$ and $\Delta S_5$ eliminate most QCD uncertainties from form factors and allow for clean measurements of NP. We introduce correlated observables that improve the sensitivity to NP. We discuss prospects for improved observables sensitive to NP couplings with the expected 50 ab$^{-1}$ of Belle II data, which seems to be ideally suited for this class of measurements.
We stress the importance of precise measurements of rare decays $K^+\rightarrow\pi^+\nu\bar\nu$, $K_L\rightarrow\pi^0\nu\bar\nu$, $K_{L,S}\to\mu^+\mu^-$ and $K_{L,S}\to\pi^0\ell^+\ell^-$ for the search of new physics (NP). This includes both branching ratios and the distributions in $q^2$, the invariant mass-squared of the neutrino system in the case of $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$ and of the $\ell^+\ell^-$ system in the case of the remaining decays. In particular the correlations between these observables and their correlations with the ratio $\varepsilon'/\varepsilon$ in $K_L\to\pi\pi$ decays, the CP-violating parameter $\varepsilon_K$ and the $K^0-\bar K^0$ mass difference $\Delta M_K$, should help to disentangle the nature of possible NP. We stress the strong sensitivity of all observables with the exception of $\Delta M_K$ to the CKM parameter $|V_{cb}|$ and list a number of $|V_{cb}|$-independent ratios within the SM which exhibit rather different dependences on the angles $\beta$ and $\gamma$ of the unitarity triangle. The particular role of these decays in probing very short distance scales far beyond the ones explored at the LHC is emphasized. In this context the role of the Standard Model Effective Field Theory (SMEFT) is very important. We also address briefly the issue of the footprints of Majorana neutrinos in $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$.
We search for the signature of parity-violating physics in the cosmic microwave background, called cosmic birefringence, using the Planck data release 4. We initially find a birefringence angle of β =0.30 °±0.11 ° (68% C.L.) for nearly full-sky data. The values of β decrease as we enlarge the Galactic mask, which can be interpreted as the effect of polarized foreground emission. Two independent ways to model this effect are used to mitigate the systematic impact on β for different sky fractions. We choose not to assign cosmological significance to the measured value of β until we improve our knowledge of the foreground polarization.
Cross-correlations of galaxy positions and galaxy shears with maps of gravitational lensing of the cosmic microwave background (CMB) are sensitive to the distribution of large-scale structure in the Universe. Such cross-correlations are also expected to be immune to some of the systematic effects that complicate correlation measurements internal to galaxy surveys. We present measurements and modeling of the cross-correlations between galaxy positions and galaxy lensing measured in the first three years of data from the Dark Energy Survey with CMB lensing maps derived from a combination of data from the 2500 deg$^2$ SPT-SZ survey conducted with the South Pole Telescope and full-sky data from the Planck satellite. The CMB lensing maps used in this analysis have been constructed in a way that minimizes biases from the thermal Sunyaev Zel'dovich effect, making them well suited for cross-correlation studies. The total signal-to-noise of the cross-correlation measurements is 23.9 (25.7) when using a choice of angular scales optimized for a linear (nonlinear) galaxy bias model. We use the cross-correlation measurements to obtain constraints on cosmological parameters. For our fiducial galaxy sample, which consists of four bins of magnitude-selected galaxies, we find constraints of $\Omega_{m} = 0.272^{+0.032}_{-0.052}$ and $S_{8} \equiv \sigma_8 \sqrt{\Omega_{m}/0.3}= 0.736^{+0.032}_{-0.028}$ ($\Omega_{m} = 0.245^{+0.026}_{-0.044}$ and $S_{8} = 0.734^{+0.035}_{-0.028}$) when assuming linear (nonlinear) galaxy bias in our modeling. Considering only the cross-correlation of galaxy shear with CMB lensing, we find $\Omega_{m} = 0.270^{+0.043}_{-0.061}$ and $S_{8} = 0.740^{+0.034}_{-0.029}$. Our constraints on $S_8$ are consistent with recent cosmic shear measurements, but lower than the values preferred by primary CMB measurements from Planck.
The building of planetary systems is controlled by the gas and dust dynamics of protoplanetary disks. While the gas is simultaneously accreted onto the central star and dissipated away by winds, dust grains aggregate and collapse to form planetesimals and eventually planets. This dust and gas dynamics involves instabilities, turbulence and complex non-linear interactions which ultimately control the observational appearance and the secular evolution of these disks. This chapter is dedicated to the most recent developments in our understanding of the dynamics of gaseous and dusty disks, covering hydrodynamic and magnetohydrodynamic turbulence, gas-dust instabilities, dust clumping and disk winds. We show how these physical processes have been tested from observations and highlight standing questions that should be addressed in the future.
The design of optimal test statistics is a key task in frequentist statistics, and for a number of scenarios optimal test statistics, such as the profile-likelihood ratio, are known. By turning this argument around, we can find the profile likelihood ratio even in likelihood-free cases, where only samples from a simulator are available, by optimizing a test statistic within those scenarios. We propose a likelihood-free training algorithm that produces test statistics equivalent to the profile likelihood ratio in cases where the latter is known to be optimal.
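The following is a minimal sketch of the general idea (the classic likelihood-ratio trick, not necessarily the specific training algorithm proposed here): a classifier trained to separate simulator samples generated at two parameter points yields, through its calibrated output, a monotonic function of the likelihood ratio that can serve as a likelihood-free test statistic. The toy simulator and parameter values below are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulator(theta, n):
    """Toy forward model standing in for an intractable simulator."""
    return rng.normal(loc=theta, scale=1.0 + 0.1 * theta, size=(n, 1))

theta0, theta1 = 0.0, 1.0                       # null and alternative hypotheses
x0, x1 = simulator(theta0, 50_000), simulator(theta1, 50_000)
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

clf = LogisticRegression().fit(X, y)

def test_statistic(x):
    # With balanced training classes, p(theta1|x)/p(theta0|x) equals the
    # likelihood ratio p(x|theta1)/p(x|theta0), so the classifier output
    # defines a likelihood-free test statistic.
    p = clf.predict_proba(np.atleast_2d(x).T)[:, 1]
    return p / (1.0 - p)

print(test_statistic(np.array([0.2, 1.4])))
```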
Context. X-ray- and extreme-ultraviolet- (XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T Tauri stars may strongly impact disk evolution, affecting both gas and dust distributions. Small dust grains in the disk are entrained in the outflow and may produce a detectable signal. In this work, we investigate the possibility of detecting dusty outflows from transition disks with an inner cavity.
Aims: We compute dust densities for the wind regions of XEUV-irradiated transition disks and determine whether they can be observed at wavelengths 0.7 ≲ λobs [μm] ≲ 1.8 with current instrumentation.
Methods: We simulated dust trajectories on top of 2D hydrodynamical gas models of two transition disks with inner holes of 20 and 30 AU, irradiated by both X-ray and EUV spectra from a central T Tauri star. The trajectories and two different settling prescriptions for the dust distribution in the underlying disk were used to calculate wind density maps for individual grain sizes. Finally, the resulting dust densities were converted to synthetic observations in scattered and polarised light.
Results: For an XEUV-driven outflow around an M* = 0.7 M⊙ T Tauri star with LX = 2 × 10^30 erg s^−1, we find dust mass-loss rates Ṁdust ≲ 2.0 × 10^−3 Ṁgas, and if we invoke vertical settling, the outflow is quite collimated. The synthesised images exhibit a distinct chimney-like structure. The relative intensity of the chimneys is low, but their detection may still be feasible with current instrumentation under optimal conditions.
Conclusions: Our results motivate observational campaigns aimed at the detection of dusty photoevaporative winds in transition disks using JWST NIRCam and SPHERE IRDIS.
We report the discovery of GJ 3929 b, a hot Earth-sized planet orbiting the nearby M3.5 V dwarf star GJ 3929 (G 180-18, TOI-2013). Joint modelling of photometric observations from TESS sectors 24 and 25 together with 73 spectroscopic observations from CARMENES and follow-up transit observations from SAINT-EX, LCOGT, and OSN yields a planet radius of Rb = 1.150 ± 0.040 R⊕, a mass of Mb = 1.21 ± 0.42 M⊕, and an orbital period of Pb = 2.6162745 ± 0.0000030 d. The resulting density of ρb = 4.4 ± 1.6 g cm−3 is compatible with the Earth's mean density of about 5.5 g cm−3. Due to the apparent brightness of the host star (J = 8.7 mag) and its small size, GJ 3929 b is a promising target for atmospheric characterisation with the JWST. Additionally, the radial velocity data show evidence for another planet candidate with Pc = 14.303 ± 0.035 d, which is likely unrelated to the stellar rotation period of Prot = 122 ± 13 d that we determined from archival HATNet and ASAS-SN photometry combined with newly obtained TJO data.
RV data and stellar activity indices are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/659/A17
Joint analyses of cross-correlations between measurements of galaxy positions, galaxy lensing, and lensing of the cosmic microwave background (CMB) offer powerful constraints on the large-scale structure of the Universe. In a forthcoming analysis, we will present cosmological constraints from the analysis of such cross-correlations measured using Year 3 data from the Dark Energy Survey (DES), and CMB data from the South Pole Telescope (SPT) and Planck. Here we present two key ingredients of this analysis: (1) an improved CMB lensing map in the SPT-SZ survey footprint, and (2) the analysis methodology that will be used to extract cosmological information from the cross-correlation measurements. Relative to previous lensing maps made from the same CMB observations, we have implemented techniques to remove contamination from the thermal Sunyaev Zel'dovich effect, enabling the extraction of cosmological information from smaller angular scales of the cross-correlation measurements than in previous analyses with DES Year 1 data. We describe our model for the cross-correlations between these maps and DES data, and validate our modeling choices to demonstrate the robustness of our analysis. We then forecast the expected cosmological constraints from the galaxy survey-CMB lensing auto and cross-correlations. We find that the galaxy-CMB lensing and galaxy shear-CMB lensing correlations will on their own provide a constraint on $S_8=\sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ at the few percent level, providing a powerful consistency check for the DES-only constraints. We explore scenarios where external priors on shear calibration are removed, finding that the joint analysis of CMB lensing cross-correlations can provide constraints on the shear calibration amplitude at the 5 to 10% level.
In this white paper for the Snowmass process, we discuss the prospects of probing new physics explanations of the persistent rare $B$ decay anomalies with a muon collider. If the anomalies are indirect signs of heavy new physics, non-standard rates for $\mu^+ \mu^- \to b s$ production should be observed with high significance at a muon collider with a center-of-mass energy of $\sqrt{s} = 10$ TeV. The forward-backward asymmetry of the $b$-jet provides diagnostics of the chirality structure of the new physics couplings. In the absence of a signal, $\mu^+ \mu^- \to b s$ can indirectly probe new physics scales as large as $86$ TeV. Beam polarization would have an important impact on the new physics sensitivity.
Mini-EUSO is a telescope launched on board the International Space Station in 2019 and currently located in the Russian section of the station. The main scientific objectives of the mission are the search for nuclearites and Strange Quark Matter, the study of atmospheric phenomena such as Transient Luminous Events, meteors and meteoroids, the observation of sea bioluminescence, and the observation of artificial satellites and man-made space debris. It is also capable of observing Extensive Air Showers generated by Ultra-High Energy Cosmic Rays with an energy above 10$^{21}$ eV and of detecting artificial showers generated with lasers from the ground. Mini-EUSO can map the night-time Earth in the UV range (290 - 430 nm), with a spatial resolution of about 6.3 km and a temporal resolution of 2.5 $\mu$s, observing our planet through a nadir-facing UV-transparent window in the Russian Zvezda module. The instrument, launched on 2019/08/22 from the Baikonur cosmodrome, is based on an optical system employing two Fresnel lenses and a focal surface composed of 36 Multi-Anode Photomultiplier tubes, 64 channels each, for a total of 2304 channels with single photon counting sensitivity and an overall field of view of 44$^{\circ}$. Mini-EUSO also contains two ancillary cameras to complement measurements in the near infrared and visible ranges. In this paper we describe the detector and present the various phenomena observed in the first year of operation.
Mini-EUSO is a detector observing the Earth in the ultraviolet band from the International Space Station through a nadir-facing window, transparent to UV radiation, in the Russian Zvezda module. The Mini-EUSO main detector consists of an optical system with two Fresnel lenses and a focal surface composed of an array of 36 Hamamatsu Multi-Anode Photomultiplier tubes, for a total of 2304 pixels, with single photon counting sensitivity. The telescope also contains two ancillary cameras, in the near infrared and visible ranges, to complement measurements in these bandwidths. The instrument has a field of view of 44 degrees, a spatial resolution of about 6.3 km on the Earth surface and of about 4.7 km on the ionosphere. The telescope detects UV emissions of cosmic, atmospheric and terrestrial origin on different time scales, from a few microseconds upwards. On the fastest timescale of 2.5 microseconds, Mini-EUSO is able to observe atmospheric phenomena such as Transient Luminous Events and in particular ELVES, which take place when an electromagnetic wave generated by intra-cloud lightning interacts with the ionosphere, ionizing it and producing apparently superluminal expanding rings several hundred km across and lasting about 100 microseconds. These highly energetic fast events have also been observed in conjunction with Terrestrial Gamma-Ray Flashes, and therefore a detailed study of their characteristics (speed, radius, energy, etc.) is of crucial importance for the understanding of these phenomena. In this paper we present the observational capabilities of ELVE detection by Mini-EUSO and specifically the reconstruction and study of ELVE characteristics.
We present cosmological constraints from the analysis of angular power spectra of cosmic shear maps based on data from the first three years of observations by the Dark Energy Survey (DES Y3). Our measurements are based on the pseudo-$C_\ell$ method and offer a view complementary to that of the two-point correlation functions in real space, as the two estimators are known to compress and select Gaussian information in different ways, due to scale cuts. They may also be differently affected by systematic effects and theoretical uncertainties, such as baryons and intrinsic alignments (IA), making this analysis an important cross-check. In the context of $\Lambda$CDM, and using the same fiducial model as in the DES Y3 real space analysis, we find ${S_8 \equiv \sigma_8 \sqrt{\Omega_{\rm m}/0.3} = 0.793^{+0.038}_{-0.025}}$, which further improves to ${S_8 = 0.784\pm 0.026 }$ when including shear ratios. This constraint is within expected statistical fluctuations from the real space analysis, and in agreement with DES Y3 analyses of non-Gaussian statistics, but favors a slightly higher value of $S_8$, which reduces the tension with the Planck cosmic microwave background 2018 results from $2.3\sigma$ in the real space analysis to $1.5\sigma$ in this work. We explore less conservative IA models than the one adopted in our fiducial analysis, finding no clear preference for a more complex model. We also include small scales, using an increased Fourier mode cut-off up to $k_{\rm max}=5\,h\,{\rm Mpc}^{-1}$, which allows us to constrain baryonic feedback while leaving cosmological constraints essentially unchanged. Finally, we present an approximate reconstruction of the linear matter power spectrum at present time, which is found to be about 20% lower than predicted by Planck 2018, as reflected by the $1.5\sigma$ lower $S_8$ value.
The field of UHECRs (Ultra-High Energy Cosmic Rays) and the understanding of particle acceleration in the cosmos, as a key ingredient to the behaviour of the most powerful sources in the universe, is of utmost importance for astroparticle physics as well as for fundamental physics and will improve our general understanding of the universe. The current main goals are to identify the sources of UHECRs and their composition; for this, increased statistics is required. A space-based detector for UHECR research has the advantage of a very large exposure and a uniform coverage of the celestial sphere. The aim of the JEM-EUSO program is to bring the study of UHECRs to space. The principle of observation is based on the detection of UV light emitted by isotropic fluorescence of atmospheric nitrogen excited by the Extensive Air Showers (EAS) in the Earth's atmosphere and forward-beamed Cherenkov radiation reflected from the Earth's surface or dense cloud tops. In addition to the prime objective of UHECR studies, JEM-EUSO will carry out several secondary studies thanks to the instruments' unique capacity of detecting very weak UV signals with extreme time resolution of around 1 microsecond: meteors, Transient Luminous Events (TLE), bioluminescence, maps of human-generated UV light, searches for Strange Quark Matter (SQM) and high-energy neutrinos, and more. The JEM-EUSO program includes several missions from ground (EUSO-TA), from stratospheric balloons (EUSO-Balloon, EUSO-SPB1, EUSO-SPB2), and from space (TUS, Mini-EUSO) employing fluorescence detectors to demonstrate the UHECR observation from space and prepare the large size missions K-EUSO and POEMMA. A review of the current status of the program, the key results obtained so far by the different projects, and the perspectives for the near future are presented.
We present new constraints on spectator axion-U(1) gauge field interactions during inflation using the latest Planck (PR4) and BICEP/Keck 2018 data releases. This model can source tensor perturbations from amplified gauge field fluctuations, driven by an axion rolling for a few e-folds during inflation. The gravitational waves sourced in this way have a strongly scale-dependent (and chiral) spectrum, with potentially visible contributions to large/intermediate-scale B-modes of the CMB. We first derive theoretical bounds on the model imposing validity of the perturbative regime and negligible backreaction of the gauge field on the background dynamics. Then, we determine bounds from current CMB observations, adopting a frequentist profile likelihood approach. We study the behaviour of constraints for typical choices of the model's parameters, analyzing the impact of different dataset combinations. We find that observational bounds are competitive with theoretical ones and together they exclude a significant portion of the model's parameter space. We argue that the parameter space still remains large and interesting for future CMB experiments targeting large/intermediate-scale B-modes.
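As an illustration of the statistical approach mentioned above, the sketch below profiles a toy Gaussian likelihood over a single nuisance parameter; the likelihood, parameter names, and grid are placeholders and not the CMB likelihood or the model parameters used in this analysis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, data):
    mu, sigma = params            # mu = parameter of interest, sigma = nuisance
    return 0.5 * np.sum(((data - mu) / sigma) ** 2 + 2.0 * np.log(sigma))

data = np.random.default_rng(2).normal(1.0, 2.0, 200)

# Profile: at each value of the parameter of interest, maximize the likelihood
# (minimize -ln L) over the nuisance parameter.
mu_grid = np.linspace(0.0, 2.0, 41)
profile = np.array([
    minimize(lambda s: neg_log_like((mu, s[0]), data),
             x0=[2.0], bounds=[(1e-3, None)]).fun
    for mu in mu_grid
])
profile -= profile.min()          # Delta(-ln L); the 68% interval is where Delta <= 0.5
```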
Mini-EUSO is a small orbital telescope with a field of view of $44^{\circ}\times 44^{\circ}$, observing the night-time Earth mostly in the 320-420 nm band. Its time resolution, spanning from microseconds (triggered) to milliseconds (untriggered), and a ground footprint of more than $300\times 300$ km have already allowed it to register thousands of meteors. Such detections make the telescope a suitable tool in the search for hypothetical heavy compact objects, which would leave trails of light in the atmosphere due to their high density and speed. The most prominent example is nuclearites -- hypothetical lumps of strange quark matter that could be more stable and denser than nuclear matter. In this paper, we show potential limits on the flux of nuclearites after collecting 42 hours of observational data.
The Fluorescence Telescope is one of the two telescopes on board the Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2). EUSO-SPB2 is an ultra-long-duration balloon mission that aims at the detection of Ultra High Energy Cosmic Rays (UHECR) via the fluorescence technique (using a Fluorescence Telescope) and of Ultra High Energy (UHE) neutrinos via Cherenkov emission (using a Cherenkov Telescope). The mission is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). The Fluorescence Telescope is a second-generation instrument preceded by the telescopes flown on the EUSO-Balloon and EUSO-SPB1 missions. It features Schmidt optics and has a 1-meter diameter aperture. The focal surface of the telescope is equipped with a 6912-pixel Multi-Anode Photomultiplier (MAPMT) camera covering a 37.4 x 11.4 degree Field of Regard. Such a large Field of Regard, together with a target flight duration of up to 100 days, would allow, for the first time from suborbital altitudes, the detection of UHECR fluorescence tracks. This contribution will provide an overview of the instrument including the current status of the telescope development.
The Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2) is under development and will prototype instrumentation for future satellite-based missions, including the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). EUSO-SPB2 will consist of two telescopes. The first is a Cherenkov telescope (CT) being developed to identify and estimate the background sources for future below-the-limb very high energy (E>10 PeV) astrophysical neutrino observations, as well as above-the-limb cosmic ray induced signals (E>1 PeV). The second is a fluorescence telescope (FT) being developed for detection of Ultra High Energy Cosmic Rays (UHECRs). In preparation for the expected launch in 2023, extensive simulations tuned by preliminary laboratory measurements have been performed to understand the FT capabilities. The energy threshold has been estimated at $10^{18.2}$ eV, resulting in a maximum detection rate at $10^{18.6}$ eV when taking into account the shape of the UHECR spectrum. In addition, onboard software has been developed based on the simulations as well as experience with previous EUSO missions. This includes a level 1 trigger to be run on the computationally limited flight hardware, as well as a deep-learning-based prioritization algorithm in order to accommodate the balloon's telemetry budget. These techniques could also be used later for future, space-based missions.
We present the status of the development of a Cherenkov telescope to be flown on a long-duration balloon flight, the Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2). EUSO-SPB2 is an approved NASA balloon mission that is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA), a candidate for an Astrophysics probe-class mission. The purpose of the Cherenkov telescope on board EUSO-SPB2 is to classify known and unknown sources of backgrounds for future space-based neutrino detectors. Furthermore, we will use the Earth-skimming technique to search for Very-High-Energy (VHE) tau neutrinos below the limb (E > 10 PeV) and observe air showers from cosmic rays above the limb. The 0.785 m² Cherenkov telescope is equipped with a 512-pixel SiPM camera covering a 12.8° x 6.4° (Horizontal x Vertical) field of view. The camera signals are digitized with a 100 MS/s readout system. In this paper, we discuss the status of the telescope development, the camera integration, and simulation studies of the camera response.
Black holes are considered to be exceptional due to their time evolution and information processing. However, it was proposed recently that these properties are generic for objects, the so-called saturons, that attain the maximal entropy permitted by unitarity. In the present paper, we verify this connection within a renormalizable SU(N) invariant theory. We show that the spectrum of the theory contains a tower of bubbles representing bound states of SU(N) Goldstones. Despite the absence of gravity, a saturated bound state exhibits a striking correspondence with a black hole: Its entropy is given by the Bekenstein-Hawking formula; semi-classically, the bubble evaporates at a thermal rate with a temperature equal to its inverse radius; the information retrieval time is equal to Page's time. The correspondence goes through a trans-theoretic entity of Poincaré Goldstone. The black hole/saturon correspondence has important implications for black hole physics, both fundamental and observational.
The Extreme Universe Space Observatory - Super Pressure Balloon (EUSO-SPB2) mission will fly two custom telescopes that feature Schmidt optics to measure the Čerenkov and fluorescence emission of extensive air showers from cosmic rays at the PeV and EeV scales, and to search for tau neutrinos. Both telescopes have 1-meter diameter apertures and UV/UV-visible sensitivity. The Čerenkov telescope uses a bifocal mirror segment alignment to distinguish between a direct cosmic ray hitting the camera and Čerenkov light arriving from outside the telescope. Telescope integration and laboratory calibration will be performed in Colorado. To estimate the point spread function and efficiency of the integrated telescopes, a test beam system that delivers a 1-meter diameter parallel beam of light is being fabricated. End-to-end tests of the fully integrated instruments will be carried out in a field campaign at dark sites in the Utah desert using cosmic rays, stars, and artificial light sources. Laser tracks have long been used to characterize the performance of fluorescence detectors in the field. For EUSO-SPB2, an improvement in the method that includes a correction for aerosol attenuation is anticipated by using a bi-dynamic Lidar configuration in which both the laser and the telescope are steerable. We plan to conduct these field tests in Fall 2021 and Spring 2022 to accommodate the scheduled launch of EUSO-SPB2 in 2023 from Wanaka, New Zealand.
The Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2) is a second-generation stratospheric balloon instrument for the detection of Ultra High Energy Cosmic Rays (UHECRs, E > 1 EeV) via the fluorescence technique and of Very High Energy (VHE, E > 10 PeV) neutrinos via Cherenkov emission. EUSO-SPB2 is a pathfinder mission for instruments like the proposed Probe Of Extreme Multi-Messenger Astrophysics (POEMMA). The purpose of such a space-based observatory is to measure UHECRs and UHE neutrinos with high statistics and uniform exposure. EUSO-SPB2 is designed with two Schmidt telescopes, each optimized for their respective observational goals. The Fluorescence Telescope looks at the nadir to measure the fluorescence emission from UHECR-induced extensive air showers (EAS), while the Cherenkov Telescope is optimized for fast signals ($\sim$10 ns) and points near the Earth's limb. This allows for the measurement of Cherenkov light from EAS caused by Earth-skimming VHE neutrinos if pointed slightly below the limb, or from UHECRs if observing slightly above. The expected launch date of EUSO-SPB2 is Spring 2023 from Wanaka, NZ, with a target duration of up to 100 days. Such a flight would provide thousands of VHECR Cherenkov signals in addition to tens of UHECR fluorescence tracks. Neither of these kinds of events has been observed from orbital or suborbital altitudes before, making EUSO-SPB2 a crucial step towards a space-based instrument. It will also enhance the understanding of potential background signals for both detection techniques. This contribution will provide a short overview of the detector and the current status of the mission as well as its scientific goals.
It is commonly expected that a friction force on the bubble wall in a first-order phase transition can only arise from a departure from thermal equilibrium in the plasma. Recently however, it was argued that an effective friction, scaling as $\gamma_w^2$ (with $\gamma_w$ being the Lorentz factor for the bubble wall velocity), persists in local equilibrium. This was derived assuming constant plasma temperature and velocity throughout the wall. On the other hand, it is known that, at the leading order in derivatives, the plasma in local equilibrium only contributes a correction to the zero-temperature potential in the equation of motion of the background scalar field. For a constant plasma temperature, the equation of motion is then completely analogous to the vacuum case, the only change being a modified potential, and thus no friction should appear. We resolve these apparent contradictions in the calculations and their interpretation and show that the recently proposed effective friction in local equilibrium originates from inhomogeneous temperature distributions, such that the $\gamma_w^2$-scaling of the effective force is violated. Further, we propose a new matching condition for the hydrodynamic quantities in the plasma valid in local equilibrium and tied to local entropy conservation. With this added constraint, bubble velocities in local equilibrium can be determined once the parameters in the equation of state are fixed, where we use the bag equation in order to illustrate this point. We find that there is a critical value of the transition strength $\alpha_{\rm crit}$ such that bubble walls run away for $\alpha > \alpha_{\rm crit}$.
The characteristics of the cosmic microwave background provide circumstantial evidence that the hot radiation-dominated epoch in the early universe was preceded by a period of inflationary expansion. Here, we show how a measurement of the stochastic gravitational wave background can reveal the cosmic history and the physical conditions during inflation, subsequent pre- and re-heating, and the beginning of the hot big bang era. This is exemplified with a particularly well-motivated and predictive minimal extension of the Standard Model which is known to provide a complete model for particle physics -- up to the Planck scale, and for cosmology -- back to inflation.
Planet-forming disks are not isolated systems. Their interaction with the surrounding medium affects their mass budget and chemical content. In the context of the ALMA-DOT program, we obtained high-resolution maps of assorted lines from six disks that are still partly embedded in their natal envelope. In this work, we examine the SO and SO2 emission that is detected from four sources: DG Tau, HL Tau, IRAS 04302+2247, and T Tau. The comparison with CO, HCO+, and CS maps reveals that the SO and SO2 emission originates at the intersection between extended streamers and the planet-forming disk. Two targets, DG Tau and HL Tau, offer clear cases of inflowing material inducing an accretion shock on the disk material. The measured rotational temperatures and radial velocities are consistent with this view. In contrast to younger Class 0 sources, these shocks are confined to the specific disk region impacted by the streamer. In HL Tau, the known accreting streamer induces a shock in the disk outskirts, and the released SO and SO2 molecules spiral toward the star in a few hundred years. These results suggest that shocks induced by late accreting material may be common in the disks of young star-forming regions with possible consequences for the chemical composition and mass content of the disk. They also highlight the importance of SO and SO2 line observations in probing accretion shocks from a larger sample.
The reduced datacubes are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A104
Numerical general relativistic radiative magnetohydrodynamic simulations of accretion discs around a stellar-mass black hole with a luminosity above 0.5 of the Eddington value reveal their stratified, elevated vertical structure. We refer to these thermally stable numerical solutions as puffy discs. Above a dense and geometrically thin core of dimensionless thickness h/r ∼ 0.1, crudely resembling a classic thin accretion disc, a puffed-up, geometrically thick layer of lower density is formed. This puffy layer corresponds to h/r ∼ 1.0, with a very limited dependence of the dimensionless thickness on the mass accretion rate. We discuss the observational properties of puffy discs, particularly the geometrical obscuration of the inner disc by the elevated puffy region at higher observing inclinations, and the collimation of the radiation along the accretion disc spin axis, which may explain the apparent super-Eddington luminosity of some X-ray objects. We also present synthetic spectra of puffy discs, and show that they are qualitatively similar to those of a Comptonized thin disc. We demonstrate that the existing xspec spectral fitting models provide good fits to synthetic observations of puffy discs, but cannot correctly recover the input black hole spin. The puffy region remains optically thick to scattering; its spectral properties roughly resemble those of a warm corona sandwiching the disc core. We suggest that puffy discs may correspond to X-ray binary systems of luminosities above 0.3 of the Eddington luminosity in the intermediate spectral states.
Current and future cosmological analyses with Type Ia Supernovae (SNe Ia) face three critical challenges: i) measuring redshifts from the supernova or its host galaxy; ii) classifying SNe without spectra; and iii) accounting for correlations between the properties of SNe Ia and their host galaxies. We present here a novel approach that addresses each challenge. In the context of the Dark Energy Survey (DES), we analyze a SNIa sample with host galaxies in the redMaGiC galaxy catalog, a selection of Luminous Red Galaxies. Photo-$z$ estimates for these galaxies are expected to be accurate to $\sigma_{\Delta z/(1+z)}\sim0.02$. The DES-5YR photometrically classified SNIa sample contains approximately 1600 SNe and 125 of these SNe are in redMaGiC galaxies. We demonstrate that redMaGiC galaxies almost exclusively host SNe Ia, reducing concerns with classification uncertainties. With this subsample, we find similar Hubble scatter (to within $\sim0.01$ mag) using photometric redshifts in place of spectroscopic redshifts. With detailed simulations, we show the bias due to using photo-$z$s from redMaGiC host galaxies on the measurement of the dark energy equation-of-state $w$ is up to $\Delta w \sim 0.01-0.02$. With real data, we measure a difference in $w$ when using redMaGiC photometric redshifts versus spectroscopic redshifts of $\Delta w = 0.005$. Finally, we discuss how SNe in redMaGiC galaxies appear to be a more standardizable population due to a weaker relation between color and luminosity ($\beta$) compared to the DES-3YR population by $\sim5\sigma$; this finding is consistent with predictions that redMaGiC galaxies exhibit lower reddening ratios ($\textrm{R}_\textrm{V}$) than the general population of SN host galaxies. These results establish the feasibility of performing redMaGiC SN cosmology with photometric survey data in the absence of spectroscopic data.
Catalytic particles are spatially organized in a number of biological systems across different length scales, from enzyme complexes to metabolically coupled cells. Despite operating on different scales, these systems all feature localized reactions involving partially hindered diffusive transport, which is determined by the collective arrangement of the catalysts. Yet it remains largely unexplored how different arrangements affect the interplay between the reaction and transport dynamics, which ultimately determines the flux through the reaction pathway. Here we show that two fundamental trade-offs arise, the first between efficient inter-catalyst transport and the depletion of substrate, and the second between steric confinement of intermediate products and the accessibility of catalysts to substrate. We use a model reaction pathway to characterize the general design principles for the arrangement of catalysts that emerge from the interplay of these trade-offs. We find that the question of optimal catalyst arrangements generalizes the well-known Thomson problem of electrostatics.
The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ decay is observed using proton-proton collisions collected by the LHCb experiment at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb−1. The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ decay is reconstructed partially, where the photon from the $ {\varXi}_c^{\prime +}\to {\varXi}_c^{+}\gamma $ decay is not reconstructed and the pK$^{−}$π$^{+}$ final state of the $ {\varXi}_c^{+} $ baryon is employed. The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ branching fraction relative to that of the $ {\varXi}_{cc}^{++}\to {\varXi}_c^{+}{\pi}^{+} $ decay is measured to be 1.41 ± 0.17 ± 0.10, where the first uncertainty is statistical and the second systematic.
We propose a search for long lived axion-like particles (ALPs) in exotic top decays. Flavour-violating ALPs appear as low energy effective theories for various new physics scenarios such as t-channel dark sectors or Froggatt-Nielsen models. In this case the top quark may decay to an ALP and an up- or charm-quark. For masses in the few GeV range, the ALP is long lived across most of the viable parameter space, suggesting a dedicated search. We propose to search for these long lived ALPs in $ t\overline{t} $ events, using one top quark as a trigger. We focus on ALPs decaying in the hadronic calorimeter, and show that the ratio of energy deposits in the electromagnetic and hadronic calorimeters as well as track vetoes can efficiently suppress Standard Model backgrounds. Our proposed search can probe exotic top branching ratios smaller than 10$^{−4}$ with a conservative strategy at the upcoming LHC run, and potentially below the 10$^{−7}$ level with more advanced methods. Finally we also show that measurements of single top production probe these branching ratios in the very short and very long lifetime limit at the 10$^{−3}$ level.
Time irreversibility is a distinctive feature of nonequilibrium dynamics, and several measures of irreversibility have been introduced to assess the distance from thermal equilibrium of a stochastically driven system. While the dynamical noise is often approximated as white, in many real applications the time correlations of the random forces can actually be significantly long-lived compared to the relaxation times of the driven system. We analyze the effects of temporal correlations in the noise on commonly used measures of irreversibility and demonstrate how the theoretical framework for white-noise-driven systems naturally generalizes to the case of colored noise. Specifically, we express the autocorrelation function, the area enclosing rates, and the mean phase space velocity in terms of solutions of a Lyapunov equation and in terms of their white-noise limit values.
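As an illustrative companion to the quantities listed above, the sketch below evaluates them for a linear (Ornstein-Uhlenbeck) system driven by white noise; the drift and diffusion matrices are arbitrary examples, and the colored-noise case discussed in the abstract can be treated analogously by augmenting the state vector with the noise variables.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 2.0],
              [-0.5, -1.5]])   # stable drift matrix, dx = A x dt + xi dt
D = np.array([[0.2, 0.0],
              [0.0, 1.0]])     # diffusion matrix, <xi_i(t) xi_j(t')> = 2 D_ij delta(t-t')

# Stationary covariance C solves the Lyapunov equation  A C + C A^T = -2 D.
C = solve_continuous_lyapunov(A, -2.0 * D)

def autocorrelation(t):
    """Stationary autocorrelation R(t) = <x(t) x(0)^T> = exp(A t) C, for t >= 0."""
    return expm(A * t) @ C

# Mean area enclosing rate in the (x_i, x_j) plane for a linear system:
# dA_ij/dt = (C A^T - A C)_ij / 2, which vanishes when detailed balance holds.
area_rate = 0.5 * (C @ A.T - A @ C)
print(area_rate)
```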
We discuss peculiarities that arise in the computation of real-emission contributions to observables that contain Heaviside functions. A prominent example of such a case is the zero-jettiness soft function in SCET, whose calculation at next-to-next-to-next-to-leading order in perturbative QCD is an interesting problem. Since the zero-jettiness soft function distinguishes between emissions into different hemispheres, its definition involves θ-functions of light-cone components of emitted soft partons. This prevents a direct use of multi-loop methods, based on reverse unitarity, for computing the zero-jettiness soft function in high orders of perturbation theory. We propose a way to bypass this problem and illustrate its effectiveness by computing various non-trivial contributions to the zero-jettiness soft function at NNLO and N3LO in perturbative QCD.
We present a calculation of the helicity amplitudes for the process gg → γγ in three-loop massless QCD. We employ a recently proposed method to calculate scattering amplitudes in the 't Hooft-Veltman scheme that reduces the amount of spurious non-physical information needed at intermediate stages of the computation. Our analytic results for the three-loop helicity amplitudes are remarkably compact, and can be efficiently evaluated numerically. This calculation provides the last missing building block for the computation of NNLO QCD corrections to diphoton production in gluon fusion.
Context. Classical Cepheids are primary distance indicators and a crucial stepping stone in determining the present-day value of the Hubble constant H0 to the precision and accuracy required to constrain apparent deviations from the ΛCDM Concordance Cosmological Model.
Aims: We measured the iron and oxygen abundances of a statistically significant sample of 89 Cepheids in the Large Magellanic Cloud (LMC), one of the anchors of the local distance scale, quadrupling the prior sample and including 68 of the 70 Cepheids used to constrain H0 by the SH0ES program. The goal is to constrain the extent to which the luminosity of Cepheids is influenced by their chemical composition, which is an important contributor to the uncertainty on the determination of the Hubble constant itself and a critical factor in the internal consistency of the distance ladder.
Methods: We derived stellar parameters and chemical abundances from a self-consistent spectroscopic analysis based on equivalent width of absorption lines.
Results: The iron distribution of Cepheids in the LMC can be very accurately described by a single Gaussian with a mean [Fe/H] = −0.409 ± 0.003 dex and σ = 0.076 ± 0.003 dex. We estimate a systematic uncertainty on the absolute mean values of 0.1 dex. The width of the distribution is fully compatible with the measurement error and supports the low dispersion of 0.069 mag seen in the near-infrared Hubble Space Telescope LMC period-luminosity relation. The uniformity of the abundance has the important consequence that the LMC Cepheids alone cannot provide any meaningful constraint on the dependence of the Cepheid period-luminosity relation on chemical composition at any wavelength. This revises a prior claim based on a small sample of 22 LMC Cepheids that there was little dependence (or uncertainty) between composition and near-infrared luminosity, a conclusion which would produce an apparent conflict between anchors of the distance ladder with different mean abundance. The chemical homogeneity of the LMC Cepheid population makes it an ideal environment in which to calibrate the metallicity dependence between the more metal-poor Small Magellanic Cloud and metal-rich Milky Way and NGC 4258.
Full Tables 1-8 and Appendix B are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A29
Based on observations collected at the European Southern Observatory under ESO programmes 66.D-0571 and 106.21ML.003.
We present a novel double-copy prescription for gauge fields at the Lagrangian level and apply it to the original double copy, couplings to matter and the soft theorem. The Yang-Mills Lagrangian in light-cone gauge is mapped directly to the N = 0 supergravity Lagrangian in light-cone gauge to trilinear order, and we show that the obtained result is manifestly equivalent to Einstein gravity at tree level up to this order. The application of the double-copy prescription to couplings to matter is exemplified by scalar and fermionic QCD and finally the soft-collinear effective QCD Lagrangian. The mapping of the latter yields an effective description of an energetic Dirac fermion coupled to the graviton, Kalb-Ramond, and dilaton fields, from which the fermionic gravitational soft and next-to-soft theorems follow.
We present a coherent study of the impact of neutrino interactions on the r-process element nucleosynthesis and the heating rate produced by the radioactive elements synthesized in the dynamical ejecta of neutron star-neutron star (NS-NS) mergers. We have studied the material ejected from four NS-NS merger systems based on hydrodynamical simulations which handle neutrino effects in an elaborate way by including neutrino equilibration with matter in optically thick regions and re-absorption in optically thin regions. We find that the neutron richness of the dynamical ejecta is significantly affected by the neutrinos emitted by the post-merger remnant, in particular when compared to a case neglecting all neutrino interactions. Our nucleosynthesis results show that a solar-like distribution of r-process elements with mass numbers $A \gtrsim 90$ is produced, including a significant enrichment in Sr and a reduced production of actinides compared to simulations without inclusion of the nucleonic weak processes. The composition of the dynamically ejected matter as well as the corresponding rate of radioactive decay heating are found to be rather independent of the system mass asymmetry and the adopted equation of state. This approximate degeneracy in abundance pattern and heating rates can be favourable for extracting the ejecta properties from kilonova observations, at least if the dynamical component dominates the overall ejecta. Part II of this work will study the light curve produced by the dynamical ejecta of our four NS merger models.
Observations of the SNR Cassiopeia A (Cas A) show asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star. We investigate whether a past interaction of Cas A with a massive asymmetric shell of the circumstellar medium can account for the observed asymmetries. We performed 3D MHD simulations that describe the remnant evolution from the SN to its interaction with a circumstellar shell. The initial conditions are provided by a 3D neutrino-driven SN model whose morphology resembles Cas A. We explored the parameter space of the shell, searching for a set of parameters able to produce reverse shock asymmetries at the age of 350 years analogous to those observed in Cas A. The interaction of the remnant with the shell can match the observed reverse shock asymmetries if the shell was asymmetric, with its densest portion on the nearside, to the northwest (NW). According to our models, the shell was thin, with a radius of 1.5 pc. The reverse shock shows the following asymmetries at the age of Cas A: i) it moves inward in the observer frame in the NW region, while it moves outward in other regions; ii) the geometric center of the reverse shock is offset to the NW by 0.1 pc from the geometric center of the forward shock; iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km/s) than in other regions (below 2000 km/s). Our findings suggest that Cas A interacted with an asymmetric circumstellar shell between 180 and 240 years after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred about 10^5 years prior to core-collapse. We estimate a total mass of the shell of approximately 2.6 Msun.
We develop a novel data-driven method for generating synthetic optical observations of galaxy clusters. In cluster weak lensing, the interplay between analysis choices and systematic effects related to source galaxy selection, shape measurement, and photometric redshift estimation can be best characterized in end-to-end tests going from mock observations to recovered cluster masses. To create such test scenarios, we measure and model the photometric properties of galaxy clusters and their sky environments from the Dark Energy Survey Year 3 (DES Y3) data in two bins of cluster richness $\lambda \in [30; 45)$, $\lambda \in [45; 60)$ and three bins in cluster redshift ($z\in [0.3; 0.35)$, $z\in [0.45; 0.5)$ and $z\in [0.6; 0.65)$. Using deep-field imaging data, we extrapolate galaxy populations beyond the limiting magnitude of DES Y3 and calculate the properties of cluster member galaxies via statistical background subtraction. We construct mock galaxy clusters as random draws from a distribution function, and render mock clusters and line-of-sight catalogues into synthetic images in the same format as actual survey observations. Synthetic galaxy clusters are generated from real observational data, and thus are independent from the assumptions inherent to cosmological simulations. The recipe can be straightforwardly modified to incorporate extra information, and correct for survey incompleteness. New realizations of synthetic clusters can be created at minimal cost, which will allow future analyses to generate the large number of images needed to characterize systematic uncertainties in cluster mass measurements.
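A minimal sketch of the statistical background subtraction step mentioned above (the binning, argument names, and error model are illustrative assumptions, not the DES Y3 pipeline): the magnitude distribution measured in a reference field, rescaled by solid angle, is subtracted from that of the cluster line of sight to estimate the cluster member population.

```python
import numpy as np

def background_subtracted_counts(mag_cluster, area_cluster,
                                 mag_field, area_field,
                                 bins=np.arange(18.0, 25.0, 0.25)):
    """Estimate cluster-member counts per magnitude bin by statistical
    background subtraction, with Poisson errors added in quadrature."""
    n_cl, _ = np.histogram(mag_cluster, bins=bins)
    n_bg, _ = np.histogram(mag_field, bins=bins)
    scale = area_cluster / area_field          # solid-angle rescaling of the field
    members = n_cl - scale * n_bg
    err = np.sqrt(n_cl + scale**2 * n_bg)
    return members, err, bins

# Toy usage with synthetic magnitudes (areas in arbitrary but consistent units).
rng = np.random.default_rng(4)
mags_cl = rng.normal(22.0, 1.2, 800)           # cluster line of sight
mags_bg = rng.normal(22.5, 1.5, 20_000)        # reference (random) field
members, err, bins = background_subtracted_counts(mags_cl, 0.01, mags_bg, 0.25)
```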
The majority of existing results for the kilonova (or macronova) emission from material ejected during a neutron-star (NS) merger is based on (quasi-) one-zone models or manually constructed toy-model ejecta configurations. In this study, we present a kilonova analysis of the material ejected during the first $\sim 10\,$ ms of a NS merger, called dynamical ejecta, using directly the outflow trajectories from general relativistic smoothed-particle hydrodynamics simulations, including a sophisticated neutrino treatment and the corresponding nucleosynthesis results, which have been presented in Part I of this study. We employ a multidimensional two-moment radiation transport scheme with approximate M1 closure to evolve the photon field and use a heuristic prescription for the opacities found by calibration with atomic-physics-based reference results. We find that the photosphere is generically ellipsoidal but augmented with small-scale structure and produces emission that is about 1.5-3 times stronger towards the pole than the equator. The kilonova typically peaks after $0.7\!-\!1.5\,$ d in the near-infrared frequency regime with luminosities between $3\!-\!7\times 10^{40}\,$ erg s-1 and at photospheric temperatures of $2.2\!-\!2.8\times 10^3\,$ K. A softer equation of state or higher binary-mass asymmetry leads to a longer and brighter signal. Significant variations of the light curve are also obtained for models with artificially modified electron fractions, emphasizing the importance of a reliable neutrino-transport modelling. None of the models investigated here, which only consider dynamical ejecta, produces a transient as bright as AT2017gfo. The near-infrared peak of our models is incompatible with the early blue component of AT2017gfo.
MadJax is a tool for generating and evaluating differentiable matrix elements of high energy scattering processes. As such, it is a step towards a differentiable programming paradigm in high energy physics that facilitates the incorporation of high energy physics domain knowledge, encoded in simulation software, into gradient based learning and optimization pipelines. MadJax comprises two components: (a) a plugin to the general purpose matrix element generator MadGraph that integrates matrix element and phase space sampling code with the JAX differentiable programming framework, and (b) a standalone wrapping API for accessing the matrix element code and its gradients, which are computed with automatic differentiation. The MadJax implementation and example applications of simulation based inference and normalizing flow based matrix element modeling, with capabilities enabled uniquely with differentiable matrix elements, are presented.
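As a toy illustration of the underlying idea (this is plain JAX, not the MadJax plugin or its API): a hand-coded "matrix element", here the tree-level e+e- → mu+mu- angular distribution, is differentiated with respect to the coupling by automatic differentiation, the kind of gradient MadJax exposes for full MadGraph-generated processes.

```python
import jax
import jax.numpy as jnp

def matrix_element_sq(e_coupling, theta):
    # Spin-averaged tree-level QED e+e- -> mu+mu-: |M|^2 proportional to e^4 (1 + cos^2 theta).
    return e_coupling**4 * (1.0 + jnp.cos(theta) ** 2)

# Gradient with respect to the coupling, obtained by automatic differentiation.
grad_wrt_coupling = jax.grad(matrix_element_sq, argnums=0)

print(matrix_element_sq(0.30, 0.5))   # value of |M|^2
print(grad_wrt_coupling(0.30, 0.5))   # d|M|^2 / d e
```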
To a good approximation, on large scales, the evolved two-point correlation function of biased tracers is related to the initial one by a convolution with a smearing kernel. For Gaussian initial conditions, the smearing kernel is Gaussian, so if the initial correlation function is parametrized using simple polynomials, then the evolved correlation function is a sum of generalized Laguerre functions of half-integer order. This motivates an analytic "Laguerre reconstruction" algorithm which previous work has shown is fast and accurate. This reconstruction requires as input the width of the smearing kernel. We show that the method can be extended to estimate the width of the smearing kernel from the same dataset. This estimate, and associated uncertainties, can then be used to marginalize over the distribution of reconstructed shapes and hence provide error estimates on the value of the distance scale. This procedure is not tied to a particular cosmological model. We also show that if, instead, we parametrize the evolved correlation function using simple polynomials, then the initial one is a sum of Hermite polynomials, again enabling fast and accurate deconvolution. If one is willing to use constraints on the smearing scale from other datasets, then marginalizing over its value is simpler for this latter, "Hermite" reconstruction, potentially providing further speed-ups in cosmological analyses.
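The sketch below is a numerical stand-in for the analytic reconstruction described above (the grid, smearing width, basis degree, and mock data are illustrative assumptions): because Gaussian smearing acts linearly on the polynomial coefficients, fitting the smeared basis to the evolved correlation function and re-summing the polynomial recovers the deconvolved initial shape.

```python
import numpy as np

r = np.linspace(40.0, 160.0, 241)       # separation grid [Mpc/h], illustrative
sigma = 6.0                             # assumed smearing width [Mpc/h]

def gaussian_smear(f, r, sigma):
    """Convolve f(r) with a Gaussian kernel of width sigma (1D approximation)."""
    kernel = np.exp(-0.5 * ((r[:, None] - r[None, :]) / sigma) ** 2)
    kernel /= kernel.sum(axis=1, keepdims=True)
    return kernel @ f

degree = 8
basis = np.vander(r / r.mean(), degree + 1)                       # polynomial basis
smeared_basis = np.column_stack([gaussian_smear(b, r, sigma) for b in basis.T])

# Mock "evolved" correlation function: a smeared bump standing in for data.
xi_initial_true = np.exp(-0.5 * ((r - 105.0) / 8.0) ** 2)
xi_evolved = gaussian_smear(xi_initial_true, r, sigma)

# Linear least squares for the polynomial coefficients of the *initial* function.
coeffs, *_ = np.linalg.lstsq(smeared_basis, xi_evolved, rcond=None)
xi_initial_reconstructed = basis @ coeffs
```

Replacing the numerical convolution of the basis with the closed-form generalized Laguerre (or Hermite) expressions is what makes the analytic version of this procedure fast.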
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on f/b and D_M H, where f is the linear growth rate of density fluctuations, b the galaxy bias, D_M the comoving angular diameter distance, and H the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of f/b and 0.5% on D_M H in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, w, for dark energy will be measured with a precision of about 10%, consistent with previous more approximate forecasts.
This paper is published on behalf of the Euclid Consortium.
Galaxy cluster masses, rich with cosmological information, can be estimated from internal dark matter (DM) velocity dispersions, which in turn can be observationally inferred from satellite galaxy velocities. However, galaxies are biased tracers of the DM, and the bias can vary over host halo and galaxy properties as well as time. We precisely calibrate the velocity bias, bv - defined as the ratio of galaxy and DM velocity dispersions - as a function of redshift, host halo mass, and galaxy stellar mass threshold ($M_{\rm \star , sat}$), for massive haloes ($M_{\rm 200c}\gt 10^{13.5} \, {\rm M}_\odot$) from five cosmological simulations: IllustrisTNG, Magneticum, Bahamas + Macsis, The Three Hundred Project, and MultiDark Planck-2. We first compare scaling relations for galaxy and DM velocity dispersion across simulations; the former is estimated using a new ensemble velocity likelihood method that is unbiased for low galaxy counts per halo, while the latter uses a local linear regression. The simulations show consistent trends of bv increasing with M200c and decreasing with redshift and $M_{\rm \star , sat}$. The ensemble-estimated theoretical uncertainty in bv is 2-3 per cent, but becomes percent-level when considering only the three highest resolution simulations. We update the mass-richness normalization for an SDSS redMaPPer cluster sample, and find our improved bv estimates reduce the normalization uncertainty from 22 to 8 per cent, demonstrating that dynamical mass estimation is competitive with weak lensing mass estimation. We discuss necessary steps for further improving this precision. Our estimates for $b_v(M_{\rm 200c}, M_{\rm \star , sat}, z)$ are made publicly available.
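The toy comparison below (synthetic velocities, not the simulation data or the paper's likelihood) illustrates why an ensemble approach matters at low satellite counts: the per-halo sample standard deviation is biased low for a handful of tracers, whereas pooling centre-subtracted velocities across many haloes gives a nearly unbiased dispersion estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
true_sigma = 500.0                       # km/s, common to all haloes in this toy
n_halo, n_sat = 2000, 3                  # only three satellites per halo

v = rng.normal(0.0, true_sigma, size=(n_halo, n_sat))

# Naive per-halo estimate: average of per-halo sample standard deviations.
per_halo = np.std(v, axis=1, ddof=1).mean()

# Ensemble estimate: pool centre-subtracted velocities across all haloes,
# with (n_sat - 1) degrees of freedom contributed per halo.
pooled = np.sqrt(np.sum((v - v.mean(axis=1, keepdims=True)) ** 2)
                 / (n_halo * (n_sat - 1)))

print(per_halo / true_sigma, pooled / true_sigma)   # ~0.89 (biased) vs ~1.00
```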
We derive supernova (SN) bounds on muon-philic bosons, taking advantage of the recent emergence of muonic SN models. Our main innovations are to consider scalars $\phi$ in addition to pseudoscalars $a$ and to include systematically the generic two-photon coupling $G_{\gamma\gamma}$ implied by a muon triangle loop. This interaction allows for Primakoff scattering and radiative boson decays. The globular-cluster bound $G_{\gamma\gamma} < 0.67\times 10^{-10}\,{\rm GeV}^{-1}$ carries over to the muonic Yukawa couplings as $g_a < 3.1\times 10^{-9}$ and $g_\phi < 4.6\times 10^{-9}$ for $m_{a,\phi} \lesssim 100$ keV, so SN arguments become interesting mainly for larger masses. If bosons escape freely from the SN core, the main constraints originate from SN 1987A γ rays and the diffuse cosmic γ-ray background. The latter allows at most $10^{-4}$ of a typical total SN energy of $E_{\rm SN} \simeq 3\times 10^{53}$ erg to show up as γ rays, for $m_{a,\phi} \gtrsim 100$ keV implying $g_a \lesssim 0.9\times 10^{-10}$ and $g_\phi \lesssim 0.4\times 10^{-10}$. In the trapping regime the bosons emerge as quasi-thermal radiation from a region near the neutrino sphere and match $L_\nu$ for $g_{a,\phi} \simeq 10^{-4}$. However, the $2\gamma$ decay is so fast that all the energy is dumped into the surrounding progenitor-star matter, whereas at most $10^{-2}E_{\rm SN}$ may show up in the explosion. To suppress boson emission below this level we need yet larger couplings, $g_a \gtrsim 2\times 10^{-3}$ and $g_\phi \gtrsim 4\times 10^{-3}$. Muonic scalars can explain the muon magnetic-moment anomaly for $g_\phi \simeq 0.4\times 10^{-3}$, a value hard to reconcile with SN physics despite the uncertainty of the explosion-energy bound. For generic axionlike particles, this argument covers the "cosmological triangle" in the $G_{a\gamma\gamma}$-$m_a$ parameter space.
Previous studies have shown that dark matter-deficient galaxies (DMDG) such as NGC 1052-DF2 (hereafter DF2) can result from tidal stripping. An important question, though, is whether such a stripping scenario can explain DF2's large specific frequency of globular clusters (GCs). After all, tidal stripping and shocking preferentially remove matter from the outskirts. We examine this using idealized, high-resolution simulations of a regular dark matter-dominated galaxy that is accreted on to a massive halo. As long as the initial (pre-infall) dark matter halo of the satellite is cored, which is consistent with predictions of cosmological, hydrodynamical simulations, the tidal remnant can be made to resemble DF2 in all its properties, including its GC population. The required orbit has a pericentre at the 8.3 percentile of the distribution for subhaloes at infall, and thus is not particularly extreme. On this orbit the satellite loses 98.5 (30) per cent of its original dark matter (stellar) mass, and thus evolves into a DMDG. The fraction of GCs that is stripped off depends on the initial radial distribution. If, at infall, the median projected radius of the GC population is roughly two times that of the stars, consistent with observations of isolated galaxies, only ~20 per cent of the GCs are stripped off. This is less than for the stars, which is due to dynamical friction counteracting the tidal stirring. We predict that, if indeed DF2 was crafted by strong tides, its stellar outskirts should have a very shallow metallicity gradient.
Using bona fide black hole (BH) mass estimates from reverberation mapping and the line ratio [Si VI] 1.963 $\rm{\mu m}$/Brγ$_{\rm broad}$ as a tracer of the AGN ionizing continuum, a novel BH-mass scaling relation of the form log($M_{\rm BH}$) = (6.40 ± 0.17) − (1.99 ± 0.37) × log([Si VI]/Brγ$_{\rm broad}$), with a dispersion of 0.47 dex, is found over the BH mass interval $10^6$-$10^8$ M⊙. Following the geometrically thin accretion disc approximation, and after surveying a basic parameter space for coronal line production, we believe one of the main drivers of the relation is the effective temperature of the disc, which is effectively sampled by the [Si VI] 1.963 $\rm{\mu m}$ coronal line for the range of BH masses considered. By means of CLOUDY photoionization models, the observed anticorrelation appears to be formally in line with the thin disc prediction $T_{\rm disc} \propto M_{\rm BH}^{-1/4}$.
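As a worked example of the quoted relation, the snippet below inverts it for an illustrative (made-up) measured line ratio; the quoted coefficient uncertainties and the intrinsic dispersion are ignored for brevity.

```python
import numpy as np

def mbh_from_ratio(si_vi_over_brg_broad):
    # log10(M_BH / M_sun) = 6.40 - 1.99 * log10([Si VI]/Br_gamma_broad)
    log_mbh = 6.40 - 1.99 * np.log10(si_vi_over_brg_broad)
    return 10.0 ** log_mbh

print(f"{mbh_from_ratio(0.3):.2e} M_sun")   # a ratio of 0.3 gives roughly 2.8e7 M_sun
```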
The diffusive epidemic process is a paradigmatic example of an absorbing state phase transition in which healthy and infected individuals spread with different diffusion constants. Using stochastic activity spreading simulations in combination with finite-size scaling analyses we reveal two qualitatively different processes that characterize the critical dynamics: subdiffusive propagation of infection clusters and diffusive fluctuations in the healthy population. This suggests the presence of a strong-coupling regime and sheds new light on a long-standing debate about the theoretical classification of the system.
We present results on the star cluster properties from a series of high-resolution smoothed particle hydrodynamics (SPH) simulations of isolated dwarf galaxies, carried out as part of the GRIFFIN project. The simulations, at sub-parsec spatial resolution and with a minimum particle mass of 4 M⊙, incorporate non-equilibrium heating, cooling, and chemistry processes, and realize individual massive stars. They follow the feedback channels of massive stars, including an interstellar radiation field that varies in space and time, radiation input by photo-ionization, and supernova explosions. Varying the star formation efficiency per free-fall time in the range ϵff = 0.2-50 per cent changes neither the star formation rates nor the outflow rates. While the environmental densities at star formation change significantly with ϵff, the ambient densities of supernovae are independent of ϵff, indicating a decoupling of the two processes. At low ϵff, gas is allowed to collapse further before star formation, resulting in more massive and increasingly more bound star clusters, which are typically not destroyed. With increasing ϵff, there is a trend towards shallower cluster mass functions, and the cluster formation efficiency Γ for young bound clusters decreases from 50 per cent to ∼1 per cent, showing evidence for cluster disruption. However, none of our simulations form low-mass (<10³ M⊙) clusters with structural properties in perfect agreement with observations. Traditional star formation models used in galaxy formation simulations based on local free-fall times might therefore be unable to capture star cluster properties without significant fine-tuning.
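To make the star formation efficiency per free-fall time concrete, the sketch below uses the textbook relations SFR = ϵff Mgas/tff with tff = sqrt(3π/32Gρ); this is a generic illustration with hypothetical numbers, not the GRIFFIN star formation prescription.

```python
import numpy as np

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33   # solar mass [g]
YR = 3.156e7       # year [s]

def free_fall_time(rho):
    """Free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for gas density rho [g cm^-3]."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def sfr_from_eff(eps_ff, m_gas, rho):
    """Star formation rate SFR = eps_ff * M_gas / t_ff [g s^-1]."""
    return eps_ff * m_gas / free_fall_time(rho)

# Hypothetical clump: 1e4 M_sun of gas at n_H ~ 100 cm^-3 (with a helium correction).
rho = 100 * 1.67e-24 * 1.4
sfr = sfr_from_eff(0.02, 1e4 * M_SUN, rho)
print(free_fall_time(rho) / YR / 1e6, "Myr")   # ~4.4 Myr
print(sfr / M_SUN * YR, "M_sun per yr")        # ~5e-5 for eps_ff = 2 per cent
```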
The Hubble constant (H0) is one of the fundamental parameters in cosmology, but there is a heated debate around the > 4σ tension between the local Cepheid distance ladder and the early-Universe measurements. Strongly lensed Type Ia supernovae (LSNe Ia) are an independent and direct way to measure H0, where a time-delay measurement between the multiple supernova (SN) images is required. In this work, we present two machine learning approaches for measuring time delays in LSNe Ia, namely, a fully connected neural network (FCNN) and a random forest (RF). For the training of the FCNN and the RF, we simulate mock LSNe Ia from theoretical SN Ia models that include observational noise and microlensing. We test the generalizability of the machine learning models by using a final test set based on empirical LSN Ia light curves not used in the training process, and we find that only the RF provides a low enough bias to achieve precision cosmology; the RF is therefore preferred over our FCNN approach for applications to real systems. For the RF with single-band photometry in the i band, we obtain an accuracy better than 1% in all investigated cases for time delays longer than 15 days, assuming follow-up observations with a 5σ point-source depth of 24.7, a two-day cadence with a few random gaps, and a detection of the LSNe Ia 8 to 10 days before peak in the observer frame. In terms of precision, we can achieve an uncertainty of approximately 1.5 days in the i band for a typical source redshift of ∼0.8 under the same assumptions. To improve the measurement, we find that using three bands, where we train an RF for each band separately and combine them afterward, helps to reduce the uncertainty to ∼1.0 day. The dominant source of uncertainty is the observational noise, and therefore the depth is an especially important factor when follow-up observations are triggered. We have publicly released the microlensed spectra and light curves used in this work.
https://github.com/shsuyu/HOLISMOKES-public/tree/main/HOLISMOKES_VII
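A minimal sketch of the random-forest side of such an analysis, assuming scikit-learn and purely synthetic placeholder arrays; the feature construction, hyperparameters, and array shapes below are ours for illustration and do not reproduce the authors' training set or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder training set: each row would hold the fluxes of the two SN images
# sampled on a common observation grid (simulated with noise and microlensing);
# the regression target is the injected time delay in days.
rng = np.random.default_rng(0)
n_sims, n_epochs = 5000, 60
X = rng.normal(size=(n_sims, 2 * n_epochs))   # stand-in for simulated light-curve features
y = rng.uniform(5, 60, size=n_sims)           # stand-in for injected delays [days]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=400, min_samples_leaf=5, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

residuals = rf.predict(X_test) - y_test
print("bias [days]:", residuals.mean())
print("scatter [days]:", residuals.std())
```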
Context. The dynamics of the intracluster medium (ICM) is affected by turbulence driven by several processes, such as mergers, accretion and feedback from active galactic nuclei.
Aims: X-ray surface brightness fluctuations have been used to constrain turbulence in galaxy clusters. Here, we use simulations to further investigate the relation between gas density and turbulent velocity fluctuations, with a focus on the effect of the stratification of the ICM.
Methods: In this work, we studied the turbulence driven by hierarchical accretion by analysing a sample of galaxy clusters simulated with the cosmological code ENZO. We used a fixed-scale filtering approach to disentangle laminar from turbulent flows.
Results: In dynamically perturbed galaxy clusters, we found a relation between the root mean square of density and velocity fluctuations, albeit with a different slope than previously reported. The Richardson number, a parameter that compares the stabilising effect of buoyancy with turbulent motions, shows a strong dependence on the filtering scale. However, we could not detect any strong relation between the Richardson number and the logarithmic density fluctuations, in contrast to results from recent, more idealised simulations. In particular, we find a strong effect from radial accretion, which appears to be the main driver of the gas fluctuations. The ubiquitous radial bias in the dynamics of the ICM suggests that homogeneity and isotropy are not always valid assumptions, even if the turbulent spectra follow Kolmogorov's scaling. Finally, we find that the slopes of the velocity and density spectra are independent of cluster-centric radius.
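For reference, one commonly adopted scale-dependent form of the Richardson number in stratified-turbulence studies (our assumption; the abstract does not spell out the exact definition used) compares the Brunt-Väisälä frequency N with the turbulent velocity dispersion σv(L) at the filtering scale L:
\[
\mathrm{Ri}(L) \;=\; \frac{N^2}{\bigl(\sigma_v(L)/L\bigr)^2},
\qquad
N^2 \;=\; \frac{g}{\gamma}\,\frac{\mathrm{d}\ln K}{\mathrm{d}r},
\qquad K \equiv \frac{P}{\rho^{\gamma}},
\]
with g the local gravitational acceleration and γ the adiabatic index; under this definition, larger filtering scales naturally yield larger Ri, consistent with the strong scale dependence reported above.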
Recent wide-area surveys have enabled us to study the Milky Way with unprecedented detail. Its inner regions, hidden behind dust and gas, have been partially unveiled with the arrival of near-infrared (IR) photometric and spectroscopic data sets. Among the recent discoveries is a population of low-mass globular clusters that was known to be missing, especially towards the Galactic bulge. In this work, five new low-luminosity globular clusters located towards the bulge area are presented. They were discovered by searching for groups in the multidimensional space of coordinates, colours, and proper motions from the Gaia EDR3 catalogue and later confirmed with deeper near-IR photometry from the VVV survey. The clusters show well-defined red giant branches and, in some cases, horizontal branches, with their members forming a dynamically coherent structure in proper-motion space. Four of them were confirmed by spectroscopic follow-up with the MUSE instrument on the ESO VLT. Photometric parameters were derived and, when available, metallicities, radial velocities, and orbits were determined. The new clusters Gran 1 and 5 are bulge globular clusters, while Gran 2, 3, and 4 present halo-like properties. Preliminary orbits indicate that Gran 1 might be related to the Main Progenitor, or the so-called 'low-energy' group, while Gran 2, 3, and 5 appear to follow the Gaia-Enceladus/Sausage structure. This study demonstrates that Gaia proper motions, combined with spectroscopic follow-up and colour-magnitude diagrams, are required to confirm the nature of cluster candidates towards the inner Galaxy. High stellar crowding and differential extinction may hide other low-luminosity clusters.
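As an illustration of group finding in such a multidimensional space (our sketch, not the authors' algorithm), a density-based clustering of standardized coordinates, proper motions, and colour can be set up with scikit-learn; the mock arrays below merely stand in for a Gaia EDR3 field extract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Mock field: columns stand in for (l, b, pmra, pmdec, bp_rp); a real analysis would
# read these from a Gaia EDR3 extract of a bulge field.
rng = np.random.default_rng(1)
field = rng.normal(size=(20000, 5))                              # smooth field population
cluster = rng.normal(loc=[0.5, -0.3, 1.2, -2.0, 0.4],
                     scale=0.05, size=(200, 5))                  # compact group in all dimensions
data = np.vstack([field, cluster])

# Standardize so positions, proper motions, and colour carry comparable weight.
X = StandardScaler().fit_transform(data)

labels = DBSCAN(eps=0.15, min_samples=20).fit_predict(X)
n_groups = len(set(labels)) - (1 if -1 in labels else 0)
print("groups found:", n_groups)                                 # expect 1: the injected cluster
```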
Recent cosmological analyses rely on the ability to accurately sample from high-dimensional posterior distributions. A variety of algorithms have been applied in the field, but justification of the particular sampler choice and settings is often lacking. Here we investigate three such samplers to motivate and validate the algorithm and settings used for the Dark Energy Survey (DES) analyses of the first 3 years (Y3) of data from combined measurements of weak lensing and galaxy clustering. We employ the full DES Year 1 likelihood alongside a much faster approximate likelihood, which enables us to assess the outcomes from each sampler choice and demonstrate the robustness of our full results. We find that the ellipsoidal nested sampling algorithm $\texttt{MultiNest}$ reports inconsistent estimates of the Bayesian evidence and somewhat narrower parameter credible intervals than the sliced nested sampling implemented in $\texttt{PolyChord}$. We compare the findings from $\texttt{MultiNest}$ and $\texttt{PolyChord}$ with parameter inference from the Metropolis-Hastings algorithm, finding good agreement. We determine that $\texttt{PolyChord}$ provides a good balance of speed and robustness, and recommend different settings for testing purposes and final chains for analyses with DES Y3 data. Our methodology can readily be reproduced to obtain suitable sampler settings for future surveys.
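For context, the Metropolis-Hastings baseline referred to above is the textbook random-walk sampler, sketched below on a toy Gaussian posterior; this is a generic illustration, not the DES Y3 pipeline or its settings.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step_size, rng):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step_size^2) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step_size * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy 2D Gaussian posterior as a stand-in for the (far more expensive) cosmological likelihood.
log_post = lambda x: -0.5 * np.sum(x**2)
chain = metropolis_hastings(log_post, x0=[1.0, -1.0], n_steps=20000,
                            step_size=0.5, rng=np.random.default_rng(2))
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))  # ~[0, 0] and ~[1, 1]
```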
Quantum coherence is one of the most striking features of quantum mechanics, rooted in the superposition principle. Recently, it has been demonstrated that it is possible to harvest quantum coherence from a coherent scalar field. In order to explore a new method of detecting axion dark matter, we consider a point-like Unruh-DeWitt detector coupled to the axion field and quantify a coherence measure of the detector. We show that the detector can harvest quantum coherence from the axion dark matter. To be more precise, we consider a two-level electron system in an atom as the detector. In this case, we obtain the coherence measure C = 2.2 × 10⁻⁶ γ (T/1 s), where T is the observation time and γ the Lorentz factor. At the same time, the axion mass ma that we can probe is determined by the energy gap of the detector.
We present a demonstration of the in-flight polarization angle calibration for the JAXA/ISAS second strategic large class mission, LiteBIRD, and estimate its impact on the measurement of the tensor-to-scalar ratio parameter, r, using simulated data. We generate a set of simulated sky maps with CMB and polarized foreground emission, and inject instrumental noise and polarization angle offsets to the 22 (partially overlapping) LiteBIRD frequency channels. Our in-flight angle calibration relies on nulling the EB cross correlation of the polarized signal in each channel. This calibration step has been carried out by two independent groups with a blind analysis, allowing an accuracy of the order of a few arc-minutes to be reached on the estimate of the angle offsets. Both the corrected and uncorrected multi-frequency maps are propagated through the foreground cleaning step, with the goal of computing clean CMB maps. We employ two component separation algorithms, the Bayesian-Separation of Components and Residuals Estimate Tool (B-SeCRET), and the Needlet Internal Linear Combination (NILC). We find that the recovered CMB maps obtained with algorithms that do not make any assumptions about the foreground properties, such as NILC, are only mildly affected by the angle miscalibration. However, polarization angle offsets strongly bias results obtained with the parametric fitting method. Once the miscalibration angles are corrected by EB nulling prior to the component separation, both component separation algorithms result in an unbiased estimation of the r parameter. While this work is motivated by the conceptual design study for LiteBIRD, its framework can be broadly applied to any CMB polarization experiment. In particular, the combination of simulation plus blind analysis provides a robust forecast by taking into account not only detector sensitivity but also systematic effects.
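The nulling step relies on the standard rotation of the observed spectra under a global miscalibration angle α (a textbook relation, not restated in the abstract):
\[
C_\ell^{EB,\mathrm{obs}} \;=\; \tfrac{1}{2}\left(C_\ell^{EE}-C_\ell^{BB}\right)\sin 4\alpha \;+\; C_\ell^{EB}\cos 4\alpha ,
\]
so that, assuming a negligible intrinsic EB correlation, requiring the observed EB spectrum to vanish in each frequency channel yields an estimator of that channel's angle offset.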
All evolutionary biological processes lead to a change in heritable traits over successive generations. The responsible genetic information encoded in DNA is altered, selected, and inherited through mutation of the base sequence. While this is well known at the biological level, an evolutionary change at the level of small organic molecules is unknown, yet it represents an important prerequisite for the emergence of life. Here, we present a class of prebiotic imidazolidine-4-thione organocatalysts that are able to dynamically change their constitution and are potentially capable of forming an evolutionary system. These catalysts functionalize their building blocks and dynamically adapt to their (self-modified) environment by mutation of their own structure. Depending on the surrounding conditions, they show pronounced and opposing selectivity in their formation. Remarkably, the preferentially formed species can be associated with different catalytic properties, which enable multiple pathways for the transition from abiotic matter to functional biomolecules.
We use a recent census of the Milky Way (MW) satellite galaxy population to constrain the lifetime of particle dark matter (DM). We consider two-body decaying dark matter (DDM) in which a heavy DM particle decays with lifetime $\tau$ comparable to the age of the Universe to a lighter DM particle (with mass splitting $\epsilon$) and to a dark radiation species. These decays impart a characteristic "kick velocity," $V_{\mathrm{kick}}=\epsilon c$, on the DM daughter particles, significantly depleting the DM content of low-mass subhalos and making them more susceptible to tidal disruption. We fit the suppression of the present-day DDM subhalo mass function (SHMF) as a function of $\tau$ and $V_{\mathrm{kick}}$ using a suite of high-resolution zoom-in simulations of MW-mass halos, and we validate this model on new DDM simulations of systems specifically chosen to resemble the MW. We implement our DDM SHMF predictions in a forward model that incorporates inhomogeneities in the spatial distribution and detectability of MW satellites and uncertainties in the mapping between galaxies and DM halos, the properties of the MW system, and the disruption of subhalos by the MW disk using an empirical model for the galaxy--halo connection. By comparing to the observed MW satellite population, we conservatively exclude DDM models with $\tau < 18\ \mathrm{Gyr}$ ($29\ \mathrm{Gyr}$) for $V_{\mathrm{kick}}=20\ \mathrm{km}\, \mathrm{s}^{-1}$ ($40\ \mathrm{km}\, \mathrm{s}^{-1}$) at $95\%$ confidence. These constraints are among the most stringent and robust small-scale structure limits on the DM particle lifetime and strongly disfavor DDM models that have been proposed to alleviate the Hubble and $S_8$ tensions.
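For reference, the definition quoted above translates the benchmark kick velocities directly into mass splittings:
\[
\epsilon \;=\; \frac{V_{\mathrm{kick}}}{c} \;\simeq\; 6.7\times10^{-5}\;(1.3\times10^{-4})
\quad \text{for} \quad V_{\mathrm{kick}} = 20\;(40)\ \mathrm{km\,s^{-1}} .
\]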
CRESST is one of the most prominent direct detection experiments for dark matter particles with sub-GeV/c$^2$ masses. One of the advantages of the CRESST experiment is the possibility to include a large variety of nuclides in the target material used to probe dark matter interactions. In this work, we discuss in particular the interactions of dark matter particles with the protons and neutrons of $^{6}$Li, which is now possible thanks to new calculations of the nuclear matrix elements of this specific lithium isotope. To show the potential of this nuclide for probing dark matter interactions, we used data collected previously by a CRESST prototype based on LiAlO$_2$ and operated in an above-ground test facility at the Max-Planck-Institut für Physik in Munich, Germany. The inclusion of $^{6}$Li in the limit calculation drastically improves the result obtained for spin-dependent interactions with neutrons over the whole mass range. The improvement is significant, amounting to more than two orders of magnitude for dark matter masses below 1 GeV/c$^2$ compared to the limit previously published with the same data.