We report the robust detection of coherent, localized deviations from Keplerian rotation, possibly associated with the presence of two giant planets embedded in the disk around HD 163296. The analysis is performed using the DISCMINER channel-map modeling framework on 12CO J = 2-1 DSHARP data. Our technique retrieves not only the orbital radius but also the azimuth of the planets. One of the candidate planets, detected at R = 94 ± 6 au, ϕ = 50° ± 3° (P94), is near the center of one of the gaps in dust continuum emission and is consistent with a planet mass of 1 M Jup. The other, located at R = 261 ± 4 au, ϕ = 57° ± 1° (P261), lies in the region where a velocity kink was previously observed in 12CO channel maps. We also provide a simultaneous description of the height and temperature of the upper and lower emitting surfaces of the disk, and propose the line width as a reliable observable for tracking gas substructure. Using azimuthally averaged line-width profiles, we detect gas gaps at R = 38, 88, and 136 au, closely matching the locations of their dust and kinematical counterparts. Furthermore, we observe strong azimuthal asymmetries in line widths around the gas gap at R = 88 au, possibly linked to turbulent motions driven by the P94 planet. Our results confirm that DISCMINER is capable of finding localized, otherwise unseen velocity perturbations thanks to its robust statistical framework, and that it is also well suited for studies of the gas properties and vertical structure of protoplanetary disks.
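As a minimal sketch of the kinematic baseline behind such residual analyses, the snippet below evaluates the line-of-sight projection of a circular Keplerian velocity field; the stellar mass and inclination are illustrative assumptions, not the HD 163296 best-fit values.

```python
import numpy as np

# Line-of-sight projection of a circular Keplerian velocity field.
# Stellar mass and inclination below are ILLUSTRATIVE assumptions,
# not the HD 163296 best-fit values.
G, M_sun, au = 6.674e-11, 1.989e30, 1.496e11   # SI units
M_star = 2.0 * M_sun                           # assumed stellar mass
inc = np.radians(45.0)                         # assumed disk inclination

def v_los_keplerian(R_au, phi_deg):
    """LOS velocity (km/s) of circular Keplerian rotation at radius R, azimuth phi."""
    v_kep = np.sqrt(G * M_star / (R_au * au))
    return v_kep * np.cos(np.radians(phi_deg)) * np.sin(inc) / 1e3

# a planet-driven perturbation appears as a localized residual between the
# observed velocity field and this baseline, e.g. near R = 94 au, phi = 50 deg
print(round(v_los_keplerian(94.0, 50.0), 2))
```

Residual (observed minus Keplerian) maps built from such a baseline are what localize the perturbations in both radius and azimuth.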
We examine the evaluation of the effective-range parameters for the $T_{cc}^+$ state in the LHCb model. The finite width of the $D^*$ leads to a shift of the expansion point into the complex plane in order to match the analytic properties of the expanded amplitude. We perform an analytic continuation of the three-body scattering amplitude to the complex plane in the vicinity of the branch point and develop a robust procedure for computing the expansion coefficients. The results yield a nearly real scattering length, and two contributions to the effective range which have not been accounted for before.
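One generic, numerically robust way to obtain expansion coefficients at a complex expansion point is Cauchy's integral formula evaluated by the trapezoidal rule on a small circle around the point. The sketch below uses a toy amplitude $1/(1-z)$ in place of the three-body amplitude, so it illustrates the procedure, not the physics of the paper.

```python
import numpy as np

# Taylor coefficients at a complex point z0 via Cauchy's formula,
#   c_n = (1/2*pi*i) \oint f(z)/(z - z0)^(n+1) dz,
# evaluated with the trapezoidal rule on a circle of radius r: sampling
# f on the circle and taking an FFT yields c_n = FFT(f)[n] / (N r^n).
z0, r, N = 0.2 + 0.3j, 0.1, 64
theta = 2 * np.pi * np.arange(N) / N
zs = z0 + r * np.exp(1j * theta)
f = 1.0 / (1.0 - zs)                             # toy "amplitude"

c = np.fft.fft(f) / N / r ** np.arange(N)
exact = 1.0 / (1.0 - z0) ** (np.arange(4) + 1)   # analytic Taylor coefficients
print(np.max(np.abs(c[:4] - exact)))             # tiny for r inside the domain
```

The trapezoidal rule converges geometrically on a circle well inside the domain of analyticity, which is what makes this approach stable in practice.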
Dew is a common form of water that deposits from saturated air onto colder surfaces. Although presumably common on primordial Earth, its potential role in early replication at the origin of life has not been investigated in detail. Here we report that it can drive the first stages of Darwinian evolution for DNA and RNA, first by periodically denaturing their structures at low temperatures and second by promoting the replication of long strands over short, faster-replicating ones. Our experiments mimicked a partially water-filled primordial rock pore in the probable CO2 atmosphere of Hadean Earth. Under heat flow, water continuously evaporated and recondensed as acidic dew droplets that created the humidity, salt and pH cycles that match many prebiotic replication chemistries. In low-salt and low-pH regimes, the strands melted at 30 K below the bulk melting temperature, whereas longer sequences preferentially accumulated at the droplet interface. Using enzymatic replication to mimic a sped-up RNA world, long sequences of more than 1,000 nucleotides emerged. The replication was biased by the melting conditions of the dew, and the initial short ATGC strands evolved into long AT-rich sequences with repetitive and structured nucleotide composition.
The phenomenological success of inflation models with axion and SU(2) gauge fields relies crucially on control of backreaction from particle production. Most previous studies only demanded that the backreaction terms in the equations of motion for the axion and gauge fields be small, on the basis of order-of-magnitude estimates. In this paper, we solve the equations of motion with backreaction for a wide range of parameters of the spectator axion-SU(2) model. First, we find a new slow-roll solution of the axion-SU(2) system in the absence of backreaction. Next, we obtain accurate conditions for stable slow-roll solutions in the presence of backreaction. Finally, we show that the amplitude of primordial gravitational waves sourced by the gauge fields can exceed that of quantum vacuum fluctuations in spacetime by a large factor, without backreaction spoiling the slow-roll dynamics. Imposing additional constraints on the power spectra of scalar and tensor modes measured at CMB scales, we find that the sourced contribution can be more than ten times the vacuum one. Further imposing a constraint on scalar modes non-linearly sourced by tensor modes, the two contributions can still be comparable.
In recent years there has been a rapidly growing body of experimental evidence for the existence of exotic, multiquark hadrons, i.e. mesons which contain additional quarks beyond the usual quark-antiquark pair, and baryons which consist of more than three quarks. In all cases with robust evidence they contain at least one heavy quark Q=c or b, the majority including two heavy quarks. Two key theoretical questions have been triggered by these discoveries: (a) how are quarks organized inside these multiquark states -- as compact objects with all quarks within one confinement volume, interacting via color forces, perhaps with an important role played by diquarks, or as deuteron-like hadronic molecules, bound by light-meson exchange? (b) what other multiquark states should we expect? The two questions are tightly intertwined. Each of the interpretations provides a natural explanation of parts of the data, but neither explains all of the data. It is quite possible that both kinds of structures appear in Nature. It may also be the case that certain states are superpositions of the compact and molecular configurations. This Whitepaper brings together contributions from many leading practitioners in the field, representing a wide spectrum of theoretical interpretations. We discuss the importance of future experimental and phenomenological work, which will lead to a better understanding of multiquark phenomena in QCD.
We present the second public data release (DR2) from the DECam Local Volume Exploration survey (DELVE). DELVE DR2 combines new DECam observations with archival DECam data from the Dark Energy Survey, the DECam Legacy Survey, and other DECam community programs. DELVE DR2 consists of ~160,000 exposures that cover >21,000 deg^2 of the high Galactic latitude (|b| > 10 deg) sky in four broadband optical/near-infrared filters (g, r, i, z). DELVE DR2 provides point-source and automatic aperture photometry for ~2.5 billion astronomical sources with a median 5σ point-source depth of g=24.3, r=23.9, i=23.5, and z=22.8 mag. A region of ~17,000 deg^2 has been imaged in all four filters, providing four-band photometric measurements for ~618 million astronomical sources. DELVE DR2 covers more than four times the area of the previous DELVE data release and contains roughly five times as many astronomical objects. DELVE DR2 is publicly available via the NOIRLab Astro Data Lab science platform.
We propose a parametrization of the leading $B$-meson light-cone distribution amplitude (LCDA) in heavy-quark effective theory (HQET). In position space, it uses a conformal transformation that yields a systematic Taylor expansion and an integral bound, which enables control of the truncation error. Our parametrization further produces compact analytical expressions for a variety of derived quantities. At a given reference scale, our momentum-space parametrization corresponds to an expansion in associated Laguerre polynomials, which turn into confluent hypergeometric functions ${}_1F_1$ under renormalization-group evolution at one-loop accuracy. Our approach thus allows a straightforward and transparent implementation of a variety of phenomenological constraints, regardless of their origin. Moreover, we can include theoretical information on the Taylor coefficients by using the local operator product expansion. We showcase the versatility of the parametrization in a series of phenomenological pseudo-fits.
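As a minimal numerical illustration of an expansion in associated Laguerre polynomials (with a generic weight and normalization, not necessarily the paper's conventions), one can project a toy model function onto $L_k^{(1)}$ and check that the truncated series reconstructs it:

```python
import numpy as np

def laguerre1(n, x):
    """Associated Laguerre polynomial L_n^(1)(x) via the standard recurrence."""
    Lm, Lc = np.ones_like(x), 2.0 - x          # L_0^(1), L_1^(1)
    if n == 0:
        return Lm
    for j in range(1, n):
        Lm, Lc = Lc, ((2 * j + 2 - x) * Lc - (j + 1) * Lm) / (j + 1)
    return Lc

x = np.linspace(0.0, 60.0, 20001)
dx = x[1] - x[0]
f = x * np.exp(-x) * (1.0 + 0.3 * x + 0.05 * x**2)   # toy model function

# orthogonality: \int_0^inf x e^{-x} L_n^(1) L_m^(1) dx = (n+1) delta_nm,
# so the coefficients follow from simple projections
coeff = [np.sum(f * laguerre1(n, x)) * dx / (n + 1) for n in range(6)]
recon = x * np.exp(-x) * sum(a * laguerre1(n, x) for n, a in enumerate(coeff))
print(np.max(np.abs(recon - f)))   # small truncation + quadrature error
```

Because the toy function is the weight times a quadratic polynomial, the first three coefficients saturate the expansion and the rest are numerically negligible, mirroring how truncation errors can be controlled in such parametrizations.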
The CMB lensing signal from cosmic voids and superclusters probes the growth of structure in the low-redshift cosmic web. In this analysis, we cross-correlated the Planck CMB lensing map with voids detected in the Dark Energy Survey Year 3 (Y3) data set ($\sim$5,000 deg$^{2}$), extending previous measurements using Y1 catalogues ($\sim$1,300 deg$^{2}$). Given the increased statistical power compared to Y1 data, we report a $6.6\sigma$ detection of negative CMB convergence ($\kappa$) imprints using approximately 3,600 voids detected from a redMaGiC luminous red galaxy sample. However, the measured signal is lower than expected from the MICE N-body simulation that is based on the $\Lambda$CDM model (parameters $\Omega_{\rm m} = 0.25$, $\sigma_8 = 0.8$). The discrepancy is associated mostly with the void centre region. Considering the full void lensing profile, we fit an amplitude $A_{\kappa}=\kappa_{\rm DES}/\kappa_{\rm MICE}$ to a simulation-based template with fixed shape and find a moderate $2\sigma$ deviation in the signal with $A_{\kappa}\approx0.79\pm0.12$. We also examined the WebSky simulation that is based on a Planck 2018 $\Lambda$CDM cosmology, but the results were even less consistent, given the slightly higher matter density fluctuations than in MICE. We then identified superclusters in the DES and the MICE catalogues, and detected their imprints at the $8.4\sigma$ level, again with a lower-than-expected amplitude $A_{\kappa}=0.84\pm0.10$. The combination of voids and superclusters yields a $10.3\sigma$ detection with an $A_{\kappa}=0.82\pm0.08$ constraint on the CMB lensing amplitude; the overall signal is thus $2.3\sigma$ weaker than expected from MICE.
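A fixed-shape amplitude fit of this kind has a closed-form solution, $A = t^{\rm T} C^{-1} d / (t^{\rm T} C^{-1} t)$ with $\sigma_A = (t^{\rm T} C^{-1} t)^{-1/2}$. The sketch below applies it to mock data; the template shape, covariance, and noise level are invented for illustration and do not correspond to the DES measurement.

```python
import numpy as np

# Closed-form linear amplitude fit of data d to a fixed template t with
# covariance C (all quantities below are MOCK values for illustration).
rng = np.random.default_rng(1)
template = -0.01 * np.exp(-np.linspace(0, 3, 20))   # assumed kappa profile shape
cov = np.diag(np.full(20, 2.5e-6))                  # assumed diagonal covariance
data = 0.8 * template + rng.multivariate_normal(np.zeros(20), cov)

cinv = np.linalg.inv(cov)
norm = template @ cinv @ template
A = (template @ cinv @ data) / norm                 # best-fit amplitude
sigma_A = norm ** -0.5                              # its Gaussian error
print(f"A_kappa = {A:.2f} +/- {sigma_A:.2f}")
```

The recovered amplitude scatters around the input value 0.8 within the quoted error, which is how a sub-unity $A_\kappa$ is distinguished from the simulation expectation $A_\kappa = 1$.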
Based on the potential nonrelativistic QCD formalism, we compute the nonrelativistic QCD long-distance matrix elements (LDMEs) for inclusive production of $S$-wave heavy quarkonia. This greatly reduces the number of nonperturbative unknowns and brings in a substantial enhancement in the predictive power of the NRQCD factorization formalism. We obtain improved determinations of the LDMEs and find cross sections and polarizations of $J/\psi$, $\psi(2S)$, and excited $\Upsilon$ states that agree well with LHC data. Our results may have important implications in pinning down the heavy quarkonium production mechanism.
We highlight the need for the development of comprehensive amplitude analysis methods to further our understanding of hadron spectroscopy. Reaction amplitudes constrained by first principles of $S$-matrix theory and by QCD phenomenology are needed to extract robust interpretations of the data from experiments and from lattice calculations.
Despite efforts over several decades, direct-detection experiments have not yet led to the discovery of the dark matter (DM) particle. This has led to increasing interest in alternatives to the Lambda CDM (LCDM) paradigm and alternative DM scenarios (including fuzzy DM, warm DM, self-interacting DM, etc.). In many of these scenarios, DM particles cannot be detected directly, and constraints on their properties can only be arrived at using astrophysical observations. The Dark Energy Spectroscopic Instrument (DESI) is currently one of the most powerful instruments for wide-field surveys. The synergy of DESI with ESA's Gaia satellite and future observing facilities will yield datasets of unprecedented size and coverage that will enable constraints on DM over a wide range of physical and mass scales and across redshifts. DESI will map the Lyman-alpha forest out to z~5 with about 1 million QSO spectra, constraining the clustering of the low-density intergalactic gas and DM halos at high redshift. DESI will obtain radial velocities of 10 million stars in the Milky Way (MW) and Local Group satellites, enabling us to constrain their global DM distributions, as well as the DM distribution on smaller scales. The paradigm of cosmological structure formation has been extensively tested with simulations. However, the majority of simulations to date have focused on collisionless CDM. Simulations with alternatives to CDM have recently been gaining ground but are still in their infancy. While there are numerous publicly available large-box and zoom-in simulations in the LCDM framework, there are no comparable publicly available WDM, SIDM, or FDM simulations. DOE support for a public simulation suite will enable a more cohesive community effort to compare observations from DESI (and other surveys) with numerical predictions and will greatly impact DM science.
RES-NOVA is a newly proposed experiment for the detection of neutrinos from astrophysical sources, mainly supernovae, using an array of cryogenic detectors made of PbWO$_4$ crystals produced from archaeological Pb. This unconventional material, characterized by intrinsic high radiopurity, makes it possible to achieve low background levels in the region of interest for neutrino detection via Coherent Elastic neutrino-Nucleus Scattering (CE$\nu$NS). This signal lies at the detector energy threshold, O(1 keV), and is expected to be hidden by naturally occurring radioactive contaminants of the crystal absorber. Here, we present the results of a radiopurity assay on a 0.84 kg PbWO$_4$ crystal produced from archaeological Pb and operated as a cryogenic detector. The internal radioactive contaminations of the crystal are: $^{232}$Th $<$40 $\mu$Bq/kg, $^{238}$U $<$30 $\mu$Bq/kg, $^{226}$Ra 1.3 mBq/kg, and $^{210}$Pb 22.5 mBq/kg. We also present a background projection for the final experiment and possible mitigation strategies for further background suppression. The achieved results demonstrate the feasibility of realizing this new class of detectors.
This paper reviews the origins, development, and examples of new versions of Micro-Pattern Gas Detectors (MPGDs). The goal of MPGD development was the creation of detectors that could cost-effectively cover large areas while offering excellent position and timing resolution, and the ability to operate at high incident particle rates. The early MPGD developments culminated in the formation of the RD51 collaboration, which has become the critical organization for the promotion of MPGDs and all aspects of their production, characterization, simulation, and use in an expanding array of experimental configurations. For the Snowmass 2021 study, a number of Letters of Interest were received that illustrate ongoing developments and the expanding use of MPGDs. In this paper, we highlight high-precision timing, high-rate applications, trigger-capability expansion of the SRS readout system, and a structure designed for low ion backflow.
Flavor-violating processes in the lepton sector have highly suppressed branching ratios in the standard model. Thus, observation of lepton flavor violation (LFV) constitutes a clear indication of physics beyond the standard model (BSM). We review new physics searches in the processes that violate the conservation of lepton (muon) flavor by two units with muonium and muonium–antimuonium oscillations.
We revisit the theory of background fields constructed on the BRST algebra of a spinning particle with $\mathcal{N}=4$ worldline supersymmetry, whose spectrum contains the graviton but no other fields. On a generic background, the closure of the BRST algebra implies the vacuum Einstein equations with an undetermined cosmological constant. On the other hand, in the "vacuum" background with no metric, the cohomology is given by a collection of free scalar and vector fields. Only certain combinations of linear excitations, necessarily involving a vector field, can be extended beyond the linear level, with the vector field inducing an Einstein metric.
We study the renormalization group of generic effective field theories that include gravity. We follow the on-shell amplitude approach, which provides a simple and efficient method to extract anomalous dimensions while avoiding complications from gauge redundancies. As an invaluable tool we introduce a modified helicity $\tilde{h}$ under which gravitons carry one unit instead of two. With this modified helicity we easily explain old, and uncover new, non-renormalization theorems for theories including gravitons. We provide complete results for the one-loop gravitational renormalization of a generic minimally coupled gauge theory with scalars and fermions, to all orders in $M_{\rm Pl}$, as well as for the renormalization of dimension-six operators including at least one graviton, all up to four external particles.
Dark matter (DM) self-interactions have been proposed to solve problems on small length scales within the standard cold DM cosmology. Here, we investigate the effects of DM self-interactions in merging systems of galaxies and galaxy clusters with equal and unequal mass ratios. We perform N-body DM-only simulations of idealized setups to study the effects of DM self-interactions that are elastic and velocity-independent. We go beyond the commonly adopted assumption of large-angle (rare) DM scatterings, paying attention to the impact of small-angle (frequent) scatterings on astrophysical observables and related quantities. Specifically, we focus on DM-galaxy offsets, galaxy-galaxy distances, halo shapes, morphology, and the phase-space distribution. Moreover, we compare two methods to identify peaks: one based on the gravitational potential and one based on isodensity contours. We find that the results are sensitive to the peak finding method, which poses a challenge for the analysis of merging systems in simulations and observations, especially for minor mergers. Large DM-galaxy offsets can occur in minor mergers, especially with frequent self-interactions. The subhalo tends to dissolve quickly for these cases. While clusters in late merger phases lead to potentially large differences between rare and frequent scatterings, we believe that these differences are non-trivial to extract from observations. We therefore study the galaxy/star populations which remain distinct even after the DM haloes have coalesced. We find that these collisionless tracers behave differently for rare and frequent scatterings, potentially giving a handle to learn about the micro-physics of DM.
We consider and derive the gravitational soft theorem up to the sub-subleading power from the perspective of effective Lagrangians. The emergent soft gauge symmetries of the effective Lagrangian provide a transparent explanation of why soft graviton emission is universal to sub-subleading power, but gauge boson emission is not. They also suggest a physical interpretation of the form of the soft factors in terms of the charges related to the soft transformations and the kinematics of the multipole expansion. The derivation is done directly at Lagrangian level, resulting in an operatorial form of the soft theorems. In order to highlight the differences and similarities of the gauge-theory and gravitational soft theorems, we include an extensive discussion of soft gauge-boson emission from scalar, fermionic and vector matter at subleading power.
We present zELDA (redshift Estimator for Line profiles of Distant Lyman Alpha emitters), an open-source code to fit Lyman α (Ly α) line profiles. The main motivation is to provide the community with an easy-to-use and fast tool to analyse Ly α line profiles uniformly, to improve the understanding of Ly α emitting galaxies. zELDA is based on line profiles of the commonly used 'shell model' pre-computed with the full Monte Carlo radiative transfer code LyaRT. Via interpolation between these spectra and the addition of noise, we assemble a suite of realistic Ly α spectra which we use to train a deep neural network. We show that the neural network can predict the model parameters to high accuracy (e.g. ≲ 0.34 dex in H I column density for R ~ 12 000) and thus allows for a significant speedup over existing fitting methods. As a proof of concept, we demonstrate the potential of zELDA by fitting 97 observed Ly α line profiles from the LASD data base. Comparing the fitted values with the measured systemic redshifts of these sources, we find that the Ly α line profile determines their rest-frame Ly α wavelength with a remarkably good accuracy of ~0.3 Å ($\sim 75\,\, {\rm km\, s}^{-1}$). Comparing the predicted outflow properties with the observed Ly α luminosity and equivalent width, we find several possible trends. For example, we find an anticorrelation between the Ly α luminosity and the outflow neutral hydrogen column density, which might be explained by the radiative transfer process within galaxies.
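The training-set construction described above — interpolation between precomputed model spectra plus noise injection — can be caricatured in a few lines; the sketch below uses mock Gaussian profiles in place of the LyaRT 'shell-model' grid, so the grid, parameters, and noise model are all illustrative assumptions.

```python
import numpy as np

# Toy training-set assembly: mix precomputed model line profiles and add
# Gaussian noise (the profiles here are MOCK Gaussians, not LyaRT output).
rng = np.random.default_rng(0)
wave = np.linspace(1213.0, 1219.0, 200)              # wavelength grid [A]

def model_profile(center, width):
    return np.exp(-0.5 * ((wave - center) / width) ** 2)

grid = [model_profile(c, w) for c in (1215.2, 1216.2) for w in (0.5, 1.0)]

def mock_spectrum(snr=20.0):
    # random convex combination of grid spectra, plus noise at fixed S/N
    w = rng.dirichlet(np.ones(len(grid)))
    spec = np.tensordot(w, grid, axes=1)
    noise = rng.normal(0.0, spec.max() / snr, size=wave.size)
    return spec + noise

train = np.stack([mock_spectrum() for _ in range(100)])
print(train.shape)                                    # (100, 200)
```

Pairing each noisy spectrum with the parameters used to generate it yields the (input, label) set on which a regression network can then be trained.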
Measurements of exoplanetary orbital obliquity angles for different classes of planets are an essential tool in testing various planet formation theories. Measurements for transiting planets with relatively long orbital periods (P > 10 d) present a rather difficult observational challenge. Here we present the obliquity measurement for the warm sub-Saturn planet HD 332231 b, which was discovered through Transiting Exoplanet Survey Satellite photometry of sectors 14 and 15, on a relatively long orbital period (18.7 d). Through a joint analysis of previously obtained spectroscopic data and our newly obtained CARMENES transit observations, we estimated the spin-orbit misalignment angle, λ, to be −42.0 (+11.3/−10.6) deg, which challenges Laplacian ideals of planet formation. Through the addition of these new radial velocity data points obtained with CARMENES, we also derived marginal improvements on other orbital and bulk parameters for the planet, as compared to previously published values. We showed the robustness of the obliquity measurement through model comparison with an aligned orbit. Finally, we demonstrated the inability of the obtained data to probe any possible extended atmosphere of the planet, due to a lack of precision, and placed the atmosphere in the context of a parameter detection space.
SN 2020cxd is a representative of the family of low-energy, underluminous Type IIP supernovae (SNe), whose observations and analysis were recently reported by Yang et al. (2021). Here we re-evaluate the observational data for the diagnostic SN properties by employing the hydrodynamic explosion model of a 9 Msun red supergiant progenitor with an iron core and a pre-collapse mass of 8.75 Msun. The explosion of the star was obtained by the neutrino-driven mechanism in a fully self-consistent simulation in three dimensions (3D). Multi-band light curves and photospheric velocities for the plateau phase are computed with the one-dimensional radiation-hydrodynamics code STELLA, applied to the spherically averaged 3D explosion model as well as to spherized radial profiles in different directions of the 3D model. We find that the overall evolution of the bolometric light curve, the duration of the plateau phase, and the basic properties of the multi-band emission can be well reproduced by our SN model with its explosion energy of only 0.7x10^50 erg and an ejecta mass of 7.4 Msun. These values are considerably lower than the previously reported numbers, but they are compatible with those needed to explain the fundamental observational properties of the prototype low-luminosity SN 2005cs. Because of the good compatibility of our photospheric velocities with line velocities determined for SN 2005cs, we conclude that the line velocities of SN 2020cxd are probably overestimated by up to a factor of about 3. The evolution of the line velocities of SN 2005cs compared to photospheric velocities in different explosion directions might point to intrinsic asymmetries in the SN ejecta.
Feynman diagrams constitute one of the essential ingredients for making precision predictions for collider experiments. Yet, while the simplest Feynman diagrams can be evaluated in terms of multiple polylogarithms -- whose properties as special functions are well understood -- more complex diagrams often involve integrals over complicated algebraic manifolds. Such diagrams already contribute at NNLO to the self-energy of the electron, $t \bar{t}$ production, $\gamma \gamma$ production, and Higgs decay, and appear at two loops in the planar limit of maximally supersymmetric Yang-Mills theory. This makes the study of these more complicated types of integrals of phenomenological as well as conceptual importance. In this white paper contribution to the Snowmass community planning exercise, we provide an overview of the state of research on Feynman diagrams that involve special functions beyond multiple polylogarithms, and highlight a number of research directions that constitute essential avenues for future investigation.
We evaluate the leading-order hadronic vacuum polarization contribution to the anomalous magnetic moment of the muon with two light flavors in minimal hard-wall and soft-wall holographic QCD models, as well as in simple generalizations thereof, and compare with the rather precise results available from dispersive and lattice approaches. While holographic QCD cannot be expected to shed light on the existing small discrepancies between the latter, this comparison in turn provides useful information on the holographic models, which have been used to evaluate hadronic light-by-light contributions where errors in data-driven and lattice approaches are more sizable. In particular, in the hard-wall model that has recently been used to implement the Melnikov-Vainshtein short-distance constraint on hadronic light-by-light contributions, a matching of the hadronic vacuum polarization to the data-driven approach points to the same correction of parameters that has been proposed recently in order to account for next-to-leading order effects.
P-type point contact (PPC) HPGe detectors are a leading technology for rare event searches due to their excellent energy resolution, low thresholds, and multi-site event rejection capabilities. We have characterized a PPC detector's response to α particles incident on the sensitive passivated and p+ surfaces, a previously poorly understood source of background. The detector studied is identical to those in the MAJORANA DEMONSTRATOR experiment, a search for neutrinoless double-beta decay (0νββ) in 76Ge. α decays on most of the passivated surface exhibit significant energy loss due to charge trapping, with waveforms exhibiting a delayed charge recovery (DCR) signature caused by the slow collection of a fraction of the trapped charge. The DCR is found to be complementary to existing methods of α identification, reliably identifying α background events on the passivated surface of the detector. We demonstrate effective rejection of all surface α events (to within statistical uncertainty) with a loss of only 0.2% of bulk events by combining the DCR discriminator with previously used methods. The DCR discriminator has been used to reduce the background rate in the 0νββ region-of-interest window by an order of magnitude in the MAJORANA DEMONSTRATOR and will be used in the upcoming LEGEND-200 experiment.
The long-standing controversy about the isospin dependence of the effective Dirac mass in ab-initio calculations of asymmetric nuclear matter is clarified by solving the Relativistic Brueckner-Hartree-Fock equations in the full Dirac space. The symmetry energy and its slope parameter at the saturation density are $E_{\text{sym}}(\rho_0)=33.1$ MeV and $L=65.2$ MeV, in agreement with empirical and experimental values. Further applications predict the neutron star radius $R_{1.4M_\odot}\approx 12$ km and the maximum mass of a neutron star $M_{\text{max}}\leq 2.4M_\odot$.
In order to solve the time-independent three-dimensional Schrödinger equation, one can transform the time-dependent Schrödinger equation to imaginary time and use a parallelized iterative method to obtain the full three-dimensional eigenstates and eigenvalues on very large lattices. In the case of the non-relativistic Schrödinger equation, there exists a publicly available code called quantumfdtd which implements this algorithm. In this paper, we (a) extend the quantumfdtd code to include the case of the relativistic Schrödinger equation and (b) add two optimized Fast Fourier Transform (FFT) based kinetic energy terms for non-relativistic cases. The new kinetic energy terms (two non-relativistic and one relativistic) are computed using the parallelized FFT algorithm provided by the FFTW 3 library. The resulting quantumfdtd v3 code, which is publicly released with this paper, is backwards compatible with version 2, supporting explicit finite-difference schemes in addition to the new FFT-based schemes. Finally, we (c) extend the original code so that it supports arbitrary external file-based potentials and the option to project out distinct parity eigenstates from the solutions. Herein, we provide details of the quantumfdtd v3 implementation, comparisons and tests of the three new kinetic energy terms, and code documentation.
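A one-dimensional sketch of the underlying algorithm — imaginary-time evolution with an FFT-based kinetic-energy term — is given below. This is only a hedged illustration: quantumfdtd itself is a parallelized 3D code built on FFTW, whereas here hbar = m = 1 and the potential is a simple harmonic oscillator.

```python
import numpy as np

# Imaginary-time split-step evolution with an FFT kinetic term (1D sketch).
# The ground state survives longest under exp(-tau H); renormalizing each
# step projects out the lowest eigenstate. hbar = m = omega = 1.
N, L_box = 256, 20.0
x = (np.arange(N) - N // 2) * (L_box / N)
dx = L_box / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                      # harmonic oscillator potential

psi = np.exp(-((x - 1.0) ** 2))     # arbitrary start with ground-state overlap
dtau = 0.01
for _ in range(5000):
    # Strang splitting: half potential, full kinetic (in k-space), half potential
    psi = psi * np.exp(-0.5 * dtau * V)
    psi = np.fft.ifft(np.exp(-dtau * 0.5 * k**2) * np.fft.fft(psi))
    psi = psi * np.exp(-0.5 * dtau * V)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # renormalize

# energy expectation value <H> = <T> + <V>; exact ground-state value is 0.5
psik = np.fft.fft(psi)
T = np.sum(0.5 * k**2 * np.abs(psik) ** 2) / np.sum(np.abs(psik) ** 2)
E = T + np.sum(V * np.abs(psi) ** 2) * dx
print(round(float(np.real(E)), 3))
```

The same iteration, with the FFT applied along three axes and the excited states extracted by re-orthogonalization or symmetry projection, is the essence of the lattice method described above.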
We investigate the deformations and rigidity of boundary Heisenberg-like algebras. In particular, we focus on the Heisenberg and Heisenberg ⊕ witt algebras which arise as symmetry algebras in three-dimensional gravity theories. As a result of the deformation procedure we find a large class of algebras. While some of these algebras are new, some of them have already been obtained as asymptotic and boundary symmetry algebras, supporting the idea that symmetry algebras associated with diverse boundary conditions and spacetime loci are algebraically interconnected through deformation of algebras. The deformation/contraction relationships between the new algebras are investigated. In addition, it is also shown that the deformation procedure reaches new algebras inaccessible to the Sugawara construction. As a byproduct of our analysis, we find that Heisenberg ⊕ witt and the asymptotic symmetry algebra Weyl-bms3 are not connected via a single deformation but in a more subtle way.
In this work we report the realization of the first low-threshold cryogenic detector that uses diamond as an absorber for astroparticle physics applications. We tested two 0.175$\,$g CVD diamond samples, each instrumented with a W-TES. The sensors showed transitions at about 25 mK. We present the performance of the diamond detectors and highlight the best-performing one, for which we obtained an energy threshold as low as 16.8 eV. This promising result lays the foundation for the use of diamond in different fields of application where a low threshold and excellent energy resolution are required, such as light dark matter searches and BSM physics with coherent elastic neutrino-nucleus scattering.
The origin of the diffuse gamma-ray background (DGRB) detected by EGRET and Fermi-LAT, i.e. the emission that remains after subtracting all individual sources from the observed gamma-ray sky, is unknown. The DGRB possibly encompasses contributions from different source populations such as star-forming galaxies, starburst galaxies, active galactic nuclei, gamma-ray bursts, or galaxy clusters. Here, we combine cosmological magnetohydrodynamical simulations of clusters of galaxies with the propagation of cosmic rays (CRs) using Monte Carlo simulations, in the redshift range $z\leq 5.0$, and find that the integrated gamma-ray flux from clusters can contribute up to $100\%$ of the DGRB flux observed by Fermi-LAT above $100$~GeV, for CR spectral indices $\alpha = 1.5 - 2.5$ and energy cutoffs $E_{\text{max}} = 10^{16} - 10^{17}$~eV. The flux is dominated by clusters with masses $10^{13}< M/M_{\odot} < 10^{15}$ and redshift $ z \leq 0.3$. Our results also predict the potential observation of high-energy gamma rays from clusters by experiments like HAWC, LHAASO, and the upcoming CTA.
Intracellular protein patterns are described by (nearly) mass-conserving reaction-diffusion systems. While these patterns initially form out of a homogeneous steady state due to the well-understood Turing instability, no general theory exists for the dynamics of fully nonlinear patterns. We develop a unifying theory for wavelength-selection dynamics in (nearly) mass-conserving two-component reaction-diffusion systems independent of the specific mathematical model chosen. This encompasses both the dynamics of the mesa- and peak-shaped patterns found in these systems. Our analysis uncovers a diffusion- and a reaction-limited regime of the dynamics, which provides a systematic link between the dynamics of mass-conserving reaction-diffusion systems and the Cahn-Hilliard as well as conserved Allen-Cahn equations, respectively. A stability threshold in the family of stationary patterns with different wavelengths predicts the wavelength selected for the final stationary pattern. At short wavelengths, self-amplifying mass transport between single pattern domains drives coarsening while at large wavelengths weak source terms that break strict mass conservation lead to an arrest of the coarsening process. The rate of mass competition between pattern domains is calculated analytically using singular perturbation theory, and rationalized in terms of the underlying physical processes. The resulting closed-form analytical expressions enable us to quantitatively predict the coarsening dynamics and the final pattern wavelength. We find excellent agreement of these expressions with numerical results. The systematic understanding of the length-scale dynamics of fully nonlinear patterns in two-component systems provided here builds the basis to reveal the mechanisms underlying wavelength selection in multi-component systems with potentially several conservation laws.
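Schematically, and with illustrative notation not tied to any specific model studied here, a strictly mass-conserving two-component reaction-diffusion system has the form

```latex
\partial_t u = D_u \nabla^2 u + f(u,v), \qquad
\partial_t v = D_v \nabla^2 v - f(u,v),
```

so the total mass $\int (u+v)\,\mathrm{d}x$ is conserved under no-flux boundary conditions; weak source terms are small perturbations that break this conservation law and can thereby arrest coarsening.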
We have obtained deep 1 and 3 mm spectral-line scans towards a candidate z ≳ 5 ALMA-identified AzTEC submillimetre galaxy (SMG) in the Subaru/XMM-Newton Deep Field (or UKIDSS UDS), ASXDF1100.053.1, using the NOrthern Extended Millimeter Array (NOEMA), aiming to obtain its spectroscopic redshift. ASXDF1100.053.1 is an unlensed optically dark millimetre-bright SMG with S1100 μm = 3.5 mJy and KAB > 25.7 (2σ), which was expected to lie at z = 5-7 based on its radio-submillimetre photometric redshift. Our NOEMA spectral scan detected line emission due to 12CO(J = 5-4) and (J = 6-5), providing a robust spectroscopic redshift, zCO = 5.2383 ± 0.0005. Energy-coupled spectral energy distribution modelling from optical to radio wavelengths indicates an infrared luminosity LIR = 8.3−1.4+1.5 × 1012 L⊙, a star formation rate SFR = 630−380+260 M⊙ yr−1, a dust mass Md = 4.4−0.3+0.4 × 108 M⊙, a stellar mass Mstellar = 3.5−1.4+3.6 × 1011 M⊙, and a dust temperature Td = 37.4−1.8+2.3 K. The CO luminosity allows us to estimate a gas mass Mgas = 3.1 ± 0.3 × 1010 M⊙, suggesting a gas-to-dust mass ratio of around 70, fairly typical for z ∼ 2 SMGs. ASXDF1100.053.1 has ALMA continuum size Re = 1.0−0.1+0.2 kpc, so its surface infrared luminosity density ΣIR is 1.2−0.2+0.1 × 1012 L⊙ kpc−2. These physical properties indicate that ASXDF1100.053.1 is a massive dusty star-forming galaxy with an unusually compact starburst. It lies close to the star-forming main sequence at z ∼ 5, with low Mgas/Mstellar = 0.09, SFR/SFRMS(RSB) = 0.6, and a gas-depletion time τdep of ≈50 Myr, modulo assumptions about the stellar initial mass function in such objects. ASXDF1100.053.1 has extreme values of Mgas/Mstellar, RSB, and τdep compared to SMGs at z ∼ 2-4, and those of ASXDF1100.053.1 are the smallest among SMGs at z > 5. ASXDF1100.053.1 is likely a late-stage dusty starburst prior to passivisation. 
The number of z = 5.1-5.3 unlensed SMGs now suggests a number density dN/dz = 30.4 ± 19.0 deg−2, barely consistent with the latest cosmological simulations.
The He I λ10833 Å triplet is a powerful tool for characterising the upper atmosphere of exoplanets and tracing possible mass loss. Here, we analysed one transit of GJ 1214 b observed with the CARMENES high-resolution spectrograph to study its atmosphere via transmission spectroscopy around the He I triplet. Although previous studies using lower-resolution instruments have reported non-detections of He I in the atmosphere of GJ 1214 b, we report here the first potential detection. We reconcile the conflicting results by arguing that previous transit observations did not present good opportunities for the detection of He I, due to telluric H2O absorption and OH emission contamination. We simulated those earlier observations and show evidence that the planetary signal was contaminated. From our single non-telluric-contaminated transit, we determined an excess absorption of 2.10−0.50+0.45% (4.6 σ) with a full width at half maximum (FWHM) of 1.30−0.25+0.30 Å. The detection of He I is statistically significant at the 4.6 σ level, but its repeatability could not be confirmed because only one transit was available. By applying a hydrodynamical model and assuming an H/He composition of 98/2, we found that GJ 1214 b would undergo hydrodynamic escape in the photon-limited regime, losing its primary atmosphere with a mass-loss rate of (1.5-18) × 1010 g s−1 and an outflow temperature in the range of 2900-4400 K. Further high-resolution follow-up observations are needed to confirm and fully characterise the extended atmosphere surrounding GJ 1214 b. If confirmed, this would be strong evidence that this planet has a primordial atmosphere accreted from the original planetary nebula. Despite previous intensive observations from space- and ground-based observatories, our He I excess absorption is the first tentative detection of a chemical species in the atmosphere of this benchmark sub-Neptune planet.
We construct "soft-collinear gravity", the effective field theory which describes the interaction of collinear and soft gravitons with matter (and themselves), to all orders in the soft-collinear power expansion. Despite the absence of collinear divergences in gravity at leading power, the construction exhibits remarkable similarities with soft-collinear effective theory of QCD (gauge fields). It reveals an emergent soft background gauge symmetry, which allows for a manifestly gauge-invariant representation of the interactions in terms of a soft covariant derivative, the soft Riemann tensor, and a covariant generalisation of the collinear light-cone gauge metric field. The gauge symmetries control both the unsuppressed collinear field components and the inherent inhomogeneity in λ of the invariant objects to all orders, resulting in a consistent expansion.
The effective-range expansion for the $T_{cc}^+$ state in the LHCb model is examined. The finite width of the $D^*$ leads to a shift of the expansion point into the complex plane to match the analytic properties of the expanded amplitude. We perform an analytic continuation of the three-body scattering amplitude to the complex plane in the vicinity of the branch point and develop a robust procedure for computing the expansion coefficients. The results yield a nearly real scattering length and two contributions to the effective range which have not been accounted for before.
The advent of deep learning has yielded powerful tools to automatically compute gradients of computations. This is because training a neural network equates to iteratively updating its parameters using gradient descent to find the minimum of a loss function. Deep learning is then a subset of a broader paradigm: a workflow with free parameters that is end-to-end optimisable, provided one can keep track of the gradients all the way through. This work introduces neos: an example implementation following this paradigm of a fully differentiable high-energy physics workflow, capable of optimising a learnable summary statistic with respect to the expected sensitivity of an analysis. Doing this results in an optimisation process that is aware of the modelling and treatment of systematic uncertainties.
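The paradigm can be sketched in a deliberately minimal toy example (not the actual neos implementation): a learnable per-event summary statistic is tuned by gradient ascent on a simplified sensitivity measure. Here the gradient is written by hand for transparency; a tool like neos instead obtains it by automatic differentiation through the full statistical model, including the treatment of systematic uncertainties. All data, shapes, and the $s/\sqrt{b}$ figure of merit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = rng.normal(1.0, 1.0, (200, 3))   # toy "signal" events (hypothetical)
bkg = rng.normal(0.0, 1.0, (200, 3))   # toy "background" events (hypothetical)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def significance(p):
    # simplified s / sqrt(b) sensitivity built from a learnable
    # per-event summary statistic: a sigmoid of a linear projection
    s = sigmoid(sig @ p).sum()
    b = sigmoid(bkg @ p).sum() + 1.0   # regularised expected background
    return s / np.sqrt(b)

def grad_significance(p):
    # hand-written gradient of the figure of merit with respect to the
    # summary-statistic parameters; autodiff would supply this automatically
    ss, sb = sigmoid(sig @ p), sigmoid(bkg @ p)
    s, b = ss.sum(), sb.sum() + 1.0
    ds = (ss * (1.0 - ss)) @ sig       # d s / d p
    db = (sb * (1.0 - sb)) @ bkg       # d b / d p
    return ds / np.sqrt(b) - 0.5 * s * b ** -1.5 * db

p = np.zeros(3)
z0 = significance(p)
for _ in range(100):
    p += 0.05 * grad_significance(p)   # gradient ascent on the sensitivity
z1 = significance(p)
```

After training, the summary statistic separates the two toy populations better, so the expected sensitivity `z1` exceeds its starting value `z0`.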
We compute the two-loop mixed QCD-Electroweak corrections to $q \overline{q} \to H g$ and its crossed channels $q g \to H q$, $\overline{q} g \to H \overline{q}$, limiting ourselves to the contribution of light virtual quarks. We compute the independent helicity amplitudes as well as the form factors for this process, expressing them in terms of hyperlogarithms with algebraic arguments. The Feynman integrals are computed by direct integration over Feynman parameters and the results are expressed in terms of a basis of rational prefactors.
Solutions to the vacuum Einstein field equations with a cosmological constant, such as de Sitter space and anti-de Sitter space, are basic ingredients in different cosmological and theoretical developments. It is also well known that complex structures admit metrics of this type; the most famous example is the complex projective space endowed with the Fubini-Study metric. In this work, we perform a systematic study of Einstein complex geometries derived from a logarithmic Kähler potential. Depending on the contribution to the argument of this logarithmic term, we distinguish among direct, inverted, and hybrid coordinates. These are directly related to the signature of the metric and determine the maximal domain of the complex space where the geometry can be defined.
Brown dwarfs and exoplanets provide unique atmospheric regimes that hold information about their formation routes and evolutionary states. Modelling mineral cloud particle formation is key to preparing for missions and instruments like CRIRES+, JWST and ARIEL, as well as possible polarimetry missions like {\sc PolStar}. The aim is to support more detailed observations that demand a greater understanding of microphysical cloud processes. We extend our kinetic cloud formation model, which treats nucleation, condensation, evaporation and settling of mixed-material cloud particles, to consistently model cloud particle-particle collisions. The new hybrid code, {\sc HyLandS}, is applied to a grid of {\sc Drift-Phoenix} (T, p)-profiles. Effective medium theory and Mie theory are used to investigate the optical properties. Turbulence is the main process driving collisions, with collisions becoming the dominant process at the cloud base ($p>10^{-4}\,{\rm bar}$). Collisions produce one of three outcomes: fragmenting atmospheres ($\log_{10}(g)=3$), coagulating atmospheres ($\log_{10}(g)=5$, $T_{\rm eff} \leq 1800\, {\rm K}$) and condensational-growth-dominated atmospheres ($\log_{10}(g)=5$, $T_{\rm eff} > 1800\, {\rm K}$). The cloud particle opacity slope at optical wavelengths (HST) is increased by fragmentation, as are the silicate features at mid-infrared wavelengths. The hybrid moment-bin method of {\sc HyLandS} demonstrates the feasibility of combining a moment and a bin method whilst assuring element conservation. It provides a powerful and fast tool for capturing general trends of particle collisions, consistently with other microphysical processes. Collisions are important in exoplanet and brown dwarf atmospheres but cannot be assumed to be hit-and-stick only. The spectral effects of collisions complicate inferences of cloud particle size and material composition from observational data.
We study weak radiative $|\Delta c|=|\Delta u|=1$ decays of the charmed antitriplet ($\Lambda_c$, $\Xi_c^{+}$, $\Xi_c^{0}$) and sextet ($\Sigma_c^{++}$, $\Sigma_c^+$, $\Sigma_c^0$, $\Xi_c^{\prime +}$, $\Xi_c^{\prime 0}$, $\Omega_c$) baryons in the standard model (SM) and beyond. We work out $SU(2)$ and $SU(3)_F$ symmetry relations. We propose to study self-analyzing decay chains such as $\Xi_c^+ \to \Sigma^+ (\to p \pi^0) \gamma$ and $\Xi_c^0 \to \Lambda (\to p \pi^-) \gamma$, which enable new-physics-sensitive polarization studies. SM contributions can be controlled by a corresponding analysis of the Cabibbo-favored decays $\Lambda_c^+ \to \Sigma^+ (\to p \pi^0) \gamma$ and $\Xi_c^0 \to \Xi^0 (\to \Lambda \pi^0) \gamma$. Further tests of the SM are available with initially polarized baryons, including $\Lambda_c \to p \gamma$ together with $\Lambda_c \to \Sigma^+ \gamma$ decays, or $\Omega_c \to \Xi^0 \gamma$ together with $\Omega_c \to (\Lambda,\Sigma^0) \gamma$. In addition, CP-violating new physics contributions to dipole operators can enhance CP asymmetries up to the few-percent level.
We make the case for the systematic, reliable preservation of event-wise data, derived data products, and executable analysis code. This preservation enables the analyses' long-term future reuse, in order to maximise the scientific impact of publicly funded particle-physics experiments. We cover the needs of both the experimental and theoretical particle physics communities, and outline the goals and benefits that are uniquely enabled by analysis recasting and reinterpretation. We also discuss technical challenges and infrastructure needs, as well as sociological challenges and changes, and give summary recommendations to the particle-physics community.
We propose a parametrization of the leading $B$-meson light-cone distribution amplitude (LCDA) in heavy-quark effective theory (HQET). In position space, it uses a conformal transformation that yields a systematic Taylor expansion and an integral bound, which enables control of the truncation error. Our parametrization further produces compact analytical expressions for a variety of derived quantities. At a given reference scale, our momentum-space parametrization corresponds to an expansion in associated Laguerre polynomials, which turn into confluent hypergeometric functions ${}_1F_1$ under renormalization-group evolution at one-loop accuracy. Our approach thus allows a straightforward and transparent implementation of a variety of phenomenological constraints, regardless of their origin. Moreover, we can include theoretical information on the Taylor coefficients by using the local operator product expansion. We showcase the versatility of the parametrization in a series of phenomenological pseudo-fits.
The non-relativistic effective theory of dark matter-nucleon interactions depends on 28 coupling strengths for dark matter spin up to 1/2. Due to the vast parameter space of the effective theory, most experiments searching for dark matter interpret their results assuming that only one of the coupling strengths is non-zero. On the other hand, dark matter models generically lead in the non-relativistic limit to several interactions which interfere with one another; therefore, the published limits cannot be straightforwardly applied to model predictions. We present a method to determine a rigorous upper limit on the dark matter-nucleon interaction strength including all possible interferences among operators. We illustrate the method by deriving model-independent upper limits on the interaction strengths from the null search results of XENON1T, PICO-60 and IceCube. For some interactions, the limits on the coupling strengths are relaxed by more than one order of magnitude. We also present a method that allows one to combine the results from different experiments, thus exploiting the synergy between different targets in exploring the parameter space of dark matter-nucleon interactions.
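The essence of such an interference-aware limit can be sketched in a toy numerical example (illustrative only, not the paper's actual procedure). If the predicted rate were a quadratic form $R(c)=c^{T}Mc$ in the coupling strengths, the most conservative bound consistent with every interference pattern would follow from the smallest eigenvalue of $M$, whereas a single-coupling analysis uses only one diagonal entry. The matrix and limit below are hypothetical:

```python
import numpy as np

# Hypothetical positive-definite response matrix M for three effective
# operators, encoding their interference: predicted rate R(c) = c^T M c.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
R_max = 1.0  # illustrative experimental upper limit on the rate

lam = np.linalg.eigvalsh(M)            # spectrum of the quadratic form
c_single = np.sqrt(R_max / M[0, 0])    # limit if only operator 1 is active
c_worst = np.sqrt(R_max / lam.min())   # conservative limit over all directions
```

Because interference can suppress the rate along some direction in coupling space, the conservative limit `c_worst` is weaker (larger) than the single-operator limit `c_single`, mirroring the relaxation by up to an order of magnitude reported above.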
Recent experimental results in $B$ physics from Belle, BaBar and LHCb suggest new physics (NP) in the weak $b\to c$ charged-current and $b\to s$ neutral-current processes. Here we focus on the charged-current case and specifically on the decay modes $B\to D^{*+}\ell^- \bar{\nu}$ with $\ell = e, \mu,$ and $\tau$. The world averages of the ratios $R_D$ and $R_{D^*}$ currently differ from the Standard Model (SM) by $3.4\sigma$, while $\Delta A_{FB} = A_{FB}(B\to D^{*} \mu\nu) - A_{FB} (B\to D^{*} e \nu)$ is found to be $4.1\sigma$ away from the SM prediction in an analysis of 2019 Belle data. These intriguing results suggest an urgent need for improved simulation and analysis techniques in $B\to D^{*+}\ell^- \bar{\nu}$ decays. Here we describe a Monte Carlo event-generator tool based on EVTGEN, developed to allow simulation of the NP signatures in $B\to D^*\ell^- \nu$ which arise due to the interference between the SM and NP amplitudes. As a demonstration of the proposed approach, we exhibit some examples of NP couplings that are consistent with current data and could explain the $\Delta A_{FB}$ anomaly in $B\to D^*\ell^- \nu$ while remaining consistent with other constraints. We show that $\Delta$-type observables such as $\Delta A_{FB}$ and $\Delta S_5$ eliminate most QCD uncertainties from form factors and allow for clean measurements of NP. We introduce correlated observables that improve the sensitivity to NP. Finally, we discuss prospects for improved observables sensitive to NP couplings with the expected 50 ab$^{-1}$ of Belle II data, which seems ideally suited for this class of measurements.
We stress the importance of precise measurements of rare decays $K^+\rightarrow\pi^+\nu\bar\nu$, $K_L\rightarrow\pi^0\nu\bar\nu$, $K_{L,S}\to\mu^+\mu^-$ and $K_{L,S}\to\pi^0\ell^+\ell^-$ for the search of new physics (NP). This includes both branching ratios and the distributions in $q^2$, the invariant mass-squared of the neutrino system in the case of $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$ and of the $\ell^+\ell^-$ system in the case of the remaining decays. In particular the correlations between these observables and their correlations with the ratio $\varepsilon'/\varepsilon$ in $K_L\to\pi\pi$ decays, the CP-violating parameter $\varepsilon_K$ and the $K^0-\bar K^0$ mass difference $\Delta M_K$, should help to disentangle the nature of possible NP. We stress the strong sensitivity of all observables with the exception of $\Delta M_K$ to the CKM parameter $|V_{cb}|$ and list a number of $|V_{cb}|$-independent ratios within the SM which exhibit rather different dependences on the angles $\beta$ and $\gamma$ of the unitarity triangle. The particular role of these decays in probing very short distance scales far beyond the ones explored at the LHC is emphasized. In this context the role of the Standard Model Effective Field Theory (SMEFT) is very important. We also address briefly the issue of the footprints of Majorana neutrinos in $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$.
We search for the signature of parity-violating physics in the cosmic microwave background, called cosmic birefringence, using the Planck data release 4. We initially find a birefringence angle of β =0.30 °±0.11 ° (68% C.L.) for nearly full-sky data. The values of β decrease as we enlarge the Galactic mask, which can be interpreted as the effect of polarized foreground emission. Two independent ways to model this effect are used to mitigate the systematic impact on β for different sky fractions. We choose not to assign cosmological significance to the measured value of β until we improve our knowledge of the foreground polarization.
Using numerical simulations, we investigate the gravitational evolution of filamentary molecular cloud structures and their condensation into dense protostellar cores. One possible process is the so-called 'edge effect', the pile-up of matter at the ends of a filament due to self-gravity. This effect is predicted by theory but only rarely observed. To better understand the underlying processes, we use a simple analytic approach to describe the collapse and the corresponding collapse time. We identify two distinct phases: the first phase is dominated by the free fall of the filament under its self-gravity. In the second phase, after the turning point, the collapse is balanced by the ram pressure produced by the material inside the filament, which leads to a constant collapse velocity. This approach reproduces the established collapse time of uniform-density filaments and agrees well with our hydrodynamic simulations. In addition, we investigate the influence of different radial density profiles on the collapse. We find that the deviations compared to the uniform filament are less than 10%. Therefore, the analytic collapse model of the uniform-density filament is an excellent general approach.
RES-NOVA is a newly proposed experiment for the detection of neutrinos from astrophysical sources, mainly supernovae, using an array of cryogenic detectors made of PbWO$_4$ crystals produced from archaeological Pb. This unconventional material, characterized by intrinsically high radiopurity, makes it possible to achieve low background levels in the region of interest for neutrino detection via Coherent Elastic neutrino-Nucleus Scattering (CE$\nu$NS). This signal lies at the detector energy threshold, O(1 keV), and is expected to be hidden by naturally occurring radioactive contaminants of the crystal absorber. Here, we present the results of a radiopurity assay on a 0.84 kg PbWO$_4$ crystal produced from archaeological Pb and operated as a cryogenic detector. The internal radioactive contaminations of the crystal are: $^{232}$Th $<$40 $\mu$Bq/kg, $^{238}$U $<$30 $\mu$Bq/kg, $^{226}$Ra 1.3 mBq/kg and $^{210}$Pb 22.5 mBq/kg. We also present a background projection for the final experiment and possible mitigation strategies for further background suppression. The achieved results demonstrate the feasibility of realizing this new class of detectors.
Cross-correlations of galaxy positions and galaxy shears with maps of gravitational lensing of the cosmic microwave background (CMB) are sensitive to the distribution of large-scale structure in the Universe. Such cross-correlations are also expected to be immune to some of the systematic effects that complicate correlation measurements internal to galaxy surveys. We present measurements and modeling of the cross-correlations between galaxy positions and galaxy lensing measured in the first three years of data from the Dark Energy Survey with CMB lensing maps derived from a combination of data from the 2500 deg$^2$ SPT-SZ survey conducted with the South Pole Telescope and full-sky data from the Planck satellite. The CMB lensing maps used in this analysis have been constructed in a way that minimizes biases from the thermal Sunyaev-Zel'dovich effect, making them well suited for cross-correlation studies. The total signal-to-noise of the cross-correlation measurements is 23.9 (25.7) when using a choice of angular scales optimized for a linear (nonlinear) galaxy bias model. We use the cross-correlation measurements to obtain constraints on cosmological parameters. For our fiducial galaxy sample, which consists of four bins of magnitude-selected galaxies, we find constraints of $\Omega_{m} = 0.272^{+0.032}_{-0.052}$ and $S_{8} \equiv \sigma_8 \sqrt{\Omega_{m}/0.3}= 0.736^{+0.032}_{-0.028}$ ($\Omega_{m} = 0.245^{+0.026}_{-0.044}$ and $S_{8} = 0.734^{+0.035}_{-0.028}$) when assuming linear (nonlinear) galaxy bias in our modeling. Considering only the cross-correlation of galaxy shear with CMB lensing, we find $\Omega_{m} = 0.270^{+0.043}_{-0.061}$ and $S_{8} = 0.740^{+0.034}_{-0.029}$. Our constraints on $S_8$ are consistent with recent cosmic shear measurements, but lower than the values preferred by primary CMB measurements from Planck.
The building of planetary systems is controlled by the gas and dust dynamics of protoplanetary disks. While the gas is simultaneously accreted onto the central star and dissipated away by winds, dust grains aggregate and collapse to form planetesimals and eventually planets. This dust and gas dynamics involves instabilities, turbulence and complex non-linear interactions which ultimately control the observational appearance and the secular evolution of these disks. This chapter is dedicated to the most recent developments in our understanding of the dynamics of gaseous and dusty disks, covering hydrodynamic and magnetohydrodynamic turbulence, gas-dust instabilities, dust clumping and disk winds. We show how these physical processes have been tested from observations and highlight standing questions that should be addressed in the future.
The design of optimal test statistics is a key task in frequentist statistics, and for a number of scenarios optimal test statistics, such as the profile-likelihood ratio, are known. By turning this argument around, we can find the profile-likelihood ratio even in likelihood-free cases, where only samples from a simulator are available, by optimizing a test statistic. We propose a likelihood-free training algorithm that produces test statistics that are equivalent to the profile-likelihood ratio in cases where the latter is known to be optimal.
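The underlying idea can be sketched with the simplest possible likelihood-free example (illustrative only, not the proposed algorithm): a classifier trained only on samples from two hypotheses learns a monotonic function of their likelihood ratio, by the Neyman-Pearson lemma, even though neither likelihood is ever evaluated. For unit-variance Gaussians with means 0 and 1 the exact log-likelihood ratio is $x - 1/2$, so the learned logit should approach that linear function:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 2000)  # simulator samples under hypothesis H0
x1 = rng.normal(1.0, 1.0, 2000)  # simulator samples under hypothesis H1
X = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(2000), np.ones(2000)])

# logistic classifier trained by plain gradient descent on cross-entropy
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.5 * np.mean((p - y) * X)
    b -= 0.5 * np.mean(p - y)

# The learned logit w*x + b approximates the log-likelihood ratio,
# which for these two unit-variance Gaussians is exactly x - 1/2.
```

Up to sampling fluctuations, the fit recovers `w` near 1 and `b` near -1/2, i.e. the optimal test statistic, from samples alone.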
The most common predictions for rare $K$ and $B$ decay branching ratios in the Standard Model (SM) are based on the CKM elements $|V_{cb}|$ and $|V_{ub}|$ resulting from global fits, which are in the ballpark of their inclusive and exclusive determinations, respectively. In the present paper we follow another route. We assume that the future true values of $|V_{cb}|$ and $|V_{ub}|$ will both come from exclusive determinations and set them equal to the most recent ones from FLAG. An unusual pattern of SM predictions results from this study, with some existing tensions being dwarfed and new tensions being born. In particular, using the HPQCD $B^0_{s,d}-\bar B^0_{s,d}$ hadronic matrix elements, a $3.1\sigma$ tension in $\Delta M_s$ is found, independently of $\gamma$. For $60^\circ\le\gamma\le 75^\circ$, a tension in $\Delta M_d$ between $4.0\sigma$ and $1.1\sigma$ is found, and in the case of $\epsilon_K$ between $5.2\sigma$ and $2.1\sigma$. Moreover, the room for new physics in $K^+\to\pi^+\nu\bar\nu$, $K_L\to\pi^0\nu\bar\nu$ and $B\to K(K^*)\nu\bar\nu$ decays is significantly increased. We compare the results in this EXCLUSIVE scenario with the HYBRID one, in which $|V_{cb}|$ in the former scenario is replaced by the most recent inclusive $|V_{cb}|$, and present the dependence of all observables considered by us in both scenarios as functions of $\gamma$. We also compare the determination of $|V_{cb}|$ from $\Delta M_s$, $\Delta M_d$, $\epsilon_K$ and $S_{\psi K_S}$ using $B^0_{s,d}-\bar B^0_{s,d}$ hadronic matrix elements from LQCD with $2+1+1$ flavours, $2+1$ flavours and their average. Only in the $2+1+1$ case do values of $\beta$ and $\gamma$ exist for which the same value of $|V_{cb}|$ is found: $|V_{cb}|=42.6(7)\times 10^{-3}$. This in turn implies a $2.7\sigma$ anomaly in $B_s\to\mu^+\mu^-$.
We highlight the need for the development of comprehensive amplitude analysis methods to further our understanding of hadron spectroscopy. Reaction amplitudes constrained by first principles of $S$-matrix theory and by QCD phenomenology are needed to extract robust interpretations of the data from experiments and from lattice calculations.
A pseudoscalar "axionlike" field, $\phi$, may explain the $3\sigma$ hint of cosmic birefringence observed in the $EB$ power spectrum of the cosmic microwave background (CMB) polarization data. Is $\phi$ dark energy or dark matter? A tomographic approach can answer this question. The effective mass of a dark energy field responsible for the accelerated expansion of the Universe today must be smaller than $m_\phi\simeq 10^{-33}$ eV. If $m_\phi \gtrsim 10^{-32}$ eV, $\phi$ starts evolving before the epoch of reionization and we should observe different amounts of birefringence in the $EB$ power spectrum at low ($l\lesssim 10$) and high multipoles. Such an observation, which requires a full-sky satellite mission, would rule out $\phi$ being dark energy. If $m_\phi \gtrsim 10^{-28}$ eV, $\phi$ starts oscillating during the epoch of recombination, leaving a distinct signature in the $EB$ power spectrum at high multipoles, which can be measured precisely by ground-based CMB observations. Our tomographic approach relies on the shape of the $EB$ power spectrum and is less sensitive to miscalibration of polarization angles.
The persistent tensions between inclusive and exclusive determinations of $|V_{cb}|$ and $|V_{ub}|$ weaken the power of theoretically clean rare $K$ and $B$ decays in the search for new physics (NP). We demonstrate how this uncertainty can be practically removed by considering within the SM suitable ratios of various branching ratios. This includes the branching ratios for $K^+\to\pi^+\nu\bar\nu$, $K_{L}\to\pi^0\nu\bar\nu$, $K_S\to\mu^+\mu^-$, $B_{s,d}\to\mu^+\mu^-$ and $B\to K(K^*)\nu\bar\nu$. Also $\epsilon_K$, $\Delta M_d$, $\Delta M_s$ and the mixing-induced CP asymmetry $S_{\psi K_S}$, all already measured very precisely, play an important role in this analysis. The highlights of our analysis are 16 $|V_{cb}|$- and $|V_{ub}|$-independent ratios that often are independent of the CKM parameters or depend only on the angles $\beta$ and $\gamma$ in the Unitarity Triangle, with $\beta$ already precisely known and $\gamma$ to be measured precisely in the coming years by the LHCb and Belle II collaborations. Once $\gamma$ is measured precisely, these 16 ratios taken together are expected to be a powerful tool in the search for new physics. Assuming no NP in $|\epsilon_K|$ and $S_{\psi K_S}$, we determine, independently of $|V_{cb}|$: $\mathcal{B}(K^+\to\pi^+\nu\bar\nu)_\text{SM}= (8.60\pm0.42)\times 10^{-11}$ and $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)_\text{SM}=(2.94\pm 0.15)\times 10^{-11}$. These are the most precise determinations to date. Assuming no NP in $\Delta M_{s,d}$ allows us to obtain analogous results for all $B$ decay branching ratios considered in our paper without any CKM uncertainties.
Context. X-ray- and extreme-ultraviolet- (XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T Tauri stars may strongly impact disk evolution, affecting both gas and dust distributions. Small dust grains in the disk are entrained in the outflow and may produce a detectable signal. In this work, we investigate the possibility of detecting dusty outflows from transition disks with an inner cavity.
Aims: We compute dust densities for the wind regions of XEUV-irradiated transition disks and determine whether they can be observed at wavelengths 0.7 ≲ λobs [μm] ≲ 1.8 with current instrumentation.
Methods: We simulated dust trajectories on top of 2D hydrodynamical gas models of two transition disks with inner holes of 20 and 30 AU, irradiated by both X-ray and EUV spectra from a central T Tauri star. The trajectories and two different settling prescriptions for the dust distribution in the underlying disk were used to calculate wind density maps for individual grain sizes. Finally, the resulting dust densities were converted to synthetic observations in scattered and polarised light.
Results: For an XEUV-driven outflow around a M* = 0.7 M⊙ T Tauri star with LX = 2 × 1030 erg s-1, we find dust mass-loss rates Ṁdust ≲ 2.0 × 10−3 Ṁgas, and if we invoke vertical settling, the outflow is quite collimated. The synthesised images exhibit a distinct chimney-like structure. The relative intensity of the chimneys is low, but their detection may still be feasible with current instrumentation under optimal conditions.
Conclusions: Our results motivate observational campaigns aimed at the detection of dusty photoevaporative winds in transition disks using JWST NIRCam and SPHERE IRDIS.
We present new constraints on spectator axion-${\rm U}(1)$ gauge field interactions during inflation using the latest Planck (PR4) and BICEP/Keck 2018 data releases. This model can source tensor perturbations from amplified gauge field fluctuations, driven by an axion rolling for a few e-folds during inflation. The gravitational waves sourced in this way have a strongly scale-dependent (and chiral) spectrum, with potentially visible contributions to large/intermediate-scale $B$-modes of the CMB. We first derive theoretical bounds on the model by imposing validity of the perturbative regime and negligible backreaction of the gauge field on the background dynamics. Then, we determine bounds from current CMB observations, adopting a frequentist profile likelihood approach. We study the behaviour of the constraints for typical choices of the model's parameters, analysing the impact of different dataset combinations. We find that the observational bounds are competitive with the theoretical ones and together they exclude a significant portion of the model's parameter space. We argue that the parameter space remains large and interesting for future CMB experiments targeting large/intermediate-scale $B$-modes.
We report the discovery of GJ 3929 b, a hot Earth-sized planet orbiting the nearby M3.5 V dwarf star GJ 3929 (G 180-18, TOI-2013). Joint modelling of photometric observations from TESS sectors 24 and 25 together with 73 spectroscopic observations from CARMENES and follow-up transit observations from SAINT-EX, LCOGT, and OSN yields a planet radius of Rb = 1.150 ± 0.040 R⊕, a mass of Mb = 1.21 ± 0.42 M⊕, and an orbital period of Pb = 2.6162745 ± 0.0000030 d. The resulting density of ρb = 4.4 ± 1.6 g cm−3 is compatible with the Earth's mean density of about 5.5 g cm−3. Owing to the apparent brightness of the host star (J = 8.7 mag) and its small size, GJ 3929 b is a promising target for atmospheric characterisation with the JWST. Additionally, the radial velocity data show evidence for another planet candidate with Pc = 14.303 ± 0.035 d, whose period is likely unrelated to the stellar rotation period, Prot = 122 ± 13 d, determined from archival HATNet and ASAS-SN photometry combined with newly obtained TJO data.
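The quoted bulk density follows directly from the radius and mass in Earth units; a quick check (using standard reference values for the Earth's mass and radius) reproduces the abstract's number:

```python
import math

# Bulk density of GJ 3929 b from the quoted radius and mass.
# Earth constants are standard reference values.
M_EARTH_G = 5.972e27   # g
R_EARTH_CM = 6.371e8   # cm

def bulk_density(mass_mearth, radius_rearth):
    """Mean density in g/cm^3 for a planet given in Earth units."""
    m = mass_mearth * M_EARTH_G
    r = radius_rearth * R_EARTH_CM
    return m / (4.0 / 3.0 * math.pi * r ** 3)

rho = bulk_density(1.21, 1.150)
print(f"rho_b = {rho:.1f} g/cm^3")   # ~4.4 g/cm^3, as quoted
```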
RV data and stellar activity indices are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/659/A17
The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, given the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition, and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully aligned with the final goals of the instrument is maximized by means of a systematic search of the configuration space. The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters. In this document we lay down our plans for the design of a modular and versatile modeling tool for the end-to-end optimization of complex instruments for particle physics experiments as well as industrial and medical applications that share the detection of radiation as their basic ingredient. We consider a selected set of use cases to highlight the specific needs of different applications.
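The core idea of gradient-based, end-to-end design optimization can be sketched with a deliberately tiny toy: one design parameter (an absorber thickness) tuned against a smooth surrogate objective trading efficiency against material cost. Everything here, objective included, is an illustrative assumption, not the planned tool.

```python
import numpy as np

# Toy differentiable design loop: tune one detector parameter (an absorber
# thickness x) to maximize a smooth surrogate objective. Entirely illustrative.
def objective(x):
    efficiency = 1.0 - np.exp(-x)      # thicker absorber -> more hits
    cost = 0.25 * x                    # thicker absorber -> more material
    return efficiency - cost

def grad(x, eps=1e-6):                 # finite-difference gradient
    return (objective(x + eps) - objective(x - eps)) / (2 * eps)

x = 0.5
for _ in range(200):                   # simple gradient ascent
    x += 0.5 * grad(x)

# Analytic optimum: exp(-x) = 0.25  ->  x = ln 4 ~ 1.386
print(f"optimized thickness x = {x:.3f}")
```

A real pipeline would replace the surrogate with a differentiable simulation of the stochastic detector response, letting the same gradient step act on thousands of parameters at once.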
Joint analyses of cross-correlations between measurements of galaxy positions, galaxy lensing, and lensing of the cosmic microwave background (CMB) offer powerful constraints on the large-scale structure of the Universe. In a forthcoming analysis, we will present cosmological constraints from the analysis of such cross-correlations measured using Year 3 data from the Dark Energy Survey (DES), and CMB data from the South Pole Telescope (SPT) and Planck. Here we present two key ingredients of this analysis: (1) an improved CMB lensing map in the SPT-SZ survey footprint, and (2) the analysis methodology that will be used to extract cosmological information from the cross-correlation measurements. Relative to previous lensing maps made from the same CMB observations, we have implemented techniques to remove contamination from the thermal Sunyaev Zel'dovich effect, enabling the extraction of cosmological information from smaller angular scales of the cross-correlation measurements than in previous analyses with DES Year 1 data. We describe our model for the cross-correlations between these maps and DES data, and validate our modeling choices to demonstrate the robustness of our analysis. We then forecast the expected cosmological constraints from the galaxy survey-CMB lensing auto and cross-correlations. We find that the galaxy-CMB lensing and galaxy shear-CMB lensing correlations will on their own provide a constraint on $S_8=\sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ at the few percent level, providing a powerful consistency check for the DES-only constraints. We explore scenarios where external priors on shear calibration are removed, finding that the joint analysis of CMB lensing cross-correlations can provide constraints on the shear calibration amplitude at the 5 to 10% level.
We study the inner structure of the group-scale lens CASSOWARY 31 (CSWA 31) by adopting both strong lensing and dynamical modeling. CSWA 31 is a peculiar lens system. The brightest group galaxy (BGG) is an ultra-massive elliptical galaxy at z = 0.683 with a weighted mean velocity dispersion of $\sigma = 432 \pm 31$ km s$^{-1}$. It is surrounded by group members and several lensed arcs probing up to ~150 kpc in projection. Our results significantly improve previous analyses of CSWA 31 thanks to the new HST imaging and MUSE integral-field spectroscopy. From the secure identification of five sets of multiple images and measurements of the spatially-resolved stellar kinematics of the BGG, we conduct a detailed analysis of the multi-scale mass distribution using various modeling approaches, both in the single and multiple lens-plane scenarios. Our best-fit mass models reproduce the positions of multiple images and provide robust reconstructions for two background galaxies at z = 1.4869 and z = 2.763. The relative contributions from the BGG and group-scale halo are remarkably consistent in our three reference models, demonstrating the self-consistency between strong lensing analyses based on image position and extended image modeling. We find that the ultra-massive BGG dominates the projected total mass profiles within 20 kpc, while the group-scale halo dominates at larger radii. The total projected mass enclosed within $R_{eff}$ = 27.2 kpc is $1.10_{-0.04}^{+0.02} \times 10^{13}$ M$_\odot$. We find that CSWA 31 is a peculiar fossil group, strongly dark-matter dominated towards the central region, and with a projected total mass profile similar to higher-mass cluster-scale halos. The total mass-density slope within the effective radius is shallower than isothermal, consistent with previous analyses of early-type galaxies in overdense environments.
In this white paper for the Snowmass process, we discuss the prospects of probing new physics explanations of the persistent rare $B$ decay anomalies with a muon collider. If the anomalies are indirect signs of heavy new physics, non-standard rates for $\mu^+ \mu^- \to b s$ production should be observed with high significance at a muon collider with center of mass energy of $\sqrt{s} = 10$ TeV. The forward-backward asymmetry of the $b$-jet provides diagnostics of the chirality structure of the new physics couplings. In the absence of a signal, $\mu^+ \mu^- \to b s$ can indirectly probe new physics scales as large as $86$ TeV. Beam polarization would have an important impact on the new physics sensitivity.
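The forward-backward asymmetry mentioned above is a simple counting observable; a minimal sketch (with made-up event counts, not the paper's projections) shows its definition and a binomial error estimate:

```python
# Forward-backward asymmetry of the b-jet, as used to diagnose the chirality
# of new-physics couplings. The counts below are made-up placeholders.
def forward_backward_asymmetry(n_forward, n_backward):
    """A_FB = (F - B) / (F + B), with a simple binomial error estimate."""
    total = n_forward + n_backward
    afb = (n_forward - n_backward) / total
    err = 2.0 * (n_forward * n_backward / total**3) ** 0.5
    return afb, err

afb, err = forward_backward_asymmetry(1300, 700)
print(f"A_FB = {afb:.3f} +/- {err:.3f}")
```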
First-principle simulations are at the heart of the high-energy physics research program. They link the vast data output of multi-purpose detectors with fundamental theory predictions and interpretation. This review illustrates a wide range of applications of modern machine learning to event generation and simulation-based inference, including conceptional developments driven by the specific requirements of particle physics. New ideas and tools developed at the interface of particle physics and machine learning will improve the speed and precision of forward simulations, handle the complexity of collision data, and enhance inference as an inverse simulation problem.
Mini-EUSO is a telescope launched on board the International Space Station in 2019 and currently located in the Russian section of the station. Main scientific objectives of the mission are the search for nuclearites and Strange Quark Matter, the study of atmospheric phenomena such as Transient Luminous Events, meteors and meteoroids, the observation of sea bioluminescence and of artificial satellites and man-made space debris. It is also capable of observing Extensive Air Showers generated by Ultra-High Energy Cosmic Rays with an energy above 10$^{21}$ eV and detect artificial showers generated with lasers from the ground. Mini-EUSO can map the night-time Earth in the UV range (290 - 430 nm), with a spatial resolution of about 6.3 km and a temporal resolution of 2.5 $\mu$s, observing our planet through a nadir-facing UV-transparent window in the Russian Zvezda module. The instrument, launched on 2019/08/22 from the Baikonur cosmodrome, is based on an optical system employing two Fresnel lenses and a focal surface composed of 36 Multi-Anode Photomultiplier tubes, 64 channels each, for a total of 2304 channels with single photon counting sensitivity and an overall field of view of 44$^{\circ}$. Mini-EUSO also contains two ancillary cameras to complement measurements in the near infrared and visible ranges. In this paper we describe the detector and present the various phenomena observed in the first year of operation.
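The quoted ~6.3 km spatial resolution can be sanity-checked with a flat-Earth back-of-the-envelope estimate: the 44° field of view projected from ISS altitude, divided over the 48×48-pixel focal surface (2304 channels). The altitude is an assumed round number, and the simple geometry ignores optics details, so the result is only expected to be in the right ballpark.

```python
import math

# Back-of-the-envelope check of the quoted ~6.3 km spatial resolution.
H_ISS_KM = 400.0          # assumed ISS altitude (round number)
FOV_DEG = 44.0
N_PIX_SIDE = 48           # 36 MAPMTs x 64 ch = 2304 = 48 x 48 pixels

ground_side = 2.0 * H_ISS_KM * math.tan(math.radians(FOV_DEG / 2.0))
pixel_footprint = ground_side / N_PIX_SIDE
print(f"ground coverage ~ {ground_side:.0f} km, "
      f"pixel footprint ~ {pixel_footprint:.1f} km")
```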
Joint analyses of cross-correlations between measurements of galaxy positions, galaxy lensing, and lensing of the cosmic microwave background (CMB) offer powerful constraints on the large-scale structure of the Universe. In a forthcoming analysis, we will present cosmological constraints from the analysis of such cross-correlations measured using Year 3 data from the Dark Energy Survey (DES), and CMB data from the South Pole Telescope (SPT) and Planck. Here we present two key ingredients of this analysis: (1) an improved CMB lensing map in the SPT-SZ survey footprint, and (2) the analysis methodology that will be used to extract cosmological information from the cross-correlation measurements. Relative to previous lensing maps made from the same CMB observations, we have implemented techniques to remove contamination from the thermal Sunyaev Zel'dovich effect, enabling the extraction of cosmological information from smaller angular scales of the cross-correlation measurements than in previous analyses with DES Year 1 data. We describe our model for the cross-correlations between these maps and DES data, and validate our modeling choices to demonstrate the robustness of our analysis. We then forecast the expected cosmological constraints from the galaxy survey-CMB lensing auto and cross-correlations. We find that the galaxy-CMB lensing and galaxy shear-CMB lensing correlations will on their own provide a constraint on $S_8=\sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ at the few percent level, providing a powerful consistency check for the DES-only constraints. We explore scenarios where external priors on shear calibration are removed, finding that the joint analysis of CMB lensing cross-correlations can provide constraints on the shear calibration amplitude at the 5 to 10% level.
Mini-EUSO is a detector observing the Earth in the ultraviolet band from the International Space Station through a nadir-facing, UV-transparent window in the Russian Zvezda module. The Mini-EUSO main detector consists of an optical system with two Fresnel lenses and a focal surface composed of an array of 36 Hamamatsu Multi-Anode Photomultiplier tubes, for a total of 2304 pixels with single photon counting sensitivity. The telescope also contains two ancillary cameras, in the near infrared and visible ranges, to complement measurements in these bandwidths. The instrument has a field of view of 44 degrees, a spatial resolution of about 6.3 km on the Earth's surface, and of about 4.7 km on the ionosphere. The telescope detects UV emissions of cosmic, atmospheric and terrestrial origin on different time scales, from a few microseconds upwards. On the fastest timescale of 2.5 microseconds, Mini-EUSO is able to observe atmospheric phenomena such as Transient Luminous Events and in particular ELVES, which take place when an electromagnetic wave generated by intra-cloud lightning interacts with the ionosphere, ionizing it and producing apparently superluminal expanding rings several hundred kilometres across and lasting about 100 microseconds. These highly energetic, fast events have also been observed in conjunction with Terrestrial Gamma-Ray Flashes, and a detailed study of their characteristics (speed, radius, energy, etc.) is therefore of crucial importance for the understanding of these phenomena. In this paper we present the ELVE-detection capabilities of Mini-EUSO and specifically the reconstruction and study of ELVE characteristics.
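The apparently superluminal ring expansion has a purely geometric origin: a spherical EMP front from the lightning discharge reaches an ionosphere layer at height h at time t0 = h/c, after which the luminous ring radius grows as r(t) = sqrt((ct)² − h²), whose time derivative c²t/r exceeds c near onset. The sketch below uses an assumed interaction height of 90 km:

```python
import math

C_KM_S = 299792.458        # speed of light, km/s
H_ION_KM = 90.0            # assumed ionosphere interaction height

def ring_radius(t_s):
    """Ring radius (km) of the EMP/ionosphere intersection at time t_s."""
    return math.sqrt((C_KM_S * t_s) ** 2 - H_ION_KM ** 2)

def ring_speed(t_s):
    """Apparent expansion speed dr/dt = c^2 t / r, superluminal near onset."""
    return C_KM_S ** 2 * t_s / ring_radius(t_s)

t0 = H_ION_KM / C_KM_S                 # onset time, ~0.3 ms after discharge
t = 1.05 * t0                          # shortly after onset
print(f"radius = {ring_radius(t):.0f} km, "
      f"apparent speed = {ring_speed(t) / C_KM_S:.1f} c")
```

No physical signal travels faster than light here; the superluminal figure is the phase speed of the intersection point between the expanding sphere and the layer.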
We present cosmological constraints from the analysis of angular power spectra of cosmic shear maps based on data from the first three years of observations by the Dark Energy Survey (DES Y3). Our measurements are based on the pseudo-$C_\ell$ method and offer a view complementary to that of the two-point correlation functions in real space, as the two estimators are known to compress and select Gaussian information in different ways, due to scale cuts. They may also be differently affected by systematic effects and theoretical uncertainties, such as baryons and intrinsic alignments (IA), making this analysis an important cross-check. In the context of $\Lambda$CDM, and using the same fiducial model as in the DES Y3 real space analysis, we find ${S_8 \equiv \sigma_8 \sqrt{\Omega_{\rm m}/0.3} = 0.793^{+0.038}_{-0.025}}$, which further improves to ${S_8 = 0.784\pm 0.026 }$ when including shear ratios. This constraint is within expected statistical fluctuations from the real space analysis, and in agreement with DES~Y3 analyses of non-Gaussian statistics, but favors a slightly higher value of $S_8$, which reduces the tension with the Planck cosmic microwave background 2018 results from $2.3\sigma$ in the real space analysis to $1.5\sigma$ in this work. We explore less conservative IA models than the one adopted in our fiducial analysis, finding no clear preference for a more complex model. We also include small scales, using an increased Fourier mode cut-off up to $k_{\rm max}={5}{h{\rm Mpc}^{-1}}$, which allows to constrain baryonic feedback while leaving cosmological constraints essentially unchanged. Finally, we present an approximate reconstruction of the linear matter power spectrum at present time, which is found to be about 20% lower than predicted by Planck 2018, as reflected by the $1.5\sigma$ lower $S_8$ value.
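The S8 tension quoted above is straightforward arithmetic on the definition $S_8 = \sigma_8\sqrt{\Omega_{\rm m}/0.3}$. The Planck 2018 numbers below are the commonly quoted base-ΛCDM values and are used only for illustration; the resulting tension is roughly consistent with the ~1.5σ figure in the abstract.

```python
import math

# S_8 = sigma_8 * sqrt(Omega_m / 0.3); Planck values are illustrative.
def s8(sigma8, omega_m):
    return sigma8 * math.sqrt(omega_m / 0.3)

s8_planck = s8(0.811, 0.315)           # ~0.83
s8_des, err_des = 0.784, 0.026         # DES Y3 shear power spectra (above)
err_planck = 0.013                     # illustrative Planck uncertainty

tension = abs(s8_planck - s8_des) / math.hypot(err_des, err_planck)
print(f"S8(Planck) ~ {s8_planck:.3f}, tension ~ {tension:.1f} sigma")
```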
The field of UHECRs (Ultra-High Energy Cosmic Rays) and the understanding of particle acceleration in the cosmos, as a key ingredient to the behaviour of the most powerful sources in the universe, is of utmost importance for astroparticle physics as well as for fundamental physics and will improve our general understanding of the universe. The current main goals are to identify the sources of UHECRs and determine their composition, which requires increased statistics. A space-based detector for UHECR research has the advantage of a very large exposure and a uniform coverage of the celestial sphere. The aim of the JEM-EUSO program is to bring the study of UHECRs to space. The principle of observation is based on the detection of UV light emitted by isotropic fluorescence of atmospheric nitrogen excited by the Extensive Air Showers (EAS) in the Earth's atmosphere and forward-beamed Cherenkov radiation reflected from the Earth's surface or dense cloud tops. In addition to the prime objective of UHECR studies, JEM-EUSO will carry out several secondary studies thanks to the instruments' unique capacity of detecting very weak UV signals with extreme time resolution of around 1 microsecond: meteors, Transient Luminous Events (TLE), bioluminescence, maps of human-generated UV light, searches for Strange Quark Matter (SQM) and high-energy neutrinos, and more. The JEM-EUSO program includes several missions from ground (EUSO-TA), from stratospheric balloons (EUSO-Balloon, EUSO-SPB1, EUSO-SPB2), and from space (TUS, Mini-EUSO) employing fluorescence detectors to demonstrate the UHECR observation from space and prepare the large size missions K-EUSO and POEMMA. A review of the current status of the program, the key results obtained so far by the different projects, and the perspectives for the near future are presented.
Mini-EUSO is a small orbital telescope with a field of view of $44^{\circ}\times 44^{\circ}$, observing the night-time Earth mostly in the 320-420 nm band. Its time resolution, spanning from microseconds (triggered) to milliseconds (untriggered), and ground coverage of more than $300\times 300$ km have already allowed it to register thousands of meteors. Such detections make the telescope a suitable tool in the search for hypothetical heavy compact objects, which would leave trails of light in the atmosphere due to their high density and speed. The most prominent example is nuclearites -- hypothetical lumps of strange quark matter that could be more stable and denser than nuclear matter. In this paper, we show potential limits on the flux of nuclearites after collecting 42 hours of observational data.
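The order of magnitude of such a flux limit can be sketched from the numbers in the abstract: with zero observed candidates and zero expected background, the Feldman-Cousins 90% C.L. upper limit on the mean is 2.44 events, divided by the exposure. The geometric acceptance below (footprint, solid angle, efficiency) is a crude assumption, not the detailed Mini-EUSO acceptance, so only the order of magnitude is meaningful.

```python
import math

# Rough 90% C.L. flux upper limit for zero observed nuclearite candidates.
AREA_CM2 = 300e5 * 300e5        # ~300 x 300 km footprint, in cm^2 (assumed)
SOLID_ANGLE_SR = math.pi        # downward hemisphere, cosine-weighted (assumed)
T_OBS_S = 42 * 3600.0           # 42 h of observations (from the abstract)
EFFICIENCY = 1.0                # assumed perfect detection efficiency

exposure = AREA_CM2 * SOLID_ANGLE_SR * T_OBS_S * EFFICIENCY
flux_limit = 2.44 / exposure    # Feldman-Cousins N90 for 0 events, 0 bkg
print(f"flux < {flux_limit:.1e} cm^-2 sr^-1 s^-1 (90% C.L.)")
```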
The Fluorescence Telescope is one of the two telescopes on board the Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2). EUSO-SPB2 is an ultra-long-duration balloon mission that aims at the detection of Ultra High Energy Cosmic Rays (UHECR) via the fluorescence technique (using a Fluorescence Telescope) and of Ultra High Energy (UHE) neutrinos via Cherenkov emission (using a Cherenkov Telescope). The mission is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). The Fluorescence Telescope is a second generation instrument preceded by the telescopes flown on the EUSO-Balloon and EUSO-SPB1 missions. It features Schmidt optics and has a 1-meter diameter aperture. The focal surface of the telescope is equipped with a 6912-pixel Multi-Anode Photomultiplier Tube (MAPMT) camera covering a 37.4 x 11.4 degree Field of Regard. Such a large Field of Regard, together with a target flight duration of up to 100 days, would allow, for the first time from suborbital altitudes, the detection of UHECR fluorescence tracks. This contribution will provide an overview of the instrument including the current status of the telescope development.
The Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2) is under development, and will prototype instrumentation for future satellite-based missions, including the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). EUSO-SPB2 will consist of two telescopes. The first is a Cherenkov telescope (CT) being developed to identify and estimate the background sources for future below-the-limb very high energy (E>10 PeV) astrophysical neutrino observations, as well as above-the-limb cosmic ray induced signals (E>1 PeV). The second is a fluorescence telescope (FT) being developed for detection of Ultra High Energy Cosmic Rays (UHECRs). In preparation for the expected launch in 2023, extensive simulations tuned by preliminary laboratory measurements have been performed to understand the FT capabilities. The energy threshold has been estimated at $10^{18.2}$ eV, which results in a maximum detection rate at $10^{18.6}$ eV when taking into account the shape of the UHECR spectrum. In addition, onboard software has been developed based on the simulations as well as experience with previous EUSO missions. This includes a level 1 trigger to be run on the computationally limited flight hardware, as well as a deep learning based prioritization algorithm in order to accommodate the balloon's telemetry budget. These techniques could also be used later for future, space-based missions.
We present the status of the development of a Cherenkov telescope to be flown on a long-duration balloon flight, the Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2). EUSO-SPB2 is an approved NASA balloon mission that is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA), a candidate for an Astrophysics probe-class mission. The purpose of the Cherenkov telescope on board EUSO-SPB2 is to classify known and unknown sources of backgrounds for future space-based neutrino detectors. Furthermore, we will use the Earth-skimming technique to search for Very-High-Energy (VHE) tau neutrinos below the limb (E > 10 PeV) and observe air showers from cosmic rays above the limb. The 0.785 m^2 Cherenkov telescope is equipped with a 512-pixel SiPM camera covering a 12.8° x 6.4° (Horizontal x Vertical) field of view. The camera signals are digitized with a 100 MS/s readout system. In this paper, we discuss the status of the telescope development, the camera integration, and simulation studies of the camera response.
The Extreme Universe Space Observatory - Super Pressure Balloon (EUSO-SPB2) mission will fly two custom telescopes that feature Schmidt optics to measure Čerenkov and fluorescence emission of extensive air showers from cosmic rays at the PeV and EeV scale, and to search for tau neutrinos. Both telescopes have 1-meter diameter apertures and UV/UV-visible sensitivity. The Čerenkov telescope uses a bifocal mirror segment alignment to distinguish between direct cosmic rays hitting the camera and Čerenkov light arriving from outside the telescope. Telescope integration and laboratory calibration will be performed in Colorado. To estimate the point spread function and efficiency of the integrated telescopes, a test beam system that delivers a 1-meter diameter parallel beam of light is being fabricated. End-to-end tests of the fully integrated instruments will be carried out in a field campaign at dark sites in the Utah desert using cosmic rays, stars, and artificial light sources. Laser tracks have long been used to characterize the performance of fluorescence detectors in the field. For EUSO-SPB2, an improvement in the method that includes a correction for aerosol attenuation is anticipated by using a bi-dynamic Lidar configuration in which both the laser and the telescope are steerable. We plan to conduct these field tests in Fall 2021 and Spring 2022 to accommodate the scheduled launch of EUSO-SPB2 in 2023 from Wanaka, New Zealand.
The Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2) is a second generation stratospheric balloon instrument for the detection of Ultra High Energy Cosmic Rays (UHECRs, E > 1 EeV) via the fluorescence technique and of Very High Energy (VHE, E > 10 PeV) neutrinos via Cherenkov emission. EUSO-SPB2 is a pathfinder mission for instruments like the proposed Probe Of Extreme Multi-Messenger Astrophysics (POEMMA). The purpose of such a space-based observatory is to measure UHECRs and UHE neutrinos with high statistics and uniform exposure. EUSO-SPB2 is designed with two Schmidt telescopes, each optimized for their respective observational goals. The Fluorescence Telescope looks at the nadir to measure the fluorescence emission from UHECR-induced extensive air showers (EAS), while the Cherenkov Telescope is optimized for fast signals ($\sim$10 ns) and points near the Earth's limb. This allows for the measurement of Cherenkov light from EAS caused by Earth-skimming VHE neutrinos if pointed slightly below the limb, or from UHECRs if observing slightly above. The expected launch date of EUSO-SPB2 is Spring 2023 from Wanaka, NZ, with a target duration of up to 100 days. Such a flight would provide thousands of VHECR Cherenkov signals in addition to tens of UHECR fluorescence tracks. Neither of these kinds of events has been observed from either orbital or suborbital altitudes before, making EUSO-SPB2 a crucial step towards a space-based instrument. It will also enhance the understanding of potential background signals for both detection techniques. This contribution will provide a short overview of the detector and the current status of the mission as well as its scientific goals.
It is commonly expected that a friction force on the bubble wall in a first-order phase transition can only arise from a departure from thermal equilibrium in the plasma. Recently, however, it was argued that an effective friction, scaling as $\gamma_w^2$ (with $\gamma_w$ being the Lorentz factor of the bubble wall velocity), persists in local equilibrium. This was derived assuming constant plasma temperature and velocity throughout the wall. On the other hand, it is known that, at leading order in derivatives, the plasma in local equilibrium only contributes a correction to the zero-temperature potential in the equation of motion of the background scalar field. For a constant plasma temperature, the equation of motion is then completely analogous to the vacuum case, the only change being a modified potential, and thus no friction should appear. We resolve these apparent contradictions in the calculations and their interpretation, and show that the recently proposed effective friction in local equilibrium originates from inhomogeneous temperature distributions, such that the $\gamma_w^2$ scaling of the effective force is violated. Further, we propose a new matching condition for the hydrodynamic quantities in the plasma, valid in local equilibrium and tied to local entropy conservation. With this added constraint, bubble velocities in local equilibrium can be determined once the parameters in the equation of state are fixed, where we use the bag equation in order to illustrate this point. We find that there is a critical value of the transition strength $\alpha_{\rm crit}$ such that bubble walls run away for $\alpha>\alpha_{\rm crit}$.
The characteristics of the cosmic microwave background provide circumstantial evidence that the hot radiation-dominated epoch in the early universe was preceded by a period of inflationary expansion. Here, we show how a measurement of the stochastic gravitational wave background can reveal the cosmic history and the physical conditions during inflation, subsequent pre- and re-heating, and the beginning of the hot big bang era. This is exemplified with a particularly well-motivated and predictive minimal extension of the Standard Model which is known to provide a complete model for particle physics -- up to the Planck scale, and for cosmology -- back to inflation.
Planet-forming disks are not isolated systems. Their interaction with the surrounding medium affects their mass budget and chemical content. In the context of the ALMA-DOT program, we obtained high-resolution maps of assorted lines from six disks that are still partly embedded in their natal envelope. In this work, we examine the SO and SO2 emission that is detected from four sources: DG Tau, HL Tau, IRAS 04302+2247, and T Tau. The comparison with CO, HCO+, and CS maps reveals that the SO and SO2 emission originates at the intersection between extended streamers and the planet-forming disk. Two targets, DG Tau and HL Tau, offer clear cases of inflowing material inducing an accretion shock on the disk material. The measured rotational temperatures and radial velocities are consistent with this view. In contrast to younger Class 0 sources, these shocks are confined to the specific disk region impacted by the streamer. In HL Tau, the known accreting streamer induces a shock in the disk outskirts, and the released SO and SO2 molecules spiral toward the star in a few hundred years. These results suggest that shocks induced by late accreting material may be common in the disks of young star-forming regions with possible consequences for the chemical composition and mass content of the disk. They also highlight the importance of SO and SO2 line observations in probing accretion shocks from a larger sample.
The reduced datacubes are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A104
Numerical general relativistic radiative magnetohydrodynamic simulations of accretion discs around a stellar-mass black hole with a luminosity above 0.5 of the Eddington value reveal their stratified, elevated vertical structure. We refer to these thermally stable numerical solutions as puffy discs. Above a dense and geometrically thin core of dimensionless thickness h/r ∼ 0.1, crudely resembling a classic thin accretion disc, a puffed-up, geometrically thick layer of lower density is formed. This puffy layer corresponds to h/r ∼ 1.0, with a very limited dependence of the dimensionless thickness on the mass accretion rate. We discuss the observational properties of puffy discs, particularly the geometrical obscuration of the inner disc by the elevated puffy region at higher observing inclinations, and collimation of the radiation along the accretion disc spin axis, which may explain the apparent super-Eddington luminosity of some X-ray objects. We also present synthetic spectra of puffy discs, and show that they are qualitatively similar to those of a Comptonized thin disc. We demonstrate that the existing xspec spectral fitting models provide good fits to synthetic observations of puffy discs, but cannot correctly recover the input black hole spin. The puffy region remains optically thick to scattering; in its spectral properties, the puffy disc roughly resembles that of a warm corona sandwiching the disc core. We suggest that puffy discs may correspond to X-ray binary systems of luminosities above 0.3 of the Eddington luminosity in the intermediate spectral states.
In nuclear collisions the incident protons generate a Coulomb field which acts on produced charged particles. The impact of these interactions on charged pion transverse-mass and rapidity spectra, as well as on pion-pion momentum correlations, is investigated in Au+Au collisions at $\sqrt{s_{NN}}$ = 2.4 GeV. We show that the low-$m_t$ part of the data ($m_t < 0.2$ GeV/c$^2$) can be well described with a Coulomb-modified Boltzmann distribution that also takes changes of the Coulomb field during the expansion of the fireball into account. The observed centrality dependence of the fitted mean Coulomb potential deviates strongly from an $A_{part}^{2/3}$ scaling, indicating that, next to the fireball, the non-interacting charged spectators have to be taken into account. For the most central collisions, the Coulomb modifications of the HBT source radii are found to be consistent with the potential extracted from the single-pion transverse-mass distributions. This finding suggests that the region of homogeneity obtained from two-pion correlations coincides with the region in which the pions freeze out. Using the inferred mean-square radius of the charge distribution at freeze-out, we deduce a baryon density in fair agreement with values obtained from statistical hadronization model fits to the particle yields.
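A schematic illustration of the Coulomb effect (not the paper's actual fit): a static potential $V_c$ shifts the energies of π+ and π− in opposite directions, so for a Boltzmann source at temperature T the π−/π+ yield ratio is enhanced by roughly exp(2eV_c/T). The values of V_c and T below are assumptions chosen only to show the size of the effect.

```python
import math

# Schematic Coulomb enhancement of the pi-/pi+ yield ratio for a Boltzmann
# source: opposite energy shifts +/- e*V_c give a factor exp(2 e V_c / T).
# V_c and T below are illustrative assumptions, not fitted values.
def pion_ratio(v_c_mev, temp_mev):
    return math.exp(2.0 * v_c_mev / temp_mev)

ratio = pion_ratio(20.0, 50.0)   # assumed V_c ~ 20 MeV, T ~ 50 MeV
print(f"pi-/pi+ Coulomb enhancement ~ {ratio:.2f}")
```

The actual analysis additionally accounts for the time dependence of the Coulomb field during the fireball expansion, which this static sketch ignores.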
Current and future cosmological analyses with Type Ia Supernovae (SNe Ia) face three critical challenges: i) measuring redshifts from the supernova or its host galaxy; ii) classifying SNe without spectra; and iii) accounting for correlations between the properties of SNe Ia and their host galaxies. We present here a novel approach that addresses each challenge. In the context of the Dark Energy Survey (DES), we analyze a SNIa sample with host galaxies in the redMaGiC galaxy catalog, a selection of Luminous Red Galaxies. Photo-$z$ estimates for these galaxies are expected to be accurate to $\sigma_{\Delta z/(1+z)}\sim0.02$. The DES-5YR photometrically classified SNIa sample contains approximately 1600 SNe and 125 of these SNe are in redMaGiC galaxies. We demonstrate that redMaGiC galaxies almost exclusively host SNe Ia, reducing concerns with classification uncertainties. With this subsample, we find similar Hubble scatter (to within $\sim0.01$ mag) using photometric redshifts in place of spectroscopic redshifts. With detailed simulations, we show the bias due to using photo-$z$s from redMaGiC host galaxies on the measurement of the dark energy equation-of-state $w$ is up to $\Delta w \sim 0.01-0.02$. With real data, we measure a difference in $w$ when using redMaGiC photometric redshifts versus spectroscopic redshifts of $\Delta w = 0.005$. Finally, we discuss how SNe in redMaGiC galaxies appear to be a more standardizable population due to a weaker relation between color and luminosity ($\beta$) compared to the DES-3YR population by $\sim5\sigma$; this finding is consistent with predictions that redMaGiC galaxies exhibit lower reddening ratios ($\textrm{R}_\textrm{V}$) than the general population of SN host galaxies. These results establish the feasibility of performing redMaGiC SN cosmology with photometric survey data in the absence of spectroscopic data.
The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ decay is observed using proton-proton collisions collected by the LHCb experiment at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb$^{-1}$. The decay is reconstructed partially: the photon from the $ {\varXi}_c^{\prime +}\to {\varXi}_c^{+}\gamma $ decay is not reconstructed, and the $pK^{-}\pi^{+}$ final state of the $ {\varXi}_c^{+} $ baryon is employed. The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ branching fraction relative to that of the $ {\varXi}_{cc}^{++}\to {\varXi}_c^{+}{\pi}^{+} $ decay is measured to be 1.41 ± 0.17 ± 0.10, where the first uncertainty is statistical and the second systematic.
Many low-threshold experiments observe sharply rising event rates of yet unknown origin below a few hundred eV, larger than expected from known backgrounds. Due to the significant impact of this excess on the dark matter or neutrino sensitivity of these experiments, a collective effort has been started to share the knowledge about the individual observations. For this, the EXCESS Workshop was initiated. In its first iteration in June 2021, ten rare event search collaborations contributed to this initiative via talks and discussions. The contributing collaborations were CONNIE, CRESST, DAMIC, EDELWEISS, MINER, NEWS-G, NUCLEUS, RICOCHET, SENSEI and SuperCDMS. They presented data about their observed energy spectra and known backgrounds together with details about the respective measurements. In this paper, we summarize the presented information and give a comprehensive overview of the similarities and differences between the distinct measurements. The provided data are furthermore publicly available on the workshop's data repository together with a plotting tool for visualization.
The importance of alternative methods to measure the Hubble constant such as time-delay cosmography is highlighted by the recent Hubble tension. It is paramount to thoroughly investigate and rule out systematic biases in all measurement methods before we can accept new physics as the source of this tension. In this study, we perform a check for systematic biases in the lens modelling procedure of time-delay cosmography by comparing independent and blind time-delay predictions of the system WGD 2038$-$4008 from two teams using two different software programs: Glee and lenstronomy. The predicted time delays from both teams incorporate the stellar kinematics of the deflector and the external convergence from line-of-sight structures. The unblinded time-delay predictions from the two teams agree within $1.2\sigma$ implying that once the time delay is measured the inferred Hubble constant will also be mutually consistent. However, there is a $\sim$4$\sigma$ discrepancy between the power-law model slope and external shear, which is a significant discrepancy at the level of lens models before incorporating the stellar kinematics and the external convergence. We identify the difference in the reconstructed point spread function (PSF) to be the source of this discrepancy. If the same reconstructed PSF is used by both teams, then we achieve excellent agreement within $\sim$0.6$\sigma$, indicating that potential systematics stemming from source reconstruction algorithms and investigator choices are well under control. We recommend future studies to supersample the PSF as needed and marginalize over multiple algorithms/realizations for the PSF reconstruction to mitigate the systematic associated with the PSF. A future study will measure the time delays of the system WGD 2038$-$4008 and infer the Hubble constant based on our mass models.
Time irreversibility is a distinctive feature of nonequilibrium dynamics, and several measures of irreversibility have been introduced to assess the distance from thermal equilibrium of a stochastically driven system. While the dynamical noise is often approximated as white, in many real applications the time correlations of the random forces can actually be significantly long-lived compared to the relaxation times of the driven system. We analyze the effects of temporal correlations in the noise on commonly used measures of irreversibility and demonstrate how the theoretical framework for white-noise-driven systems naturally generalizes to the case of colored noise. Specifically, we express the autocorrelation function, the area enclosing rates, and the mean phase-space velocity in terms of solutions of a Lyapunov equation and relate them to their white-noise-limit values.
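As a minimal illustration of the white-noise baseline discussed above, the stationary covariance of a linear (Ornstein-Uhlenbeck) system follows from a Lyapunov equation, and one common measure of irreversibility, the mean area enclosing rate, is read off from the antisymmetric part of $CA^{\top}$. The drift and diffusion matrices below are hypothetical toy values, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linear stochastic system dx = A x dt + B dW_t with diffusion matrix
# D = B B^T: the stationary covariance C solves A C + C A^T + D = 0.
A = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])      # hypothetical drift with a rotational part
D = np.diag([2.0, 2.0])           # hypothetical white-noise diffusion matrix

# solve_continuous_lyapunov(A, Q) solves A X + X A^T = Q
C = solve_continuous_lyapunov(A, -D)

# Mean area enclosing rate in the (x1, x2) plane,
# <x1 dx2/dt - x2 dx1/dt>/2: the antisymmetric part of C A^T.
# It vanishes when detailed balance holds (no cycling currents).
M = C @ A.T
area_rate = 0.5 * (M[0, 1] - M[1, 0])

print("stationary covariance:\n", C)
print("area enclosing rate:", area_rate)
```

For this drift, the symmetric part of $A$ is $-\mathbb{1}$, so $C = \mathbb{1}$ and the rotational part of the drift alone sets the nonzero area enclosing rate.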
Natural ecosystems, in particular on the microbial scale, are inhabited by a large number of species. The population size of each species is affected by interactions of individuals with each other and by spatial and temporal changes in environmental conditions, such as resource abundance. Here, we use a generic population dynamics model to study how, and under what conditions, a periodic temporal environmental variation can alter an ecosystem's composition and biodiversity. We demonstrate that using time scale separation allows one to qualitatively predict the long-term population dynamics of interacting species in varying environments. We show that the notion of competitive exclusion, a well-known principle that applies for constant environments, can be extended to temporally varying environments if the time scale of environmental changes (e.g., the circadian cycle of a host) is much faster than the time scale of population growth (doubling time in bacteria). When these time scales are similar, our analysis shows that a varying environment deters the system from reaching a steady state, and coexistence between multiple species becomes possible. Our results posit that biodiversity can in part be attributed to natural environmental variations.
We discuss peculiarities that arise in the computation of real-emission contributions to observables that contain Heaviside functions. A prominent example of such a case is the zero-jettiness soft function in SCET, whose calculation at next-to-next-to-next-to-leading order in perturbative QCD is an interesting problem. Since the zero-jettiness soft function distinguishes between emissions into different hemispheres, its definition involves θ-functions of light-cone components of emitted soft partons. This prevents a direct use of multi-loop methods, based on reverse unitarity, for computing the zero-jettiness soft function in high orders of perturbation theory. We propose a way to bypass this problem and illustrate its effectiveness by computing various non-trivial contributions to the zero-jettiness soft function at NNLO and N3LO in perturbative QCD.
We present a calculation of the helicity amplitudes for the process gg → γγ in three-loop massless QCD. We employ a recently proposed method to calculate scattering amplitudes in the 't Hooft-Veltman scheme that reduces the amount of spurious non-physical information needed at intermediate stages of the computation. Our analytic results for the three-loop helicity amplitudes are remarkably compact, and can be efficiently evaluated numerically. This calculation provides the last missing building block for the computation of NNLO QCD corrections to diphoton production in gluon fusion.
Context. Classical Cepheids are primary distance indicators and a crucial stepping stone in determining the present-day value of the Hubble constant H0 to the precision and accuracy required to constrain apparent deviations from the ΛCDM Concordance Cosmological Model.
Aims: We measured the iron and oxygen abundances of a statistically significant sample of 89 Cepheids in the Large Magellanic Cloud (LMC), one of the anchors of the local distance scale, quadrupling the prior sample and including 68 of the 70 Cepheids used to constrain H0 by the SH0ES program. The goal is to constrain the extent to which the luminosity of Cepheids is influenced by their chemical composition, which is an important contributor to the uncertainty on the determination of the Hubble constant itself and a critical factor in the internal consistency of the distance ladder.
Methods: We derived stellar parameters and chemical abundances from a self-consistent spectroscopic analysis based on the equivalent widths of absorption lines.
Results: The iron distribution of Cepheids in the LMC is very accurately described by a single Gaussian with a mean [Fe/H] = −0.409 ± 0.003 dex and σ = 0.076 ± 0.003 dex. We estimate a systematic uncertainty on the absolute mean values of 0.1 dex. The width of the distribution is fully compatible with the measurement error and supports the low dispersion of 0.069 mag seen in the near-infrared Hubble Space Telescope LMC period-luminosity relation. The uniformity of the abundances has the important consequence that the LMC Cepheids alone cannot provide any meaningful constraint on the dependence of the Cepheid period-luminosity relation on chemical composition at any wavelength. This revises a prior claim, based on a small sample of 22 LMC Cepheids, that the dependence of near-infrared luminosity on composition was small and tightly constrained, a conclusion that would have produced an apparent conflict between anchors of the distance ladder with different mean abundances. The chemical homogeneity of the LMC Cepheid population makes it an ideal environment in which to calibrate the metallicity dependence between the more metal-poor Small Magellanic Cloud and the metal-rich Milky Way and NGC 4258.
Full Tables 1-8 and Appendix B are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A29
Based on observations collected at the European Southern Observatory under ESO programmes 66.D-0571 and 106.21ML.003.
We present a novel double-copy prescription for gauge fields at the Lagrangian level and apply it to the original double copy, couplings to matter and the soft theorem. The Yang-Mills Lagrangian in light-cone gauge is mapped directly to the N = 0 supergravity Lagrangian in light-cone gauge to trilinear order, and we show that the obtained result is manifestly equivalent to Einstein gravity at tree level up to this order. The application of the double-copy prescription to couplings to matter is exemplified by scalar and fermionic QCD and finally the soft-collinear effective QCD Lagrangian. The mapping of the latter yields an effective description of an energetic Dirac fermion coupled to the graviton, Kalb-Ramond, and dilaton fields, from which the fermionic gravitational soft and next-to-soft theorems follow.
We present a coherent study of the impact of neutrino interactions on the r-process element nucleosynthesis and the heating rate produced by the radioactive elements synthesized in the dynamical ejecta of neutron star-neutron star (NS-NS) mergers. We have studied the material ejected from four NS-NS merger systems based on hydrodynamical simulations which handle neutrino effects in an elaborate way by including neutrino equilibration with matter in optically thick regions and re-absorption in optically thin regions. We find that the neutron richness of the dynamical ejecta is significantly affected by the neutrinos emitted by the post-merger remnant, in particular when compared to a case neglecting all neutrino interactions. Our nucleosynthesis results show that a solar-like distribution of r-process elements with mass numbers $A \gtrsim 90$ is produced, including a significant enrichment in Sr and a reduced production of actinides compared to simulations without inclusion of the nucleonic weak processes. The composition of the dynamically ejected matter as well as the corresponding rate of radioactive decay heating are found to be rather independent of the system mass asymmetry and the adopted equation of state. This approximate degeneracy in abundance pattern and heating rates can be favourable for extracting the ejecta properties from kilonova observations, at least if the dynamical component dominates the overall ejecta. Part II of this work will study the light curve produced by the dynamical ejecta of our four NS merger models.
Observations of the SNR Cassiopeia A (Cas A) show asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star. We investigate whether a past interaction of Cas A with a massive asymmetric shell of the circumstellar medium can account for the observed asymmetries. We performed 3D MHD simulations that describe the remnant evolution from the SN to its interaction with a circumstellar shell. The initial conditions are provided by a 3D neutrino-driven SN model whose morphology resembles Cas A. We explored the parameter space of the shell, searching for a set of parameters able to produce reverse shock asymmetries at the age of 350 years analogous to those observed in Cas A. The interaction of the remnant with the shell can match the observed reverse shock asymmetries if the shell was asymmetric, with the densest portion on the near side to the northwest (NW). According to our models, the shell was thin, with a radius of about 1.5 pc. The reverse shock shows the following asymmetries at the age of Cas A: i) it moves inward in the observer frame in the NW region, while it moves outward in other regions; ii) the geometric center of the reverse shock is offset to the NW by 0.1 pc from the geometric center of the forward shock; iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km/s) than in other regions (below 2000 km/s). Our findings suggest an interaction of Cas A with an asymmetric circumstellar shell between 180 and 240 years after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred about $10^5$ years prior to core-collapse. We estimate a total mass of the shell of approximately 2.6 $M_\odot$.
We develop a novel data-driven method for generating synthetic optical observations of galaxy clusters. In cluster weak lensing, the interplay between analysis choices and systematic effects related to source galaxy selection, shape measurement, and photometric redshift estimation can be best characterized in end-to-end tests going from mock observations to recovered cluster masses. To create such test scenarios, we measure and model the photometric properties of galaxy clusters and their sky environments from the Dark Energy Survey Year 3 (DES Y3) data in two bins of cluster richness $\lambda \in [30; 45)$, $\lambda \in [45; 60)$ and three bins in cluster redshift ($z\in [0.3; 0.35)$, $z\in [0.45; 0.5)$ and $z\in [0.6; 0.65)$. Using deep-field imaging data, we extrapolate galaxy populations beyond the limiting magnitude of DES Y3 and calculate the properties of cluster member galaxies via statistical background subtraction. We construct mock galaxy clusters as random draws from a distribution function, and render mock clusters and line-of-sight catalogues into synthetic images in the same format as actual survey observations. Synthetic galaxy clusters are generated from real observational data, and thus are independent from the assumptions inherent to cosmological simulations. The recipe can be straightforwardly modified to incorporate extra information, and correct for survey incompleteness. New realizations of synthetic clusters can be created at minimal cost, which will allow future analyses to generate the large number of images needed to characterize systematic uncertainties in cluster mass measurements.
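The statistical background subtraction referenced above can be sketched in a few lines: the magnitude distribution of cluster member galaxies is estimated by subtracting the area-scaled field counts from the counts along the cluster line of sight. All numbers below are synthetic and purely illustrative, not DES Y3 values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic magnitudes: field galaxies everywhere, plus cluster members
# concentrated around a characteristic magnitude (toy red-sequence-like peak).
field = rng.uniform(18.0, 24.0, size=20000)           # background/foreground
members = rng.normal(21.0, 0.5, size=1000)            # true cluster members
cluster_los = np.concatenate([field[:2000], members]) # cluster line of sight

bins = np.linspace(18.0, 24.0, 25)
n_clu, _ = np.histogram(cluster_los, bins=bins)
n_fld, _ = np.histogram(field, bins=bins)

# Scale the field counts by the ratio of solid angles: here the cluster
# aperture covers 1/10 of the area sampled by the field catalogue.
area_ratio = 2000 / 20000
members_est = n_clu - area_ratio * n_fld

print("estimated member counts per bin:", members_est)
print("estimated total members:", members_est.sum())
```

By construction the field contamination cancels in the mean, so the summed estimate recovers the injected 1000 members up to shot noise, while individual bins retain Poisson fluctuations.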
We present component-separated maps of the primary cosmic microwave background/kinematic Sunyaev-Zel'dovich (SZ) amplitude and the thermal SZ Compton-$y$ parameter, created using data from the South Pole Telescope (SPT) and the Planck satellite. These maps, which cover the $\sim$2500 deg$^2$ of the southern sky imaged by the SPT-SZ survey, represent a significant improvement over previous such products available in this region by virtue of their higher angular resolution (1.25 arcmin for our highest-resolution Compton-$y$ maps) and lower noise at small angular scales. In this work we detail the construction of these maps using linear combination techniques, including our method for limiting the correlation of our lowest-noise Compton-$y$ map products with the cosmic infrared background. We perform a range of validation tests on these data products to test our sky modeling and combination algorithms, and we find good performance in all of these tests. Recognizing the potential utility of these data products for a wide range of astrophysical and cosmological analyses, including studies of the gas properties of galaxies, groups, and clusters, we make these products publicly available at pole.uchicago.edu/public/data/sptsz_ymap and on the NASA/LAMBDA website.
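A standard example of the linear combination techniques mentioned above is the minimum-variance internal linear combination (ILC), with weights $w = C^{-1}a / (a^{\top}C^{-1}a)$, where $a$ is the spectral response of the target component and $C$ the band-band covariance. The toy setup below uses invented noise levels, not SPT or Planck characteristics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 3 frequency bands observe a common signal s with spectral
# response a, plus independent per-band noise. (Illustrative values only.)
n_pix = 50000
a = np.array([1.0, 1.0, 1.0])                  # CMB-like response in all bands
s = rng.normal(0.0, 1.0, n_pix)                # target component
noise_amp = np.array([0.5, 0.2, 0.8])          # per-band noise levels
maps = a[:, None] * s + noise_amp[:, None] * rng.normal(size=(3, n_pix))

# Minimum-variance ILC weights: w = C^{-1} a / (a^T C^{-1} a).
# The constraint a^T w = 1 preserves the target component exactly.
C = np.cov(maps)
Cinv_a = np.linalg.solve(C, a)
w = Cinv_a / (a @ Cinv_a)

recovered = w @ maps
print("weights:", w, "sum:", w.sum())
print("residual noise std:", np.std(recovered - s))
```

Because $a^{\top}w = 1$, the recovered map contains the target signal with unit response, and the residual noise is lower than in any single band.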
The majority of existing results for the kilonova (or macronova) emission from material ejected during a neutron-star (NS) merger are based on (quasi-) one-zone models or manually constructed toy-model ejecta configurations. In this study, we present a kilonova analysis of the material ejected during the first $\sim 10\,$ ms of a NS merger, called dynamical ejecta, using directly the outflow trajectories from general relativistic smoothed-particle hydrodynamics simulations, including a sophisticated neutrino treatment and the corresponding nucleosynthesis results, which have been presented in Part I of this study. We employ a multidimensional two-moment radiation transport scheme with approximate M1 closure to evolve the photon field and use a heuristic prescription for the opacities found by calibration with atomic-physics-based reference results. We find that the photosphere is generically ellipsoidal but augmented with small-scale structure and produces emission that is about 1.5-3 times stronger towards the pole than the equator. The kilonova typically peaks after $0.7\!-\!1.5\,$ d in the near-infrared frequency regime with luminosities between $3\!-\!7\times 10^{40}\,$ erg s$^{-1}$ and at photospheric temperatures of $2.2\!-\!2.8\times 10^3\,$ K. A softer equation of state or higher binary-mass asymmetry leads to a longer and brighter signal. Significant variations of the light curve are also obtained for models with artificially modified electron fractions, emphasizing the importance of a reliable neutrino-transport modelling. None of the models investigated here, which only consider dynamical ejecta, produces a transient as bright as AT2017gfo. The near-infrared peak of our models is incompatible with the early blue component of AT2017gfo.
MadJax is a tool for generating and evaluating differentiable matrix elements of high energy scattering processes. As such, it is a step towards a differentiable programming paradigm in high energy physics that facilitates the incorporation of high energy physics domain knowledge, encoded in simulation software, into gradient based learning and optimization pipelines. MadJax comprises two components: (a) a plugin to the general purpose matrix element generator MadGraph that integrates matrix element and phase space sampling code with the JAX differentiable programming framework, and (b) a standalone wrapping API for accessing the matrix element code and its gradients, which are computed with automatic differentiation. The MadJax implementation and example applications of simulation based inference and normalizing flow based matrix element modeling, with capabilities enabled uniquely with differentiable matrix elements, are presented.
To a good approximation, on large scales, the evolved two-point correlation function of biased tracers is related to the initial one by a convolution with a smearing kernel. For Gaussian initial conditions, the smearing kernel is Gaussian, so if the initial correlation function is parametrized using simple polynomials, then the evolved correlation function is a sum of generalized Laguerre functions of half-integer order. This motivates an analytic "Laguerre reconstruction" algorithm which previous work has shown is fast and accurate. This reconstruction requires as input the width of the smearing kernel. We show that the method can be extended to estimate the width of the smearing kernel from the same dataset. This estimate, and associated uncertainties, can then be used to marginalize over the distribution of reconstructed shapes and hence provide error estimates on the value of the distance scale. This procedure is not tied to a particular cosmological model. We also show that if, instead, we parametrize the evolved correlation function using simple polynomials, then the initial one is a sum of Hermite polynomials, again enabling fast and accurate deconvolution. If one is willing to use constraints on the smearing scale from other datasets, then marginalizing over its value is simpler for this latter, "Hermite" reconstruction, potentially providing further speed-ups in cosmological analyses.
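The Hermite deconvolution statement above rests on a classical identity: Gaussian smearing (the Weierstrass transform with width $\sigma$) maps the probabilists' Hermite polynomial $\sigma^n \mathrm{He}_n(x/\sigma)$ to the monomial $x^n$, so a simple polynomial fit to the evolved correlation function deconvolves term by term. A small numerical check of this identity using Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_n

# Gauss-HermiteE quadrature: sum_i w_i f(y_i) ~ integral f(y) exp(-y^2/2) dy
nodes, weights = He.hermegauss(40)
norm = np.sqrt(2.0 * np.pi)

def smear_hermite(n, x, sigma):
    """Gaussian smearing (width sigma) of sigma^n He_n(y/sigma), evaluated at x."""
    y = x + sigma * nodes                  # integration variable y = x + sigma*z
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                        # select the single polynomial He_n
    vals = sigma**n * He.hermeval(y / sigma, coeffs)
    return np.dot(weights, vals) / norm    # expectation over N(0, sigma^2)

sigma, x = 1.3, 0.7
for n in range(6):
    print(n, smear_hermite(n, x, sigma), x**n)  # the two columns agree
```

For instance, $\sigma^2\mathrm{He}_2(y/\sigma) = y^2 - \sigma^2$, and averaging over a Gaussian of variance $\sigma^2$ centered at $x$ gives $x^2 + \sigma^2 - \sigma^2 = x^2$; the quadrature reproduces this exactly for polynomial integrands.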
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have their spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on $f/b$ and $D_{\rm M}H$, where $f$ is the linear growth rate of density fluctuations, $b$ the galaxy bias, $D_{\rm M}$ the comoving angular diameter distance, and $H$ the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of $f/b$ and 0.5% on $D_{\rm M}H$ in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, $w$, for dark energy will be measured with a precision of about 10%, consistent with previous more approximate forecasts.
This paper is published on behalf of the Euclid Consortium.
Galaxy cluster masses, rich with cosmological information, can be estimated from internal dark matter (DM) velocity dispersions, which in turn can be observationally inferred from satellite galaxy velocities. However, galaxies are biased tracers of the DM, and the bias can vary over host halo and galaxy properties as well as time. We precisely calibrate the velocity bias $b_v$, defined as the ratio of galaxy and DM velocity dispersions, as a function of redshift, host halo mass, and galaxy stellar mass threshold ($M_{\rm \star , sat}$), for massive haloes ($M_{\rm 200c}\gt 10^{13.5} \, {\rm M}_\odot$) from five cosmological simulations: IllustrisTNG, Magneticum, Bahamas + Macsis, The Three Hundred Project, and MultiDark Planck-2. We first compare scaling relations for galaxy and DM velocity dispersion across simulations; the former is estimated using a new ensemble velocity likelihood method that is unbiased for low galaxy counts per halo, while the latter uses a local linear regression. The simulations show consistent trends of $b_v$ increasing with $M_{\rm 200c}$ and decreasing with redshift and $M_{\rm \star , sat}$. The ensemble-estimated theoretical uncertainty in $b_v$ is 2-3 per cent, but becomes percent-level when considering only the three highest resolution simulations. We update the mass-richness normalization for an SDSS redMaPPer cluster sample, and find our improved $b_v$ estimates reduce the normalization uncertainty from 22 to 8 per cent, demonstrating that dynamical mass estimation is competitive with weak lensing mass estimation. We discuss necessary steps for further improving this precision. Our estimates for $b_v(M_{\rm 200c}, M_{\rm \star , sat}, z)$ are made publicly available.
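The small-sample bias that an ensemble likelihood avoids is easy to demonstrate: with only a handful of tracers per halo, the average of per-halo dispersion estimates is biased low, while pooling the residuals of all haloes into a single Gaussian likelihood recovers the true dispersion. The sketch below is a toy stand-in for such an estimator, not the specific method of the paper, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

sigma_true = 500.0                      # km/s, hypothetical true LOS dispersion
n_halo, n_sat = 2000, 4                 # many haloes, few satellites each
v = rng.normal(0.0, sigma_true, size=(n_halo, n_sat))

# Naive approach: average the per-halo sample dispersions. Each halo's mean
# velocity must be estimated from its own few satellites, and E[s] < sigma
# for small samples, so this is biased low.
naive = v.std(axis=1, ddof=1).mean()

# Ensemble approach: pool the mean-subtracted residuals of all haloes into a
# single Gaussian likelihood; the maximum-likelihood dispersion then uses
# n_halo * (n_sat - 1) degrees of freedom at once.
resid = v - v.mean(axis=1, keepdims=True)
ensemble = np.sqrt((resid**2).sum() / (n_halo * (n_sat - 1)))

print(f"naive:    {naive:.1f} km/s")    # noticeably below sigma_true
print(f"ensemble: {ensemble:.1f} km/s") # close to sigma_true
```

With four satellites per halo, the expected value of the per-halo sample standard deviation is only about 92 per cent of the true dispersion, whereas the pooled estimator is unbiased in the dispersion squared and converges as the number of haloes grows.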
We derive supernova (SN) bounds on muon-philic bosons, taking advantage of the recent emergence of muonic SN models. Our main innovations are to consider scalars $\phi$ in addition to pseudoscalars $a$ and to include systematically the generic two-photon coupling $G_{\gamma\gamma}$ implied by a muon triangle loop. This interaction allows for Primakoff scattering and radiative boson decays. The globular-cluster bound $G_{\gamma\gamma} < 0.67\times10^{-10}\,\mathrm{GeV}^{-1}$ carries over to the muonic Yukawa couplings as $g_a < 3.1\times10^{-9}$ and $g_\phi < 4.6\times10^{-9}$ for $m_{a,\phi} \lesssim 100\,\mathrm{keV}$, so SN arguments become interesting mainly for larger masses. If bosons escape freely from the SN core, the main constraints originate from SN 1987A $\gamma$ rays and the diffuse cosmic $\gamma$-ray background. The latter allows at most $10^{-4}$ of a typical total SN energy of $E_{\rm SN} \simeq 3\times10^{53}\,\mathrm{erg}$ to show up as $\gamma$ rays, for $m_{a,\phi} \gtrsim 100\,\mathrm{keV}$ implying $g_a \lesssim 0.9\times10^{-10}$ and $g_\phi \lesssim 0.4\times10^{-10}$. In the trapping regime the bosons emerge as quasi-thermal radiation from a region near the neutrino sphere and match $L_\nu$ for $g_{a,\phi} \simeq 10^{-4}$. However, the $2\gamma$ decay is so fast that all the energy is dumped into the surrounding progenitor-star matter, whereas at most $10^{-2}E_{\rm SN}$ may show up in the explosion. To suppress boson emission below this level we need yet larger couplings, $g_a \gtrsim 2\times10^{-3}$ and $g_\phi \gtrsim 4\times10^{-3}$. Muonic scalars can explain the muon magnetic-moment anomaly for $g_\phi \simeq 0.4\times10^{-3}$, a value hard to reconcile with SN physics despite the uncertainty of the explosion-energy bound. For generic axionlike particles, this argument covers the "cosmological triangle" in the $G_{a\gamma\gamma}$-$m_a$ parameter space.