We compute the first moments of the q² distribution in inclusive semileptonic B decays as functions of the lower cut on q², confirming a number of results given in the literature and adding the O(α_s² β_0) BLM contributions. We then include the q² moments recently measured by Belle and Belle II in a global fit to the moments. The new data are compatible with the other measurements and slightly decrease the uncertainty on the nonperturbative parameters and on |Vcb|. Our updated value is |Vcb| = (41.97 ± 0.48) × 10⁻³.
The evolved stages of massive stars are poorly understood, but invaluable constraints can be derived from spatially resolved observations of nearby red supergiants, such as Betelgeuse. Atacama Large Millimeter/submillimeter Array (ALMA) observations of Betelgeuse showing a dipolar velocity field have been interpreted as evidence for a projected rotation rate of about 5 km s‑1. This is 2 orders of magnitude larger than predicted by single-star evolution, which led to suggestions that Betelgeuse is a binary merger. We propose instead that large-scale convective motions can mimic rotation, especially if they are only partially resolved. We support this claim with 3D CO5BOLD simulations of nonrotating red supergiants that we postprocessed to predict ALMA images and SiO spectra. We show that our synthetic radial velocity maps have a 90% chance of being falsely interpreted as evidence for a projected rotation rate of 2 km s‑1 or larger for our fiducial simulation. We conclude that we need at least another ALMA observation to firmly establish whether Betelgeuse is indeed rapidly rotating. Such observations would also provide insight into the role of angular momentum and binary interaction in the late evolutionary stages. The data will further probe the structure and complex physical processes in the atmospheres of red supergiants, which are immediate progenitors of supernovae and are believed to be essential in the formation of gravitational-wave sources.
The paradigm-changing possibility of collective neutrino-antineutrino oscillations was recently advanced in analogy to collective flavor oscillations. However, the amplitude for the backward scattering process ν(p₁) ν̄(p₂) → ν(p₂) ν̄(p₁) is helicity suppressed and vanishes for massless neutrinos, implying that there is no off-diagonal refractive index between ν and ν̄ for a single flavor of massless neutrinos. For a nonvanishing mass, collective helicity oscillations are possible, representing de facto ν-ν̄ oscillations in the Majorana case. However, such phenomena are suppressed by the smallness of neutrino masses, as discussed in the previous literature.
Core-collapse supernovae are driven by neutrinos. Coherent neutrino-neutrino forward scattering leads to flavor conversion phenomena, which are expected to have an impact on the dynamics of a supernova. Due to the complexity of the problem, a fully self-consistent treatment is not possible. In this study, I use a parameterized prescription, aimed at maximal flavor conversion, and infer the maximum impact of flavor conversions.
Context. Recently, large and homogeneous samples of cataclysmic variables identified by the Sloan Digital Sky Survey (SDSS) were published. In these samples, the famous orbital period gap, which is a dearth of systems in the orbital period range ∼2 − 3 h and the defining feature of most evolutionary models for cataclysmic variables, has been claimed not to be clearly present. If true, this finding would completely change our picture of cataclysmic variable evolution.
Aims: In this Letter we focus on potential differences, with respect to the orbital period gap, between cataclysmic variables in which the magnetic field of the white dwarf is strong enough to connect with that of the donor star (so-called polars) and non-polar cataclysmic variables, as the white dwarf magnetic field in polars has been predicted to reduce the strength of angular momentum loss through magnetic braking.
Methods: We separated the SDSS I-IV sample of cataclysmic variables into polars and non-polar systems and performed statistical tests to evaluate whether the period distributions are bimodal as predicted by the standard model for cataclysmic variable evolution or not. We also compared the SDSS I-IV period distribution of non-polars to that of other samples of cataclysmic variables.
Results: We confirm the existence of a period gap in the SDSS I-IV sample of non-polar cataclysmic variables with > 98% confidence. The boundaries of the orbital period gap are 147 and 191 min, with the lower boundary differing from previously published values (129 min). The orbital period distribution of polars from SDSS I-IV is clearly different and does not show a similar period gap.
Conclusions: The SDSS samples, as well as previous samples of cataclysmic variables, are consistent with the standard theory of cataclysmic variable evolution. Magnetic braking does indeed seem to be disrupted around the fully convective boundary, which causes a detached phase during cataclysmic variable evolution. In polars, the white dwarf magnetic field reduces the strength of magnetic braking, and consequently the orbital period distribution of polars does not display as profound and extended a period gap as that of non-polars. It remains unclear why the braking rates derived from the rotation of single stars in open clusters favour prescriptions that are unable to explain the orbital period distribution of cataclysmic variables.
We describe a family of twisted partition functions for relativistic spinning particle models. For suitable choices of fugacities, this computes a refined Euler characteristic that counts the dimension of the physical states for arbitrary picture and, furthermore, encodes the complete BV-spectrum of the effective space-time gauge theory originating from this model upon second quantization. The relation between twisted world-line partition functions and the spectrum of the space-time theory is most easily seen on-shell, but we give an off-shell description as well. Finally, we discuss the construction of a space-time action in terms of the world-line fields, in analogy to string field theory.
The orbital distribution of the S-star cluster surrounding the supermassive black hole in the center of the Milky Way is analyzed. A tight dependence of the pericenter distance r p on orbital eccentricity e ⋆ is found, $\mathrm{log}({r}_{{\rm{p}}})\sim (1-{e}_{\star })$ , which cannot be explained simply by a random distribution of semimajor axis and eccentricities. No stars are found in the region with high e ⋆ and large $\mathrm{log}({r}_{{\rm{p}}})$ or in the region with low e ⋆ and small $\mathrm{log}({r}_{{\rm{p}}})$ . Although the sample is still small, the G-clouds show a very similar distribution. The likelihood $P(\mathrm{log}({r}_{{\rm{p}}}),(1-{e}_{\star }))$ to determine the orbital parameters of S-stars is determined. P is very small for stars with large e ⋆ and large $\mathrm{log}({r}_{{\rm{p}}})$ . S-stars might exist in this region. To determine their orbital parameters, one however needs observations over a longer time period. On the other hand, if stars would exist in the region of low $\mathrm{log}({r}_{{\rm{p}}})$ and small e ⋆, their orbital parameters should by now have been determined. That this region is unpopulated therefore indicates that no S-stars exist with these orbital characteristics, providing constraints for their formation. We call this region, defined by $\mathrm{log}({r}_{{\rm{p}}}/\mathrm{AU})\lt 1.57+2.6(1-{e}_{\star })$ , the zone of avoidance. Finally, it is shown that the observed frequency of eccentricities and pericenter distances is consistent with a random sampling of $\mathrm{log}({r}_{{\rm{p}}})$ and e ⋆ if one takes into account the fact that no stars exist in the zone of avoidance and that orbital parameters cannot yet be determined for stars with large r p and large e ⋆.
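The zone-of-avoidance inequality quoted above, log(r_p/AU) < 1.57 + 2.6(1 − e⋆), can be checked directly for any orbit. A minimal sketch (the function name and the example orbits are illustrative, not from the paper):

```python
import math

def in_zone_of_avoidance(r_p_au: float, e: float) -> bool:
    """Return True if an orbit with pericenter distance r_p (in AU) and
    eccentricity e falls inside the unpopulated zone of avoidance,
    defined by log10(r_p/AU) < 1.57 + 2.6 * (1 - e)."""
    return math.log10(r_p_au) < 1.57 + 2.6 * (1.0 - e)

# A low-eccentricity orbit with a modest pericenter lies inside the zone:
print(in_zone_of_avoidance(300.0, 0.3))   # log10(300) = 2.48 < 3.39
# A nearly radial orbit with the same order pericenter lies outside:
print(in_zone_of_avoidance(100.0, 0.99))  # log10(100) = 2.00 > 1.60
```

The second case illustrates why highly eccentric S-stars with large pericenters are allowed by the criterion even though none have yet had their orbits determined.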
Aims: The BL Lac 1ES 2344+514 is known for temporary extreme properties characterised by a shift of the synchrotron spectral energy distribution (SED) peak energy νsynch, p above 1 keV. While those extreme states have only been observed during high flux levels thus far, additional multi-year observing campaigns are required to achieve a coherent picture. Here, we report the longest investigation of the source from radio to very high energy (VHE) performed so far, focussing on a systematic characterisation of the intermittent extreme states.
Methods: We organised a monitoring campaign covering a 3-year period from 2019 to 2021. More than ten instruments participated in the observations in order to cover the emission from radio to VHE. In particular, sensitive X-ray measurements by XMM-Newton, NuSTAR, and AstroSat took place simultaneously with multi-hour MAGIC observations, providing an unprecedented constraint of the two SED components for this blazar.
Results: While our results confirm that 1ES 2344+514 typically exhibits νsynch, p > 1 keV during elevated flux periods, we also find periods where the extreme state coincides with low flux activity. Strong spectral variability thus also occurs in the quiescent state, and is likely caused by an increase in the electron acceleration efficiency without a change in the electron injection luminosity. On the other hand, we also report a strong X-ray flare (among the brightest for 1ES 2344+514) without a significant shift of νsynch, p. During this particular flare, the X-ray spectrum is among the softest of the campaign. This unveils complexity in the spectral evolution, in which the common harder-when-brighter trend observed in BL Lacs is violated. By combining Swift-XRT and Swift-UVOT measurements during a low and hard X-ray state, we find an excess of the UV flux with respect to an extrapolation of the X-ray spectrum to lower energies. This UV excess implies that at least two regions contribute significantly to the infrared/optical/ultraviolet/X-ray emission. Using the simultaneous MAGIC, XMM-Newton, NuSTAR, and AstroSat observations, we argue that a region possibly associated with the 10 GHz radio core may explain such an excess. Finally, we investigate a VHE flare that shows an absence of simultaneous variability in the 0.3−2 keV band. Using time-dependent leptonic modelling, we show that this behaviour, in contradiction with single-zone scenarios, can instead be explained by a two-component model.
We present high-definition observations with the James Webb Space Telescope (JWST) of >1000 Cepheids in a geometric anchor of the distance ladder, NGC 4258, and in five hosts of eight Type Ia supernovae, a far greater sample than previous studies with JWST. These galaxies individually contain the largest samples of Cepheids, an average of >150 each, producing the strongest statistical comparison to those previously measured with the Hubble Space Telescope (HST) in the near-infrared (NIR). They also span the distance range of those used to determine the Hubble constant with HST, allowing us to search for a distance-dependent bias in HST measurements. The superior resolution of JWST negates crowding noise, the largest source of variance in the NIR Cepheid period–luminosity relations (Leavitt laws) measured with HST. Together with the use of two epochs to constrain Cepheid phases and three filters to remove reddening, we reduce the dispersion in the Cepheid P–L relations by a factor of 2.5. We find no significant difference in the mean distance measurements determined from HST and JWST, with a formal difference of ‑0.01 ± 0.03 mag. This result is independent of zero-points and analysis variants including metallicity dependence, local crowding, choice of filters, and slope of the relations. We can reject the hypothesis of unrecognized crowding of Cepheid photometry from HST that grows with distance as the cause of the "Hubble tension" at 8.2σ, i.e., greater confidence than that of the Hubble tension itself. We conclude that errors in photometric measurements of Cepheids across the distance ladder do not significantly contribute to the tension.
We develop a neural-network-based pipeline to estimate the masses of galaxy clusters with a known redshift directly from photon information in X-rays. Our neural networks are trained using supervised learning on simulations of eROSITA observations, focusing in this paper on the eROSITA Final Equatorial Depth Survey (eFEDS). We use convolutional neural networks modified to include additional information about the cluster, in particular its redshift. In contrast to existing work, we utilize simulations including background and point sources to develop a tool that is usable directly on observational eROSITA data over an extended mass range, from group-size halos to massive clusters with 10¹³ M⊙ < M < 10¹⁵ M⊙. Using this method, we are able to provide, for the first time, neural network mass estimates for the observed eFEDS cluster sample from Spectrum-Roentgen-Gamma/eROSITA observations, and we find performance consistent with weak-lensing-calibrated masses. In this measurement, we do not use weak lensing information; we only use the cluster mass information that was used to calibrate the cluster properties in the simulations. When compared to simulated data, we observe a reduced scatter with respect to luminosity- and count-rate-based scaling relations.
We also comment on the application of this method to other upcoming eROSITA All-Sky Survey observations.
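The key architectural idea described above, injecting the scalar redshift into a convolutional pipeline alongside image-derived features, can be illustrated with a toy numpy forward pass. Everything here (shapes, weights, the single conv layer) is a schematic assumption, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (toy illustration, not optimized)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def mass_estimator(photon_image, redshift, kernel, w_feat, w_z, bias):
    """Toy forward pass: convolve the photon image, ReLU, global-average-
    pool, then combine the pooled feature with the scalar redshift in a
    final linear layer -- mirroring the idea of feeding redshift in as
    extra information. Weights are random placeholders."""
    feat = np.maximum(conv2d_valid(photon_image, kernel), 0.0)  # ReLU
    pooled = feat.mean()                                        # global average pool
    return w_feat * pooled + w_z * redshift + bias              # linear head

img = rng.random((16, 16))          # stand-in for an X-ray photon image
k = rng.standard_normal((3, 3))
log_mass = mass_estimator(img, redshift=0.35, kernel=k,
                          w_feat=1.2, w_z=-0.4, bias=14.0)
print(float(log_mass))
```

In a real framework (e.g. a trained CNN), the concatenation of redshift with the flattened convolutional features would occur before several dense layers; the toy keeps only the structural point.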
The Euclid mission of the European Space Agency will perform a survey of weak lensing cosmic shear and galaxy clustering in order to constrain cosmological models and fundamental physics. We expand and adjust the mock Euclid likelihoods of the MontePython software in order to match the exact recipes used in previous Euclid Fisher matrix forecasts for several probes: weak lensing cosmic shear, photometric galaxy clustering, the cross-correlation between the latter observables, and spectroscopic galaxy clustering. We also establish which precision settings are required when running the Einstein-Boltzmann solvers CLASS and CAMB in the context of Euclid. For the minimal cosmological model, extended to include dynamical dark energy, we perform Fisher matrix forecasts based directly on a numerical evaluation of second derivatives of the likelihood with respect to model parameters. We compare our results with those of other forecasting methods and tools. We show that such MontePython forecasts agree very well with previous Fisher forecasts published by the Euclid Collaboration, and also, with new forecasts produced by the CosmicFish code, now interfaced directly with the two Einstein-Boltzmann solvers CAMB and CLASS. Moreover, to establish the validity of the Gaussian approximation, we show that the Fisher matrix marginal error contours coincide with the credible regions obtained when running Monte Carlo Markov Chains with MontePython while using the exact same mock likelihoods. The new Euclid forecast pipelines presented here are ready for use with additional cosmological parameters, in order to explore extended cosmological models.
Nuclear double-beta decays are a unique probe to search for new physics beyond the standard model. Hypothesized particles, non-standard interactions, or the violation of fundamental symmetries would affect the decay kinematics, creating detectable and characteristic experimental signatures. In particular, the energy distribution of the electrons emitted in the decay gives an insight into the decay mechanism and has been studied in several isotopes and experiments. No deviations from the prediction of the standard model have been reported yet. However, several new experiments are underway or in preparation and will soon increase the sensitivity of these beyond-the-standard-model physics searches, exploring uncharted parts of the parameter space. This review brings together phenomenological and experimental aspects related to new-physics searches in double-beta decay experiments, focusing on the testable models, the most-sensitive detection techniques, and the discovery opportunities of this field.
The origin of obscuration in active galactic nuclei (AGNs) is still an open debate. In particular, it is unclear what drives the relative contributions to the line-of-sight column densities from galaxy-scale and torus-linked obscuration. The latter source is expected to play a significant role in Unification Models, while the former is thought to be relevant in both Unification and Evolutionary models. In this work, we make use of a combination of cosmological semi-analytic models and semi-empirical prescriptions for the properties of galaxies and AGN, to study AGN obscuration. We consider a detailed object-by-object modelling of AGN evolution, including different AGN light curves (LCs), gas density profiles, and also AGN feedback-induced gas cavities. Irrespective of our assumptions on specific AGN LC or galaxy gas fractions, we find that, on the strict assumption of an exponential profile for the gas component, galaxy-scale obscuration alone can hardly reproduce the fraction of log (NH/cm-2) ≥ 24 sources at least at z ≲ 3. This requires an additional torus component with a thickness that decreases with luminosity to match the data. The torus should be present in all evolutionary stages of a visible AGN to be effective, although galaxy-scale gas obscuration may be sufficient to reproduce the obscured fraction with 22 < log (NH/cm-2) < 24 (Compton-thin, CTN) if we assume extremely compact gas disc components. The claimed drop of CTN fractions with increasing luminosity does not appear to be a consequence of AGN feedback, but rather of gas reservoirs becoming more compact with decreasing stellar mass.
We study the 10 Myr evolution of parsec-scale stellar discs with initial masses of Mdisc = 1.0-$7.5 \times 10^4\, \mathrm{M}_\odot$ and eccentricities einit = 0.1-0.9 around supermassive black holes (SMBHs). Our disc models are embedded in a spherical background potential and have top-heavy single and binary star initial mass functions (IMFs) with slopes of 0.25-1.7. The systems are evolved with the N-body code BIFROST, including post-Newtonian (PN) equations of motion and simplified stellar evolution. All discs are unstable and evolve on Myr time-scales towards similar eccentricity distributions peaking at e⋆ ~ 0.3-0.4. Models with high einit also develop a very eccentric (e⋆ ≳ 0.9) stellar population. For higher disc masses Mdisc ≳ 3 × 10^4 M⊙, the disc disruption dynamics is more complex than the standard secular eccentric disc instability, with opposite precession directions at different disc radii - a precession direction instability. We present an analytical model describing this behaviour. A milliparsec population of N ~ 10-100 stars forms around the SMBH in all models. For low einit, stars migrate inward, while for einit ≳ 0.6 stars are captured by the Hills mechanism. Without PN effects, after 6 Myr, the captured stars have a sub-thermal eccentricity distribution. We show that including PN effects prevents this thermalization by suppressing resonant relaxation, and thus the PN terms cannot be ignored. The number of tidally disrupted stars is similar to or larger than the number of milliparsec stars. None of the simulated models can simultaneously reproduce the kinematic and stellar population properties of the Milky Way centre clockwise disc and the S-cluster.
High-velocity disruptive stellar collisions driven by a supermassive black hole (BH) in dense nuclear clusters can rival the energetics of supergiant star explosions following the gravitational collapse of their iron cores. Starting from a sample of red-giant star collisions simulated with the hydrodynamics code AREPO, we generated photometric and spectroscopic observables using the non-local thermodynamic equilibrium, time-dependent radiative transfer code CMFGEN. Collisions from more extended giants, or more violent collisions (with higher velocities or smaller impact parameters), yield bolometric luminosities on the order of 10⁴³ erg s⁻¹ at 1 d, evolving on a timescale of a week to a bright plateau at ∼10⁴¹ erg s⁻¹ before plunging precipitously after 20-40 d at the end of the optically thick phase. This luminosity falls primarily in the UV in the first few days, thus when it is at its maximum, and shifts to the optical thereafter. Collisions at lower velocities or from less extended stars produce ejecta that are fainter but can remain optically thick for up to 40 d if they have a low expansion rate. This collision debris shows a spectral evolution similar to that observed or modeled for Type II supernovae from blue-supergiant star explosions, differing only in the more rapid transition to the nebular phase. Such BH-driven disruptive collisions should be detectable by high-cadence surveys in the UV, such as ULTRASAT.
Dark photons, aside from constituting non-relativistic dark matter, can also be generated relativistically through the decay or annihilation of other dark matter candidates, contributing to a galactic dark photon background. The production of dark photons tends to favor specific polarization modes, determined by the microscopic coupling between dark matter and dark photons. We leverage data obtained from previous searches for dark photon dark matter using a superconducting radio-frequency cavity to explore galactic dark photon fluxes. The interplay of anisotropic directions and Earth's rotation introduces a diurnal modulation of signals within the cavities, manifesting distinct variation patterns for longitudinal and transverse modes. Our findings highlight the efficacy of superconducting radio-frequency cavities, characterized by significantly high-quality factors, as powerful telescopes for detecting galactic dark photons, unveiling a novel avenue in the indirect search for dark matter through multi-messenger astronomy.
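The diurnal modulation described above arises from simple spherical trigonometry: as Earth rotates, the angle θ(t) between a fixed flux direction and a lab-fixed cavity axis varies over a sidereal day. A schematic sketch (the declination, latitude, and cos²θ scaling are illustrative assumptions, not the paper's detector response):

```python
import numpy as np

# Angle between a fixed source direction at declination `dec` and a cavity
# axis pointing to the local zenith at geographic latitude `lat`:
#   cos(theta) = sin(dec)*sin(lat) + cos(dec)*cos(lat)*cos(omega * t),
# with omega the sidereal rotation rate. A polarization coupling scaling
# schematically like cos^2(theta) then modulates over one sidereal day.
dec, lat = np.radians(-29.0), np.radians(46.0)   # illustrative values
T_sid = 23.934 * 3600.0                          # sidereal day in seconds
t = np.linspace(0.0, T_sid, 1000)
omega = 2.0 * np.pi / T_sid

cos_theta = np.sin(dec) * np.sin(lat) + np.cos(dec) * np.cos(lat) * np.cos(omega * t)
signal = cos_theta**2    # schematic modulation pattern for one mode

print(signal.max(), signal.min())
```

Because cosθ(t) sweeps through zero for these geometry choices, the schematic signal swings between near-zero and order-unity over a sidereal day, which is the kind of distinctive variation pattern that distinguishes polarization modes.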
Context. Recent observations of close detached eclipsing M and K dwarf binaries have provided substantial support for magnetic saturation when stars rotate sufficiently fast, leading to a magnetic braking (MB) torque proportional to the spin of the star.
Aims: We investigated here how strong MB torques need to be to reproduce the observationally inferred relative numbers of white dwarf plus M dwarf post-common-envelope binaries under the assumption of magnetic saturation.
Methods: We carried out binary population simulations with the BSE code adopting empirically derived inter-correlated main-sequence binary distributions as initial binary populations and compared the simulation outcomes with observations.
Results: We found that the dearth of extreme mass ratio binaries in the inter-correlated initial distributions is key to reproducing the large fraction of post-common-envelope binaries hosting low-mass M dwarfs (∼0.1 − 0.2 M⊙). In addition, orbital angular momentum loss rates due to MB should be high for M dwarfs with radiative cores and orders of magnitude smaller for fully convective stars to explain the observed dramatic change in the fraction of short-period binaries at the fully convective boundary.
Conclusions: We conclude that saturated but disrupted, that is, dropping drastically at the fully convective boundary, MB can explain the observations of both close main-sequence binaries containing M and K dwarfs and post-common-envelope binaries. Whether a similar prescription can explain the spin down rates of single stars and of binaries containing more massive stars needs to be tested.
The eROSITA telescope array aboard the Spektrum Roentgen Gamma (SRG) satellite began surveying the sky in December 2019, with the aim of producing all-sky X-ray source lists and sky maps of an unprecedented depth. Here we present catalogues of both point-like and extended sources using the data acquired in the first six months of survey operations (eRASS1; completed June 2020) over the half of the sky whose proprietary data rights lie with the German eROSITA Consortium. We describe the observation process, the data analysis pipelines, and the characteristics of the X-ray sources. With nearly 930 000 entries detected in the most sensitive 0.2-2.3 keV energy range, the eRASS1 main catalogue presented here increases the number of known X-ray sources in the published literature by more than 60%, and provides a comprehensive inventory of all classes of X-ray celestial objects, covering a wide range of physical processes. A smaller catalogue of 5466 sources detected in the less sensitive but harder 2.3-5 keV band is the result of the first true imaging survey of the entire sky above 2 keV. We present the methods used to identify and flag potential spurious sources in the catalogues, which we applied for this work, and we tested and validated the astrometric accuracy via cross-comparison with other X-ray and multi-wavelength catalogues. We show that the number counts of X-ray sources in eRASS1 are consistent with those derived over narrower fields by past X-ray surveys of a similar depth, and we explore the variation of the number counts as a function of the location in the sky. Adopting a uniform all-sky flux limit (at 50% completeness) of F_0.5−2 keV > 5 × 10⁻¹⁴ erg s⁻¹ cm⁻², we estimate that the eROSITA all-sky survey resolves into individual sources about 20% of the cosmic X-ray background in the 1-2 keV range. The catalogues presented here form part of the first data release (DR1) of the SRG/eROSITA all-sky survey.
Beyond the X-ray catalogues, DR1 contains all detected and calibrated event files, source products (light curves and spectra), and all-sky maps. Illustrative examples of these are provided.
The catalogue is available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/682/A34
This is the second paper in a series presenting the results from a 500 $h^{-1}$Mpc large constrained hydrodynamical simulation of the local Universe (SLOW). The initial conditions are based on peculiar velocities derived from the CosmicFlows-2 catalogue. The inclusion of a galaxy formation treatment allows us to directly predict observable properties of the intra-cluster medium (ICM) within galaxy clusters. Comparing the properties of observed galaxy clusters within the local Universe with those of their simulated counterparts enables us to assess the effectiveness of the initial condition constraints in accurately replicating the non-linear properties of the largest collapsed objects within the simulation. Based on the combination of several publicly available surveys, we identified 45 local Universe galaxy clusters in SLOW, including the 13 most massive clusters from the Planck SZ catalogue and 70% of those with $M_{500} > 2\times 10^{14}$ M$_{\odot}$. We then derived the probability of the cross-identification based on mass, X-ray luminosity, temperature, and Compton-y by comparing it to a random selection. In relation to previous constrained simulations of the local volume, we found in SLOW a much larger number of replicated galaxy clusters, whose simulation-based mass predictions fall within the uncertainties of the observational mass estimates. Comparing the median observed and simulated masses of our cross-identified sample allows us to independently deduce a hydrostatic mass bias of $(1-b)\approx0.87$. The SLOW constrained simulation of the local Universe faithfully reproduces numerous fundamental characteristics of the galaxy clusters within our local neighbourhood, opening a new avenue for studying the formation and evolution of a large set of individual galaxy clusters as well as testing our understanding of the physical processes governing the ICM.
We present a novel approach to measuring the expansion rate and the geometry of the Universe, which combines time-delay cosmography in lens galaxy clusters with pure samples of `cosmic chronometers' by probing the member galaxies. The former makes use of the measured time delays between the multiple images of time-varying sources strongly lensed by galaxy clusters, while the latter exploits the most massive and passive cluster member galaxies to measure the differential time evolution of the Universe. We applied two different statistical techniques, adopting realistic errors on the measured quantities, to assess the accuracy and the gain in precision on the values of the cosmological parameters. We demonstrate that the proposed combined method allows for a robust and accurate measurement of the value of the Hubble constant. In addition, this provides valuable information on the other cosmological parameters thanks to the complementarity between the two different probes in breaking parameter degeneracies. Finally, we showcased the immediate observational feasibility of the proposed joint method by taking advantage of the existing high-quality spectro-photometric data for several lens galaxy clusters.
We investigate the main tensions within the current standard model of cosmology from the perspective of the main statistics of cosmic voids, using the final BOSS DR12 data set. For this purpose, we present the first estimate of the S8 ≡ σ8 (Ωm/0.3)^{1/2} and H0 parameters obtained from void number counts and shape distortions. To analyze void counts we relied on an extension of the popular volume-conserving model for the void size function, tailored to application on data, including geometric and dynamic distortions. We calibrated the two nuisance parameters of this model with the official BOSS Collaboration mock catalogs and propagated their uncertainty through the statistical analysis of the BOSS void number counts. The constraints from void shapes come from the study of the geometric distortions of the stacked void-galaxy cross-correlation function. In this work we focus our analysis on the Ωm − σ8 and Ωm − H0 parameter planes and derive the marginalized constraints S8 = 0.813^{+0.093}_{−0.068} and H0 = 67.3^{+10.0}_{−9.1} km s⁻¹ Mpc⁻¹, which are fully compatible with constraints from the literature. These results are expected to improve notably in precision when analyzed jointly with independent probes, and will open a new viewing angle on the rising cosmological tensions in the near future.
At z ≲ 1, shock heating caused by large-scale velocity flows, and possibly violent feedback from galaxy formation, converts a significant fraction of the cool gas (T ∼ 10⁴ K) in the intergalactic medium (IGM) into a warm-hot phase (WHIM) with T > 10⁵ K, resulting in a significant deviation from the previously tight power-law IGM temperature-density relationship, T = T₀(ρ/ρ̄)^{γ−1}. This study explores the impact of the WHIM on measurements of the low-z IGM thermal state, [T₀, γ], based on the b-N_HI distribution of the Lyman-α forest. Exploiting a machine-learning-enabled simulation-based inference method trained on Nyx hydrodynamical simulations, we demonstrate that [T₀, γ] can still be reliably measured from the b-N_HI distribution at z = 0.1, notwithstanding the substantial WHIM in the IGM. To investigate the effects of different feedback, we apply this inference methodology to mock spectra derived from the IllustrisTNG and Illustris simulations at z = 0.1. The results suggest that the underlying [T₀, γ] of both simulations can be recovered with biases as low as |Δlog(T₀/K)| ≲ 0.05 dex, |Δγ| ≲ 0.1, smaller than the precision of a typical measurement. Given the large differences in the volume-weighted WHIM fractions between the three simulations (Illustris 38%, IllustrisTNG 10%, Nyx 4%), we conclude that the b-N_HI distribution is not sensitive to the WHIM under realistic conditions. Finally, we investigate the physical properties of the detectable Lyman-α absorbers, and discover that although their T and Δ distributions remain mostly unaffected by feedback, they are correlated with the photoionization rate used in the simulation.
We present the angular diameter distance measurement obtained with the Baryonic Acoustic Oscillation feature from galaxy clustering in the completed Dark Energy Survey, consisting of six years (Y6) of observations. We use the Y6 BAO galaxy sample, optimized for BAO science in the redshift range 0.6<z<1.2, with an effective redshift at zeff=0.85 and split into six tomographic bins. The sample has nearly 16 million galaxies over 4,273 square degrees. Our consensus measurement constrains the ratio of the angular distance to sound horizon scale to DM(zeff)/rd = 19.51±0.41 (at 68.3% confidence interval), resulting from comparing the BAO position in our data to that predicted by Planck ΛCDM via the BAO shift parameter α=(DM/rd)/(DM/rd)Planck. To achieve this, the BAO shift is measured with three different methods, Angular Correlation Function (ACF), Angular Power Spectrum (APS), and Projected Correlation Function (PCF) obtaining α= 0.952±0.023, 0.962±0.022, and 0.955±0.020, respectively, which we combine to α= 0.957±0.020, including systematic errors. When compared with the ΛCDM model that best fits Planck data, this measurement is found to be 4.3% and 2.1σ below the angular BAO scale predicted. To date, it represents the most precise angular BAO measurement at z>0.75 from any survey and the most precise measurement at any redshift from photometric surveys. The analysis was performed blinded to the BAO position and it is shown to be robust against analysis choices, data removal, redshift calibrations and observational systematics.
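A naive inverse-variance combination of the three quoted α measurements illustrates why the consensus error is larger than a simple average would suggest. This sketch deliberately ignores the strong correlations between the ACF, APS, and PCF estimators, which analyse the same galaxies, so the uncorrelated error below is smaller than the 0.020 quoted in the abstract:

```python
import numpy as np

# BAO shift measurements from the three methods quoted in the abstract:
# ACF, APS, PCF (values and errors taken directly from the text).
alphas = np.array([0.952, 0.962, 0.955])
sigmas = np.array([0.023, 0.022, 0.020])

# Standard inverse-variance weighting, valid only for independent data.
w = 1.0 / sigmas**2
alpha_comb = np.sum(w * alphas) / np.sum(w)
sigma_naive = 1.0 / np.sqrt(np.sum(w))

print(f"{alpha_comb:.3f} +/- {sigma_naive:.3f} (naive, uncorrelated)")
```

The naive central value lands close to the paper's combined α = 0.957, while the naive error (~0.012) undershoots the quoted 0.020, showing the impact of the neglected correlations and systematic terms.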
In this paper we present and validate the galaxy sample used for the analysis of the baryon acoustic oscillation (BAO) signal in the Dark Energy Survey (DES) Y6 data. The sample definition is based on a color- and redshift-dependent magnitude cut optimized to select galaxies at redshifts higher than 0.6, while ensuring a high-quality photo-z determination. The optimization is performed with a Fisher forecast algorithm, which finds the optimal i-magnitude cut to be i < 19.64 + 2.894 zph. For the optimal sample, we forecast an increase in the precision of the BAO measurement of ∼25% with respect to the Y3 analysis. Our BAO sample has a total of 15,937,556 galaxies in the redshift range 0.6 < zph < 1.2, and its angular mask covers 4,273.42 deg² to a depth of i = 22.5. We validate its redshift distributions with three different methods: the directional neighborhood fitting (DNF) algorithm, which is our primary photo-z estimator; direct calibration with spectroscopic redshifts from VIPERS; and clustering redshifts using SDSS galaxies. The fiducial redshift distributions combine these three techniques by modifying the mean and width of the DNF distributions to match those from VIPERS and clustering redshifts. We also describe the methodology used to mitigate the effect of observational systematics, which is analogous to the one used in the Y3 analysis. This paper is one of two dedicated to the analysis of the BAO signal in DES Y6. In its companion paper, we present the angular diameter distance constraints obtained from fitting the BAO scale.
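As an illustration of the sample selection, the sliding magnitude cut i < 19.64 + 2.894 zph together with the i = 22.5 mask depth can be sketched as a simple membership test; note the real selection also involves color cuts and quality flags not reproduced here:

```python
def in_bao_sample(i_mag, z_ph, depth=22.5):
    """Simplified sketch of the DES Y6 BAO selection: the sliding cut
    i < 19.64 + 2.894*z_ph within 0.6 < z_ph < 1.2, capped at the
    i = 22.5 mask depth. Color cuts and quality flags are omitted."""
    if not (0.6 < z_ph < 1.2):
        return False
    return i_mag < min(19.64 + 2.894 * z_ph, depth)

print(in_bao_sample(21.0, 0.8))  # True: 21.0 < 21.955
print(in_bao_sample(22.2, 0.8))  # False: fails the sliding cut
print(in_bao_sample(22.6, 1.1))  # False: beyond the i = 22.5 depth
```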
Core-collapse supernovae are driven by neutrinos. Coherent neutrino-neutrino forward scattering leads to flavor conversion phenomena, which are expected to have an impact on the dynamics of a supernova. Due to the complexity of the problem, a fully self-consistent treatment is not possible. In this study, I use a parameterized prescription designed to maximize flavor conversion and thereby infer the maximum impact of flavor conversions.
Abstract
In the course of the planned upgrade of the Large Hadron Collider at CERN to higher interaction rates, the detectors and electronics of the experiments located there will also be exchanged and replaced by more powerful components. One of these experiments is the ATLAS experiment, which underwent, among other things, an upgrade of the inner forward muon spectrometer in the first upgrade period (Phase 1). The Small Wheel located there was replaced by the New Small Wheel (NSW), which consists of two detector technologies: the small-strip Thin Gap Chambers (sTGCs) and the MICRO-MEsh GAseous Structure (Micromegas) detectors. [...]
The goal of this white paper is to provide a snapshot of the data availability and data needs, primarily for the Ariel space mission but also for related atmospheric studies of exoplanets and cool stars. It covers the following data-related topics: molecular and atomic line lists, line profiles, computed cross-sections and opacities, collision-induced absorption and other continuum data, optical properties of aerosols and surfaces, atmospheric chemistry, UV photodissociation and photoabsorption cross-sections, and standards in the description and format of such data. These data aspects are discussed by addressing the following questions for each topic, based on the experience of the 'data-provider' and 'data-user' communities: (1) what are the types and sources of currently available data, (2) what work is currently in progress, and (3) what are the current and anticipated data needs. We present a GitHub platform for Ariel-related data, with the goal of providing a go-to place for both data users and data providers: for users to make requests for their data needs, and for data providers to link to their available data. Our aim throughout the paper is to provide practical information on existing sources of data, whether in databases, theoretical sources, or the literature.
With the rapid development of urban areas and social economies, safety accidents during the construction of new chemical plants have caused huge losses to cities. The purpose of this study is to evaluate the risks in the construction process of chemical projects and to propose preventive measures. A novel risk assessment model, based on projection pursuit optimized with multiple intelligent algorithms, was developed to assess construction safety risk and determine risk levels. In this model, the best-worst method and the entropy weight method were used as the subjective and objective evaluation methods, respectively, and an approach based on the idea of a distance function was applied to calculate the combined weights. The results showed that the three evaluation objects with the highest risk values were the air compression station plant, the regional control room, and the hazardous and solid waste temporary repository, with risk values of 2.2557, 2.2160, and 2.1654, respectively, corresponding to a high risk level. On-site safety managers should take immediate measures in these high-risk buildings to reduce the possibility of accidents. This study is a new attempt to address the construction safety risk of new chemical projects.
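A minimal sketch of the objective (entropy-weight) step and of combining it with subjective weights; the entropy-weight formulas are standard, while the data matrix, the best-worst weights, and the equal-mixing parameter alpha = 0.5 are illustrative assumptions (the paper instead fixes the combination by minimizing a distance function):

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy-weight method: rows are evaluation objects,
    columns are criteria (all entries positive). Returns objective
    criterion weights that sum to 1."""
    P = X / X.sum(axis=0)                # column-normalise to proportions
    n = X.shape[0]
    logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)   # entropy per criterion
    d = 1.0 - e                               # divergence degree
    return d / d.sum()

def combine_weights(w_subj, w_obj, alpha=0.5):
    """Illustrative combination of subjective (e.g. best-worst) and
    objective (entropy) weights; alpha = 0.5 is an assumption, not the
    paper's distance-function solution."""
    w = alpha * np.asarray(w_subj) + (1.0 - alpha) * np.asarray(w_obj)
    return w / w.sum()

# Hypothetical 3 objects x 3 criteria evaluation matrix.
X = np.array([[0.9, 0.2, 0.5],
              [0.4, 0.8, 0.6],
              [0.7, 0.5, 0.9]])
w_o = entropy_weights(X)
w_c = combine_weights([0.5, 0.3, 0.2], w_o)
print(w_o, w_c)
```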
Recent asteroseismic studies have revealed that the convective cores of γ Doradus stars rotate faster than their radiative interiors. We study the development of differential rotation near the convective core to test the angular momentum transport processes typically adopted in stellar evolution models. Models that only include the advection of angular momentum by meridional circulation and shear instabilities cannot reproduce current rotational constraints, irrespective of the initial conditions. The latest formulation of internal magnetic fields based on the Tayler instability is indeed able to reproduce the internal rotation rates of post-main-sequence stars; however, it appears too efficient during the main sequence and has thus been disfavoured. A less efficient version of the same transport process can simultaneously reproduce the rotation rate of the convective core, the rotation rate in the radiative regions as probed by gravity modes, and the surface rotational velocities of γ Doradus stars. Our work suggests that additional physical processes, apart from internal magnetic fields, are at work in the interiors of post-main-sequence stars.
The core collapse of massive stars and the mergers of neutron star binaries are among the most promising candidate sites for the production of high-energy cosmic neutrinos. We demonstrate that high-energy neutrinos produced in such extreme environments can experience efficient flavor conversions on scales much shorter than those expected in vacuum, due to their coherent forward scattering with the bath of decohered low-energy neutrinos emitted from the central engine. These low-energy neutrinos, which exist as mass eigenstates, provide a very special and peculiar dominant background for the propagation of the high-energy ones. We point out that the high-energy neutrino flavor ratio is modified to a value independent of the neutrino energy, which is distinct from the conventional prediction including the matter effect. We also suggest that these signals can be used as a novel probe of new neutrino interactions beyond the Standard Model. This is yet another context in which neutrino-neutrino interactions can play a crucial role in flavor evolution.
When QCD is described by a nonrelativistic effective field theory, operators consisting of gluonic correlators of two chromoelectric or chromomagnetic fields often appear in descriptions of quarkonium physics. At zero temperature, these correlators give the masses of gluelumps, and their moments can be used to understand the inclusive P-wave decay of quarkonium. At finite temperature, these correlators govern the diffusion of heavy quarkonium. However, they come with a term that diverges with the lattice spacing, which needs to be taken care of. We inspect these correlators in pure gauge theory with gradient-flow smearing, which should allow us to reduce and remove the divergence in a more controlled way. In these proceedings, we focus on the effect of gradient flow on these correlators and on the reduction of this divergence.
We introduce our novel Bayesian parton density determination code, PartonDensity.jl. The motivation for this new code, its framework, and its validation are described. As we show, PartonDensity.jl provides both a flexible environment for the determination of parton densities and a wealth of information about the knowledge update provided by the analyzed data set.
The streaming instability, a promising mechanism for driving planetesimal formation in dusty protoplanetary discs, relies on the aerodynamic drag naturally induced by the background radial pressure gradient. This gradient should vary between discs, but its effect on the streaming instability has not been sufficiently explored. For this purpose, we use numerical simulations of an unstratified disc to study the non-linear saturation of the streaming instability with mono-disperse dust particles, surveying a wide range of gradients for two distinct combinations of the particle stopping time and the dust-to-gas mass ratio. As the gradient increases, most kinematic and morphological properties increase, but not always in linear proportion. The density distributions of tightly coupled particles are insensitive to the gradient, whereas marginally coupled particles tend to concentrate by more than an order of magnitude as the gradient decreases. Moreover, dust-gas vortices for tightly coupled particles shrink as the gradient decreases, and we note that higher resolutions are required to trigger the instability in this case. In addition, we find that various properties at saturation that depend on the gradient may be observable and may help reconstruct models of observed discs dominated by streaming turbulence. In general, the increased dust diffusion from stronger gradients can lower the concentration of dust filaments, which can explain the higher solid abundances needed to trigger strong particle clumping and the reduced planetesimal-formation efficiency previously found in vertically stratified simulations.
We discuss F-theory backgrounds associated to flat torus bundles over Ricci-flat manifolds. In this setting, the F-theory background can be understood as a IIB orientifold whose large-radius limit is described by a supersymmetric compactification of IIB supergravity on a smooth, Ricci-flat, but in general non-spin geometry. When compactified on an additional circle, these backgrounds are T-dual to IIA compactifications on smooth non-orientable manifolds with a Pin− structure.
We analyze TYPHOON long-slit absorption-line spectra of the starburst barred spiral galaxy NGC 1365 obtained with the Progressive Integral Step Method, covering an area of 15 kpc². Applying a population synthesis technique, we determine the spatial distribution of ages and metallicities of the young and old stellar populations, together with star formation rates, reddening, extinction, and the ratio RV of extinction to reddening. We detect a clear indication of inside-out growth of the stellar disk beyond 3 kpc, characterized by an outward-increasing luminosity fraction of the young stellar population, a decreasing average age, and a history of mass growth that finished 2 Gyr later in the outermost disk. The metallicity of the young stellar population is clearly super-solar but decreases toward larger galactocentric radii with a gradient of −0.02 dex kpc⁻¹. On the other hand, the metal content of the old population does not show a gradient and stays constant at a level roughly 0.4 dex lower than that of the young population. In the center of NGC 1365, we find a confined region where the metallicity of the young population drops dramatically and becomes lower than that of the old population. We attribute this to the infall of metal-poor gas and, additionally, to interrupted chemical evolution, in which star formation is stopped by active galactic nucleus and supernova feedback and then, after several gigayears, resumes with gas ejected by stellar winds from earlier generations of stars. We provide a simple model calculation in support of the latter scenario.
The existence of a nucleon-ϕ (N-ϕ) bound state has been the subject of theoretical and experimental investigations for decades. In this Letter, an indication of a p-ϕ bound state is found, using for the first time two-particle correlation functions as an alternative to invariant-mass spectra. Newly available lattice calculations of the spin-3/2 N-ϕ interaction by the HAL QCD Collaboration are used to constrain the spin-1/2 counterpart from a fit to the experimental p-ϕ correlation function measured by ALICE. The resulting scattering length and effective range are f0(1/2) = [(−1.54 ± 0.53 (stat.) +0.16/−0.09 (syst.)) + i·(0.00 +0.35/−0.00 (stat.) +0.16/−0.00 (syst.))] fm and d0(1/2) = [(0.39 ± 0.09 (stat.) +0.02/−0.03 (syst.)) + i·(0.00 +0.00/−0.04 (stat.) +0.00/−0.02 (syst.))] fm, respectively. The results imply the existence of a p-ϕ bound state with an estimated binding energy in the range 12.8-56.1 MeV.
The nuclear equation of state (EOS) is at the center of numerous theoretical and experimental efforts in nuclear physics. With advances in microscopic theories for nuclear interactions, the availability of experiments probing nuclear matter under conditions not reached before, endeavors to develop sophisticated and reliable transport simulations to interpret these experiments, and the advent of multi-messenger astronomy, the next decade will bring new opportunities for determining the nuclear matter EOS, elucidating its dependence on density, temperature, and isospin asymmetry. Among controlled terrestrial experiments, collisions of heavy nuclei at intermediate beam energies (from a few tens of MeV/nucleon to about 25 GeV/nucleon in the fixed-target frame) probe the widest ranges of baryon density and temperature, enabling studies of nuclear matter from a few tenths to about 5 times the nuclear saturation density and for temperatures from a few to well above a hundred MeV, respectively. Collisions of neutron-rich isotopes further bring the opportunity to probe effects due to the isospin asymmetry. However, capitalizing on the enormous scientific effort aimed at uncovering the dense nuclear matter EOS, both at RHIC and at FRIB as well as at other international facilities, depends on the continued development of state-of-the-art hadronic transport simulations. This white paper highlights the essential role that heavy-ion collision experiments and hadronic transport simulations play in understanding strong interactions in dense nuclear matter, with an emphasis on how these efforts can be used together with microscopic approaches and neutron star studies to uncover the nuclear EOS.
We review the current status of astrophysical bounds on QCD axions, primarily based on the observational effects of nonstandard energy losses in stars, including black-hole superradiance. Over the past few years, many of the traditional arguments have been reexamined, both theoretically and using modern data, and new ideas have been put forth. This compact review updates similar Lecture Notes written by one of us in 2006 [Lect. Notes Phys. 741 (2008) 51-71].
The direct detection of core-collapse supernova (SN) progenitor stars is a powerful way of probing the last stages of stellar evolution. However, detections in archival Hubble Space Telescope images are limited to about one per year. Here, we explore whether we can increase the detection rate by using data from ground-based wide-field surveys. Due to crowding and atmospheric blurring, progenitor stars can typically not be identified in preexplosion images alone. Instead, we combine many pre-SN and late-time images to search for the disappearance of the progenitor star. As a proof of concept, we implement our search on Zwicky Transient Facility (ZTF) data. For a few hundred images, we achieve limiting magnitudes of ~23 mag in the g and r bands. However, no progenitor stars or long-lived outbursts are detected for 29 SNe at z ≤ 0.01, and the ZTF limits are typically several magnitudes less constraining than the progenitor detections reported in the literature. Next, we estimate progenitor detection rates for the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory by simulating a population of nearby SNe. The background from bright host galaxies reduces the nominal LSST sensitivity by, on average, 0.4 mag. Over the 10 yr survey, we expect the detection of ~50 red supergiant progenitors and several yellow and blue supergiants. The progenitors of Type Ib and Ic SNe will be detectable if they are brighter than −4.7 or −4.0 mag in the LSST i band, respectively. In addition, we expect the detection of hundreds of pre-SN outbursts, depending on their brightness and duration.
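To see why ground-based limits of ~23 mag rarely reach progenitors, one can apply the distance modulus m = M + 5 log10(d/10 pc); the 20 Mpc distance below is an illustrative choice within the z ≲ 0.01 search volume, not a value from the paper:

```python
import math

def apparent_mag(M_abs, d_mpc):
    """Distance modulus: m = M + 5*log10(d / 10 pc), with d in Mpc."""
    return M_abs + 5.0 * math.log10(d_mpc * 1e6 / 10.0)

# A stripped-envelope progenitor at the abstract's LSST i-band limit,
# M = -4.7, placed at a hypothetical 20 Mpc:
m = apparent_mag(-4.7, 20.0)
print(round(m, 1))  # ~26.8 mag, far fainter than ~23 mag stacked limits
```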
We present the Cardinal mock galaxy catalogs, a new version of the Buzzard simulation that has been updated to support ongoing and future cosmological surveys, including the Dark Energy Survey (DES), DESI, and LSST. These catalogs are based on a one-quarter-sky simulation populated with galaxies out to a redshift of z = 2.35 and to a depth of mr = 27. Compared to the Buzzard mocks, the Cardinal mocks include an updated subhalo abundance matching model that accounts for orphan galaxies and includes mass-dependent scatter between galaxy luminosity and halo properties. This model can simultaneously fit galaxy clustering and group-galaxy cross-correlations measured in three different luminosity-threshold samples. The Cardinal mocks also feature a new color assignment model that can simultaneously fit color-dependent galaxy clustering in three different luminosity bins. We have developed an algorithm that uses photometric data to further improve the color assignment model, as well as a novel method to improve small-scale lensing below the ray-tracing resolution. These improvements enable the Cardinal mocks to accurately reproduce the abundance of galaxy clusters and the properties of lens galaxies in the DES data. As such, these simulations will be a valuable tool for future cosmological analyses based on large sky surveys.
The J-region asymptotic giant branch (JAGB) method is a standard candle that leverages the constant luminosities of color-selected, carbon-rich AGB stars, measured in the near-infrared at 1.2 μm. The Chicago-Carnegie Hubble Program has obtained JWST imaging of the SN Ia host galaxies NGC 7250, NGC 4536, and NGC 3972. With these observations, the JAGB method can be studied for the first time using JWST. Lee et al. demonstrated that the JAGB magnitude is optimally measured in the outer disks of galaxies, because in the inner regions the JAGB magnitude can vary significantly due to a confluence of reddening, blending, and crowding effects. However, determining where the "outer disk" lies can be subjective. Therefore, we introduce a novel method for systematically selecting the outer disk: in a given galaxy, the JAGB magnitude is first measured separately in concentric regions, and the "outer disk" is then defined as the first radial bin where the JAGB magnitude stabilizes to within a few hundredths of a magnitude. After successfully employing this method in our JWST galaxy sample, we find that the JAGB stars are well segregated from other stellar populations in color-magnitude space and have observed dispersions about their individual F115W modes of σ(N7250) = 0.32 mag, σ(N4536) = 0.34 mag, and σ(N3972) = 0.35 mag. These measured dispersions are similar to the scatter measured for the JAGB stars in the LMC using 2MASS data (σ = 0.33 mag). In conclusion, the JAGB stars as observed with JWST clearly demonstrate their considerable power both as high-precision extragalactic distance indicators and as SN Ia calibrators.
In dense neutrino environments like core-collapse supernovae (CCSNe) and neutron star mergers (NSMs), neutrinos can undergo fast flavor conversions (FFC) when the angular distribution of their neutrino electron lepton number (νELN) crosses zero along some direction. While previous studies have demonstrated the detection of axisymmetric νELN crossings in these extreme environments, non-axisymmetric crossings have remained elusive, mostly due to the absence of models of their angular distributions. In this study, we present a pioneering analysis of the detection of non-axisymmetric νELN crossings using machine learning (ML) techniques. Our ML models are trained on data from two CCSN simulations, one with rotation and one without, in which non-axisymmetric features in the neutrino angular distributions play a crucial role. We demonstrate that our ML models achieve detection accuracies exceeding 90%. This is an important improvement, especially considering that a significant portion of the νELN crossings in these models eluded detection by earlier methods.
Dark matter (DM) particles captured by neutron stars deposit their energy as heat. This DM heating effect can be observed only if it dominates over other internal heating effects in neutron stars. In this work, as an example of such an internal heating source, we consider the frictional heating caused by the creep motion of neutron-superfluid vortex lines in the neutron star crust. The luminosity of this heating effect is controlled by the strength of the interaction between the vortex lines and nuclei in the crust, which can be estimated both from many-body calculations of high-density nuclear systems and from temperature observations of old neutron stars. We show that both the temperature observations and the theoretical calculations suggest that vortex creep heating dominates over DM heating. To overturn this conclusion, the vortex-nuclei interaction would have to be smaller than the estimated values by several orders of magnitude.
When hypothetical neutrino secret interactions (νSI) are large, neutrinos form a fluid in a supernova (SN) core, flow out at sonic speed, and stream away as a fireball. For the first time, we tackle the complete dynamical problem and solve all steps systematically using relativistic hydrodynamics. The impact on SN physics and the neutrino signal is remarkably small. For complete thermalization within the fireball, the observable spectrum changes in a way that is independent of the coupling strength. One potentially large effect beyond our study is quick deleptonization if νSI violate lepton number. By present evidence, however, SN physics leaves open a large region of parameter space where laboratory searches and future high-energy neutrino telescopes will probe νSI.
Neutrino-neutrino scattering could have a large secret component that would turn the neutrinos within a supernova (SN) core into a self-coupled fluid. Neutrino transport within the SN core, emission from its surface, expansion into space, and the flux spectrum and time structure at Earth might all be affected. We examine these questions from first principles. First, diffusive transport differs only by a modified spectral average of the interaction rate. We next study the fluid energy transfer between a hot and a cold blackbody surface in plane-parallel and spherical geometry. The key element is the decoupling process within the radiating bodies, which themselves are taken to be isothermal. For a zero-temperature cold plate, mimicking radiation into free space by the hot plate, the energy flux is 3%-4% smaller than the usual Stefan-Boltzmann law. The fluid energy density just outside the hot plate is numerically 0.70 of the standard case, and the outflow velocity is the speed of sound vs = c/√3, conspiring to leave the energy flux nearly unchanged. Our results provide the crucial boundary condition for the expansion of the self-interacting fluid into space, assuming an isothermal neutrino sphere. We also derive a dynamical solution, assuming the emission suddenly begins at some instant. A neutrino front expands into space at luminal speed, whereas the outflow velocity at the radiating surface asymptotically approaches vs from above. Asymptotically, one thus recovers the steady-state emission found in the two-plate model. A sudden end to neutrino emission leads to a fireball of constant thickness equal to the duration of neutrino emission.
Compact binary mergers forming in star clusters may exhibit distinctive features that can be used to identify them among observed gravitational-wave (GW) sources. Such features likely depend on the host cluster structure and the physics of massive star evolution. Here, we dissect the population of compact binary mergers in the DRAGON-II simulation database, a suite of 19 direct N-body models representing dense star clusters with up to 10⁶ stars and <33% of stars in primordial binaries. We find a substantial population of black hole binary (BBH) mergers, some of them involving an intermediate-mass BH (IMBH), and a handful of mergers involving a stellar BH and either a neutron star (NS) or a white dwarf (WD). Primordial binary mergers, ~30% of the whole population, dominate ejected mergers. Dynamical mergers, instead, dominate the population of in-cluster mergers and are systematically heavier than primordial ones. Around 20% of DRAGON-II mergers are eccentric in the LISA band and 5% in the LIGO band. We infer a mean cosmic merger rate of R ~ 30 (4.4) (1.2) yr⁻¹ Gpc⁻³ for BBH (NS-BH, WD-BH) binary mergers, respectively, and discuss the prospects for multimessenger detection of WD-BH binaries with LISA. We model the rate of pair-instability supernovae (PISNe) in star clusters and find that surveys with a limiting magnitude mbol = 25 can detect ~1-15 PISNe per year. Comparing these estimates with future observations could help pin down the impact of massive star evolution on the mass spectrum of compact stellar objects in star clusters.
We present the data release of the Uchuu-SDSS galaxies: a set of 32 high-fidelity galaxy lightcones constructed from the large Uchuu 2.1-trillion-particle N-body simulation using Planck cosmology. We adopt subhalo abundance matching to populate the Uchuu box halo catalogues with SDSS galaxy luminosities. These box catalogues, generated at several redshifts, are combined to create a set of lightcones with redshift-evolving galaxy properties. The Uchuu-SDSS galaxy lightcones are built to reproduce the footprint and statistical properties of the SDSS main galaxy survey, along with stellar masses and star formation rates. This facilitates a direct comparison of the observed SDSS and simulated Uchuu-SDSS data. Our lightcones reproduce a large number of observational results, such as the distribution of galaxy properties, galaxy clustering, stellar mass functions, and halo occupation distributions. Using simulated and real data, we select samples of bright red galaxies at zeff = 0.15 to explore redshift-space distortions and baryon acoustic oscillations (BAO) by fitting the full two-point correlation function and the BAO peak. We create a set of 5,100 galaxy lightcones using GLAM N-body simulations to compute covariance errors. We report a ~30% increase in precision on fσ8 and the pre-reconstruction BAO scale, due to our better estimate of the covariance matrix. From our BAO-inferred α∥ and α⊥ parameters, we obtain the first SDSS measurements of the Hubble and angular diameter distances: DH(z=0.15)/rd = 27.9 +3.1/−2.7 and DM(z=0.15)/rd = 5.1 ± 0.4. Overall, we conclude that the Planck ΛCDM cosmology nicely explains the observed large-scale structure statistics of SDSS. All data sets are made publicly available.
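Under an assumed sound-horizon scale, the quoted DH/rd ratio translates directly into an expansion rate via DH = c/H(z); the Planck-like rd = 147.1 Mpc below is an assumption, not part of the measurement:

```python
C_KM_S = 299792.458  # speed of light, km/s

# Quoted central value: D_H(z=0.15)/r_d = 27.9 (+3.1/-2.7).
dh_over_rd = 27.9
# Assumed sound-horizon scale (a Planck-like value, not from the abstract):
r_d_mpc = 147.1

d_h = dh_over_rd * r_d_mpc  # Hubble distance D_H at z = 0.15, in Mpc
h_z = C_KM_S / d_h          # H(z=0.15) in km/s/Mpc, since D_H = c / H(z)
print(round(h_z, 1))        # ~73 km/s/Mpc for these inputs
```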
Long-lived radioactive by-products of nucleosynthesis provide an opportunity to trace the flow of ejecta away from its sources for times beyond which the ejecta can be seen otherwise. Gamma rays from such radioactive decay in interstellar space can be measured with space-borne telescopes. A prominent example is 26Al, with a radioactive decay time of one Myr. Such observations have revealed that the typical surroundings of massive stars consist of large cavities, extending to kpc sizes. The implication is that material recycling into new stars is twofold: rather direct, as parental clouds host new star formation triggered by feedback; and more indirect, as these large cavities merge with the ambient interstellar gas after some delay. Kinematic measurements of hot interstellar gas carrying such ejecta promise important constraints complementing stellar and dense-gas kinematics.
The application of Graph Neural Networks (GNN) in track reconstruction is a promising approach to cope with the challenges arising at the High-Luminosity upgrade of the Large Hadron Collider (HL-LHC). GNNs show good track-finding performance in high-multiplicity scenarios and are naturally parallelizable on heterogeneous compute architectures. Typical high-energy-physics detectors have high resolution in the innermost layers to support vertex reconstruction but lower resolution in the outer parts. GNNs mainly rely on 3D space-point information, which can cause reduced track-finding performance in the outer regions. In this contribution, we present a novel combination of GNN-based track finding with the classical Combinatorial Kalman Filter (CKF) algorithm to circumvent this issue: the GNN resolves the track candidates in the inner pixel region, where 3D space points can represent measurements very well. These candidates are then picked up by the CKF in the outer regions, where the CKF performs well even for 1D measurements. Using the ACTS infrastructure, we present a proof of concept based on truth tracking in the pixels, as well as a dedicated GNN pipeline trained on $t\bar{t}$ events with a pile-up of 200 in the OpenDataDetector.
We compute the static force on the lattice in the quenched case directly through generalized Wilson loops. We modify the Wilson loop by inserting an $E$-field component on one of the temporal Wilson lines. However, the chromo-field components prevent us from taking the continuum limit properly; hence, we use gradient flow to renormalize the field insertion. As a result, we obtain continuum results and compare them to the perturbative expression to extract $\Lambda_0$, predicting the value $\sqrt{8t_0} \Lambda_{\overline{\textrm{MS}}}^{n_f=0} = 0.629^{+22}_{-26}$. This work serves as preparation for similar operators with field insertions required in nonrelativistic effective field theories.
Context. The study of protoplanetary disks is fundamental to understanding their evolution and interaction with the surrounding environment, and to constraining planet formation mechanisms.
Aims: We aim to characterise the young binary system HD 34700 A, which shows a wealth of structures.
Methods: Taking advantage of the high-contrast imaging instruments SPHERE at the VLT and LMIRCam at the LBT, together with ALMA observations, we analyse this system at multiple wavelengths. We study the morphology of the rings and spiral arms and the scattering properties of the dust. We discuss the possible causes of all the observed features.
Results: We detect for the first time, in the Hα band, a ring extending from ~65 au to ~120 au, inside the ring already known from recent studies. The two rings have different physical and geometrical properties. Based on the scattering properties, the outer ring may consist of grains with a typical size of aout ≥ 4 µm, while the inner ring has a smaller typical grain size of ain ≤ 0.4 µm. Two extended logarithmic spiral arms stem from opposite sides of the disk. The outer ring appears to be a spiral arm itself, with a variable radial distance from the centre and extended substructures. ALMA data confirm the presence of a millimetric dust substructure centred just outside the outer ring and detect misaligned gas rotation patterns for HD 34700 A and B.
Conclusions: The complexity of HD 34700 A, revealed by the variety of observed features, suggests the existence of one or more disk-shaping physical mechanisms. Our findings are compatible with the presence of an as-yet-undetected planet of several Jupiter masses inside the disk and with the system's interaction with its surroundings, by means of gas cloudlet capture or flybys. Further observations with JWST/MIRI or ALMA (gas kinematics) could shed more light on these mechanisms.
When a galaxy falls into a cluster, its outermost parts are the most affected by the environment. In this paper, we study the influence of a dense environment on the different components of galaxies to better understand how it affects galaxy evolution. As a laboratory for this study, we use the Hydra cluster, which is close to virialization yet still shows evidence of substructures. We present a multiwavelength bulge-disc decomposition performed simultaneously in 12 bands from S-PLUS (Southern Photometric Local Universe Survey) data for 52 galaxies brighter than mr = 16. We model the galaxies with a Sérsic profile for the bulge and an exponential profile for the disc. We find that the smaller, more compact, and bulge-dominated galaxies tend to exhibit redder colours at fixed stellar mass. This suggests that the same mechanisms (ram-pressure and tidal stripping) that are causing the compaction in these galaxies are also causing them to stop forming stars. The bulge size is unrelated to the galaxy's stellar mass, while the disc size increases with stellar mass, indicating the dominant role of the disc in the overall galaxy mass-size relation we find. Furthermore, our analysis of the environment reveals that quenched galaxies are prevalent in regions likely associated with substructures. However, these areas also harbour a minority of star-forming galaxies, primarily resulting from galaxy interactions. Lastly, we find that ~37 per cent of the galaxies exhibit bulges that are bluer than their discs, indicative of an outside-in quenching process in this type of dense environment.
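The decomposition above models each galaxy as the sum of a Sérsic bulge and an exponential disc. As a minimal illustration of these two standard profile components (the parameter values below are made up for demonstration, not fitted values from the study, and `sersic`/`exponential_disc` are hypothetical helper names):

```python
import numpy as np
from scipy.special import gammaincinv

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile I(r). The factor b_n is chosen so
    that the effective radius r_e encloses half of the total light."""
    b_n = gammaincinv(2 * n, 0.5)
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def exponential_disc(r, I_0, h):
    """Exponential disc profile (a Sersic profile with n = 1), with central
    intensity I_0 and scale length h."""
    return I_0 * np.exp(-r / h)

# Illustrative composite bulge + disc model on a radial grid (arbitrary units).
r = np.linspace(0.1, 20.0, 200)
total = sersic(r, I_e=5.0, r_e=1.5, n=4) + exponential_disc(r, I_0=10.0, h=4.0)
```

By construction, the Sérsic profile evaluates to I_e at r = r_e, which is a quick sanity check on the b_n normalization.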
We present the first multi-wavelength study of Mrk 501 including very-high-energy (VHE) gamma-ray observations simultaneous with X-ray polarization measurements from the Imaging X-ray Polarimetry Explorer (IXPE). We use radio-to-VHE data from a multi-wavelength campaign organized between 2022-03-01 and 2022-07-19. The observations were performed by MAGIC, Fermi-LAT, NuSTAR, Swift (XRT and UVOT), and several instruments covering the optical and radio bands. During the IXPE pointings, the VHE state is close to the average behavior, with a 0.2-1 TeV flux of 20%-50% of the Crab Nebula emission. Despite the average VHE activity, extreme X-ray behavior is measured during the first two IXPE pointings in March 2022, with a synchrotron peak frequency >1 keV. For the third IXPE pointing in July 2022, the synchrotron peak shifts towards lower energies and the optical/X-ray polarization degrees drop. The X-ray polarization degree is systematically higher than at lower energies, suggesting an energy stratification of the jet. While the polarization angles in the X-ray, optical, and radio bands align well during the IXPE epochs, we find a clear discrepancy between the optical and radio polarization angles in the middle of the campaign. We model the broad-band spectra simultaneous with the IXPE pointings assuming a compact zone that dominates in the X-rays and at VHE, and an extended zone stretching farther downstream in the jet that dominates the emission at lower energies. NuSTAR data allow us to precisely constrain the synchrotron peak and therefore the underlying electron distribution. The change between the different states observed in the three IXPE pointings can be explained by a change in magnetization and/or emission-region size, which directly connects the shift of the synchrotron peak to lower energies with the drop in polarization degree.
We present a Lagrangian approach to counting degrees of freedom in first-order field theories. The emphasis is on the systematic attainment of a complete set of constraints. In particular, we provide the first comprehensive procedure to ensure the functional independence of all constraints and discuss in detail the possible closures of the constraint algorithm. We argue that degrees of freedom can, but need not, correspond to physical modes. The appendix comprises fully worked-out, physically relevant examples of varying complexity.
J191213.72-441045.1 is a binary system composed of a white dwarf and an M dwarf in a 4.03-h orbit. It shows emission in the radio, optical, and X-rays, all modulated at the white dwarf spin period of 5.3 min, as well as at various orbital sideband frequencies. As in AR Scorpii, the prototype of the class of radio-pulsing white dwarfs, the observed pulsed emission seems to be driven by the binary interaction. In this work, we present an analysis of far-ultraviolet spectra obtained with the Cosmic Origins Spectrograph at the Hubble Space Telescope, in which we directly detect the white dwarf in J191213.72-441045.1. We find that the white dwarf has a temperature of Teff = 11485 ± 90 K and a mass of 0.59 ± 0.05 M⊙. We place a tentative upper limit on the magnetic field of ≈50 MG. If the white dwarf is in thermal equilibrium, its physical parameters would imply that crystallization has not started in its core. Alternatively, the effective temperature could have been affected by compressional heating, indicating a past phase of accretion. The relatively low upper limit on the magnetic field and the potential lack of crystallization that could generate a strong field pose challenges to pulsar-like models for the system and give preference to propeller models with a low magnetic field. We also develop a geometric model of the binary interaction that explains many salient features of the system.
Context. Elongated trails of infalling gas, often referred to as "streamers," have recently been observed around young stellar objects (YSOs) at different evolutionary stages. This asymmetric infall of material can significantly alter star and planet formation processes, especially in the more evolved YSOs. Aims. In order to ascertain the infalling nature of observed streamer-like structures and then systematically characterize their dynamics, we developed the code TIPSY (Trajectory of Infalling Particles in Streamers around Young stars). Methods. Using TIPSY, the streamer molecular line emission is first isolated from the disk emission. Then the streamer emission, which is effectively a point cloud in three-dimensional (3D) position-position-velocity space, is simplified to a curve-like representation. The observed streamer curve is then compared to the theoretical trajectories of infalling material. The best-fit trajectories are used to constrain streamer features, such as the specific energy, the specific angular momenta, the infall timescale, and the 3D morphology. Results. We used TIPSY to fit molecular-line ALMA observations of streamers around a Class II binary system, S CrA, and a Class I/II protostar, HL Tau. Our results indicate that both of the streamers are consistent with infalling motion. TIPSY results and mass estimates suggest that S CrA and HL Tau are accreting material at a rate of $\gtrsim27$ M$_{jupiter}$ Myr$^{-1}$ and $\gtrsim5$ M$_{jupiter}$ Myr$^{-1}$, respectively, which can significantly increase the mass budget available to form planets. Conclusions. TIPSY can be used to assess whether the morphology and kinematics of observed streamers are consistent with infalling motion and to characterize their dynamics, which is crucial for quantifying their impact on the protostellar systems.
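The trajectory fitting described above rests on the conserved quantities of ballistic infall around a point mass: from a single position-velocity point one can already read off the specific orbital energy and angular momentum that TIPSY constrains. A minimal sketch under that point-mass assumption (illustrative numbers; this is not TIPSY's actual implementation):

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def specific_orbital_elements(pos, vel, m_star):
    """Specific energy (J/kg) and specific angular momentum (m^2/s) of a
    ballistic test particle at position `pos` (m) with velocity `vel` (m/s)
    around a point mass m_star (kg). E < 0 means a bound orbit; E close to
    zero corresponds to near-parabolic infall."""
    r = np.linalg.norm(pos)
    energy = 0.5 * np.dot(vel, vel) - G * m_star / r
    ang_mom = np.linalg.norm(np.cross(pos, vel))
    return energy, ang_mom

# Illustrative: a gas parcel 1000 au from a solar-mass star, falling inward
# at ~1 km/s with a small tangential component.
E, L = specific_orbital_elements(np.array([1000 * AU, 0.0, 0.0]),
                                 np.array([-1.0e3, 0.2e3, 0.0]), M_SUN)
```

For these numbers the energy comes out negative (bound infall) with nonzero angular momentum, the regime in which material spirals onto the disk rather than falling in radially.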
We compute the planar three-loop Quantum Chromodynamics (QCD) corrections to the helicity amplitudes involving a vector boson $V = Z, W^\pm, \gamma^*$, two quarks and a gluon. These amplitudes are relevant to vector-boson-plus-jet production at hadron colliders and other precision QCD observables. The planar corrections encompass the leading colour factors $N^3$, $N^2 N_f$, $N N_f^2$ and $N_f^3$. We provide the finite remainders of the independent helicity amplitudes in terms of multiple polylogarithms, continued to all kinematic regions and in a form which is compact and lends itself to efficient numerical evaluation. The presented amplitude respects the conjectured symbol-adjacency constraints for amplitudes with three massless legs and one massive leg.
The eROSITA instrument aboard the Spectrum Roentgen Gamma (SRG) satellite performed its first all-sky survey between December 2019 and June 2020. This paper presents the resulting hard X-ray (2.3-5 keV) sample, the first created from an all-sky imaging survey in the 2-8 keV band, for sources in the western Galactic hemisphere. The 5466 hard-X-ray-selected sources detected with eROSITA are presented and discussed. The Bayesian-statistics-based code NWAY is used to identify the counterparts of the X-ray sources. These sources are classified based on their multiwavelength properties, and the literature is searched for spectroscopic redshifts, which further inform the source classification. A total of 2547 sources are found to have good-quality counterparts, and 111 of these are detected only in the hard band. Compared with other hard-X-ray-selected surveys, the eROSITA hard sample covers a larger redshift range and probes fainter sources, providing a complementary and expanded sample relative to Swift-BAT. Examining the column-density distribution of missed and detected eROSITA sources present in the Swift-BAT 70-month follow-up catalog, it is demonstrated that eROSITA can detect obscured sources with column densities $>10^{24}$ cm$^{-2}$, but that the completeness drops rapidly above $10^{23}$ cm$^{-2}$. A sample of hard-only sources, many of which are likely heavily obscured AGN, is also presented and discussed. X-ray spectral fitting reveals that these sources have extremely faint soft X-ray emission, and their optical images suggest that they reside in more edge-on galaxies with lower b/a. The resulting X-ray catalog is demonstrated to be a powerful tool for understanding AGN, in particular the heavily obscured AGN found in the hard-only sample.
Cold dark matter axions produced in the post-inflationary Peccei-Quinn symmetry breaking scenario serve as clear targets for experimental detection, since it is in principle possible to give a sharp prediction for their mass once we understand precisely how they are produced from the decay of global cosmic strings in the early Universe. In this paper, we perform a dedicated analysis of the spectrum of axions radiated from strings, based on large-scale numerical simulations of the cosmological evolution of the Peccei-Quinn field on a static lattice. Making full use of a massively parallel code and computing resources, we executed the simulations with up to $11264^3$ lattice sites, which allows us to improve our understanding of the dependence on the parameter controlling the string tension and thus give a more accurate extrapolation of the numerical results. We found several systematic effects that have been overlooked in previous works, such as the dependence on the initial conditions, contamination due to oscillations in the spectrum, and discretisation effects, some of which could explain the discrepancies in the literature. We confirmed the trend that the spectral index of the axion emission spectrum increases with the string tension, but did not find clear evidence of whether it continues to increase or saturates to a constant at larger values of the string tension, owing to the severe discretisation effects. Taking this uncertainty into account and performing the extrapolation with a simple power-law assumption for the spectrum, we find that the dark matter mass is predicted to lie in the range $m_a \approx 95$-$450\,\mu\mathrm{eV}$.
Context. About 30%-40% of the baryons in the local Universe remain unobserved. Many of these "missing" baryons are expected to reside in the warm-hot intergalactic medium (WHIM) of the cosmic-web filaments that connect clusters of galaxies. SRG/eROSITA performance verification (PV) observations covered about 15 square degrees of the A3391/95 system and revealed ~15 Mpc of continuous soft emission connecting several galaxy groups and clusters.
Aims: We aim to characterize the gas properties in the cluster outskirts (R500 < r < R200) and in the detected inter-cluster filaments (> R200) and to compare them to predictions.
Methods: We performed X-ray image and spectral analyses using the eROSITA PV data in order to assess the gas morphology and properties in the outskirts and the filaments in the directions of the previously detected Northern and Southern Filaments of the A3391/95 system. We constructed surface brightness profiles using particle-induced-background-subtracted, exposure- and Galactic-absorption-corrected eROSITA products in the soft band (0.3-2.0 keV). We constrained the temperatures, metallicities, and electron densities through X-ray spectral fitting and compared them with the expected properties of the WHIM. We took particular care in modelling the foreground.
Results: In the filament-facing outskirts of A3391 and the Northern Clump, we find higher temperatures than typical cluster outskirts profiles, at a significance of between 1.6 and 2.8σ, suggesting heating due to their connections with the filaments. We confirm the surface brightness excess in the profiles of the Northern, Eastern, and Southern Filaments. From the spectral analysis, we detect hot gas of $0.96^{+0.17}_{-0.14}$ keV and $1.09^{+0.09}_{-0.06}$ keV for the Northern and Southern Filament, respectively, close to the upper WHIM temperature limit. The filament metallicities are below 10% solar, and the electron densities range between 2.6 and 6.3 × 10−5 cm−3. The characteristic properties of the Little Southern Clump (LSC), located at a distance of ~1.5R200 from A3395S in the Southern Filament, suggest that it is a small galaxy group. Excluding the LSC from the analysis of the Southern Filament does not significantly change the temperature or metallicity of the gas, but it decreases the gas density by 30%. This shows the importance of accounting for any clumps in order to avoid overestimating the gas measurements in the outskirts and filament regions.
Conclusions: We present measurements of morphology, temperature, metallicity, and density of individual warm-hot filaments. The electron densities of the filaments are consistent with the WHIM properties as predicted by cosmological simulations, but the temperatures are higher. As both filaments are short (1.8 and 2.7 Mpc) and located in a denser environment, stronger gravitational heating may be responsible for this temperature enhancement. The metallicities are low, but still within the expected range from the simulations.
The image displayed in Fig. 1 is available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/681/A108
Multiparton interactions are a fascinating phenomenon that occurs in almost every high-energy hadron-hadron collision, yet they are remarkably difficult to study quantitatively. In this Letter, we present a strategy to optimally disentangle multiparton interactions from the primary scattering in a collision. That strategy enables probes of multiparton interactions that are significantly beyond the state of the art, including their characteristic momentum scale, the interconnection between primary and secondary scatters, and the pattern of three and potentially even more simultaneous hard scatterings. This opens a path to powerful new constraints on multiparton interactions for LHC phenomenology and to the investigation of their rich field-theoretical structure.
We investigate the impact of AGN feedback on the entropy and characteristic temperature measurements of galaxy groups detected in the SRG/eROSITA first All-Sky Survey (eRASS1) to shed light on the characteristics of the feedback mechanisms. We analyze deeper eROSITA observations of 1178 galaxy groups detected in eRASS1. We divide the sample into 271 subsamples and extract average thermodynamic properties, including electron density, temperature, and entropy at three characteristic radii, along with the integrated temperature, by jointly analyzing X-ray images and spectra following a Bayesian approach. We present the tightest constraints on the impact of AGN feedback through our average entropy and characteristic temperature measurements of the largest group sample used in X-ray studies, incorporating the major systematics in our analysis. We find that entropy increases with temperature in a power-law-like relation at the higher intra-group-medium temperatures, while for the low-mass groups a slight flattening is observed in the average entropy. Overall, the observed entropy measurements agree well with earlier measurements in the literature. Comparisons with state-of-the-art cosmological hydrodynamic simulations (MillenniumTNG, Magneticum, OWL), after applying the selection function calibrated for our galaxy groups, reveal that the observed entropy profiles in the cores lie below the predictions of the simulations. In the mid-region, the entropy measurements agree well with the Magneticum simulations, whereas the predictions of the MillenniumTNG and OWL simulations fall below the observations. At the outskirts, the overall agreement between observations and simulations improves, with the Magneticum simulations reproducing the observations best. Our measurements will pave the way for more realistic AGN feedback implementations in simulations.
The 1D power spectrum P1D of the Ly α forest provides important information about cosmological and astrophysical parameters, including constraints on warm dark matter models, the sum of the masses of the three neutrino species, and the thermal state of the intergalactic medium. We present the first measurement of P1D with the quadratic maximum likelihood estimator (QMLE) from the Dark Energy Spectroscopic Instrument (DESI) survey early data sample. This early sample of 54 600 quasars is already comparable in size to the largest previous studies, and we conduct a thorough investigation of numerous instrumental and analysis systematic errors to evaluate their impact on DESI data with QMLE. We demonstrate the excellent performance of the spectroscopic pipeline noise estimation and the impressive accuracy of the spectrograph resolution matrix with 2D image simulations of raw DESI images that we processed with the DESI spectroscopic pipeline. We also study metal line contamination and noise calibration systematics with quasar spectra on the red side of the Ly α emission line. In a companion paper, we present a similar analysis based on the Fast Fourier Transform estimate of the power spectrum. We conclude with a comparison of these two approaches and discuss the key sources of systematic error that we need to address with the upcoming DESI Year 1 analysis.
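As context for the Fast Fourier Transform estimator mentioned as the companion approach, here is a deliberately naive sketch of a 1D flux power spectrum computed from a uniformly sampled flux-contrast field. It omits the noise subtraction, resolution-matrix deconvolution, and window corrections that the QMLE and the DESI pipeline handle, and `p1d_fft` is a hypothetical helper, not pipeline code:

```python
import numpy as np

def p1d_fft(delta_flux, dv):
    """Naive FFT estimate of the 1D power spectrum of the flux contrast
    delta_F = F/<F> - 1, sampled on a uniform velocity grid of spacing dv
    (km/s). Returns angular wavenumbers k (s/km) and P1D(k). Normalization
    conventions differ between analyses; this one uses |FFT|^2 * dv / n."""
    n = len(delta_flux)
    dft = np.fft.rfft(delta_flux)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)   # angular wavenumber grid
    power = (np.abs(dft) ** 2) * dv / n
    return k, power

# Toy input: white noise standing in for a real flux-contrast skewer.
rng = np.random.default_rng(0)
k, p = p1d_fft(rng.normal(0.0, 0.1, size=1024), dv=30.0)
```

Even this toy version shows why a careful noise model matters: any unsubtracted noise variance appears directly as an additive floor in the estimated power.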
The Dark Energy Spectroscopic Instrument (DESI) was designed to conduct a survey covering 14,000 deg2 over five years to constrain the cosmic expansion history through precise measurements of Baryon Acoustic Oscillations (BAO). The scientific program for DESI was evaluated during a five month Survey Validation (SV) campaign before beginning full operations. This program produced deep spectra of tens of thousands of objects from each of the stellar (MWS), bright galaxy (BGS), luminous red galaxy (LRG), emission line galaxy (ELG), and quasar target classes. These SV spectra were used to optimize redshift distributions, characterize exposure times, determine calibration procedures, and assess observational overheads for the five-year program. In this paper, we present the final target selection algorithms, redshift distributions, and projected cosmology constraints resulting from those studies. We also present a `One-Percent survey' conducted at the conclusion of Survey Validation covering 140 deg2 using the final target selection algorithms with exposures of a depth typical of the main survey. The Survey Validation indicates that DESI will be able to complete the full 14,000 deg2 program with spectroscopically-confirmed targets from the MWS, BGS, LRG, ELG, and quasar programs with total sample sizes of 7.2, 13.8, 7.46, 15.7, and 2.87 million, respectively. These samples will allow exploration of the Milky Way halo, clustering on all scales, and BAO measurements with a statistical precision of 0.28% over the redshift interval z<1.1, 0.39% over the redshift interval 1.1<z<1.9, and 0.46% over the redshift interval 1.9<z<3.5.
We investigate the potential of the low-redshift Lyman-alpha (Lyα) forest for constraining active galactic nuclei (AGN) feedback models by analyzing the Illustris and IllustrisTNG simulations at z=0.1. These simulations are ideal for studying the impact of AGN feedback on the intergalactic medium (IGM), as they share initial conditions but differ significantly in their feedback prescriptions. Both simulations reveal that the IGM is significantly impacted by AGN feedback. Specifically, feedback is stronger in Illustris and reduces the cool baryon fraction to 23%, relative to 39% in IllustrisTNG. However, when comparing various statistics of the Lyα forest, such as 2D and marginalized distributions of Doppler widths and H I column density, line density, and the flux power spectrum, with real data, we find that most of these statistics are largely insensitive to the differences in feedback models. This lack of sensitivity arises from the fundamental degeneracy between the fraction of cool baryons and the H I photoionization rate (ΓHI), as their product determines the optical depth of the Lyα forest. Since ΓHI cannot be precisely predicted from first principles, it must be treated as a nuisance parameter adjusted to match the observed Lyα line density. After adjusting ΓHI, the distinctions in the considered statistics essentially fade away. Only the Lyα flux power spectrum at small spatial scales exhibits potentially observable differences, although this may be specific to the relatively extreme feedback model employed in Illustris. Without independent constraints on either ΓHI or the cool baryon fraction, constraining AGN feedback with the low-redshift Lyα forest will be very challenging.
It has recently been proposed that at each infinite-distance limit in the moduli space of quantum gravity, a perturbative description emerges with fundamental degrees of freedom given by those infinite towers of states whose typical mass scale is parametrically not larger than the ultraviolet cutoff, identified with the species scale. This proposal is applied to the familiar ten-dimensional type IIA and IIB superstring theories in the limit of infinite string coupling. For type IIB, the light towers of states are given by excitations of the D1-brane, as expected from self-duality. Instead, for type IIA at strong coupling, which is dual to M-theory on $S^1$, we make the observation that the emergent degrees of freedom are bound states of transversal M2- and M5-branes with Kaluza-Klein momentum along the circle. We speculate on the interpretation of the necessity of including all these states for a putative quantum formulation of M-theory.
The axion-gluon interaction induces quadratic couplings between the axion and the matter fields. We find that, if the axion is an ultralight dark matter field, it induces small oscillations of the masses of the hadrons as well as of other nuclear quantities. As a result, atomic energy levels oscillate. We use currently available atomic spectroscopy data to constrain such an axion-gluon coupling. We also project the sensitivities of future experiments, such as those using molecular and nuclear clock transitions. We show that current and near-future experiments constrain a finely tuned region of axion-model parameter space. These constraints can compete with or dominate over the existing constraints from the oscillating neutron electric dipole moment and the supernova bound, in addition to those expected from near-future magnetometer-based experiments. We also briefly discuss the reach of accelerometers and interferometers.
Neutrinos in dense environments like core-collapse supernovae (CCSNe) and neutron star mergers (NSMs) can undergo fast flavor conversions (FFCs) once the angular distribution of neutrino lepton number crosses zero along a certain direction. Recent advancements have demonstrated the effectiveness of machine learning (ML) in detecting these crossings. In this study, we enhance prior research in two significant ways. First, we utilize realistic data from CCSN simulations, where neutrino transport is solved using the full Boltzmann equation. We evaluate the ML methods' adaptability in a real-world context, enhancing their robustness. In particular, we demonstrate that when working with artificial data, simpler models outperform their more complex counterparts, a noteworthy illustration of the bias-variance tradeoff in the context of ML. We also explore methods to improve artificial datasets for ML training. In addition, we extend our ML techniques to detect the crossings in the heavy-leptonic channels, accommodating scenarios where νx and ν¯x may differ. Our research highlights the extensive versatility and effectiveness of ML techniques, presenting an unparalleled opportunity to evaluate the occurrence of FFCs in CCSN and NSM simulations.
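The quantity the ML classifiers above are trained to detect is the sign structure of the neutrino lepton-number (ELN) angular distribution: a crossing exists where the distribution changes sign in angle. As a hedged, minimal illustration (not the paper's classifier, which must work from limited angular information rather than a fully resolved distribution; `has_eln_crossing` is a hypothetical helper):

```python
import numpy as np

def has_eln_crossing(G):
    """True if the neutrino-minus-antineutrino angular distribution G(mu),
    sampled on an angular grid, takes both signs -- the condition for fast
    flavor conversions. A distribution of a single sign has no crossing."""
    G = np.asarray(G)
    return bool(np.any(G > 0) and np.any(G < 0))

mu = np.linspace(-1.0, 1.0, 201)               # mu = cos(theta) grid
crossing = has_eln_crossing(0.1 - 0.3 * mu)    # changes sign at mu = 1/3
no_crossing = has_eln_crossing(0.5 + 0.1 * mu) # positive everywhere
```

The ML task is hard precisely because simulations usually store only a few angular moments of G(mu), so this direct sign check is unavailable and the crossing must be inferred.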
There is currently no evidence for a baryon asymmetry in our Universe. Instead, cosmological observations have only demonstrated the existence of a quark-antiquark asymmetry, which does not necessarily imply a baryon-asymmetric Universe, since the baryon number of the dark sector particles is unknown. In this paper, we discuss a framework in which the total baryon number of the Universe is zero and the observed quark-antiquark asymmetry arises from neutron-portal interactions with a dark-sector fermion N that carries baryon number. In order to maintain a baryon-symmetric Universe throughout the whole cosmological history, we introduce a complex scalar χ with opposite baryon number and the same initial abundance as N. Notably, due to baryon number conservation, χ is absolutely stable and could have an abundance today equal to the observed dark matter abundance. Therefore, in this simple framework, the existence of a quark-antiquark asymmetry is intimately related to the existence (and the stability) of dark matter.
Life continuously transduces energy to perform critical functions using energy stored in reactive molecules like ATP or NADH. ATP dynamically phosphorylates active sites on proteins and thereby regulates their function. Inspired by such machinery, regulating supramolecular functions using energy stored in reactive molecules has gained traction. Enzyme-free, synthetic systems that use dynamic phosphorylation to regulate supramolecular processes do not exist.
We present an enzyme-free reaction cycle that consumes phosphorylating agents by transiently phosphorylating amino acids. The phosphorylated amino acids are labile and deactivate through hydrolysis. The cycle is versatile and tunable, allowing for the dynamic phosphorylation of multiple precursors with a tunable half-life. Notably, we show that the resulting phosphorylated products can regulate the peptide's phase separation, leading to active droplets that require continuous fuel consumption to be sustained. Our new reaction cycle will be valuable as a model for biological phosphorylation and may also offer insights into protocell formation.
The last few years have seen the development of a promising theoretical framework for statistics of the cosmic large-scale structure -- the theory of large deviations (LDT) for modelling weak-lensing one-point statistics in the mildly non-linear regime. The goal of this series of papers is to make the leap and lay out the steps to perform an actual data analysis with this theoretical tool. Building upon the LDT framework, in this work (Paper I) we demonstrate how to accurately model the Probability Distribution Function (PDF) of a reconstructed Kaiser-Squires convergence field under a realistic mask, that of the third data release of the Dark Energy Survey (DES). We also present how weak lensing systematics and higher-order lensing corrections due to intrinsic alignments, shear biases, photo-z errors and baryonic feedback can be incorporated in the modelling of the reconstructed convergence PDF. In an upcoming work (Paper II) we will then demonstrate the robustness of our modelling through simulated likelihood analyses, the final step required before applying our method to actual data.
Accurate redshift calibration is required to obtain unbiased cosmological information from large-scale galaxy surveys. In a forward modelling approach, the redshift distribution n(z) of a galaxy sample is measured using a parametric galaxy population model constrained by observations. We use a model that captures the redshift evolution of the galaxy luminosity functions, colours, and morphology, for red and blue samples. We constrain this model via simulation-based inference, using factorized Approximate Bayesian Computation (ABC) at the image level. We apply this framework to HSC deep field images, complemented with photometric redshifts from COSMOS2020. The simulated telescope images include realistic observational and instrumental effects. By applying the same processing and selection to real data and simulations, we obtain a sample of n(z) distributions from the ABC posterior. The photometric properties of the simulated galaxies are in good agreement with those from the real data, including magnitude, colour and redshift joint distributions. We compare the posterior n(z) from our simulations to the COSMOS2020 redshift distributions obtained via template fitting photometric data spanning the wavelength range from UV to IR. We mitigate sample variance in COSMOS by applying a reweighting technique. We thus obtain a good agreement between the simulated and observed redshift distributions, with a difference in the mean at the 1σ level up to a magnitude of 24 in the i band. We discuss how our forward model can be applied to current and future surveys and be further extended. The ABC posterior and further material will be made publicly available at this https URL.
We present a new high-precision strong-lensing model of PLCK G287.0+32.9, a massive lens galaxy cluster at z=0.383, with the aim of obtaining an accurate estimate of its effective Einstein radius and total mass distribution. We also present a spectroscopic catalog containing accurate redshift measurements for 490 objects, including multiply lensed sources and cluster member galaxies. We exploit high-quality spectroscopic data from three pointings of the VLT Multi Unit Spectroscopic Explorer, covering a central 3 arcmin2 region of the cluster. We complete the spectroscopic catalog by including redshift measurements from VLT-VIMOS and Keck-DEIMOS. We identify 129 spectroscopic cluster member galaxies, with redshifts 0.360 ≤ z ≤ 0.405 and mF160W ≤ 21, and 24 photometric members identified with a convolutional neural network from ancillary HST imaging. We also identify 114 multiple images from 28 background sources, of which 84 images from 16 sources are new, while the remaining ones were identified in previous works. The best-fitting lens model shows a root-mean-square separation between the predicted and observed positions of the multiple images of 0.75″, an improvement by a factor of 2.5 in reproducing the observed positions with respect to previous models. Using the predictive power of our new lens model, we find three new multiple images and confirm the configuration of three systems of multiple images that were not used in the optimization of the model. The derived total mass distribution confirms this cluster to be a very prominent gravitational lens, with an effective Einstein radius θE = 43.4″ ± 0.1″, in agreement with previous estimates and corresponding to a total mass enclosed in the critical curve of $M_E = 3.33^{+0.02}_{-0.07} \times 10^{14}\,M_\odot$.
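The quoted enclosed mass is tied to the effective Einstein radius through the standard axially symmetric lens relation $M_E = (c^2/4G)\,\theta_E^2\, D_l D_s / D_{ls}$, with angular-diameter distances to the lens, to the source, and between them. A sketch of this conversion with illustrative distances (placeholders roughly appropriate for a z ≈ 0.4 lens, not the values used in the paper's model):

```python
import numpy as np

C = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # megaparsec, m

def einstein_mass(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Mass (in solar masses) enclosed by the Einstein radius of an axially
    symmetric lens: M_E = (c^2 / 4G) * theta_E^2 * D_l * D_s / D_ls, with
    angular-diameter distances given in Mpc."""
    theta = theta_e_arcsec * np.pi / (180.0 * 3600.0)   # arcsec -> radians
    dist = (d_l_mpc * d_s_mpc / d_ls_mpc) * MPC         # Mpc -> m
    return (C**2 / (4.0 * G)) * theta**2 * dist / M_SUN

# Illustrative angular-diameter distances only (NOT the fitted values):
m_e = einstein_mass(43.4, d_l_mpc=1100.0, d_s_mpc=1700.0, d_ls_mpc=1100.0)
```

With these placeholder distances the result lands at a few 10^14 solar masses, the same order as the measurement, which is the expected scaling for a ~40″ Einstein radius at these redshifts.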
High-resolution 3D maps of interstellar dust are critical for probing the underlying physics shaping the structure of the interstellar medium, and for foreground correction of astrophysical observations affected by dust. We aim to construct a new 3D map of the spatial distribution of interstellar dust extinction out to a distance of 1.25 kpc from the Sun. We leveraged distance and extinction estimates to 54 million nearby stars derived from the Gaia BP/RP spectra. Using the stellar distance and extinction information, we inferred the spatial distribution of dust extinction. We modeled the logarithmic dust extinction with a Gaussian process in a spherical coordinate system via iterative charted refinement and a correlation kernel inferred in previous work. In total, our posterior has over 661 million degrees of freedom. We probed the posterior distribution using the variational inference method MGVI. Our 3D dust map has an angular resolution of up to 14' (Nside = 256), and we achieve parsec-scale distance resolution, sampling the dust in 516 logarithmically spaced distance bins spanning 69 pc to 1250 pc. We generated 12 samples from the variational posterior of the 3D dust distribution and release the samples alongside the mean 3D dust map and its corresponding uncertainty. Our map resolves the internal structure of hundreds of molecular clouds in the solar neighborhood and will be broadly useful for studies of star formation, Galactic structure, and young stellar populations. It is available for download in a variety of coordinate systems online and can also be queried via the publicly available dustmaps Python package.
Using the relativistic Hartree-Bogoliubov framework with a separable pairing force, coupled with the latest covariant density functionals, i.e., PC-L3R, PC-X, DD-PCX, and DD-MEX, we systematically explore the ground-state properties of all isotopic chains with Z=8-110. These properties include the binding energies, one- and two-neutron separation energies (Sn and S2n), root-mean-square radii of the matter, neutron, proton, and charge distributions, Fermi surfaces, and ground-state spins and parities. We then predict the edges of the nuclear landscape and the bound nuclei for the isotopic chains from oxygen (Z=8) to darmstadtium (Z=110) based on these latest covariant density functionals. The numbers of bound nuclei predicted by PC-L3R, PC-X, DD-PCX, and DD-MEX are 9004, 9162, 6799, and 7112, respectively. The root-mean-square deviations of Sn (S2n) yielded by PC-L3R, PC-X, DD-PCX, and DD-MEX are 0.962 (1.300) MeV, 0.920 (1.483) MeV, 0.993 (1.753) MeV, and 1.010 (1.544) MeV, respectively. The root-mean-square deviations between the available experimental charge radii and the theoretical values from PC-L3R, PC-X, DD-PCX, and DD-MEX are 0.035 fm, 0.037 fm, 0.035 fm, and 0.034 fm, respectively. We notice pronounced differences between the empirical and theoretical root-mean-square neutron radii for nuclei near the neutron drip line of the Mg, Ca, and Kr isotopic chains, suggesting the possible existence of halo or giant halo phenomena.
In this series of papers, we present an emulator-based halo model for the non-linear clustering of galaxies in modified gravity cosmologies. In the first paper, we present emulators for the following halo properties: the halo mass function, concentration-mass relation and halo-matter cross-correlation function. The emulators are trained on data extracted from the FORGE and BRIDGE suites of N-body simulations, respectively, for two modified gravity (MG) theories: f(R) gravity, and the DGP model, varying three standard cosmological parameters Ωm0, H0, σ8, and one MG parameter, either $\bar{f}_{R0}$ or rc. Our halo property emulators achieve an accuracy of ${\lesssim}1\ \hbox{per cent}$ on independent test data sets. We demonstrate that the emulators can be combined with a galaxy-halo connection prescription to accurately predict the galaxy-galaxy and galaxy-matter correlation functions using the halo model framework.
In this chapter, we review the processes involved in the formation of planetesimals and comets. We will start with a description of the physics of dust grain growth and how this is mediated by gas-dust interactions in planet-forming disks. We will then delve into the various models of planetesimal formation, describing how these planetesimals form as well as their resulting structure. In doing so, we focus on and compare two paradigms for planetesimal formation: the gravitational collapse of particle overdensities (which can be produced by a variety of mechanisms) and the growth of particles into planetesimals via collisional and gravitational coagulation.
Cosmological simulations fail to reproduce realistic galaxy populations without energy injection from active galactic nuclei (AGN) into the interstellar medium (ISM) and circumgalactic medium (CGM); a process called `AGN feedback'. Consequently, observational work searches for evidence that luminous AGN impact their host galaxies. Here, we review some of this work. Multi-phase AGN outflows are common, some with potential for significant impact. Additionally, multiple feedback channels can be observed simultaneously; e.g., radio jets from `radio quiet' quasars can inject turbulence on ISM scales, and displace CGM-scale molecular gas. However, caution must be taken comparing outflows to simulations (e.g., kinetic coupling efficiencies) to infer feedback potential, due to a lack of comparable predictions. Furthermore, some work claims limited evidence for feedback because AGN live in gas-rich, star-forming galaxies. However, simulations do not predict instantaneous, global impact on molecular gas or star formation. The impact is expected to be cumulative, over multiple episodes.
We summarize the recent strategy for efficiently hunting new animalcula with the help of rare K and B decays that avoids the use of the |Vcb| and |Vub| parameters, which are subject to tensions between their determinations from inclusive and exclusive decays. In particular, we update the values of the |Vcb|-independent ratios of various K and B decay branching ratios predicted by the Standard Model. We also stress the usefulness of the |Vcb|–γ plots in the search for new physics. We select the magnificent seven among rare K and B decays that should play a leading role in the search for new physics due to their theoretical cleanness: B+ → K+(K*)νν̄ and K+ → π+νν̄, measured recently by Belle II and NA62, respectively, KL → π0νν̄, investigated by KOTO, and also Bs,d → μ+μ− and KS → μ+μ−, measured by the LHCb, CMS and ATLAS experiments at CERN.
We explore the potential of using the low-redshift Lyman-α (Lyα) forest surrounding luminous red galaxies (LRGs) as a tool to constrain active galactic nuclei (AGN) feedback models. Our analysis is based on snapshots from the Illustris and IllustrisTNG simulations at a redshift of z=0.1. These simulations offer an ideal platform for studying the influence of AGN feedback on the gas surrounding galaxies, as they share the same initial conditions and underlying code but incorporate different feedback prescriptions. Both simulations show significant impacts of feedback on the temperature and density of the gas around massive halos. Following our previous work, we adjusted the UV background in both simulations to align with the observed number density of Lyα lines (dN/dz) in the intergalactic medium and study the Lyα forest around massive halos hosting LRGs, at impact parameters (r⊥) ranging from 0.1 to 100 pMpc. Our findings reveal that dN/dz, as a function of r⊥, is approximately 1.5 to 2 times higher in IllustrisTNG compared to Illustris up to r⊥ of ∼10 pMpc. To further assess whether existing data can effectively discern these differences, we search for archival data containing spectra of background quasars probing foreground LRGs. Through a feasibility analysis based on this data, we demonstrate that dN/dz(r⊥) measurements can distinguish between the feedback models of IllustrisTNG and Illustris with a significance exceeding 12σ. This underscores the potential of dN/dz(r⊥) measurements around LRGs as a valuable benchmark observation for discriminating between different feedback models.
We perform the first measurement of the thermal and ionization state of the intergalactic medium (IGM) across 0.9 < z < 1.5 using 301 Lyα absorption lines fitted from 12 HST STIS quasar spectra, with a total pathlength of Δz = 2.1. We employ a machine-learning-based inference method that uses joint b-N distributions obtained from Lyα forest decomposition. Our results show that the HI photoionization rates, Γ, are in good agreement with recent UV background synthesis models, with log(Γ/s⁻¹) = −11.79^{+0.18}_{−0.15}, −11.98^{+0.09}_{−0.09}, and −12.32^{+0.10}_{−0.12} at z = 1.4, 1.2, and 1, respectively. We obtain the IGM temperature at the mean density, T_0, and the adiabatic index, γ, as [log(T_0/K), γ] = [4.13^{+0.12}_{−0.10}, 1.34^{+0.10}_{−0.15}], [3.79^{+0.11}_{−0.11}, 1.70^{+0.09}_{−0.09}], and [4.12^{+0.15}_{−0.25}, 1.34^{+0.21}_{−0.26}] at z = 1.4, 1.2, and 1, respectively. Our measurements of T_0 at z = 1.4 and 1.2 are consistent with the expected trend from z < 3 temperature measurements, as well as with the theoretical expectation that, in the absence of any non-standard heating, the IGM should cool down after HeII reionization. In contrast, our T_0 measurement at z = 1 shows an unexpectedly high IGM temperature. However, because of the relatively large uncertainty in these measurements, of the order of ΔT_0 ≈ 5000 K, mostly emanating from the limited redshift pathlength of available data in these bins, we cannot definitively conclude whether the IGM cools down at z < 1.5. Lastly, we generate a mock dataset to test the constraining power of future measurements with larger datasets. The results demonstrate that, with a redshift pathlength Δz ∼ 2 for each redshift bin, three times the current dataset, we can constrain the T_0 of the IGM to within 1500 K. Such precision would be sufficient to conclusively constrain the history of IGM thermal evolution at z < 1.5.
The Dark Energy Spectroscopic Instrument (DESI) completed its five-month Survey Validation in May 2021. Spectra of stellar and extragalactic targets from Survey Validation constitute the first major data sample from the DESI survey. This paper describes the public release of those spectra, the catalogs of derived properties, and the intermediate data products. In total, the public release includes good-quality spectral information from 466,447 objects targeted as part of the Milky Way Survey, 428,758 as part of the Bright Galaxy Survey, 227,318 as part of the Luminous Red Galaxy sample, 437,664 as part of the Emission Line Galaxy sample, and 76,079 as part of the Quasar sample. In addition, the release includes spectral information from 137,148 objects that expand the scope beyond the primary samples as part of a series of secondary programs. Here, we describe the spectral data, data quality, data products, Large-Scale Structure science catalogs, access to the data, and references that provide relevant background to using these spectra.
We present a publicly-available code to generate mock Lyman-α (Lyα) forest data sets. The code is based on the Fluctuating Gunn-Peterson Approximation (FGPA) applied to Gaussian random fields and on the use of fast Fourier transforms (FFT). The output includes spectra of the Lyα transmitted flux fraction, F, a quasar catalog, and a catalog of high-column-density systems. While these three elements have realistic correlations, additional code is then used to generate realistic quasar spectra, to add absorption by high-column-density systems and metals, and to simulate instrumental effects. Redshift space distortions (RSD) are implemented by including the large-scale velocity-gradient field in the FGPA, resulting in a correlation function of F that can be accurately predicted. One hundred realizations have been produced over the 14,000 deg2 Dark Energy Spectroscopic Instrument (DESI) survey footprint with 100 quasars per deg2, and they are being used for the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) and DESI surveys. The analysis of these realizations shows that the correlation of F follows the prediction within the accuracy of the eBOSS survey. The most time-consuming part of the production occurs before application of the FGPA, and the existing pre-FGPA forests can be used to easily produce new mock sets with modified redshift-dependent bias parameters or observational conditions.
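The FGPA step at the heart of such mock pipelines can be sketched in a few lines: a Gaussian random field is generated in Fourier space and mapped to a transmitted flux fraction F = exp(−τ), with τ a lognormal transform of the field. The power spectrum, amplitude A, and slope γ below are illustrative placeholders, not the values calibrated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D Gaussian random field delta(x) generated in Fourier space with a
# power-law power spectrum (an illustrative choice).
n = 4096
k = np.fft.rfftfreq(n)
power = np.where(k > 0, k**-1.0, 0.0)
modes = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(power)
delta = np.fft.irfft(modes, n)
delta /= delta.std()          # normalize to unit variance

# Fluctuating Gunn-Peterson Approximation: the optical depth is a
# lognormal transform of the field, tau = A * exp(gamma * delta), and
# the transmitted flux fraction is F = exp(-tau).
A, gamma = 0.3, 1.6           # illustrative amplitude and slope
tau = A * np.exp(gamma * delta)
F = np.exp(-tau)

print(F.min(), F.max(), F.mean())
```

Since τ > 0 everywhere, F lies strictly between 0 and 1, as a transmitted flux fraction must; RSD would enter by adding a velocity-gradient term inside the exponent.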
We present preliminary results of a partial-wave analysis of τ− → π−π−π+ντ in data from the Belle experiment at the KEKB e+e− collider. We demonstrate the presence of the a1(1420) and a1(1640) resonances in tauon decays and measure their masses and widths. We also present validation of our findings using a model-independent approach. Our results can improve modeling in simulation studies necessary for measuring the tauon electric and magnetic dipole moments and Michel parameters.
Experimental High Energy Physics has entered an era of precision measurements. However, measurements of many of the accessible processes assume that the final states' underlying kinematic distribution is the same as the Standard Model prediction. This assumption introduces an implicit model-dependency into the measurement, rendering the reinterpretation of the experimental analysis complicated without reanalysing the underlying data. We present a novel reweighting method in order to perform reinterpretation of particle physics measurements. It makes use of reweighting the Standard Model templates according to kinematic signal distributions of alternative theoretical models, prior to performing the statistical analysis. The generality of this method allows us to perform statistical inference in the space of theoretical parameters, assuming different kinematic distributions, according to a beyond Standard Model prediction. We implement our method as an extension to the pyhf software and interface it with the EOS software, which allows us to perform flavor physics phenomenology studies. Furthermore, we argue that, beyond the pyhf or HistFactory likelihood specification, only minimal information is necessary to make a likelihood model-agnostic and hence easily reinterpretable. We showcase that publishing such likelihoods is crucial for a full exploitation of experimental results.
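The core reweighting idea described above — rescaling Standard Model templates by the ratio of the alternative-model to SM kinematic densities before the statistical analysis — can be sketched as follows. The kinematic variable and both densities are hypothetical stand-ins; the actual implementation works as an extension of pyhf/HistFactory likelihoods rather than on raw histograms.

```python
import numpy as np

rng = np.random.default_rng(2)

# Events generated under the Standard Model hypothesis, with one
# kinematic variable x per event (a toy stand-in for e.g. q^2).
x_sm = rng.exponential(scale=1.0, size=100000)

# Hypothetical SM and alternative-model densities over x, evaluated
# analytically here; in practice both come from generator predictions.
def pdf_sm(x):
    return np.exp(-x)

def pdf_alt(x, slope=0.5):
    # Alternative model: softer spectrum (illustrative form).
    return slope * np.exp(-slope * x)

# Per-event weights = ratio of the two kinematic densities.
w = pdf_alt(x_sm) / pdf_sm(x_sm)

# Reweighted histogram template: same binning and same events, but the
# shape now follows the alternative-model spectrum.
bins = np.linspace(0, 5, 21)
template_sm, _ = np.histogram(x_sm, bins=bins)
template_alt, _ = np.histogram(x_sm, bins=bins, weights=w)

print(template_sm[0], template_alt[0])
```

The statistical inference then proceeds with the reweighted templates, so the same recorded data can be confronted with many alternative kinematic hypotheses without regenerating simulation.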
Disc winds and planet-disc interactions are two crucial mechanisms that define the structure, evolution and dispersal of protoplanetary discs. While winds are capable of removing material from discs, eventually leading to their dispersal, massive planets can shape their disc by creating sub-structures such as gaps and spiral arms. We study the interplay between an X-ray photoevaporative disc wind and the substructures generated due to planet-disc interactions to determine how their mutual interactions affect the disc's and the planet's evolution. We perform three-dimensional hydrodynamic simulations of viscous (α=6.9⋅10−4) discs that host a Jupiter-like planet and undergo X-ray photoevaporation. We trace the gas flows within the disc and wind and measure the accretion rate onto the planet, as well as the gravitational torque that is acting on it. Our results show that the planetary gap takes away the wind's pressure support, allowing wind material to fall back into the gap. This opens new pathways for material from the inner disc (and part of the outer disc) to be redistributed through the wind towards the gap. Consequently, the gap becomes shallower, and the flow of mass across the gap in both directions is significantly increased, as well as the planet's mass-accretion rate (by factors ≈5 and ≈2, respectively). Moreover, the wind-driven redistribution results in a denser inner disc and less dense outer disc, which, combined with the recycling of a significant portion of the inner wind, leads to longer lifetimes of the inner disc, contrary to the expectation in a planet-induced photoevaporation (PIPE) scenario that has been proposed in the past.
We present a new approach for identifying neutrino flares. Using the unsupervised machine-learning algorithm expectation maximization, we reduce computing times compared to conventional approaches by a factor of 10^5 on a single CPU. Expectation maximization is also easily expandable to multiple flares. We explain the application of the algorithm and fit the neutrino flare of TXS 0506+056 as an example.
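A minimal version of such an expectation-maximization flare fit can be sketched as a two-component mixture of a uniform background and a Gaussian flare in event time; all event counts and starting values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy event times: uniform background plus one Gaussian "flare".
T = 100.0
bkg = rng.uniform(0, T, size=400)
flare = rng.normal(loc=60.0, scale=2.0, size=100)
t = np.concatenate([bkg, flare])

# EM for a two-component mixture: uniform background + Gaussian flare.
mu, sigma, f = 50.0, 10.0, 0.2     # initial guesses (f = flare fraction)
for _ in range(200):
    # E-step: membership probability of each event for the flare.
    g = f * np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p = g / (g + (1 - f) / T)
    # M-step: update flare parameters from the soft assignments.
    f = p.mean()
    mu = np.sum(p * t) / p.sum()
    sigma = np.sqrt(np.sum(p * (t - mu) ** 2) / p.sum())

print(round(mu, 1), round(sigma, 1), round(f, 2))
```

Each iteration is a closed-form update, which is what makes the method so much cheaper than scanning flare hypotheses; extending to multiple flares just adds more Gaussian components to the mixture.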
The concept of attention, numerical weights that emphasize the importance of particular data, has proven to be very relevant in artificial intelligence. Relative entropy (RE, aka Kullback-Leibler divergence) plays a central role in communication theory. Here we combine these concepts, attention and RE. RE guides optimal encoding of messages in bandwidth-limited communication as well as optimal message decoding via the maximum entropy principle (MEP). In the coding scenario, RE can be derived from four requirements, namely being analytical, local, proper, and calibrated. Weighted RE, used for attention steering in communications, turns out to be improper. To see how proper attention communication can emerge, we analyze a scenario of a message sender who wants to ensure that the receiver of the message can perform well-informed actions. If the receiver decodes the message using the MEP, the sender only needs to know the receiver's utility function to inform optimally, but not the receiver's initial knowledge state. If only the curvatures of the utility function maxima are known, it becomes desirable to accurately communicate an attention function, in this case a probability function weighted by these curvatures and re-normalized. Entropic attention communication is here proposed as the desired generalization of entropic communication that permits weighting while being proper, thereby aiding the design of optimal communication protocols in technical applications and helping to understand human communication. For example, our analysis shows how to derive the level of cooperation expected under misaligned interests of otherwise honest communication partners.
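The impropriety of weighted RE mentioned above can be checked directly: the reported distribution that minimizes the expected weighted log-score is not the true distribution, but one tilted by the weights. A small numerical sketch (with arbitrary example numbers):

```python
import numpy as np

# True distribution p and attention weights w over three outcomes.
p = np.array([0.5, 0.3, 0.2])
w = np.array([1.0, 2.0, 4.0])

def weighted_score(q):
    # Expected weighted log-score of a reported distribution q
    # when outcomes are actually drawn from p.
    return np.sum(p * w * (-np.log(q)))

# Minimizer over the simplex: the Lagrange condition gives
# q_i proportional to w_i * p_i, which differs from p itself --
# hence the weighted score is improper.
q_opt = w * p / np.sum(w * p)

print(q_opt, weighted_score(q_opt) < weighted_score(p))
```

Because the optimal report q_opt ≠ p, a sender scored this way is incentivized to misreport, which is exactly the defect the proposed entropic attention communication is designed to remove.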
We extend the multireference covariant density-functional theory (MR-CDFT) by including fluctuations in quadrupole deformations and average isovector pairing gaps simultaneously for the nuclear matrix elements (NMEs) of neutrinoless double-beta (0νββ) decay in the candidate nuclei 76Ge, 82Se, 100Mo, 130Te, and 136Xe, assuming the exchange of either light or heavy neutrinos. The results indicate a linear correlation between the predicted NMEs and the isovector pairing strengths, as well as the excitation energies of the 2^+_1 and 4^+_1 states. By adjusting the pairing strengths based on the excitation energies of the 2^+_1 states, we calculate the NMEs for 0νββ decay, which are reduced by approximately 12% to 62% compared with the results obtained in the previous studies by Song et al. [Phys. Rev. C 95, 024305 (2017)]. Additionally, upon introducing the average isovector pairing gap as an additional generator coordinate in the calculation, the NMEs increase by 56% to 218%.
GraphNeT is an open-source Python framework aimed at providing high-quality, user-friendly, end-to-end functionality to perform reconstruction tasks at neutrino telescopes using graph neural networks (GNNs). GraphNeT makes it fast and easy to train complex models that can provide event reconstruction with state-of-the-art performance, for arbitrary detector configurations, with inference times that are orders of magnitude faster than traditional reconstruction techniques. GNNs from GraphNeT are flexible enough to be applied to data from all neutrino telescopes, including future projects such as IceCube extensions or P-ONE. This means that GNN-based reconstruction can be used to provide state-of-the-art performance on most reconstruction tasks in neutrino telescopes, at real-time event rates, across experiments and physics analyses, with vast potential impact for neutrino and astro-particle physics.
We introduce a new statistical test based on the observed spacings of ordered data. The statistic is sensitive to non-uniformity in random samples and to short-lived features in event time series. Under some conditions, this new test can outperform existing ones, such as the well-known Kolmogorov-Smirnov or Anderson-Darling tests, in particular when the number of samples is small and differences occur over a small quantile of the null hypothesis distribution. A detailed description of the test statistic is provided, including a discussion of the parameterization of its distribution via asymptotic bootstrapping as well as a novel per-quantile error estimation of the empirical distribution. Two example applications are provided: using the test to boost the sensitivity in generic "bump hunting", and employing it to detect supernovae. The article is rounded off with an extended performance comparison to other, established goodness-of-fit tests.
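A brute-force illustration of a spacing-based test, using the largest gap as the statistic and plain Monte Carlo calibration instead of the asymptotic bootstrapping described above (the statistic choice and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def max_spacing(u):
    # Largest gap between consecutive ordered values on [0, 1],
    # including the two edge gaps.
    s = np.sort(u)
    gaps = np.diff(np.concatenate([[0.0], s, [1.0]]))
    return gaps.max()

# Calibrate the null distribution of the statistic for uniform
# samples by plain Monte Carlo.
n = 50
null = np.array([max_spacing(rng.uniform(size=n)) for _ in range(2000)])

# A sample with a depleted region (no events between 0.4 and 0.7)
# produces an unusually large gap, localized in a small quantile range.
sample = np.concatenate([rng.uniform(0.0, 0.4, 25), rng.uniform(0.7, 1.0, 25)])
stat = max_spacing(sample)
p_value = (null >= stat).mean()
print(stat, p_value)
```

This is the kind of localized deviation where spacing statistics shine: the Kolmogorov-Smirnov statistic integrates over the whole distribution and dilutes such a feature, while the largest spacing responds to it directly.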
Event reconstruction is a central step in many particle physics experiments, turning detector observables into parameter estimates; for example estimating the energy of an interaction given the sensor readout of a detector. A corresponding likelihood function is often intractable, and approximations need to be constructed. In our work, we first show how the full likelihood for a many-sensor detector can be broken apart into smaller terms, and secondly how we can train neural networks to approximate all terms solely based on forward simulation. Our technique results in a fast, flexible, and close-to-optimal surrogate model proportional to the likelihood and can be used in conjunction with standard inference techniques allowing for a consistent treatment of uncertainties. We illustrate our technique for parameter inference in neutrino telescopes based on maximum likelihood and Bayesian posterior sampling. Given its great flexibility, we also showcase our method for geometry optimization enabling to learn optimal detector designs. Lastly, we apply our method to realistic simulation of a ton-scale water-based liquid scintillator detector.
The reconstruction of neutrino events in the IceCube experiment is crucial for many scientific analyses, including searches for cosmic neutrino sources. The Kaggle competition "IceCube -- Neutrinos in Deep Ice" was a public machine learning challenge designed to encourage the development of innovative solutions to improve the accuracy and efficiency of neutrino event reconstruction. Participants worked with a dataset of simulated neutrino events and were tasked with creating a suitable model to predict the direction vector of incoming neutrinos. From January to April 2023, hundreds of teams competed for a total of $50k in prize money, which was awarded to the best-performing few out of the many thousand submissions. In this contribution, I present some insights into the organization of this large outreach project and summarize some of the main findings, results, and takeaways.
We present a simple and promising new method to measure the expansion rate and the geometry of the universe that combines observations related to the time delays between the multiple images of time-varying sources, strongly lensed by galaxy clusters, and Type Ia supernovae, exploding in galaxies belonging to the same lens clusters. By means of two different statistical techniques that adopt realistic errors on the relevant quantities, we quantify the accuracy of the inferred cosmological parameter values. We show that the estimate of the Hubble constant is robust and competitive, and depends only mildly on the chosen cosmological model. Remarkably, the two probes separately produce confidence regions on the cosmological parameter planes that are oriented in complementary ways, thus providing in combination valuable information on the values of the other cosmological parameters. We conclude by illustrating the immediate observational feasibility of the proposed joint method in a well-studied lens galaxy cluster, with a relatively small investment of telescope time for monitoring from a 2 to 3 m class ground-based telescope.
In previous hydrodynamical simulations, we found a mechanism for nearly circular binary stars, such as Kepler-413, to trap two planets in a stable 1:1 resonance. Therefore, the stability of coorbital configurations becomes a relevant question for planet formation around binary stars. For this work, we investigated coorbital planet stability using a Kepler-413 analogue as an example and then expanded the parameters to study the general n-body stability of planet pairs in eccentric horseshoe orbits around binaries. The stability was tested by evolving the planet orbits for 10^5 binary periods with varying initial semi-major axes and planet eccentricities. The unstable region of a single circumbinary planet is used as a comparison to the investigated coorbital configurations in this work. We confirm previous findings on the stability of single planets and find a first-order linear relation between the orbit eccentricity e_p and the pericentre that identifies stable orbits for various binary configurations. Such a linear relation is also found for the stability of 1:1 resonant planets around binaries. Stable orbits for eccentric horseshoe configurations exist with a pericentre closer than seven binary separations and, in the case of Kepler-413, the pericentre of the first stable orbit can be approximated by r_c,peri = (2.90 e_p + 2.46) a_bin.
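The fitted boundary quoted above can be turned into a simple stability check; the Kepler-413 binary separation used here is an approximate literature value, inserted only for illustration.

```python
# Illustrative use of the fitted stability boundary from the abstract:
# for Kepler-413-like binaries, the pericentre of the first stable
# coorbital orbit is approximated by r_peri = (2.90 e_p + 2.46) a_bin.

def critical_pericentre(e_p, a_bin):
    """Smallest stable pericentre distance (same units as a_bin)."""
    return (2.90 * e_p + 2.46) * a_bin

def is_stable(a_p, e_p, a_bin):
    """Compare the planet's pericentre a_p * (1 - e_p) to the boundary."""
    return a_p * (1.0 - e_p) >= critical_pericentre(e_p, a_bin)

# Kepler-413: a_bin of roughly 0.10 AU (approximate value, assumed here).
a_bin = 0.10
print(is_stable(a_p=0.36, e_p=0.0, a_bin=a_bin))   # circular orbit at ~3.6 a_bin
print(is_stable(a_p=0.30, e_p=0.3, a_bin=a_bin))   # eccentric, pericentre 2.1 a_bin
```

Note how eccentricity hurts twice: it moves the pericentre inward and simultaneously raises the critical distance through the 2.90 e_p term.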
We devise and demonstrate a method to search for non-gravitational couplings of ultralight dark matter to standard model particles using space-time separated atomic clocks and cavity-stabilized lasers. By making use of space-time separated sensors, which probe different values of an oscillating dark matter field, we can search for couplings that cancel in typical local experiments. We demonstrate this method using existing data from a frequency comparison of lasers stabilized to two optical cavities connected via a 2220 km fiber link [Nat. Commun. 13, 212 (2022)]. The absence of significant oscillations in the data results in constraints on the coupling of scalar dark matter to electrons, d_me, for masses between 1e-19 eV and 2e-15 eV. These are the first constraints on d_me alone in this mass range, and improve the dark matter constraints on any scalar-Fermion coupling by up to two orders of magnitude.
We show in experiments that a long, underdense, relativistic proton bunch propagating in plasma undergoes the oblique instability, that we observe as filamentation. We determine a threshold value for the ratio between the bunch transverse size and plasma skin depth for the instability to occur. At the threshold, the outcome of the experiment alternates between filamentation and self-modulation instability (evidenced by longitudinal modulation into microbunches). Time-resolved images of the bunch density distribution reveal that filamentation grows to an observable level late along the bunch, confirming the spatio-temporal nature of the instability. We calculate the amplitude of the magnetic field generated in the plasma by the instability and show that the associated magnetic energy increases with plasma density.
Pseudospin symmetry (PSS) is a relativistic dynamical symmetry connected with the lower component of the Dirac spinor. Here, we investigate the conservation and breaking of PSS in the single-nucleon resonant states, as an example, using the Green's function method, which provides a novel way to precisely describe not only the resonant energies and widths but also the spatial density distributions for both narrow and wide resonances. The PSS conservation and breaking are clearly displayed in the evolution of the resonant parameters and density distributions with the potential depth: in the PSS limit, i.e., when the attractive scalar and repulsive vector potentials have the same magnitude but opposite sign, PSS is exactly conserved, with strictly the same energy and width for the PS partners as well as identical density distributions of the lower components. As the potential depth increases, the PSS is broken gradually, with energy and width splittings and a phase shift in the density distributions.
Methyl cyanide (CH3CN) is one of the most abundant and widely spread interstellar complex organic molecules (iCOMs). Several studies found that, in hot corinos, methyl cyanide and methanol abundances are correlated, suggesting a chemical link, often interpreted as a synthesis of both on the interstellar grain surfaces. In this article, we present a revised network of the reactions forming methyl cyanide in the gas phase. We carried out an exhaustive review of the gas-phase CH3CN formation routes, propose two new reactions, and performed new quantum mechanics calculations for several reactions. We found that 13 of the 15 reactions reported in the databases KIDA and UDfA have incorrect products and/or rate constants. The new corrected reaction network contains 10 reactions leading to methyl cyanide. We tested the relative importance of those reactions in forming CH3CN using our astrochemical model. We confirm that the radiative association of CH3+ and HCN, forming CH3CNH+, followed by the electron recombination of CH3CNH+, is the most important CH3CN formation route in both cold and warm environments, notwithstanding that we significantly corrected the rate constants and products of both reactions. The two newly proposed reactions play an important role in warm environments. Finally, we found very good agreement between the predicted CH3CN abundances and those measured in cold (~10 K) and warm (~90 K) objects. Unexpectedly, we also found a chemical link between methanol and methyl cyanide via the CH3+ ion, which can explain the observed correlation between the CH3OH and CH3CN abundances measured in hot corinos.
The upcoming ByCycle project on the VISTA/4MOST multi-object spectrograph will offer new prospects of using a massive sample of ~1 million high spectral resolution (R = 20 000) background quasars to map the circumgalactic metal content of foreground galaxies (observed at R = 4000-7000), as traced by metal absorption. Such large surveys require specialized analysis methodologies. In the absence of early data, we instead produce synthetic 4MOST high-resolution fibre quasar spectra. To do so, we use the TNG50 cosmological magnetohydrodynamical simulation, combining photo-ionization post-processing and ray tracing, to capture Mg II (λ2796, λ2803) absorbers. We then use this sample to train a convolutional neural network (CNN) which searches for, and estimates the redshift of, Mg II absorbers within these spectra. For a test sample of quasar spectra with uniformly distributed properties ($\lambda _{\rm {Mg\, {\small II},2796}}$, $\rm {EW}_{\rm {Mg\, {\small II},2796}}^{\rm {rest}} = 0.05\!-\!5.15$ Å, $\rm {SNR} = 3\!-\!50$), the algorithm has a robust classification accuracy of 98.6 per cent and a mean wavelength accuracy of 6.9 Å. For high signal-to-noise (SNR) spectra ($\rm {SNR \gt 20}$), the algorithm robustly detects and localizes Mg II absorbers down to equivalent widths of $\rm {EW}_{\rm {Mg\, {\small II},2796}}^{\rm {rest}} = 0.05$ Å. For the lowest SNR spectra ($\rm {SNR=3}$), the CNN reliably recovers and localizes EW$_{\rm {Mg\, {\small II},2796}}^{\rm {rest}}$ ≥0.75 Å absorbers. This is more than sufficient for subsequent Voigt profile fitting to characterize the detected Mg II absorbers. We make the code publicly available through GitHub. Our work provides a proof-of-concept for future analyses of quasar spectra data sets numbering in the millions, soon to be delivered by the next generation of surveys.