We study stellar population and structural properties of massive (log(M⋆/M⊙) > 11) galaxies at z ≈ 2.7 in the Magneticum and IllustrisTNG hydrodynamical simulations and the GAEA semi-analytic model. We find stellar mass functions broadly consistent with observations, with no scarcity of massive, quiescent galaxies at z ≈ 2.7, but with a higher quiescent galaxy fraction at high masses in IllustrisTNG. Average ages of simulated quiescent galaxies are between ≈0.8 and 1.0 Gyr, older by a factor of ≈2 than observed in spectroscopically confirmed quiescent galaxies at similar redshift. Besides potentially indicating limitations of simulations in reproducing observed star formation histories, this discrepancy may also reflect limitations in the estimation of observed ages. We investigate the purity of simulated UVJ rest-frame colour-selected massive quiescent samples with photometric uncertainties typical of deep surveys (e.g. COSMOS). We find evidence for significant contamination (up to 60 per cent) by dusty star-forming galaxies in the UVJ region that is typically populated by older quiescent sources. Furthermore, the completeness of UVJ-selected quiescent samples at this redshift may be reduced by ≈30 per cent due to a high fraction of young quiescent galaxies not entering the UVJ quiescent region. Massive, quiescent galaxies in simulations have on average lower angular momenta and higher projected axis ratios and concentrations than their star-forming counterparts. Average sizes of simulated quiescent galaxies are broadly consistent with observations within the uncertainties. The average size ratio of quiescent and star-forming galaxies in the probed mass range is formally consistent with observations, although this result is partly affected by poor statistics.
We discuss the gravitational wave spectrum produced by first-order phase transitions seeded by domain wall networks. This setup is important for many two-step phase transitions as seen for example in the singlet extension of the standard model. Whenever the correlation length of the domain wall network is larger than the typical bubble size, this setup leads to a gravitational wave signal that is shifted to lower frequencies and with an enhanced amplitude compared to homogeneous phase transitions without domain walls. We discuss our results in light of the recent PTA hints for gravitational waves.
In this paper, we present COMET, a Gaussian process emulator of the galaxy power spectrum multipoles in redshift space. The model predictions are based on one-loop perturbation theory, and we consider two alternative descriptions of redshift-space distortions: one that performs a full expansion of the real- to redshift-space mapping, as in recent effective field theory models, and another that preserves the non-perturbative impact of small-scale velocities by means of an effective damping function. The outputs of COMET can be obtained at arbitrary redshifts, for arbitrary fiducial background cosmologies, and for a large parameter space that covers the shape parameters ωc, ωb, and ns, as well as the evolution parameters h, As, ΩK, w0, and wa. This flexibility does not impair COMET's accuracy, since we exploit an exact degeneracy between the evolution parameters that allows us to train the emulator on a significantly reduced parameter space. While the predictions are sped up by two orders of magnitude, validation tests reveal an accuracy of 0.1 per cent for the monopole and quadrupole (0.3 per cent for the hexadecapole), or alternatively, better than 0.25σ for all three multipoles in comparison to statistical uncertainties expected for the Euclid survey with a tenfold increase in volume. We show that these differences translate into shifts in mean posterior values that are at most of the same size, meaning that COMET can be used with the same confidence as the exact underlying models. COMET is a publicly available PYTHON package that also provides the tree-level bispectrum multipoles and Gaussian covariance matrices.
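The abstract does not spell out COMET's training setup; as a hedged illustration of the core technique it names — Gaussian-process regression standing in for an expensive model evaluation — here is a minimal numpy sketch. The RBF kernel, length-scale, jitter, and toy target function are all illustrative choices, not COMET's actual configuration.

```python
import numpy as np

def rbf_kernel(xa, xb, length=0.2, amp=1.0):
    # Squared-exponential (RBF) kernel between two 1-D input sets.
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return amp * np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior mean (zero prior mean, small jitter).
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# Stand-in for an expensive model (e.g. a perturbation-theory multipole):
def expensive_model(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0.0, 1.0, 25)     # training designs in parameter space
y_train = expensive_model(x_train)      # exact (slow) evaluations, done once
x_test = np.array([0.13, 0.47, 0.82])   # new points, predicted near-instantly
y_pred = gp_predict(x_train, y_train, x_test)
```

Once trained, every prediction costs only a kernel evaluation and a dot product, which is the source of the order-of-magnitude speed-ups an emulator delivers over the exact model.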
$Z^\prime$ models belong to those that can most easily explain the anomalies in $b\to s \mu^+\mu^-$ transitions. However, such an explanation by a single $Z^\prime$ gauge boson, as done in the literature, is severely constrained by $B^0_s-\bar B_s^0$ mixing. Also the recent finding that the mass differences $\Delta M_s$, $\Delta M_d$, the CP-violating parameter $\varepsilon_K$, and the mixing-induced CP asymmetries $S_{\psi K_S}$ and $S_{\psi \phi}$ can be simultaneously well described within the SM without new physics (NP) contributions is a challenge for $Z^\prime$ models with a single $Z^\prime$ contributing at tree level to quark mixing. We point out that including a second $Z^\prime$ in the model makes it possible to eliminate simultaneously the tree-level contributions to the five $\Delta F=2$ observables used in the determination of the CKM parameters while leaving room for NP in $\Delta M_K$ and $\Delta M_D$. The latter can be removed at the price of infecting $\Delta M_s$ or $\Delta M_d$ with NP, which is presently disfavoured. This pattern is transparently seen using the new mixing matrix for $Z^\prime$ interactions with quarks. This strategy allows significant tree-level contributions to $K$, $B_s$, and $B_d$ decays, thereby making it possible to explain the existing anomalies in $b\to s\mu^+\mu^-$ transitions and the anticipated anomaly in the ratio $\varepsilon'/\varepsilon$ much more easily than in $Z^\prime$-Single scenarios. The proposed $Z^\prime$-Tandem mechanism bears some similarities to the GIM mechanism for the suppression of FCNCs in the SM, with the role of the charm quark played here by the second $Z^\prime$. However, it differs from the latter profoundly in that only NP contributions to quark mixing are eliminated at tree level. We briefly discuss the implied flavour patterns in $K$ and $B$ decay observables in this NP scenario.
In this paper we consider signal-background interference effects in Higgs-mediated diphoton production at the LHC. After reviewing earlier works that show how to use these effects to constrain the Higgs boson total decay width, we provide predictions beyond NLO accuracy for the interference and related observables, and study the impact of QCD radiative corrections on the Higgs width determination. In particular, we use the so-called soft-virtual approximation to estimate interference effects at NNLO in QCD. The inclusion of these effects reduces the NNLO prediction for the total Higgs cross-section in the diphoton channel by about 1.7%. We study in detail the impact of QCD corrections on the Higgs-boson line-shape and its implications for the Higgs boson width extraction. In particular, we find that the shift of the Higgs resonance peak arising from interference effects gets reduced by about 30% with respect to the NLO prediction. Assuming an experimental resolution of about 150 MeV on interference-induced modifications of the Higgs-boson line-shape, our NNLO analysis shows that one could constrain the Higgs-boson total width to about 10-20 times its Standard Model value.
Context. The BL Lac object 1ES 0647+250 is one of the few distant γ-ray emitting blazars detected at very high energies (VHEs; ≳100 GeV) during a non-flaring state. It was detected with the MAGIC telescopes during a period of low activity in the years 2009−2011 as well as during three flaring activities in the years 2014, 2019, and 2020, with the highest VHE flux in the last epoch. An extensive multi-instrument data set was collected as part of several coordinated observing campaigns over these years.
Aims: We aim to characterise the long-term multi-band flux variability of 1ES 0647+250, as well as its broadband spectral energy distribution (SED) during four distinct activity states selected in four different epochs, in order to constrain the physical parameters of the blazar emission region under certain assumptions.
Methods: We evaluated the variability and correlation of the emission in the different energy bands with the fractional variability and the Z-transformed discrete correlation function, as well as its spectral evolution in X-rays and γ rays. Owing to the controversy in the redshift measurements of 1ES 0647+250 reported in the literature, we also estimated its distance in an indirect manner through a comparison of the GeV and TeV spectra from simultaneous observations with Fermi-LAT and MAGIC during the strongest flaring activity detected to date. Moreover, we interpret the SEDs from the four distinct activity states within the framework of one-component and two-component leptonic models, proposing specific scenarios that are able to reproduce the available multi-instrument data.
Results: We find significant long-term variability, especially in X-rays and VHE γ rays. Furthermore, significant (3−4σ) correlations were found between the radio, optical, and high-energy (HE) γ-ray fluxes, with the radio emission delayed by ∼400 days with respect to the optical and γ-ray bands. The spectral analysis reveals a harder-when-brighter trend during the non-flaring state in the X-ray domain. However, no clear patterns were observed for either the enhanced states or the HE (30 MeV < E < 100 GeV) and VHE γ-ray emission of the source. The indirect estimation of the redshift yielded a value of z = 0.45 ± 0.05, which is compatible with some of the values reported in the literature. The SEDs related to the low-activity state and the three flaring states of 1ES 0647+250 can be described reasonably well with both the one-component and two-component leptonic scenarios. However, the long-term correlations indicate the need for an additional radio-producing region located about 3.6 pc downstream from the γ-ray-producing region.
The nucleosynthetic isotope dichotomy between carbonaceous (CC) and non-carbonaceous (NC) meteorites has been interpreted as evidence for spatial separation and the coexistence of two distinct planet-forming reservoirs for several million years in the solar protoplanetary disk. The rapid formation of Jupiter's core within one million years after the formation of calcium-aluminium-rich inclusions (CAIs) has been suggested as a potential mechanism for spatial and temporal separation. In this scenario, Jupiter's core would open a gap in the disk and trap inward-drifting dust grains in the pressure bump at the outer edge of the gap, separating the inner and outer disk materials from each other. We performed simulations of dust particles in a protoplanetary disk with a gap opened by an early-formed Jupiter core, including dust growth and fragmentation as well as dust transport, using the dust evolution software DustPy. Our numerical experiments indicate that particles trapped in the outer edge of the gap rapidly fragment and are transported through the gap, contaminating the inner disk with outer disk material on a timescale that is inconsistent with the meteoritic record. This suggests that other processes must have initiated or at least contributed to the isotopic separation between the inner and outer Solar System.
Context: Type II supernovae provide a direct way to estimate distances through the expanding photosphere method, which is independent of the cosmic distance ladder. A recently introduced Gaussian process-based method allows for a fast and precise modelling of spectral time series, which puts accurate and computationally cheap Type II-based absolute distance determinations within reach. Aims: The goal of the paper is to assess the internal consistency of this new modelling technique coupled with the distance estimation empirically, using the spectral time series of supernova siblings, i.e. supernovae that exploded in the same host galaxy. Methods: We use a recently developed spectral emulator code, which is trained on \textsc{Tardis} radiative transfer models and is capable of a fast maximum likelihood parameter estimation and spectral fitting. After calculating the relevant physical parameters of supernovae we apply the expanding photosphere method to estimate their distances. Finally, we test the consistency of the obtained values by applying the formalism of Bayes factors. Results: The distances to four different host galaxies were estimated based on two supernovae in each. The distance estimates are not only consistent within the errors for each of the supernova sibling pairs, but in the case of two hosts they are precise to better than 5\%. Conclusions: Even though the literature data we used was not tailored for the requirements of our analysis, the agreement of the final estimates shows that the method is robust and is capable of inferring both precise and consistent distances. By using high-quality spectral time series, this method can provide precise distance estimates independent of the distance ladder, which are of high value for cosmology.
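The expanding photosphere method named above has a compact core: the angular photospheric radius obeys θ(t) = v(t − t0)/D, so epoch t is linear in θ/v, with slope D and intercept t0. The following minimal sketch fits synthetic data for illustration; all numbers are invented and not taken from the paper.

```python
import numpy as np

KM_PER_MPC = 3.086e19
D_true, t0_true = 20.0, -3.0  # invented: distance (Mpc), explosion epoch (days)

# Photospheric velocities (km/s converted to Mpc/day) and observation epochs:
v = np.array([9000.0, 8000.0, 7000.0, 6500.0]) * 86400.0 / KM_PER_MPC
t = np.array([10.0, 20.0, 35.0, 50.0])

# Synthetic angular photospheric radii, theta = v * (t - t0) / D:
theta = v * (t - t0_true) / D_true

# EPM fit: t = t0 + D * (theta / v) is linear in theta / v.
x = theta / v
A = np.vstack([x, np.ones_like(x)]).T
(D_fit, t0_fit), *_ = np.linalg.lstsq(A, t, rcond=None)
```

With noiseless input the fit recovers D and t0 exactly; with real spectra, v and θ come from the radiative-transfer modelling, and their uncertainties propagate into the distance.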
Low-luminosity active galactic nuclei (LLAGN) are special among their kind due to the profound structural changes that the central engine experiences at low accretion rates (≲ 10−3 in Eddington units). The disappearance of the accretion disc - the blue bump - leaves behind a faint optical nuclear continuum whose nature has been largely debated. This is mainly due to serious limitations on the observational side imposed by the starlight contamination from the host galaxy and the absorption by hydrogen, preventing the detection of these weak nuclei in the infrared (IR) to ultraviolet (UV) range. We addressed these challenges by combining multi-wavelength sub-arcsecond resolution observations - able to isolate the genuine nuclear continuum - with nebular lines in the mid-IR, which allowed us to indirectly probe the shape of the extreme UV continuum. We found that eight of the nearest prototype LLAGN are compatible with pure compact jet emission over more than ten orders of magnitude in frequency. This consists of self-absorbed synchrotron emission from radio to the UV plus the associated synchrotron self-Compton component dominating the emission in the UV to X-ray range. Additionally, the LLAGN continua show two particular characteristics when compared with the typical jet spectrum seen in radio galaxies: (i) a very steep spectral slope in the IR-to-optical/UV range (−3.7 < α0 < −1.3; Fν ∝ να0); and (ii) a very high turnover frequency (0.2-30 THz; 1.3 mm-10 μm) that separates the optically thick radio emission from the optically thin continuum in the IR-to-optical/UV range. These attributes can be explained if the synchrotron continuum is mainly dominated by thermalised particles at the jet base or the corona with considerably high temperatures, whereas only a small fraction of the energy (∼20%) would be distributed along the high-energy power-law tail of accelerated particles. 
On the other hand, the nebular gas excitation in LLAGN is in agreement with photo-ionisation from inverse Compton radiation (αx ∼ −0.7), which would dominate the nuclear continuum shortwards of ∼3000 Å, although a possible contribution from low-velocity shocks (< 500 km s−1) to the line excitation cannot be discarded. No sign of a standard hot accretion disc is seen in our sample of LLAGN; nevertheless, a weak cold disc (< 3000 K) is detected at the nucleus of the Sombrero galaxy, though its contribution to the nebular gas excitation is negligible. Our results suggest that the continuum emission in LLAGN is dominated at all wavelengths by undeveloped jets, powered on average by a thermalised particle distribution with high energies. This is in agreement with their compact morphology and their high turnover frequencies. This behaviour is similar to that observed in peaked-spectrum radio sources and in compact jets of quiescent black hole X-ray binaries. Nevertheless, the presence of extended jet emission at kiloparsec scales for some of the objects in the sample is indicative of past jet activity, suggesting that these nuclei may undergo a rejuvenation event after a more active phase that produced their extended jets. These results imply that the dominant channel for energy release in LLAGN is mainly kinetic via the jet, rather than radiative. This has important implications in the context of galaxy evolution, since LLAGN probably represent a major but underestimated source of kinetic feedback in galaxies.
The flux distributions of the nine LLAGN in the sample are only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/670/A22
Resistive-strip Micromegas (MICRO-MEsh GAseous Structure) detectors provide high spatial resolution for the reconstruction of minimum ionizing particles (MIPs), such as muons, even at square-meter sizes. Micromegas detectors consist of three parallel planar structures: a cathode, a grounded mesh, and a segmented anode. Square-meter sizes challenge the high-voltage (HV) stability during operation, especially with the frequently used gas mixture Ar:CO2 (93:7 vol%), which has a low quencher content. To improve the HV stability and to enhance discharge quenching, different gas mixtures have been investigated. A very promising one has a 2% admixture of isobutane, forming the ternary gas Ar:CO2:iC4H10 (93:5:2 vol%). Long-term irradiation studies of both gas mixtures, interrupted by cosmic-muon tracking-efficiency measurements, have been performed by irradiation with neutrons and gammas from a 10 GBq Am-Be source over a period of two years. The comparison shows a gain increase under Ar:CO2:iC4H10 and considerably improved HV-stable operation of the detector. Each of the two gas mixtures is examined for performance deterioration, with a focus on pulse height and changes in efficiency.
The Min proteins constitute the best-studied model system for pattern formation in cell biology. We theoretically predict and experimentally show that the propagation direction of in vitro Min protein patterns can be controlled by a hydrodynamic flow of the bulk solution. We find downstream propagation of Min wave patterns for low MinE:MinD concentration ratios, upstream propagation for large ratios, but multistability of both propagation directions in between. Whereas downstream propagation can be described by a minimal model that disregards MinE conformational switching, upstream propagation can be reproduced by a reduced switch model, where increased MinD bulk concentrations on the upstream side promote protein attachment. Our study demonstrates that a differential flow, where bulk flow advects protein concentrations in the bulk, but not on the surface, can control surface-pattern propagation. This suggests that flow can be used to probe molecular features and to constrain mathematical models for pattern-forming systems.
The nuclear equation of state (EOS) is at the center of numerous theoretical and experimental efforts in nuclear physics. With advances in microscopic theories for nuclear interactions, the availability of experiments probing nuclear matter under conditions not reached before, endeavors to develop sophisticated and reliable transport simulations to interpret these experiments, and the advent of multi-messenger astronomy, the next decade will bring new opportunities for determining the nuclear matter EOS, elucidating its dependence on density, temperature, and isospin asymmetry. Among controlled terrestrial experiments, collisions of heavy nuclei at intermediate beam energies (from a few tens of MeV/nucleon to about 25 GeV/nucleon in the fixed-target frame) probe the widest ranges of baryon density and temperature, enabling studies of nuclear matter from a few tenths to about 5 times the nuclear saturation density and for temperatures from a few to well above a hundred MeV, respectively. Collisions of neutron-rich isotopes further bring the opportunity to probe effects due to the isospin asymmetry. However, capitalizing on the enormous scientific effort aimed at uncovering the dense nuclear matter EOS, both at RHIC and at FRIB as well as at other international facilities, depends on the continued development of state-of-the-art hadronic transport simulations. This white paper highlights the role that heavy-ion collision experiments and hadronic transport simulations play in understanding strong interactions in dense nuclear matter, with an emphasis on how these efforts can be used together with microscopic approaches and neutron star studies to uncover the nuclear EOS.
We perform an effective field theory analysis to correlate the charged lepton flavor violating processes ℓi→ℓjγγ and ℓi→ℓjγ. Using the current upper bounds on the rate for ℓi→ℓjγ, we derive model-independent upper limits on the rates for ℓi→ℓjγγ. Our indirect limits are about three orders of magnitude stronger than the direct bounds from current searches for μ→eγγ, and four orders of magnitude better than current bounds for τ→ℓγγ. We also stress the relevance of Belle II or a Super Tau Charm Facility for discovering the rare decay τ→ℓγγ.
In some scenarios, the dark matter particle predominantly scatters inelastically with the target, producing a heavier neutral particle in the final state. In this class of scenarios, the reach in parameter space of direct detection experiments is limited by the velocity of the dark matter particle, usually taken as the escape velocity from the Milky Way. On the other hand, it has been argued that a fraction of the dark matter particles in the Solar System could be bound to the envelope of the Local Group or to the Virgo Supercluster, and not to our Galaxy, and therefore could carry velocities larger than the escape velocity from the Milky Way. In this paper we estimate the enhancement in sensitivity of current direct detection experiments to inelastic dark matter scatterings with nucleons or electrons due to the non-galactic diffuse components, and we discuss the implications for some well motivated models.
In this series of papers we present an emulator-based halo model for the non-linear clustering of galaxies in modified gravity cosmologies. In the first paper, we present emulators for the following halo properties: the halo mass function, concentration-mass relation and halo-matter cross-correlation function. The emulators are trained on data extracted from the \textsc{FORGE} and \textsc{BRIDGE} suites of $N$-body simulations, respectively for two modified gravity (MG) theories: $f(R)$ gravity and the DGP model, varying three standard cosmological parameters $\Omega_{\mathrm{m0}}, H_0, \sigma_8$, and one MG parameter, either $\bar{f}_{R0}$ or $r_{\mathrm{c}}$. Our halo property emulators achieve an accuracy of $\lesssim 1\%$ on independent test data sets. We demonstrate that the emulators can be combined with a galaxy-halo connection prescription to accurately predict the galaxy-galaxy and galaxy-matter correlation functions using the halo model framework.
Study Analysis Group 21 (SAG21) of NASA's Exoplanet Exploration Program Analysis Group was organized to study the effect of stellar contamination on space-based transmission spectroscopy, a method for studying exoplanetary atmospheres by measuring the wavelength-dependent radius of a planet as it transits its star. Transmission spectroscopy relies on a precise understanding of the spectrum of the star being occulted. However, stars are not homogeneous, constant light sources but have temporally evolving photospheres and chromospheres with inhomogeneities like spots, faculae, plages, granules, and flares. This SAG brought together an interdisciplinary team of more than 100 scientists, with observers and theorists from the heliophysics, stellar astrophysics, planetary science, and exoplanetary atmosphere research communities, to study the current research needs that can be addressed in this context to make the most of transit studies from current NASA facilities like Hubble Space Telescope and JWST. The analysis produced 14 findings, which fall into three science themes encompassing (i) how the Sun is used as our best laboratory to calibrate our understanding of stellar heterogeneities ('The Sun as the Stellar Benchmark'), (ii) how stars other than the Sun extend our knowledge of heterogeneities ('Surface Heterogeneities of Other Stars'), and (iii) how to incorporate information gathered for the Sun and other stars into transit studies ('Mapping Stellar Knowledge to Transit Studies'). In this invited review, we largely reproduce the final report of SAG21 as a contribution to the peer-reviewed literature.
MOdified Newtonian Dynamics (MOND) is an alternative to the standard Cold Dark Matter (CDM) paradigm which proposes an alteration of Newton's laws of motion at low accelerations, characterized by a universal acceleration scale a0. It attempts to explain observations of galactic rotation curves and predicts a specific scaling relation between the baryonic and total acceleration in galaxies, referred to as the Radial Acceleration Relation (RAR), which can be equivalently formulated as a Mass Discrepancy Acceleration Relation (MDAR). The appearance of these relations in observational data such as SPARC has led to investigations into the existence of similar relations in cosmological simulations using the standard ΛCDM model. Here, we report the existence of an RAR and MDAR similar to those predicted by MOND in ΛCDM, using a large sample of galaxies extracted from a cosmological, hydrodynamical simulation (Magneticum). Furthermore, by using galaxies in Magneticum at different redshifts, a prediction for the evolution of the inferred acceleration parameter a0 with cosmic time is derived by fitting a MOND force law to these galaxies. In Magneticum, the best fit for a0 is found to increase by a factor ≃3 from redshift z = 0 to z = 2.3. This offers a powerful test from cosmological simulations to distinguish between MOND and ΛCDM observationally.
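A widely used parametrisation of the RAR is the fitting function of McGaugh, Lelli & Schombert (2016), shown here purely as an illustration of the two limiting regimes; the paper's own fits to Magneticum galaxies may use a different form.

```python
import numpy as np

A0 = 1.2e-10  # m s^-2, canonical value of the acceleration scale at z = 0

def g_obs(g_bar, a0=A0):
    # RAR fitting function: total acceleration as a function of the baryonic one.
    # High g_bar: g_obs -> g_bar (Newtonian limit).
    # Low g_bar:  g_obs -> sqrt(g_bar * a0) (deep-MOND limit).
    return g_bar / (1.0 - np.exp(-np.sqrt(g_bar / a0)))

g_newt = g_obs(1e-7)    # high-acceleration regime, essentially Newtonian
g_mond = g_obs(1e-13)   # deep-MOND regime, mass discrepancy appears
```

The redshift evolution discussed above then amounts to refitting a0 in such a relation at each cosmic epoch.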
The next-generation Event Horizon Telescope (ngEHT) will be a significant enhancement of the Event Horizon Telescope (EHT) array, with ∼10 new antennas and instrumental upgrades of existing antennas. The increased uv-coverage, sensitivity, and frequency coverage allow a wide range of new science opportunities to be explored. The ngEHT Analysis Challenges have been launched to inform the development of the ngEHT array design, science objectives, and analysis pathways. For each challenge, synthetic EHT and ngEHT datasets are generated from theoretical source models and released to the challenge participants, who analyze the datasets using image reconstruction and other methods. The submitted analysis results are evaluated with quantitative metrics. In this work, we report on the first two ngEHT Analysis Challenges. These have focused on static and dynamical models of M87* and Sgr A* and shown that high-quality movies of the extended jet structure of M87* and near-horizon hourly timescale variability of Sgr A* can be reconstructed by the reference ngEHT array in realistic observing conditions using current analysis algorithms. We identify areas where there is still room for improvement of these algorithms and analysis strategies. Other science cases and arrays will be explored in future challenges.
Current best limits on the 21 cm signal during reionization are provided at large scales (≳100 Mpc). To model these scales, enormous simulation volumes are required, which are computationally expensive. We find that the primary source of uncertainty at these large scales is sample variance, which determines the minimum size of simulations required to analyse current and upcoming observations. In large-scale structure simulations, the method of `fixing' the initial conditions (ICs) to exactly follow the initial power spectrum and `pairing' two simulations with exactly out-of-phase ICs has been shown to significantly reduce sample variance. Here we apply this `fixing and pairing' (F&P) approach to reionization simulations, whose clustering signal originates from both density fluctuations and reionization bubbles. Using a semi-numerical code, we show that with the traditional method, simulation boxes of L ≃ 500 (300) Mpc are required to model the large-scale clustering signal at k = 0.1 Mpc−1 with a precision of 5 (10)%. Using F&P, the simulation box size can be reduced by a factor of 2 to obtain the same precision level. We conclude that the computing costs can be reduced by at least a factor of 4 when using the F&P approach.
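The `fixing and pairing' construction is simple to state in Fourier space: fix the mode amplitudes exactly to |δk| = √P(k) and draw only random phases, then pair each realization with one whose phases are all shifted by π (i.e. δk → −δk). A minimal 1-D numpy sketch with a toy power spectrum (illustrative only, not the semi-numerical code used in the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64
k = 2 * np.pi * np.fft.rfftfreq(N)   # wavenumbers of a 1-D periodic grid
P = np.zeros_like(k)
P[1:] = k[1:] ** -1.0                # toy power spectrum (mean mode set to zero)

# 'Fixed' ICs: amplitudes are set exactly to sqrt(P); only phases are random.
phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
delta_k = np.sqrt(P) * np.exp(1j * phases)

# 'Paired' ICs: every phase shifted by pi, equivalent to delta_k -> -delta_k.
field = np.fft.irfft(delta_k, n=N)
field_paired = np.fft.irfft(-delta_k, n=N)

# The fixed realization reproduces the input power spectrum exactly,
# removing the leading source of sample variance:
measured_P = np.abs(delta_k) ** 2
```

Averaging summary statistics over the pair cancels further phase-correlated noise, which is why an F&P pair can stand in for a much larger conventional box.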
The reservoir of molecular gas (H2) represents the fuel for the star formation (SF) of a galaxy. Connecting the star formation rate (SFR) to the available H2 is key to accurately model SF in cosmological simulations of galaxy formation. We investigate how modifying the underlying modelling of H2 and the description of stellar feedback in low-metallicity environments (LMF, i.e. low-metallicity stellar feedback) in cosmological zoomed-in simulations of a Milky Way-size halo influences the formation history of the forming spiral galaxy and its final properties. We exploit two different models to compute the molecular fraction of cold gas ($f_{\rm H_{2}}$): (i) the theoretical model by Krumholz et al. (2009b) and (ii) the phenomenological prescription by Blitz and Rosolowsky (2006). We find that the model adopted to estimate $f_{\rm H_{2}}$ plays a key role in determining final properties and in shaping the morphology of the galaxy. The clumpier interstellar medium (ISM) and the more complex H2 distribution that the Krumholz et al. model predicts result in better agreement with observations of nearby disc galaxies. This shows how crucial it is to link the SFR to the physical properties of the star-forming, molecular ISM. The additional source of energy that LMF supplies in a metal-poor ISM is key in controlling SF at high redshift and in regulating the reservoir of SF across cosmic time. Not only is LMF able to regulate the cooling properties of the ISM, but it also reduces the stellar mass of the galaxy bulge. These findings can foster the improvement of the numerical modelling of SF in cosmological simulations.
The CRESST experiment employs cryogenic calorimeters for the sensitive measurement of nuclear recoils induced by dark matter particles. The recorded signals need to undergo a careful cleaning process to avoid wrongly reconstructed recoil energies caused by pile-up and read-out artefacts. We frame this process as a time series classification task and propose to automate it with neural networks. With a data set of over one million labeled records from 68 detectors, recorded between 2013 and 2019 by CRESST, we test the capability of four commonly used neural network architectures to learn the data cleaning task. Our best performing model achieves a balanced accuracy of 0.932 on our test set. We show on an exemplary detector that about half of the wrongly predicted events are in fact wrongly labeled events, and a large share of the remaining ones have a context-dependent ground truth. We furthermore evaluate the recall and selectivity of our classifiers with simulated data. The results confirm that the trained classifiers are well suited for the data cleaning task.
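Balanced accuracy, the metric quoted above, is the mean of the per-class recalls, which keeps performance on a rare artefact class from being swamped by the majority class. A minimal sketch with toy labels (not CRESST data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls; robust to class imbalance, which matters
    # when artefact events are much rarer than clean ones.
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Imbalanced toy labels: 8 'clean' (0) events, 2 'artefact' (1) events.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])
score = balanced_accuracy(y_true, y_pred)  # (7/8 + 1/2) / 2 = 0.6875
```

A plain accuracy on the same labels would read 0.8, flattering the classifier because the majority class dominates; balanced accuracy exposes the weaker recall on the rare class.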
One of the main scientific goals of the TESS mission is the discovery of transiting small planets around the closest and brightest stars in the sky. Here, using data from the CARMENES, MAROON-X, and HIRES spectrographs, together with TESS, we report the discovery and mass determination of a planetary system around the M1.5 V star GJ 806 (TOI-4481). GJ 806 is a bright (V=10.8 mag, J=7.3 mag) and nearby (d=12 pc) M dwarf that hosts at least two planets. The innermost planet, GJ 806 b, is transiting and has an ultra-short orbital period of 0.93 d, a radius of 1.331+-0.023 Re, a mass of 1.90+-0.17 Me, a mean density of 4.40+-0.45 g/cm3, and an equilibrium temperature of 940+-10 K. We detect a second, non-transiting super-Earth in the system, GJ 806 c, with an orbital period of 6.6 d, a minimum mass of 5.80+-0.30 Me, and an equilibrium temperature of 490+-5 K. The radial velocity data also show evidence for a third periodicity at 13.6 d, although the current dataset does not provide sufficient evidence to unambiguously distinguish between a third super-Earth-mass planet (Msin(i)=8.50+-0.45 Me) and stellar activity. Additionally, we report one transit observation of GJ 806 b taken with CARMENES in search of a possible extended atmosphere of H or He, but we can only place upper limits on its existence. This is not surprising, as our evolutionary models support the idea that any primordial H/He atmosphere that GJ 806 b might have had would long since have been lost. However, GJ 806 b's bulk density makes it likely that the planet hosts some type of volatile atmosphere. In fact, with a transmission spectroscopy metric (TSM) of 44 and an emission spectroscopy metric (ESM) of 24, GJ 806 b is the third-ranked terrestrial planet around an M dwarf suitable for transmission spectroscopy studies, and the most promising terrestrial planet for emission spectroscopy studies.
We present a detailed overview of the science goals and predictions for the Prime-Cam direct-detection camera-spectrometer being constructed by the CCAT-prime collaboration for dedicated use on the Fred Young Submillimeter Telescope (FYST). The FYST is a wide-field, 6 m aperture submillimeter telescope being built (first light in late 2023) by an international consortium of institutions led by Cornell University and sited at more than 5600 m on Cerro Chajnantor in northern Chile. Prime-Cam is one of two instruments planned for FYST and will provide unprecedented spectroscopic and broadband measurement capabilities to address important astrophysical questions ranging from Big Bang cosmology through reionization and the formation of the first galaxies to star formation within our own Milky Way. Prime-Cam on the FYST will have a mapping speed that is over 10 times greater than existing and near-term facilities for high-redshift science and broadband polarimetric imaging at frequencies above 300 GHz. We describe details of the science program enabled by this system and our preliminary survey strategies.
Joint analyses of cross-correlations between measurements of galaxy positions, galaxy lensing, and lensing of the cosmic microwave background (CMB) offer powerful constraints on the large-scale structure of the Universe. In a forthcoming analysis, we will present cosmological constraints from the analysis of such cross-correlations measured using Year 3 data from the Dark Energy Survey (DES), and CMB data from the South Pole Telescope (SPT) and Planck. Here we present two key ingredients of this analysis: (1) an improved CMB lensing map in the SPT-SZ survey footprint and (2) the analysis methodology that will be used to extract cosmological information from the cross-correlation measurements. Relative to previous lensing maps made from the same CMB observations, we have implemented techniques to remove contamination from the thermal Sunyaev-Zel'dovich effect, enabling the extraction of cosmological information from smaller angular scales of the cross-correlation measurements than in previous analyses with DES Year 1 data. We describe our model for the cross-correlations between these maps and DES data, and validate our modeling choices to demonstrate the robustness of our analysis. We then forecast the expected cosmological constraints from the galaxy survey-CMB lensing auto and cross-correlations. We find that the galaxy-CMB lensing and galaxy shear-CMB lensing correlations will on their own provide a constraint on S8 = σ8(Ωm/0.3)^0.5 at the few percent level, providing a powerful consistency check for the DES-only constraints. We explore scenarios where external priors on shear calibration are removed, finding that the joint analysis of CMB lensing cross-correlations can provide constraints on the shear calibration amplitude at the 5% to 10% level.
For Majorana fermions, the anapole moment is the only allowed electromagnetic multipole moment. In this work we calculate the anapole moment induced at one loop by the Yukawa and gauge interactions of a Majorana fermion, using the pinch technique to ensure the finiteness and gauge invariance of the result. As an archetypal example of a Majorana fermion, we calculate the anapole moment of the lightest neutralino in the Minimal Supersymmetric Standard Model, specifically in the bino, wino, and higgsino limits. Finally, we briefly discuss the implications of the anapole moment for the direct detection of dark matter in the form of Majorana fermions.
We investigate the asymptotia of decelerating and spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) spacetimes at future null infinity. We find that the asymptotic algebra of diffeomorphisms can be enlarged to the recently discovered Weyl-Bondi-van der Burg-Metzner-Sachs (BMS) algebra for asymptotically flat spacetimes by relaxing the boundary conditions. This algebra remains undeformed in the cosmological setting contrary to previous extensions of the BMS algebra. We then study the equations of motion for asymptotically FLRW spacetimes with finite fluxes and show that the dynamics is fully constrained by the energy-momentum tensor of the source. Finally, we propose an expression for the charges that are associated with the cosmological supertranslations and whose evolution equation features a novel contribution arising from the Hubble-Lemaître flow.
Context. X-ray observations of galaxies with high spatial resolution instruments such as Chandra have revealed that major contributions to their diffuse emission originate from X-ray-bright point sources in the galactic stellar field. It has been established that these point sources, called X-ray binaries, are accreting compact objects with stellar donors in a binary configuration. They are classified according to the predominant accretion process: wind-fed in the case of high-mass donors and Roche-lobe mass transfer in the case of low-mass donors. Observationally, it is challenging to reliably disentangle these two populations from each other because of their similar spectra.
Aims: We provide a numerical framework with which spatially and spectrally accurate representations of X-ray binary populations can be studied from hydrodynamical cosmological simulations. We construct average spectra, accounting for a hot gas component, and verify the emergence of observed scaling relations between galaxy-wide X-ray luminosity (LX) and stellar mass (M*) and between LX and the star-formation rate (SFR).
Methods: Using simulated galaxy halos extracted from the (48 h−1 cMpc)3 volume of the Magneticum Pathfinder cosmological simulations at z = 0.07, we generate mock spectra with the X-ray photon-simulator PHOX. We extend the PHOX code to account for the stellar component in the simulation and study the resulting contribution in composite galactic spectra.
Results: Well-known X-ray binary scaling relations with galactic SFR and M* emerge self-consistently, verifying our numerical approach. Average X-ray luminosity functions are perfectly reproduced up to the one-photon luminosity limit. Comparing our resulting LX − SFR − M* relation for X-ray binaries with recent observations of field galaxies in the Virgo galaxy cluster, we find significant overlap. Invoking a metallicity-dependent model for high-mass X-ray binaries yields an anticorrelation between mass-weighted stellar metallicity and SFR-normalized luminosity. The spatial distribution of high-mass X-ray binaries coincides with star-formation regions of simulated galaxies, while low-mass X-ray binaries follow the stellar mass surface density. X-ray binary emission is the dominant contribution in the hard X-ray band (2-10 keV) in the absence of an actively accreting central super-massive black hole, and it provides a ∼50% contribution in the soft X-ray band (0.5-2 keV), rivaling the hot gas component.
Conclusions: We conclude that our modeling remains consistent with observations despite the uncertainties connected to our approach. The predictive power and easily extendable framework hold great value for future investigations of galactic X-ray spectra.
Cross-correlations of galaxy positions and galaxy shears with maps of gravitational lensing of the cosmic microwave background (CMB) are sensitive to the distribution of large-scale structure in the Universe. Such cross-correlations are also expected to be immune to some of the systematic effects that complicate correlation measurements internal to galaxy surveys. We present measurements and modeling of the cross-correlations between galaxy positions and galaxy lensing measured in the first three years of data from the Dark Energy Survey with CMB lensing maps derived from a combination of data from the 2500 deg2 SPT-SZ survey conducted with the South Pole Telescope and full-sky data from the Planck satellite. The CMB lensing maps used in this analysis have been constructed in a way that minimizes biases from the thermal Sunyaev-Zel'dovich effect, making them well suited for cross-correlation studies. The total signal-to-noise of the cross-correlation measurements is 23.9 (25.7) when using a choice of angular scales optimized for a linear (nonlinear) galaxy bias model. We use the cross-correlation measurements to obtain constraints on cosmological parameters. For our fiducial galaxy sample, which consists of four bins of magnitude-selected galaxies, we find constraints of Ωm = 0.272^{+0.032}_{-0.052} and S8 ≡ σ8(Ωm/0.3)^0.5 = 0.736^{+0.032}_{-0.028} (Ωm = 0.245^{+0.026}_{-0.044} and S8 = 0.734^{+0.035}_{-0.028}) when assuming linear (nonlinear) galaxy bias in our modeling. Considering only the cross-correlation of galaxy shear with CMB lensing, we find Ωm = 0.270^{+0.043}_{-0.061} and S8 = 0.740^{+0.034}_{-0.029}. Our constraints on S8 are consistent with recent cosmic shear measurements, but lower than the values preferred by primary CMB measurements from Planck.
Aims. We want to find the distribution of initial conditions that best reproduces disc observations at the population level. Methods. We first ran a parameter study using a 1D model that includes the viscous evolution of a gas disc, dust, and pebbles, coupled with an emission model to compute the millimetre flux observable with ALMA. This was used to train a machine learning surrogate model that can compute the relevant quantity for comparison with observations in seconds. This surrogate model was used to perform parameter studies and to generate synthetic disc populations. Results. Performing a parameter study, we find that internal photoevaporation leads to a lower dependency of disc lifetime on stellar mass than external photoevaporation. This dependence should be investigated in the future. Performing population synthesis, we find that under the combined losses of internal and external photoevaporation, discs are too short lived. Conclusions. To match observational constraints, future models of disc evolution need to include one or a combination of the following processes: infall of material to replenish the discs, shielding of the disc from internal photoevaporation due to magnetically driven disc winds, and extinction of external high-energy radiation. Nevertheless, disc properties in low-external-photoevaporation regions can be reproduced by having more massive and compact discs. Here, the optimum values of the $\alpha$ viscosity parameter lie between $3\times10^{-4}$ and $10^{-3}$, with internal photoevaporation being the main mode of disc dispersal.
Recent experimental results in B physics from Belle, BABAR, and LHCb suggest new physics (NP) in the weak $b \to c$ charged-current processes. Here we focus specifically on the decay modes $\bar{B}^0 \to D^{*+} \ell^- \bar{\nu}$ with $\ell = e$ and $\mu$. The world averages of the ratios $R_D$ and $R_{D^*}$ currently differ from the Standard Model (SM) predictions by 3.4$\sigma$, while recently a new anomaly has been observed in the measurement of the forward-backward asymmetry, $A_{FB}$, in $\bar{B}^0 \to D^{*+} \mu^- \bar{\nu}$ decay. It is found that $\Delta A_{FB} = A_{FB}(B \to D^* \mu \nu) - A_{FB}(B \to D^* e \nu)$ is around 4.1$\sigma$ away from the SM prediction in an analysis of 2019 Belle data. In this work we explore possible solutions to the $\Delta A_{FB}$ anomaly and point out correlated NP signals in other angular observables. These correlations between angular observables must be present in the case of physics beyond the Standard Model. We stress the importance of $\Delta$-type observables, obtained by taking the difference of an observable between the muon and electron modes. These quantities cancel form-factor uncertainties in the SM and allow for clean tests of NP. These intriguing results also suggest an urgent need for improved simulation and analysis techniques in $\bar{B}^0 \to D^{*+} \ell^- \bar{\nu}$ decays. Here we also describe a new Monte Carlo event generator tool based on EVTGEN that we developed to allow simulation of the NP signatures in $\bar{B}^0 \to D^{*+} \ell^- \bar{\nu}$, which arise due to the interference between the SM and NP amplitudes. We then discuss prospects for improved observables sensitive to NP couplings with 1, 5, 50, and 250 ab$^{-1}$ of Belle II data, which seem to be ideally suited for this class of measurements.
We study the relation between the metallicities of ionised and neutral gas in star-forming galaxies at z=1-3 using the EAGLE cosmological, hydrodynamical simulations. This is done by constructing a dense grid of sightlines through the simulated galaxies and obtaining the star formation rate- and HI column density-weighted metallicities, Z_{SFR} and Z_{HI}, for each sightline as proxies for the metallicities of ionised and neutral gas, respectively. We find Z_{SFR} > Z_{HI} for almost all sightlines, with their difference generally increasing with decreasing metallicity. The stellar masses of galaxies do not have a significant effect on this trend, but the positions of the sightlines with respect to the galaxy centres play an important role: the difference between the two metallicities decreases when moving towards the galaxy centres, and saturates to a minimum value in the central regions of galaxies, irrespective of redshift and stellar mass. This implies that the mixing of the two gas phases is most efficient in the central regions of galaxies where sightlines generally have high column densities of HI. However, a high HI column density alone does not guarantee a small difference between the two metallicities. In galaxy outskirts, the inefficiency of the mixing of star-forming gas with HI seems to dominate over the dilution of heavy elements in HI through mixing with the pristine gas. We find good agreement between the limited amount of available observational data and the Z_{SFR}-Z_{HI} relation predicted by the EAGLE simulations, but more data is required for stringent tests.
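The sightline metallicity proxies described above are weighted averages over cells along each line of sight; a minimal sketch with hypothetical per-cell values (the arrays are illustrative, not EAGLE data):

```python
import numpy as np

def weighted_metallicity(Z_cells, weights):
    """Weight-averaged metallicity along a sightline (e.g. SFR- or N_HI-weighted)."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(np.asarray(Z_cells) * w) / np.sum(w))

# Hypothetical cells along one sightline: metallicity (solar units),
# star formation rate, and HI column density contribution per cell.
Z   = np.array([0.1, 0.5, 1.0])
sfr = np.array([0.0, 0.2, 1.0])   # star formation concentrated in the enriched cell
nhi = np.array([1.0, 1.0, 1.0])   # HI spread evenly over all cells

Z_sfr = weighted_metallicity(Z, sfr)  # biased towards enriched, star-forming gas
Z_hi  = weighted_metallicity(Z, nhi)  # samples the diffuse neutral gas as well
print(Z_sfr > Z_hi)  # True, the qualitative trend reported above
```

The sketch illustrates why Z_SFR > Z_HI arises naturally when star formation traces the most enriched gas while HI also samples less-mixed material.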
We update the Standard Model (SM) predictions for B-meson lifetimes within the heavy quark expansion (HQE). Including for the first time the contribution of the Darwin operator, $SU(3)_F$-breaking corrections to the matrix elements of dimension-six four-quark operators, and the so-called eye contractions, we obtain for the total widths $\Gamma(B^+) = 0.58^{+0.11}_{-0.07}\,\mathrm{ps}^{-1}$, $\Gamma(B_d) = 0.63^{+0.11}_{-0.07}\,\mathrm{ps}^{-1}$, $\Gamma(B_s) = 0.63^{+0.11}_{-0.07}\,\mathrm{ps}^{-1}$, and for the lifetime ratios $\tau(B^+)/\tau(B_d) = 1.086 \pm 0.022$ and $\tau(B_s)/\tau(B_d) = 1.003 \pm 0.006$ ($1.028 \pm 0.011$). The two values for the last observable arise from using two different sets of input for the non-perturbative parameters $\mu_\pi^2(B_d)$, $\mu_G^2(B_d)$, and $\rho_D^3(B_d)$, as well as from different estimates of the $SU(3)_F$ breaking in these parameters. Our results are overall in very good agreement with the corresponding experimental data; however, a tension seems to emerge in $\tau(B_s)/\tau(B_d)$ when considering the second set of input parameters. Specifically, this observable is extremely sensitive to the size of the parameter $\rho_D^3(B_d)$ and of the $SU(3)_F$-breaking effects in $\mu_\pi^2$, $\mu_G^2$, and $\rho_D^3$; hence, it is of utmost importance to better constrain all these parameters. In this respect, an extraction of $\mu_\pi^2(B_s)$, $\mu_G^2(B_s)$, and $\rho_D^3(B_s)$ from future experimental data on inclusive semileptonic Bs-meson decays or from direct non-perturbative calculations, as well as more insight into the value of $\rho_D^3(B)$ extracted from fits to inclusive semileptonic B decays, would be very helpful in reducing the corresponding theory uncertainties.
Gravitational time delays provide a powerful one-step measurement of H0, independent of all other probes. One key ingredient of time-delay cosmography is a set of high-accuracy lens models, which are currently expensive to obtain, both in terms of computing and investigator time (10^5-10^6 CPU hours and ~0.5-1 yr, respectively). Major improvements in modelling speed are therefore necessary to exploit the large number of lenses that are forecast to be discovered over the current decade. In order to bypass this roadblock, we develop an automated modelling pipeline and apply it to a sample of 31 lens systems observed by the Hubble Space Telescope in multiple bands. Our automated pipeline can derive models for 30/31 lenses with a few hours of human time and <100 CPU hours of computing time for a typical system. For each lens, we provide measurements of key parameters and predictions of magnification as well as time delays for the multiple images. We characterize the cosmography-readiness of our models using the stability of differences in the Fermat potential (proportional to time delay) with respect to modelling choices. We find that for 10/30 lenses, our models are cosmography grade or nearly cosmography grade (<3 per cent and 3-5 per cent variations). For 6/30 lenses, the models are close to cosmography grade (5-10 per cent). These results utilize informative priors and will need to be confirmed by further analysis. However, they are also likely to improve by extending the pipeline modelling sequence and options. In conclusion, we show that uniform cosmography-grade modelling of large strong lens samples is within reach.
Strong gravitational lensing and microlensing of supernovae (SNe) are emerging as a new probe of cosmology and astrophysics. We provide an overview of this nascent research field, starting with a summary of the first discoveries of strongly lensed SNe. We describe the use of the time delays between multiple SN images as a way to measure cosmological distances and thus constrain cosmological parameters, particularly the Hubble constant, whose value is currently under heated debate. New methods for measuring the time delays in lensed SNe have been developed, and the sample of lensed SNe from the upcoming Rubin Observatory Legacy Survey of Space and Time (LSST) is expected to provide competitive cosmological constraints. Lensed SNe are also powerful astrophysical probes. We review the use of lensed SNe to constrain SN progenitors, acquire high-z SN spectra through lensing magnifications, infer SN sizes via microlensing, and measure properties of dust in galaxies. The current challenge in the field is the rarity of lensed SNe and the difficulty of finding them. We describe various methods and ongoing efforts to find these spectacular explosions, forecast the properties of the expected sample of lensed SNe from upcoming surveys, particularly the LSST, and summarize the observational follow-up requirements to enable the various scientific studies. We anticipate the upcoming years to be exciting, with a boom in lensed SN discoveries.
Context. Weak lensing and clustering statistics beyond two-point functions can capture non-Gaussian information about the matter density field, thereby improving the constraints on cosmological parameters relative to the mainstream methods based on correlation functions and power spectra.
Aims: This paper presents a cosmological analysis of the fourth data release of the Kilo Degree Survey based on the density split statistics, which measure the mean shear profiles around regions classified according to their foreground density. The foreground density map is constructed from a bright galaxy sample, which we further split into red and blue samples, allowing us to probe their respective connections to the underlying dark matter density.
Methods: We used a state-of-the-art model of the density split statistics and validated its robustness against mock data infused with known systematic effects such as intrinsic galaxy alignment and baryonic feedback.
Results: After marginalising over the photometric redshift uncertainty and the residual shear calibration bias, we measured for the full KiDS-bright sample a structure growth parameter of S8 ≡ σ8(Ωm/0.3)^0.5 = 0.73^{+0.03}_{-0.02}, competitive with and consistent with two-point cosmic shear results, a matter density of Ωm = 0.27 ± 0.02, and a constant galaxy bias of b = 1.37 ± 0.10.
We introduce an observable relevant for the determination of the $W$-boson mass $m_W$ at hadron colliders. This observable is defined as an asymmetry around the Jacobian peak of the charged-lepton transverse-momentum distribution in the charged-current Drell-Yan process. We discuss the observable's theoretical prediction, presenting results at different orders in QCD, and showing its perturbative stability. Its definition as a single scalar number and its linear sensitivity to $m_W$ allow a clean extraction of the latter and a straightforward discussion of the associated theoretical systematics: a determination of $m_W$ with a perturbative QCD uncertainty at the $\pm 5$ MeV level is viable, with the advantage of solely relying on charged-current Drell-Yan information. The observable displays desirable properties also from the experimental viewpoint, especially for the unfolding of detector effects. We show that, with a conservative estimate of systematic errors, it can lead to an experimental determination of $m_W$ at the level of $\pm 15$ MeV at the LHC.
In this paper, we investigate the asymptotic structure of gauge theories in decelerating and spatially flat Friedmann-Lemaître-Robertson-Walker universes. Firstly, we thoroughly explore the asymptotic symmetries of electrodynamics in this background, which reveals a major inconsistency already present in the flat case. Taking advantage of this treatment, we derive the associated memory effects, discussing their regime of validity and differences with respect to their flat counterparts. Next, we extend our analysis to non-Abelian Yang-Mills, coupling it dynamically and simultaneously to a Dirac spinor and a complex scalar field. Within this novel setting, we examine the possibility of constructing Poisson superbrackets based on the covariant phase space formalism.
We present cosmological constraints from the analysis of two-point correlation functions between galaxy positions and galaxy lensing measured in Dark Energy Survey (DES) Year 3 data and measurements of cosmic microwave background (CMB) lensing from the South Pole Telescope (SPT) and Planck. When jointly analyzing the DES-only two-point functions and the DES cross-correlations with SPT+Planck CMB lensing, we find Ωm = 0.344 ± 0.030 and S8 ≡ σ8(Ωm/0.3)^0.5 = 0.773 ± 0.016, assuming ΛCDM. When additionally combining with measurements of the CMB lensing autospectrum, we find Ωm = 0.306^{+0.018}_{-0.021} and S8 = 0.792 ± 0.012. The high signal-to-noise of the CMB lensing cross-correlations enables several powerful consistency tests of these results, including comparisons with constraints derived from cross-correlations only, and comparisons designed to test the robustness of the galaxy lensing and clustering measurements from DES. Applying these tests to our measurements, we find no evidence of significant biases in the baseline cosmological constraints from the DES-only analyses or from the joint analyses with CMB lensing cross-correlations. However, the CMB lensing cross-correlations suggest possible problems with the correlation function measurements using alternative lens galaxy samples, in particular the REDMAGIC galaxies and high-redshift MAGLIM galaxies, consistent with the findings of previous studies. We use the CMB lensing cross-correlations to identify directions for further investigating these problems.
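The derived parameter S8 quoted in these constraints is a simple function of σ8 and Ωm; a minimal sketch of the relation (central values taken from the joint DES + SPT+Planck result above, used here only as a consistency check):

```python
import math

def s8(sigma8: float, omega_m: float) -> float:
    """S8 = sigma8 * (Omega_m / 0.3)^0.5."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# Invert the definition: Omega_m = 0.344 and S8 = 0.773 imply a value of sigma8,
# and plugging it back must recover S8.
sigma8_implied = 0.773 / math.sqrt(0.344 / 0.3)
print(round(s8(sigma8_implied, 0.344), 3))  # 0.773
```

S8 is used because it is the combination of σ8 and Ωm best constrained by lensing, which measures their product along this degeneracy direction.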
Thermal bombs are a widely used method to artificially trigger explosions of core-collapse supernovae (CCSNe) to determine their nucleosynthesis or ejecta and remnant properties. Recently, their use in spherically symmetric (1D) hydrodynamic simulations led to the result that 56,57Ni and 44Ti are massively underproduced compared to observational estimates for Supernova 1987A, if the explosions are slow, i.e. if the explosion mechanism of CCSNe releases the explosion energy on long time-scales. It was concluded that rapid explosions are required to match observed abundances, i.e. the explosion mechanism must provide the CCSN energy nearly instantaneously on time-scales of a few tens to of order 100 ms. This result, if valid, would disfavour the neutrino-heating mechanism, which releases the CCSN energy on time-scales of seconds. Here, we demonstrate by 1D hydrodynamic simulations and nucleosynthetic post-processing that these conclusions are a consequence of disregarding the initial collapse of the stellar core in the thermal-bomb modelling before the bomb releases the explosion energy. We demonstrate that the anticorrelation of 56Ni yield and energy-injection time-scale vanishes when the initial collapse is included, and that it can even be reversed, i.e. more 56Ni is made by slower explosions, when the collapse proceeds to small radii similar to those where neutrino heating takes place in CCSNe. We also show that the 56Ni production in thermal-bomb explosions is sensitive to the chosen mass cut, and that using a fixed mass layer or a fixed volume for the energy deposition causes only secondary differences. Moreover, we propose a most appropriate setup for thermal bombs.
Context. The space density of X-ray-luminous, blindly selected active galactic nuclei (AGN) traces the population of rapidly accreting super-massive black holes through cosmic time. It is encoded in the X-ray luminosity function, whose bright end remains poorly constrained in the first billion years after the Big Bang as X-ray surveys have thus far lacked the required cosmological volume. With the eROSITA Final Equatorial-Depth Survey (eFEDS), the largest contiguous and homogeneous X-ray survey to date, X-ray AGN population studies can now be extended to new regions of the luminosity-redshift space (L2 − 10 keV > 1045 erg s−1 and z > 6).
Aims: The current study aims at identifying luminous quasars at z > 5.7 among X-ray-selected sources in the eFEDS field in order to place a lower limit on black hole accretion well into the epoch of re-ionisation. A secondary goal is the characterisation of the physical properties of these extreme coronal emitters at high redshifts.
Methods: Cross-matching eFEDS catalogue sources to optical counterparts from the DESI Legacy Imaging Surveys, we identify a low-significance detection with eROSITA of a previously known, optically faint z = 6.56 quasar from the Subaru High-z Exploration of Low-luminosity Quasars (SHELLQs) survey. We obtained a pointed follow-up observation of the source with the Chandra X-ray telescope in order to confirm this detection. Using new near-infrared spectroscopy, we derived the physical properties of the super-massive black hole. Finally, we used this detection to infer a lower limit on the black hole accretion density rate at z > 6.
Results: The Chandra observation confirms the eFEDS source as the most distant blind X-ray detection to date. The derived X-ray luminosity is high with respect to the rest-frame optical emission of the quasar. With a narrow MgII line, low derived black hole mass, and high Eddington ratio, as well as its steep photon index, the source shows properties that are similar to local narrow-line Seyfert 1 galaxies, which are thought to be powered by young super-massive black holes. In combination with a previous high-redshift quasar detection in the field, we show that quasars with L2 − 10 keV > 1045 erg s−1 dominate accretion onto super-massive black holes at z ∼ 6.
We present a high-resolution kinematic study of the massive main-sequence star-forming galaxy (SFG) SDSS J090122.37+181432.3 (J0901) at z = 2.259, using ~0.″36 Atacama Large Millimeter/submillimeter Array CO(3-2) and ~0.″1-0.″5 SINFONI/VLT Hα observations. J0901 is a rare, strongly lensed but otherwise normal massive ( $\mathrm{log}({M}_{\star }/{M}_{\odot })\sim 11$ ) main-sequence SFG, offering a unique opportunity to study a typical massive SFG under the microscope of lensing. Through forward dynamical modeling incorporating lensing deflection, we fit the CO and Hα kinematics in the image plane out to about one disk effective radius (Re ~ 4 kpc) at an ~600 pc delensed physical resolution along the kinematic major axis. Our results show high intrinsic dispersions of the cold molecular and warm ionized gas (σ0,mol ~ 40 km s-1 and σ0,ion ~ 66 km s-1) that remain constant out to Re; a moderately low dark matter fraction (fDM ~ 0.3-0.4) within Re; and a centrally peaked Toomre Q parameter, agreeing well with the previously established σ0 versus z, fDM versus Σbaryon, and radial Q trends obtained from large samples of non-lensed main-sequence SFGs. Our data further reveal a high stellar mass concentration within ~1-2 kpc with little molecular gas, and a clumpy molecular gas ring-like structure at R ~ 2-4 kpc, in line with the inside-out quenching scenario. Our further analysis indicates that J0901 had assembled half of its stellar mass only ~400 Myr before its observed cosmic time, and that the cold gas ring and dense central stellar component are consistent with signposts of a recent wet compaction event of a highly turbulent disk found in recent simulations.
The large total infrared (TIR) luminosities (LTIR; ≳1012 L⊙) observed in z ~ 6 quasars are generally converted into high star-formation rates (SFRs; $\gtrsim\!{10}^2~{\rm M}_{\odot }\, {\rm yr}^{-1}$) of their host galaxies. However, these estimates rely on the assumption that dust heating is dominated by stellar radiation, neglecting the contribution from the central active galactic nucleus (AGN). We test the validity of this assumption by combining cosmological hydrodynamic simulations with radiative transfer calculations. We find that, when AGN radiation is included in the simulations, the mass (luminosity)-weighted dust temperature in the host galaxies increases from T ≈ 50 K (T ≈ 70 K) to T ≈ 80 K (T ≈ 200 K), suggesting that AGN effectively heats the bulk of dust in the host galaxy. We compute the AGN-host galaxy SFR from the synthetic spectral energy distribution by using standard SFR - LTIR relations, and compare the results with the 'true' values in the simulations. We find that the SFR is overestimated by a factor of ≈3 (≳10) for AGN bolometric luminosities of Lbol ≈ 1012 L⊙ (≳1013 L⊙), implying that the SFRs of z ~ 6 quasars can be overestimated by over an order of magnitude.
The dispersion of fast radio bursts (FRBs) is a measure of the large-scale electron distribution. It enables measurements of cosmological parameters, especially of the expansion rate and the cosmic baryon fraction. The number of events is expected to increase dramatically over the coming years, and of particular interest are bursts with identified host galaxy and therefore redshift information. In this paper, we explore the covariance matrix of the dispersion measure (DM) of FRBs induced by the large-scale structure, as bursts from a similar direction on the sky are correlated by long wavelength modes of the electron distribution. We derive analytical expressions for the covariance matrix and examine the impact on parameter estimation from the FRB dispersion measure - redshift relation. The covariance also contains additional information that is missed by analysing the events individually. For future samples containing over $\sim300$ FRBs with host identification over the full sky, the covariance needs to be taken into account for unbiased inference, and the effect increases dramatically for smaller patches of the sky.
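The mean cosmological part of the dispersion measure grows with redshift as DM(z) ∝ ∫ (1+z')/E(z') dz' for a flat FLRW background, which is what makes DM a distance proxy in the first place. A hedged numerical sketch (the background parameters are assumed fiducial values, not taken from this paper, and the proportionality constant is omitted):

```python
import math

def E(z: float, om: float = 0.3) -> float:
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(om * (1 + z)**3 + (1 - om))

def dm_integral(z: float, n: int = 10000) -> float:
    """Integral of (1+z')/E(z') from 0 to z via the trapezoidal rule."""
    zs = [z * i / n for i in range(n + 1)]
    f = [(1 + zz) / E(zz) for zz in zs]
    h = z / n
    return h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

# The mean extragalactic DM is proportional to this integral, which increases
# monotonically with redshift.
print(dm_integral(0.5) < dm_integral(1.0))  # True
```

The covariance studied in the paper describes correlated scatter of individual sightlines about this mean relation, sourced by long-wavelength modes of the electron distribution.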
Wide, deep, blind continuum surveys at submillimetre/millimetre (submm/mm) wavelengths are required to provide a full inventory of the dusty, distant Universe. However, conducting such surveys to the necessary depth, with sub-arcsec angular resolution, is prohibitively time-consuming, even for the most advanced submm/mm telescopes. Here, we report the most recent results from the ALMACAL project, which exploits the 'free' calibration data from the Atacama Large Millimetre/submillimetre Array (ALMA) to map the lines of sight towards and beyond the ALMA calibrators. ALMACAL has now covered 1001 calibrators, with a total sky coverage around 0.3 deg2, distributed across the sky accessible from the Atacama desert, and has accumulated more than 1000 h of integration. The depth reached by combining multiple visits to each field makes ALMACAL capable of searching for faint, dusty, star-forming galaxies (DSFGs), with detections at multiple frequencies to constrain the emission mechanism. Based on the most up-to-date ALMACAL data base, we report the detection of 186 DSFGs with flux densities down to S870 µm ~ 0.2 mJy, comparable with existing ALMA large surveys but less susceptible to cosmic variance. We report the number counts at five wavelengths between 870 μm and 3 mm, in ALMA bands 3, 4, 5, 6, and 7, providing a benchmark for models of galaxy formation and evolution. By integrating the observed number counts and the best-fitting functions, we also present the resolved fraction of the cosmic infrared background (CIB) and the CIB spectral shape. Combining existing surveys, ALMA has currently resolved about half of the CIB in the submm/mm regime.
In this conference paper, we consider effective field theories of non-relativistic dark matter particles interacting with a light force mediator in the early expanding universe. We present a general framework in which the relevant processes that may affect the dynamics during thermal freeze-out are accounted for in a systematic way. In the temperature regime where near-threshold effects, most notably the formation of bound states and Sommerfeld enhancement, have a large impact on the dark matter relic density, we scrutinize possible contributions from higher excited states and radiative corrections in the annihilations and decays of dark-matter pairs.
The details of the strategy adopted by the Borexino collaboration for successfully isolating the spectral components of the pp-chain neutrinos signal from residual backgrounds in the total energy spectrum will be presented.
ComPol is a proposed CubeSat mission dedicated to the long-term study of gamma-ray polarisation of astrophysical objects. Besides spectral and timing measurements, polarisation analysis can be a powerful tool in constraining current models of the geometry, magnetic field structure, and acceleration mechanisms of different astrophysical sources. The ComPol payload is a Compton telescope optimised for polarimetry and consists of a two-layer stacked detector configuration. The top layer, the scatterer, is a Silicon Drift Detector matrix developed by the Max Planck Institute for Physics and Politecnico di Milano. The second layer is a calorimeter consisting of a CeBr₃ scintillator read out by silicon photo-multipliers developed at CEA Saclay. This paper presents the results of the prototype calorimeter calibration campaign, carried out in March 2022 at IJCLab Orsay, and simulations of the expected performance of the polarimeter using updated performance figures of the detectors.
The mechanisms that maintain turbulence in the interstellar medium (ISM) are still not identified. This work investigates how we can distinguish between two fundamental driving mechanisms: the accumulated effect of stellar feedback versus the energy injection from galactic scales. We perform a series of numerical simulations describing a stratified star-forming ISM subject to self-consistent stellar feedback. Large-scale external turbulent driving, of various intensities, is added to mimic galactic driving mechanisms. We analyse the resulting column density maps with a technique called Multi-scale non-Gaussian segmentation, which separates the coherent structures and the Gaussian background. This effectively discriminates between the various simulations and is a promising method to understand the ISM structure. In particular, the power spectrum of the coherent structures flattens above 60 pc when turbulence is driven only by stellar feedback. When large-scale driving is applied, the turn-over shifts to larger scales. A systematic comparison with the Large Magellanic Cloud (LMC) is then performed. Only 1 out of 25 regions has a coherent power spectrum that is consistent with the feedback-only simulation. A detailed study of the turn-over scale leads us to conclude that regular stellar feedback is not enough to explain the observed ISM structure on scales larger than 60 pc. Extreme feedback in the form of supergiant shells likely plays an important role but cannot explain all the regions of the LMC. If we assume ISM structure is generated by turbulence, another large-scale driving mechanism is needed to explain the entirety of the observations.
The second workshop on the HEP Analysis Ecosystem took place 23-25 May 2022 at IJCLab in Orsay, to look at progress and continuing challenges in scaling up HEP analysis to meet the needs of HL-LHC and DUNE, as well as the very pressing needs of LHC Run 3 analysis. The workshop was themed around six particular topics, which were felt to capture key questions, opportunities and challenges. Each topic arranged a plenary session introduction, often with speakers summarising the state-of-the art and the next steps for analysis. This was then followed by parallel sessions, which were much more discussion focused, and where attendees could grapple with the challenges and propose solutions that could be tried. Where there was significant overlap between topics, a joint discussion between them was arranged. In the weeks following the workshop the session conveners wrote this document, which is a summary of the main discussions, the key points raised and the conclusions and outcomes. The document was circulated amongst the participants for comments before being finalised here.
We report our analysis of the static energy in (2+1+1)-flavor QCD over a wide range of lattice spacings and several quark masses. We obtain results for the static energy out to distances of nearly 1 fm, allowing us to perform a simultaneous determination of the lattice scales $r_2$, $r_1$ and $r_0$ as well as the string tension, $\sigma$. While our results for ${r_0}/{r_1}$ and $r_0\sqrt{\sigma}$ agree with published (2+1)-flavor results, our result for ${r_1}/{r_2}$ differs significantly from the value obtained in the (2+1)-flavor case, likely due to the effect of the charm quark. We study in detail the effect of the charm quark on the static energy by comparing our results on the finest lattices with the previously published (2+1)-flavor QCD results at similar lattice spacing. The lattice results agree well with the two-loop perturbative expression of the static energy incorporating finite charm mass effects.
Mildly relativistic perpendicular, collisionless multiple-ion gamma-ray burst shocks are analyzed using 2D3V particle-in-cell simulations. A characteristic feature of multiple-ion shocks is alternating maxima of the α particle and the proton densities, at least in the early downstream. Turbulence, shock-drift acceleration, and evidence of stochastic acceleration are observed. We performed simulations with both in-plane (B_y) and out-of-plane (B_z) magnetic fields, as well as in a setup with the magnetic field inclined at φ = 45°, and saw multiple differences: while with B_z the highest-energy particles mostly gain energy at the beginning of the shock, with B_y particles continue gaining energy and do not appear to have reached their final energy level. A larger magnetization σ leads to more high-energy particles in our simulations. One important quantity for astronomers is the electron acceleration efficiency ϵ_e, which is measurable thanks to synchrotron radiation. This quantity hardly changes when the fraction of α particles is varied while keeping σ constant. It is, however, noteworthy that ϵ_e differs strongly between in-plane and out-of-plane magnetic fields. Regarding the proton and α acceleration efficiencies, ϵ_p and ϵ_α, the energy of α particles always decreases when passing the shock into the downstream, whereas the energy of protons can increase if α particles account for the majority of the ions.
Context. Over the last few years, large (sub-)millimetre surveys of protoplanetary disks in different star-forming regions have well constrained the demographics of disks, such as their millimetre luminosities, spectral indices, and disk radii. Additionally, several high-resolution observations have revealed an abundance of substructures in the disks' dust continuum. The most prominent are ring-like structures, which are likely caused by pressure bumps trapping dust particles. The origins and characteristics of these pressure bumps nevertheless need to be further investigated.
Aims: The purpose of this work is to study how dynamic pressure bumps affect observational properties of protoplanetary disks. We further aim to differentiate between the planetary- versus zonal flow-origin of pressure bumps.
Methods: We perform one-dimensional gas and dust evolution simulations, setting up models with varying pressure bump features, including their amplitude and location, growth time, and number of bumps. We subsequently run radiative transfer calculations to obtain synthetic images, from which we obtain the different quantities of observations.
Results: We find that the outermost pressure bump determines the disk's dust size across different millimetre wavelengths and confirm that the observed dust masses of disks with optically thick inner bumps (<40 au) are underestimated by up to an order of magnitude. Our modelled dust traps need to form early (<0.1 Myr), fast (on viscous timescales), and must be long-lived (>Myr) to obtain the observed high millimetre luminosities and low spectral indices of disks. While the planetary bump models can reproduce these observables irrespective of the opacity prescription, the highest opacities are needed for the dynamic bump model, which mimics zonal flows in disks, to be in line with observations.
Conclusions: Our findings favour the planetary- over the zonal flow-origin of pressure bumps and support the idea that planet formation already occurs in early class 0-1 stages of circumstellar disks. The determination of the disk's effective size through its outermost pressure bump also delivers a possible answer to why disks in recent low-resolution surveys appear to have the same sizes across different millimetre wavelengths.
Planets are born from the gas and dust discs surrounding young stars. Energetic radiation from the central star can drive thermal outflows from the disc atmospheres, strongly affecting the evolution of the discs and the nascent planetary system. In this context, several numerical models of varying complexity have been developed to study the process of disc photoevaporation by the central star. We describe the numerical techniques, the results, and the predictive power of current models, and identify observational tests to constrain them.
We study the inner structure of the group-scale lens CASSOWARY 31 (CSWA 31) by adopting both strong lensing and dynamical modeling. CSWA 31 is a peculiar lens system. The brightest group galaxy (BGG) is an ultra-massive elliptical galaxy at z = 0.683 with a weighted mean velocity dispersion of σ = 432 ± 31 km s⁻¹. It is surrounded by group members and several lensed arcs probing up to ≃150 kpc in projection. Our results significantly improve on previous analyses of CSWA 31 thanks to the new HST imaging and MUSE integral-field spectroscopy. From the secure identification of five sets of multiple images and measurements of the spatially resolved stellar kinematics of the BGG, we conduct a detailed analysis of the multi-scale mass distribution using various modeling approaches, in both the single and multiple lens-plane scenarios. Our best-fit mass models reproduce the positions of multiple images and provide robust reconstructions for two background galaxies at z = 1.4869 and z = 2.763. Despite small variations related to the different sets of input constraints, the relative contributions from the BGG and group-scale halo are remarkably consistent in our three reference models, demonstrating the self-consistency between strong lensing analyses based on image positions and extended image modeling. We find that the ultra-massive BGG dominates the projected total mass profiles within 20 kpc, while the group-scale halo dominates at larger radii. The total projected mass enclosed within R_eff = 27.2 kpc is 1.10 (+0.02/−0.04) × 10¹³ M⊙. We find that CSWA 31 is a peculiar fossil group, strongly dark-matter dominated toward the central region, and with a projected total mass profile similar to higher-mass cluster-scale halos. The total mass-density slope within the effective radius is shallower than isothermal, consistent with previous analyses of early-type galaxies in overdense environments.
Full Table B.1 is only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/668/A162
Cytoskeletal networks form complex intracellular structures. Here we investigate a minimal model for filament-motor mixtures in which motors act as depolymerases and thereby regulate filament length. Combining agent-based simulations and hydrodynamic equations, we show that resource-limited length regulation drives the formation of filament clusters despite the absence of mechanical interactions between filaments. Even though the orientation of individual filaments remains fixed, collective filament orientation emerges in the clusters, aligned orthogonally to their interfaces.
We compute the three-loop helicity amplitudes for qq̄ → gg and its crossed partonic channels in massless QCD. Our analytical results provide a non-trivial check of the color quadrupole contribution to the infrared poles for external states in different color representations. At high energies, the qg → qg amplitude shows the factorized form predicted by Regge theory and confirms previous results for the gluon Regge trajectory extracted from qq' → qq' and gg → gg scattering.
This paper presents a comprehensive review of both the theory and experimental successes of Quantum Chromodynamics, starting with its emergence as a well defined theory in 1972-73 and following developments and results up to the present day. Topics include a review of the earliest theoretical and experimental foundations; the fundamental constants of QCD; an introductory discussion of lattice QCD, the only known method for obtaining exact predictions from QCD; methods for approximating QCD, with special focus on effective field theories; QCD under extreme conditions; measurements and predictions of meson and baryon states; a special discussion of the structure of the nucleon; techniques for study of QCD at high energy, including treatment of jets and showers; measurements at colliders; weak decays and quark mixing; and a section on the future, which discusses new experimental facilities or upgrades currently funded. The paper is intended to provide a broad background for Ph.D. students and postdocs starting their career. Some contributions include personal accounts of how the ideas or experiments were developed.
Context. Models of planetary core growth by either planetesimal or pebble accretion are traditionally disconnected from the models of dust evolution and formation of the first gravitationally bound planetesimals. State-of-the-art models typically start with massive planetary cores already present.
Aims: We aim to study the formation and growth of planetary cores in a pressure bump, motivated by the annular structures observed in protoplanetary disks, starting with submicron-sized dust grains.
Methods: We connect the models of dust coagulation and drift, planetesimal formation in the streaming instability, gravitational interactions between planetesimals, pebble accretion, and planet migration into one uniform framework.
Results: We find that planetesimals forming early at the massive end of the size distribution grow quickly, predominantly by pebble accretion. These few massive bodies grow on timescales of ~100 000 yr and stir the planetesimals that form later, preventing the emergence of further planetary cores. Additionally, a migration trap occurs, allowing for retention of the growing cores.
Conclusions: Pressure bumps are favourable locations for the emergence and rapid growth of planetary cores by pebble accretion as the dust density and grain size are increased and the pebble accretion onset mass is reduced compared to a smooth-disc model.
Multiply imaged time-variable sources can be used to measure absolute distances as a function of redshifts and thus determine cosmological parameters, chiefly the Hubble Constant H0. In the two decades up to 2020, through a number of observational and conceptual breakthroughs, this so-called time-delay cosmography has reached a precision sufficient to be an important independent voice in the current "Hubble tension" debate between early- and late-universe determinations of H0. The 2020s promise to deliver major advances in time-delay cosmography, owing to the large number of lenses to be discovered by new and upcoming surveys and the vastly improved capabilities for follow-up and analysis. In this review, after a brief summary of the foundations of the method and recent advances, we outline the opportunities for the decade and the challenges that will need to be overcome in order to meet the goal of the determination of H0 from time-delay cosmography with 1% precision and accuracy.
We use gradient flow to compute the static force based on a Wilson loop with a chromoelectric field insertion. The result can be compared on one hand to the static force from the numerical derivative of the lattice static energy, and on the other hand to the perturbative calculation, allowing a precise extraction of the $\Lambda_0$ parameter. This study may open the way to gradient flow calculations of correlators of chromoelectric and chromomagnetic fields, which typically arise in the nonrelativistic effective field theory factorization.
We investigate the main tensions within the current standard model of cosmology from the perspective of the void size function in BOSS DR12 data. For this purpose, we present the first cosmological constraints on the parameters $S_8\equiv \sigma_8\sqrt{\Omega_{\rm m}/0.3}$ and $H_0$ obtained from voids as a stand-alone probe. We rely on an extension of the popular volume-conserving model for the void size function, tailored to the application on data, including geometric and dynamic distortions. We calibrate the two nuisance parameters of this model with the official BOSS collaboration mock catalogs and propagate their uncertainty through the statistical analysis of the BOSS void number counts. We focus our analysis on the $\Omega_{\rm m}$--$\sigma_8$ and $\Omega_{\rm m}$--$H_0$ parameter planes and derive the marginalized constraints $S_8 = 0.78^{+0.16}_{-0.14}$ and $H_0=65.2^{+4.5}_{-3.6}$ $\mathrm{km} \ \mathrm{s}^{-1} \ \mathrm{Mpc}^{-1}$. Our estimate of $S_8$ is fully compatible with constraints from the literature, while our $H_0$ value disagrees at more than the $1\sigma$ level with recent local distance-ladder measurements based on Type Ia supernovae. Our results open up a new viewing angle on the rising cosmological tensions and are expected to improve notably in precision when jointly analyzed with independent probes.
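For reference, the derived parameter $S_8$ is simply a rescaling of the clustering amplitude $\sigma_8$ by the matter density, normalised so that $S_8 = \sigma_8$ when $\Omega_{\rm m} = 0.3$. A one-line sketch (the input values are illustrative, not the paper's measurements):

```python
import math

# S8 = sigma_8 * sqrt(Omega_m / 0.3); equals sigma_8 when Omega_m = 0.3.
# Input values below are illustrative only.
def s8(sigma8, omega_m):
    return sigma8 * math.sqrt(omega_m / 0.3)

print(s8(0.80, 0.30))   # -> 0.8
print(s8(0.85, 0.25))   # lower Omega_m pulls S8 below sigma_8
```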
Context. Recent observations with the Atacama Large Millimeter Array (ALMA) have shown that the large dust aggregates observed at millimeter wavelengths settle to the midplane into a remarkably thin layer. This sets strong limits on the strength of the turbulence and other gas motions in these disks.
Aims: We intend to find out if the geometric thinness of these layers is evidence against the vertical shear instability (VSI) operating in these disks. We aim to verify if a dust layer consisting of large enough dust aggregates could remain geometrically thin enough to be consistent with the latest observations of these dust layers, even if the disk is unstable to the VSI. If this is falsified, then the observed flatness of these dust layers proves that these disks are stable against the VSI, even out to the large radii at which these dust layers are observed.
Methods: We performed hydrodynamic simulations of a protoplanetary disk with a locally isothermal equation of state, and let the VSI fully develop. We sprinkled dust particles with a given grain size at random positions near the midplane and followed their motion as they got stirred up by the VSI, assuming no feedback onto the gas. We repeated the experiment for different grain sizes and determined for which grain size the layer becomes thin enough to be consistent with ALMA observations. We then verified if, with these grain sizes, it is still possible (given the constraints of dust opacity and gravitational stability) to generate a moderately optically thick layer at millimeter wavelengths, as observations appear to indicate.
Results: We found that even very large dust aggregates with Stokes numbers close to unity get stirred up to relatively large heights above the midplane by the VSI, which is in conflict with the observed geometric thinness. For grains so large that the Stokes number exceeds unity, the layer can be made to remain thin, but we show that it is hard to make dust layers optically thick at ALMA wavelengths (e.g., τ_1.3mm ≳ 1) with such large dust aggregates.
Conclusions: We conclude that protoplanetary disks with geometrically thin midplane dust layers cannot be VSI unstable, at least not down to the disk midplane. Explanations for the inhibition of the VSI out to several hundreds of au include a high dust-to-gas ratio of the midplane layer, a modest background turbulence, and/or a reduced dust-to-gas ratio of the small dust grains that are responsible for the radiative cooling of the disk. A reduction of small grains by a factor of between 10 and 100 is sufficient to quench the VSI. Such a reduction is plausible in dust growth models, and still consistent with observations at optical and infrared wavelengths.
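For orientation, the textbook settling-mixing balance (Dubrulle et al. 1995; Youdin & Lithwick 2007), which is independent of this paper's VSI simulations, relates the thickness of the dust layer to the grain Stokes number St and a turbulence strength parameter α:

```python
import math

# Standard estimate, not this paper's result: a grain with Stokes number St
# in turbulence of strength alpha settles into a layer of relative thickness
#   h_dust / h_gas = sqrt(alpha / (alpha + St))
def dust_to_gas_scale_height(alpha, St):
    return math.sqrt(alpha / (alpha + St))

# Keeping St ~ 1 grains in a layer a few per cent of the gas scale height
# requires a very weak effective turbulence:
print(dust_to_gas_scale_height(1e-3, 1.0))   # ~0.03: geometrically thin
print(dust_to_gas_scale_height(1e-2, 0.01))  # ~0.7: well mixed
```

This makes the tension quantitative: if the VSI stirs grains like a large effective α, only very large St (or a strongly suppressed instability) keeps the layer as thin as observed.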
In the past few years, the Event Horizon Telescope (EHT) has provided the first-ever event-horizon-scale images of the supermassive black holes (BHs) M87* and Sagittarius A* (Sgr A*). The next-generation EHT project is an extension of the EHT array that promises higher angular resolution and higher sensitivity to the dim, extended flux around the central ring-like structure, possibly connecting the accretion flow and the jet. The ngEHT Analysis Challenges aim to understand the science extractability from synthetic images and movies so as to inform the ngEHT array design and analysis algorithm development. In this work, we examine the numerical fluid simulations used to construct the source models in the challenge set, which currently target M87* and Sgr A*. We have a rich set of models encompassing steady-state radiatively inefficient accretion flows with time-dependent shearing hotspots, as well as radiative and non-radiative general relativistic magneto-hydrodynamic simulations that incorporate electron heating and cooling. We find that the models exhibit remarkably similar temporal and spatial properties, except for the electron temperature, since radiative losses substantially cool down electrons near the BH and in the jet sheath. We restrict ourselves to standard torus accretion flows, and leave larger explorations of alternate accretion models to future work.
Context. An excess of galaxy-galaxy strong lensing (GGSL) in galaxy clusters compared to expectations from the Λ cold-dark-matter (CDM) cosmological model has recently been reported. Theoretical estimates of the GGSL probability are based on the analysis of numerical hydrodynamical simulations in ΛCDM cosmology.
Aims: We quantify the impact of the numerical resolution and active galactic nucleus (AGN) feedback scheme adopted in cosmological simulations on the predicted GGSL probability, and determine if varying these simulation properties can alleviate the gap with observations.
Methods: We analyze cluster-size halos (M200 > 5 × 10¹⁴ M⊙) simulated with different mass and force resolutions and implementing several independent AGN feedback schemes. Our analysis focuses on galaxies with Einstein radii in the range 0.5″ ≤ θ_E ≤ 3″.
Results: We find that improving the mass resolution by factors of 10 and 25, while using the same galaxy formation model that includes AGN feedback, does not affect the GGSL probability. We find similar results regarding the choice of gravitational softening. On the contrary, adopting an AGN feedback scheme that is less efficient at suppressing gas cooling and star formation leads to an increase in the GGSL probability by a factor of between 3 and 6. However, we notice that such simulations form overly massive galaxies whose contribution to the lensing cross section would be significant, but whose Einstein radii are too large to be consistent with the observations. The primary contributors to the observed GGSL cross sections are galaxies with smaller masses that are compact enough to become critical for lensing. The population with these required characteristics appears to be absent from simulations. Conclusions: Based on these results, we reaffirm the tension between observations of GGSL and theoretical expectations in the framework of the ΛCDM cosmological model. The GGSL probability is sensitive to the galaxy formation model implemented in the simulations. Still, all the tested models have difficulty simultaneously reproducing the stellar mass function and the internal structure of galaxies.
The existence of a nucleon-$\phi$ (N-$\phi$) bound state has been the subject of theoretical and experimental investigations for decades. In this letter a re-analysis of the p-$\phi$ correlation measured at the LHC is presented, using as input recent lattice calculations of the N-$\phi$ interaction in the spin 3/2 channel obtained by the HAL QCD collaboration. A constrained fit of the experimental data allows us to determine the spin 1/2 channel of the p-$\phi$ interaction, with evidence of the formation of a p-$\phi$ bound state. The scattering length and effective range extracted from the spin 1/2 channel are $f_0^{(1/2)}=(-1.47^{+0.44}_{-0.37}(\mathrm{stat.})^{+0.14}_{-0.17}(\mathrm{syst.})+i\cdot0.00^{+0.26}_{-0.00}(\mathrm{stat.})^{+0.15}_{-0.00}(\mathrm{syst.}))$ fm and $d_0^{(1/2)}=(0.37^{+0.07}_{-0.08}(\mathrm{stat.})^{+0.03}_{-0.03}(\mathrm{syst.})+i\cdot~0.00^{+0.00}_{-0.02}(\mathrm{stat.})^{+0.00}_{-0.01}(\mathrm{syst.}))$ fm, respectively. The corresponding binding energy is estimated to be in the range $14.7-56.6$ MeV. This is the first experimental evidence of a p-$\phi$ bound state.
We present limits on the spin-independent interaction cross section of dark matter particles with silicon nuclei, derived from data taken with a cryogenic calorimeter with 0.35 g target mass operated in the CRESST-III experiment. A baseline nuclear recoil energy resolution of $(1.36\pm 0.05)$ eV$_{\text{nr}}$, currently the lowest reported for macroscopic particle detectors, and a corresponding energy threshold of $(10.0\pm 0.2)$ eV$_{\text{nr}}$ have been achieved, improving the sensitivity to light dark matter particles with masses below 160 MeV/c$^2$ by a factor of up to 20 compared to previous results. We characterize the observed low energy excess, and we exclude noise triggers and radioactive contaminations on the crystal surfaces as dominant contributions.
In many astrophysical applications, the cost of solving a chemical network represented by a system of ordinary differential equations (ODEs) grows significantly with the size of the network and can often represent a significant computational bottleneck, particularly in coupled chemo-dynamical models. Although standard numerical techniques and complex solutions tailored to thermochemistry can somewhat reduce the cost, more recently, machine learning algorithms have begun to attack this challenge via data-driven dimensional reduction techniques. In this work, we present a new class of methods that take advantage of machine learning techniques to reduce complex data sets (autoencoders), the optimization of multiparameter systems (standard backpropagation), and the robustness of well-established ODE solvers to explicitly incorporate time dependence. This new method allows us to find a compressed and simplified version of a large chemical network in a semiautomated fashion that can be solved with a standard ODE solver, while also enabling interpretability of the compressed, latent network. As a proof of concept, we tested the method on an astrophysically relevant chemical network with 29 species and 224 reactions, obtaining a reduced but representative network with only 5 species and 12 reactions, and an increase in speed by a factor of 65.
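The autoencoder compression idea can be illustrated with a deliberately minimal sketch: a linear autoencoder trained by gradient descent on toy "abundance" data (hypothetical dimensions and data; the paper's actual method additionally trains the latent dynamics so that a standard ODE solver can evolve the compressed network):

```python
import numpy as np

# Toy data: 6 "species" whose abundances actually live on a 2D latent manifold
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 6))
X = latent @ mix                      # shape (200, 6)

d_in, d_lat, lr = 6, 2, 1e-2
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))   # encoder: 6 -> 2
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))   # decoder: 2 -> 6

def loss(X, We, Wd):
    R = X @ We @ Wd - X               # reconstruction residual
    return np.mean(R**2)

err0 = loss(X, W_enc, W_dec)
for _ in range(3000):                 # plain gradient descent on the MSE
    Z = X @ W_enc                     # encode to 2 latent variables
    R = Z @ W_dec - X
    gWd = Z.T @ R / len(X)
    gWe = X.T @ (R @ W_dec.T) / len(X)
    W_dec -= lr * gWd
    W_enc -= lr * gWe

err1 = loss(X, W_enc, W_dec)
print(err0, err1)                     # reconstruction error should drop
```

Because the toy data are exactly rank 2, a 2-dimensional latent space suffices; in the paper's setting the latent dimension is a tunable trade-off between speed and fidelity.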
Context. Disk winds are an important mechanism for accretion and disk evolution around young stars. The accreting intermediate-mass T-Tauri star RY Tau has an active jet and a previously known disk wind. Archival optical and new near-infrared observations of the RY Tau system show two horn-like components stretching out as a cone from RY Tau. Scattered light from the disk around RY Tau is visible in the near-infrared, but not seen at optical wavelengths. In the near-infrared, dark wedges separate the horns from the disk, indicating that we may see the scattered light from a disk wind.
Aims: We aim to test the hypothesis that a dusty disk wind could be responsible for the optical effect in which the disk around RY Tau is hidden in the I band, but visible in the H band. This could be the first detection of a dusty disk wind in scattered light. We also want to constrain the grain size and dust mass in the wind and the wind-launching region.
Methods: We used archival Atacama Large Millimetre Array (ALMA) and Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) I band observations combined with newly acquired SPHERE H band observations and the available literature to build a simple geometric model of the RY Tau disk and disk wind. We used the Monte Carlo radiative transfer code MCMax3D to create comparable synthetic observations that test the effect of a dusty wind on the optical effect in the observations. We constrained the grain size and dust mass needed in the disk wind to reproduce the effect seen in the observations.
Results: A model geometrically reminiscent of a dusty disk wind with small micron- to sub-micron-sized grains elevated above the disk can reproduce the optical effect seen in the observations. The mass in the obscuring component of the wind has been constrained to 1 × 10⁻⁹ M⊙ ≤ M ≤ 5 × 10⁻⁸ M⊙, which corresponds to a mass-loss rate in the wind of ~1 × 10⁻⁸ M⊙ yr⁻¹.
Conclusions: A simple model of a disk wind with micron- to sub-micron-sized grains elevated above the disk is able to prevent stellar radiation from scattering in the disk at optical wavelengths while allowing photons to reach the disk in the near-infrared. Estimates of the mass-loss rate agree with previously presented theoretical models and point towards the idea that a magneto-hydrodynamic-type wind is the more likely scenario.
We critically reconsider the argument based on 't Hooft anomaly matching that aims at proving chiral symmetry breaking in QCD-like theories with $N_c>2$ colors and $N_f$ flavors of vectorlike quarks in the fundamental representation. The main line of reasoning relies on a property of the solutions of the anomaly matching and persistent mass equations called $N_f$-independence. The validity of $N_f$-independence was assumed based on qualitative arguments, but it was never proven rigorously. We provide a detailed proof and clarify under which (dynamical) conditions it holds. Our result is valid for a generic spectrum of massless composite fermions including baryons and exotics. We then present a novel argument that does not require any dynamical assumption and is based on downlifting solutions to smaller values of $N_f$. When applied to QCD ($N_c=3$), our theorem implies that chiral symmetry must be spontaneously broken for $3\leq N_f<N_f^{CFT}$, where $N_f^{CFT}$ is the lower edge of the conformal window. A second argument is also presented based on continuity, which assumes the absence of phase transitions when the quark masses are sent to infinity. When applied to QCD, this result explains why chiral symmetry is broken for $N_f=2$, even though integer solutions of the equations exist in this case. Explicit examples and a numerical analysis are presented in a companion paper.
Magnetars are isolated young neutron stars characterised by the most intense magnetic fields known in the Universe, which power a wide variety of high-energy emissions from giant flares to fast radio bursts. The origin of their magnetic field is still a challenging question. In situ magnetic field amplification by dynamo action could potentially generate ultra-strong magnetic fields in fast-rotating progenitors. However, it is unclear whether the fraction of progenitors harbouring fast core rotation is sufficient to explain the entire magnetar population. To address this point, we propose a new scenario for magnetar formation involving a slowly rotating progenitor, in which a slowly rotating proto-neutron star is spun up by the supernova fallback. We argue that this can trigger the development of the Tayler-Spruit dynamo while other dynamo processes are disfavoured. Using the findings of previous studies of this dynamo and simulation results characterising the supernova fallback, we derive equations modelling the coupled evolution of the proto-neutron star rotation and magnetic field. Their time integration for different accreted masses is successfully compared with analytical estimates of the amplification timescales and saturation value of the magnetic field. We find that the magnetic field is amplified within 20–40 s after the core bounce, and that the radial magnetic field saturates at intensities between ∼10¹³ and 10¹⁵ G, therefore spanning the full range of magnetar dipolar magnetic fields. The toroidal magnetic field is predicted to be a factor of 10–100 stronger, lying between ∼10¹⁵ and 3 × 10¹⁶ G. We also compare the saturation mechanisms proposed respectively by H.C. Spruit and J. Fuller, showing that magnetar-like magnetic fields can be generated for a neutron star spun up to rotation periods of ≲8 ms and ≲28 ms, corresponding to accreted masses of ≳4 × 10⁻² M⊙ and ≳1.1 × 10⁻² M⊙, respectively.
Therefore, our results suggest that magnetars can be formed from slow-rotating progenitors for accreted masses compatible with recent supernova simulations and leading to plausible initial rotation periods of the proto-neutron star.
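The spin-up figures quoted above can be checked with a back-of-the-envelope estimate. The sketch below uses our own assumptions (a 1.4 M⊙, 12 km proto-neutron star with moment of inertia I ≈ 0.35MR², and fallback accreted with the Keplerian specific angular momentum at the stellar surface), not the paper's detailed model, yet it reproduces the quoted periods to within roughly 10 per cent:

```python
import math

# Order-of-magnitude check (our assumptions, not the paper's detailed model):
# fallback matter is assumed to carry the Keplerian specific angular momentum
# at the proto-neutron star surface, j = sqrt(G M R).  An initially slow star
# is spun up to Omega = M_acc * j / I, with I ~ 0.35 M R^2.
G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
M_ns  = 1.4 * M_sun        # assumed proto-neutron star mass
R_ns  = 12e3               # assumed radius [m]
I_ns  = 0.35 * M_ns * R_ns**2

def spin_period_ms(m_acc_msun):
    """Final spin period [ms] after accreting m_acc of Keplerian material."""
    j = math.sqrt(G * M_ns * R_ns)             # specific angular momentum
    omega = m_acc_msun * M_sun * j / I_ns      # final angular frequency
    return 2.0 * math.pi / omega * 1e3

for m_acc in (1.1e-2, 4e-2):
    print(f"M_acc = {m_acc:.1e} Msun  ->  P = {spin_period_ms(m_acc):.1f} ms")
```

For the two accreted masses quoted in the abstract this yields periods of about 27 ms and 7 ms, consistent with the stated thresholds of ≲28 ms and ≲8 ms.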
A subfraction of dark matter or new particles trapped inside celestial objects can significantly alter their macroscopic properties. We investigate the new physics imprint on celestial objects by using a generic framework to solve the Tolman-Oppenheimer-Volkoff (TOV) equations for up to two fluids. We test the impact of populations of new particles on celestial objects, including the sensitivity to self-interaction sizes, new particle mass, and net population mass. Applying our setup to neutron stars and boson stars, we find rich phenomenology for a range of these parameters, including the creation of extended atmospheres. These atmospheres are detectable by their impact on the tidal Love number, which can be measured at upcoming gravitational wave experiments such as Advanced LIGO, the Einstein Telescope, and LISA. We release our calculation framework as a publicly available code, allowing the TOV equations to be generically solved for arbitrary new physics models in novel and admixed celestial objects.
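For readers unfamiliar with the TOV problem, a minimal single-fluid integrator illustrates the kind of calculation the released framework generalises to two fluids. The polytropic test case below (K = 100, Γ = 2 in G = c = M⊙ = 1 units, central density 1.28 × 10⁻³) is a standard numerical-relativity benchmark, not a model from the paper:

```python
import math

# Minimal single-fluid TOV integrator in geometric units (G = c = Msun = 1).
# Polytropic EoS P = K rho^Gamma with the standard test values below.
K, Gamma = 100.0, 2.0

def tov_rhs(r, P, m):
    rho = (max(P, 0.0) / K) ** (1.0 / Gamma)   # rest-mass density
    eps = rho + P / (Gamma - 1.0)              # total energy density
    dPdr = -(eps + P) * (m + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r**2 * eps
    return dPdr, dmdr

def solve_tov(rho_c, dr=1e-3):
    """RK4 integration outwards until the pressure drops to ~zero.
    Returns (gravitational mass [Msun], radius [km])."""
    r, P, m = 1e-6, K * rho_c**Gamma, 0.0
    P_surf = 1e-12 * P
    while P > P_surf:
        k1p, k1m = tov_rhs(r, P, m)
        k2p, k2m = tov_rhs(r + dr/2, P + dr/2*k1p, m + dr/2*k1m)
        k3p, k3m = tov_rhs(r + dr/2, P + dr/2*k2p, m + dr/2*k2m)
        k4p, k4m = tov_rhs(r + dr, P + dr*k3p, m + dr*k3m)
        P += dr/6 * (k1p + 2*k2p + 2*k3p + k4p)
        m += dr/6 * (k1m + 2*k2m + 2*k3m + k4m)
        r += dr
    return m, r * 1.4767   # one geometric length unit (G Msun / c^2) = 1.4767 km

M, R = solve_tov(1.28e-3)  # standard test central density
print(f"M = {M:.3f} Msun, R = {R:.1f} km")
```

This central density yields the well-known M ≈ 1.4 M⊙, R ≈ 14 km configuration; a two-fluid extension adds a second pair of (P, m) equations coupled only through the total enclosed mass.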
We introduce a PYTHON package that provides simple and unified access to a collection of datasets from fundamental physics research—including particle physics, astroparticle physics, and hadron and nuclear physics—for supervised machine learning studies. The datasets contain hadronic top quarks, cosmic-ray-induced air showers, phase transitions in hadronic matter, and generator-level histories. While public datasets from multiple fundamental physics disciplines already exist, the common interface and provided reference models simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. We discuss the design and structure of the package and outline how additional datasets can be submitted for inclusion. As a showcase application, we present a simple yet flexible graph-based neural network architecture that can easily be applied to a wide range of supervised learning tasks. We show that our approach reaches performance close to dedicated methods on all datasets. To simplify adaptation for various problems, we provide easy-to-follow instructions on how graph-based representations of data structures relevant for fundamental physics can be constructed, and provide code implementations for several of them. Implementations are also provided for our proposed method and all reference algorithms.
Simulations of idealized star-forming filaments of finite length typically show core growth that is dominated by two cores, one forming at each end of the filament. The end cores form due to a strongly increasing acceleration at the filament ends that leads to a sweep-up of material during the filament collapse along its axis. As this growth mode is typically faster than any other core formation mode in a filament, the end cores usually dominate in mass and density compared to other cores forming inside a filament. However, observations of star-forming filaments do not show this prevalence of cores at the filament ends. We explore a possible mechanism to slow the growth of the end cores using numerical simulations of simultaneous filament and embedded core formation, in our case a radially accreting filament forming in a finite converging flow. While such a set-up still leads to end cores, they soon begin to move inwards and a density gradient is formed outside of the cores by the continued accumulation of material. As a result, the outermost cores are no longer located at the exact ends of the filament and the density gradient softens the inward gravitational acceleration of the cores. Therefore, the two end cores do not grow as fast as expected and thus do not dominate over other core formation modes in the filament.
Small grains play an essential role in astrophysical processes such as chemistry, radiative transfer, and gas/dust dynamics. The population of small grains is mainly maintained by the fragmentation of colliding grains. An accurate treatment of dust fragmentation is required in numerical modelling. However, current algorithms for solving the fragmentation equation suffer from overdiffusion under the conditions of 3D simulations. To tackle this challenge, we developed a discontinuous Galerkin scheme to efficiently solve the non-linear fragmentation equation with a limited number of dust bins.
The Sunyaev-Zeldovich (SZ) effect is a powerful tool in modern cosmology. With future observations promising ever improving SZ measurements, the relativistic corrections to the SZ signals from galaxy groups and clusters are increasingly relevant. As such, it is important to understand the differences between three temperature measures: (a) the average relativistic SZ (rSZ) temperature, (b) the mass-weighted temperature relevant for the thermal SZ (tSZ) effect, and (c) the X-ray spectroscopic temperature. In this work, we compare these cluster temperatures, as predicted by the BAHAMAS & MACSIS, ILLUSTRISTNG, MAGNETICUM, and THE THREE HUNDRED PROJECT simulations. Despite the wide range of simulation parameters, we find the SZ temperatures are consistent across the simulations. We estimate a $\simeq 10{{\ \rm per\ cent}}$ level rSZ correction for clusters with Y ≃ 10⁻⁴ Mpc⁻². Our analysis confirms a systematic offset between the three temperature measures, with the rSZ temperature $\simeq 20{{\ \rm per\ cent}}$ larger than the other measures and diverging further at higher redshifts. We demonstrate that these measures depart from simple self-similar evolution and explore how they vary with the defined radius of haloes. We investigate how different feedback prescriptions and resolutions affect the observed temperatures, and find that the SZ temperatures are rather insensitive to these details. The agreement between simulations indicates an exciting avenue for observational and theoretical exploration, determining the extent of relativistic SZ corrections. We provide multiple simulation-based fits to the scaling relations for use in future SZ modelling.
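For reference, the three temperature measures compared above are commonly defined as the following gas-weighted averages (our summary of standard usage, e.g. the Mazzotta et al. 2004 spectroscopic-like weighting; the simulations compared in the abstract may use variants):

```latex
% mass-weighted (tSZ), y-weighted (rSZ), and spectroscopic-like (X-ray)
% temperatures of the electron gas with density n_e and temperature T_e:
\begin{align}
  T_{\rm mw} &= \frac{\int n_e\, T_e \,{\rm d}V}{\int n_e \,{\rm d}V}, &
  T_{\rm y}  &= \frac{\int n_e\, T_e^{2} \,{\rm d}V}{\int n_e\, T_e \,{\rm d}V}, &
  T_{\rm sl} &= \frac{\int n_e^{2}\, T_e^{1/4} \,{\rm d}V}{\int n_e^{2}\, T_e^{-3/4} \,{\rm d}V}.
\end{align}
```

Because the y-weighting emphasises hot, dense gas more strongly than the mass-weighting, T_y generically exceeds T_mw, consistent with the ≈20 per cent offset quoted above.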
Disc winds and planet formation are considered to be two of the most important mechanisms that drive the evolution and dispersal of protoplanetary discs and in turn define the environment in which planets form and evolve. While both have been studied extensively in the past, we combine them into one model by performing three-dimensional radiation-hydrodynamic simulations of giant planet hosting discs that are undergoing X-ray photoevaporation, with the goal of analysing the interactions between the two mechanisms. In order to study the effect on observational diagnostics, we produce synthetic observations of commonly used wind-tracing forbidden emission lines with detailed radiative transfer and photoionization calculations. We find that a sufficiently massive giant planet carves a gap in the gas disc that is deep enough to affect the structure and kinematics of the pressure-driven photoevaporative wind significantly. This effect can be strong enough to be visible in the synthetic high-resolution observations of some of our wind diagnostic lines, such as the [O I] 6300 Å or [S II] 6730 Å lines. When the disc is observed at inclinations around 40° and higher, the spectral line profiles may exhibit a peak in the redshifted part of the spectrum, which cannot easily be explained by simple wind models alone. Moreover, massive planets can induce asymmetric substructures within the disc and the photoevaporative wind, giving rise to temporal variations of the line profiles that can be strong enough to be observable on time-scales of less than a quarter of the planet's orbital period.
We explore the potential of our novel triaxial modelling machinery in recovering the viewing angles, the shape, and the orbit distribution of galaxies by using a high-resolution N-body merger simulation. Our modelling technique includes several recent advancements. (i) Our new triaxial deprojection algorithm shape3d is able to significantly shrink the range of possible orientations of a triaxial galaxy and therefore to constrain its shape relying only on photometric information. It also allows us to probe degeneracies, i.e. to recover different deprojections at the same assumed orientation. With this method we can constrain the intrinsic shape of the N-body simulation, i.e. the axis ratios p = b/a and q = c/a, with Δp and Δq ≲ 0.1 using only photometric information. The typical accuracy of the viewing-angle reconstruction is 15°-20°. (ii) Our new triaxial Schwarzschild code smart exploits the full kinematic information contained in the entire non-parametric line-of-sight velocity distributions along with a 5D orbital sampling in phase space. (iii) We use a new generalized Akaike information criterion AICp to optimize the smoothing and to select the best-fitting model, avoiding potential biases in purely χ2-based approaches. With our deprojected densities, we recover the correct orbital structure and anisotropy parameter β with Δβ ≲ 0.1. These results are valid regardless of the tested orientation of the simulation and suggest that, despite the known intrinsic photometric and kinematic degeneracies, the advanced methods described above make it possible to recover the shape and the orbital structure of triaxial bodies with unprecedented accuracy.
Motivated by the discrepancy between Bayesian and frequentist upper limits on the tensor-to-scalar ratio parameter r found by the SPIDER collaboration, we investigate whether a similar trend is also present in the latest Planck and BICEP/Keck Array data. We derive a new upper bound on r using the frequentist profile likelihood method. We vary all the relevant cosmological parameters of the ΛCDM model, as well as the nuisance parameters. Unlike the Bayesian analysis using Markov Chain Monte Carlo (MCMC), our analysis is independent of the choice of priors. Using Planck Public Release 4, BICEP/Keck Array 2018, Planck cosmic microwave background lensing, and baryon acoustic oscillation data, we find an upper limit of r < 0.037 at 95% Confidence Level (C.L.), similar to the Bayesian MCMC result of r < 0.038 for a flat prior on r and a conditioned Planck lowlEB covariance matrix.
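A toy construction illustrates the profile-likelihood procedure described above (this is entirely our own two-parameter Gaussian example, not the Planck/BICEP likelihood): at each fixed r, minimise the chi-square over the nuisance parameter, then find where the profiled Δχ² crosses the one-sided 95% threshold of 2.71:

```python
import math

# Toy profile-likelihood upper limit: one signal parameter r >= 0 and one
# correlated nuisance parameter b (widths chosen arbitrarily for illustration).
SIG1, SIG2 = 0.02, 0.01

def chi2(r, b):
    return ((r + b) / SIG1) ** 2 + (b / SIG2) ** 2

def profiled_chi2(r, n=2001, brange=0.1):
    """Minimise chi2 over the nuisance parameter b on a fine grid."""
    return min(chi2(r, -brange + 2 * brange * i / (n - 1)) for i in range(n))

# Scan r and find where Delta chi2 crosses 2.71 (one-sided 95% CL).
r_grid = [i * 1e-4 for i in range(1000)]
chi2_min = profiled_chi2(0.0)   # best fit sits at the boundary r = 0 here
r_UL = next(r for r in r_grid if profiled_chi2(r) - chi2_min > 2.71)

# Analytic cross-check: profiling out b gives chi2(r) = r^2/(SIG1^2+SIG2^2).
r_UL_exact = math.sqrt(2.71 * (SIG1**2 + SIG2**2))
print(f"numerical r_UL = {r_UL:.4f}, analytic = {r_UL_exact:.4f}")
```

The prior independence highlighted in the abstract comes from the fact that no integration over parameter space is performed: only the maximum of the likelihood at each r enters the limit.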
Gamma-ray bursts (GRBs) are the most luminous transients in the universe and are utilized as probes of early stars, gravitational-wave counterparts, and collisionless shock physics. Although polarimetric studies of GRBs in individual wavebands have characterized intriguing properties of the prompt emission and afterglow, no coordinated multi-wavelength measurements have yet been performed. Here we report the first coordinated simultaneous polarimetry in the optical and radio bands for the afterglow associated with the typical long GRB 191221B. Our observations successfully caught the radio emission, which is not affected by synchrotron self-absorption, and show that the emission is depolarized in the radio band compared with the optical one. Our simultaneous polarization angle measurement and temporal polarization monitoring indicate the existence of cool electrons that increase the estimate of jet kinetic energy by a factor of more than 4 for this GRB afterglow. Further coordinated multi-wavelength polarimetric campaigns would improve our understanding of the total jet energies and magnetic field configurations in the emission regions of various types of GRBs, which are required to comprehend the mass scales of their progenitor systems and the physics of collisionless shocks.
Recent cosmological analyses with large-scale structure and weak lensing measurements, usually referred to as 3$\times$2pt, have had to discard a substantial amount of small-scale signal-to-noise due to our inability to precisely model non-linearities and baryonic effects. Galaxy-galaxy lensing, or the position-shear correlation between lens and source galaxies, is one of the three two-point correlation functions that are included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale $\theta$ or physical scale $R$ carry information from all scales below that, forcing the scale cuts applied in real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently there have been a few independent efforts that aim to mitigate the non-locality of the galaxy-galaxy lensing signal. Here we perform a comparison of the different methods, including the Y transformation described in Park et al. (2021), the point-mass marginalization methodology presented in MacCrann et al. (2020) and the Annular Differential Surface Density statistic described in Baldauf et al. (2010). We perform the comparison at the level of cosmological constraints in a noiseless, simulated, combined galaxy clustering and galaxy-galaxy lensing analysis. We find that all the estimators perform equivalently using a Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1-like setup. This is because all the estimators project out the mode responsible for the non-local nature of the galaxy-galaxy lensing measurements, which we have identified as $1/R^2$. We finally apply all the estimators to DES Y3 data and confirm that they all give consistent results.
The evolution of the Kelvin-Helmholtz Instability (KHI) is widely used to assess the performance of numerical methods. We employ this instability to test both the smoothed particle hydrodynamics (SPH) and the meshless finite mass (MFM) implementation in OPENGADGET3. We quantify the accuracy of SPH and MFM in reproducing the linear growth of the KHI with different numerical and physical set-ups. Among them, we consider: (i) numerically induced viscosity and (ii) physically motivated Braginskii viscosity, and compare their effects on the growth of the KHI. We find that the changes of the inferred numerical viscosity when varying nuisance parameters such as the set-up or the number of neighbours in our SPH code are comparable to the differences obtained when using different hydrodynamical solvers, i.e. MFM. SPH reproduces the expected reduction of the growth rate in the presence of physical viscosity and recovers well the threshold level of physical viscosity needed to fully suppress the instability. In the case of galaxy clusters with a virial temperature of 3 × 10⁷ K, this level corresponds to a suppression factor of ≈10⁻³ of the classical Braginskii value. The intrinsic, numerical viscosity of our SPH implementation in such an environment is inferred to be at least an order of magnitude smaller (i.e. ≈10⁻⁴), confirming that modern SPH methods are suitable to study the effect of physical viscosity in galaxy clusters.
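As a reminder of the linear theory tested in such comparisons, the inviscid, incompressible growth rate for a sharp shear layer is the textbook result below (our addition; practical set-ups use smoothed interfaces and viscosity, which reduce this rate):

```latex
% Linear KHI growth rate for two layers of density rho_1, rho_2 in relative
% shear Delta v at perturbation wavenumber k (sharp-interface, inviscid case):
\begin{equation}
  \omega_{\rm KH} \;=\; k\,\Delta v\,
  \frac{\sqrt{\rho_1 \rho_2}}{\rho_1 + \rho_2}.
\end{equation}
```

Measuring how a code's effective growth rate falls short of this reference is what allows the numerical viscosity to be expressed as an equivalent physical (Braginskii) viscosity.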
We present the first results of a comprehensive supernova (SN) radiative-transfer (RT) code-comparison initiative (StaNdaRT), where the emission from the same set of standardised test models is simulated by currently used RT codes. We ran a total of ten codes on a set of four benchmark ejecta models of Type Ia SNe. We consider two sub-Chandrasekhar-mass (Mtot = 1.0 M⊙) toy models with analytic density and composition profiles and two Chandrasekhar-mass delayed-detonation models that are outcomes of hydrodynamical simulations. We adopt spherical symmetry for all four models. The results of the different codes, including the light curves, spectra, and the evolution of several physical properties as a function of radius and time are provided in electronic form in a standard format via a public repository. We also include the detailed test model profiles and several Python scripts for accessing and presenting the input and output files. We also provide the code used to generate the toy models studied here. In this paper, we describe the test models, radiative-transfer codes, and output formats in detail, and provide access to the repository. We present example results of several key diagnostic features.
The detection of the accelerated expansion of the Universe has been one of the major breakthroughs in modern cosmology. Several cosmological probes (Cosmic Microwave Background, Supernovae Type Ia, Baryon Acoustic Oscillations) have been studied in depth to better understand the nature of the mechanism driving this acceleration, and they are currently being pushed to their limits, obtaining remarkable constraints that allowed us to shape the standard cosmological model. In parallel to that, however, the percent precision achieved has recently revealed apparent tensions between measurements obtained from different methods. These are either indicating some unaccounted systematic effects, or are pointing toward new physics. Following the development of CMB, SNe, and BAO cosmology, it is critical to extend our selection of cosmological probes. Novel probes can be exploited to validate results, control or mitigate systematic effects, and, most importantly, to increase the accuracy and robustness of our results. This review is meant to provide a state-of-the-art benchmark of the latest advances in emerging "beyond-standard" cosmological probes. We present how several different methods can become a key resource for observational cosmology. In particular, we review cosmic chronometers, quasars, gamma-ray bursts, standard sirens, lensing time-delay with galaxies and clusters, cosmic voids, neutral hydrogen intensity mapping, surface brightness fluctuations, stellar ages of the oldest objects, secular redshift drift, and clustering of standard candles. The review describes the method, systematics, and results of each probe in a homogeneous way, giving the reader a clear picture of the available innovative methods that have been introduced in recent years and how to apply them. The review also discusses the potential synergies and complementarities between the various probes, exploring how they will contribute to the future of modern cosmology.
Several tentative associations between high-energy neutrinos and astrophysical sources have been recently reported, but a conclusive identification of these potential neutrino emitters remains challenging. We explore the use of Monte Carlo simulations of source populations to gain deeper insight into the physical implications of proposed individual source-neutrino associations. In particular, we focus on the IC170922A-TXS 0506+056 observation. Assuming a null model, we find a 7.6% chance of mistakenly identifying coincidences between γ-ray flares from blazars and neutrino alerts in 10-year surveys. We confirm that a blazar-neutrino connection based on the γ-ray flux is required to find a low chance coincidence probability and, therefore, a significant IC170922A-TXS 0506+056 association. We then assume this blazar-neutrino connection for the whole population and find that the ratio of neutrino to γ-ray fluxes must be ≲10⁻² in order not to overproduce the total number of neutrino alerts seen by IceCube. For the IC170922A-TXS 0506+056 association to make sense, we must either accept this low flux ratio or suppose that only some rare sub-population of blazars is capable of high-energy neutrino production. For example, if we consider neutrino production only in blazar flares, we expect a flux ratio between 10⁻³ and 10⁻¹ to be consistent with a single coincident observation of a neutrino alert and flaring γ-ray blazar. These constraints should be interpreted in the context of the likelihood models used to find the IC170922A-TXS 0506+056 association, which assume a fixed power-law neutrino spectrum of $E^{-2.13}$ for all blazars.
Aims: Stellar flares emit thermal and nonthermal radiation in the X-ray and ultraviolet (UV) regime. Although highly energetic radiation from flares is a potential threat to exoplanet atmospheres and may lead to surface sterilization, it might also provide the extra energy needed around low-mass stars to trigger and sustain prebiotic chemistry. Although the UV continuum emission is partly constrained by the flare temperature, few efforts have been made to determine flare temperatures for ultra-cool M-dwarfs. We investigate two flares on TRAPPIST-1, an ultra-cool dwarf star that hosts seven exoplanets of which three lie within its habitable zone. The flares are detected in all four passbands of the MuSCAT2 instrument allowing a determination of their temperatures and bolometric energies.
Methods: We analyzed the light curves obtained between 2016 and 2021 with the MuSCAT1 (multicolor simultaneous camera for studying atmospheres of transiting exoplanets) and MuSCAT2 instruments in the g, r, i, and zs filters. We conducted an automated flare search and visually confirmed possible flare events. The black body temperatures were inferred directly from the spectral energy distribution (SED) by extrapolating the filter-specific flux. We studied the temperature evolution, the global temperature, and the peak temperature of both flares.
Results: White-light M-dwarf flares are frequently described in the literature by a black body with a temperature of 9000-10 000 K. For the first time we infer effective black body temperatures of flares that occurred on TRAPPIST-1. The black body temperatures for the two TRAPPIST-1 flares derived from the SED are consistent with $T_{\rm SED} = 7940^{+430}_{-390}$ K and $T_{\rm SED} = 6030^{+300}_{-270}$ K. The flare black body temperatures at the peak are also calculated from the peak SED, yielding $T_{\rm SED, peak} = 13\,620^{+1520}_{-1220}$ K and $T_{\rm SED, peak} = 8290^{+660}_{-550}$ K. We update the flare frequency distribution of TRAPPIST-1 and discuss the impacts of lower black body temperatures on exoplanet habitability.
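The SED-based temperature estimate described above amounts to fitting a scaled Planck function to four broad-band fluxes. The sketch below uses hypothetical effective wavelengths and noiseless mock data purely for illustration (it is not the MuSCAT pipeline), and recovers an injected 7940 K black body:

```python
import math

# Fit a black body temperature to four broad-band fluxes (g, r, i, zs).
# Effective wavelengths and mock data are our assumptions for illustration.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23
BANDS = {"g": 477e-9, "r": 623e-9, "i": 763e-9, "zs": 905e-9}

def planck(lam, T):
    """Planck spectral radiance B_lambda(T) in SI units."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def fit_temperature(fluxes):
    """Grid fit of (scale, T); the overall scale is profiled out analytically."""
    best_T, best_chi2 = None, float("inf")
    for T in range(3000, 20000, 10):
        model = [planck(lam, T) for lam in BANDS.values()]
        scale = sum(f * m for f, m in zip(fluxes, model)) / sum(m * m for m in model)
        chi2 = sum((f - scale * m) ** 2 for f, m in zip(fluxes, model))
        if chi2 < best_chi2:
            best_T, best_chi2 = T, chi2
    return best_T

# Mock flare SED at the global temperature quoted above (7940 K):
mock = [1e-20 * planck(lam, 7940.0) for lam in BANDS.values()]
print("recovered T =", fit_temperature(mock), "K")
```

With real photometry the chi-square would be weighted by the measurement errors, and the quoted asymmetric uncertainties would follow from the chi-square profile around the minimum.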
Conclusions: We show that for the ultra-cool M-dwarf TRAPPIST-1 the flare black body temperatures associated with the total continuum emission are lower than, and not consistent with, the usually adopted assumption of 9000-10 000 K in the context of exoplanet research. For the peak emission, both flares seem to be consistent with the typical range from 9000 to 14 000 K. This could imply different and faster cooling mechanisms. Further multi-color observations are needed to investigate whether or not our observations are a general characteristic of ultra-cool M-dwarfs. This would have significant implications for the habitability of exoplanets around these stars because the UV surface flux is likely to be overestimated by the models with higher flare temperatures.
The photometry of the two flares in g, r, i, and zs filters is only available at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (ftp://130.79.128.5) or via https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/668/A111
In this chapter, we review the processes involved in the formation of planetesimals and comets. We will start with a description of the physics of dust grain growth and how this is mediated by gas-dust interactions in planet-forming disks. We will then delve into the various models of planetesimal formation, describing how these planetesimals form as well as their resulting structure. In doing so, we focus on and compare two paradigms for planetesimal formation: the gravitational collapse of particle over-densities (which can be produced by a variety of mechanisms) and the growth of particles into planetesimals via collisional and gravitational coagulation. Finally, we compare the predictions from these models with data collected by the Rosetta and New Horizons missions and that obtained via observations of distant Kuiper Belt Objects.
We explore the possible phases of a condensed dark matter (DM) candidate taken to be in the form of a fermion with a Yukawa coupling to a scalar particle, at zero temperature but at finite density. This theory essentially depends on only four parameters: the Yukawa coupling, the fermion mass, the scalar mediator mass, and the DM density. At low fermion densities we delimit the Bardeen-Cooper-Schrieffer (BCS), Bose-Einstein condensate (BEC), and crossover phases as a function of model parameters using the notion of scattering length. We further study the BCS phase by consistently including emergent effects such as the scalar-density condensate and superfluid gaps. Within the mean-field approximation, we derive the consistent set of gap equations, retaining their momentum dependence, and valid in both the nonrelativistic and relativistic regimes. We present numerical solutions to the set of gap equations, in particular when the mediator mass is smaller or larger than the DM mass. Finally, we discuss the equation of state and possible astrophysical implications for asymmetric DM.
Context. Winds in protoplanetary disks play an important role in their evolution and dispersal. However, the physical process that is actually driving the winds is still unclear (i.e. magnetically versus thermally driven), and can only be understood by directly confronting theoretical models with observational data.
Aims: We aim to interpret observational data for molecular hydrogen and atomic oxygen lines that show kinematic disk-wind signatures in order to investigate whether or not purely thermally driven winds are consistent with the data.
Methods: We use hydrodynamic photoevaporative disk-wind models and post-process them with a thermochemical model to produce synthetic observables for the spectral lines o-H2 1-0 S(1) at 2.12 µm and [OI] 1D2-3P2 at 0.63 µm and directly compare the results to a sample of observations.
Results: We find that our photoevaporative disk-wind model is consistent with the observed signatures of the blueshifted narrow low-velocity component (NLVC) - which is usually associated with slow disk winds - for both tracers. Only for one out of seven targets that show blueshifted NLVCs does the photoevaporative model fail to explain the observed line kinematics. Our results also indicate that interpreting spectral line profiles using simple methods, such as the thin-disk approximation, to determine the line emitting region is not appropriate for the majority of cases and can yield misleading conclusions. This is due to the complexity of the line excitation, wind dynamics, and the impact of the actual physical location of the line-emitting regions on the line profiles.
Conclusions: The photoevaporative disk-wind models are largely consistent with the studied observational data set, but it is not possible to clearly discriminate between different wind-driving mechanisms. Further improvements to the models are necessary, such as consistent modelling of the dynamics and chemistry, and detailed modelling of individual targets (i.e. disk structure) would be beneficial. Furthermore, a direct comparison of magnetically driven disk-wind models to the observational data set is necessary in order to determine whether or not spatially unresolved observations of multiple wind tracers are sufficient to discriminate between theoretical models.
Recently, two new families of non-linear massive electrodynamics have been proposed: Proca-Nuevo and Extended Proca-Nuevo. We explicitly show that both families are irremediably ghostful in two dimensions. Our calculations indicate the need to revisit the classical consistency of (Extended) Proca-Nuevo in higher dimensions before these settings can be regarded as ghostfree.
Neutron stars (NSs) and black holes (BHs) are born when the final collapse of the stellar core terminates the lives of stars more massive than about 9 Msun. This can trigger the powerful ejection of a large fraction of the star's material in a core-collapse supernova (CCSN), whose extreme luminosity is energized by the decay of radioactive isotopes such as 56Ni and 56Co. When evolving in close binary systems, the compact relics of such infernal catastrophes spiral towards each other on orbits gradually decaying by gravitational-wave emission. Ultimately, the violent collision of the two components forms a more massive, rapidly spinning remnant, again accompanied by the ejection of considerable amounts of matter. These merger events can be observed by high-energy bursts of gamma rays with afterglows and electromagnetic transients called kilonovae, which radiate the energy released in radioactive decays of freshly assembled rapid neutron-capture elements. By means of their mass ejection and the nuclear and neutrino reactions taking place in the ejecta, both CCSNe and compact object mergers (COMs) are prominent sites of heavy-element nucleosynthesis and play a central role in the cosmic cycle of matter and the chemical enrichment history of galaxies. The nuclear equation of state (EoS) of NS matter, from neutron-rich to proton-dominated conditions and with temperatures ranging from about zero to ~100 MeV, is a crucial ingredient in these astrophysical phenomena. It determines their dynamical processes, their remnant properties even at the level of deciding between NS or BH, and the properties of the associated emission of neutrinos, whose interactions govern the thermodynamic conditions and the neutron-to-proton ratio for nucleosynthesis reactions in the innermost ejecta. This chapter discusses corresponding EoS dependent effects of relevance in CCSNe as well as COMs. (slightly abridged)
In the context of the ESO-VLT Multi-Instrument Kinematic Survey (MIKiS) of Galactic globular clusters, here we present the line-of-sight velocity dispersion profile of NGC 6440, a massive globular cluster located in the Galactic bulge. By combining the data acquired with four different spectrographs, we obtained the radial velocity of a sample of $\sim 1800$ individual stars distributed over the entire cluster extension, from $\sim$0.1$"$ to 778$"$ from the center. Using a properly selected sample of member stars with the most reliable radial velocity measures, we derived the velocity dispersion profile up to 250$"$ from the center. The profile is well described by the same King model that best fits the projected star density distribution, with a constant inner plateau (at ${\sigma}_0 \sim $ 12 km s$^{-1}$) and no evidence of a central cusp or other significant deviations. Our data allowed us to study the presence of rotation only in the innermost regions of the cluster (r < 5$"$), revealing a well-defined pattern of ordered rotation with a position angle of the rotation axis of $\sim$132 $\pm$ 2° and an amplitude of $\sim$3 km s$^{-1}$ (corresponding to Vrot/${\sigma}_0 \sim$ 0.3). Also, a flattening of the system qualitatively consistent with the rotation signal has been detected in the central region.
It has been suggested that a trail of diffuse galaxies, including two dark-matter-deficient galaxies (DMDGs), in the vicinity of NGC 1052 formed because of a high-speed collision between two gas-rich dwarf galaxies, one bound to NGC 1052 and the other one on an unbound orbit. The collision compresses the gas reservoirs of the colliding galaxies, which in turn triggers a burst of star formation. In contrast, the dark matter and preexisting stars in the progenitor galaxies pass through it. Since the high pressures in the compressed gas are conducive to the formation of massive globular clusters (GCs), this scenario can explain the formation of DMDGs with large populations of massive GCs, consistent with the observations of NGC 1052-DF2 (DF2) and NGC 1052-DF4. A potential difficulty with this "mini bullet cluster" scenario is that the observed spatial distributions of GCs in DMDGs are extended. GCs experience dynamical friction causing their orbits to decay with time. Consequently, their distribution at formation should have been even more extended than that observed at present. Using a semianalytic model, we show that the observed positions and velocities of the GCs in DF2 imply that they must have formed at a radial distance of 5-10 kpc from the center of DF2. However, as we demonstrate, the scenario is difficult to reconcile with the fact that the strong tidal forces from NGC 1052 strip such extendedly distributed GCs from DF2, so that 33-59 massive GCs must form in the collision to explain the observations.
The recently developed B-Mesogenesis scenario predicts decays of B mesons into a baryon and a hypothetical dark antibaryon Ψ. We suggest a method to calculate the amplitude of the simplest exclusive decay mode, B+ → pΨ. Considering two models of B-Mesogenesis, we obtain the B → p hadronic matrix elements by applying QCD light-cone sum rules with the proton light-cone distribution amplitudes. We estimate the B+ → pΨ decay width as a function of the mass and effective coupling of the dark antibaryon.
We investigate the formation and evolution of 'primordial' dusty rings occurring in the inner regions of protoplanetary discs, with the help of long-term, coupled dust-gas, magnetohydrodynamic simulations. The simulations are global and start from the collapse phase of the parent cloud core, while the dead zone is calculated via an adaptive α formulation by taking into account the local ionization balance. The evolution of the dusty component includes its growth and back reaction on to the gas. Previously, using simulations with only a gas component, we showed that dynamical rings form at the inner edge of the dead zone. We find that when dust evolution, as well as magnetic field evolution in the flux-freezing limit, are included, the dusty rings formed are more numerous and span a larger radial extent in the inner disc, while the dead zone is more robust and persists for a much longer time. We show that these dynamical rings concentrate enough dust mass to become streaming unstable, which should result in rapid planetesimal formation even in the embedded phases of the system. The episodic outbursts caused by the magnetorotational instability have a significant impact on the evolution of the rings. The outbursts drain the inner disc of grown dust; however, the period between bursts is sufficiently long for planetesimal growth via the streaming instability. The dust mass contained within the rings is large enough to ultimately produce planetary systems via the core accretion scenario. Low-mass systems rarely undergo outbursts, and, thus, the conditions around such stars can be especially conducive to planet formation.
We present multiple results on the production of loosely bound molecules in bottomonium annihilations and e+e− collisions at √s = 10.58 GeV. We perform the first comprehensive test of several models for deuteron production against all the existing data in this energy region. We fit the free parameters of the models to reproduce the observed cross sections, and we predict the deuteron spectrum and the cross section for the e+e− → dd̄ + X process both at the ϒ(1,2,3S) resonances and at √s = 10.58 GeV. The predicted spectra show differences but are all compatible within the uncertainties of the existing data. These differences could be resolved if larger datasets are collected by the Belle II experiment. Fixing the source-size parameter to reproduce the deuteron data, we then predict the production rates for the H dibaryon and the hypertriton in this energy region using a simple coalescence model. Our prediction for the H-dibaryon production rate is below the limits set by the direct search at the Belle experiment, but within the range accessible to the Belle II experiment. The systematic effect due to the Monte Carlo modeling of quark and gluon fragmentation into baryons is reduced by deriving a new tuning of the PYTHIA 8 Monte Carlo generator using the available measurements of single- and double-particle spectra in ϒ decays.
The Euclid mission - with its spectroscopic galaxy survey covering a sky area of over 15 000 deg² in the redshift range 0.9 < z < 1.8 - will provide a sample of tens of thousands of cosmic voids. This paper explores for the first time the constraining power of the void size function on the properties of dark energy (DE) using a survey mock catalogue, the official Euclid Flagship simulation. We identified voids in the Flagship light-cone, which closely matches the features of the upcoming Euclid spectroscopic data set. We modelled the void size function with a state-of-the-art methodology: we relied on the volume-conserving (Vdn) model, a modification of the popular Sheth & van de Weygaert model for void number counts, extended by means of a linear function of the large-scale galaxy bias. We found excellent agreement between model predictions and the measured mock void number counts. We computed updated forecasts for the Euclid mission on DE from the void size function and provided reliable void number estimates to serve as a basis for further forecasts of cosmological applications using voids. We analysed two different cosmological models for DE: the first described by a constant DE equation-of-state parameter, w, and the second by a dynamic equation of state with coefficients w0 and wa. We forecast 1σ errors on w lower than 10% and estimated an expected figure of merit (FoM) for the dynamical DE scenario of FoM$_{w_0,w_a}$ = 17 when considering only the neutrino mass as an additional free parameter of the model. The analysis is based on conservative assumptions to ensure full robustness and is a pathfinder for future enhancements of the technique. Our results showcase the impressive constraining power of the void size function from the Euclid spectroscopic sample, both as a stand-alone probe and in combination with other Euclid cosmological probes.
This paper is published on behalf of the Euclid Consortium.
The peak-patch algorithm is used to identify the densest minicluster seeds in the initial axion density field simulated from string decay. The fate of these dense seeds is determined by tracking their subsequent gravitational collapse in cosmological N-body simulations. We find that miniclusters at late times are well described by Navarro-Frenk-White profiles, although for around 80% of simulated miniclusters a single power-law density profile ∝ r^{-2.9} is an equally good fit owing to the unresolved scale radius. Under the assumption that all miniclusters with an unresolved scale radius are described by a power-law plus axion-star density profile, we identify a significant number of miniclusters that might be dense enough to give rise to gravitational microlensing if the axion mass is 0.2 meV ≲ m_a ≲ 3 meV. Higher-resolution simulations resolving the inner structure and axion-star formation are necessary to explore this possibility further.
Polarization modulator units (PMUs) are a critical and powerful component of CMB polarization experiments, suppressing the 1/f noise component and mitigating systematic uncertainties induced by detector gain drifts and beam asymmetries. The LiteBIRD mission (expected launch in the late 2020s) will be equipped with three PMUs, one for each of its three telescopes, and aims at detecting primordial gravitational waves with a sensitivity of δr < 0.001. Each PMU is based on a continuously rotating transmissive half-wave plate held by a superconducting magnetic bearing in the 5 K environment. Achieving and monitoring the rotation requires a number of subsystems: a clamp-and-release system and motor coils for the rotation, and an optical encoder together with capacitive, Hall, and temperature sensors to monitor its dynamic stability. In this contribution, we present a preliminary thermal design of the harness configuration for the PMUs of the mid- and high-frequency telescopes. The design is based both on the stringent system constraint on the total thermal budget available for the PMUs (≲ 4 mW at 5 K) and on the requirements of the different subsystems: coil currents (up to 10 mA), optical fibers for the encoder readout, and a 25 MHz bias signal for the temperature and levitation monitors.
We provide a simple computation to estimate the probability of a given hierarchy between two scales. In particular, we work in a model endowed with a gauge symmetry and two scalar doublets. We start from a scale-invariant classical Lagrangian but, by taking into account the Coleman-Weinberg mechanism, obtain masses for the gauge bosons and the scalars. This approach typically yields a light (L) and a heavy (H) sector associated with the two different vacuum expectation values of the two scalars. We compute the size of the hypervolume of the model's parameter space associated with an interval of mass ratios between these two sectors. Defining the probability as proportional to this size, we conclude that the probabilities of very large hierarchies are not negligible in the type of models studied in this work.