Solutions to the vacuum Einstein field equations with a cosmological constant, such as de Sitter space and anti-de Sitter space, are fundamental to many cosmological and theoretical developments. It is also well known that complex structures admit metrics of this type; the most famous example is the complex projective space endowed with the Fubini-Study metric. In this work, we perform a systematic study of Einstein complex geometries derived from a logarithmic Kähler potential. Depending on the contributions to the argument of the logarithmic term, we distinguish among direct, inverted and hybrid coordinates. These are directly related to the signature of the metric and determine the maximum domain of the complex space where the geometry can be defined.
We make the case for the systematic, reliable preservation of event-wise data, derived data products, and executable analysis code. This preservation enables the long-term reuse of analyses, maximising the scientific impact of publicly funded particle-physics experiments. We cover the needs of both the experimental and theoretical particle-physics communities, and outline the goals and benefits that are uniquely enabled by analysis recasting and reinterpretation. We also discuss technical challenges and infrastructure needs, as well as sociological challenges and changes, and give summary recommendations to the particle-physics community.
The non-relativistic effective theory of dark matter-nucleon interactions depends on 28 coupling strengths for dark matter spin up to 1/2. Due to the vast parameter space of the effective theory, most experiments searching for dark matter interpret their results assuming that only one of the coupling strengths is non-zero. Dark matter models, however, generically lead in the non-relativistic limit to several interactions which interfere with one another; the published limits therefore cannot be straightforwardly applied to model predictions. We present a method to determine a rigorous upper limit on the dark matter-nucleon interaction strength that includes all possible interferences among operators. We illustrate the method by deriving model-independent upper limits on the interaction strengths from the null search results of XENON1T, PICO-60 and IceCube. For some interactions, the limits on the coupling strengths are relaxed by more than one order of magnitude. We also present a method to combine the results from different experiments, thus exploiting the synergy between different targets in exploring the parameter space of dark matter-nucleon interactions.
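To make the interference effect concrete, here is a minimal numerical sketch (entirely illustrative; the 2x2 rate matrix below is hypothetical, not the nuclear-response calculation of the paper): with interfering operators the expected event rate is a quadratic form in the coupling vector, and the most conservative limit follows from its smallest eigenvalue.

```python
import numpy as np

# Illustrative toy with two interfering operators: the expected rate is
# a quadratic form R(c) = c^T M c in the coupling vector c, with M
# positive semi-definite. The matrix entries here are hypothetical.
M = np.array([[4.0, 1.5],
              [1.5, 1.0]])

# For fixed overall strength |c| = 1, the weakest achievable rate --
# and hence the most conservative (rigorous) limit -- is set by the
# smallest eigenvalue of M, reached along the direction of maximal
# cancellation between operators.
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
c_min = eigvecs[:, 0]
rate_min = c_min @ M @ c_min           # equals eigvals[0]

# A single-operator analysis would instead use a diagonal entry
# M[i, i], which can overestimate the achievable rate and hence
# overstate the excluded coupling strength.
```

This is why allowing for interference can relax published single-operator limits, in some cases by more than an order of magnitude.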
Recent experimental results in $B$ physics from Belle, BaBar and LHCb suggest new physics (NP) in the weak $b\to c$ charged-current and $b\to s$ neutral-current processes. Here we focus on the charged-current case, specifically on the decay modes $B\to D^{*+}\ell^- \bar{\nu}$ with $\ell = e, \mu,$ and $\tau$. The world averages of the ratios $R_D$ and $R_{D^*}$ currently differ from the Standard Model (SM) by $3.4\sigma$, while $\Delta A_{FB} = A_{FB}(B\to D^{*} \mu\nu) - A_{FB} (B\to D^{*} e \nu)$ is found to be $4.1\sigma$ away from the SM prediction in an analysis of 2019 Belle data. These intriguing results suggest an urgent need for improved simulation and analysis techniques in $B\to D^{*+}\ell^- \bar{\nu}$ decays. Here we describe a Monte Carlo event-generator tool based on EVTGEN, developed to allow simulation of the NP signatures in $B\to D^*\ell^- \nu$ that arise from the interference between the SM and NP amplitudes. As a demonstration of the proposed approach, we exhibit some examples of NP couplings that are consistent with current data and could explain the $\Delta A_{FB}$ anomaly in $B\to D^*\ell^- \nu$ while remaining consistent with other constraints. We show that $\Delta$-type observables such as $\Delta A_{FB}$ and $\Delta S_5$ eliminate most QCD uncertainties from form factors and allow for clean measurements of NP. We introduce correlated observables that improve the sensitivity to NP. Finally, we discuss prospects for improved observables sensitive to NP couplings with the expected 50 ab$^{-1}$ of Belle II data, which appears ideally suited for this class of measurements.
We stress the importance of precise measurements of the rare decays $K^+\rightarrow\pi^+\nu\bar\nu$, $K_L\rightarrow\pi^0\nu\bar\nu$, $K_{L,S}\to\mu^+\mu^-$ and $K_{L,S}\to\pi^0\ell^+\ell^-$ in the search for new physics (NP). This includes both branching ratios and the distributions in $q^2$, the invariant mass-squared of the neutrino system in the case of $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$ and of the $\ell^+\ell^-$ system in the case of the remaining decays. In particular, the correlations between these observables and their correlations with the ratio $\varepsilon'/\varepsilon$ in $K_L\to\pi\pi$ decays, the CP-violating parameter $\varepsilon_K$ and the $K^0-\bar K^0$ mass difference $\Delta M_K$, should help to disentangle the nature of possible NP. We stress the strong sensitivity of all observables, with the exception of $\Delta M_K$, to the CKM parameter $|V_{cb}|$ and list a number of $|V_{cb}|$-independent ratios within the SM which exhibit rather different dependences on the angles $\beta$ and $\gamma$ of the unitarity triangle. The particular role of these decays in probing very short distance scales, far beyond the ones explored at the LHC, is emphasized. In this context the role of the Standard Model Effective Field Theory (SMEFT) is very important. We also briefly address the issue of the footprints of Majorana neutrinos in $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_L\rightarrow\pi^0\nu\bar\nu$.
We search for the signature of parity-violating physics in the cosmic microwave background, called cosmic birefringence, using the Planck data release 4. We initially find a birefringence angle of β = 0.30° ± 0.11° (68% C.L.) for nearly full-sky data. The values of β decrease as we enlarge the Galactic mask, which can be interpreted as the effect of polarized foreground emission. Two independent ways to model this effect are used to mitigate the systematic impact on β for different sky fractions. We choose not to assign cosmological significance to the measured value of β until we improve our knowledge of the foreground polarization.
Cross-correlations of galaxy positions and galaxy shears with maps of gravitational lensing of the cosmic microwave background (CMB) are sensitive to the distribution of large-scale structure in the Universe. Such cross-correlations are also expected to be immune to some of the systematic effects that complicate correlation measurements internal to galaxy surveys. We present measurements and modeling of the cross-correlations between galaxy positions and galaxy lensing measured in the first three years of data from the Dark Energy Survey with CMB lensing maps derived from a combination of data from the 2500 deg$^2$ SPT-SZ survey conducted with the South Pole Telescope and full-sky data from the Planck satellite. The CMB lensing maps used in this analysis have been constructed in a way that minimizes biases from the thermal Sunyaev Zel'dovich effect, making them well suited for cross-correlation studies. The total signal-to-noise of the cross-correlation measurements is 23.9 (25.7) when using a choice of angular scales optimized for a linear (nonlinear) galaxy bias model. We use the cross-correlation measurements to obtain constraints on cosmological parameters. For our fiducial galaxy sample, which consists of four bins of magnitude-selected galaxies, we find constraints of $\Omega_{m} = 0.272^{+0.032}_{-0.052}$ and $S_{8} \equiv \sigma_8 \sqrt{\Omega_{m}/0.3}= 0.736^{+0.032}_{-0.028}$ ($\Omega_{m} = 0.245^{+0.026}_{-0.044}$ and $S_{8} = 0.734^{+0.035}_{-0.028}$) when assuming linear (nonlinear) galaxy bias in our modeling. Considering only the cross-correlation of galaxy shear with CMB lensing, we find $\Omega_{m} = 0.270^{+0.043}_{-0.061}$ and $S_{8} = 0.740^{+0.034}_{-0.029}$. Our constraints on $S_8$ are consistent with recent cosmic shear measurements, but lower than the values preferred by primary CMB measurements from Planck.
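For reference, the $S_8$ parameter quoted above is defined as $S_8 = \sigma_8\sqrt{\Omega_m/0.3}$; a few lines of Python invert this definition for the fiducial linear-bias central values (central values only, no error propagation):

```python
from math import sqrt

# Definition used in the abstract: S_8 = sigma_8 * sqrt(Omega_m / 0.3).
# Inverting it gives the sigma_8 implied by the quoted central values.
def sigma8_from_S8(S8, Omega_m):
    return S8 / sqrt(Omega_m / 0.3)

sigma8 = sigma8_from_S8(0.736, 0.272)  # fiducial linear-bias result
print(round(sigma8, 3))  # 0.773
```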
The design of optimal test statistics is a key task in frequentist statistics, and for a number of scenarios optimal test statistics, such as the profile-likelihood ratio, are known. By turning this argument around, we can find the profile-likelihood ratio even in likelihood-free cases, where only samples from a simulator are available, by optimizing a test statistic within such scenarios. We propose a likelihood-free training algorithm that produces test statistics equivalent to the profile-likelihood ratio in cases where the latter is known to be optimal.
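As a toy illustration of the target quantity (a known-likelihood Gaussian example for intuition only, not the proposed likelihood-free algorithm itself):

```python
import numpy as np

# Toy known-likelihood case: for n i.i.d. draws from N(mu, sigma) with
# sigma known, the profile-likelihood-ratio test statistic for a
# hypothesized mean mu0 reduces to the closed form
#   t(mu0) = n * (xbar - mu0)^2 / sigma^2.
rng = np.random.default_rng(0)
sigma, mu_true, n = 1.0, 0.5, 1000
x = rng.normal(mu_true, sigma, size=n)

def t_plr(mu0):
    return n * (x.mean() - mu0) ** 2 / sigma**2

# t vanishes at the maximum-likelihood estimate and grows away from it;
# this is the shape a likelihood-free training procedure would have to
# recover from simulator samples alone, without access to t's formula.
```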
The persistent tensions between inclusive and exclusive determinations of $|V_{cb}|$ and $|V_{ub}|$ weaken the power of theoretically clean rare $K$ and $B$ decays in the search for new physics (NP). We demonstrate how this uncertainty can be practically removed by considering within the SM suitable ratios of various branching ratios. This includes the branching ratios for $K^+\to\pi^+\nu\bar\nu$, $K_{L}\to\pi^0\nu\bar\nu$, $K_S\to\mu^+\mu^-$, $B_{s,d}\to\mu^+\mu^-$ and $B\to K(K^*)\nu\bar\nu$. Also $\epsilon_K$, $\Delta M_d$, $\Delta M_s$ and the mixing-induced CP asymmetry $S_{\psi K_S}$, all already measured very precisely, play an important role in this analysis. The highlights of our analysis are 16 $|V_{cb}|$- and $|V_{ub}|$-independent ratios that often are independent of the CKM parameters or depend only on the angles $\beta$ and $\gamma$ in the Unitarity Triangle, with $\beta$ already precisely known and $\gamma$ to be measured precisely in the coming years by the LHCb and Belle II collaborations. Once $\gamma$ is measured precisely, these 16 ratios taken together are expected to be a powerful tool in the search for new physics. Assuming no NP in $|\epsilon_K|$ and $S_{\psi K_S}$, we determine independently of $|V_{cb}|$: $\mathcal{B}(K^+\to\pi^+\nu\bar\nu)_\text{SM}= (8.60\pm0.42)\times 10^{-11}$ and $\mathcal{B}(K_L\to\pi^0\nu\bar\nu)_\text{SM}=(2.94\pm 0.15)\times 10^{-11}$. These are the most precise determinations to date. Assuming no NP in $\Delta M_{s,d}$ allows us to obtain analogous results for all $B$ decay branching ratios considered in our paper without any CKM uncertainties.
Context. X-ray- and extreme-ultraviolet- (XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T Tauri stars may strongly impact disk evolution, affecting both gas and dust distributions. Small dust grains in the disk are entrained in the outflow and may produce a detectable signal. In this work, we investigate the possibility of detecting dusty outflows from transition disks with an inner cavity.
Aims: We compute dust densities for the wind regions of XEUV-irradiated transition disks and determine whether they can be observed at wavelengths 0.7 ≲ λobs [μm] ≲ 1.8 with current instrumentation.
Methods: We simulated dust trajectories on top of 2D hydrodynamical gas models of two transition disks with inner holes of 20 and 30 AU, irradiated by both X-ray and EUV spectra from a central T Tauri star. The trajectories and two different settling prescriptions for the dust distribution in the underlying disk were used to calculate wind density maps for individual grain sizes. Finally, the resulting dust densities were converted to synthetic observations in scattered and polarised light.
Results: For an XEUV-driven outflow around an M* = 0.7 M⊙ T Tauri star with LX = 2 × 10³⁰ erg s⁻¹, we find dust mass-loss rates Ṁdust ≲ 2.0 × 10⁻³ Ṁgas, and if we invoke vertical settling, the outflow is quite collimated. The synthesised images exhibit a distinct chimney-like structure. The relative intensity of the chimneys is low, but their detection may still be feasible with current instrumentation under optimal conditions.
Conclusions: Our results motivate observational campaigns aimed at the detection of dusty photoevaporative winds in transition disks using JWST NIRCam and SPHERE IRDIS.
We report the discovery of GJ 3929 b, a hot Earth-sized planet orbiting the nearby M3.5 V dwarf star GJ 3929 (G 180-18, TOI-2013). Joint modelling of photometric observations from TESS sectors 24 and 25 together with 73 spectroscopic observations from CARMENES and follow-up transit observations from SAINT-EX, LCOGT, and OSN yields a planet radius of Rb = 1.150 ± 0.040 R⊕, a mass of Mb = 1.21 ± 0.42 M⊕, and an orbital period of Pb = 2.6162745 ± 0.0000030 d. The resulting density of ρb = 4.4 ± 1.6 g cm−3 is compatible with the Earth's mean density of about 5.5 g cm−3. Due to the apparent brightness of the host star (J = 8.7 mag) and its small size, GJ 3929 b is a promising target for atmospheric characterisation with the JWST. Additionally, the radial velocity data show evidence for another planet candidate with Pc = 14.303 ± 0.035 d, which is likely unrelated to the stellar rotation period, Prot = 122 ± 13 d, which we determined from archival HATNet and ASAS-SN photometry combined with newly obtained TJO data.
RV data and stellar activity indices are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/659/A17
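As a quick consistency check of the bulk density quoted for GJ 3929 b (central values only; the paper's posterior-based error propagation will of course differ):

```python
# Back-of-envelope check: rho_b / rho_Earth = (M_b / M_Earth) / (R_b / R_Earth)^3.
RHO_EARTH = 5.51          # g/cm^3, Earth's mean density
R_b, M_b = 1.150, 1.21    # planet radius and mass in Earth units
rho_b = M_b / R_b**3 * RHO_EARTH
print(f"{rho_b:.1f} g/cm^3")  # 4.4, matching the quoted 4.4 +/- 1.6
```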
Joint analyses of cross-correlations between measurements of galaxy positions, galaxy lensing, and lensing of the cosmic microwave background (CMB) offer powerful constraints on the large-scale structure of the Universe. In a forthcoming analysis, we will present cosmological constraints from the analysis of such cross-correlations measured using Year 3 data from the Dark Energy Survey (DES), and CMB data from the South Pole Telescope (SPT) and Planck. Here we present two key ingredients of this analysis: (1) an improved CMB lensing map in the SPT-SZ survey footprint, and (2) the analysis methodology that will be used to extract cosmological information from the cross-correlation measurements. Relative to previous lensing maps made from the same CMB observations, we have implemented techniques to remove contamination from the thermal Sunyaev Zel'dovich effect, enabling the extraction of cosmological information from smaller angular scales of the cross-correlation measurements than in previous analyses with DES Year 1 data. We describe our model for the cross-correlations between these maps and DES data, and validate our modeling choices to demonstrate the robustness of our analysis. We then forecast the expected cosmological constraints from the galaxy survey-CMB lensing auto and cross-correlations. We find that the galaxy-CMB lensing and galaxy shear-CMB lensing correlations will on their own provide a constraint on $S_8=\sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ at the few percent level, providing a powerful consistency check for the DES-only constraints. We explore scenarios where external priors on shear calibration are removed, finding that the joint analysis of CMB lensing cross-correlations can provide constraints on the shear calibration amplitude at the 5 to 10% level.
In this white paper for the Snowmass process, we discuss the prospects of probing new physics explanations of the persistent rare $B$ decay anomalies with a muon collider. If the anomalies are indirect signs of heavy new physics, non-standard rates for $\mu^+ \mu^- \to b s$ production should be observed with high significance at a muon collider with center of mass energy of $\sqrt{s} = 10$ TeV. The forward-backward asymmetry of the $b$-jet provides diagnostics of the chirality structure of the new physics couplings. In the absence of a signal, $\mu^+ \mu^- \to b s$ can indirectly probe new physics scales as large as $86$ TeV. Beam polarization would have an important impact on the new physics sensitivity.
Mini-EUSO is a telescope launched on board the International Space Station in 2019 and currently located in the Russian section of the station. The main scientific objectives of the mission are the search for nuclearites and Strange Quark Matter, the study of atmospheric phenomena such as Transient Luminous Events, meteors and meteoroids, and the observation of sea bioluminescence and of artificial satellites and man-made space debris. It is also capable of observing Extensive Air Showers generated by Ultra-High-Energy Cosmic Rays with an energy above 10$^{21}$ eV and of detecting artificial showers generated with lasers from the ground. Mini-EUSO can map the night-time Earth in the UV range (290 - 430 nm), with a spatial resolution of about 6.3 km and a temporal resolution of 2.5 $\mu$s, observing our planet through a nadir-facing UV-transparent window in the Russian Zvezda module. The instrument, launched on 2019/08/22 from the Baikonur cosmodrome, is based on an optical system employing two Fresnel lenses and a focal surface composed of 36 Multi-Anode Photomultiplier tubes, 64 channels each, for a total of 2304 channels with single photon counting sensitivity and an overall field of view of 44$^{\circ}$. Mini-EUSO also contains two ancillary cameras to complement measurements in the near-infrared and visible ranges. In this paper we describe the detector and present the various phenomena observed in the first year of operation.
Mini-EUSO is a detector observing the Earth in the ultraviolet band from the International Space Station through a nadir-facing window, transparent to UV radiation, in the Russian Zvezda module. The Mini-EUSO main detector consists of an optical system with two Fresnel lenses and a focal surface composed of an array of 36 Hamamatsu Multi-Anode Photomultiplier tubes, for a total of 2304 pixels with single photon counting sensitivity. The telescope also contains two ancillary cameras, in the near-infrared and visible ranges, to complement measurements in these bandwidths. The instrument has a field of view of 44 degrees, a spatial resolution of about 6.3 km on the Earth's surface and of about 4.7 km on the ionosphere. The telescope detects UV emissions of cosmic, atmospheric and terrestrial origin on different time scales, from a few microseconds upwards. On the fastest timescale of 2.5 microseconds, Mini-EUSO is able to observe atmospheric phenomena such as Transient Luminous Events and in particular ELVES, which take place when an electromagnetic wave generated by intra-cloud lightning interacts with the ionosphere, ionizing it and producing apparently superluminal expanding rings several hundred km across and lasting about 100 microseconds. These highly energetic fast events have also been observed in conjunction with Terrestrial Gamma-Ray Flashes, and therefore a detailed study of their characteristics (speed, radius, energy, ...) is of crucial importance for the understanding of these phenomena. In this paper we present the ELVE-detection capabilities of Mini-EUSO and specifically the reconstruction and study of ELVE characteristics.
We present cosmological constraints from the analysis of angular power spectra of cosmic shear maps based on data from the first three years of observations by the Dark Energy Survey (DES Y3). Our measurements are based on the pseudo-$C_\ell$ method and offer a view complementary to that of the two-point correlation functions in real space, as the two estimators are known to compress and select Gaussian information in different ways, due to scale cuts. They may also be differently affected by systematic effects and theoretical uncertainties, such as baryons and intrinsic alignments (IA), making this analysis an important cross-check. In the context of $\Lambda$CDM, and using the same fiducial model as in the DES Y3 real space analysis, we find ${S_8 \equiv \sigma_8 \sqrt{\Omega_{\rm m}/0.3} = 0.793^{+0.038}_{-0.025}}$, which further improves to ${S_8 = 0.784\pm 0.026 }$ when including shear ratios. This constraint is within expected statistical fluctuations from the real space analysis, and in agreement with DES Y3 analyses of non-Gaussian statistics, but favors a slightly higher value of $S_8$, which reduces the tension with the Planck cosmic microwave background 2018 results from $2.3\sigma$ in the real space analysis to $1.5\sigma$ in this work. We explore less conservative IA models than the one adopted in our fiducial analysis, finding no clear preference for a more complex model. We also include small scales, using an increased Fourier mode cut-off up to $k_{\rm max}=5\,h\,{\rm Mpc}^{-1}$, which allows us to constrain baryonic feedback while leaving cosmological constraints essentially unchanged. Finally, we present an approximate reconstruction of the linear matter power spectrum at present time, which is found to be about 20% lower than predicted by Planck 2018, as reflected by the $1.5\sigma$ lower $S_8$ value.
The field of UHECRs (Ultra-High-Energy Cosmic Rays), and the understanding of particle acceleration in the cosmos as a key ingredient in the behaviour of the most powerful sources in the universe, is of utmost importance for astroparticle physics as well as for fundamental physics, and will improve our general understanding of the universe. The current main goals are to identify the sources of UHECRs and their composition; for this, increased statistics are required. A space-based detector for UHECR research has the advantage of a very large exposure and a uniform coverage of the celestial sphere. The aim of the JEM-EUSO program is to bring the study of UHECRs to space. The principle of observation is based on the detection of UV light emitted by isotropic fluorescence of atmospheric nitrogen excited by Extensive Air Showers (EAS) in the Earth's atmosphere, and of forward-beamed Cherenkov radiation reflected from the Earth's surface or dense cloud tops. In addition to the prime objective of UHECR studies, JEM-EUSO will pursue several secondary studies enabled by the instruments' unique capacity to detect very weak UV signals with extreme time resolution of around 1 microsecond: meteors, Transient Luminous Events (TLE), bioluminescence, maps of human-generated UV light, searches for Strange Quark Matter (SQM) and high-energy neutrinos, and more. The JEM-EUSO program includes several missions from the ground (EUSO-TA), from stratospheric balloons (EUSO-Balloon, EUSO-SPB1, EUSO-SPB2), and from space (TUS, Mini-EUSO), employing fluorescence detectors to demonstrate UHECR observation from space and to prepare the large-size missions K-EUSO and POEMMA. We review the current status of the program, the key results obtained so far by the different projects, and the perspectives for the near future.
We present new constraints on spectator axion-U(1) gauge field interactions during inflation using the latest Planck (PR4) and BICEP/Keck 2018 data releases. This model can source tensor perturbations from amplified gauge field fluctuations, driven by an axion rolling for a few e-folds during inflation. The gravitational waves sourced in this way have a strongly scale-dependent (and chiral) spectrum, with potentially visible contributions to large/intermediate-scale B-modes of the CMB. We first derive theoretical bounds on the model by imposing validity of the perturbative regime and negligible backreaction of the gauge field on the background dynamics. Then, we determine bounds from current CMB observations, adopting a frequentist profile-likelihood approach. We study the behaviour of the constraints for typical choices of the model's parameters, analyzing the impact of different dataset combinations. We find that the observational bounds are competitive with the theoretical ones, and together they exclude a significant portion of the model's parameter space. We argue that the parameter space nevertheless remains large and interesting for future CMB experiments targeting large/intermediate-scale B-modes.
Mini-EUSO is a small orbital telescope with a field of view of $44^{\circ}\times 44^{\circ}$, observing the night-time Earth mostly in the 320-420 nm band. Its time resolution, spanning from microseconds (triggered) to milliseconds (untriggered), and its ground coverage of more than $300\times 300$ km have already allowed it to register thousands of meteors. Such detections make the telescope a suitable tool in the search for hypothetical heavy compact objects, which would leave trails of light in the atmosphere due to their high density and speed. The most prominent example are nuclearites -- hypothetical lumps of strange quark matter that could be more stable and denser than ordinary nuclear matter. In this paper, we show potential limits on the flux of nuclearites after collecting 42 hours of observational data.
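For intuition on how such a flux limit scales with exposure, here is a back-of-envelope sketch; the area, solid angle and implicit 100% efficiency below are illustrative assumptions, not the paper's acceptance calculation.

```python
from math import pi

# With zero candidate events observed, the 90% C.L. Poisson upper limit
# on the expected count is 2.3, so the flux limit scales as
#   Phi < 2.3 / (A * Omega * T).
# All numbers below are rough, illustrative assumptions.
A = (300e5) ** 2      # cm^2: ~300 km x 300 km ground coverage
Omega = 2 * pi        # sr: assume a downward-going hemisphere
T = 42 * 3600         # s: 42 hours of observation
phi_limit = 2.3 / (A * Omega * T)
print(f"{phi_limit:.1e} cm^-2 sr^-1 s^-1")
```

The sketch shows the limit improving linearly with observation time, which is why accumulating exposure is the main driver of sensitivity.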
The Fluorescence Telescope is one of the two telescopes on board the Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2). EUSO-SPB2 is an ultra-long-duration balloon mission that aims at the detection of Ultra High Energy Cosmic Rays (UHECR) via the fluorescence technique (using a Fluorescence Telescope) and of Ultra High Energy (UHE) neutrinos via Cherenkov emission (using a Cherenkov Telescope). The mission is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). The Fluorescence Telescope is a second-generation instrument, preceded by the telescopes flown on the EUSO-Balloon and EUSO-SPB1 missions. It features Schmidt optics and has a 1-meter diameter aperture. The focal surface of the telescope is equipped with a 6912-pixel Multi-Anode Photomultiplier (MAPMT) camera covering a 37.4 x 11.4 degree Field of Regard. Such a large Field of Regard, together with a target flight duration of up to 100 days, would allow, for the first time from suborbital altitudes, the detection of UHECR fluorescence tracks. This contribution provides an overview of the instrument, including the current status of the telescope development.
The Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2) is under development, and will prototype instrumentation for future satellite-based missions, including the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). EUSO-SPB2 will consist of two telescopes. The first is a Cherenkov telescope (CT) being developed to identify and estimate the background sources for future below-the-limb very high energy (E>10 PeV) astrophysical neutrino observations, as well as above-the-limb cosmic ray induced signals (E>1 PeV). The second is a fluorescence telescope (FT) being developed for the detection of Ultra High Energy Cosmic Rays (UHECRs). In preparation for the expected launch in 2023, extensive simulations tuned by preliminary laboratory measurements have been performed to understand the FT capabilities. The energy threshold has been estimated at $10^{18.2}$ eV, with a maximum detection rate at $10^{18.6}$ eV when taking into account the shape of the UHECR spectrum. In addition, onboard software has been developed based on the simulations as well as experience with previous EUSO missions. This includes a level 1 trigger to be run on the computationally limited flight hardware, as well as a deep-learning-based prioritization algorithm to accommodate the balloon's telemetry budget. These techniques could also be used for future, space-based missions.
We present the status of the development of a Cherenkov telescope to be flown on a long-duration balloon flight, the Extreme Universe Space Observatory Super Pressure Balloon 2 (EUSO-SPB2). EUSO-SPB2 is an approved NASA balloon mission that is planned to fly in 2023 and is a precursor of the Probe of Extreme Multi-Messenger Astrophysics (POEMMA), a candidate for an Astrophysics probe-class mission. The purpose of the Cherenkov telescope on board EUSO-SPB2 is to classify known and unknown sources of backgrounds for future space-based neutrino detectors. Furthermore, we will use the Earth-skimming technique to search for Very-High-Energy (VHE) tau neutrinos below the limb (E > 10 PeV) and observe air showers from cosmic rays above the limb. The 0.785 m^2 Cherenkov telescope is equipped with a 512-pixel SiPM camera covering a 12.8° x 6.4° (Horizontal x Vertical) field of view. The camera signals are digitized with a 100 MS/s readout system. In this paper, we discuss the status of the telescope development, the camera integration, and simulation studies of the camera response.
Black holes are considered to be exceptional due to their time evolution and information processing. However, it was proposed recently that these properties are generic for objects, the so-called saturons, that attain the maximal entropy permitted by unitarity. In the present paper, we verify this connection within a renormalizable SU(N) invariant theory. We show that the spectrum of the theory contains a tower of bubbles representing bound states of SU(N) Goldstones. Despite the absence of gravity, a saturated bound state exhibits a striking correspondence with a black hole: Its entropy is given by the Bekenstein-Hawking formula; semi-classically, the bubble evaporates at a thermal rate with a temperature equal to its inverse radius; the information retrieval time is equal to Page's time. The correspondence goes through a trans-theoretic entity of Poincaré Goldstone. The black hole/saturon correspondence has important implications for black hole physics, both fundamental and observational.
The Extreme Universe Space Observatory - Super Pressure Balloon (EUSO-SPB2) mission will fly two custom telescopes featuring Schmidt optics to measure the Čerenkov and fluorescence emission of extensive air showers from cosmic rays at the PeV and EeV scale, and to search for tau neutrinos. Both telescopes have 1-meter diameter apertures and UV/UV-visible sensitivity. The Čerenkov telescope uses a bifocal mirror-segment alignment to distinguish between a direct cosmic ray hitting the camera and Čerenkov light arriving from outside the telescope. Telescope integration and laboratory calibration will be performed in Colorado. To estimate the point spread function and efficiency of the integrated telescopes, a test beam system that delivers a 1-meter diameter parallel beam of light is being fabricated. End-to-end tests of the fully integrated instruments will be carried out in a field campaign at dark sites in the Utah desert using cosmic rays, stars, and artificial light sources. Laser tracks have long been used to characterize the performance of fluorescence detectors in the field. For EUSO-SPB2, an improvement in the method that includes a correction for aerosol attenuation is anticipated by using a bi-dynamic Lidar configuration in which both the laser and the telescope are steerable. We plan to conduct these field tests in Fall 2021 and Spring 2022 to accommodate the scheduled launch of EUSO-SPB2 in 2023 from Wanaka, New Zealand.
The Extreme Universe Space Observatory on a Super Pressure Balloon II (EUSO-SPB2) is a second-generation stratospheric balloon instrument for the detection of Ultra High Energy Cosmic Rays (UHECRs, E > 1 EeV) via the fluorescence technique and of Very High Energy (VHE, E > 10 PeV) neutrinos via Cherenkov emission. EUSO-SPB2 is a pathfinder mission for instruments like the proposed Probe Of Extreme Multi-Messenger Astrophysics (POEMMA). The purpose of such a space-based observatory is to measure UHECRs and UHE neutrinos with high statistics and uniform exposure. EUSO-SPB2 is designed with two Schmidt telescopes, each optimized for its respective observational goals. The Fluorescence Telescope looks at the nadir to measure the fluorescence emission from UHECR-induced extensive air showers (EAS), while the Cherenkov Telescope is optimized for fast signals ($\sim$10 ns) and points near the Earth's limb. This allows for the measurement of Cherenkov light from EAS caused by Earth-skimming VHE neutrinos if pointed slightly below the limb, or from UHECRs if observing slightly above. The expected launch date of EUSO-SPB2 is Spring 2023 from Wanaka, NZ, with a target duration of up to 100 days. Such a flight would provide thousands of VHECR Cherenkov signals in addition to tens of UHECR fluorescence tracks. Neither of these kinds of events has been observed from orbital or suborbital altitudes before, making EUSO-SPB2 a crucial step towards a space-based instrument. It will also enhance the understanding of potential background signals for both detection techniques. This contribution provides a short overview of the detector and the current status of the mission, as well as its scientific goals.
It is commonly expected that a friction force on the bubble wall in a first-order phase transition can only arise from a departure from thermal equilibrium in the plasma. Recently, however, it was argued that an effective friction, scaling as $\gamma_w^2$ (with $\gamma_w$ being the Lorentz factor of the bubble-wall velocity), persists in local equilibrium. This was derived assuming constant plasma temperature and velocity throughout the wall. On the other hand, it is known that, at leading order in derivatives, the plasma in local equilibrium only contributes a correction to the zero-temperature potential in the equation of motion of the background scalar field. For a constant plasma temperature, the equation of motion is then completely analogous to the vacuum case, the only change being a modified potential, and thus no friction should appear. We resolve these apparent contradictions in the calculations and their interpretation, and show that the recently proposed effective friction in local equilibrium originates from inhomogeneous temperature distributions, such that the $\gamma_w^2$ scaling of the effective force is violated. Further, we propose a new matching condition for the hydrodynamic quantities in the plasma, valid in local equilibrium and tied to local entropy conservation. With this added constraint, bubble velocities in local equilibrium can be determined once the parameters in the equation of state are fixed; we use the bag equation of state to illustrate this point. We find that there is a critical value of the transition strength, $\alpha_{\rm crit}$, such that bubble walls run away for $\alpha > \alpha_{\rm crit}$.
The characteristics of the cosmic microwave background provide circumstantial evidence that the hot radiation-dominated epoch in the early universe was preceded by a period of inflationary expansion. Here, we show how a measurement of the stochastic gravitational wave background can reveal the cosmic history and the physical conditions during inflation, subsequent pre- and re-heating, and the beginning of the hot big bang era. This is exemplified with a particularly well-motivated and predictive minimal extension of the Standard Model, which is known to provide a complete model of particle physics up to the Planck scale and of cosmology back to inflation.
Planet-forming disks are not isolated systems. Their interaction with the surrounding medium affects their mass budget and chemical content. In the context of the ALMA-DOT program, we obtained high-resolution maps of assorted lines from six disks that are still partly embedded in their natal envelope. In this work, we examine the SO and SO2 emission that is detected from four sources: DG Tau, HL Tau, IRAS 04302+2247, and T Tau. The comparison with CO, HCO+, and CS maps reveals that the SO and SO2 emission originates at the intersection between extended streamers and the planet-forming disk. Two targets, DG Tau and HL Tau, offer clear cases of inflowing material inducing an accretion shock on the disk material. The measured rotational temperatures and radial velocities are consistent with this view. In contrast to younger Class 0 sources, these shocks are confined to the specific disk region impacted by the streamer. In HL Tau, the known accreting streamer induces a shock in the disk outskirts, and the released SO and SO2 molecules spiral toward the star in a few hundred years. These results suggest that shocks induced by late accreting material may be common in the disks of young star-forming regions with possible consequences for the chemical composition and mass content of the disk. They also highlight the importance of SO and SO2 line observations in probing accretion shocks from a larger sample.
The reduced datacubes are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A104
Numerical general relativistic radiative magnetohydrodynamic simulations of accretion discs around a stellar-mass black hole with a luminosity above 0.5 of the Eddington value reveal their stratified, elevated vertical structure. We refer to these thermally stable numerical solutions as puffy discs. Above a dense and geometrically thin core of dimensionless thickness h/r ∼ 0.1, crudely resembling a classic thin accretion disc, a puffed-up, geometrically thick layer of lower density is formed. This puffy layer corresponds to h/r ∼ 1.0, with a very limited dependence of the dimensionless thickness on the mass accretion rate. We discuss the observational properties of puffy discs, particularly the geometrical obscuration of the inner disc by the elevated puffy region at higher observing inclinations, and collimation of the radiation along the accretion disc spin axis, which may explain the apparent super-Eddington luminosity of some X-ray objects. We also present synthetic spectra of puffy discs, and show that they are qualitatively similar to those of a Comptonized thin disc. We demonstrate that the existing xspec spectral fitting models provide good fits to synthetic observations of puffy discs, but cannot correctly recover the input black hole spin. The puffy region remains optically thick to scattering; in its spectral properties, the puffy disc roughly resembles that of a warm corona sandwiching the disc core. We suggest that puffy discs may correspond to X-ray binary systems of luminosities above 0.3 of the Eddington luminosity in the intermediate spectral states.
Current and future cosmological analyses with Type Ia Supernovae (SNe Ia) face three critical challenges: i) measuring redshifts from the supernova or its host galaxy; ii) classifying SNe without spectra; and iii) accounting for correlations between the properties of SNe Ia and their host galaxies. We present here a novel approach that addresses each challenge. In the context of the Dark Energy Survey (DES), we analyze a SNIa sample with host galaxies in the redMaGiC galaxy catalog, a selection of Luminous Red Galaxies. Photo-$z$ estimates for these galaxies are expected to be accurate to $\sigma_{\Delta z/(1+z)}\sim0.02$. The DES-5YR photometrically classified SNIa sample contains approximately 1600 SNe and 125 of these SNe are in redMaGiC galaxies. We demonstrate that redMaGiC galaxies almost exclusively host SNe Ia, reducing concerns with classification uncertainties. With this subsample, we find similar Hubble scatter (to within $\sim0.01$ mag) using photometric redshifts in place of spectroscopic redshifts. With detailed simulations, we show the bias due to using photo-$z$s from redMaGiC host galaxies on the measurement of the dark energy equation-of-state $w$ is up to $\Delta w \sim 0.01-0.02$. With real data, we measure a difference in $w$ when using redMaGiC photometric redshifts versus spectroscopic redshifts of $\Delta w = 0.005$. Finally, we discuss how SNe in redMaGiC galaxies appear to be a more standardizable population due to a weaker relation between color and luminosity ($\beta$) compared to the DES-3YR population by $\sim5\sigma$; this finding is consistent with predictions that redMaGiC galaxies exhibit lower reddening ratios ($\textrm{R}_\textrm{V}$) than the general population of SN host galaxies. These results establish the feasibility of performing redMaGiC SN cosmology with photometric survey data in the absence of spectroscopic data.
Catalytic particles are spatially organized in a number of biological systems across different length scales, from enzyme complexes to metabolically coupled cells. Despite operating on different scales, these systems all feature localized reactions involving partially hindered diffusive transport, which is determined by the collective arrangement of the catalysts. Yet it remains largely unexplored how different arrangements affect the interplay between the reaction and transport dynamics, which ultimately determines the flux through the reaction pathway. Here we show that two fundamental trade-offs arise, the first between efficient inter-catalyst transport and the depletion of substrate, and the second between steric confinement of intermediate products and the accessibility of catalysts to substrate. We use a model reaction pathway to characterize the general design principles for the arrangement of catalysts that emerge from the interplay of these trade-offs. We find that the question of optimal catalyst arrangements generalizes the well-known Thomson problem of electrostatics.
The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ decay is observed using proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb$^{-1}$. The decay is reconstructed partially: the photon from the $ {\varXi}_c^{\prime +}\to {\varXi}_c^{+}\gamma $ decay is not reconstructed, and the $pK^{-}\pi^{+}$ final state of the $ {\varXi}_c^{+} $ baryon is employed. The $ {\varXi}_{cc}^{++}\to {\varXi}_c^{\prime +}{\pi}^{+} $ branching fraction relative to that of the $ {\varXi}_{cc}^{++}\to {\varXi}_c^{+}{\pi}^{+} $ decay is measured to be 1.41 ± 0.17 ± 0.10, where the first uncertainty is statistical and the second systematic.
We propose a search for long-lived axion-like particles (ALPs) in exotic top decays. Flavour-violating ALPs arise in the low-energy effective theories of various new-physics scenarios, such as t-channel dark sectors or Froggatt-Nielsen models. In this case the top quark may decay to an ALP and an up or charm quark. For masses in the few-GeV range, the ALP is long-lived across most of the viable parameter space, suggesting a dedicated search. We propose to search for these long-lived ALPs in $ t\overline{t} $ events, using one top quark as a trigger. We focus on ALPs decaying in the hadronic calorimeter, and show that the ratio of energy deposits in the electromagnetic and hadronic calorimeters, as well as track vetoes, can efficiently suppress Standard Model backgrounds. Our proposed search can probe exotic top branching ratios smaller than $10^{-4}$ with a conservative strategy at the upcoming LHC run, and potentially below the $10^{-7}$ level with more advanced methods. Finally, we also show that measurements of single top production probe these branching ratios in the very short and very long lifetime limits at the $10^{-3}$ level.
Time irreversibility is a distinctive feature of nonequilibrium dynamics, and several measures of irreversibility have been introduced to assess the distance from thermal equilibrium of a stochastically driven system. While the dynamical noise is often approximated as white, in many real applications the time correlations of the random forces can be significantly long-lived compared to the relaxation times of the driven system. We analyze the effects of temporal correlations in the noise on commonly used measures of irreversibility and demonstrate how the theoretical framework for white-noise-driven systems naturally generalizes to the case of colored noise. Specifically, we express the autocorrelation function, the area-enclosing rates, and the mean phase-space velocity in terms of solutions of a Lyapunov equation, as well as in terms of their white-noise limit values.
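The Lyapunov-equation structure mentioned above can be illustrated with a minimal white-noise sketch (hypothetical drift and diffusion matrices, not the paper's setup; the antisymmetric part of $AC$ as the area-enclosing rate is the standard result for linear systems and is used here only as an assumed example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Minimal sketch for a linear system driven by white noise (the colored-noise
# case enlarges the state space but leads to an equation of the same form):
#   dx = A x dt + B dW,  with diffusion matrix D = B B^T.
# The steady-state covariance C solves the Lyapunov equation
#   A C + C A^T + D = 0.
A = np.array([[-1.0, 0.5],
              [-0.5, -1.0]])   # non-normal drift -> nonequilibrium steady state
D = np.eye(2)                  # white-noise diffusion matrix

# scipy solves A X + X A^T = Q, so pass Q = -D.
C = solve_continuous_lyapunov(A, -D)

# The antisymmetric part of A C is proportional (up to orientation) to the
# steady-state probability-current circulation, i.e. the area-enclosing rate
# used as an irreversibility measure; it vanishes at equilibrium.
area_rate = 0.5 * (A @ C - C @ A.T)
print(C)           # covariance: 0.5 * identity for this choice of A, D
print(area_rate)   # nonzero off-diagonal entries signal irreversibility
```

For this drift the covariance is isotropic while the circulation is not, which is exactly the separation of reversible and irreversible dynamics the measures above quantify.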
We discuss peculiarities that arise in the computation of real-emission contributions to observables that contain Heaviside functions. A prominent example of such a case is the zero-jettiness soft function in SCET, whose calculation at next-to-next-to-next-to-leading order in perturbative QCD is an interesting problem. Since the zero-jettiness soft function distinguishes between emissions into different hemispheres, its definition involves θ-functions of light-cone components of emitted soft partons. This prevents a direct use of multi-loop methods, based on reverse unitarity, for computing the zero-jettiness soft function in high orders of perturbation theory. We propose a way to bypass this problem and illustrate its effectiveness by computing various non-trivial contributions to the zero-jettiness soft function at NNLO and N3LO in perturbative QCD.
We present a calculation of the helicity amplitudes for the process gg → γγ in three-loop massless QCD. We employ a recently proposed method to calculate scattering amplitudes in the 't Hooft-Veltman scheme that reduces the amount of spurious non-physical information needed at intermediate stages of the computation. Our analytic results for the three-loop helicity amplitudes are remarkably compact, and can be efficiently evaluated numerically. This calculation provides the last missing building block for the computation of NNLO QCD corrections to diphoton production in gluon fusion.
Context. Classical Cepheids are primary distance indicators and a crucial stepping stone in determining the present-day value of the Hubble constant H0 to the precision and accuracy required to constrain apparent deviations from the ΛCDM Concordance Cosmological Model.
Aims: We measured the iron and oxygen abundances of a statistically significant sample of 89 Cepheids in the Large Magellanic Cloud (LMC), one of the anchors of the local distance scale, quadrupling the prior sample and including 68 of the 70 Cepheids used to constrain H0 by the SH0ES program. The goal is to constrain the extent to which the luminosity of Cepheids is influenced by their chemical composition, which is an important contributor to the uncertainty on the determination of the Hubble constant itself and a critical factor in the internal consistency of the distance ladder.
Methods: We derived stellar parameters and chemical abundances from a self-consistent spectroscopic analysis based on the equivalent widths of absorption lines.
Results: The iron distribution of Cepheids in the LMC can be very accurately described by a single Gaussian with a mean [Fe/H] = −0.409 ± 0.003 dex and σ = 0.076 ± 0.003 dex. We estimate a systematic uncertainty on the absolute mean values of 0.1 dex. The width of the distribution is fully compatible with the measurement error and supports the low dispersion of 0.069 mag seen in the near-infrared Hubble Space Telescope LMC period-luminosity relation. The uniformity of the abundance has the important consequence that the LMC Cepheids alone cannot provide any meaningful constraint on the dependence of the Cepheid period-luminosity relation on chemical composition at any wavelength. This revises a prior claim based on a small sample of 22 LMC Cepheids that there was little dependence (or uncertainty) between composition and near-infrared luminosity, a conclusion which would produce an apparent conflict between anchors of the distance ladder with different mean abundance. The chemical homogeneity of the LMC Cepheid population makes it an ideal environment in which to calibrate the metallicity dependence between the more metal-poor Small Magellanic Cloud and metal-rich Milky Way and NGC 4258.
Full Tables 1-8 and Appendix B are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/cat/J/A+A/658/A29
Based on observations collected at the European Southern Observatory under ESO programmes 66.D-0571 and 106.21ML.003.
We present a novel double-copy prescription for gauge fields at the Lagrangian level and apply it to the original double copy, couplings to matter and the soft theorem. The Yang-Mills Lagrangian in light-cone gauge is mapped directly to the N = 0 supergravity Lagrangian in light-cone gauge to trilinear order, and we show that the obtained result is manifestly equivalent to Einstein gravity at tree level up to this order. The application of the double-copy prescription to couplings to matter is exemplified by scalar and fermionic QCD and finally the soft-collinear effective QCD Lagrangian. The mapping of the latter yields an effective description of an energetic Dirac fermion coupled to the graviton, Kalb-Ramond, and dilaton fields, from which the fermionic gravitational soft and next-to-soft theorems follow.
We present a coherent study of the impact of neutrino interactions on the r-process element nucleosynthesis and the heating rate produced by the radioactive elements synthesized in the dynamical ejecta of neutron star-neutron star (NS-NS) mergers. We have studied the material ejected from four NS-NS merger systems based on hydrodynamical simulations which handle neutrino effects in an elaborate way by including neutrino equilibration with matter in optically thick regions and re-absorption in optically thin regions. We find that the neutron richness of the dynamical ejecta is significantly affected by the neutrinos emitted by the post-merger remnant, in particular when compared to a case neglecting all neutrino interactions. Our nucleosynthesis results show that a solar-like distribution of r-process elements with mass numbers $A \gtrsim 90$ is produced, including a significant enrichment in Sr and a reduced production of actinides compared to simulations without inclusion of the nucleonic weak processes. The composition of the dynamically ejected matter as well as the corresponding rate of radioactive decay heating are found to be rather independent of the system mass asymmetry and the adopted equation of state. This approximate degeneracy in abundance pattern and heating rates can be favourable for extracting the ejecta properties from kilonova observations, at least if the dynamical component dominates the overall ejecta. Part II of this work will study the light curve produced by the dynamical ejecta of our four NS merger models.
Observations of the SNR Cassiopeia A (Cas A) show asymmetries in the reverse shock that cannot be explained by models describing a remnant expanding through a spherically symmetric wind of the progenitor star. We investigate whether a past interaction of Cas A with a massive asymmetric shell of the circumstellar medium can account for the observed asymmetries. We performed 3D MHD simulations that describe the remnant evolution from the SN to its interaction with a circumstellar shell. The initial conditions are provided by a 3D neutrino-driven SN model whose morphology resembles Cas A. We explored the parameter space of the shell, searching for a set of parameters able to produce reverse shock asymmetries at the age of 350 years analogous to those observed in Cas A. The interaction of the remnant with the shell can match the observed reverse shock asymmetries if the shell was asymmetric, with its densest portion on the near side, to the northwest (NW). According to our models, the shell was thin, with a radius of 1.5 pc. The reverse shock shows the following asymmetries at the age of Cas A: i) it moves inward in the observer frame in the NW region, while it moves outward in other regions; ii) the geometric center of the reverse shock is offset to the NW by 0.1 pc from the geometric center of the forward shock; iii) the reverse shock in the NW region has enhanced nonthermal emission because, there, the ejecta enter the reverse shock with a higher relative velocity (between 4000 and 7000 km/s) than in other regions (below 2000 km/s). Our findings suggest an interaction of Cas A with an asymmetric circumstellar shell between 180 and 240 years after the SN event. We suggest that the shell was, most likely, the result of a massive eruption from the progenitor star that occurred about $10^5$ years prior to core collapse. We estimate a total mass of the shell of approximately 2.6 M⊙.
We develop a novel data-driven method for generating synthetic optical observations of galaxy clusters. In cluster weak lensing, the interplay between analysis choices and systematic effects related to source galaxy selection, shape measurement, and photometric redshift estimation can be best characterized in end-to-end tests going from mock observations to recovered cluster masses. To create such test scenarios, we measure and model the photometric properties of galaxy clusters and their sky environments from the Dark Energy Survey Year 3 (DES Y3) data in two bins of cluster richness $\lambda \in [30; 45)$, $\lambda \in [45; 60)$ and three bins in cluster redshift ($z\in [0.3; 0.35)$, $z\in [0.45; 0.5)$ and $z\in [0.6; 0.65)$. Using deep-field imaging data, we extrapolate galaxy populations beyond the limiting magnitude of DES Y3 and calculate the properties of cluster member galaxies via statistical background subtraction. We construct mock galaxy clusters as random draws from a distribution function, and render mock clusters and line-of-sight catalogues into synthetic images in the same format as actual survey observations. Synthetic galaxy clusters are generated from real observational data, and thus are independent from the assumptions inherent to cosmological simulations. The recipe can be straightforwardly modified to incorporate extra information, and correct for survey incompleteness. New realizations of synthetic clusters can be created at minimal cost, which will allow future analyses to generate the large number of images needed to characterize systematic uncertainties in cluster mass measurements.
The majority of existing results for the kilonova (or macronova) emission from material ejected during a neutron-star (NS) merger are based on (quasi-) one-zone models or manually constructed toy-model ejecta configurations. In this study, we present a kilonova analysis of the material ejected during the first $\sim 10\,$ms of a NS merger, called dynamical ejecta, using directly the outflow trajectories from general relativistic smoothed-particle hydrodynamics simulations, including a sophisticated neutrino treatment and the corresponding nucleosynthesis results, which were presented in Part I of this study. We employ a multidimensional two-moment radiation transport scheme with approximate M1 closure to evolve the photon field, and use a heuristic prescription for the opacities found by calibration with atomic-physics-based reference results. We find that the photosphere is generically ellipsoidal but augmented with small-scale structure, and produces emission that is about 1.5-3 times stronger towards the pole than the equator. The kilonova typically peaks after $0.7\!-\!1.5\,$d in the near-infrared frequency regime with luminosities between $3\!-\!7\times 10^{40}\,$erg s$^{-1}$ and at photospheric temperatures of $2.2\!-\!2.8\times 10^3\,$K. A softer equation of state or higher binary-mass asymmetry leads to a longer and brighter signal. Significant variations of the light curve are also obtained for models with artificially modified electron fractions, emphasizing the importance of reliable neutrino-transport modelling. None of the models investigated here, which only consider dynamical ejecta, produces a transient as bright as AT2017gfo. The near-infrared peak of our models is incompatible with the early blue component of AT2017gfo.
MadJax is a tool for generating and evaluating differentiable matrix elements of high energy scattering processes. As such, it is a step towards a differentiable programming paradigm in high energy physics that facilitates the incorporation of high energy physics domain knowledge, encoded in simulation software, into gradient based learning and optimization pipelines. MadJax comprises two components: (a) a plugin to the general purpose matrix element generator MadGraph that integrates matrix element and phase space sampling code with the JAX differentiable programming framework, and (b) a standalone wrapping API for accessing the matrix element code and its gradients, which are computed with automatic differentiation. The MadJax implementation and example applications of simulation based inference and normalizing flow based matrix element modeling, with capabilities enabled uniquely with differentiable matrix elements, are presented.
To a good approximation, on large scales, the evolved two-point correlation function of biased tracers is related to the initial one by a convolution with a smearing kernel. For Gaussian initial conditions, the smearing kernel is Gaussian, so if the initial correlation function is parametrized using simple polynomials, then the evolved correlation function is a sum of generalized Laguerre functions of half-integer order. This motivates an analytic "Laguerre reconstruction" algorithm which previous work has shown is fast and accurate. This reconstruction requires as input the width of the smearing kernel. We show that the method can be extended to estimate the width of the smearing kernel from the same dataset. This estimate, and associated uncertainties, can then be used to marginalize over the distribution of reconstructed shapes and hence provide error estimates on the value of the distance scale. This procedure is not tied to a particular cosmological model. We also show that if, instead, we parametrize the evolved correlation function using simple polynomials, then the initial one is a sum of Hermite polynomials, again enabling fast and accurate deconvolution. If one is willing to use constraints on the smearing scale from other datasets, then marginalizing over its value is simpler for this latter, "Hermite" reconstruction, potentially providing further speed-ups in cosmological analyses.
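The smearing picture underlying the reconstruction can be sketched numerically (a toy illustration with invented numbers; the paper's actual parametrization uses simple polynomials and the analytic Laguerre/Hermite machinery rather than a brute-force convolution):

```python
import numpy as np

# Toy forward model for the smearing described above: the evolved
# correlation function is the initial one convolved with a Gaussian
# smearing kernel.  All numbers here are illustrative only.
sigma = 5.0                              # smearing-kernel width
r = np.linspace(-200.0, 200.0, 4001)     # separation grid
dr = r[1] - r[0]

# Toy "initial" correlation function: a BAO-like bump of width 10 at |r| = 100.
xi_init = np.exp(-0.5 * (np.abs(r) - 100.0) ** 2 / 10.0**2)

kernel = np.exp(-0.5 * (r / sigma) ** 2)
kernel /= kernel.sum() * dr              # normalize kernel to unit integral

xi_evolved = np.convolve(xi_init, kernel, mode="same") * dr

# Convolving Gaussians adds variances: the bump broadens from width 10
# to sqrt(10^2 + 5^2) ~ 11.2 while its integral is preserved.
print(xi_init.sum() * dr, xi_evolved.sum() * dr)   # nearly equal
print(xi_evolved.max())                            # ~ 10 / sqrt(125) ~ 0.894
```

Reconstruction is the inverse problem: given `xi_evolved`, recover `xi_init` and, as the abstract notes, estimate `sigma` itself from the same data.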
Euclid is poised to survey galaxies across a cosmological volume of unprecedented size, providing observations of more than a billion objects distributed over a third of the full sky. Approximately 20 million of these galaxies will have spectroscopy available, allowing us to map the three-dimensional large-scale structure of the Universe in great detail. This paper investigates prospects for the detection of cosmic voids therein and the unique benefit they provide for cosmological studies. In particular, we study the imprints of dynamic (redshift-space) and geometric (Alcock-Paczynski) distortions of average void shapes and their constraining power on the growth of structure and cosmological distance ratios. To this end, we made use of the Flagship mock catalog, a state-of-the-art simulation of the data expected to be observed with Euclid. We arranged the data into four adjacent redshift bins, each of which contains about 11 000 voids, and we estimated the stacked void-galaxy cross-correlation function in every bin. Fitting a linear-theory model to the data, we obtained constraints on $f/b$ and $D_{\rm M}H$, where $f$ is the linear growth rate of density fluctuations, $b$ the galaxy bias, $D_{\rm M}$ the comoving angular diameter distance, and $H$ the Hubble rate. In addition, we marginalized over two nuisance parameters included in our model to account for unknown systematic effects in the analysis. With this approach, Euclid will be able to reach a relative precision of about 4% on measurements of $f/b$ and 0.5% on $D_{\rm M}H$ in each redshift bin. Better modeling or calibration of the nuisance parameters may further increase this precision to 1% and 0.4%, respectively. Our results show that the exploitation of cosmic voids in Euclid will provide competitive constraints on cosmology even as a stand-alone probe. For example, the equation-of-state parameter, $w$, for dark energy will be measured with a precision of about 10%, consistent with previous, more approximate forecasts.
This paper is published on behalf of the Euclid Consortium.
Galaxy cluster masses, rich with cosmological information, can be estimated from internal dark matter (DM) velocity dispersions, which in turn can be observationally inferred from satellite galaxy velocities. However, galaxies are biased tracers of the DM, and the bias can vary over host halo and galaxy properties as well as time. We precisely calibrate the velocity bias $b_v$, defined as the ratio of galaxy and DM velocity dispersions, as a function of redshift, host halo mass, and galaxy stellar mass threshold ($M_{\rm \star , sat}$), for massive haloes ($M_{\rm 200c}\gt 10^{13.5} \, {\rm M}_\odot$) from five cosmological simulations: IllustrisTNG, Magneticum, Bahamas + Macsis, The Three Hundred Project, and MultiDark Planck-2. We first compare scaling relations for galaxy and DM velocity dispersion across simulations; the former is estimated using a new ensemble velocity likelihood method that is unbiased for low galaxy counts per halo, while the latter uses a local linear regression. The simulations show consistent trends of $b_v$ increasing with $M_{\rm 200c}$ and decreasing with redshift and $M_{\rm \star , sat}$. The ensemble-estimated theoretical uncertainty in $b_v$ is 2-3 per cent, but becomes percent-level when considering only the three highest resolution simulations. We update the mass-richness normalization for an SDSS redMaPPer cluster sample, and find our improved $b_v$ estimates reduce the normalization uncertainty from 22 to 8 per cent, demonstrating that dynamical mass estimation is competitive with weak lensing mass estimation. We discuss necessary steps for further improving this precision. Our estimates for $b_v(M_{\rm 200c}, M_{\rm \star , sat}, z)$ are made publicly available.
We derive supernova (SN) bounds on muon-philic bosons, taking advantage of the recent emergence of muonic SN models. Our main innovations are to consider scalars $\phi$ in addition to pseudoscalars $a$ and to include systematically the generic two-photon coupling $G_{\gamma\gamma}$ implied by a muon triangle loop. This interaction allows for Primakoff scattering and radiative boson decays. The globular-cluster bound $G_{\gamma\gamma} < 0.67\times10^{-10}\,{\rm GeV}^{-1}$ carries over to the muonic Yukawa couplings as $g_a < 3.1\times10^{-9}$ and $g_\phi < 4.6\times10^{-9}$ for $m_{a,\phi} \lesssim 100$ keV, so SN arguments become interesting mainly for larger masses. If bosons escape freely from the SN core, the main constraints originate from SN 1987A $\gamma$ rays and the diffuse cosmic $\gamma$-ray background. The latter allows at most $10^{-4}$ of a typical total SN energy of $E_{\rm SN} \simeq 3\times10^{53}$ erg to show up as $\gamma$ rays, for $m_{a,\phi} \gtrsim 100$ keV implying $g_a \lesssim 0.9\times10^{-10}$ and $g_\phi \lesssim 0.4\times10^{-10}$. In the trapping regime the bosons emerge as quasi-thermal radiation from a region near the neutrino sphere and match $L_\nu$ for $g_{a,\phi} \simeq 10^{-4}$. However, the $2\gamma$ decay is so fast that all the energy is dumped into the surrounding progenitor-star matter, whereas at most $10^{-2}E_{\rm SN}$ may show up in the explosion. To suppress boson emission below this level we need yet larger couplings, $g_a \gtrsim 2\times10^{-3}$ and $g_\phi \gtrsim 4\times10^{-3}$. Muonic scalars can explain the muon magnetic-moment anomaly for $g_\phi \simeq 0.4\times10^{-3}$, a value hard to reconcile with SN physics despite the uncertainty of the explosion-energy bound. For generic axion-like particles, this argument covers the "cosmological triangle" in the $G_{a\gamma\gamma}$-$m_a$ parameter space.
Previous studies have shown that dark matter-deficient galaxies (DMDG) such as NGC 1052-DF2 (hereafter DF2) can result from tidal stripping. An important question, though, is whether such a stripping scenario can explain DF2's large specific frequency of globular clusters (GCs). After all, tidal stripping and shocking preferentially remove matter from the outskirts. We examine this using idealized, high-resolution simulations of a regular dark matter-dominated galaxy that is accreted on to a massive halo. As long as the initial (pre-infall) dark matter halo of the satellite is cored, which is consistent with predictions of cosmological, hydrodynamical simulations, the tidal remnant can be made to resemble DF2 in all its properties, including its GC population. The required orbit has a pericentre at the 8.3 percentile of the distribution for subhaloes at infall, and thus is not particularly extreme. On this orbit the satellite loses 98.5 (30) per cent of its original dark matter (stellar) mass, and thus evolves into a DMDG. The fraction of GCs that is stripped off depends on the initial radial distribution. If, at infall, the median projected radius of the GC population is roughly two times that of the stars, consistent with observations of isolated galaxies, only ~20 per cent of the GCs are stripped off. This is less than for the stars, which is due to dynamical friction counteracting the tidal stirring. We predict that, if indeed DF2 was crafted by strong tides, its stellar outskirts should have a very shallow metallicity gradient.
Using bona fide black hole (BH) mass estimates from reverberation mapping and the line ratio [Si VI] 1.963 μm/Brγ(broad) as a tracer of the AGN ionizing continuum, we find a novel BH-mass scaling relation of the form log(MBH) = (6.40 ± 0.17) - (1.99 ± 0.37) × log([Si VI]/Brγ(broad)), with a dispersion of 0.47 dex, over the BH mass interval 10^6-10^8 M⊙. Following the geometrically thin accretion disc approximation and after surveying a basic parameter space for coronal-line production, we believe one of the main drivers of the relation is the effective temperature of the disc, which is effectively sampled by the [Si VI] 1.963 μm coronal line for the range of BH masses considered. By means of CLOUDY photoionization models, the observed anticorrelation appears to be formally in line with the thin-disc prediction Tdisc ∝ MBH^-1/4.
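As an illustration only, the scaling relation quoted above can be evaluated directly. The function name and sample ratio below are hypothetical, and the intrinsic 0.47 dex scatter means any single estimate is indicative at best:

```python
import math

def mbh_from_line_ratio(si_vi_over_br_gamma):
    """Estimate log10(M_BH / M_sun) from the [Si VI] 1.963 um / Br-gamma(broad)
    flux ratio, using the relation quoted in the abstract:
    log(M_BH) = (6.40 +/- 0.17) - (1.99 +/- 0.37) * log(ratio), scatter 0.47 dex."""
    return 6.40 - 1.99 * math.log10(si_vi_over_br_gamma)

# A ratio of 1 returns the normalisation, log(M_BH) = 6.40;
# a ratio of 0.1 gives log(M_BH) = 6.40 + 1.99 = 8.39.
log_mbh = mbh_from_line_ratio(0.1)
```

Note the anticorrelation: smaller line ratios map to larger BH masses, consistent with the Tdisc ∝ MBH^-1/4 trend discussed in the abstract.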
The diffusive epidemic process is a paradigmatic example of an absorbing state phase transition in which healthy and infected individuals spread with different diffusion constants. Using stochastic activity spreading simulations in combination with finite-size scaling analyses we reveal two qualitatively different processes that characterize the critical dynamics: subdiffusive propagation of infection clusters and diffusive fluctuations in the healthy population. This suggests the presence of a strong-coupling regime and sheds new light on a long-standing debate about the theoretical classification of the system.
We present results on star cluster properties from a series of high-resolution smoothed particle hydrodynamics (SPH) simulations of isolated dwarf galaxies, part of the GRIFFIN project. The simulations, at sub-parsec spatial resolution and with a minimum particle mass of 4 M⊙, incorporate non-equilibrium heating, cooling, and chemistry processes, and realize individual massive stars. The simulations follow the feedback channels of massive stars, including an interstellar radiation field variable in space and time, radiation input by photo-ionization, and supernova explosions. Varying the star formation efficiency per free-fall time in the range ϵff = 0.2-50 per cent changes neither the star formation rates nor the outflow rates. While the environmental densities at star formation change significantly with ϵff, the ambient densities of supernovae are independent of ϵff, indicating a decoupling of the two processes. At low ϵff, gas is allowed to collapse further before star formation, and more massive, increasingly bound star clusters form, which are typically not destroyed. With increasing ϵff, there is a trend towards shallower cluster mass functions, and the cluster formation efficiency Γ for young bound clusters decreases from 50 per cent to ~1 per cent, showing evidence for cluster disruption. However, none of our simulations form low-mass (<10^3 M⊙) clusters with structural properties in perfect agreement with observations. Traditional star formation models used in galaxy formation simulations based on local free-fall times might therefore be unable to capture star cluster properties without significant fine-tuning.
The Hubble constant (H0) is one of the fundamental parameters in cosmology, but there is a heated debate around the >4σ tension between the local Cepheid distance ladder and the early-Universe measurements. Strongly lensed Type Ia supernovae (LSNe Ia) offer an independent and direct way to measure H0, requiring a time-delay measurement between the multiple supernova (SN) images. In this work, we present two machine learning approaches for measuring time delays in LSNe Ia: a fully connected neural network (FCNN) and a random forest (RF). To train the FCNN and the RF, we simulate mock LSNe Ia from theoretical SN Ia models that include observational noise and microlensing. We test the generalizability of the machine learning models on a final test set based on empirical LSN Ia light curves not used in the training process, and we find that only the RF provides a bias low enough to achieve precision cosmology; the RF is therefore preferred over our FCNN approach for applications to real systems. For the RF with single-band photometry in the i band, we obtain an accuracy better than 1% in all investigated cases for time delays longer than 15 days, assuming follow-up observations with a 5σ point-source depth of 24.7, a two-day cadence with a few random gaps, and a detection of the LSNe Ia 8 to 10 days before peak in the observer frame. In terms of precision, we can achieve an uncertainty of approximately 1.5 days in the i band for a typical source redshift of ∼0.8 under the same assumptions. To improve the measurement, we find that using three bands, training an RF for each band separately and combining them afterwards, helps to reduce the uncertainty to ∼1.0 day. The dominant source of uncertainty is the observational noise, so the depth is an especially important factor when follow-up observations are triggered. We have publicly released the microlensed spectra and light curves used in this work.
https://github.com/shsuyu/HOLISMOKES-public/tree/main/HOLISMOKES_VII
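The time delays in the abstract above come from trained FCNN/RF models. As a much simpler, hypothetical baseline (not the authors' method), one can sketch a grid-search estimator that shifts one light curve against the other and minimises the summed squared residuals:

```python
import math

def estimate_time_delay(t, flux_a, flux_b, trial_delays):
    """Estimate the delay dt such that flux_b(t) ~ flux_a(t - dt), by a grid
    search minimising the summed squared residuals. Shifted values of flux_a
    are obtained by linear interpolation, clamped at the light-curve ends."""
    def interp(x, xs, ys):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] * (1.0 - w) + ys[i] * w

    best_dt, best_chi2 = None, float("inf")
    for dt in trial_delays:
        chi2 = sum((fb - interp(ti - dt, t, flux_a)) ** 2
                   for ti, fb in zip(t, flux_b))
        if chi2 < best_chi2:
            best_dt, best_chi2 = dt, chi2
    return best_dt

# Synthetic example: two noiseless Gaussian pulses offset by 15 days.
t = [float(d) for d in range(100)]
def pulse(t0):
    return [math.exp(-0.5 * ((ti - t0) / 5.0) ** 2) for ti in t]
flux_a, flux_b = pulse(40.0), pulse(55.0)
dt = estimate_time_delay(t, flux_a, flux_b, trial_delays=range(31))  # → 15
```

On real, microlensed, noisy data such a template-free estimator would be badly biased, which is exactly the problem the paper's simulation-trained RF addresses.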
Context. The dynamics of the intracluster medium (ICM) is affected by turbulence driven by several processes, such as mergers, accretion and feedback from active galactic nuclei.
Aims: X-ray surface brightness fluctuations have been used to constrain turbulence in galaxy clusters. Here, we use simulations to further investigate the relation between gas density and turbulent velocity fluctuations, with a focus on the effect of the stratification of the ICM.
Methods: In this work, we studied the turbulence driven by hierarchical accretion by analysing a sample of galaxy clusters simulated with the cosmological code ENZO. We used a fixed scale filtering approach to disentangle laminar from turbulent flows.
Results: In dynamically perturbed galaxy clusters, we found a relation between the root mean square of the density and velocity fluctuations, albeit with a different slope than previously reported. The Richardson number, a parameter representing the ratio between turbulence and buoyancy, is found to depend strongly on the filtering scale. However, we could not detect any strong relation between the Richardson number and the logarithmic density fluctuations, in contrast to results from recent, more idealised simulations. In particular, we find a strong effect from radial accretion, which appears to be the main driver of the gas fluctuations. The ubiquitous radial bias in the dynamics of the ICM suggests that homogeneity and isotropy are not always valid assumptions, even if the turbulent spectra follow Kolmogorov's scaling. Finally, we find that the slopes of the velocity and density spectra are independent of cluster-centric radius.
Recent wide-area surveys have enabled us to study the Milky Way with unprecedented detail. Its inner regions, hidden behind dust and gas, have been partially unveiled with the arrival of near-infrared (IR) photometric and spectroscopic data sets. Among the recent discoveries is a population of low-mass globular clusters that had been known to be missing, especially towards the Galactic bulge. In this work, five new low-luminosity globular clusters located towards the bulge area are presented. They were discovered by searching for groups in the multidimensional space of coordinates, colours, and proper motions from the Gaia EDR3 catalogue and later confirmed with deeper VVV survey near-IR photometry. The clusters show well-defined red giant branches and, in some cases, horizontal branches, with their members forming a dynamically coherent structure in proper motion space. Four of them were confirmed by spectroscopic follow-up with the MUSE instrument on the ESO VLT. Photometric parameters were derived, and, when available, metallicities, radial velocities, and orbits were determined. The new clusters Gran 1 and 5 are bulge globular clusters, while Gran 2, 3 and 4 present halo-like properties. Preliminary orbits indicate that Gran 1 might be related to the Main Progenitor, or the so-called 'low-energy' group, while Gran 2, 3 and 5 appear to follow the Gaia-Enceladus/Sausage structure. This study demonstrates that Gaia proper motions, combined with spectroscopic follow-up and colour-magnitude diagrams, are required to confirm the nature of cluster candidates towards the inner Galaxy. High stellar crowding and differential extinction may hide other low-luminosity clusters.
Recent cosmological analyses rely on the ability to accurately sample from high-dimensional posterior distributions. A variety of algorithms have been applied in the field, but justification of the particular sampler choice and settings is often lacking. Here we investigate three such samplers to motivate and validate the algorithm and settings used for the Dark Energy Survey (DES) analyses of the first 3 years (Y3) of data from combined measurements of weak lensing and galaxy clustering. We employ the full DES Year 1 likelihood alongside a much faster approximate likelihood, which enables us to assess the outcomes from each sampler choice and demonstrate the robustness of our full results. We find that the ellipsoidal nested sampling algorithm $\texttt{MultiNest}$ reports inconsistent estimates of the Bayesian evidence and somewhat narrower parameter credible intervals than the sliced nested sampling implemented in $\texttt{PolyChord}$. We compare the findings from $\texttt{MultiNest}$ and $\texttt{PolyChord}$ with parameter inference from the Metropolis-Hastings algorithm, finding good agreement. We determine that $\texttt{PolyChord}$ provides a good balance of speed and robustness, and recommend different settings for testing purposes and final chains for analyses with DES Y3 data. Our methodology can readily be reproduced to obtain suitable sampler settings for future surveys.
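For readers unfamiliar with the third algorithm compared in the abstract above, a minimal random-walk Metropolis-Hastings sampler on a 1-D Gaussian target can be sketched as follows. This is a textbook illustration, not the DES pipeline, and all names are hypothetical:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step_size, seed=0):
    """Minimal random-walk Metropolis-Hastings: propose x' ~ N(x, step_size)
    and accept with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step_size)
        lp_new = log_post(x_new)
        if math.log(rng.random()) < lp_new - lp:  # Metropolis acceptance rule
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Sample a standard normal target; the chain mean should settle near 0
# and the chain variance near 1.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0,
                            n_steps=20000, step_size=1.0)
mean = sum(chain) / len(chain)
```

Unlike the nested samplers (MultiNest, PolyChord) discussed in the abstract, plain Metropolis-Hastings yields parameter posteriors but no Bayesian evidence estimate, which is one reason the comparison there treats them separately.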
Quantum coherence is one of the most striking features of quantum mechanics, rooted in the superposition principle. Recently, it has been demonstrated that it is possible to harvest quantum coherence from a coherent scalar field. In order to explore a new method of detecting axion dark matter, we consider a point-like Unruh-DeWitt detector coupled to the axion field and quantify a coherence measure of the detector. We show that the detector can harvest quantum coherence from the axion dark matter. To be more precise, we consider a two-level electron system in an atom as the detector. In this case, we obtain the coherence measure C = 2.2 × 10^-6 γ (T/1 s), where T is the observation time and γ the Lorentz factor. At the same time, the axion mass ma we can probe is determined by the energy gap of the detector.
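The coherence measure quoted above is a simple linear scaling in observation time and Lorentz factor; a one-line helper (hypothetical name) makes the numbers concrete:

```python
def coherence_measure(t_obs_seconds, gamma):
    """Detector coherence quoted in the abstract: C = 2.2e-6 * gamma * (T / 1 s)."""
    return 2.2e-6 * gamma * t_obs_seconds

# One second of observation at gamma = 1 gives C = 2.2e-6,
# the benchmark value quoted in the abstract.
c_val = coherence_measure(1.0, 1.0)
```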
We present a demonstration of the in-flight polarization angle calibration for the JAXA/ISAS second strategic large class mission, LiteBIRD, and estimate its impact on the measurement of the tensor-to-scalar ratio parameter, r, using simulated data. We generate a set of simulated sky maps with CMB and polarized foreground emission, and inject instrumental noise and polarization angle offsets to the 22 (partially overlapping) LiteBIRD frequency channels. Our in-flight angle calibration relies on nulling the EB cross correlation of the polarized signal in each channel. This calibration step has been carried out by two independent groups with a blind analysis, allowing an accuracy of the order of a few arc-minutes to be reached on the estimate of the angle offsets. Both the corrected and uncorrected multi-frequency maps are propagated through the foreground cleaning step, with the goal of computing clean CMB maps. We employ two component separation algorithms, the Bayesian-Separation of Components and Residuals Estimate Tool (B-SeCRET), and the Needlet Internal Linear Combination (NILC). We find that the recovered CMB maps obtained with algorithms that do not make any assumptions about the foreground properties, such as NILC, are only mildly affected by the angle miscalibration. However, polarization angle offsets strongly bias results obtained with the parametric fitting method. Once the miscalibration angles are corrected by EB nulling prior to the component separation, both component separation algorithms result in an unbiased estimation of the r parameter. While this work is motivated by the conceptual design study for LiteBIRD, its framework can be broadly applied to any CMB polarization experiment. In particular, the combination of simulation plus blind analysis provides a robust forecast by taking into account not only detector sensitivity but also systematic effects.
All evolutionary biological processes lead to a change in heritable traits over successive generations. The responsible genetic information encoded in DNA is altered, selected, and inherited through mutation of the base sequence. While this is well known at the biological level, evolutionary change at the molecular level of small organic molecules is unknown, yet it represents an important prerequisite for the emergence of life. Here, we present a class of prebiotic imidazolidine-4-thione organocatalysts able to dynamically change their constitution and potentially capable of forming an evolutionary system. These catalysts functionalize their building blocks and dynamically adapt to their (self-modified) environment by mutation of their own structure. Depending on the surrounding conditions, they show pronounced and opposing selectivities in their formation. Remarkably, the preferentially formed species can be associated with different catalytic properties, which enable multiple pathways for the transition from abiotic matter to functional biomolecules.
We use a recent census of the Milky Way (MW) satellite galaxy population to constrain the lifetime of particle dark matter (DM). We consider two-body decaying dark matter (DDM) in which a heavy DM particle decays with lifetime $\tau$ comparable to the age of the Universe to a lighter DM particle (with mass splitting $\epsilon$) and to a dark radiation species. These decays impart a characteristic "kick velocity," $V_{\mathrm{kick}}=\epsilon c$, on the DM daughter particles, significantly depleting the DM content of low-mass subhalos and making them more susceptible to tidal disruption. We fit the suppression of the present-day DDM subhalo mass function (SHMF) as a function of $\tau$ and $V_{\mathrm{kick}}$ using a suite of high-resolution zoom-in simulations of MW-mass halos, and we validate this model on new DDM simulations of systems specifically chosen to resemble the MW. We implement our DDM SHMF predictions in a forward model that incorporates inhomogeneities in the spatial distribution and detectability of MW satellites and uncertainties in the mapping between galaxies and DM halos, the properties of the MW system, and the disruption of subhalos by the MW disk using an empirical model for the galaxy--halo connection. By comparing to the observed MW satellite population, we conservatively exclude DDM models with $\tau < 18\ \mathrm{Gyr}$ ($29\ \mathrm{Gyr}$) for $V_{\mathrm{kick}}=20\ \mathrm{km}\, \mathrm{s}^{-1}$ ($40\ \mathrm{km}\, \mathrm{s}^{-1}$) at $95\%$ confidence. These constraints are among the most stringent and robust small-scale structure limits on the DM particle lifetime and strongly disfavor DDM models that have been proposed to alleviate the Hubble and $S_8$ tensions.
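In the decaying-dark-matter abstract above, the kick velocity is simply the mass splitting times the speed of light; a small helper (hypothetical names) shows the conversion implied by the quoted constraints:

```python
C_KM_S = 299792.458  # speed of light in km/s

def kick_velocity_km_s(epsilon):
    """Recoil ('kick') velocity of the daughter particle, V_kick = epsilon * c,
    for a two-body decay with fractional mass splitting epsilon."""
    return epsilon * C_KM_S

def epsilon_for_kick(v_kick_km_s):
    """Inverse relation: mass splitting required for a given kick velocity."""
    return v_kick_km_s / C_KM_S

# The quoted V_kick = 20 km/s corresponds to epsilon of about 6.7e-5.
eps_20 = epsilon_for_kick(20.0)
```

Even such tiny mass splittings suffice to deplete low-mass subhalos, which is why the MW satellite census constrains the lifetime so strongly.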
CRESST is one of the most prominent direct detection experiments for dark matter particles with sub-GeV/c$^2$ mass. One of the advantages of the CRESST experiment is the possibility of including a large variety of nuclides in the target material used to probe dark matter interactions. In this work, we discuss in particular the interactions of dark matter particles with the protons and neutrons of $^{6}$Li, now possible thanks to new calculations of the nuclear matrix elements of this specific isotope. To show the potential of this nuclide for probing dark matter interactions, we used data collected previously by a CRESST prototype based on LiAlO$_2$ and operated in an above-ground test facility at the Max-Planck-Institut für Physik in Munich, Germany. In particular, the inclusion of $^{6}$Li in the limit calculation drastically improves the result obtained for spin-dependent interactions with neutrons over the whole mass range. The improvement is significant, greater than two orders of magnitude for dark matter masses below 1 GeV/c$^2$, compared to the limit previously published with the same data.
As part of the cosmology analysis using Type Ia Supernovae (SN Ia) in the Dark Energy Survey (DES), we present photometrically identified SN Ia samples using multiband light curves and host galaxy redshifts. For this analysis, we use the photometric classification framework SuperNNova trained on realistic DES-like simulations. For reliable classification, we process the DES SN programme (DES-SN) data and introduce improvements to the classifier architecture, obtaining classification accuracies of more than 98 per cent on simulations. This is the first SN classification to make use of ensemble methods, resulting in more robust samples. Using photometry, host galaxy redshifts, and a classification probability requirement, we identify 1863 SNe Ia, from which we select 1484 cosmology-grade SNe Ia spanning the redshift range 0.07 < z < 1.14. We find good agreement between the light-curve properties of the photometrically selected sample and simulations. Additionally, we create similar SN Ia samples using two types of Bayesian Neural Network classifiers that provide uncertainties on the classification probabilities. We test the feasibility of using these uncertainties as indicators of out-of-distribution candidates and model confidence. Finally, we discuss the implications of photometric samples and classification methods for future surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time.
The integrated shear 3-point correlation function ζ_± is a higher-order statistic of the cosmic shear field that describes the modulation of the 2-point correlation function ξ_± by long-wavelength features in the field. Here, we introduce a new theoretical model to calculate ζ_± that is accurate on small angular scales and that allows baryonic feedback effects to be taken into account. Our model builds on the realization that the small-scale ζ_± is dominated by the non-linear matter bispectrum in the squeezed limit, which can be evaluated accurately using the non-linear matter power spectrum and its first-order response functions to density and tidal field perturbations. We demonstrate the accuracy of our model by showing that it reproduces the small-scale ζ_± measured in simulated cosmic shear maps. The impact of baryonic feedback enters effectively only through the corresponding impact on the non-linear matter power spectrum, allowing these astrophysical effects on ζ_± to be accounted for similarly to how they are currently accounted for on ξ_±. Using a simple idealized Fisher matrix forecast for a DES-like survey, we find that, compared to ξ_± alone, a combined ξ_± & ζ_± analysis can lead to improvements of order 20-40 per cent on the constraints of cosmological parameters such as σ_8 or the dark energy equation-of-state parameter w_0. We find similar levels of improvement on the constraints of the baryonic feedback parameters, which strengthens the prospects for cosmic shear data to obtain tight constraints not only on cosmology but also on astrophysical feedback models. These encouraging results motivate future work on the integrated shear 3-point correlation function towards applications to real survey data.
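A Fisher-matrix forecast of the kind mentioned above combines derivatives of the observables with the inverse data covariance. The toy sketch below (pure Python, hypothetical numbers, not the authors' DES-like setup) shows the mechanics for two parameters:

```python
def fisher_matrix(derivs, inv_cov):
    """F_ij = sum_ab (d mu_a / d theta_i) * InvCov_ab * (d mu_b / d theta_j),
    where derivs[i][a] is the derivative of data point a w.r.t. parameter i."""
    n, m = len(derivs), len(inv_cov)
    return [[sum(derivs[i][a] * inv_cov[a][b] * derivs[j][b]
                 for a in range(m) for b in range(m))
             for j in range(n)] for i in range(n)]

def marginalised_errors_2x2(F):
    """1-sigma marginalised errors sqrt((F^-1)_ii) for two parameters."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[1][1] / det) ** 0.5, (F[0][0] / det) ** 0.5

# Toy example: two data points with unit diagonal covariance.
inv_cov = [[1.0, 0.0], [0.0, 1.0]]
derivs = [[1.0, 0.0],   # d mu / d parameter 1 (hypothetical values)
          [0.0, 2.0]]   # d mu / d parameter 2 (hypothetical values)
F = fisher_matrix(derivs, inv_cov)
errors = marginalised_errors_2x2(F)  # → (1.0, 0.5)
```

Adding a second statistic (here, ζ_± alongside ξ_±) enlarges the data vector and hence the derivative rows, which is how the 20-40 per cent tightening of constraints quoted above arises.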
This is the second part of a thorough investigation of the redshift-space effects that affect void properties and the impact they have on cosmological tests. Here, we focus on the void-galaxy cross-correlation function, specifically, on the projected versions that we developed in a previous work. The pillar of the analysis is the one-to-one relationship between real and redshift-space voids above the shot-noise level identified with a spherical void finder. Under this mapping, void properties are affected by three effects: (i) a systematic expansion as a consequence of the distortions induced by galaxy dynamics, (ii) the Alcock-Paczynski volume effect, which manifests as an overall expansion or contraction depending on the fiducial cosmology, and (iii) a systematic off-centring along the line of sight as a consequence of the distortions induced by void dynamics. We found that correlations are also affected by an additional source of distortions: the ellipticity of voids. This is the first time that distortions due to the off-centring and ellipticity effects are detected and quantified. With a simplified test, we verified that the Gaussian streaming model is still robust provided all these effects are taken into account, laying the foundations for improvements in current models in order to obtain unbiased cosmological constraints from spectroscopic surveys. Besides this practical importance, this analysis also encodes key information about the structure and dynamics of the Universe at the largest scales. Furthermore, some of the effects constitute cosmological probes by themselves, as is the case of the void ellipticity.
Context. X-ray- and extreme-ultraviolet- (together: XEUV-) driven photoevaporative winds acting on protoplanetary disks around young T-Tauri stars may crucially impact disk evolution, affecting both gas and dust distributions.
Aims: We constrain the dust densities in a typical XEUV-driven outflow, and determine whether these winds can be observed at μm-wavelengths.
Methods: We used dust trajectories modelled atop a 2D hydrodynamical gas model of a protoplanetary disk irradiated by a central T-Tauri star. With these and two different prescriptions for the dust distribution in the underlying disk, we constructed wind density maps for individual grain sizes. We used the dust density distributions obtained to synthesise observations in scattered and polarised light.
Results: For an XEUV-driven outflow around an M* = 0.7 M⊙ T-Tauri star with LX = 2 × 10^30 erg s^-1, we find a dust mass-loss rate Ṁdust ≲ 4.1 × 10^-11 M⊙ yr^-1 for an optimistic estimate of dust densities in the wind (compared to Ṁgas ≈ 3.7 × 10^-8 M⊙ yr^-1). The synthesised scattered-light images suggest a distinct chimney structure emerging at intensities I/Imax < 10^-4.5 (10^-3.5) at λobs = 1.6 (0.4) μm, while the features in the polarised-light images are even fainter. Observations synthesised from our model do not exhibit clear features for SPHERE IRDIS, but show a faint wind signature for JWST NIRCam under optimal conditions.
Conclusions: Unambiguous detections of photoevaporative XEUV winds launched from primordial disks are at least challenging with current instrumentation; this provides a possible explanation as to why disk winds are not routinely detected in scattered or polarised light. Our calculations show that disk scale heights retrieved from scattered-light observations should be only marginally affected by the presence of an XEUV wind.
Decays of the neutral and long-lived η and η′ mesons provide a unique, flavor-conserving laboratory to test low-energy Quantum Chromodynamics and search for new physics beyond the Standard Model. They have drawn world-wide attention in recent years and have inspired broad experimental programs in different high-intensity-frontier centers. New experimental data will offer critical inputs to precisely determine the light quark mass ratios, η-η′ mixing parameters, and hadronic contributions to the anomalous magnetic moment of the muon. At the same time, it will provide a sensitive probe to test potential new physics. This includes searches for hidden photons, light Higgs scalars, and axion-like particles that are complementary to worldwide efforts to detect new light particles below the GeV mass scale, as well as tests of discrete symmetry violation. In this review, we give an update on theoretical developments, discuss the experimental opportunities, and identify future research needed in this field.
Although galactic outflows play a key role in our understanding of the evolution of galaxies, the exact mechanism by which they are driven is still far from understood, and our picture of the associated feedback mechanisms controlling the evolution of galaxies therefore remains plagued by many enigmas. In this work, we present a simple toy model that provides insight into how non-axisymmetric instabilities in galaxies (bars, spiral arms, warps) can lead, via radial flows, to local exponential magnetic field growth beyond the equipartition value by at least two orders of magnitude on a timescale of a few 100 Myr. Our predictions show that the process can lead to galactic outflows in barred spiral galaxies with a mass-loading factor η ≈ 0.1, in agreement with our numerical simulations. Moreover, our outflow mechanism could contribute to understanding the large fraction of barred spiral galaxies that show signs of galactic outflows in the CHANG-ES survey. Extending our model, assuming equipartition between magnetic and turbulent energy, shows the importance of such processes in high-redshift galaxies. Simple estimates of the star formation rate in our model, together with cross-correlated masses from the star-forming main sequence at redshift z ~ 2, allow us to estimate the outflow rates and mass-loading factors from non-axisymmetric instabilities and a subsequent radial-inflow dynamo, giving η ≈ 0.1 for galaxies in the range M⋆ = 10^9-10^12 M⊙, in good agreement with recent results from SINFONI and KMOS 3D.
Atmospheres of highly irradiated gas giant planets host a large variety of atomic and ionic species. Here we observe the thermal emission spectra of the two ultra-hot Jupiters WASP-33b and KELT-20b/MASCARA-2b in the near-infrared wavelength range with CARMENES. Via high-resolution Doppler spectroscopy, we searched for neutral silicon (Si) in their dayside atmospheres. We detect the Si spectral signature of both planets via cross-correlation with model spectra. Detection levels of 4.8σ and 5.4σ, respectively, are observed when assuming a solar atmospheric composition. This is the first detection of Si in exoplanet atmospheres. The presence of Si is an important finding due to its fundamental role in cloud formation and, hence, for the planetary energy balance. Since the spectral lines are detected in emission, our results also confirm the presence of an inverted temperature profile in the dayside atmospheres of both planets.
Study Analysis Group 21 (SAG21) of the Exoplanet Exploration Program Analysis Group (ExoPAG) was organized to study the effect of stellar contamination on space-based transmission spectroscopy, a method for studying exoplanetary atmospheres by measuring the wavelength-dependent radius of a planet as it transits its star. Transmission spectroscopy relies on a precise understanding of the spectrum of the star being occulted. However, stars are not homogeneous, constant light sources but have temporally evolving photospheres and chromospheres with inhomogeneities like spots, faculae, and plages. This SAG has brought together an interdisciplinary team of more than 100 scientists, with observers and theorists from the heliophysics, stellar astrophysics, planetary science, and exoplanetary atmosphere research communities, to study the current needs that can be addressed in this context to make the most of transit studies from current NASA facilities like HST and JWST. The analysis produced 14 findings, which fall into three Science Themes encompassing (1) how the Sun is used as our best laboratory to calibrate our understanding of stellar heterogeneities ("The Sun as the Stellar Benchmark"), (2) how stars other than the Sun extend our knowledge of heterogeneities ("Surface Heterogeneities of Other Stars") and (3) how to incorporate information gathered for the Sun and other stars into transit studies ("Mapping Stellar Knowledge to Transit Studies").
Black hole (BH) accretion discs formed in compact-object mergers or collapsars may be major sites of the rapid-neutron-capture (r-)process, but the conditions determining the electron fraction (Ye) remain uncertain given the complexity of neutrino transfer and angular-momentum transport. After discussing relevant weak-interaction regimes, we study the role of neutrino absorption in shaping Ye using an extensive set of simulations performed with two-moment neutrino transport and again without neutrino absorption. We vary the torus mass, BH mass and spin, and examine the impact of rest-mass and weak-magnetism corrections in the neutrino rates. We also test the dependence on the angular-momentum transport treatment by comparing axisymmetric models using the standard α-viscosity with viscous models assuming constant viscous length-scales (lt) and 3D magnetohydrodynamic (MHD) simulations. Finally, we discuss the nucleosynthesis yields and basic kilonova properties. We find that absorption pushes Ye towards ~0.5 outside the torus, while inside it increases the equilibrium value $Y_\mathrm{e}^{\mathrm{eq}}$ by ~0.05-0.2. Correspondingly, a substantial ejecta fraction is pushed above Ye = 0.25, leading to a reduced lanthanide fraction and a brighter, earlier, and bluer kilonova than without absorption. More compact tori with higher neutrino optical depth, τ, tend to have lower $Y_\mathrm{e}^{\mathrm{eq}}$ up to τ ~ 1-10, above which absorption becomes strong enough to reverse this trend. Disc ejecta are less (more) neutron rich when employing an lt = const. viscosity (MHD treatment). The solar-like abundance pattern found for our MHD model marginally supports collapsar discs as major r-process sites, although a strong r-process may be limited to phases of high mass-infall rates, $\dot{M} \gtrsim 2 \times 10^{-2}$ M⊙ s^-1.
Context. The mass of protoplanetary disks is arguably one of their most important quantities shaping their evolution toward planetary systems, but it remains a challenge to determine this quantity. Using the high spatial resolution now available on telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA), recent studies derived a relation between the disk surface density and the location of the "dust lines". This is a new concept in the field, linking the disk size at different continuum wavelengths with the radial distribution of grain populations of different sizes.
Aims: We aim to use a dust evolution model to test the dependence of the dust line location on disk gas mass. In particular, we are interested in the reliability of the method for disks showing radial substructures, as recent high-resolution observations revealed.
Methods: We performed dust evolution calculations, which included perturbations to the gas surface density with different amplitudes at different radii, to investigate their effect on the global drift timescale of dust grains. These models were then used to calibrate the relation between the dust grain drift timescale and the disk gas mass. We investigated under which condition the dust line location is a good mass estimator and tested how different stellar and disk properties (disk mass, stellar mass, disk age, and dust-to-gas ratio) affect the dust line properties. Finally, we show the applicability of this method to disks such as TW Hya and AS 209 that have been observed at high angular resolution with ALMA and show pronounced disk structures.
Results: Our models without pressure bumps confirm a strong dependence of the dust line location on the disk gas mass and its applicability as a reliable mass estimator. The other disk properties do not significantly affect the dust line location, except for the age of the system, which is the major source of uncertainty for this mass estimator. A population of synthetic disks was used to calibrate an analytic relation between the dust line location and the disk mass for smooth disks, finding that previous mass estimates based on dust lines overestimate disk masses by about one order of magnitude. Radial pressure bumps can alter the location of the dust line by up to ~10 au, while its location is mainly determined by the disk mass. Therefore, an accurate mass estimation requires a proper evaluation of the effect of bumps. However, when radial substructures act as traps for dust grains, the relation between the dust line location and disk mass becomes weaker, and other mass estimators need to be adopted.
Conclusions: Our models show that determining the dust line location is a promising approach to estimating the masses of protoplanetary disks, but the exact relation between the dust line location and disk mass depends on the structure of the particular disk. We calibrated the relation for disks without evidence of radial structures, while for more complex structures we ran a simple dust evolution model. However, this method fails when there is evidence of strong dust traps. It is possible to identify when dust evolution is dominated by traps, indicating when the method should be applied with caution.
We compute the QCD static force and potential using gradient flow at next-to-leading order in the strong coupling. The static force is the spatial derivative of the static potential: it encodes the QCD interaction at both short and long distances. While on the one hand the static force has the advantage of being free of the O(Λ_QCD) renormalon affecting the static potential when computed in perturbation theory, on the other hand its direct lattice QCD computation suffers from poor convergence. The convergence can be improved by using gradient flow, where the gauge fields in the operator definition of a given quantity are replaced by flowed fields at flow time t, which effectively smear the gauge fields over a distance of order √t, while they reduce to the QCD fields in the limit t → 0. Based on our next-to-leading order calculation, we explore the properties of the static force for arbitrary values of t, as well as in the t → 0 limit, which may be useful for lattice QCD studies.
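At leading order (one-gluon exchange) the static potential is Coulombic, V(r) = -C_F α_s / r, and the force F(r) = dV/dr = C_F α_s / r² is insensitive to the r-independent constant that carries the renormalon ambiguity. The minimal numerical sketch below illustrates only this leading-order statement, with an assumed fixed illustrative coupling; it is not the NLO gradient-flow computation described above.

```python
# Hedged sketch: leading-order (Coulombic) static potential and force.
# alpha_s is an assumed illustrative value, not a fitted coupling.
CF = 4.0 / 3.0      # quadratic Casimir of the fundamental rep of SU(3)
alpha_s = 0.3       # assumed fixed coupling for illustration

def V_lo(r):
    """LO static potential, defined up to an r-independent constant."""
    return -CF * alpha_s / r

def F_lo(r):
    """Static force F(r) = dV/dr at leading order."""
    return CF * alpha_s / r**2

def F_numeric(r, h=1e-6):
    """Central finite difference of V; any additive constant drops out."""
    return (V_lo(r + h) - V_lo(r - h)) / (2.0 * h)

# The numerical derivative reproduces the analytic force:
r = 0.5
print(F_lo(r), F_numeric(r))
```

The finite-difference check makes the renormalon point concrete: shifting V_lo by any constant leaves F_numeric unchanged.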
We obtain Proca field theory from the quantisation of the N = 2 supersymmetric worldline upon supplementing the graded BRST-algebra with an extra multiplet of oscillators. The linearised theory describes the BV-extended spectrum of Proca theory, together with a Stückelberg field. When coupling the theory to background fields we derive the Proca equations, arising as consistency conditions in the BRST procedure. We also explore non-abelian modifications, complexified vector fields as well as coupling to a dilaton field. We propose a cubic action on the space of BRST-operators which reproduces the known Proca action.
The statistical models used to derive the results of experimental analyses are of incredible scientific value and are essential information for analysis preservation and reuse. In this paper, we make the scientific case for systematically publishing the full statistical models and discuss the technical developments that make this practical. By means of a variety of physics cases -- including parton distribution functions, Higgs boson measurements, effective field theory interpretations, direct searches for new physics, heavy flavor physics, direct dark matter detection, world averages, and beyond the Standard Model global fits -- we illustrate how detailed information on the statistical modelling can enhance the short- and long-term impact of experimental results.
We study the production of very light elements (Z < 20) in the dynamical and spiral-wave wind ejecta of binary neutron star mergers by combining detailed nucleosynthesis calculations with the outcome of numerical relativity merger simulations. All our models are targeted to GW170817 and include neutrino radiation. We explore different finite-temperature, composition-dependent nuclear equations of state, and binary mass ratios, and find that hydrogen and helium are the most abundant light elements. For both elements, the decay of free neutrons is the driving nuclear reaction. In particular, ~0.5-2 × 10^-6 M_⊙ of hydrogen are produced in the fast expanding tail of the dynamical ejecta, while ~1.5-11 × 10^-6 M_⊙ of helium are synthesized in the bulk of the dynamical ejecta, usually in association with heavy r-process elements. By computing synthetic spectra, we find that the possibility of detecting hydrogen and helium features in kilonova spectra is very unlikely for fiducial masses and luminosities, even when including nonlocal thermodynamic equilibrium effects. The latter could be crucial to observe helium lines a few days after merger for faint kilonovae or for luminous kilonovae ejecting large masses of helium. Finally, we compute the amount of strontium synthesized in the dynamical and spiral-wave wind ejecta, and find that it is consistent with (or even larger than, in the case of a long-lived remnant) the one required to explain early spectral features in the kilonova of GW170817.
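The driving reaction named above, free-neutron beta decay, follows a simple exponential law. A hedged back-of-the-envelope sketch: the neutron mean lifetime τ_n ≈ 879 s is the measured value, while the initial free-neutron mass is an assumed illustrative number, not an output of the simulations.

```python
import math

TAU_N = 879.4  # neutron mean lifetime in seconds (measured value)

def hydrogen_from_neutrons(m_n0, t):
    """Mass of hydrogen produced by free-neutron beta decay after time t (s),
    starting from an initial free-neutron mass m_n0 (same units as output)."""
    return m_n0 * (1.0 - math.exp(-t / TAU_N))

# Assumed illustrative ejecta: 2e-6 solar masses of free neutrons.
m0 = 2e-6
# After ~1 hour, essentially all free neutrons have decayed to protons:
frac = hydrogen_from_neutrons(m0, 3600.0) / m0
print(frac)  # close to 1
```

This is why the hydrogen yield in the fast-expanding tail tracks the free-neutron mass there: on kilonova timescales the decay has gone to completion.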
The immediate vicinity of an active supermassive black hole—with its event horizon, photon ring, accretion disk and relativistic jets—is an appropriate place to study physics under extreme conditions, particularly general relativity and magnetohydrodynamics. Observing the dynamics of such compact astrophysical objects provides insights into their inner workings, and the recent observations of M87* by the Event Horizon Telescope [1-6] using very-long-baseline interferometry techniques allow us to investigate the dynamical processes of M87* on timescales of days. Compared with most radio interferometers, very-long-baseline interferometry networks typically have fewer antennas and low signal-to-noise ratios. Furthermore, the source is variable, prohibiting integration over time to improve the signal-to-noise ratio. Here, we present an imaging algorithm [7,8] that copes with the data scarcity and temporal evolution, while providing an uncertainty quantification. Our algorithm views the imaging task as a Bayesian inference problem of a time-varying brightness, exploits the correlation structure in time and reconstructs (2 + 1 + 1)-dimensional time-variable and spectrally resolved images. We apply this method to the Event Horizon Telescope observations of M87* [9] and validate our approach on synthetic data. The time- and frequency-resolved reconstruction of M87* confirms variable structures on the emission ring and indicates extended and time-variable emission structures outside the ring itself.
For decades we have known that the Sun lies within the Local Bubble, a cavity of low-density, high-temperature plasma surrounded by a shell of cold, neutral gas and dust [1-3]. However, the precise shape and extent of this shell [4,5], the impetus and timescale for its formation [6,7], and its relationship to nearby star formation [8] have remained uncertain, largely due to low-resolution models of the local interstellar medium. Here we report an analysis of the three-dimensional positions, shapes and motions of dense gas and young stars within 200 pc of the Sun, using new spatial [9-11] and dynamical constraints [12]. We find that nearly all of the star-forming complexes in the solar vicinity lie on the surface of the Local Bubble and that their young stars show outward expansion mainly perpendicular to the bubble's surface. Tracebacks of these young stars' motions support a picture in which the origin of the Local Bubble was a burst of stellar birth and then death (supernovae) taking place near the bubble's centre beginning approximately 14 Myr ago. The expansion of the Local Bubble created by the supernovae swept up the ambient interstellar medium into an extended shell that has now fragmented and collapsed into the most prominent nearby molecular clouds, in turn providing robust observational support for the theory of supernova-driven star formation.
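The traceback of stellar motions mentioned above is, in its simplest form, linear orbit integration backwards in time. A minimal hedged sketch with toy numbers (straight-line motion only, ignoring the Galactic potential and the full dynamical modelling used in the actual analysis):

```python
def traceback(pos_pc, vel_kms, t_myr):
    """Linear traceback: position t_myr ago, given present-day position (pc)
    and velocity (km/s). Conversion: 1 km/s ~ 1.0227 pc/Myr."""
    KMS_TO_PC_PER_MYR = 1.0227
    return tuple(p - v * KMS_TO_PC_PER_MYR * t_myr
                 for p, v in zip(pos_pc, vel_kms))

# Toy example: a star now 100 pc out along x, receding at 7 km/s,
# traced back 14 Myr -- it started near the origin (the bubble's centre).
past = traceback((100.0, 0.0, 0.0), (7.0, 0.0, 0.0), 14.0)
print(past)
```

Applied to many young stars at once, the same operation reveals whether their past positions converge on a common birth region.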
The intrinsic alignments of galaxies, i.e. the correlation between galaxy shapes and their environment, are a major source of contamination for weak gravitational lensing surveys. Most studies of intrinsic alignments have so far focused on measuring and modelling the correlations of luminous red galaxies with galaxy positions or the filaments of the cosmic web. In this work, we investigate alignments around cosmic voids. We measure the intrinsic alignments of luminous red galaxies detected by the Sloan Digital Sky Survey around a sample of voids constructed from those same tracers, with radii in the ranges [20-30; 30-40; 40-50] h^-1 Mpc and in the redshift range z = 0.4-0.8. We present fits to the measurements based on a linear model at large scales, and on a new model based on the void density profile inside the void and in its neighbourhood. We constrain the free scaling amplitude of our model at small scales, finding no significant alignment at 1σ for either sample. We observe a deviation from the null hypothesis, at large scales, of 2σ for voids with radii between 20 and 30 h^-1 Mpc, and 1.5σ for voids with radii between 30 and 40 h^-1 Mpc, and constrain the amplitude of the model on these scales. We find no significant deviation at 1σ for larger voids. Our work is a first attempt at detecting intrinsic alignments of galaxy shapes around voids and provides a useful framework for their mitigation in future void lensing studies.
We carried out 3D dust + gas radiative hydrodynamic simulations of forming planets. We investigated a parameter grid of a Neptune-mass, a Saturn-mass, a Jupiter-mass, and a five-Jupiter-mass planet at 5.2, 30, and 50 au distance from their star. We found that the meridional circulation (Szulágyi et al. 2014; Fung & Chiang 2016) drives a strong vertical flow for the dust as well; hence the dust does not settle in the midplane, even for millimeter-sized grains. The meridional circulation delivers dust and gas vertically onto the circumplanetary region, efficiently bridging over the gap. The Hill-sphere accretion rates for the dust are ~10^-8-10^-10 M_Jup yr^-1, increasing with planet mass. For the gas component, the rates are ~10^-6-10^-8 M_Jup yr^-1. The difference between the dust- and gas-accretion rates is smaller with decreasing planetary mass. In the vicinity of the planet, the millimeter-sized grains are trapped more easily than the gas, which means the circumplanetary disk might be enriched with solids in comparison to the circumstellar disk. We calculated the local dust-to-gas ratio (DTG) everywhere in the circumstellar disk and identified the altitudes above the midplane where the DTG is 1, 0.1, 0.01, and 0.001. The larger the planetary mass, the more millimeter-sized dust is delivered and the larger the fraction of the dust disk lifted by the planet. The stirring of millimeter-sized dust is negligible for Neptune-mass planets or below, but significant for planets above Saturn mass.
The early Earth 4 billion years ago was a scarce place for the emergence of life. After the formation of the oceans, it was most likely difficult to extract the essential ionic building blocks of life, such as phosphate or salts, from the existing geomaterial in sufficiently high concentrations and suitable mixing ratios. We show how ubiquitous heat fluxes through rock fractures implement a physical solution to this problem: Thermal convection and thermophoresis together are able to separate calcium from phosphorus and thus use ubiquitous but otherwise inert apatite as a phosphate source. Furthermore, the mixing ratio of different salts is modified according to their thermophoretic properties, providing a suitable non-equilibrium environment for the first prebiotic reactions.
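The separation mechanism described above rests on the Soret effect: in steady state, a species with Soret coefficient S_T subjected to a temperature difference ΔT is depleted on the warm side by a factor exp(-S_T ΔT), so species with different S_T end up in different ratios. A hedged sketch with assumed illustrative Soret coefficients (the real system also couples this to convective flow, which is not modelled here):

```python
import math

def soret_steady_state(c0, S_T, dT):
    """Steady-state concentration on the warm side for a species with Soret
    coefficient S_T (1/K) under a temperature difference dT (K):
    c = c0 * exp(-S_T * dT)."""
    return c0 * math.exp(-S_T * dT)

# Two salts with assumed illustrative Soret coefficients in a 30 K gradient:
cA = soret_steady_state(1.0, 0.01, 30.0)  # weakly thermophoretic species
cB = soret_steady_state(1.0, 0.10, 30.0)  # strongly thermophoretic species
ratio = cA / cB  # > 1: the mixing ratio has shifted in favour of species A
print(cA, cB, ratio)
```

The point of the sketch is only that equal initial concentrations evolve to unequal ones, i.e. a heat flux alone can change salt mixing ratios.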
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 364653263 - TRR 235 (CRC 235). Funding from the Volkswagen Initiative 'Life? - A Fresh Scientific Approach to the Basic Principles of Life', from the Simons Foundation, and from Germany's Excellence Strategy EXC-2094-390783311 is gratefully acknowledged. We are grateful for funding by the European Research Council (ERC Starting Grant RiboLife, No. 802000) and by the MaxSynBio consortium, which is jointly funded by the Federal Ministry of Education and Research of Germany and the Max Planck Society. We acknowledge the support of ERC ADV 2018 Grant 834225 (EAVESDROP) and financial support from an ERC-2017-ADG grant of the European Research Council. The work is supported by the Center for Nanoscience Munich (CeNS).
We reemphasize the strong dependence of the branching ratios $B(K^+\to\pi^+\nu\bar\nu)$ and $B(K_L\to\pi^0\nu\bar\nu)$ on $|V_{cb}|$, which is stronger than in rare $B$ decays, in particular for $K_L\to\pi^0\nu\bar\nu$. Thereby the persistent tension between inclusive and exclusive determinations of $|V_{cb}|$ weakens the power of these theoretically clean decays in the search for new physics (NP). We demonstrate how this uncertainty can be practically removed by considering, within the SM, suitable ratios of the two branching ratios between each other and with other observables like the branching ratios for $K_S\to\mu^+\mu^-$, $B_{s,d}\to\mu^+\mu^-$ and $B\to K(K^*)\nu\bar\nu$. We use as basic CKM parameters $V_{us}$, $|V_{cb}|$ and the angles $\beta$ and $\gamma$ in the unitarity triangle (UT). This avoids the use of the problematic $|V_{ub}|$. A ratio involving $B(K^+\to\pi^+\nu\bar\nu)$ and $B(B_s\to\mu^+\mu^-)$, while being $|V_{cb}|$-independent, exhibits a sizable dependence on the angle $\gamma$. It should be of interest for several experimental groups in the coming years. We point out that the $|V_{cb}|$-independent ratio of $B(B^+\to K^+\nu\bar\nu)$ and $B(B_s\to\mu^+\mu^-)$ from Belle II and LHCb signals a $1.8\sigma$ tension with its SM value. As a complementary test of the Standard Model, we propose to extract $|V_{cb}|$ from different observables as a function of $\beta$ and $\gamma$. We illustrate this with $\epsilon_K$, $\Delta M_d$ and $\Delta M_s$, finding tensions between these three determinations of $|V_{cb}|$ within the SM. From $\Delta M_s$ and $S_{\psi K_S}$ alone we find $|V_{cb}|=41.8(6)\times 10^{-3}$ and $|V_{ub}|=3.65(12)\times 10^{-3}$. We stress the importance of a precise measurement of $\gamma$. We obtain the most precise SM predictions to date for the considered branching ratios of rare $K$ and $B$ decays.
Radioactive decay of unstable atomic nuclei leads to the liberation of nuclear binding energy in the form of gamma-ray photons and secondary particles (electrons, positrons); their energy then energises surrounding matter. Unstable nuclei are formed in nuclear reactions, which can occur either in the hot and dense extremes of stellar interiors or explosions, or in cosmic-ray collisions. In high-energy astronomy, direct observations of characteristic gamma-ray lines from the decay of radioactive isotopes are important tools to study the process of cosmic nucleosynthesis and its sources, as well as to trace the flows of ejecta from such sources of nucleosynthesis. These observations provide a valuable complement to indirect observations of radioactive energy deposits, such as the measurement of supernova light in the optical. Here we present the basics of radioactive decay in an astrophysical context, and how gamma-ray lines reveal details about stellar interiors, about explosions on stellar surfaces or of entire stars, and about the interstellar-medium processes that direct the flow and cooling of nucleosynthesis ashes once they have left their sources. We address radioisotopes such as $^{56}$Ni, $^{44}$Ti, $^{26}$Al, $^{60}$Fe, $^{22}$Na, $^{7}$Be, and also how characteristic gamma-ray emission from the annihilation of positrons is connected to these.
In this contribution, I review some of the latest advances in calculational techniques in theoretical particle physics. I focus, in particular, on their application to the calculation of highly non-trivial scattering processes, which are relevant for precision phenomenology studies at the Large Hadron Collider at CERN.
We compute NRQCD long-distance matrix elements that appear in the inclusive production cross sections of P-wave heavy quarkonia in the framework of potential NRQCD. The formalism developed in this work applies to strongly coupled charmonia and bottomonia. This makes possible the determination of color-octet NRQCD long-distance matrix elements without relying on measured cross section data, which has not been possible so far. We obtain results for inclusive production cross sections of χcJ and χbJ at the LHC, which are in good agreement with measurements.
Gamma rays from nuclear processes such as radioactive decay and de-excitations are among the most-direct tools to witness the production and existence of specific nuclei and isotopes in and near cosmic nucleosynthesis sites. With space-borne instrumentation such as NuSTAR and SPI/INTEGRAL, and experimental techniques to handle a substantial instrumental background from cosmic-ray activations of the spacecraft and instrument, unique results have been obtained, from diffuse emissions of nuclei and positrons in interstellar surroundings of sources, as well as from observations of cosmic explosions and their radioactive afterglows. These witness non-sphericity in supernova explosions and a flow of nucleosynthesis ejecta through superbubbles as common source environments. Next-generation experiments that are awaiting space missions promise a next level of observational nuclear astrophysics.
The Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) experiment aims at the direct detection of dark matter particles via their elastic scattering off nuclei in a scintillating CaWO$_4$ target crystal. The CaWO$_4$ crystal is operated together with a light detector at mK temperature and read out by a Transition Edge Sensor. For many years, CaWO$_4$ crystals have successfully been produced in-house at Technical University of Munich (TUM) with a focus on high radiopurity which is crucial to reduce background originating from radioactive contamination. In order to further improve the CaWO$_4$ crystals, an extensive chemical purification of the raw materials and the synthesised CaWO$_4$ powder has been performed. In addition, a temperature gradient simulation of the growth process and subsequently an optimisation of the growth furnace with the goal to reduce the intrinsic stress was carried out. We present results on the intrinsic stress in the CaWO$_4$ crystals and on the CaWO$_4$ powder radiopurity. A crystal grown from the purified material was installed in the current CRESST set-up. The detector is equipped with an instrumented holder which is used to measure the alpha decay rate of the crystal. We present a preliminary analysis showing a significantly reduced intrinsic background from natural decay chains.
In this paper we quantify the temporal variability and image morphology of the horizon-scale emission from Sgr A*, as observed by the EHT in 2017 April at a wavelength of 1.3 mm. We find that the Sgr A* data exhibit variability that exceeds what can be explained by the uncertainties in the data or by the effects of interstellar scattering. The magnitude of this variability can be a substantial fraction of the correlated flux density, reaching ∼100% on some baselines. Through an exploration of simple geometric source models, we demonstrate that ring-like morphologies provide better fits to the Sgr A* data than do other morphologies with comparable complexity. We develop two strategies for fitting static geometric ring models to the time-variable Sgr A* data; one strategy fits models to short segments of data over which the source is static and averages these independent fits, while the other fits models to the full data set using a parametric model for the structural variability power spectrum around the average source structure. Both geometric modeling and image-domain feature extraction techniques determine the ring diameter to be 51.8 ± 2.3 μas (68% credible intervals), with the ring thickness constrained to have an FWHM between ∼30% and 50% of the ring diameter. To bring the diameter measurements to a common physical scale, we calibrate them using synthetic data generated from GRMHD simulations. This calibration constrains the angular size of the gravitational radius, which we combine with an independent distance measurement from maser parallaxes to determine the mass of Sgr A*.
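The quoted thickness constraint is stated relative to the diameter; a short arithmetic check, using only the numbers quoted above, makes the absolute range explicit:

```python
# Arithmetic check on the quoted Sgr A* ring constraints (values from the text).
diameter = 51.8                  # ring diameter in microarcseconds
frac_lo, frac_hi = 0.30, 0.50    # FWHM constrained to ~30%-50% of the diameter

fwhm_lo = frac_lo * diameter     # lower end of the thickness range, ~15.5 uas
fwhm_hi = frac_hi * diameter     # upper end of the thickness range, ~25.9 uas
print(fwhm_lo, fwhm_hi)
```

So the ring FWHM lies roughly between 15.5 and 25.9 μas in absolute terms.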
The EUSO@TurLab project aims at performing experiments to reproduce Earth UV emissions as seen from low Earth orbit by the planned missions of the JEM-EUSO program. It makes use of the TurLab facility, a laboratory equipped with a 5 m diameter, 1 m depth rotating tank, located at the Physics Department of the University of Turin. All the experiments are designed and performed based on simulations of the expected response of the detectors to be flown in space. In April 2016 the TUS detector and, more recently, in October 2019 the Mini-EUSO experiment, both part of the JEM-EUSO program, were placed in orbit to map the UV Earth emissions. It is therefore now possible to compare the replicas performed at TurLab with the actual images detected in space to understand the level of fidelity in terms of reproduction of the expected signals. We show that the laboratory tests reproduce, at the order-of-magnitude level, the measurements from space in terms of spatial extension and time duration of the emitted UV light, as well as the intensity in terms of expected counts per pixel per unit time, when atmospheric transient events, diffuse nightglow background light, and artificial light sources are considered. Therefore, TurLab is found to be a very useful facility for testing the acquisition logic of the detectors of present and future missions of the JEM-EUSO program and beyond, in order to reproduce atmospheric signals in the laboratory.
Astronomical X-ray polarimetry is a powerful tool to extract information from the hard X-ray spectra of celestial bodies. In this context, the ComPol project aims to fly a Compton polarimeter on a CubeSat to investigate the emissions of the binary black hole (BBH) system Cygnus X-1. Based on the detection of Compton events, the CubeSat features two detection systems: 1) a Silicon Drift Detector (SDD) matrix employed as a scatterer and 2) a scintillator read out by a Silicon Photomultiplier (SiPM) array to absorb the scattered photons. This paper focuses on the development of the first detection system for the reconstruction of Compton events. The readout electronic chain is composed of two 7-pixel SDD matrices, CUBE preamplifiers, and the SFERA ASIC Analog Pulse Processor (APP), handled by FPGA technology for its control and data-flow management. This readout system is composed of two boards: one housing the SFERA ASIC, which includes an on-chip ADC, and the other housing the SDD matrix and the preamplifiers. In this manuscript, the test results obtained with the pre-prototype system, devised in the first phase of the project to characterize the SDD module and to evaluate the performance of the SFERA internal ADC, are reported together with those obtained with the prototype system.
The mean-field approximation based on effective interactions or density functionals plays a pivotal role in the description of finite quantum many-body systems that are too large to be treated by ab initio methods. Some examples are strongly interacting medium- and heavy-mass atomic nuclei and mesoscopic condensed matter systems. In this approach, the linear Schrödinger equation for the exact many-body wave function is mapped onto a non-linear one-body potential problem. This approximation not only provides computationally very simple solutions even for systems with many particles but, due to the non-linearity, also allows for obtaining solutions that break essential symmetries of the system, often connected with phase transitions. In this way, additional correlations are subsumed in the system. However, the mean-field approach suffers from the drawback that the corresponding wave functions do not have sharp quantum numbers and, therefore, many results cannot be compared directly with experimental data. In this article, we discuss general group-theory techniques to restore the broken symmetries, and provide detailed expressions on the restoration of translational, rotational, spin, isospin, parity and gauge symmetries, where the latter corresponds to the restoration of the particle number. In order to avoid the numerical complexity of exact projection techniques, various approximation methods available in the literature are examined. Applications of the projection methods are presented for simple nuclear models, realistic calculations in relatively small configuration spaces, nuclear energy density functional (EDF) theory, as well as other mesoscopic systems. We also discuss applications of projection techniques to quantum statistics in order to treat the averaging over restricted ensembles with fixed quantum numbers. Further, unresolved problems in the application of the symmetry restoration methods to the EDF theories are highlighted.
Starting from the Bonn potential, the relativistic Brueckner-Hartree-Fock (RBHF) equations are solved for nuclear matter in the full Dirac space, which provides a unique way to determine the single-particle potentials and avoids the approximations applied in the RBHF calculations in the Dirac space with positive-energy states (PESs) only. The uncertainties of the RBHF calculations in the Dirac space with PESs only are investigated, and the importance of RBHF calculations in the full Dirac space is demonstrated. In the RBHF calculations in the full Dirac space, the empirical saturation properties of symmetric nuclear matter are reproduced, and the obtained equation of state agrees with the results based on the relativistic Green's function approach up to the saturation density.
The finite-temperature linear response theory based on the finite-temperature relativistic Hartree-Bogoliubov (FT-RHB) model is developed in the charge-exchange channel to study the temperature evolution of spin-isospin excitations. Calculations are performed self-consistently with relativistic point-coupling interactions DD-PC1 and DD-PCX. In the charge-exchange channel, the pairing interaction can be split into isovector (T=1) and isoscalar (T=0) parts. For the isovector component, the same separable form of the Gogny D1S pairing interaction is used both for the ground-state calculation as well as for the residual interaction, while the strength of the isoscalar pairing in the residual interaction is determined by comparison with experimental data on Gamow-Teller resonance (GTR) and isobaric analog resonance (IAR) centroid energy differences in even-even tin isotopes. The temperature effects are introduced by treating Bogoliubov quasiparticles within a grand-canonical ensemble. Thus, unlike the conventional formulation of the quasiparticle random-phase approximation (QRPA) based on the Bardeen-Cooper-Schrieffer (BCS) basis, our model is formulated within the Hartree-Fock-Bogoliubov (HFB) quasiparticle basis. Implementing a relativistic point-coupling interaction and a separable pairing force allows for the reduction of complicated two-body residual interaction matrix elements, which considerably decreases the dimension of the problem in the coordinate space. The main advantage of this method is to avoid the diagonalization of a large QRPA matrix, especially at finite temperature where the size of configuration space is significantly increased. The implementation of the linear response code is used to study the temperature evolution of IAR, GTR, and spin-dipole resonance (SDR) in even-even tin isotopes in the temperature range T=0–1.5 MeV.
The strong interaction among hadrons has been measured in the past by scattering experiments. Although this technique has been extremely successful in providing information about the nucleon-nucleon and pion-nucleon interactions, when unstable hadrons are considered the experiments become more challenging. In the last few years, the analysis of correlations in the momentum space for pairs of stable and unstable hadrons measured in pp and p+Pb collisions by the ALICE Collaboration at the LHC has provided a new method to investigate the strong interaction among hadrons. In this article, we review the numerous results recently achieved for hyperon-nucleon, hyperon-hyperon, and kaon-nucleon pairs, which show that this new method opens the possibility of measuring the residual strong interaction of any hadron pair.
The topic of this work is the non-traditional baryon–baryon femtoscopy, the goal of which is to study the interaction potential between different baryon pairs, assuming that their emission source is known. A new analysis framework (CATS) has been developed to model the correlation function. Further, a new model to describe the emission source was created, which accounts for the modulation related to particle production through the decays of short-lived resonances. Finally, these new analysis methods were applied to study the strong interaction acting between proton-Lambda and Lambda-Lambda pairs.
This thesis presents several multi-messenger analyses searching for the long-sought sources of high-energy cosmic radiation. By combining data from the IceCube Neutrino Observatory and other multi-frequency observatories, the first two significant neutrino point sources - the blazar TXS 0506+056 and the Seyfert 2 galaxy NGC 1068 - are identified. Furthermore, a correlation study of high-energy neutrinos with gamma-ray blazars finds 3.2σ evidence for an astrophysical neutrino flux contribution from IBL/HBL blazars. Finally, we present a deep neural network that helps to optimize IceCube's event selection pipeline.
We use separate universe simulations with the IllustrisTNG galaxy formation model to predict the local PNG bias parameters b_Φ and b_Φδ of atomic neutral hydrogen, H_I. These parameters and their relation to the linear density bias parameter b_1 play a key role in observational constraints of the local PNG parameter f_NL using the H_I power spectrum and bispectrum. Our results show that the popular calculation based on the universality of the halo mass function overpredicts the b_Φ(b_1) and b_Φδ(b_1) relations measured in the simulations. In particular, our results show that at z ≲ 1 the H_I power spectrum is more sensitive to f_NL than previously thought (b_Φ is more negative), but is less sensitive at other epochs (b_Φ is less positive). We discuss how this can be explained by the competition of physical effects such as that large-scale gravitational potentials with local PNG (i) accelerate the conversion of hydrogen to heavy elements by star formation, (ii) enhance the effects of baryonic feedback that eject the gas to regions more exposed to ionizing radiation, and (iii) promote the formation of denser structures that shield the H_I more efficiently. Our numerical results can be used to revise existing forecast studies on f_NL using 21 cm line-intensity mapping data. Despite this first step towards predictions for the local PNG bias parameters of H_I, we emphasize that more work is needed to assess their sensitivity to the assumed galaxy formation physics and H_I modeling strategy.
Using the CLASH-VLT survey, we assembled an unprecedented sample of 1234 spectroscopically confirmed members in Abell S1063, finding a dynamically complex structure at z_cl = 0.3457 with a velocity dispersion σ_v = 1380^{+26}_{-32} km s^-1. We investigate cluster environmental and dynamical effects by analysing the projected phase-space diagram and the orbits as a function of galaxy spectral properties. We classify cluster galaxies according to the presence and strength of the [OII] emission line, the strength of the Hδ absorption line, and colours. We investigate the relationship between the spectral classes of galaxies and their position in the projected phase-space diagram, and analyse red and blue galaxy orbits separately. By correlating the observed positions and velocities with the projected phase-space constructed from simulations, we constrain the accretion redshift of galaxies with different spectral types. Passive galaxies are mainly located in the virialised region, while emission-line galaxies lie outside r_200 and are accreted into the cluster later. Emission-line and post-starburst galaxies show an asymmetric distribution in projected phase-space within r_200, with the former prominent at Δv/σ ≲ -1.5 and the latter at Δv/σ ≳ 1.5, suggesting that backsplash galaxies lie at large positive velocities. We find that low-mass passive galaxies are accreted into the cluster before high-mass ones. This suggests that we observe as passive only the low-mass galaxies accreted early into the cluster as blue galaxies, which had time to quench their star formation. We also find that red galaxies move on more radial orbits than blue galaxies. This can be explained if infalling galaxies can remain blue by moving on tangential orbits.
Narrow-band imaging surveys allow the study of the spectral characteristics of galaxies without the need for spectroscopic follow-up. In this work, we forward-model the Physics of the Accelerating Universe Survey (PAUS) narrow-band data. The aim is to improve the constraints on the spectral coefficients used to create the galaxy spectral energy distributions (SED) of the galaxy population model in Tortorelli et al. 2020. In that work, the model parameters were inferred from the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) data using Approximate Bayesian Computation (ABC). This led to stringent constraints on the B-band galaxy luminosity function parameters, but left the spectral coefficients only broadly constrained. To address this, we perform an ABC inference using CFHTLS and PAUS data. This is the first time our approach combining forward-modelling and ABC is applied simultaneously to multiple datasets. We test the results of the ABC inference by comparing the narrow-band magnitudes of the observed and simulated galaxies using Principal Component Analysis, finding very good agreement. Furthermore, we demonstrate the potential of the constrained galaxy population model to provide realistic stellar population properties by measuring them with the SED-fitting code \textsc{CIGALE}. We use CFHTLS broad-band and PAUS narrow-band photometry for a flux-limited ($i < 22.5$) sample of galaxies up to redshift $z \sim 0.8$. We find that properties such as stellar masses, star-formation rates, mass-weighted stellar ages and metallicities agree within errors between observations and simulations. Overall, this work shows the ability of our galaxy population model to forward-model a complex dataset such as PAUS and to reproduce the diversity of galaxy properties over the redshift range spanned by CFHTLS and PAUS.
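The ABC rejection scheme at the core of such an inference can be sketched generically: draw parameters from the prior, forward-simulate, and keep draws whose summary statistic lands close to the observed one. The function names and the identity-map toy model below are illustrative assumptions, not the actual PAUS/CFHTLS pipeline.

```python
import random

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_draws):
    """Minimal ABC rejection sampler: keep prior draws whose simulated
    summary statistic lies within eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy usage: "recover" a parameter whose simulator is the identity map.
random.seed(0)
posterior = abc_rejection(
    observed=0.5,
    simulate=lambda t: t,
    prior_draw=lambda: random.uniform(0.0, 1.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n_draws=2000,
)
```

In a realistic setting the simulator is the full image/SED forward model and the distance is computed on compressed summaries (e.g. PCA coefficients of the narrow-band magnitudes), but the accept/reject logic is unchanged.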
The scalar field theory of cosmological inflation nowadays constitutes one of the preferred scenarios for the physics of the early universe. In this paper we study the inflationary universe using a numerical lattice simulation. Various lattice codes have been written in recent decades and have been extensively used for understanding the reheating phase of the universe, but they have never been used to study the inflationary phase itself far from the end of inflation (i.e. about 50 e-folds before the end of inflation). In this paper we use a lattice simulation to reproduce the well-known results of some simple models of single-field inflation, in particular for the scalar field perturbation. The main model we consider is standard slow-roll inflation with a harmonic potential for the inflaton field. We explore the technical aspects that need to be accounted for in order to reproduce with precision the nearly scale-invariant power spectrum of inflaton perturbations. We also consider the case of a step potential, and show that the simulation correctly reproduces the oscillatory features in the power spectrum of this model. Even though a lattice simulation is not needed in these cases, which are well within the regime of validity of linear perturbation theory, this sets the basis for future work using lattice simulations to study more complicated models of inflation.
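For the harmonic potential, the analytic slow-roll predictions the lattice results can be checked against take a closed form. A minimal sketch in reduced Planck units ($M_{\rm pl} = 1$), using the standard textbook slow-roll expressions rather than anything specific to the paper: for $V = m^2\phi^2/2$, one has $\epsilon = \eta = 2/\phi^2$ and $\phi^2 \approx 4N$ at $N$ e-folds before the end of inflation, giving $n_s \approx 1 - 2/N$.

```python
def slow_roll_quadratic(N):
    """Slow-roll predictions for V = m^2 phi^2 / 2 in reduced Planck units,
    evaluated N e-folds before the end of inflation."""
    phi2 = 4.0 * N                       # field value squared, phi^2 ~ 4N
    eps = 2.0 / phi2                     # epsilon = (1/2) (V'/V)^2
    eta = 2.0 / phi2                     # eta = V''/V
    n_s = 1.0 + 2.0 * eta - 6.0 * eps    # scalar spectral index
    r = 16.0 * eps                       # tensor-to-scalar ratio
    return n_s, r
```

At $N = 50$ this gives $n_s = 0.96$ and $r = 0.16$, the benchmark a simulation of this model should recover.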
Analysis of large galaxy surveys requires confidence in the robustness of numerical simulation methods. The simulations are used to construct mock galaxy catalogues to validate data analysis pipelines and identify potential systematics. We compare three N-body simulation codes, abacus, gadget-2, and swift, to investigate the regimes in which their results agree. We run N-body simulations at three different mass resolutions, $6.25 \times 10^8$, $2.11 \times 10^9$, and $5.00 \times 10^9\,h^{-1}\,M_\odot$, matching phases to reduce the noise in the comparisons. We find that systematic errors in the halo clustering between different codes are smaller than the Dark Energy Spectroscopic Instrument (DESI) statistical error for $s > 20\,h^{-1}$ Mpc in the redshift-space correlation function. Through the resolution comparison we find that simulations run with a mass resolution of $2.1 \times 10^9\,h^{-1}\,M_\odot$ are sufficiently converged for systematic effects in the halo clustering to be smaller than the DESI statistical error at scales larger than $20\,h^{-1}$ Mpc. These findings show that the simulations are robust for extracting cosmological information from large scales, which is the key goal of the DESI survey. Comparing matter power spectra, we find that the codes agree to within 1 per cent for $k \leq 10\,h\,\mathrm{Mpc}^{-1}$. We also run a comparison of three initial-condition generation codes and find good agreement. In addition, we include a quasi-N-body code, FastPM, since we plan to use it for certain DESI analyses. The impact of the halo definition and the galaxy–halo relation will be presented in a follow-up study.
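The power-spectrum agreement check described above amounts to a per-bin fractional difference below a tolerance over a range of scales. A minimal NumPy sketch; the array names and toy numbers are illustrative, not DESI measurements:

```python
import numpy as np

def frac_diff(pk_a, pk_b):
    """Per-bin fractional difference (P_a - P_b) / P_b."""
    return pk_a / pk_b - 1.0

def agree_within(pk_a, pk_b, k, k_max=10.0, tol=0.01):
    """True if |P_a/P_b - 1| < tol for every bin with k <= k_max (h/Mpc)."""
    mask = k <= k_max
    return bool(np.all(np.abs(frac_diff(pk_a[mask], pk_b[mask])) < tol))

# Toy check with fabricated spectra (not survey data):
k = np.array([0.1, 1.0, 5.0])
pk_ref = np.array([1.0e4, 2.0e3, 1.0e2])
ok_half_percent = agree_within(pk_ref * 1.005, pk_ref, k)   # within 1 per cent
bad_two_percent = agree_within(pk_ref * 1.02, pk_ref, k)    # exceeds 1 per cent
```

Matching initial phases between runs, as done in the comparison, makes this per-bin ratio meaningful by removing cosmic variance from the difference.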
We provide the first combined cosmological analysis of South Pole Telescope (SPT) and Planck cluster catalogs. The aim is to provide an independent calibration of the Planck scaling relations, exploiting the cosmological constraining power of the SPT-SZ cluster catalog and its dedicated weak-lensing (WL) and X-ray follow-up observations. We build a new version of the Planck cluster likelihood. In the $\nu \Lambda$CDM scenario, focusing on the mass slope and mass bias of the Planck scaling relations, we find $\alpha_{\text{SZ}} = 1.49_{-0.10}^{+0.07}$ and $(1-b)_{\text{SZ}} = 0.69_{-0.14}^{+0.07}$, respectively. The result for the mass slope shows a $\sim 4\,\sigma$ departure from the self-similar evolution, $\alpha_{\text{SZ}} \sim 1.8$. This shift is mainly driven by the matter density value preferred by SPT data, $\Omega_m = 0.30 \pm 0.03$, lower than the one obtained from Planck data alone, $\Omega_m = 0.37_{-0.06}^{+0.02}$. The mass bias constraints are consistent both with the outcomes of hydrodynamical simulations and external WL calibrations, $(1-b) \sim 0.8$, and with the value required by the Planck cosmic microwave background cosmology, $(1-b) \sim 0.6$. From this analysis, we obtain a new catalog of Planck cluster masses $M_{500}$. We estimate the relation between the published Planck-derived $M_{\text{SZ}}$ masses and our derived masses, interpreting it as a measured mass bias. We analyse the mass, redshift and detection-noise dependence of this quantity, finding an increasing trend towards high redshift and low mass. These results mimic the effect of a departure from self-similarity in cluster evolution, showing different dependencies in the low-mass/high-mass and low-$z$/high-$z$ regimes.
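The mass bias convention behind these numbers is the usual one, $M_{\text{SZ}} = (1-b)\,M_{500}$, so a bias-corrected mass follows by division. A one-line helper illustrating the conversion; the numerical inputs are placeholders, not catalog entries:

```python
def bias_corrected_mass(m_sz, one_minus_b):
    """Recover M_500 from an SZ mass proxy under M_SZ = (1 - b) * M_500."""
    if not 0.0 < one_minus_b <= 1.0:
        raise ValueError("(1 - b) should lie in (0, 1]")
    return m_sz / one_minus_b

# Example: a 6e14 solar-mass SZ proxy with (1 - b) = 0.69 from the joint fit
m500 = bias_corrected_mass(6.0e14, 0.69)
```

A mass- or redshift-dependent $(1-b)$, as measured in the analysis, simply makes the divisor a function of those quantities rather than a constant.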