AxioNyx – Public code to simulate both Fuzzy and Cold Dark Matter at high resolution | FDM≡CDM on large scales

Bodo Schwabe and Mateja Gosenca tell us about AxioNyx, a new public code for simulating both ultralight (or “fuzzy”) dark matter (FDM) and cold dark matter (CDM) simultaneously. The code evolves the FDM on an adaptive mesh refinement grid and the CDM with N-body particles. As far as I’m aware it is the first publicly available code that can do both without needing adaptation.

The code passes a battery of sanity/consistency checks, matching linear theory when it should and deviating when it should. The paper discussed here mainly introduces AxioNyx; the new physics will come in future papers. Things Bodo, Mateja and collaborators will be tackling include: simulations with full cosmological initial conditions for combined FDM+CDM, adding baryons (a long-term project), gravitational heating of stars in FDM halos, and re-assessing earlier constraints on FDM (e.g. the Lyman-α constraints) now that FDM may be only a sub-fraction of the total dark matter content. Stay tuned, and/or get in touch with them if you’re keen to help make any of that happen :-).

One neat result from this paper was the confirmation of the “Schrödinger–Vlasov” correspondence. This essentially says that FDM and CDM behave equivalently on large enough scales. On smaller scales the fuzziness of the FDM causes it to deviate (essentially, the particle is so light that its de Broglie wavelength becomes astrophysically relevant). This correspondence has been shown statistically, and as a limiting result, in earlier papers, but this is (as far as I’m aware) the first paper where the FDM and CDM sit in the same gravitational potential in the same simulation and one can see them do the same stuff on large scales. It wasn’t surprising, but it’s still good to check and see it happen.
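
For a sense of scale (a back-of-the-envelope number from the standard FDM literature, not from the paper itself): the de Broglie wavelength of a particle of mass m moving at velocity v is

    λ_dB = 2πħ / (m v) ≈ 1.2 kpc × (10⁻²² eV / m) × (100 km/s / v)

so for the classic FDM mass of ~10⁻²² eV at typical halo velocities the wavelength is of order a kiloparsec – small compared to the scales where the correspondence holds, but vast by particle physics standards.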

The most interesting behaviour appears when the proportions of FDM and CDM are comparable (i.e. when neither component entirely dominates the other), which might be worth exploring in future papers too.

Paper: https://arxiv.org/abs/2007.08256
AxioNyx: https://github.com/axionyx

Bodo: https://bodoschwabe.github.io/
Mateja: https://cosmology.auckland.ac.nz/2018/08/20/new-postdoc/

Marika Asgari – KiDS-1000 is statistics dominated and in 3σ tension with Planck cosmology

Marika tells us about the recent Kilo Degree Survey (KiDS) cosmological results. These are the first results from KiDS now that the survey has reached 1000 square degrees.

Marika first explains how they know that the results are “statistics dominated” and not “systematics dominated”, meaning that the dominant uncertainty comes from statistical errors, not systematic ones.

She then presents the cosmological results, which primarily constrain the clumpiness of matter in the universe, and therefore constrain Ω_m and σ_8. In the combined parameter “S_8”, which their data constrain almost independently of Ω_m, they see a more than 3σ tension with the equivalent parameter inferred from Planck.
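
For reference, S_8 is the standard weak lensing combination (a textbook definition, not something new in this paper):

    S_8 ≡ σ_8 × √(Ω_m / 0.3)

Cosmic shear is mostly sensitive to this particular combination, which is why lensing surveys quote S_8 rather than σ_8 alone.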

Marika: https://www.roe.ac.uk/~ma/
Papers: https://arxiv.org/abs/2007.15632 and https://arxiv.org/abs/2007.15633
KiDS webinar: https://www.youtube.com/watch?v=kYkN6Yl8x6M&t=0s

Simone Aiola – ACT+WMAP is as powerful as Planck; CMB results on ΛCDM are robust

Simone talks about the latest Atacama Cosmology Telescope cosmology paper(s). He goes into detail at the beginning about the history of ACT and where it is now, as well as what data was in the latest results.

He then goes into how the data has been checked for robustness and how it is sufficiently consistent with WMAP to motivate combining the two data sets (i.e. WMAP for the large scales and ACT for the small scales and polarisation). Combined, WMAP and ACT are as constraining as Planck, but don’t add much additional constraining power when added to Planck (they do measure the same sky of course!).

The ACT polarisation data has sufficiently high signal-to-noise that you can even make maps of the data that look as clear as the Planck/WMAP temperature maps, which is quite stunning.

Then Simone goes into the final cosmological results. Overall, Planck and WMAP+ACT are mostly consistent. In particular, their H0 values are very similar. There is a small amount of tension that manifests as a different value of the spectral index (or, more precisely, in the n_s vs ω_b constraint contours). This is only ~2.4σ, so worth keeping an eye on, but not worth getting too excited about yet.

Papers: https://arxiv.org/abs/2007.07288 and https://arxiv.org/abs/2007.07289
Data: https://lambda.gsfc.nasa.gov/product/act/actpol_prod_table.cfm
Simone: https://users.flatironinstitute.org/~saiola/

Eva-Maria Mueller – Even BAO alone requires dark energy + cosmology from the final eBOSS data release

Eva-Maria tells us about the cosmological results from the final (cosmology relevant) SDSS/eBOSS data release. eBOSS is a spectroscopic galaxy survey, precisely mapping galaxies, quasars and the Lyman-α forest.

The two key probes are baryon acoustic oscillations (BAO) and redshift space distortions (RSD). The BAO feature acts as a standard ruler, and eBOSS now has BAO measurements at enough redshifts that their data alone constrain the expansion history of the universe precisely enough to demonstrate the existence of a dark energy component. The RSD measurements probe the growth of structure, but are unable to resolve the tension between weak lensing measurements and the CMB: their results lie right between the two, consistent with both.
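
For the uninitiated, the measured quantities are (standard definitions, nothing specific to eBOSS): at each effective redshift the BAO feature yields distances in units of the sound horizon at the drag epoch, r_d, while RSD yields the growth rate:

    transverse BAO:     D_M(z) / r_d
    line-of-sight BAO:  D_H(z) / r_d,  where D_H(z) = c / H(z)
    RSD:                f(z) σ_8(z)

Because r_d is a fixed comoving length, the redshift dependence of these ratios traces the shape of the expansion history even before the ruler is anchored to an absolute scale.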

The distance scale of the BAO ruler can also be anchored using constraints from big bang nucleosynthesis (BBN): BBN pins down the baryon density, which (given standard pre-recombination physics) fixes the sound horizon r_d, allowing eBOSS to infer the expansion rate of the universe today without reference to the CMB. Their measurement is consistent with the CMB and inconsistent with the local distance ladder measurements (e.g. SH0ES).

The BAO, RSD and CMB are all consistent with the time evolution of the (unanchored) supernovae so all these data sets can be combined to give an overall constraint. This gives very tight bounds on any deviations from a cosmological constant in ΛCDM, and on any deviations from a flat geometry within our observable universe.

The next stage in spectroscopic surveys is DESI, and then Euclid. COVID permitting, we should have the first results from DESI within three years.

Eva-Maria: https://www2.physics.ox.ac.uk/contacts/people/muellere
Paper: https://arxiv.org/abs/2007.08991

Simon Birrer – TDCOSMO H0 results with more data and fewer assumptions

Simon tells us about the strong lensing time delay measurements of the Hubble constant performed by TDCOSMO.

In the recent paper he has relaxed assumptions about the density profiles of the lenses. Specifically, the analysis no longer assumes that the density follows a power law, or an NFW profile plus stars (with the stellar mass assumed to follow the light). This naturally widens the error bars on the measurement because the “mass-sheet degeneracy” is no longer pinned down.

In order to pin this degeneracy back down they use kinematic data from the lenses to model the density profile. They also calibrate their model on an additional external data set of strong lenses. Using just the TDCOSMO lenses the central value stays the same, but when the new “SLACS” lenses are added, the measured value of H0 drops to almost exactly the Planck value. Cat, meet pigeons.
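
For context, the mass-sheet degeneracy is the statement (standard in the lensing literature, not new to this paper) that the imaging observables are unchanged under the transformation

    κ(θ) → λ κ(θ) + (1 − λ)

of the convergence profile, while the predicted time delays rescale by λ and hence the inferred Hubble constant as H0 → λ H0. Imaging alone cannot fix λ, which is why external information such as stellar kinematics is needed.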

Simon also goes into what the future will look like and what data is needed to bring the accuracy of TDCOSMO back to what it was before these assumptions were relaxed.

Simon: https://sibirrer.github.io
The paper: https://arxiv.org/abs/2007.02941
The analysis pipeline: https://github.com/TDCOSMO/hierarchy_analysis_2020_public

Benjamin Giblin – What is KiDS-1000? And why we can trust its results!

Ben Giblin tells us about the in-progress KiDS-1000 results release. At the time this video was released, the collaboration had satisfied themselves that their data are robust and pass all relevant consistency checks, but had not yet released any cosmological results (but see the data products link below, which was referenced in the Comments for v2 of the paper).

Ben: https://benjamingiblin.wixsite.com/home
Paper: https://arxiv.org/abs/2007.01845
The KiDS-1000 data products: http://kids.strw.leidenuniv.nl/DR4/lensing.php

Natalia Porqueres – You can get 3D info from quasar Lyman-α absorption lines using forward modelling

Natalia speaks about forward modelling in the context of Lyman-α absorption lines in quasar spectra. Forward modelling takes the initial conditions from your early universe model and evolves them forward in time to give the full observational prediction of every measurable quantity, for those specific initial conditions.

This is different to, e.g., extracting a “summary statistic” from the observations to constrain your model. An example of a summary statistic is the bispectrum of the late universe: it captures some of the information that non-linearity pushes out of the power spectrum, but not all of it. Whereas, in principle, if the forward modelling is done precisely enough, no information is lost.

Of course, “done precisely enough” is the crucial phrase, and forward modelling needs to balance precision with speed: the more common summary statistic methods are usually much, much faster than forward modelling.
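
To make the distinction concrete, here is a deliberately minimal toy in Python – my own illustration with made-up numbers, not Natalia’s actual pipeline. The forward model maps initial conditions to a prediction; a field-level likelihood compares every pixel, while a summary-statistic likelihood compares only a compressed quantity:

    import numpy as np

    rng = np.random.default_rng(1)
    npix = 64  # pixels in our toy 1D "universe"

    def forward_model(delta_ic):
        # Toy "gravity": linear growth plus a quadratic correction,
        # standing in for a real non-linear structure formation solver.
        d = 1.5 * delta_ic
        return d + 0.3 * (d**2 - np.mean(d**2))

    # Mock data: evolve some true initial conditions, add pixel noise.
    true_ic = rng.normal(size=npix)
    noise = 0.1
    data = forward_model(true_ic) + noise * rng.normal(size=npix)

    def loglike_field(delta_ic):
        # Field-level likelihood: compare every pixel of the prediction
        # to the data -- in principle no information is thrown away.
        resid = data - forward_model(delta_ic)
        return -0.5 * np.sum((resid / noise) ** 2)

    def loglike_summary(delta_ic):
        # Summary-statistic likelihood: compare only the field variance
        # (a stand-in for a power spectrum); all phase info is discarded.
        dvar = np.var(data) - np.var(forward_model(delta_ic))
        return -0.5 * (dvar / 0.05) ** 2

    print(loglike_field(true_ic), loglike_summary(true_ic))

In a real analysis the initial conditions are a full 3D field with millions of parameters, sampled with gradient-based methods such as Hamiltonian Monte Carlo – which is exactly where the computational cost comes from.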

So, Natalia presents a new method for evolving the non-linearities faster, one that in particular does a much better job of capturing under-densities than the typical particle-mesh codes. She also shows that this forward modelling technique can extract, statistically, information about the 3D regions between absorption lines, because it uses the full set of (correlated) initial conditions as its model parameters.

Natalia: https://nataliaporqueres.wixsite.com/home
Paper: https://arxiv.org/abs/2005.12928

José Bernal – How to tell if your cosmological approximations are accurate

José tells us about how we can make sure our predictions from cosmology models can be both precise *and* accurate.

José: https://joseluisbernal.wixsite.com/home
Paper1: https://arxiv.org/abs/2005.10384
Paper2: https://arxiv.org/abs/2005.09666

Modified CLASS code: https://github.com/nbellomo/Multi_Class
José’s talk from the Cosmology from Home conference: https://youtu.be/WiTcAUXIUO4

Amanda Weltman – Fast radio bursts and cosmology

Amanda Weltman tells us about fast radio bursts (FRBs), which have been in the news recently in the context of the “missing baryons”. She tells us about that measurement (and her own theoretical work preceding it), but also about FRBs in general and how they’ll be useful for cosmology.

FRBs are exactly what they sound like: short bursts of radio-frequency radiation detected from outside the solar system. We still don’t know for certain what their origin is, but it is possible that at least some of them come from magnetars (neutron stars with very large magnetic fields).

A very useful property of FRBs is that their radio waves are dispersed by the free electrons in the ionised intergalactic medium, so lower frequencies arrive slightly later than higher ones. This makes it possible to measure the electron density of the intergalactic medium, and/or to measure how far away the FRBs are. In each case this is based on how much the frequencies within each burst have dispersed by the time we detect them.
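
Quantitatively (the standard cold plasma dispersion result, not specific to the papers below): the extra arrival time delay at observing frequency ν is set by the “dispersion measure” DM, the column density of free electrons along the line of sight:

    DM = ∫ n_e dl
    Δt ≈ 4.15 ms × (DM / pc cm⁻³) × (ν / GHz)⁻²

Measuring the ν⁻² sweep of the burst across the receiver band gives the DM directly; combined with a redshift for the host galaxy, it probes the electron content of everything along the way.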

FRB theory wiki: https://frbtheorycat.org/index.php/Main_Page
Amanda: https://en.wikipedia.org/wiki/Amanda_Weltman

Relevant papers:
Probing Diffuse Gas with Fast Radio Bursts https://arxiv.org/abs/1909.02821
Fast Radio Burst Cosmology and HIRAX https://arxiv.org/abs/1905.07132
A Living Theory Catalogue for FRBs https://arxiv.org/abs/1810.05836

Clare Burrage – Atomic lab experiments rule out almost all of chameleon dark energy model-space

Clare tells us about how chameleon dark energy models can be very tightly constrained by simple atomic lab experiments (well, simple compared to particle accelerators and space telescopes).

Chameleon models were popular for dark energy because their non-linear potentials generically create screening mechanisms, which stop them from generating a “fifth force” in dense environments even though they couple to matter. This means we wouldn’t normally see their effects on Earth. However, in a suitably precise atomic experiment the screening can be minimised and the chameleon’s effect measured.
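
Schematically (this is the textbook chameleon mechanism, paraphrased from the standard literature rather than from this paper): the field φ responds to an effective potential that depends on the local matter density ρ,

    V_eff(φ) = V(φ) + ρ e^{βφ/M_Pl},   with e.g. V(φ) = Λ⁴ (1 + Λⁿ/φⁿ)

As ρ grows, the minimum of V_eff shifts and the effective mass of the field increases, so the fifth force becomes short-ranged (screened) in dense environments – which is why atoms in a vacuum chamber make ideal probes: they are too small to be screened themselves.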

In less than five years, Clare and her collaborators went from the idea to the completed experiment, which rules out almost all of the viable parameter space where a chameleon model could explain dark energy. Only a tiny sliver of allowed space is left, albeit at fundamental parameter values that would be considered natural – so maybe the chameleon is hiding right there, waiting?

Most relevant paper: https://arxiv.org/abs/1812.08244
Clare: https://www.nottingham.ac.uk/physics/people/clare.burrage