Simon Birrer – TDCOSMO H0 results with more data and fewer assumptions

Simon tells us about the strong lensing time delay measurements of the Hubble constant performed by TDCOSMO.

In a recent paper, he relaxed the assumptions about the density profiles of the lenses. Specifically, in this analysis it is no longer assumed that the density follows a power law, or an NFW profile plus stars (with the assumption that the mass follows the light). This naturally widens the error bars on the measurement, because the “mass sheet degeneracy” is no longer pinned down.

To pin this degeneracy back down, they use kinematic data from the lenses to model the density profile. They also calibrate their model on an additional external data set of strong lenses. When using just the TDCOSMO lenses, the central value stays the same, but when adding the new “SLACS” lenses, the measured value of H0 drops to almost exactly the Planck value. Cat, meet pigeons.
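The degeneracy itself is simple to state: rescaling the lens convergence while adding a uniform "mass sheet" leaves all image observables unchanged but rescales the predicted time delays, and hence the inferred H0. A minimal sketch (the numbers are illustrative only, not taken from the TDCOSMO analysis):

```python
# Toy sketch of the mass sheet degeneracy (MSD). The lambda and H0 values
# below are illustrative only; nothing here is taken from the paper.

def mass_sheet_transform(kappa, lam):
    """Rescale a convergence value: kappa -> lam * kappa + (1 - lam).
    Image positions and flux ratios are unchanged by this transform."""
    return lam * kappa + (1.0 - lam)

def rescaled_h0(h0_inferred, lam):
    """Predicted time delays scale by lam under the MSD, so the H0
    inferred from a measured delay scales by lam as well."""
    return lam * h0_inferred

# Example: an unnoticed lam = 0.9 would shift an inferred H0 of
# 74 km/s/Mpc down to 66.6 km/s/Mpc.
print(rescaled_h0(74.0, 0.9))
```

This is why extra information (kinematics, external lens samples) is needed: lensing data alone cannot distinguish lam from 1.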

Simon also goes into what the future will look like and what data is needed to bring the accuracy of TDCOSMO back to what it was before these assumptions were relaxed.

Simon: https://sibirrer.github.io
The paper: https://arxiv.org/abs/2007.02941
The analysis pipeline: https://github.com/TDCOSMO/hierarchy_analysis_2020_public

Benjamin Giblin – What is KiDS-1000? And why we can trust its results!

Ben Giblin tells us about the in-progress KiDS-1000 results release. At the time this video is released, the collaboration have satisfied themselves that their data is robust and passes all relevant consistency checks, but haven’t yet released any cosmological results (but see the data products link below, which was referenced in the Comments for v2 of the paper).

Ben: https://benjamingiblin.wixsite.com/home
Paper: https://arxiv.org/abs/2007.01845
The KiDS-1000 data products: http://kids.strw.leidenuniv.nl/DR4/lensing.php

Natalia Porqueres – You can get 3D info from quasar Lyman α absorption lines using forward modelling

Natalia speaks about forward modelling in the context of Lyman α absorption lines of quasar spectra. Forward modelling essentially takes the initial conditions from your early universe model and evolves them forward to give the full observational prediction of all measurable things, for those specific initial conditions.

This is different to, e.g., extracting a “summary statistic” from the full set of initial conditions and/or observations and using that to constrain your model. An example of a summary statistic might be a bispectrum in the late universe. The bispectrum captures some, but not all, of the information that the power spectrum loses to non-linearity. In principle, if the forward modelling is done precisely enough, no information is lost at all.

Of course, “done precisely enough” is the crucial phrase, and forward modelling needs to balance precision with speed. The more common summary statistic methods are usually much, much faster than forward modelling.
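A toy way to see the information loss (my own illustration, not from the paper): two fields with identical power spectra can look completely different, because the power spectrum discards all Fourier phase information.

```python
import numpy as np

# Toy illustration of why a summary statistic loses information: shuffling
# the Fourier phases of a field preserves its power spectrum exactly, yet
# destroys all higher-order (e.g. bispectrum) content.
rng = np.random.default_rng(0)
field = rng.normal(size=64) ** 2          # a simple non-Gaussian 1D "field"
fk = np.fft.rfft(field)

# Randomise the phases but keep the amplitudes. The zero-frequency and
# Nyquist bins must stay real, so leave their phases at zero.
phases = rng.uniform(0.0, 2.0 * np.pi, size=fk.size)
phases[0] = phases[-1] = 0.0
shuffled = np.fft.irfft(np.abs(fk) * np.exp(1j * phases), n=field.size)

# Identical power spectra, completely different fields:
print(np.allclose(np.abs(fk) ** 2, np.abs(np.fft.rfft(shuffled)) ** 2))  # True
print(np.allclose(field, shuffled))                                      # False
```

A forward model that works at the level of the full field keeps exactly the phase information that this shuffle throws away.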

So, Natalia presents a new method for evolving non-linearities faster, one that in particular does a much better job at capturing under-densities than the typical particle-mesh codes. She also shows that this forward modelling technique is able to extract, statistically, information about the 3D regions between absorption lines because it uses the full set of (correlated) initial conditions as its set of model parameters.

Natalia: https://nataliaporqueres.wixsite.com/home
Paper: https://arxiv.org/abs/2005.12928

José Bernal – How to tell if your cosmological approximations are accurate

José tells us about how we can make sure our predictions from cosmology models can be both precise *and* accurate.

José: https://joseluisbernal.wixsite.com/home
Paper1: https://arxiv.org/abs/2005.10384
Paper2: https://arxiv.org/abs/2005.09666

Modified CLASS code: https://github.com/nbellomo/Multi_Class
José’s talk from the Cosmology from Home conference: https://youtu.be/WiTcAUXIUO4

Amanda Weltman – Fast radio bursts and cosmology

Amanda Weltman tells us about fast radio bursts (FRBs), which have been in the news recently in the context of the “missing baryons”. She tells us about that measurement (and her own theoretical work preceding it), but also about FRBs in general and how they’ll be useful for cosmology.

FRBs are what they sound like: short bursts of radio-frequency radiation detected from outside the solar system. We still don’t know for certain what their origin is, but it is possible that at least some of them come from magnetars (neutron stars with very large magnetic fields).

A very useful property of FRBs is that their signal is dispersed as it travels through the intergalactic medium, because the radio waves interact with the free electrons in that ionised gas. This makes it possible to measure the electron density of the intergalactic medium, and/or how far away the FRBs are. In each case, the measurement is based on how much the frequencies within each burst have spread out in arrival time by the time we detect them.
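For a rough sense of scale: the standard cold-plasma delay between two frequencies goes as the dispersion measure (DM) times the difference of the inverse squared frequencies, with a constant of about 4.149 ms GHz² per pc cm⁻³. A minimal sketch (the DM value and band edges below are illustrative, not from any particular burst):

```python
# Illustrative only: the DM value and observing band below are made up.
K_DM_MS = 4.149  # dispersion constant, ms GHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Extra arrival delay (ms) at the low-frequency edge of the band
    relative to the high edge, for dispersion measure dm in pc/cm^3."""
    return K_DM_MS * dm * (nu_lo_ghz ** -2 - nu_hi_ghz ** -2)

# A burst with DM = 500 pc/cm^3 observed across 1.2-1.5 GHz arrives
# about half a second later at the bottom of the band than at the top:
print(dispersion_delay_ms(500.0, 1.2, 1.5))  # ~518.6 ms
```

Measuring this frequency sweep gives the DM, i.e. the electron column along the line of sight; combine it with a redshift and you have a cosmological probe.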

FRB theory wiki: https://frbtheorycat.org/index.php/Main_Page
Amanda: https://en.wikipedia.org/wiki/Amanda_Weltman

Relevant papers:
Probing Diffuse Gas with Fast Radio Bursts https://arxiv.org/abs/1909.02821
Fast Radio Burst Cosmology and HIRAX https://arxiv.org/abs/1905.07132
A Living Theory Catalogue for FRBs https://arxiv.org/abs/1810.05836

Clare Burrage – Atomic lab experiments rule out almost all of chameleon dark energy model-space

Clare tells us about how chameleon dark energy models can be very tightly constrained by simple atomic lab experiments (well, simple compared to particle accelerators and space telescopes).

Chameleon models were popular for dark energy because their non-linear potentials generically create screening mechanisms, which stop them from generating a “fifth force” even though they couple to matter. This means we wouldn’t normally see their effects on Earth. However, in a suitably precise atomic experiment the screening can be minimised and their effect measured.
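As a toy picture of the screening (arbitrary units and purely illustrative constants, not Clare's calculation): for the classic inverse-power-law chameleon potential, the matter coupling adds a density-dependent term to the effective potential, so the field's effective mass grows with ambient density and the fifth force is suppressed exactly where it would be easiest to look for.

```python
import math

# Toy picture of chameleon screening in arbitrary units. The constants
# are purely illustrative; real chameleon parameters are nothing like these.
LAMBDA5, BETA, M_PL = 1.0, 1.0, 1.0

def phi_min(rho):
    """Minimum of V_eff(phi) = LAMBDA5/phi + BETA*rho*phi/M_PL.
    Setting V_eff'(phi) = 0 gives phi_min = sqrt(LAMBDA5*M_PL/(BETA*rho))."""
    return math.sqrt(LAMBDA5 * M_PL / (BETA * rho))

def mass_squared(rho):
    """Effective mass^2 at the minimum: V_eff''(phi_min) = 2*LAMBDA5/phi_min^3.
    It grows with density, so the force range shrinks in dense environments."""
    return 2.0 * LAMBDA5 / phi_min(rho) ** 3

# A much denser environment gives a much heavier (more screened) chameleon:
print(mass_squared(1e6) > mass_squared(1.0))  # True
```

The atomic experiments work by making the "environment" a near-perfect vacuum with a single atom as the test mass, so the screening is minimised.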

In less than five years, Clare and her collaborators went from the idea to the completed experiment, which rules out almost all of the viable parameter space in which a chameleon model could explain dark energy. Only a tiny sliver of allowed space is left, albeit at fundamental parameter values that would be considered natural – so maybe the chameleon is hiding right there, waiting?

Most relevant paper: https://arxiv.org/abs/1812.08244
Clare: https://www.nottingham.ac.uk/physics/people/clare.burrage

Jurek Bauer – Fuzzy dark matter arising from GUT scale physics should be ruled in/out by SKA

Jurek tells us about the prospects for constraining axion (aka ultralight, aka fuzzy) dark matter with future 21cm intensity mapping surveys such as SKA and HIRAX.

Axion models arising from specific energy scales predict that an axion of a given mass will provide only a certain fraction of the total dark matter. It seems plausible that with SKA we will be able to detect ultralight dark matter even if it arises from a GUT-scale axion model. An observational noise model for SKA was included to make this claim, but as yet no theoretical uncertainty is included in the calculation.

Paper: https://arxiv.org/abs/2003.09655

Hamsa Padmanabhan – The overlap between HI halo modelling and cosmology

Hamsa tells us about how baryonic gases arrange themselves inside galaxies, specifically in the context of the HI halo model (with a brief detour to discuss other gases like molecular hydrogen and carbon monoxide).

This is a great talk in its own right, full of really useful information for cosmologists who want to know how intensity mapping, etc, will be used for cosmology – but it also acts as a good companion talk to Jurek Bauer’s talk on constraining axion dark matter using intensity mapping (https://youtu.be/bMlrDOWw978). Hamsa was a coauthor on Jurek’s paper and the expert in that collaboration on the HI/intensity mapping part.

This video builds up to this paper, https://arxiv.org/abs/2002.01489, but in getting there it covers the whole background of modelling HI and other baryonic gases within galaxies in an information-packed but accessible way.

Hamsa: https://fiteoweb.unige.ch/~padmanab/

Colin Hill – Early dark energy doesn’t make cosmology concordant again

Colin tells us how, even though early dark energy can alleviate the Hubble tension, it does so at the expense of increasing other tensions. Early dark energy can raise the expansion rate inferred from the cosmic microwave background (CMB) by shrinking the sound horizon at the last-scattering surface. However, early dark energy also suppresses the growth of perturbations that are within the horizon while it is active. This means that, to fit the CMB, the matter density must increase (and the spectral index becomes more blue-tilted). The consequence is that the predicted matter power spectrum gets bigger.
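A crude way to see the H0 side of this (my own sketch, not the paper's calculation): the CMB pins down the angular sound horizon theta_* = r_s / D_M, and at fixed late-time history D_M scales roughly as 1/H0, so at fixed theta_* the inferred H0 scales roughly as 1/r_s.

```python
# Crude scaling argument, not the paper's calculation. The CMB fixes
# theta_* = r_s / D_M, and D_M scales roughly as 1/H0, so at fixed
# theta_* the inferred H0 scales roughly as 1/r_s.

def h0_inferred(h0_fid, rs_fid, rs_new):
    """Rescale a fiducial H0 when the sound horizon changes, holding the
    observed CMB angular scale fixed (crude 1/r_s scaling only)."""
    return h0_fid * rs_fid / rs_new

# Illustrative numbers: shrinking r_s from 147 Mpc to 139 Mpc lifts an
# inferred H0 of 67.4 km/s/Mpc to roughly 71.3 km/s/Mpc.
print(round(h0_inferred(67.4, 147.0, 139.0), 1))  # 71.3
```

The catch, as Colin explains, is that the ingredient that shrinks r_s also alters perturbation growth, and that is what the other data sets punish.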

In their paper, Colin and his coauthors show that this prediction conflicts with the weak lensing measurements from DES, KiDS and HSC, and therefore including those experiments in a full data analysis makes things discordant again. The Hubble parameter is pulled back down, restoring most of the tension between local and CMB measurements of H0, and the tension in S_8 is magnified by the increased mismatch between the predicted and measured matter power spectrum.

It is also worth noting that, if you exclude the local measurements of H0, there is no preference for early dark energy in the data.

There is hope, perhaps. If the sound horizon could be changed without altering the growth of perturbations that might still be a valid resolution, but it is unlikely to be caused by early dark energy (alone).

Paper: https://arxiv.org/abs/2003.07355
Colin: http://user.astro.columbia.edu/~jch/

Adam Riess – Cepheid crowding is not the cause of the Hubble tension

Adam tells us about what he and his collaborators considered to be the leading candidate for a systematic error in the SH0ES measurement of the expansion rate of the Universe. This is “Cepheid crowding”: the possibility that background sources change our interpretation of Cepheid brightnesses, ruining one step in the SH0ES distance ladder.

They devise a nice way to test whether the crowding is correctly accounted for and find that it is. So crowding cannot be the “explanation” of an error in the distance ladder measurement of H0.

He also stresses that both the early and late universe measurements of H0 are now backed up by multiple different measurements. Therefore, if the resolution isn’t fundamental physics, then no single systematic can entirely solve the tension.

We also discuss a few topics around the H0 tension, including what resolution of the tension he would pick as most likely if forced to gamble (answer: a deviation from vanilla ΛCDM in the early universe).

Paper: https://arxiv.org/abs/2005.02445
Adam: https://www.stsci.edu/~ariess/