Volker Springel | GADGET-4

Volker Springel talks about the new GADGET-4 code.

Featuring all the things you wanted to know about GADGET-4 but were afraid to ask, including:

– What new algorithms are used to make it better and faster than earlier versions
– Why you never heard of GADGET-3
– What new features you can now use when running cosmological simulations (e.g. varying the algorithms, or outputting lightcones, halo catalogues and merger trees “on the fly”)
– Why storing the locations of your simulation particles as integers is better than storing them as floating-point numbers
– And what the author of the most-used simulation code in cosmology thinks are the most interesting questions in cosmology at the moment (both related and unrelated to simulations)
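On the integer-coordinate point, here is a minimal sketch of why fixed-point positions help (the bit width, box size, and mapping below are illustrative choices of mine, not GADGET-4's actual internals). Floating-point numbers have finer resolution near the origin than near the far side of the box, whereas an integer lattice has uniform resolution everywhere, and periodic wrapping becomes exact modular arithmetic:

```python
import numpy as np

# Illustrative parameters (not GADGET-4's actual internals)
BITS = 32                 # bits per coordinate
NCELL = 2**BITS           # representable positions along one axis
BOXSIZE = 100.0           # box side length, in Mpc/h say

def to_int(x):
    """Map a float position in [0, BOXSIZE) onto the integer lattice."""
    return int(x / BOXSIZE * NCELL) % NCELL

def to_float(i):
    """Map an integer coordinate back to a physical position."""
    return i * BOXSIZE / NCELL

# 1) Integer coordinates have *uniform* resolution everywhere in the box:
int_resolution = BOXSIZE / NCELL                 # same at x = 0.001 and x = 99.9
# whereas float32 resolution degrades with distance from the origin:
float_res_near_edge = float(np.spacing(np.float32(99.9)))

# 2) Periodic wrapping is exact modular arithmetic, no branches or rounding:
i = to_int(99.9)
j = (i + to_int(0.2)) % NCELL                    # step 0.2 across the boundary
```

With these numbers the float32 spacing near the box edge is hundreds of times coarser than the uniform integer spacing, and the wrapped position lands back near 0.1 exactly as expected.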

You could read the 80-page paper, or you could just watch this video instead!

Volker: https://www.mpa-garching.mpg.de/person/55019/2377
GADGET-4 code: https://wwwmpa.mpa-garching.mpg.de/gadget4/
Code paper: https://arxiv.org/abs/2010.03567

Alvaro Pozo – Potential evidence for wave dark matter (via core-halo transition in dwarf galaxies)

Alvaro tells us about a recent paper where he and collaborators detect the transition between a core (flat density profile) and halo (power-law density profile) in dwarf galaxies.

The full core + halo profile matches very closely what is expected in wave/ultralight/fuzzy/axionic dark matter simulations (without baryonic effects included). That is, there is a very flat core, which then drops off suddenly and then flattens off to a decaying power-law profile. The core matches the soliton expected in wave dark matter and the halo matches an outer NFW profile expected outside the soliton.
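For reference, the core+halo shape described above can be sketched with the standard fitting forms: the soliton core profile from the FDM simulations of Schive et al. (2014) joined onto an NFW envelope. The numerical parameters below are made up for illustration, not fits to any real dwarf galaxy:

```python
import numpy as np

def soliton(r, rho_c, r_c):
    """Soliton core fit from FDM simulations (Schive et al. 2014):
    flat inside the core radius r_c, then a sharp drop."""
    return rho_c / (1.0 + 0.091 * (r / r_c)**2)**8

def nfw(r, rho_s, r_s):
    """NFW profile: rho ~ r^-1 inside r_s, falling to rho ~ r^-3 outside."""
    x = r / r_s
    return rho_s / (x * (1.0 + x)**2)

# Illustrative (made-up) parameters, arbitrary units:
rho_c, r_c = 1.0, 0.5      # core density and radius
rho_s, r_s = 0.02, 2.0     # NFW scale density and radius

r = np.logspace(-2, 1.5, 200)
rho = np.maximum(soliton(r, rho_c, r_c), nfw(r, rho_s, r_s))  # crude piecewise join
```

Plotted on log-log axes this reproduces the qualitative pattern in the paper: a flat core, a sudden drop at a few core radii, then a decaying power law.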

They also detect evidence for tidal stripping of the matter in the galaxies. The galaxies closer to the centre of the Milky Way have their core-to-halo transition occur at lower densities (despite the core density itself not being systematically lower). The transition also appears to happen closer to the centre of the galaxy, which matches simulations.

Of course the core+halo pattern they have clearly observed might be due to something else, but the match between wave dark matter simulations and observations is impressive.

The huge caveat is that the dark matter mass they use is very small and in significant tension with Lyman-alpha constraints on wave-like dark matter. This might indicate that the source of the universal core+halo pattern they’re observing is something else, or it might indicate that wave dark matter is more complicated than the vanilla models…

Stay tuned to the arXiv for future papers looking at this in more detail!

Paper: https://arxiv.org/abs/2010.10337

A tantalising hint of parity violation in the cosmic microwave background (Minami and Komatsu)

Eiichiro Komatsu and Yuto Minami talk about their recent work, first devising a way to extract a parity violating signature in the cosmic microwave background (i.e. birefringence) and then measuring it in Planck 2018 data.

They get a 2.4 sigma hint of a result, which is “important, if true”.

This signal is measured via correlation of E mode and B mode polarisation in the CMB. If the universe is birefringent then E mode polarisation would change into B mode and there would be a non-zero correlation between the two measured modes. Unfortunately, if the detector angle on the telescope wasn’t calibrated perfectly this would mimic the interesting signal. Yuto and Eiichiro’s new method is to measure this detector angle by looking at the E-B correlation in the foregrounds, where the light hasn’t travelled far enough to be affected by any potential birefringence in the universe.

This allows them to partially distinguish between the two types of measured E-B correlation. And with this method they get the hint of a signal for the new physics in the Planck 2018 data.
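To see how the trick works, here is a toy version of the standard spin-2 rotation that generates an E-B cross-spectrum. All the spectra and angles below are made-up illustrative numbers (the birefringence angle is chosen to be of the same order as the reported hint), and this is a sketch of the idea, not the authors' actual pipeline:

```python
import numpy as np

def observed_eb(cl_ee, cl_bb, angle_rad, cl_eb=0.0):
    """E-B cross-spectrum induced when the polarisation basis is rotated by
    angle_rad (the standard rotation of the E/B decomposition)."""
    return (0.5 * np.sin(4 * angle_rad) * (cl_ee - cl_bb)
            + np.cos(4 * angle_rad) * cl_eb)

# Toy spectra (arbitrary units, not real Planck data):
cl_ee_cmb, cl_bb_cmb = 1.0, 0.01
cl_ee_fg,  cl_bb_fg  = 0.3, 0.1

alpha = np.deg2rad(0.5)    # hypothetical miscalibration of the detector angle
beta  = np.deg2rad(0.35)   # hypothetical cosmic birefringence angle

# Foregrounds are rotated by the miscalibration alone (that light hasn't
# travelled far enough to pick up birefringence); the CMB by alpha + beta:
eb_fg  = observed_eb(cl_ee_fg,  cl_bb_fg,  alpha)
eb_cmb = observed_eb(cl_ee_cmb, cl_bb_cmb, alpha + beta)

# Inverting the foreground relation recovers alpha; beta then follows:
alpha_hat = 0.25 * np.arcsin(2 * eb_fg / (cl_ee_fg - cl_bb_fg))
beta_hat  = 0.25 * np.arcsin(2 * eb_cmb / (cl_ee_cmb - cl_bb_cmb)) - alpha_hat
```

In this idealised (noise-free, zero intrinsic E-B) toy, the recovered angles match the inputs exactly; the real analysis has to contend with noise and possible intrinsic foreground E-B correlation, which is the systematic mentioned below.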

The method can be applied to the data from all the other telescopes that have measured the polarisation of the microwave background, so the signal can be confirmed, ruled out, or at least examined by SPT, ACT, POLARBEAR, etc.

Yuto and Eiichiro are also working with Planck to see if they can further rule out other systematics, e.g. an intrinsic E-B correlation in the foreground polarisation.

Yuto: https://orcid.org/0000-0003-2176-8089
Eiichiro: https://wwwmpa.mpa-garching.mpg.de/~komatsu/

Paper: https://arxiv.org/abs/2011.11254
Published in PRL: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.125.221301
The slides: https://wwwmpa.mpa-garching.mpg.de/~komatsu/presentation/cosmotalk_birefringence.pdf

Talk date: Nov. 23, 2020

Maybe Milgromian gravity solves the Hubble tension!? – The KBC void & νHDM model (Haslbauer & Banik)

Moritz Haslbauer and Indranil Banik talk about the Keenan, Barger and Cowie (KBC) void and the νHDM model of cosmology.

The KBC void is a locally observed ~300 Mpc scale under-density that appears to be impossible within ΛCDM (under-densities shouldn’t have emptied out this much by now).

νHDM is a model that has sterile neutrinos as a hot dark matter component and enhanced gravity in environments with a weak gravitational field. This dark matter adequately explains the CMB and expansion history of the universe, but doesn’t cluster on the smallest scales. The modified gravity (essentially Milgromian dynamics, or MOND) then kicks in on these scales to produce phenomena like the correct rotation curves in galaxies.
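As a rough illustration of how MOND-like enhanced gravity kicks in at weak fields, here is a toy rotation-curve calculation using the "simple" MOND interpolating function. The Milky-Way-like point mass is my own crude stand-in, not anything from the paper:

```python
import numpy as np

G  = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10          # MOND acceleration scale, m s^-2

def mond_accel(g_newton):
    """'Simple' MOND interpolating function: g = g_N * nu(g_N/a0) with
    nu(y) = 1/2 + sqrt(1/4 + 1/y).  Reduces to g_N when g_N >> a0, and to
    sqrt(g_N * a0) in the deep-MOND regime g_N << a0 (weak fields)."""
    y = g_newton / A0
    return g_newton * (0.5 + np.sqrt(0.25 + 1.0 / y))

def circular_velocity(mass_kg, r_m):
    """Circular velocity around a point mass with MOND-enhanced gravity."""
    g_newton = G * mass_kg / r_m**2
    return np.sqrt(mond_accel(g_newton) * r_m)

# A Milky-Way-like baryonic mass treated as a point mass (a crude toy):
m_sun = 1.989e30
kpc = 3.086e19
v30 = circular_velocity(6e10 * m_sun, 30 * kpc)
v60 = circular_velocity(6e10 * m_sun, 60 * kpc)
```

Doubling the radius barely changes the velocity: the deep-MOND limit gives the asymptotically flat rotation curves (v^4 = G*M*a0) that motivate the modification, without any particle dark matter in the galaxy.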

Moritz and Indranil give an intro to both KBC and νHDM, and then explain how this model is consistent with the main tent-poles of modern cosmology (e.g. the CMB anisotropies, nucleosynthesis, the displacement of the gas and weak lensing in the bullet cluster, galaxy rotation curves, the clustering of galaxies) and can also alleviate some of the tensions in the standard ΛCDM model.

They focus on two specific tensions. The size and depth of the KBC void, and the Hubble tension. νHDM predicts stronger gravity in under-dense regions, so allows the KBC void to exist as-measured. This has implications for the locally measured Hubble parameter because a) the void itself would increase the local expansion rate but b) in νHDM this void would also be expanding faster than it would if it were placed in a ΛCDM universe.

At any specific point in space the exact strength of the enhancement of gravity will depend on the local environment due to the “external field effect” (an integral part of MOND since its foundation in the 1980s). In principle this is predictable by measuring the local environment, but this would require better measurements than we currently have. It is also in principle predictable statistically using a large cosmological simulation in the νHDM paradigm. So far such simulations only go up to a 750 Mpc box size (https://iopscience.iop.org/article/10.1088/0004-637X/772/1/10), not sufficient to address the KBC void (which the current study considers semi-analytically). Smaller hydrodynamical cosmological simulations in νHDM are currently underway in Bonn to address galaxies.

Therefore, in current empirical fits, the size of this effect, at each point in space, is essentially a free parameter. Still, it is only one free parameter and while it remains free the important question is, ‘does this parameter have enough explanatory power to justify its existence?’ – Moritz and Indranil argue ‘absolutely yes!’

The paper: https://arxiv.org/abs/2009.11292
Moritz: https://moritzhaslbauer.jimdofree.com/
Indranil: https://www.youtube.com/channel/UCwO0bEeE6oNahkt8dWQFcXw

A blog post by Indranil, Moritz (and co-author Pavel) on the same topic: https://tritonstation.com/2020/10/23/big-trouble-in-a-deep-void/

George Zahariade – Quantum gravity adds (v quiet) noise to gravitational wave detectors

George tells us what happens in gravitational wave detectors when you quantise the gravitational field.

He talks about a calculation he did with Maulik Parikh and Frank Wilczek which examines what effect quantising the gravitational field would have on gravitational wave detectors.

They first treat the detector and gravitational field quantum mechanically. For certain gravitational wave states (e.g. a coherent state, a squeezed state and a thermal state) they are then able to solve the gravitational field parts of the resulting path integral (or canonical expectation values).

In the resulting expression they then take the most probable path for the detector (i.e. the classical path) and determine an equation of motion for the distance between the ends of the detector (i.e. the classical equation of motion for the detector, with quantum effects from the gravitational field included).

This new equation of motion is like the purely classical one except with the addition of a new noise term. In the case of a squeezed state this noise can be exponentially enhanced, which might have implications for gravitational waves from inflation, which at least start out in a squeezed state.

George: https://www.linkedin.com/in/george-zahariade-844b27b4
Papers: https://arxiv.org/abs/2010.08208 and https://arxiv.org/abs/2010.08205
Essay: https://arxiv.org/abs/2005.07211

Ryan Keeley – Maybe inflation is the solution to the Hubble tension!?

Ryan tells us about how the Hubble tension (between Planck measurements of the cosmic microwave background temperature anisotropies and SH0ES measurements of the expansion rate of the universe) can be completely solved with a non-standard primordial power spectrum for the curvature perturbation, which could arise e.g. if there is a kink in the inflationary potential.

The non-standard power spectrum has an oscillatory feature that exactly mimics the effects of a slightly different value for the expansion rate today. They find this power spectrum by explicitly reconstructing it, so they aren’t supplying a well-motivated a priori model. However, their work does represent a proof of concept that a non-standard power spectrum could mimic the effects of a different expansion rate.
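As a toy version of such a feature (not the authors' actual free-form reconstruction), one can multiply the standard power-law primordial spectrum by a small log-space oscillation; the feature amplitude, frequency and phase below are made up for illustration:

```python
import numpy as np

# Nearly scale-invariant primordial spectrum, Planck-like parameters:
A_S, N_S, K_PIVOT = 2.1e-9, 0.965, 0.05   # amplitude, tilt, pivot in Mpc^-1

def p_primordial(k, feature_amp=0.0, feature_freq=10.0, phase=0.0):
    """Power-law spectrum with an optional multiplicative oscillatory feature.
    The feature parameters are illustrative; Keeley et al. reconstruct the
    spectrum freely rather than assuming a functional form like this."""
    smooth = A_S * (k / K_PIVOT)**(N_S - 1.0)
    wiggle = 1.0 + feature_amp * np.sin(feature_freq * np.log(k / K_PIVOT) + phase)
    return smooth * wiggle

k = np.logspace(-4, 0, 500)
p_smooth = p_primordial(k)
p_featured = p_primordial(k, feature_amp=0.05)   # a 5% oscillatory feature
```

The point of the paper is that a suitably placed oscillation of roughly this kind can shift the inferred CMB acoustic scale in the same way a different H0 would.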

While the Hubble tension remains unsolved and while all other models to explain it suffer from their own problems, work like this remains well motivated. It would perhaps be a bit fine tuned to have a feature at exactly the right place in the primordial power spectrum to mimic the effects of H0 today, but there could be many features and if one happened to align then this would be what we would see, so it can’t be ruled out a priori.

Future work will test this with polarisation data and the matter power spectrum… so stay tuned. If this is the solution it might leave measurable signatures in those results.

Ryan: http://cosmology.kasi.re.kr/members.php?member=ryan
The paper: https://arxiv.org/abs/2006.12710

AxioNyx – Public code to simulate both Fuzzy and Cold Dark Matter in hires | FDM≡CDM on large scales

Bodo Schwabe and Mateja Gosenca tell us about AxioNyx, a new public code for simulating both ultralight (or “fuzzy”) dark matter (FDM) and cold dark matter (CDM) simultaneously. The code simulates the FDM using adaptive mesh refinement and the CDM using N-body particles. As far as I’m aware it is the first publicly available code that can do both straight out of the tin, without needing adaptation.

The code passes a bunch of sanity/consistency checks, matching linear theory when it should match and deviating when it should deviate. The paper discussed mainly just introduces AxioNyx; the new physics will come in future papers. Things Bodo, Mateja and collaborators will be tackling are: simulations with full cosmological initial conditions for the combination of FDM+CDM, adding baryons (long-term project), gravitational heating of stars in FDM halos, and re-assessment of earlier constraints on FDM with FDM now only a sub-fraction of the total dark matter content (e.g. the Lyman Alpha constraints). Stay tuned, and/or get in touch with them if you’re keen to help make any of that happen :-).

One neat result from this paper was the confirmation of the “Schrödinger-Vlasov” correspondence. This essentially says that FDM and CDM will behave equivalently on large enough scales. On smaller scales the fuzziness of the FDM causes it to deviate (essentially, it is so light that its de Broglie wavelength is astrophysically relevant). This correspondence has been shown statistically, and as a limiting result, in earlier papers, but this is (as far as I’m aware) the first paper where the FDM and CDM are in the same gravitational potentials in the same simulation and one can see them do the same stuff on large scales. It wasn’t surprising, but it’s still good to check and see it happen.
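The scale at which the fuzziness matters can be estimated from the de Broglie wavelength. A quick back-of-the-envelope sketch (the mass and velocity are typical round numbers, not values from the paper):

```python
import numpy as np

HBAR = 1.0546e-34      # J s
EV   = 1.6022e-19      # J per eV
C    = 2.998e8         # m/s
KPC  = 3.086e19        # m per kpc

def de_broglie_kpc(m_ev, v_kms):
    """de Broglie wavelength lambda = 2*pi*hbar / (m*v), in kpc, for a
    particle of mass m_ev (in eV/c^2) moving at v_kms (in km/s)."""
    m_kg = m_ev * EV / C**2
    return 2 * np.pi * HBAR / (m_kg * v_kms * 1e3) / KPC

# Ultralight DM at a typical galactic virial velocity: a ~kpc wavelength,
# i.e. comparable to galactic cores, so wave effects are astrophysical:
lam_fdm = de_broglie_kpc(1e-22, 100.0)

# A WIMP-like 100 GeV particle at the same speed: utterly negligible:
lam_wimp = de_broglie_kpc(100e9, 100.0)
```

This is why FDM and CDM agree on large scales (where the wavelength is irrelevant) but diverge on small ones.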

The most interesting things happen when the proportions of FDM and CDM are similar (i.e. when one doesn’t just dominate the other entirely), which might be an interesting thing to consider in future papers too.

Paper: https://arxiv.org/abs/2007.08256
AxioNyx: https://github.com/axionyx

Bodo: https://bodoschwabe.github.io/
Mateja: https://cosmology.auckland.ac.nz/2018/08/20/new-postdoc/

Marika Asgari – KiDS 1000 is statistics dominated and in 3σ tension with Planck cosmology

Marika tells us about the recent Kilo Degree Survey (KiDS) cosmological results. These are the first results from KiDS after they have reached 1000 square degrees.

Marika first explains how they know that the results are “statistics dominated” and not “systematics dominated”, meaning that the dominant uncertainty comes from statistical errors, not systematic ones.

She then presents the cosmological results, which primarily constrain the clumpiness of matter in the universe, and which therefore constrain Ω_m and σ_8. In the combined parameter “S_8”, which their data constrains almost independently of Ω_m, they see a more than 3σ tension with the equivalent parameter one would infer from Planck.
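For reference, the S_8 combination is a one-liner, and the reason it is quoted is that very different (σ_8, Ω_m) pairs along the lensing degeneracy give the same value (the example pairs below are illustrative numbers, not survey results):

```python
def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3): the combination that weak
    lensing constrains almost independently of Omega_m itself."""
    return sigma8 * (omega_m / 0.3)**0.5

# Two quite different cosmologies with essentially identical S_8,
# illustrating why lensing quotes S_8 rather than sigma_8 alone:
a = s8(0.90, 0.20)
b = s8(0.735, 0.30)
```

The KiDS-Planck tension is then a statement about this single well-measured number, not about σ_8 or Ω_m separately.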

Marika: https://www.roe.ac.uk/~ma/
Papers: https://arxiv.org/abs/2007.15632 and https://arxiv.org/abs/2007.15633
KiDS webinar: https://www.youtube.com/watch?v=kYkN6Yl8x6M&t=0s

Simone Aiola – ACT+WMAP is as powerful as Planck; CMB results on ΛCDM are robust

Simone talks about the latest Atacama Cosmology Telescope cosmology paper(s). He goes into detail at the beginning about the history of ACT and where it is now, as well as what data was in the latest results.

He then goes into how the data has been checked for robustness and how it is sufficiently consistent with WMAP to motivate combining the two data sets (i.e. WMAP for the large scales and ACT for the small scales and polarisation). Combined, WMAP and ACT are as constraining as Planck, but don’t add much additional constraining power when added to Planck (they do measure the same sky of course!).

The ACT polarisation data is sufficiently high in signal to noise that you can even make maps of the data that look as clear as the Planck/WMAP temperature maps, which is quite stunning.

Then Simone goes into the final cosmological results. Overall, Planck and WMAP+ACT are mostly consistent. In particular, their H0 predictions are very similar. There is a small amount of tension that manifests as a different value of the spectral index (or, more precisely, in the n_s vs Ω_b constraint contours). This is only ~2.4 sigma, so worth keeping an eye on, but not worth getting too excited about yet.

Papers: https://arxiv.org/abs/2007.07288 and https://arxiv.org/abs/2007.07289
Data: https://lambda.gsfc.nasa.gov/product/act/actpol_prod_table.cfm
Simone: https://users.flatironinstitute.org/~saiola/

Eva-Maria Mueller: Even BAO alone requires dark energy + cosmology from the last eBOSS data release

Eva-Maria tells us about the cosmological results from the final (cosmology relevant) SDSS/eBOSS data release. eBOSS is a spectroscopic galaxy survey, precisely mapping galaxies, quasars and the Lyman-α forest.

The two key probes are baryon acoustic oscillations (BAO) and redshift space distortions (RSD). BAO acts as a standard ruler, and eBOSS now has measurements of BAO at enough redshifts that their data alone constrain the expansion history of the universe precisely enough to prove the existence of a dark energy component. The RSD measurements probe the growth of structure, but are unable to resolve any tension between weak lensing measurements and the CMB, with their results lying right between the two alternatives, consistent with both.

The distance scale in BAO can also be anchored using constraints from big bang nucleosynthesis (BBN) allowing eBOSS to infer the expansion rate of the universe today without reference to the CMB. Their measurement is consistent with the CMB and inconsistent with the local distance ladder measurements (e.g. SH0ES).

The BAO, RSD and CMB are all consistent with the time evolution of the (unanchored) supernovae so all these data sets can be combined to give an overall constraint. This gives very tight bounds on any deviations from a cosmological constant in ΛCDM, and on any deviations from a flat geometry within our observable universe.

The next stage in spectroscopic surveys is DESI, and then Euclid. COVID permitting, we should have the first results from DESI within three years.

Eva-Maria: https://www2.physics.ox.ac.uk/contacts/people/muellere
Paper: https://arxiv.org/abs/2007.08991