Steffen tells us about how the dispersion measure of fast radio bursts (FRBs) can be used to measure the distance to FRBs. Therefore, if we can find the host galaxies of FRBs and measure their redshifts we can measure the expansion rate (Hubble parameter) with FRBs.
And, he and his collaborators have done just that. At the moment the uncertainty is relatively large, but they still get a result within 10% of the more precise measurements (and consistent with both CMB and supernovae), indicating that they’re doing the right thing.
In the near future (less than five years) we’ll hopefully have more than 500 FRBs and a roughly percent-level measurement of H0. These are exciting times for FRBs!
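For the curious, here’s a rough sketch of the idea (my toy version, not Steffen’s actual analysis; the fiducial numbers for the diffuse gas fraction and electron fraction are assumptions). The mean cosmological dispersion measure out to redshift z scales with Ω_b × H0, so localised FRBs with measured redshifts, plus a nucleosynthesis prior on Ω_b h², pin down H0.

```python
# Minimal sketch of the dispersion measure - redshift ("Macquart") relation.
# Fiducial values of f_IGM, chi_e and Omega_b are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

c, G, m_p = 2.998e8, 6.674e-11, 1.673e-27   # SI units
Mpc = 3.086e22                              # metres
PC_PER_CM3 = 3.086e22                       # 1 pc/cm^3 expressed in m^-2

def mean_dm(z, H0=70.0, Om=0.3, Ob=0.049, f_igm=0.84, chi_e=0.875):
    """Mean cosmological dispersion measure (pc/cm^3) out to redshift z,
    ignoring host and Milky Way contributions and sightline scatter."""
    H0_si = H0 * 1e3 / Mpc                               # km/s/Mpc -> 1/s
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + 1 - Om)    # flat LCDM
    integral, _ = quad(lambda zp: (1 + zp) / E(zp), 0, z)
    prefactor = 3 * c * Ob * H0_si * f_igm * chi_e / (8 * np.pi * G * m_p)
    return prefactor * integral / PC_PER_CM3

# DM scales linearly with Omega_b * H0, so FRBs with known redshifts
# (plus a prior on Omega_b h^2 from nucleosynthesis) constrain H0:
print(mean_dm(1.0, H0=70.0))   # ~950 pc/cm^3
print(mean_dm(1.0, H0=74.0))   # a few percent higher
```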
Dan Thomas tells us about recent work first creating a framework for describing modified gravity in a model independent way on non-linear scales and then running N-body simulations in that framework.
The framework involves finding a correspondence between large scale linear theory where everything is under control and small scale non-linear post-Newtonian dynamics. After a lot of care and rigour it boils down to a modified Poisson equation – on both large and small scales (in a particular gauge).
The full generality of the modification to the Poisson equation allows, essentially, for a time and space dependent value for Newton’s constant. For most modified gravity models, the first level of deviation from general relativity can be parameterised in this way (and we know that the deviations from general relativity are small because so far we haven’t found any!!)
The cosmological simulations are then done by having Newton’s constant just vary over time (i.e. it is constant in space). This allows them to actually do some simulations, but in future work they will go beyond this particular simplification.
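To make that simplification concrete, here’s a toy sketch (mine, not the code from the talk) of the linear growth equation with a purely time-dependent effective Newton’s constant, μ(a) = G_eff/G_N, entering through the modified Poisson equation:

```python
# Toy linear growth of matter perturbations with a time-varying G_eff.
# The 10% modification in mu(a) is an assumption purely for illustration.
import numpy as np
from scipy.integrate import solve_ivp

Om = 0.3

def E(a):                       # H(a)/H0 for a flat LCDM background
    return np.sqrt(Om / a**3 + 1 - Om)

def dlnE_da(a):
    return (-3 * Om / a**4) / (2 * E(a)**2)

def mu(a):                      # example time-only modification of G
    return 1.0 + 0.1 * a        # assumption: 10% stronger gravity today

def growth_rhs(a, y):
    """delta'' + (3/a + dlnE/da) delta' = 1.5 Om mu(a) delta / (a^5 E^2)."""
    delta, ddelta = y
    d2 = -(3 / a + dlnE_da(a)) * ddelta \
         + 1.5 * Om * mu(a) * delta / (a**5 * E(a)**2)
    return [ddelta, d2]

a_init = 1e-3
sol = solve_ivp(growth_rhs, [a_init, 1.0], [a_init, 1.0],  # matter-era IC: delta ~ a
                dense_output=True, rtol=1e-8)
print("delta(a=1) =", sol.y[0, -1], "(re-run with mu(a)=1 to compare with GR)")
```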
They then compare the simulation results to semi-analytic models like Halofit and ReACT. Halofit is explicitly only applicable to the ΛCDM model but does surprisingly well. ReACT, however, still does much better at fitting e.g. the matter power spectrum and modelled Euclid lensing observables.
Future work will examine more closely why ReACT fits so well and aim to improve the fit further, so that e.g. Euclid and/or the Vera C. Rubin Observatory (LSST) will be able to use this method to constrain modified gravity without needing to run a new simulation for every step of a Monte Carlo parameter fit.
Johannes Noller and Scott Melville talk about their recent paper exploring the impacts of certain positivity bounds on cosmological parameters.
Positivity bounds are restrictions on low energy effective parameters that arise from requiring the full high energy fundamental theory to satisfy certain criteria. It is possible to show that if, e.g., all the interactions of a full theory satisfy unitarity (conservation of information/probability), causality and locality, then a specific class of low energy theories must have the speed of light less than the speed of gravity.
The specific interactions Johannes, Scott (and collaborator Claudia) used to show this are interactions between dark energy and standard model matter.
This condition actually ends up lying right in the region that observations prefer for this model, effectively cutting the allowed parameter space in half.
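Schematically (with entirely made-up parameter names and a stand-in condition, not the actual bound from the paper), a positivity bound acts like a hard prior cut on the region the data would otherwise allow:

```python
# Illustration only: apply a hypothetical positivity bound as a hard prior
# cut on mock posterior samples of two dark-energy parameters.
import numpy as np

rng = np.random.default_rng(1)
# pretend posterior samples for two effective dark-energy parameters
alpha_B, alpha_M = rng.multivariate_normal(
    [0.2, 0.1], [[0.04, 0.01], [0.01, 0.04]], 100_000).T

# a positivity bound typically carves out a half-space or wedge,
# e.g. "alpha_B - alpha_M >= 0" (an illustrative stand-in, not the real condition)
allowed = alpha_B - alpha_M >= 0.0

print(f"fraction of the observationally preferred region surviving: {allowed.mean():.2f}")
```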
Azadeh tells us about her new framework, which can:
– provide a particle physics backbone to inflation
– give neutrinos mass
– generate a dark matter candidate
– and solve baryogenesis
thus linking all of these cosmological problems. The framework can also deal with various particle physics problems, such as the origin of the accidental B-L symmetry in the standard model, the strong CP problem, and the vacuum stability problem.
So, it’s safe to say that if the framework survives scrutiny it is a massive achievement.
The framework takes ideas from the “neutrino minimal standard model”, specifically a new SU(2) gauge field that couples only to right handed particles and can generate neutrino masses, as well as provide a dark matter candidate via the lightest right handed neutrino.
It then combines those ideas with some from Azadeh’s earlier work creating the model of gauge-flation. Specifically, it allows the fields that interact under this new SU(2) gauge symmetry to be actively generated during inflation. This allows the vacuum fluctuations present during inflation, which generate the curvature perturbation, to also produce the particles that will decay into dark matter and to generate the asymmetry in baryon number that eventually becomes the matter asymmetry.
What’s more, the model makes a number of specific predictions. The primordial gravitational waves from inflation would have some non-Gaussianity and would be chiral. And the dark matter mass would be ~GeV – and would thus generate gamma rays in regions of very large dark matter density.
It will be fascinating to see how this framework develops, and whether numerical reheating studies can shed light on the various particle production processes that generate the matter and dark matter during and after inflation.
Volker Springel talks about the new GADGET-4 code.
Featuring all the things you wanted to know about GADGET-4 but were afraid to ask, including:
– What new algorithms are used to make it better and faster than earlier versions
– Why you never heard of GADGET-3
– What new features you can now use when running cosmological simulations (e.g. varying the algorithms; or outputting lightcones, halo catalogues and merger trees “on the fly”)
– Why storing the locations of your simulation particles as integers is better than storing them as floating point numbers (there’s a small sketch of this below)
– and what the author of the most used simulation code in cosmology thinks are the most interesting questions in cosmology at the moment (both related and unrelated to simulations)
You could read the 80 page paper, or you could just watch this video instead!
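On the integer-coordinates point from the list above, here’s a small sketch that mirrors the idea (it is not GADGET-4’s actual implementation):

```python
# Why fixed-point integer coordinates are attractive in a periodic box.
import numpy as np

BOX = 1.0                        # box size in simulation units
NBITS = 32
TO_INT = 2**NBITS / BOX          # map [0, BOX) onto the full 32-bit range

def to_fixed(x):
    return (np.asarray(x) * TO_INT).astype(np.uint64) % 2**NBITS

def to_float(i):
    return i / TO_INT

# 1) uniform resolution: the position grid spacing is BOX / 2^32 everywhere,
#    whereas float32 resolves ~1e-7 * BOX near x = BOX but much finer near x = 0.
print("fixed-point spacing:", BOX / 2**NBITS)

# 2) exact periodic wrapping via modular integer arithmetic:
x = to_fixed(0.9999999)
dx = np.uint64(2**NBITS // 100)            # displace by BOX/100
x_new = (x + dx) % 2**NBITS                # wraps exactly, no drift
print("wrapped position:", to_float(x_new))

# 3) particle separations are exact integer differences, so forces computed
#    from them don't suffer float cancellation and are translation invariant.
```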
Alvaro tells us about a recent paper where he and collaborators detect the transition between a core (flat density profile) and halo (power law density profile) in dwarf galaxies.
The full core + halo profile matches very closely what is expected in wave/ultralight/fuzzy/axionic dark matter simulations (without baryonic effects included). That is, there is a very flat core, which then drops off suddenly and then flattens off to a decaying power-law profile. The core matches the soliton expected in wave dark matter and the halo matches an outer NFW profile expected outside the soliton.
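For concreteness, the standard fitting forms from the wave dark matter literature look roughly like this (with illustrative parameter values, not the values fitted in the paper):

```python
# Sketch of a core + halo density profile: a flat soliton core that drops
# steeply and hands over to an NFW-like outer halo.
import numpy as np

def rho_soliton(r, rho_c, r_c):
    """Flat-cored soliton fit: rho_c / (1 + 0.091 (r/r_c)^2)^8."""
    return rho_c / (1 + 0.091 * (r / r_c)**2)**8

def rho_nfw(r, rho_s, r_s):
    """Outer halo: NFW profile rho_s / [ (r/r_s)(1 + r/r_s)^2 ]."""
    x = r / r_s
    return rho_s / (x * (1 + x)**2)

def rho_total(r, rho_c=1.0, r_c=1.0, rho_s=2e-3, r_s=10.0, r_t=3.5):
    """Piecewise core + halo: soliton inside the transition radius r_t,
    NFW outside (illustrative parameters, not fits to data)."""
    return np.where(r < r_t, rho_soliton(r, rho_c, r_c), rho_nfw(r, rho_s, r_s))

r = np.logspace(-1, 2, 200)
profile = rho_total(r)   # plot log-log to see flat core, sharp drop, power-law tail
```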
They also detect evidence for tidal stripping of the matter in the galaxies. Galaxies closer to the centre of the Milky Way have their transition point between core and halo at lower densities (despite the core density itself not being systematically lower). The transition also appears to happen closer to the centre of the galaxy, which matches simulations.
Of course the core+halo pattern they have clearly observed might be due to something else, but the match between wave dark matter simulations and observations is impressive.
The huge caveat is that the mass for the dark matter that they use is very small and in significant tension with Lyman Alpha constraints for wave-like dark matter. This might indicate that the source of this universal core+halo pattern they’re observing comes from something else, or it might indicate that the wave dark matter is more complicated than vanilla models…
Stay tuned to the arXiv for future papers looking at this in more detail!
Eiichiro Komatsu and Yuto Minami talk about their recent work, first devising a way to extract a parity violating signature in the cosmic microwave background (i.e. birefringence) and then measuring it in Planck 2018 data.
They get a 2.4 sigma hint of a result, which is “important, if true”.
This signal is measured via correlation of E mode and B mode polarisation in the CMB. If the universe is birefringent then E mode polarisation would change into B mode and there would be a non-zero correlation between the two measured modes. Unfortunately, if the detector angle on the telescope wasn’t calibrated perfectly this would mimic the interesting signal. Yuto and Eiichiro’s new method is to measure this detector angle by looking at the E-B correlation in the foregrounds, where the light hasn’t travelled far enough to be affected by any potential birefringence in the universe.
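Schematically the idea looks like this (a toy version, not Yuto and Eiichiro’s actual estimator; all spectra and angles below are made up):

```python
# A rotation of the polarisation plane by an angle theta mixes E and B:
#   C_l^{EB,obs} = 0.5*sin(4*theta)*(C_l^{EE} - C_l^{BB}) + cos(4*theta)*C_l^{EB}.
# Foreground photons only pick up the miscalibration angle alpha, while CMB
# photons pick up alpha + beta (beta = cosmic birefringence), so the foreground
# EB spectrum calibrates alpha and the CMB EB residual gives beta.
import numpy as np

def observed_eb(cl_ee, cl_bb, theta_rad, cl_eb=0.0):
    return 0.5 * np.sin(4 * theta_rad) * (cl_ee - cl_bb) + np.cos(4 * theta_rad) * cl_eb

alpha = np.deg2rad(0.30)             # detector miscalibration (made-up value)
beta = np.deg2rad(0.35)              # cosmic birefringence (made-up value)

cl_ee_fg, cl_bb_fg = 10.0, 5.0       # toy foreground spectra
cl_ee_cmb, cl_bb_cmb = 40.0, 0.1     # toy CMB spectra

eb_fg = observed_eb(cl_ee_fg, cl_bb_fg, alpha)            # depends on alpha only
eb_cmb = observed_eb(cl_ee_cmb, cl_bb_cmb, alpha + beta)  # depends on alpha + beta

# solve for alpha from the foregrounds, then for beta from the CMB
alpha_est = 0.25 * np.arcsin(2 * eb_fg / (cl_ee_fg - cl_bb_fg))
beta_est = 0.25 * np.arcsin(2 * eb_cmb / (cl_ee_cmb - cl_bb_cmb)) - alpha_est
print(np.rad2deg(alpha_est), np.rad2deg(beta_est))
```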
This allows them to partially distinguish between the two types of measured E-B correlation. And with this method they get the hint of a signal for the new physics in the Planck 2018 data.
The method can be applied to the data from all other telescopes that have measured the polarisation of the microwave background and can therefore be confirmed, ruled out, or at least examined by SPT, ACT, Polarbear, etc.
Yuto and Eiichiro are also working with Planck to see if they can further rule out other systematics, e.g. an intrinsic E-B correlation in the foreground polarisation.
Moritz Haslbauer and Indranil Banik talk about the Keenan, Barger and Cowie (KBC) void and the νHDM model of cosmology.
The KBC void is a locally observed ~300 Mpc scale under-density that appears to be impossible within ΛCDM (under-densities shouldn’t have emptied out this much by now).
νHDM is a model that has sterile neutrinos as a hot dark matter component and enhanced gravity in environments with a weak gravitational field. This dark matter adequately explains the CMB and expansion history of the universe, but doesn’t cluster on the smallest scales. The modified gravity (essentially Milgromian dynamics, or MOND) then kicks in on these scales to produce phenomena like the correct rotation curves in galaxies.
Moritz and Indranil give an intro to both KBC and νHDM, and then explain how this model is consistent with the main tent-poles of modern cosmology (e.g. the CMB anisotropies, nucleosynthesis, the displacement of the gas and weak lensing in the bullet cluster, galaxy rotation curves, the clustering of galaxies) and can also alleviate some of the tensions in the standard ΛCDM model.
They focus on two specific tensions: the size and depth of the KBC void, and the Hubble tension. νHDM predicts stronger gravity in under-dense regions, and so allows the KBC void to exist as measured. This has implications for the locally measured Hubble parameter because a) the void itself would increase the local expansion rate and b) in νHDM this void would also be expanding faster than it would if it were placed in a ΛCDM universe.
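As an order-of-magnitude illustration of point a), linear theory relates a local under-density δ to the locally inferred expansion rate via ΔH/H ≈ −f(Ω_m) δ / 3 (the numbers below are made up, and in νHDM the boost would be larger than this ΛCDM-based estimate):

```python
# Toy estimate of how a local under-density raises the locally measured H0.
Om = 0.3
f = Om**0.55                 # linear growth rate approximation
delta = -0.3                 # illustrative void under-density (made up)
H0_true = 67.4               # km/s/Mpc, background value

dH_over_H = -f * delta / 3.0
H0_local = H0_true * (1 + dH_over_H)
print(f"local H0 boosted from {H0_true} to {H0_local:.1f} km/s/Mpc")
```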
At any specific point in space the exact strength of the enhancement of gravity will depend on the local environment due to the “external field effect” (an integral part of MOND since its foundation in the 1980s). In principle this is predictable by measuring the local environment, but this would require better measurements than we currently have. It is also in principle predictable statistically using a large cosmological simulation in the νHDM paradigm. So far such simulations only go up to a 750 Mpc box size (https://iopscience.iop.org/article/10.1088/0004-637X/772/1/10), not sufficient to address the KBC void (which the current study considers semi-analytically). Smaller hydrodynamical cosmological simulations in νHDM are currently underway in Bonn to address galaxies.
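For intuition, here’s a cartoon of the external field effect using a crude one-dimensional prescription with the “simple” MOND interpolating function (this is not the νHDM implementation):

```python
# Cartoon of the external field effect: the MOND boost depends on the *total*
# Newtonian field, so a strong external field suppresses the enhancement that
# an isolated low-acceleration system would get.
import numpy as np

a0 = 1.2e-10  # m/s^2, Milgrom's acceleration scale

def nu(x):
    """'Simple' MOND interpolating function, g = g_N * nu(g_N / a0)."""
    return 0.5 + np.sqrt(0.25 + 1.0 / x)

def g_effective(gN_int, gN_ext):
    """Internal gravity with a crude external-field correction."""
    total = gN_int + gN_ext
    return gN_int * nu(total / a0)

gN_int = 1e-12                        # weak internal field (deep-MOND regime)
for gN_ext in [0.0, 1e-11, 1e-10]:    # increasing environmental field
    print(gN_ext, g_effective(gN_int, gN_ext) / gN_int)   # boost factor shrinks
```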
Therefore, in current empirical fits, the size of this effect, at each point in space, is essentially a free parameter. Still, it is only one free parameter and while it remains free the important question is, ‘does this parameter have enough explanatory power to justify its existence?’ – Moritz and Indranil argue ‘absolutely yes!’
George tells us what happens in gravitational wave detectors when you quantise the gravitational field.
He talks about a calculation he did with Maulik Parikh and Frank Wilczek which examines what effect quantising the gravitational field would have on gravitational wave detectors.
They first treat the detector and gravitational field quantum mechanically. For certain gravitational wave states (e.g. a coherent state, a squeezed state and a thermal state) they are then able to solve the gravitational field parts of the resulting path integral (or canonical expectation values).
In the resulting expression they then take the most probable path for the detector (i.e. the classical path) and determine an equation of motion for the distance between the ends of the detector (i.e. the classical equation of motion for the detector, with quantum effects from the gravitational field included).
This new equation of motion is like the purely classical one except with the addition of a new noise term. In the case of a squeezed state this noise can be exponentially enhanced, which might have implications for gravitational waves from inflation, which at least start out in a squeezed state.
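Structurally, the result looks something like the toy integration below (my sketch of the structure described in the talk, not the actual equations from the paper): the classical geodesic-deviation equation for the arm length picks up an extra stochastic term.

```python
# Toy integration of an arm length xi obeying xi'' = (1/2) h'' xi + N(t),
# where N(t) stands in for the quantum noise from the gravitational field.
import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-4, 1.0
t = np.arange(0, T, dt)
h = 1e-21 * np.sin(2 * np.pi * 100 * t)           # toy classical strain at 100 Hz
hddot = np.gradient(np.gradient(h, dt), dt)

noise_amp = 1e-23                                 # made-up amplitude; a squeezed state
noise = noise_amp * rng.standard_normal(t.size)   # can exponentially enhance this term
# (the proper sqrt(dt) scaling of white noise is ignored in this toy)

xi = np.empty_like(t); xidot = np.empty_like(t)
xi[0], xidot[0] = 1.0, 0.0                        # arm length in units of its rest length
for i in range(t.size - 1):
    acc = 0.5 * hddot[i] * xi[i] + noise[i]       # xi'' = (1/2) h'' xi + N(t)
    xidot[i + 1] = xidot[i] + acc * dt
    xi[i + 1] = xi[i] + xidot[i + 1] * dt

print("fractional arm-length change:", xi[-1] - 1.0)
```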
Ryan tells us about how the Hubble tension (between Planck measurements of the cosmic microwave background temperature anisotropies and SH0ES measurements of the expansion rate of the universe) can be completely solved with a non-standard primordial power spectrum for the curvature perturbation, which could arise e.g. if there is a kink in the inflationary potential.
The non-standard power spectrum has an oscillatory feature that exactly mimics the effects of a slightly different value for the expansion rate today. They find this power spectrum by explicitly reconstructing it, so they aren’t supplying a well motivated a priori model. However their work does represent a proof of concept that a non-standard power spectrum could mimic the effects of a different expansion rate.
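As an illustration of the kind of feature involved (a toy analytic form; their actual result is a free-form reconstruction, not this):

```python
# Toy primordial curvature power spectrum: a standard power law with an
# oscillatory feature superposed, of the kind a kink or step in the
# inflationary potential could produce.
import numpy as np

A_s, n_s, k_pivot = 2.1e-9, 0.965, 0.05    # Planck-like baseline values

def P_R(k, amp=0.05, omega=10.0, phase=0.0):
    """Power-law spectrum modulated by a toy log-oscillation (made-up feature)."""
    baseline = A_s * (k / k_pivot)**(n_s - 1)
    feature = 1 + amp * np.sin(omega * np.log(k / k_pivot) + phase)
    return baseline * feature

k = np.logspace(-4, 0, 500)    # Mpc^-1
pk = P_R(k)                    # feed into a Boltzmann code to get CMB spectra
```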
While the Hubble tension remains unsolved and while all other models to explain it suffer from their own problems, work like this remains well motivated. It would perhaps be a bit fine tuned to have a feature at exactly the right place in the primordial power spectrum to mimic the effects of H0 today, but there could be many features and if one happened to align then this would be what we would see, so it can’t be ruled out a priori.
Future work will test this with polarisation data and the matter power spectrum… so stay tuned. If this is the solution it might leave measurable signatures in those results.