Sultan tells us about his work training neural networks on the neutral hydrogen density fields in the CAMELS simulations.
He uses a method known as normalising flows to learn an invertible mapping between the highly non-linear, very non-Gaussian 2D projected density field and a Gaussian field. Once this mapping is learned, the idea is that one can compute the full statistics of the non-linear field by sampling from the Gaussian one. The bold ambition is to use this process to reduce the need for running computationally expensive hydrodynamical simulations, making it more feasible to get precise cosmological constraints from future surveys.
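The core idea, that a suitable invertible map turns a non-Gaussian variable into a Gaussian one, can be illustrated in one dimension with the probability integral transform. This is only a toy stand-in for the neural network flow; the lognormal "field" and all the numbers are my own illustrative choices, not Sultan's setup:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy stand-in for a non-Gaussian projected density field: lognormal pixels
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# A 1-D "flow": the monotone map x -> Phi^{-1}(F(x)) sends any continuous
# distribution to a standard Gaussian. Real normalising flows learn a
# high-dimensional, invertible version of this map with neural networks.
u = stats.lognorm.cdf(x, s=1.0)                    # CDF of the assumed distribution
z = stats.norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))   # Gaussianised field

print(stats.skew(x), stats.skew(z))  # strongly skewed in, roughly symmetric out
```

Because the map is invertible, samples drawn from the Gaussian side can be pushed back through it to generate new realisations of the non-Gaussian field, which is exactly what makes the idea useful.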
Lucia tells us about her work with CAMELS trying to overcome the biggest barrier CAMELS faces: small box size. It might never be possible to run 1000s of large-volume hydrodynamical simulations, simply because the hierarchy of scales is too big (baryon feedback happens on small scales, while overcoming sample variance requires very large boxes).
Therefore, one option for getting many, many simulation boxes with baryonic effects in them is semi-analytic models. This is what Lucia has been doing, and what she discusses in the video…
Pablo talks about an actual observational result from CAMELS: a measurement of the masses of the Milky Way and Andromeda. The results agree with other methods we’ve used to measure the masses of these galaxies.
Leander tells us about work using CAMELS simulations and neural networks to forecast how well future spectral distortion measurements will be able to constrain baryon feedback. The answer is “very well”: it seems a mission like PIXIE would give even percent-level measurements of some feedback mechanisms.
Andrina tells us about her work using CAMELS and machine learning to constrain baryon feedback using the electron density power spectrum.
The electron density is not itself directly observable, but it is closely tied to observable things like the thermal Sunyaev Zeldovich effect and Fast Radio Burst dispersion measures (or rather, those observables are good proxies for the electron density).
Andrina is able to get nice constraints on baryon feedback and cosmological parameters within the CAMELS simulations. This sort of observational probe of baryon feedback is going to be an important tool for cosmologists if we want to use smaller scales to do cosmology, and the sort of connections spotted by Andrina and CAMELS will be valuable for improving these probes.
Jay tells us about how he has used the CAMELS suite of simulations to improve upon existing galaxy cluster scaling relations (i.e. trends we use to measure cluster masses using observational probes).
One example is using the concentration of ionised gas in a cluster to add a little more precision to the scaling relation between the Sunyaev Zeldovich effect and cluster mass. The value of the concentration makes a small correction to the predicted mass.
Jay specifically uses symbolic regression (or similar algorithms) to find expressions that link the properties of interest (e.g. mass, concentration and SZ effect), thus allowing us as human beings to also gain some intuition from what the machine finds.
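The flavour of symbolic regression can be sketched with a deliberately tiny search over candidate formulas. The mock "scaling relation" data and the candidate expressions below are my own illustrative choices, not Jay's actual pipeline, which uses a proper symbolic regression code:

```python
import numpy as np

rng = np.random.default_rng(1)
# Mock cluster data: a hypothetical power-law scaling Y ∝ M^(5/3) with scatter
m = rng.uniform(1.0, 10.0, size=500)               # mass (arbitrary units)
y = 0.8 * m ** (5 / 3) * rng.lognormal(0.0, 0.05, 500)

# Tiny symbolic-regression-style search: a few candidate functional forms,
# each with one amplitude fitted by least squares; pick the lowest error.
candidates = {
    "a*m":        lambda m: m,
    "a*m**(5/3)": lambda m: m ** (5 / 3),
    "a*m**2":     lambda m: m ** 2,
    "a*log(m)":   lambda m: np.log(m),
}

def fit_and_score(f):
    basis = f(m)
    a = basis @ y / (basis @ basis)   # least-squares amplitude
    return np.mean((y - a * basis) ** 2)

best = min(candidates, key=lambda k: fit_and_score(candidates[k]))
print(best)
```

Real symbolic regression codes grow and mutate expression trees rather than scanning a fixed list, but the pay-off is the same: the output is a human-readable formula, not a black-box network.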
Paco tells us about how CAMELS have used machine learning to predict, from a single galaxy’s properties, the value of Ωm used to simulate that galaxy.
This is fascinating and, if it reflects a real physical effect, has some far-reaching consequences:
– We might one day be able to learn cosmological parameters by studying the Milky Way
– Running a hydrodynamical simulation with the wrong Ωm would mean, in principle, that you’ll never be able to reproduce the exact properties of a real galaxy correctly.
This is the first video in a series of videos covering research that the CAMELS group have done. CAMELS are applying machine learning to cosmology, using a suite of 1000s of simulations to train neural networks, see what the networks learn, and then try to unveil what they have learned in a way we mere humans can understand.
This video gives a brief intro to CAMELS as well as the data release (and how to access the data).
Dillon Brout, Adam Riess and Dan Scolnic talk about the latest SH0ES measurement of the Hubble parameter, making use of the new Pantheon+ supernovae data set.
The measurement accuracy has reached ± 1.0 km s⁻¹ Mpc⁻¹, and they analysed the data in 67 different possible ways, every time reaching a result that is in significant tension with Planck + ΛCDM. Their baseline analysis, the one with the best χ² for the fewest free parameters, is now in 5σ tension, on its own, with Planck.
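As a back-of-envelope check, the Gaussian tension between two independent measurements is just their difference in units of the combined error bar. The headline numbers below are roughly the published SH0ES (2022) and Planck (2018) values, used here only to illustrate the arithmetic; the paper's quoted 5σ comes from the full likelihood analysis:

```python
import math

# Gaussian tension between two independent measurements:
#   n_sigma = |H1 - H2| / sqrt(s1^2 + s2^2)
# Illustrative values, roughly the published SH0ES and Planck numbers.
h_local, s_local = 73.04, 1.04   # km/s/Mpc
h_cmb, s_cmb = 67.36, 0.54       # km/s/Mpc

n_sigma = abs(h_local - h_cmb) / math.hypot(s_local, s_cmb)
print(f"tension ≈ {n_sigma:.1f} sigma")
```

Even this naive combination of the headline numbers lands around the 5σ mark, which is why the baseline result is hard to dismiss as a fluke.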
It isn’t clear where the solution to the Hubble tension will come from, but the fact that all local measurements of H₀ come in above Planck, and that the most accurate measurement is now in 5σ tension, is very interesting. It’s also worth noting that the prediction from the early universe + ΛCDM doesn’t rely solely on Planck. Other CMB experiments give the same small value, and even just Big Bang Nucleosynthesis and local Baryon Acoustic Oscillation measurements, combined with ΛCDM, give a small value of H₀.
So, if this isn’t evidence of new physics in cosmology, it will have to be a very strange series of errors causing it.
Charles tells us about his recent work with Camille Bonvin on the dipole anisotropy tension.
We expect there to be dipoles in most observables because of our motion through the (statistically) homogeneous and isotropic universe. However, there appears to be a 4.9σ tension between the magnitude of the dipole as measured from the CMB and as measured from quasars in the local-ish universe.
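For context, the expected kinematic dipole in a catalogue of sources is given by the classic Ellis & Baldwin (1984) formula, D = [2 + x(1 + α)]β with β = v/c. The tension arises because the quasar dipole amplitude comes out well above this expectation. A quick sketch, where the parameter values are illustrative typical choices rather than the paper's fit:

```python
import math

# Expected kinematic dipole in source counts (Ellis & Baldwin 1984):
#   D = [2 + x(1 + alpha)] * beta,  beta = v/c
# x: slope of the integral source counts, alpha: spectral index.
# Illustrative parameter values, not the fitted ones from any paper.
v = 370.0e3            # CMB-inferred solar velocity, m/s
c = 299_792_458.0      # speed of light, m/s
x, alpha = 1.7, 0.75

D = (2 + x * (1 + alpha)) * (v / c)
print(f"expected dipole amplitude D ≈ {D:.4f}")
```

With these inputs the expected amplitude is a fraction of a percent, so the measurement hinges on very large, very clean quasar catalogues.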