Corina Andreoiu (SFU, Co-Chair 2025)
Erica Caden (SNOLAB and McGill, past Co-Chair)
Thomas Brunner (McGill, past Co-Chair)
Alain Bellerive (Carleton University)
Tim Friesen (University of Calgary)
Gwen Grinyer (University of Regina)
Annika Lennarz (TRIUMF - permanent member, Co-Chair 2025)
Tony Noble (Queen’s University – permanent member)
Heather Russell (University of Victoria)
Jana Thomson (TRIUMF, Conferences Facilitator)
Nuclear physics
Exploring the unknown facets of the visible universe through the lens of rare isotopes
R. Kanungo
TRIUMF, Vancouver, Canada
Visible matter in our universe embodies nature's strong force, which combines protons and neutrons into complex systems. While much has been understood about stable nuclei, the rare isotopes approaching the limits of neutron and proton binding bring a wealth of new information. Their properties guide our understanding of the state of matter in exotic environments in our Universe. The reactions and decays of these isotopes drive the creation of the majority of the heavy elements in our Universe.
Accessing these rare isotopes is possible in only a few laboratories around the world, using two different production techniques: Isotope Separator On-Line (ISOL) and in-flight projectile fragmentation. The TRIUMF Advanced Rare Isotope Laboratory (ARIEL) is among the premier centres with ISOL-type production of rare isotope beams. The presentation will give an overview of the scientific scope of TRIUMF's rare isotope program from ISAC to ARIEL, with state-of-the-art experimental facilities. The stopped-beam experiments on precision mass measurements, decay spectroscopy, and tests of fundamental symmetries allow exploration of nuclear shells and shapes, and searches for signatures of physics beyond the known symmetries of nature. Nuclear reactions at low astrophysical energies and with re-accelerated non-relativistic beams give access to direct and indirect measurements of astrophysical reactions important for nucleosynthesis. Selected highlights of recent measurements will be presented.
I will also discuss key highlights from my research program with complementary relativistic-energy rare isotope beams at RIKEN in Japan and GSI in Germany. These studies are revealing unconventional forms of nuclei, such as nuclear halos and skins, that unveil new features of nuclear shell evolution.
Abstract
The study of exotic nuclei has led to the discovery of new and unexpected characteristics of the nucleus. Among these discoveries are changes in the shell configuration, causing well-known magic numbers, i.e. shell closures, to disappear and new magic numbers to emerge. The neutron number
In this project, a spectroscopic study of
Nuclear theories often operate under the assumption that the strong nuclear force is independent of electric charge. As a result, exchanging the number of protons with the number of neutrons in a nucleus is expected to produce a mirror nucleus with identical structure once electromagnetic effects are accounted for. However, due to the charge dependence of the strong nuclear force, isospin non-conserving interactions give rise to quantities such as Mirror Energy Differences between analogous excited states in mirror nuclei, which cannot be accounted for by Coulombic forces alone.
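For reference, the mirror energy difference for a state of spin J in a mirror pair of isospin T is conventionally defined (up to a sign convention) as

    \mathrm{MED}(J) = E^{*}_{J}(T_z = -T) - E^{*}_{J}(T_z = +T), \qquad T_z = (N - Z)/2,

i.e. the excitation-energy difference between analogue states in the proton-rich and neutron-rich members of the pair; any MED beyond what Coulomb effects predict signals isospin-symmetry breaking in the nuclear interaction.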
This charge dependence is being explored at TRIUMF, Canada’s particle accelerator centre. Stable
This presentation will highlight the selectivity offered by combining TIGRESS, TIP, and EMMA for lifetime and electromagnetic transition rate measurements, as well as the present state of the ongoing analysis. In the longer term, this work aims to explore the
The semi-magic
The
In total, there are
Lifetime measurement for the
In this talk, we will cover various topics related to EDI in STEM research, drawn from the literature, from resources available on the Natural Sciences and Engineering Research Council of Canada (NSERC) websites, and from many other sources cited in the presentation. We will start by introducing formal definitions of equity, diversity, and inclusion and discussing why EDI is important for research, going over a few research examples in which EDI considerations were not taken into account. We will also describe the EDI structure at the University of Calgary, more specifically in the Department of Physics and Astronomy, and discuss the importance of representation, recognition, and resource allocation in scientific research. We conclude by touching on other relevant EDI topics, such as positionality and intersectionality, and by suggesting literature for the interested audience.
The reaction rate and subsequent cross section enhancement to the
Analysis is ongoing, but calculations suggest a strong enhancement, as high as a factor of ~4 for the manganate, titanate, and iron phosphate at 100 keV, with the enhancement decreasing to within theoretical values as the incident energy increases to 1 MeV.
The lithium tungstate values, on the other hand, remained closer to the theoretical bare values (taken from ENDF/B-VIII) across the energy range tested, a result that aligns well with values reported in the literature.
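If this enhancement is of the electron-screening type (an assumption here; the truncated text above does not name the mechanism), it is commonly parameterized by the factor

    f(E) = \sigma_{\mathrm{screened}}(E) / \sigma_{\mathrm{bare}}(E) \approx \exp\left( \pi \eta(E) \, U_e / E \right), \qquad U_e \ll E,

where \eta is the Sommerfeld parameter and U_e the screening potential of the host material; f tends to unity as E grows, consistent with the reported approach to bare values near 1 MeV.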
Nuclear physics
The chemical properties of heavy elements are anticipated to diverge from established periodic trends due to significant relativistic effects. However, experimental investigations in this region remain extremely challenging and necessitate the use of nuclear physics techniques. This is especially true for the late actinides (Z > 100) and superheavy elements (Z > 104), where studies are limited to atom-by-atom analyses. To date, no experiments have definitively identified molecular species formed by elements with Z > 99, leaving their reactivity largely unexplored. A novel experimental technique developed at Lawrence Berkeley National Laboratory now allows for the direct identification of molecular species through mass-to-charge ratio measurements. This advancement has enabled new chemical studies of actinium (Ac, Z = 89) and nobelium (No, Z = 102), representing the respective bookends of the actinide series. These efforts pave the way for a new era of experiments to probe the chemistry of superheavy elements, providing quantifiable insights and facilitating deeper exploration into the uncharted regions of the periodic table. Recent experimental results and future directions will be presented.
The study of nuclear shapes has greatly benefited from the capabilities of the current generation of detector arrays, uncovering different phenomena in different regions of the nuclear chart. Shape coexistence, once thought to be quite rare, is now observed to occur throughout the nuclear landscape. An experiment to study the structure of
Although the shell model forms the backbone of our understanding of nuclear structure, the breakdown of traditional magic numbers far from stability gives insight into the nature of the underlying nuclear interactions and acts as a tool to test existing models. Islands of inversion (IoI) in the nuclear landscape are characterized by the presence of deformed multi-particle multi-hole (npnh) ground states instead of the (0p0h) configurations predicted by spherical mean-field calculations. In the N=40 region, the relatively large energy gap separating the pf shell from the neutron g
One necessary extension to the Standard Model of Particle Physics (SM) is one that describes the behavior of the early universe leading to the matter-antimatter asymmetry we observe today. It is commonly assumed that any explanation of this matter-antimatter imbalance must rely on the violation of the combined symmetry of charge conjugation (C) and parity (P); the CP violation present in the SM, however, is considered too weak. Thus, identifying a new source of CP violation is of critical importance. CP-violating effects, particularly those that originate within the atomic nucleus, can be investigated by combining precision techniques from atomic, molecular, and optical physics with rare isotopes produced at accelerator facilities such as TRIUMF in Canada. In searches for CP-violating nuclear Schiff moments, for example, molecular systems offer a sensitivity advantage of 3-4 orders of magnitude, which can be further increased by up to a factor of 1000 when a radioactive, octupole-deformed nucleus is incorporated into the molecule. Among these radionuclides, the short-lived protactinium isotope
Detectors for particle physics
Plastic scintillators are a common type of particle detector, often used for their fast timing, robustness, and low cost. They are commonly used in veto systems, since they can be made in the large sizes needed for an experiment and can have a variety of dopants added to improve the detection of specific particles. As with any scintillator, it is important to understand how the material behaves in the environment in which it will be operated. One consideration is the operating temperature, specifically whether the scintillator will be kept at cryogenic temperatures. Many inorganic scintillators have had their scintillation light yield determined at cryogenic temperatures, with many showing an increase of up to 100% (e.g., http://link.aps.org/doi/10.1103/PhysRevB.84.214306). To extend such tests to plastic scintillators, I present the results of an experiment that analyzes the light yield of EJ-200, a commercial plastic scintillator from Eljen Technologies, between 300 K and 4 K using
The fundamental symmetries between matter and antimatter can be precisely probed via the most basic (anti)atom, (anti)hydrogen. Anti-atoms are commonly studied by trapping them in confining magnetic fields; these fields induce Zeeman shifts of the atomic levels, which introduce uncertainties. To navigate this problem, and to enable quantum sensing techniques such as anti-atom interferometers, the HAICU experiment is currently developing an atomic fountain, with hydrogen as a proxy for antimatter. A challenge with hydrogen is cooling the atoms to low enough temperatures, typically of order μK. The Doppler and recoil limits, of order mK, will be overcome by a technique we have coined Magneto-Optical Compression Cooling (MOCC).
Magnets are a vital tool for manipulating atoms, particles, and molecules in various applications. One such category is the Bitter coil design, which aims to bridge the magnetic capabilities of conventional electromagnets and superconducting coils. HAICU's magnetic minimum trap design consists of two Bitter coil systems, the quadrupole and mirror coils, which provide ample access to the trapping volume. To create high fluxes, high current densities are required, which produce significant heat and thus demand sufficient cooling. The design, construction, and testing of the quadrupole have required me to develop unique techniques, enabling the creation of a uniform and homogeneous radial background field.
Here I will present the findings that enabled us to create a magnet that remains watertight at high water pressures, as well as the methods developed to monitor the system during testing prior to implementation.
The Measurement Of a Lepton-Lepton Electroweak Reaction (MOLLER) experiment aims to explore new dynamics beyond the Standard Model. The experiment is located in Hall A at the Thomas Jefferson National Accelerator Facility (JLab) in Newport News, Virginia, USA. The measurements are acquired by scattering longitudinally polarized electrons off unpolarized electrons in a liquid hydrogen target. MOLLER will utilize High-Voltage Monolithic Active Pixel Sensors (HV-MAPS) in Hall A's Compton polarimeter to monitor the polarization of the incoming beam from the Continuous Electron Beam Accelerator Facility (CEBAF). The HV-MAPS-based electron detector has a quad-planar geometry, with three HV-MAPS chips mounted on each plane. The Compton polarimeter uses this detector assembly, placed inside the vacuum, to detect the Compton-scattered electrons. The chips generate heat during operation and thus require an effective in-vacuum cooling system. Temperature measurements of the HV-MAPS are useful for understanding the thermal properties of the pixel detector and its cooling needs. This study reviews the efforts towards cooling strategies, structural modifications, and thermal simulations to achieve in-vacuum operation. Further, a prototype of the electron detector's cooling system (using a test version of the HV-MAPS chips with an equivalent heat load) was successfully tested in a local laboratory, and computational fluid dynamics studies are compared with the collected data.
The imbalance of matter and antimatter in the universe challenges our understanding of physics. While the Standard Model predicts that matter and antimatter should exist in equal proportions, observations show a matter-dominated universe. To address this, the ALPHA Collaboration conducts precision spectroscopy experiments on antihydrogen, the antimatter counterpart to the hydrogen atom. Investigating differences between matter and antimatter could reveal violations of the Standard Model and suggest new physics. The ground state hyperfine splitting of antihydrogen is an area of particular interest in these measurements. ALPHA has measured the hyperfine splitting to four parts in ten thousand (compared to seven parts in ten trillion in hydrogen) by inducing resonant positron spin flip transitions, whose energy depends linearly on magnetic field. The precision of this measurement is limited by fluctuations and non-uniformity in the magnetic fields used to trap antihydrogen. This research aims to develop a novel experimental method for high-precision measurements of the hyperfine structure of antihydrogen by performing the first-ever measurements of antiproton spin flip transitions. These transitions depend quadratically on the magnetic field, with a broad maximum resonance frequency at 655 MHz. A new microwave injection system is required since the current antihydrogen trap does not support the propagation of microwaves at 655 MHz. This system features a half-wavelength stripline resonator antenna incorporated into the antihydrogen trap to inject microwaves into the antihydrogen confinement area. Efforts are currently underway to produce the first iterations of this system, with initial measurements following soon after. Comparing these results to those expected from hydrogen will provide valuable insights into the differences between matter and antimatter, potentially revealing new physics and improving our understanding of nature.
The Standard Model (SM) of particle physics, though highly successful, is incomplete. It fails to explain phenomena such as matter-antimatter asymmetry and the nature of dark matter (DM). Extensions to the SM, such as the Dark Sector model, introduce new physics, including feebly interacting particles like millicharged particles (mCPs) that may provide insight into these phenomena. The Monopole and Exotics Detector at the LHC (MoEDAL) experiment at CERN is the first dedicated experiment focused on searching for unconventional signatures of new physics, including these exotic particles. The newly proposed MAPP-1 detector aims to enhance this search by targeting mCPs. It leverages advanced GEANT4 simulations to optimize its design and response for maximum sensitivity. This talk will present the MAPP-1 detector's design, simulation framework, and its potential to uncover evidence of new physics.
PIONEER is a next-generation pion decay experiment that will run at the Paul Scherrer Institute in Switzerland. In its initial phase, the primary objective is to improve the measurement of the branching ratio R_e/μ = Γ(π⁺ → e⁺ν(γ)) / Γ(π⁺ → μ⁺ν(γ)).
This talk will focus on the PIONEER calorimeter, specifically the option of a liquid xenon (LXe) calorimeter with light-only readout. The calorimeter will detect positrons and gamma rays from radiative decays. The energy deposited in the calorimeter, as well as timing and position information, allows discrimination between the different types of decays and identification of pileup events. Given the level of precision targeted, it is crucial to design a calorimeter with high energy resolution and good containment of electromagnetic showers. A large prototype LXe calorimeter is being designed to characterize and test the performance of this unique calorimeter design. This ongoing work includes the development of optical Geant4 simulations, which will be validated against the results of the prototype, as well as preliminary hardware development for the prototype.
It would be interesting to know how a particle could have only a single magnetic north (or south) pole. Dirac showed that magnetic monopoles are compatible with quantum mechanics, relating the fundamental magnetic charge to the smallest electric charge. Magnetic monopoles would restore the broken electric-magnetic duality in Maxwell’s equations and are predicted by various theories aiming to unify the fundamental forces of nature.
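Dirac's argument is summarized by the quantization condition (in Gaussian units)

    e g = n \hbar c / 2, \quad n \in \mathbb{Z}, \qquad g_D = \hbar c / (2e) = e / (2\alpha) \approx 68.5\, e,

so the minimum magnetic charge, the Dirac charge g_D, is large; this is why monopoles would be highly ionising, a property exploited in the search described next.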
The ATLAS detector at the Large Hadron Collider at CERN recorded 138 fb⁻¹ of proton-proton collision data at √s = 13 TeV during the 2015-2018 data-taking period (Run 2), enabling some of the most stringent limits on beyond-the-standard-model physics. This presentation focuses on a search for magnetic monopoles and high-electric-charge objects (HECOs) using Run 2 data collected by a dedicated high-level trigger. Both magnetic monopoles and HECOs are highly ionising particles, and as such, they would deposit significant energy in the ATLAS detector. This talk will present the methodology used in the Run 2 analysis to search for magnetic monopoles and HECOs and the results, along with an overview of the planned approach for the upcoming Run 3 analysis.
There is strong evidence for the existence of Dark Matter. One possible form of Dark Matter is strongly self-interacting Dark Matter, or Strongly Interacting Massive Particles (SIMPs), modelled after Quantum Chromodynamics (QCD). It should also be noted that, to date, no direct detection of any kind of dark matter has been made. Direct detection of dark matter at accelerators is a high priority and is part of the program for the MoEDAL experiment located at the LHC. The MAPP extension to the MoEDAL experiment focuses on searching for Milli-Charged Particles (mCPs) and Long-Lived Particles (LLPs). In this talk, we will discuss pion-like SIMPs and their potential detectability at the MoEDAL MAPP experiment. In order to model this dark matter, we construct a Lagrangian describing dark pions using an approach inspired by Chiral Perturbation Theory, an effective field theory of QCD. To couple our model to the Standard Model, we include a vector portal term that kinetically mixes our dark gauge fields with Standard Model gauge fields. As part of our model, we also include a Wess-Zumino-Witten term, which is important to control the overproduction of strongly self-interacting Dark Matter in the early universe. We focus on two processes: a Drell-Yan process involving a dark gauge field, which produces a pair of dark pions, and photofusion of two dark photons to three dark pions. Due to kinetic mixing, these dark pions will have an effective electric charge that is a small fraction of that of the electron, making the dark pions milli-charged particles. We will also explore the sensitivity to these particles under different parameter choices.
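Schematically, the vector portal referred to above takes the form

    \mathcal{L} \supset -\tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu} - \tfrac{\epsilon}{2} F'_{\mu\nu} F^{\mu\nu} + \tfrac{1}{2} m_{A'}^{2} A'_{\mu} A'^{\mu},

where F' is the dark field strength and \epsilon the kinetic-mixing parameter; after diagonalizing the gauge kinetic terms, dark-sector states acquire an effective electric charge of order \epsilon e, which is how the dark pions appear as milli-charged particles.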
The Scintillating Bubble Chamber (SBC) collaboration is developing novel particle detectors sensitive to low-energy (sub-keV) nuclear recoils by combining existing bubble chamber technology with liquid noble detectors. This approach leverages the insensitivity to electronic recoils characteristic of bubble chambers alongside the scintillation yield from a liquid noble active medium. SBC aims to achieve detection thresholds as low as 100 eV through a multi-channel readout including acoustic, imaging, and scintillation signals. The collaboration is currently commissioning two identical 10-kg detectors: SBC-LAr10 and SBC-SNOLAB. SBC-LAr10, located at Fermilab, is nearing operation and will focus on detector calibration and CEvNS studies, while SBC-SNOLAB will be purpose-built for dark matter searches in the low-background environment at SNOLAB. This talk will provide a general overview of scintillating liquid noble bubble chambers and the current status of both detectors. Additionally, I will introduce the planned methodology for the multi-channel data acquisition system and sensory readout for SBC-SNOLAB.
The PICO experiment uses bubble chambers, in which the fluid is superheated, to search directly for WIMP dark matter particles. A fluorocarbon (CF3I or C3F8) is used as the active fluid, allowing the study of inelastic dark matter-nucleus scattering interactions. The technology is designed to reject electron-recoil interactions and to be insensitive to gamma radiation, which are major sources of background. The detectors are placed inside a water tank and operated 2 km underground in the SNOLAB clean room, providing shielding against laboratory neutrons and cosmic rays, respectively. Several piezoelectric sensors are carefully placed on the system and used as acoustic shock-wave discriminators to differentiate the noisier alpha signals, due to the decay of 222Rn and 218Po, from those due to nuclear recoils. PICO-40L adopts the Right Side Up (RSU) geometry, which eliminates the buffer fluid that can induce backgrounds. This comes at the cost of implementing a cold zone and a hot zone, depending on whether the C3F8 is in exclusive contact with a glass surface. The detector energy threshold was set mainly at 2.855 keV, for about 72 live days across 3 different phases and about 3244 kg·days of exposure, in order to verify stability before the physics run. There is currently a limiting factor in the experiment: the background rate on the walls is measured at 437 events per day, which affects the total live time. The causes of this issue are under investigation. The detector can be considered a prototype to verify the viability of the RSU geometry for the upcoming tonne-scale detector, PICO-500, which is expected to reach an order of magnitude better sensitivity to the SD WIMP-proton cross section.
The PICO collaboration uses superheated bubble chambers in search of dark matter through direct detection. The PICO-500 experiment is the next generation PICO detector, expected to reach world-leading sensitivity for weakly interacting massive particle (WIMPs) interactions. Building upon the success of previous PICO detectors, this next-generation apparatus will consist of
The latest results from the DEAP-3600 experiment will be presented. DEAP-3600 is a direct dark matter search experiment that employs single-phase liquid argon to detect potential dark matter interactions. Located 2 km underground at SNOLAB in Sudbury, Canada, the experiment uses a spherical acrylic vessel capable of holding 3600 kg of liquid argon. This vessel is monitored by photomultiplier tubes to detect scintillation light produced by rare nuclear recoil events, which may indicate interactions between dark matter particles and argon nuclei. A key feature of the experiment is its ability to distinguish nuclear recoil signals from electromagnetic background interactions using the pulse-shape discrimination technique. In recent years, the experiment has focused on key analyses, including dark matter searches and measurements of argon properties. Ongoing analyses include searches for solar axions, neutrino absorption, and WIMP candidates, and a measurement of the muon flux at SNOLAB. Following recent hardware upgrades, DEAP-3600 is set to commence its third phase of data collection, spanning one year, starting in 2025.
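As a rough illustration of the pulse-shape discrimination mentioned above: liquid argon scintillation has a fast (~6 ns) singlet and a slow (~1.3 μs) triplet component, and nuclear recoils produce a larger prompt-light fraction than electromagnetic events. A minimal Python sketch of a prompt-fraction discriminant follows; the window values are illustrative, not DEAP-3600's analysis choices.

    import numpy as np

    def prompt_fraction(photon_times_ns, prompt_end=60.0, total_end=10000.0):
        """Fraction of detected photons arriving within the prompt window.

        In liquid argon, nuclear recoils (singlet-dominated, ~ns) give high
        values (~0.7); electronic recoils (triplet-dominated, ~us) give low
        values (~0.3). Window choices here are illustrative only.
        """
        t = np.asarray(photon_times_ns)
        total = np.count_nonzero(t <= total_end)
        return np.count_nonzero(t <= prompt_end) / total if total else 0.0

    rng = np.random.default_rng(0)
    # toy recoil-like pulse: 70% fast (7 ns) + 30% slow (1500 ns) light
    pulse = np.concatenate([rng.exponential(7, 700), rng.exponential(1500, 300)])
    print(round(prompt_fraction(pulse), 2))   # ~0.7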
The detection of dark matter remains a central challenge in particle physics. Liquid argon (LAr) based experiments, like DEAP-3600, must understand potential background signals mimicking dark matter in order to achieve maximum sensitivity. Argon-1, a modular LAr detector at Carleton University utilizing silicon photomultipliers, provides a platform to study key background sources, aiding not only DEAP-3600 but also future detectors like the 400-tonne ARGO detector and its prototype, ARGOlite.
This talk presents measurements by Argon-1 of alpha scintillation quenching, a process through which a fraction of the energy deposited is not converted to scintillation light, with quenching factors depending, in general, on energy. These results complement the DEAP-3600 collaboration’s 2024 publication, extending quenching factor measurements into energy ranges inaccessible to DEAP-3600 due to contamination risks. Additionally, Argon-1’s contributions to benchmarking simulations for ARGO and ARGOlite are discussed, in preparation for the next generation of argon-based dark matter detectors.
Detectors for dark matter searches
What is the nature of dark matter? This fundamental question, which seeks to uncover its properties, composition, and origin, remains one of the greatest enigmas in modern physics. Despite direct detection experiments achieving unprecedented sensitivity — some even capable of detecting solar neutrinos — the mystery persists. Weakly Interacting Massive Particles (WIMPs) have long been a leading dark matter candidate, with the mass range below 10 GeV/c² still largely unexplored. The next-generation SuperCDMS experiment at the underground laboratory SNOLAB aims to address this frontier, employing advanced silicon and germanium cryogenic detectors to search for low-mass dark matter particles. This talk will provide an overview of the efforts performed to test SuperCDMS detectors in the Cryogenic Underground TEst facility (CUTE) at SNOLAB for the first time and will cover some preliminary results of the detector characterization and initial calibration studies.
The NEWS-G (New Experiments With Spheres-Gas) experiment uses a spherical proportional counter to search for low-mass dark matter at the SNOLAB underground laboratory. Radon, a significant source of contamination for the NEWS-G detector and other rare-event searches, requires effective mitigation strategies. This talk presents the performance of a novel radon trap using silver zeolite, which significantly outperforms the commonly used activated charcoal in radon capture efficiency. These results highlight silver zeolite as a promising new adsorbent for dark matter and neutrinoless double-beta-decay experiments. With the trap integrated into the NEWS-G detector at SNOLAB, its efficiency in a low-background environment is evaluated through alpha analyses, distinguishing radon-induced events from other alpha emitters such as Po-210 on the detector's inner surface. These findings will guide future radon reduction strategies, enhancing the sensitivity of NEWS-G dark matter searches.
The TRIUMF electron linear accelerator (e-Linac) will be the second driver beam for nuclear isotope production at the Advanced Rare IsotopE Laboratory (ARIEL). This accelerator will produce radioisotopes in the neutron-rich region of the nuclear landscape via photofission, whose yields depend strongly on the incident beam energy. To date, the e-Linac has been commissioned up to 10 kW of average beam power at 30 MeV beam energy. In addition to supporting the nuclear isotope program, the e-Linac will operate as a multi-user facility, taking full advantage of its scientific potential. One such user is the DarkLight experiment, which will use high-energy electron beams to search for a new force carrier, a so-called "dark photon", that would couple the theoretical dark sector to the Standard Model. In the coming years, ARIEL and DarkLight will depend significantly on reliable operation of the e-Linac. However, this is inhibited by the presence of particulate contamination in its superconducting RF (SRF) cavities. This contamination leads to a phenomenon known as field emission, in which electrons tunnel out of the cavity surface under the high applied surface electric fields. These rogue electrons limit the accelerating gradient and thus the final beam energy delivered to users. The TRIUMF e-Linac sees a progressive onset of field emission that cannot simply be explained by vacuum accidents. The environment of a particle accelerator provides an ideal opportunity for contaminating particulates to gain electrostatic charge, one of the main drivers of their dynamics in vacuum. However, fundamental parameters such as the composition and charge-to-mass ratio of these grains remain largely unknown and will be unique to each accelerator environment. After a brief introduction to ARIEL and DarkLight, I will present an analysis of particulates collected from the TRIUMF e-Linac, detailing their size, composition, and potential sources.
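For reference, field-emission current densities follow the Fowler-Nordheim form, schematically

    J(E) \propto \frac{(\beta E)^2}{\phi} \exp\left( - \frac{B \, \phi^{3/2}}{\beta E} \right),

where \phi is the work function of the emitting surface, \beta a local field-enhancement factor (large for sharp, charged particulates), and B \approx 6.8 \times 10^{9} eV^{-3/2} V/m; the exponential sensitivity to \beta E is why even sparse particulate contamination can dominate field emission at high surface fields.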
The Cherenkov fiber-based safety shut-off system developed for the ARIEL e-Linac at TRIUMF provides a scalable and cost-effective solution for monitoring beam losses and scattering events, addressing limitations of traditional systems such as large ionization chambers (LICs) and photomultiplier tubes (PMTs) with Bismuth Germanate (BGO) crystals. LICs are bulky and unsuitable for spatially distributed or multiple monitoring points, while PMTs with BGOs, though precise, are costly and require substantial radiation shielding when placed near the beamline.
This fiber-based system is particularly advantageous for experiments like DarkLight, which generate extensive beam loss and scattering and need to be monitored near the emission areas. To ensure operational safety, the system is designed to respond within 10 microseconds to detected instabilities, enabling rapid beam shutdown and protecting experimental equipment integrity. The fiber’s flexibility and economic benefit make it ideal for setups requiring numerous detection points. Furthermore, potential position sensitivity could provide 360° directional scattering detection along extended regions, reducing the need for additional sensors.
The system’s maintenance is straightforward, as damaged fibers can be replaced without personnel entering radiation-exposed areas. These attributes make the system highly suitable for high-radiation environments and applications such as FLASH radiotherapy, where precise, reliable, and responsive beam control is essential.
The Scintillating Bubble Chamber (SBC) collaboration is combining the well-established technologies of bubble chambers and liquid noble scintillators to develop a detector sensitive to low-energy nuclear recoils, with the goal of a GeV-scale dark matter search. Liquid noble bubble chambers benefit from the excellent electronic recoil suppression intrinsic to bubble chambers, with the addition of energy reconstruction provided by scintillation signals. The detector to be operated at SNOLAB is currently in development, featuring 10 kg of xenon-doped liquid argon superheated to 130 K at 1.4 bar. Surrounding the active volume are 32 FBK VUV-HD3 silicon photomultipliers to detect the emitted scintillation light. Deploying at SNOLAB allows for excellent cosmogenic suppression from 6010 m.w.e. of overburden; however, radiocontaminants embedded in detector materials and the surrounding rock then become the major source of background. This talk aims to discuss the radiopurity campaign and current progress leading to the quasi-background-free low-mass (< 10 GeV/c²) WIMP dark matter search.
Neutrino physics
Over more than a decade, the IceCube Neutrino Observatory has accumulated enormous datasets of neutrinos with energies from the GeV to the PeV scale, opening a new window through which to observe the Universe. In this talk I will discuss the latest IceCube results and provide a look ahead at what to expect from the next generation of neutrino telescopes, including the Canada-based Pacific Ocean Neutrino Experiment.
The Hyper-Kamiokande (Hyper-K) project plans to measure the phenomenon of neutrino oscillations with high precision, which may give us unprecedented insight into the matter-antimatter asymmetry in our universe. For this measurement, multiple water Cherenkov detectors will be deployed that will use a new detector technology called multi-photomultiplier tubes (mPMTs). These mPMTs each consist of nineteen 3” PMTs for the detection of Cherenkov radiation produced by the resultant charged particle in a neutrino interaction. This new technology is currently being tested in the Water Cherenkov Test Experiment (WCTE) at CERN, where charged particles and gamma-rays are injected into a detector filled with 97 mPMTs. The WCTE is a scaled-down version of the detectors that will be used in the Hyper-K experiment, and so its findings will be crucial to achieving the full physics potential of Hyper-K. This presentation will discuss the development and testing of the mPMT modules, as well as provide an overview of the WCTE, its current status, and the physics goals we look to achieve with it.
We show that, in a
Neutrinoless double-beta decay (0νββ)
Leveraging nEXO's ability to reconstruct events within its detector volume, the
Experimental techniques, reconstruction and performance in experimental particle physics
Collimated sprays of particles called 'jets' are a common product of proton-proton collisions at the Large Hadron Collider (LHC) at CERN, near Geneva. These jets are crucial for many Standard Model and beyond-the-Standard-Model analyses performed with the ATLAS detector, and as such, having the correct calibration and uncertainties for these objects is incredibly important. A large component of these calibrations is the set of systematic uncertainties derived from in-situ measurements of the differences in detector response between data and simulated (MC) samples.
The statistical combination of various in-situ measurements to produce a combined systematic uncertainty will be presented here. In addition, the collection of this and other uncertainties into jet energy scale (JES), jet energy resolution (JER), and jet mass scale (JMS) calibrations, which can be used by the ATLAS collaboration, will also be discussed. The result of this work is a set of calibrations for multiple jet types and Monte Carlo generators, currently in use by members of the ATLAS collaboration in their various analyses.
The ATLAS detector is a general-purpose detector at the Large Hadron Collider (LHC) that investigates a variety of physics, ranging from the Higgs boson to possible particles that make up dark matter. The LHC will be upgraded to the High-Luminosity LHC (HL-LHC) at the end of this decade, and in subsequent run periods a high-pileup environment with up to 200 events per proton-proton bunch crossing is expected. A more efficient trigger system in ATLAS is required to identify and calibrate the different physics objects in this high-pileup environment. Previous offline studies have shown that machine-learning approaches such as GNNs and DeepSets perform much better at identifying particle shower types and calibrating energy in the calorimeter than the existing architecture in the detector. The possible use of the DeepSets machine-learning model for this calibration in the online trigger is now being explored. Our DeepSets calibration model is being optimized to improve energy resolution while minimizing resources and latency. A first potential implementation proposal for inclusion in the Level-0 (L0) Global trigger in ATLAS will be discussed.
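For readers unfamiliar with the architecture named above, a DeepSets model treats the cells of a calorimeter cluster as an unordered set: a shared network encodes each cell, the encodings are sum-pooled (which makes the output permutation-invariant), and a second network regresses the target, for example an energy-calibration factor. A minimal PyTorch sketch follows; the layer widths and input features are illustrative, not the ATLAS model.

    import torch
    import torch.nn as nn

    class DeepSets(nn.Module):
        """Permutation-invariant model over a set of calorimeter cells.

        phi encodes each cell independently; sum-pooling makes the result
        independent of cell ordering; rho regresses the calibration target.
        """
        def __init__(self, n_features=4, hidden=64):
            super().__init__()
            self.phi = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.rho = nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),   # e.g. an energy-response correction
            )

        def forward(self, cells, mask):
            # cells: (batch, n_cells, n_features); mask flags real vs padded cells
            h = self.phi(cells) * mask.unsqueeze(-1)
            return self.rho(h.sum(dim=1))   # sum-pool over cells, then regress

    model = DeepSets()
    cells, mask = torch.randn(8, 128, 4), torch.ones(8, 128)
    print(model(cells, mask).shape)   # torch.Size([8, 1])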
The ATLAS experiment at the Large Hadron Collider (LHC) records events with energies up to 13 TeV using multiple complementary detectors. As a general-purpose detector, ATLAS employs a highly sophisticated software system to reconstruct events for a variety of analyses. These analyses span many orders of magnitude in energy and momentum, requiring accurate reconstruction at all energy levels. Critical to this reconstruction is particle flow, where we attempt to relate coincident signals in each detector, particularly tracks in the inner detector, to calorimeter hits. Doing so allows us to combine the high accuracy of the calorimeter at higher energies with the high accuracy of the tracker at low transverse momenta.
Currently, particle flow is performed using hand-chosen parameters to categorize cells by their proximity to extrapolated track paths. This system fails in dense jet environments where many tracks are present in close proximity. One promising approach is JetPointNet, which builds on the PointNet framework to perform inference with a machine learning model. JetPointNet has shown potential at segmenting cells in both high- and low-energy events without pileup, and stands to significantly improve the accuracy of jet reconstruction. This talk will discuss the ongoing efforts to develop and integrate a machine learning approach to cell attribution for event reconstruction.
In order to perform high-precision measurements and searches for new phenomena, experiments must account for differences between simulation and collected data. The LHC produces a huge amount of data every second, but only some of it is useful for a given analysis. One way of filtering out useful events is a sophisticated triggering system. To properly analyse the data, it is crucial to accurately determine and model the efficiencies of these triggers. The "tag-and-probe" method is one way of doing so, through so-called "efficiency scale factors". This talk will discuss how this method is used to calculate efficiency scale factors for the ATLAS experiment's muon trigger system, showing results from the latest data-taking period. These factors are used in a wide variety of analyses; some examples will be given.
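Schematically, the tag-and-probe method selects dilepton events (e.g. Z → μμ), tags one well-identified muon, and measures how often the unbiased probe muon fires the trigger:

    \varepsilon = N_{\mathrm{pass}} / N_{\mathrm{probe}}, \qquad \mathrm{SF} = \varepsilon_{\mathrm{data}} / \varepsilon_{\mathrm{MC}},

with the scale factors SF binned in variables such as transverse momentum and pseudorapidity and applied to simulation so that trigger efficiencies match those measured in data.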
Our ATLAS analysis group is performing a search in proton-proton collisions at the LHC for
There is currently an abundance of astrophysical evidence suggesting the existence of extra mass in the universe whose particle constituents cannot be explained by the Standard Model. Numerous theories have been put forward to provide potential particle candidates for dark matter, which are searched for by direct and indirect experiments. However, instead of focusing on finding a dark matter candidate, it can be easier to find a particle that mediates the interaction between dark matter and ordinary matter. One such particle, the dark photon, is hypothesized to couple to Standard Model particles through a mechanism called kinetic mixing. Two possible production modes for a high-mass dark photon are direct production and the Hidden Abelian Higgs Model (HAHM), both of which predict that a Standard Model particle can decay into a long-lived dark photon that travels a significant distance unseen in the detector before decaying into two muons. Those muons provide a signature for dark photon production that can be accessed at extremely high energies. The Large Hadron Collider (LHC) provides access to interesting phase space in both models that cannot be probed any other way, and the ATLAS detector at the LHC can be used to search for them.
ATLAS is one of the four main experiments at the LHC. It serves as a general multi-purpose detector consisting of tracking detectors, an energy calorimeter, and a muon spectrometer. The signature muons are expected to leave a trace in the silicon tracker layers and later a track in the muon spectrometer, the outermost layer of the detector. In this talk I present an ongoing analysis, with collaborators from the University of Oregon, Berkeley Lab, and Lund University, of current Run 3 LHC data to determine whether the ATLAS experiment can observe a long-lived dark photon decaying to muons. Specifically, efficiency studies of displaced muon reconstruction across the detector levels will be discussed.
Electroweak/Higgs physics, BSM
Nuclear beta and electron capture (EC) decays serve as sensitive probes of the structure and symmetries of our Universe at the microscopic scale. As such, precision measurements of the final-state products of these processes can be used as powerful laboratories to search for new physics from the meV to the TeV scale, as well as to address fundamental questions of quantum mechanics at the subatomic scale. Significant advances in "rare isotope" availability and quality, coupled with decades of sensing-technique development in the AMO community, have led us into a new era of fundamental tests of nature using unstable nuclei. For the past few years, we have taken the approach of embedding radioisotopes in thin-film superconducting tunnel junctions (STJs) to precisely measure the recoiling atom that receives an eV-scale "kick" following electroweak nuclear decay. These recoils are encoded with the fundamental quantum information of the decay process and final-state products, and carry unique signatures of weakly coupled beyond-Standard-Model (BSM) physics, if they exist. These include information on the neutrino mass, BSM weak forces, and potential "dark" particles created within the energy window of the decay. Our measurements have provided a complementary and (crucially) model-independent portal to the dark sector with sensitivities that push towards synergy between laboratory and cosmological probes. In this talk, I will briefly discuss the broad program we have developed to provide world-leading searches in these areas, as well as the technological advances across several sub-disciplines of science required to enable this work, including subatomic physics, quantum engineering, atomic theory, and materials science.
In the framework of the Standard Model Effective Field Theory (SMEFT), the Standard Model can be seen as a low-energy approximation of a deeper, more fundamental theory that introduces new heavy particles at a higher energy scale, Λ.
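Concretely, the SMEFT organizes such effects as an expansion in inverse powers of Λ:

    \mathcal{L}_{\mathrm{SMEFT}} = \mathcal{L}_{\mathrm{SM}} + \sum_i \frac{c_i}{\Lambda^2} \mathcal{O}_i^{(6)} + \mathcal{O}(\Lambda^{-4}),

where the \mathcal{O}_i^{(6)} are dimension-six operators built from SM fields and the dimensionless Wilson coefficients c_i encode the effects of the heavy new states; constraining the c_i, singly or in pairs, is the goal of the fits described below.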
This talk focuses on electroweak processes involving the coupling of the
This talk will present 1D constraints on the SMEFT operators most sensitive to these interactions, as well as novel 2D limits that constrain pairs of SMEFT operators simultaneously. These results help us better understand potential deviations from the Standard Model and bring us one step closer to uncovering new physics.
The Higgs boson was discovered in 2012 using data from the Large Hadron Collider (LHC) at CERN. Since its discovery, it has been observed interacting with heavy Standard Model (SM) particles such as the Z and W bosons and heavy quarks. A more interesting search focuses on Higgs interactions with light SM particles, such as the muon, manifested in the Higgs-to-dimuon decay path. According to SM predictions, this process is expected to be exceedingly rare. Analyses searching for
Neural networks (NNs) present one path to improving on BDT-based analyses. However, past analyses using NNs have produced shaping of the background near the signal peak, rendering the categorized data unreliable for further analysis. One novel method uses NNs trained with a distance correlation (DisCo) loss term. The DisCo loss term penalizes the network for shaping the background, leading to a better final analysis. This allows more information to be provided for training and is theorized to lead to enhanced background suppression compared to BDTs.
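A minimal sketch of the DisCo penalty in PyTorch follows; the variable names and the weight lam are illustrative, not the analysis's actual configuration. The penalty measures the statistical dependence between the classifier score and the dimuon mass, so minimizing it discourages the network from shaping the background mass distribution.

    import torch

    def distance_correlation_sq(x, y, eps=1e-12):
        """Squared distance correlation of two 1-D batches (biased estimator)."""
        x = x.reshape(-1, 1).float()
        y = y.reshape(-1, 1).float()
        a, b = torch.cdist(x, x), torch.cdist(y, y)   # pairwise distances
        A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
        B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
        dcov2 = (A * B).mean()                        # squared distance covariance
        denom = ((A * A).mean() * (B * B).mean()).sqrt().clamp_min(eps)
        return dcov2 / denom

    # combined objective (lam is an illustrative hyperparameter):
    # loss = bce(score, label) + lam * distance_correlation_sq(score, m_mumu)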
I will present the state of research on using DisCo NNs to categorize signal and background in the search for
The ATLAS experiment, located at CERN, studies high-energy proton-proton and heavy-ion collisions produced by the Large Hadron Collider (LHC), the world's largest particle accelerator. One of the main goals of the ATLAS experiment is to study the properties of the Higgs boson. With the discovery of the Higgs boson at ATLAS and CMS in 2012, the focus has shifted to studying the properties of the Higgs, such as rarer interactions like the Higgs decay into a muon-antimuon pair, which has yet to be observed. With an increased collision energy of 13.6 TeV for the 2022-2026 run of the LHC, a larger data set is expected to be produced, and with it an improved opportunity to investigate these rarer processes. It is important to produce high-quality simulated data as an expectation for what may be observed with the real data. For the simulated event samples used to compare to the data collected in 2022-2024, it has been observed that there are 50% fewer simulated jets than data jets in the forward regions, corresponding to the locations of the experiment's forward calorimeters. This differs from what was seen in the 2015-2018 run. A study found that introducing a correction that reduces the energy of the data jets by 10% can compensate for this mismodelling; however, such an ad-hoc correction does not identify the source of the issue. This talk will present a series of studies done to identify what may be contributing to this disagreement, including mismodelling of hadronic energy, pileup modelling, and overall calibration issues.
KDK and KDK+ research focuses on the decays of potassium-40 (40K). The most frequent decay mode is a β- decay to calcium-40. 40K also undergoes electron capture to an excited state of argon-40, as well as two much rarer decays: electron capture and β+ decay to the ground state of argon-40. The electron capture decay of 40K to the ground state of 40Ar was only recently observed experimentally by the KDK experiment, with a measured branching ratio of 0.098% ± 0.023% (stat) ± 0.010% (sys). The KDK+ experiment now aims to measure the β+ decay to the ground state of 40Ar. Liquid scintillators are known to have extremely high counting efficiencies for β decays, so the plan for KDK+ is to use a liquid scintillator to measure this extremely rare β+ decay. Measuring this decay requires a stable liquid scintillator cocktail (LSC) loaded with a 40K source. The stability of loaded liquid scintillators varies with the sample chemistry, necessitating an experiment to test the long-term stability of the mixture. Previous work determined an ideal solution of 85% liquid scintillator and 15% 1-molar potassium chloride, and an initial stability test was conducted. This initial test showed a 6% loss in signal over a two-month period, although no primary cause was identified. This necessitates a second stability test, which will use a plastic scintillator as a comparison to determine whether the decrease in signal originates internally in the LSC or from external factors. Since this branching ratio is extremely low, a high counting efficiency is necessary, as well as a precise understanding of how it will change over a long-term experiment.
Almost half of the elements heavier than iron are produced through the r-process. While it is now recognized that the r-process occurs in neutron star mergers, evidence suggests additional sites must also contribute. One such possibility is core-collapse supernovae, which are predicted to be driven by the weak r-process, where heavy elements are synthesized via a series of (α,n) reactions
To address this, the DEMAND array has been developed to study (α,n) reactions
[1] J. Bliss et al., Phys. Rev. C 101, 055807 (2020)
[2] M. Jaeger et al., Phys. Rev. Lett. 87, 20 (2001)
This work aims to correct and measure the elemental abundances found in supernova remnants. XSPEC, the most widely used program for measuring elemental abundances from X-ray spectra, uses a fixed (solar) value for the hydrogen and helium abundances. This is a reasonable approximation in hydrogen-rich portions of a supernova remnant, such as the forward shock. However, the assumption is invalid for hydrogen-deficient or heavy-element-dense regions of supernova remnants, such as the ejecta. This project uses updated modeling techniques that do not assume solar abundances to correct the elemental abundances in the ejecta of supernova remnants. We have found corrected abundances for five supernova remnants, with plans to expand the scope to as many supernova remnants as possible.
The nuclear Equation of State (EOS) represents the interactions of dense nuclear matter and is used to study astrophysical objects such as neutron stars. It is directly correlated with the neutron skin thickness, the layer of outermost neutrons observed to envelop large nuclei. The most accurate way to study neutron skin thickness is through Parity-Violating Electron Scattering (PVES); however, such experiments contain errors contributed by the Beam-Normal Single-Spin Asymmetry (BNSSA), which arises from the small normal component of the beam's polarization due to slight bends in the beam path. The BNSSA is proportional to the Compton Form Factor (CFF), which is in turn proportional to the Compton Slope Parameter (CSP).
In order to constrain the nuclear EOS, the neutron skin thickness must be measured. Currently, the extraction of the neutron skin of heavy nuclei is hindered by the 20% error in theoretical predictions of the BNSSA due to assumptions about the CSP, which depends on the energy deposited in a target during elastic Compton scattering. This is responsible for a systematic error in the nuclear EOS, limiting its accuracy to the precision of the CSP. To reduce the error on the Compton Slope Parameter, it is crucial to separate elastic and inelastic Compton scattering events, which can be discerned with the high-energy-resolution NaI detector CATS (Compton and Two Photon Spectrometer).
CATS detector calibrations, tests, and runs were carried out during July and August of 2024. The data collected included cosmic-ray and in-beam data; data on Compton scattering from carbon-12 are anticipated in the near future. The experimental results will be compared against a Geant4 detector simulation, allowing the extraction of the CSP and, it is hoped, a reduction in its uncertainty.
The TUCAN (TRIUMF Ultra-Cold Advanced Neutron) collaboration seeks to measure the neutron electric dipole moment (nEDM) with a sensitivity of
In this presentation, I will discuss the Cs sensors and ongoing experiments to characterize both the magnetic environment at TRIUMF and the sensors themselves. This will include upcoming efforts to integrate the
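For context, the nEDM is extracted from the shift in the neutron spin-precession frequency when a strong electric field E is applied parallel versus antiparallel to the magnetic field B; schematically, and up to sign conventions,

    h \nu_{\pm} = 2 \left| \mu_n B \pm d_n E \right| \quad \Rightarrow \quad d_n = \frac{h (\nu_{+} - \nu_{-})}{4 E},

which is why precise knowledge and stability of the magnetic field, monitored by magnetometers such as the Cs sensors discussed here, directly limits the achievable sensitivity.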
The ALPHA (Antihydrogen Laser PHysics Apparatus) experiment, based at CERN, investigates the matter-antimatter asymmetry problem by producing and trapping neutral antihydrogen to compare it with hydrogen. ALPHA has performed the first measurement of the 1S-2S transition in antihydrogen, as well as the first observations of the ground state hyperfine splitting, 1S-2P Lyman-alpha transition, and the effect of gravity on the motion of antimatter. Neutral antihydrogen is trapped in ALPHA at a magnetic minimum using an octupole magnet and short solenoids to provide radial and axial confinement, respectively. However, the radial components of the octupole and short solenoid fields combine to produce azimuthal asymmetries in the magnetic field. This effect, in combination with field asymmetries from the end turns of the octupole magnet, can cause trapped antiatoms to exchange or mix axial and transverse energy components. The timescale over which these energy components are exchanged is of particular interest to ALPHA. Energy mixing has never been directly experimentally investigated, and the timescale has never been measured. Understanding the motion of trapped antiatoms will inform simulations that can be used to further the precision of ALPHA’s experimental results. Additionally, energy mixing studies will inform both analysis and design for a variety of antimatter experiments, including laser cooling of antihydrogen, microwave spectroscopy, 1S-2S and 1S-2P spectroscopy, and measurements of gravity's effect on antimatter. I will present the first direct experimental evidence of antihydrogen energy mixing in ALPHA and the impact of these results on future experiments aiming to investigate the matter-antimatter asymmetry problem.
Same-sign lepton colliders offer a promising environment to probe lepton number violation. In this talk, we discuss processes that change lepton number by two units in the context of Majorana heavy neutral leptons and neutrinophilic scalars at μTRISTAN, a proposed same-sign muon collider. We will show that same-sign lepton colliders, with modest energy and luminosity requirements, can either reveal direct evidence of lepton number violation or significantly constrain unexplored regions of parameter space, especially in the case of a neutrinophilic scalar.
Heavy-ion collisions have reached energies high enough to melt the nucleus into its fundamental constituents, the quarks and gluons, creating a Quark Gluon Plasma (QGP). In addition to creating the QGP, these collisions can transfer large momenta to a small subset of quarks and gluons (also known as partons), promoting these partons to a highly excited state that subsequently radiates a collimated spray of particles called a jet. Jet showering has been studied extensively in electron-positron collisions where no QGP is formed (i.e. in the vacuum), making jets a calibrated probe of the QGP. Interactions between jet partons and the surrounding medium cause Brownian diffusion of the jet partons' momenta through collisions with the QGP, which is the main topic studied herein. Specifically, we compare the scattering rate and transverse momentum diffusion of jet partons within an inviscid as well as a viscous QGP, allowing the use of jets to constrain the viscosity of the QGP.
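The transverse momentum diffusion referred to above is commonly quantified by the jet transport coefficient

    \hat{q} = \frac{d \langle p_T^2 \rangle}{d L},

the mean squared transverse momentum picked up by a jet parton per unit path length in the medium; comparing \hat{q} between inviscid and viscous backgrounds is what allows jets to constrain the QGP viscosity.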
Ultra-relativistic heavy-ion collisions performed at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC) produce a deconfined state of quarks and gluons called the quark-gluon plasma (QGP). One of the primary goals of these collisions is to learn the properties of the QGP through the modifications it imparts on jets and photons. Jets are collimated sprays of particles that have been studied extensively in electron-positron collisions, where no QGP forms, making them a well-understood, calibrated probe of the QGP. They are initiated by a highly virtual parton, either a quark or a gluon. In the early stage, the highly virtual parton loses energy via bremsstrahlung radiation; in the later stage, multiple scatterings with the medium become the dominant energy-loss mechanism. We will use both processes to study QGP properties, with more focus on the early, highly virtual part of the in-medium jet evolution. Because photons can leave the QGP with negligible re-scattering, they provide information about the QGP at different stages of its evolution. We will present the scattering rates for photon production and argue that, to extract the fundamental properties of the QGP, photons and jets should be used together.
The Jefferson Lab
Unveiling Hadronic Mass Generation Through Light Meson Structure with ePIC
Love Preet
The Electron-Ion Collider (EIC) will be the world’s first polarized collider facility that is planned to be operational at the Brookhaven National Laboratory in the early 2030s. One of the key questions that the EIC will address is the emergence of hadronic mass. It has been observed that the elastic electromagnetic form factors of the charged pion and kaon,
[1] https://github.com/JeffersonLab/DEMPgen/releases
[2] Z. Ahmed, R.S. Evans, et al., "DEMPgen: Physics event generator for deep exclusive meson production at Jefferson Lab and the EIC," Computer Physics Communications, 2024. doi: https://doi.org/10.1016/j.cpc.2024.109444. URL: https://www.sciencedirect.com/science/article/pii/S0010465524003679
Fermilab's 2023 measurement of the muon's anomalous magnetic moment (a_μ)
The KaonLT/PionLT Collaboration probes hadron structure by measuring deep exclusive meson production reactions at Jefferson Lab. A set of high-momentum, high-resolution spectrometers in Hall C allows for precision measurements from which form factors and other observables can be extracted. One possible measurement is the beam spin asymmetry, which allows for the extraction of a polarized interference cross-section, σ_LT'
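Schematically, the beam-spin asymmetry is built from yields in the two beam-helicity states,

    A_{LU}(\phi) = \frac{1}{P_B} \, \frac{N^{+}(\phi) - N^{-}(\phi)}{N^{+}(\phi) + N^{-}(\phi)},

where P_B is the beam polarization; for exclusive meson production, its \sin\phi modulation is proportional to the polarized interference term \sigma_{LT'} being extracted.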
Generalized Parton Distributions (GPDs) represent a major advance in our understanding of hadronic structure and non-perturbative QCD. To study GPDs, one may use the Deep Exclusive Meson Production (DEMP) reaction, but first one must find the Q^2 regime where DEMP is factorizable. The factorization regime is where the cross section can be divided into two parts: a hard part calculated with pQCD, and a soft part parameterized by the GPDs. Theory predicts factorization will occur at "sufficiently high" Q^2. This presentation will discuss the current status of the PionLT experiment at Jefferson Lab to determine the onset of factorization for the exclusive pion electroproduction reaction. To determine factorizability we must perform an LT separation on the data, which divides the cross section into components based on the virtual-photon polarization. The PionLT experiment uses the Rosenbluth technique to perform LT separations. If factorization is confirmed, one can extract GPD information from this same separated data, with implications also for other similar experiments.
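For reference, the unpolarized exclusive cross section separates into virtual-photon polarization components as

    2\pi \frac{d^2 \sigma}{dt \, d\phi} = \frac{d\sigma_T}{dt} + \epsilon \frac{d\sigma_L}{dt} + \sqrt{2 \epsilon (1+\epsilon)} \, \frac{d\sigma_{LT}}{dt} \cos\phi + \epsilon \frac{d\sigma_{TT}}{dt} \cos 2\phi,

where \epsilon is the virtual-photon polarization parameter; measuring at fixed (Q^2, W, t) with different beam energies varies \epsilon, enabling the Rosenbluth-style separation of \sigma_L, the term expected to factorize into GPDs at sufficiently high Q^2.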