Hotel Chateau-Bromont
Thomas Brunner (McGill – co-chair)
Erica Caden (SNOLAB and McGill – co-chair)
Corina Andreoiu (Simon Fraser University)
Alain Bellerive (Carleton University)
Tim Friesen (University of Calgary – past chair)
Gwen Grinyer (University of Regina)
Annika Lennarz (TRIUMF – permanent member)
Tony Noble (Queen’s University – permanent member)
Katelin Schutz (McGill)
Jana Thomson (TRIUMF – Conferences Facilitator)
The vast majority of the elements heavier than helium are formed in stellar environments through sequences of nuclear reactions and decays. Relevant stellar environments for nucleosynthesis include both quiescent burning and explosive environments such as novae, supernovae, X-ray bursts, and neutron star mergers. As part of the global effort to understand the origin of the elements, the rates of key nuclear reactions that contribute to nucleosynthesis processes must be well constrained. While a variety of techniques exist to estimate reaction rates theoretically, or to indirectly measure key reaction rates, the "gold standard" remains a direct measurement of the relevant cross section at the relevant stellar energies.
Recoil separators, which separate the desired heavy-ion products of astrophysical capture reactions from background, are a powerful tool for directly measuring astrophysical capture reactions in inverse kinematics. While particularly useful in studying reactions involving short-lived unstable species, they are also excellent tools for studying stable-ion reactions due to the clean selection of the desired reaction channel. In this talk, I will provide an overview of direct reaction measurement techniques using recoil separators, discussing recent high-profile results as example cases. I will also discuss new measurement techniques that are currently under development, including a new technique to precisely extract resonance energies, as well as direct measurements of $(\alpha,n)$ reactions that couple next-generation organic scintillators with a recoil separator.
The slow (s) and rapid (r) neutron capture processes have long been considered to produce nearly the entirety of the elements above Fe, but comparison of their yields with spectroscopic data reveals abundance inconsistencies in the Z=40 region. These differences are thought to be attributable to the intermediate (i) neutron capture process.
At weak i-process neutron densities on the order of 10$^{13}$ neutrons/cm$^{3}$, the rate of the $^{90}$Sr(n,$\gamma$)$^{91}$Sr capture reaction is negatively correlated with the production of Zr, possibly explaining the discrepancy between the observed and predicted elemental abundances of Zr in i-process environments such as CEMP-i stars.
I will discuss the $\beta$-Oslo analysis of $^{91}$Sr to reduce uncertainties in the $^{90}$Sr(n,$\gamma$)$^{91}$Sr reaction, measured via the $\beta$-decay of $^{91}$Rb into $^{91}$Sr with the SuN total absorption spectrometer at the NSCL in 2018. By measuring both $\gamma$-ray and excitation energies, a coincidence matrix was produced to perform the Oslo analysis, providing experimental information on the Nuclear Level Density (NLD) and $\gamma$-ray Strength Functions ($\gamma$SF), two critical components in limiting the uncertainty of the neutron capture cross section when it cannot be directly measured. This constrained uncertainty will allow us to better characterize the contribution of $^{90}$Sr to the i process and make progress in explaining observed abundances in suspected i-process stellar environments.
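At the core of this technique is the standard Oslo-method ansatz (shown here schematically, with the usual notation): the primary $\gamma$-ray matrix factorizes into the level density at the final excitation energy and a transmission coefficient that, for dipole radiation, is directly related to the $\gamma$SF,
$$P(E_\gamma, E_i) \propto \rho(E_i - E_\gamma)\,\mathcal{T}(E_\gamma), \qquad \mathcal{T}_{XL}(E_\gamma) = 2\pi E_\gamma^{2L+1} f_{XL}(E_\gamma),$$
so that fitting the measured coincidence matrix simultaneously constrains both the NLD and the $\gamma$SF entering the cross-section calculation.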
Motivated by fundamental symmetry tests, the measurement of a large electric dipole moment (EDM) would represent a clear signal of CP violation, which is closely tied to the imbalance between matter and antimatter observed in our Universe. The Standard Model (SM) predicts EDMs well below the experimental reach, prompting the need to explore physics beyond the SM. The $^{199}$Hg isotope sets a stringent upper limit on an atomic EDM, accessed through the Schiff moment. In order to guide the nuclear structure models required for the calculation of the Schiff moment of $^{199}$Hg, we have undertaken detailed inelastic scattering measurements on $^{198,200}$Hg to map the distribution of $E2$ and $E3$ strength in these nuclei, since the Schiff moment is proportional to the product of the nuclear deformation parameters $\beta_2\beta_3$. The adjacent even-even nuclei were chosen over a direct study of $^{199}$Hg itself because of their lower level density and their spin-0 ground states, which make the analysis of the inelastic scattering reactions much easier. We chose to perform $(d,d')$ reactions because they populate $2^+$ and $3^-$ states well even when the matrix elements for excitation from the ground state are rather small. Several experiments on $^{198,200}$Hg were performed at the Maier-Leibnitz Laboratorium of the Ludwig-Maximilians Universität München. A 22 MeV deuteron beam bombarded targets of the compound $^{198,200}$Hg$^{32}$S, and the scattered particles were momentum-analyzed using the quadrupole three-dipole (Q3D) magnetic spectrograph and detected on its focal plane. Very high-statistics data sets were collected from this reaction, resulting in the observation of a considerable number of new states. The cross-section angular distributions provide information on spins and parities, and will ultimately be used to determine the excitation matrix elements.
Details of the analysis of the $^{198}$Hg$(d,d')$ reaction to date will be given.
Most matter in the universe is composed of light elements like hydrogen and helium, which were synthesised shortly after the Big Bang. Elements beyond iron are produced through neutron capture via the r-process or the s-process. However, these processes cannot explain the existence of several neutron-deficient nuclides, called p-nuclei, especially $^{92,94}$Mo and $^{96,98}$Ru. The $\nu$p-process, which occurs during core-collapse supernova (CCSN) explosions, and the rp-process in type-I X-ray bursts (XRBs) have been suggested as potential sites for producing these nuclei. In both processes, the ability to synthesise heavy elements depends on the competition between the $^{59}$Cu(p,$\gamma$)$^{60}$Zn and $^{59}$Cu(p,$\alpha$)$^{56}$Ni reactions, where the latter hinders the flow to the higher-mass region; this competition defines the Ni-Cu cycle. The upper temperature limit for an effective $\nu$p-process depends strongly on the $^{59}$Cu(p,$\alpha$)$^{56}$Ni reaction rate. CCSNe eject their nucleosynthesis products into the interstellar medium, whereas the ashes of XRBs do not; XRBs are therefore an unlikely source of heavy nuclei. However, the Ni-Cu cycle directly affects the energy generation and, consequently, the shape of XRB light curves, so measuring the $^{59}$Cu(p,$\alpha$)$^{56}$Ni reaction rate is important for understanding the Ni-Cu cycle in both the $\nu$p-process and XRBs. There is presently no direct experimental information on these reaction rates at temperatures relevant for CCSNe and XRBs. A first direct measurement of the $^{59}$Cu(p,$\alpha$)$^{56}$Ni reaction cross-section at a higher energy (E$_{c.m.}$ = 6 MeV) was reported recently. This work will report an experiment to directly measure this reaction cross-section at E$_{c.m.}$ = 4.64 MeV, which lies in the temperature range relevant to the $\nu$p-process.
Understanding the formation of the heaviest elements has long been a pivotal inquiry, and recent progress spurred by LIGO's detection of gravitational waves now leads us to examine kilonovae as crucial markers in unraveling the processes behind the synthesis of those elements. Notably, the emission spectra of MeV gamma rays could provide strong insight into the identification of individual isotopes if specific lines can be associated with specific isotopes. For example, the 2.6 MeV gamma-ray emission line from thallium-208 is well known in various branches of science, but it has never been pointed out as a potential real-time indicator of heavy-element production in an astrophysical context. In this talk, I will show that Tl-208 could be detectable ~12 hours to ~10 days, and again ~1-20 years, following a Galactic neutron star merger, implying that the r process in such events is capable of synthesizing elements such as lead and gold. In addition, I will discuss the implications of Tl-208 as a potential indicator of the synthesis of heavy elements via the i process in some types of AGB stars and rapidly accreting white dwarfs. This is a strong argument for the importance of future MeV telescope missions aiming to detect Galactic events, which may also be able to reach nearby galaxies in the Local Group.
Nuclei away from the line of stability have been found to demonstrate behavior that is inconsistent with the traditional magic numbers of the spherical shell model. This has led to the concept of the evolution of nuclear shell structure in exotic nuclei, and the neutron-rich calcium isotopes are a key testing ground for these theories; there have been conflicting results from various experiments as to the true nature of a sub-shell closure for neutron-rich nuclei around $^{52}$Ca. An experiment was performed at the ISAC facility of TRIUMF; $^{52}$K, $^{53}$K, and $^{54}$K were delivered to the GRIFFIN gamma-ray spectrometer paired with the SCEPTAR and ZDS ancillary detectors for beta tagging, as well as DESCANT for neutron tagging. Using this powerful combination of detectors, we combine the results to construct level schemes for the isotopes populated in the subsequent beta decay. Preliminary results from the analysis of the gamma, beta, and neutron spectra will be presented and discussed in the context of shell-model calculations in neutron-rich nuclei.
Among the most stringent tests of the Standard Model are precision experiments on beta decay. In these tests, a crucial input is the Q-value, i.e. the mass difference between the mother and daughter nuclides. The required precision can only be achieved with a Penning trap mass spectrometer. The spectrometer at TITAN-TRIUMF has been a pioneer in the use of highly charged ions to boost the precision and to reduce systematics through e.g. beam purification. To further enhance the achievable precision at TITAN, two upgrades have been implemented: first, the trap itself has been made cryogenic to extend the interrogation time of the highly charged ions and thereby improve the precision; second, a new phase-based technique to determine the cyclotron frequency is being implemented. The status of the Penning trap and its impact on the achievable precision in determining the relevant Q-values will be presented.
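For context (textbook Penning-trap relations, not TITAN-specific): the trap determines the cyclotron frequency of an ion of charge $q$ and mass $m$ in a field $B$,
$$\nu_c = \frac{qB}{2\pi m},$$
so the Q-value follows from a frequency ratio between mother and daughter ions, $Q = (m_\mathrm{mother} - m_\mathrm{daughter})c^2$. Since the achievable relative precision scales roughly as $\delta m/m \propto m/(qB\,T_\mathrm{obs}\sqrt{N})$, both a higher charge state $q$ and a longer interrogation time $T_\mathrm{obs}$, as enabled by a cryogenic trap, directly improve the measurement.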
The ongoing goal of relativistic heavy-ion collision experiments is to constrain the properties of nuclear media under extreme densities. Mounting evidence gathered from these experiments suggests that a strongly interacting plasma of quarks and gluons – the Quark Gluon Plasma (QGP) – is created within these collisions. The QGP is well described using multi-stage simulations, where relativistic dissipative hydrodynamics plays a key role. An irreducible moment expansion of the relativistic Boltzmann equation provides a systematic way to isolate the macroscopic behaviour of hydrodynamics. This formulation is the basis of modern fluid simulations of the QGP. To capture the chemical and kinetic evolution far outside thermal equilibrium, fluid dynamics is followed by molecular dynamics of hadrons (quark and gluon bound states). Modern Bayesian model-to-data comparisons run these multi-stage simulations on large supercomputers to produce a statistically robust extraction of QGP properties. I will discuss the constraints on the dissipative properties of the QGP, i.e., its shear and bulk viscosity, achieved using large-scale Bayesian analyses I have led, and outline a path toward future understanding of the QGP.
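Schematically, the expansion referred to here starts from the relativistic Boltzmann equation (standard form),
$$p^\mu \partial_\mu f(x,p) = C[f],$$
whose momentum moments, e.g. the energy-momentum tensor $T^{\mu\nu} = \int \mathrm{dP}\, p^\mu p^\nu f$ with $\mathrm{dP} = d^3p/[(2\pi)^3 p^0]$, yield the hydrodynamic fields; truncating the hierarchy of irreducible moments gives the dissipative equations of motion with the shear and bulk viscosities entering as transport coefficients.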
In order to search for physics beyond the Standard Model at the precision frontier, it is sometimes essential to account for Next-to-Next-to-Leading Order (NNLO) theoretical corrections. Using the covariant approach, we calculated the full electroweak leptonic tensor up to quadratic (one-loop-squared) NNLO ($\alpha^3$) order, which can be used for processes like $e^-p$ and $\mu^-p$ scattering relevant to the MOLLER (background studies) and MUSE experiments, respectively. In the covariant approach, we apply unitarity cuts to Feynman diagrams and separate them into leptonic and hadronic currents; hence, after squaring the matrix element, we can obtain the differential cross section up to NNLO.
In this presentation, I will briefly review the covariant approach and provide our latest results for quadratic QED and electroweak corrections to $e^-p$ and $\mu^-p$ scattering processes.
The MOLLER (Measurement of a Lepton-Lepton Electroweak Reaction) experiment aims to measure the weak mixing angle with unprecedented precision at low momentum transfer. The measurement will be carried out at Jefferson Laboratory's state-of-the-art Continuous Electron Beam Accelerator Facility (CEBAF) by measuring the parity-violating scattering asymmetry of 11 GeV highly polarized electrons on a liquid hydrogen target. This precision measurement will be sensitive to a variety of new physics, probing mass scales as large as 47 TeV. The University of Manitoba is leading the construction of MOLLER's primary electron detector. It incorporates quartz bars and 224 photomultiplier tubes (PMTs) to integrate the total Cherenkov light yield produced by the flux of scattered electrons in each helicity state. Understanding the non-linearity of the PMTs is crucial for achieving MOLLER's proposed precision. Customized PMT voltage dividers and pre-amps have been developed at U. Manitoba to achieve the stringent design requirement of less than $0.5\pm 0.1$% non-linearity. An apparatus replicating a helicity-dependent PMT signal similar to that expected at MOLLER was prepared to evaluate the PMT non-linearity. This talk will present the non-linearity precision achieved with this apparatus and the non-linearity performance of the PMTs to be installed at MOLLER.
Parity Violating Electron Scattering (PVES) experiments are a powerful tool for exploring physics beyond the Standard Model of particle physics. These experiments, which conduct highly accurate measurements of the parity-violating asymmetry across various kinematic conditions and targets, significantly contribute to advancements in particle, nuclear, and hadronic physics. Achieving precise measurements necessitates both experimental and theoretical corrections to the measured asymmetries.
Experimental corrections are required for background processes, characterized by fractional dilution factors and inherent background asymmetries. The implementation of auxiliary detectors to differentiate the main signal from background signals can enhance the signal-to-background ratio. However, certain uncontrollable factors during measurements, such as deviations in beam direction or spin-alignment discrepancies caused by source imperfections, accelerator flaws, external magnetic fields, or subsystem misalignments, require post-measurement corrections.
This presentation will compare Bayesian analysis with the frequentist method commonly used in PVES experiments, focusing on its application in improving the extraction of asymmetry components in both the Qweak and MOLLER experiments. The proposed Bayesian models are designed to infer background asymmetries from observed quantities, thus enabling more accurate corrections of the results. A comparison of the fitted values from the Bayesian analysis with the measured values and precise simulation values shows that the Bayesian-fitted values align more closely with the exact simulation values than the raw measured data do. This alignment highlights the effectiveness of Bayesian analysis in compensating for measurement imperfections in PVES experiments.
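As a minimal illustration of this approach (a sketch, not the collaborations' analysis code; all variable names and numbers below are hypothetical), the measured asymmetry can be modeled as a dilution-weighted combination of signal and background asymmetries, with the posterior sampled by a simple random-walk Metropolis algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: measured asymmetry plus auxiliary constraints on background
A_msr, sig_msr = -0.220e-6, 0.010e-6  # measured asymmetry and its uncertainty
f0, sig_f = 0.10, 0.01                # background dilution fraction (prior)
A_b0, sig_b = -1.0e-6, 0.5e-6         # background asymmetry from auxiliary detectors (prior)

def log_post(theta):
    """Log-posterior for (signal asymmetry, background asymmetry, dilution fraction)."""
    A_sig, A_bkg, f = theta
    if not 0.0 < f < 1.0:
        return -np.inf
    model = (1.0 - f) * A_sig + f * A_bkg         # dilution-weighted model asymmetry
    lp = -0.5 * ((A_msr - model) / sig_msr) ** 2  # measurement likelihood
    lp += -0.5 * ((A_bkg - A_b0) / sig_b) ** 2    # background-asymmetry prior
    lp += -0.5 * ((f - f0) / sig_f) ** 2          # dilution prior
    return lp

# Random-walk Metropolis sampling of the posterior
theta = np.array([A_msr, A_b0, f0])
step = np.array([0.01e-6, 0.1e-6, 0.002])
chain = []
for _ in range(50_000):
    prop = theta + step * rng.standard_normal(3)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
A_sig_post = np.array(chain)[5_000:, 0]  # drop burn-in, keep signal asymmetry
print(f"A_sig = {A_sig_post.mean():.3e} +/- {A_sig_post.std():.3e}")
```

The posterior width on the signal asymmetry then reflects both the measurement uncertainty and the auxiliary constraints on the dilution and background asymmetry, which is the mechanism by which a Bayesian fit compensates for imperfect background knowledge.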
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
High Voltage Monolithic Active Pixel Sensors (HVMAPS) will be used as electron detectors in the Measurement of a Lepton-Lepton Electroweak Reaction (MOLLER) experiment. The HVMAPS will map the response across the face of the quartz integrating detectors. Using two commercial GEM detectors, a position calibration system for MOLLER's HVMAPS was commissioned. The pair of GEMs is aligned vertically with an HVMAPS detector in between. Cosmic-ray muon tracks are identified using coincidence timing between the GEM detectors. From the track position and angle information, the HVMAPS pixel-by-pixel efficiency and the overall position resolution can be determined. Each GEM detector is filled with an argon/carbon-dioxide gas mixture in an 80/20 ratio at room temperature (293.15 K) and 250 Torr. Finally, the muon-data position calibration is achieved by comparing measured spectra with detailed Monte Carlo simulations performed using the GEANT4 and CRY codes. Preliminary results in this regard will be presented.
Of particular interest for nuclear structure and astrophysical processes are nuclides in the vicinity of N=Z=50, where the proton and neutron shells are each full or "magic". These doubly magic nuclides give unique insights into nuclear structure, exhibiting extraordinary stability such as high binding energies, and enable tests of isospin symmetry. In addition, these nuclides are critical inputs for understanding the rapid proton capture (rp-)process driving X-ray bursts, more specifically the closure of the SnSbTe cycle. For this reason, the masses of $^{104-107}$Sn were measured at TRIUMF's Ion Trap for Atomic and Nuclear science (TITAN) facility with the Multiple-Reflection Time-of-Flight technique. The TITAN mass values will be presented in this poster and used to investigate the stability of N=Z nuclei as well as their impact on our understanding of the rp-process.
There are many outstanding fundamental questions in nuclear physics that are described in the NSERC Subatomic Physics Long Range Plan. For several of these main research drivers, such as "How does nuclear structure emerge from nuclear forces and ultimately from quarks and gluons?", gamma-ray spectroscopy is the investigative tool of choice. However, analysis of data from large-scale gamma-ray spectrometers is often a bottleneck for progress due to the extremely complex nature of the decays of excited nuclear states. In some cases, thousands of individual gamma rays must be analyzed in order to construct excited-state decay schemes. To date, this is largely done laboriously by hand, with the final result depending on the skill of the individual performing the analysis.
The essence of this research lies in its multi-pronged approach, enabling a rigorous comparison of two dominant machine learning paradigms: supervised and unsupervised techniques. The ultimate goal is to determine the most effective framework for solving problems of this nature, and subsequently to enhance the chosen framework by integrating quantum computing, harnessing the power of qubits and quantum operations to overcome the computational restrictions inherent in classical computing.
The outcome of this work stands to offer the additional benefit of acting as a prototype for novel quantum machine learning enhancements that could extend well beyond this particular application into associated fields such as particle physics, medical physics, or any other field that encounters similar limitations in computing power when dealing with multi-body problems or calculations in higher dimensional spaces.
Particle physics and dark matter experiments using large time projection chambers (TPCs) with several square meters of light detectors produce tremendous amounts of data. To retrieve such a high volume of data, traditional copper- or kapton-based solutions require significant trade-offs to meet the performance and constraints demanded by the physics goals. A novel communication system based on silicon photonics and optical fibers is proposed. The goal of this system is to link the acquisition system to the inside of the TPC while addressing the unique challenges of each experiment, whether it be power consumption, high data rates, cryogenic environments, radiation-sensitive or radiation-heavy environments, or long-range communication. The system operates by using external lasers as optical sources for both the transmit and receive communication channels. Doing so removes a power-consuming component from the constrained environment but precludes the use of commercial solutions. In addition, external laser diodes solve the problem of their high sensitivity to radiation damage where this is relevant. Data is modulated onto this continuous laser source with silicon-photonics micro-ring resonators (MRRs). The signal is received on embedded germanium photodiodes. An application-specific CMOS chip is used to control and drive the silicon-photonics MRRs, receive the photodiode signals, and handle all required data-link control. This two-chip assembly, called an optical engine transceiver, sits at both ends of the communication channel between detectors in harsh environments and the data acquisition (DAQ) servers outside of the experiments. To deploy these transceivers and integrate them with the rest of the DAQ, we also designed and demonstrated an FPGA-based system capable of concentrating up to 32 optical engines onto a 100 Gb/s Ethernet link. We will present the structure and operation of the system and how it can help data collection in physics experiments.
SPAD-array-based photodetectors are emerging in large-scale photodetector panels because of their appealing characteristics such as compactness, low bias voltages, and good photon detection efficiency (PDE). Conventional SiPMs are considered for covering large areas, but their large capacitance brings a serious limitation. They also require an analog readout circuit designed with trade-offs between power consumption, bandwidth, and noise.
To overcome some of the SiPM challenges, this paper presents a photodetection module populated with photon-to-digital converter (PDC) CMOS readout ASICs and tailored for panels of thousands of cm$^2$ up to m$^2$. The prototype that will be presented is a 94 × 74 mm$^2$ module assembled with 8 × 8 PDC readout ASICs. Each ASIC has 4096 individual SPAD readout circuits at 78 μm pitch distributed over a 5 × 5 mm$^2$ area. Each of the 64 ASICs is connected to a tile controller implemented on an external FPGA platform to demonstrate the readout architecture. The tile controller is used to configure the system, program the PDC bin width (from 10 ns up to μs) for photon counting, and send the bin counts to a computer to visualize the data.
The tile controller can implement pulse-shape discrimination, dark-count mitigation, and time-to-digital converters with sub-100 ps timing resolution. These have been demonstrated on a preliminary 2 × 2 PDC readout ASIC module. We will also discuss the latest measurements on the 8 × 8 PDC tile.
Dark matter (DM) is an undetected form of matter whose existence is supported by numerous astrophysical observations on multiple scales. Since DM constitutes over 85% of the mass of the galaxy, its direct detection is one of the most important questions in fundamental physics today. For a broad region of sub-100 kHz (peV) "ultralight" DM (UDM), mechanical sensors are predicted to set the deepest constraints on the UDM-normal-matter interaction strength. Owing to its high mechanical quality (low force noise), superfluid helium is an excellent material for such acoustic UDM searches. Motivated by this, HeLIOS is a new class of sensitive small-scale mechanical UDM detectors that should place new constraints within hours of operation. Advantageously, its resonant response can be swept via pressurization, allowing it to explore more phase space than a fixed-frequency mechanical system. We propose several optimizations, including enhanced readout with a membrane-fiber-cavity optomechanical system, that should together improve sensitivity by several orders of magnitude.
The McDonald Institute is dedicated to cultivating an inclusive and diverse environment that prioritizes equity, diversity, inclusion, and Indigenization (EDII) within the realm of astroparticle physics research in Canada. Recognizing the significance of varied perspectives, backgrounds, and experiences, we believe that advancing our understanding of the universe is most effective through collaboration among individuals with diverse insights. Our commitment to EDII is deeply embedded in our community's mission, vision, and values, reflecting the integral role these principles represent in ensuring the success of astroparticle physics over the next several decades. As the field evolves towards longer project durations, sustaining a robust and inclusive research environment is deemed essential for maintaining scientific excellence. This presentation will focus on how the McDonald Institute is building capacity for EDII within the community and beyond, and how you can play a vital role in the future of astroparticle physics.
Lunch in the restaurant or lunch box - depending on your selection during registration.
The observed asymmetry between matter and antimatter in the Universe still awaits an explanation. If lepton number conservation, a global symmetry of the Standard Model, is violated, it could help explain this asymmetry. The most sensitive probe of this violation is a hypothetical decay known as neutrinoless double beta decay. Observation of this decay would prove that neutrinos are their own antiparticles, so-called Majorana particles. The primary focus of the nEXO Collaboration is the search for this process using a tonne-scale liquid xenon time projection chamber, rooted in the success of the EXO-200 experiment. Our projections give a half-life sensitivity beyond $10^{28}$ yr, sufficient to cover a milestone of this search: the inverted ordering of neutrino masses. This talk will introduce the search and describe the nEXO detector and its potential for discovery of new physics.
The NEWS-G experiment searches for low-mass dark matter candidates at SNOLAB in Sudbury, Ontario. This direct dark matter search is performed using a spherical proportional counter filled with low-atomic-mass gases, such as 98% neon + 2% methane. The detector consists of a grounded copper sphere with a high-voltage anode in the centre, which detects ionization within the gas. The inner surface of the sphere is contaminated with Pb-210, which decays to the alpha-emitter Po-210. After each background alpha decay, space-charge effects cause the event rate in the detector to increase, creating an "alpha tail". One set of runs covered a period when the gas quality in the detector was worsening; analysis of the event rate uncovered a relationship, with worsening gas quality increasing the alpha tail. I will describe the experiment, present its existing results and current status, and discuss my analysis of the alpha tails. A greater understanding of the alpha tails may elucidate the cause of the detector's most common background events, single-electron events, suggesting a valuable focus for future work.
The NEWS-G experiment at SNOLAB uses spherical proportional counters, or SPCs, to detect weakly interacting massive particles (WIMPs), which are a prime candidate for dark matter. Interactions within the gas-filled sphere create a primary ionization. The signal from the resulting electrons is passed through a digitizer, generating raw pulses that are observed as time-series data. However, these signals contain noise from electronics and from other particles entering the detector. I will discuss the use of machine learning techniques for removing noise from different pulse-shape types, as well as for rejecting bad pulses in the data. A large amount of simulated data is available to train and test neural networks. Models are trained on this data and, once fully trained, can be applied to real data. These models can potentially denoise and clean data more efficiently and with less error than traditional pulse processing, making them an important tool for the NEWS-G experiment.
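A minimal sketch of one such denoising network, assuming PyTorch and using a synthetic pulse in place of the NEWS-G simulation (the architecture and all parameters below are illustrative, not the experiment's actual model):

```python
import torch
import torch.nn as nn

class PulseDenoiser(nn.Module):
    """1D convolutional autoencoder: noisy pulse in, cleaned pulse out."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 9, stride=2, padding=4, output_padding=1),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

# Synthetic stand-in for a simulated SPC pulse: step rise with exponential decay, plus noise
t = torch.linspace(0, 1, 1024)
clean = torch.exp(-(t - 0.3).clamp(min=0) / 0.1) * (t > 0.3)
clean = clean.repeat(256, 1, 1)      # batch of pulses: (batch, channel, samples)
noisy = clean + 0.05 * torch.randn_like(clean)

model = PulseDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):              # train on (noisy -> clean) pairs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
print(f"final reconstruction MSE: {loss.item():.5f}")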
The Particle Physics Group at the Montreal Tandem Accelerator is currently setting up an experiment to verify the existence of the so-called X-17 particle. This putative particle of mass 17 MeV/$c^2$ was first claimed to be observed by the ATOMKI experiment in the decays of excited $^8$Be and $^4$He nuclei to their ground states via internal $e^+e^-$ pair creation. The anomaly observed in the angular correlations of the $e^+e^-$ pairs suggests the existence of a potential dark boson. The Montreal experiment aims to independently reproduce this result using a multiwire proportional chamber surrounded by scintillator bars, giving it an acceptance solid angle close to 4π. Data acquisition hardware has been tested and is being deployed; we aim to perform beam runs and data acquisition in early 2024 and expect results by summer 2024.
The Belle II experiment is operating at the SuperKEKB asymmetric-energy electron-positron collider and has accumulated 424 fb$^{-1}$ of collision data. This presentation focuses on the search for the dark photon at Belle II. The dark photon is a hypothetical particle proposed to serve as a mediator between the Standard Model and dark matter. We search for the production of a dark photon accompanied by an energetic initial-state-radiation photon. Specialized low-multiplicity triggers have been active for the entire Belle II dataset to target this detector signature. In light of the X17 anomaly reported by the ATOMKI collaboration, we focus on a low-mass dark photon decaying into an electron-positron pair within the Belle II detector. One of the challenges in this analysis is the background from photon conversion in detector material, which can mimic a dark photon decay. I will present the current progress of this analysis and our technique to validate the converted-photon background predictions.
PICO-500 is a WIMP dark matter bubble chamber detector in the early stages of construction underground at SNOLAB. It is the upscaled successor of PICO-40L, which is currently operational with the objective of detecting dark matter or improving the world-leading spin-dependent WIMP search limits set by PICO-60 in 2016. PICO-500 will have almost an order of magnitude greater sensitivity, with a ~250 litre active volume of superheated C$_3$F$_8$ (freon) inside a quartz vessel with bellows to enable expansion and compression of the fluid. Its detection method is based on the Seitz model, in which recoiling freon nuclei from incoming WIMP scatters produce localized boiling if the energy deposition within a critical radius exceeds the threshold energy. As was done for PICO-40L and PICO-60, the operating threshold of PICO-500 will be optimized for sensitivity to dark matter and insensitivity to electron recoils caused by gammas and electrons entering the detector. The electron recoil sensitivity will be probed by introducing an 18 MBq $^{60}$Co gamma source into the detector and measuring the bubble nucleation rate. Cross-verification between data and simulations from PICO-40L calibrations, as well as enhancements to the calibration system for PICO-500, will contribute to an exceedingly precise understanding of the PICO-500 detector response.
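For orientation, the Seitz criterion can be stated compactly (schematic form): a nucleated proto-bubble grows only if it exceeds the critical radius
$$r_c = \frac{2\sigma}{P_b - P_l},$$
where $\sigma$ is the surface tension and $P_b - P_l$ is the pressure difference between the vapor in the bubble and the superheated liquid; the threshold energy is then, to leading order, the sum of the surface energy of the critical bubble, the heat required to vaporize it, and the reversible expansion work, all fixed by the operating temperature and pressure.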
The PICO collaboration searches for dark matter using superheated C$_3$F$_8$ in bubble chamber technology. The results of the operation of the PICO-60 detector at SNOLAB set the most stringent direct-detection constraint to date on the weakly interacting massive particle (WIMP)-proton spin-dependent cross section.
PICO-500 is the next-generation detector produced by PICO, with a 250 kg live volume. Currently under development, it is the largest C$_3$F$_8$ bubble chamber ever constructed. As such, operating increasingly large bubble chambers at increasingly low nuclear recoil thresholds poses significant experimental challenges pertaining to hydraulic control. A key aspect of PICO's success is the ability to analyze a well-defined fiducial volume for which the thermodynamic threshold is stable and characterized with confidence. The PICO-500 hydraulic control system is the most advanced of any PICO detector to date.
This presentation will discuss new functionality introduced for PICO-500, including a dedicated oil degassing station, the ability to re-circulate and filter oil in the detector, a control panel engineered for minimal system losses, and dual redundancy for critical operation systems. Each of these additions is predicted to improve the experiment's exposure by minimizing dead time during compression cycles, reducing spurious nucleations caused by hydraulic instability, and overall allowing as much physics search data as possible to be recorded within the detector's lifespan.
The SuperCDMS collaboration uses cryogenic silicon and germanium detectors to directly search for dark matter. Among the full payload of 24 detectors are 12 HV detectors. These utilize a bias voltage across the crystal substrate to amplify the phonon signal created from particle energy depositions. We have operated 4 Ge and 2 Si HV detectors in a deep underground environment for the first time at the Cryogenic Underground TEst Facility (CUTE) at SNOLAB. This detector characterization campaign aims to understand the behavior of the detectors as well as look at the development of analysis cuts that may aid future SuperCDMS analyses. This talk will describe the testing efforts undertaken at CUTE and the results we've learned from this program thus far.
SuperCDMS SNOLAB will be a direct detection experiment with world-leading sensitivity to low-mass dark matter (DM) that employs cryogenic silicon and germanium calorimeters. The SNOLAB detectors are expected to exhibit a non-linear energy response, necessitating calibration signatures across the entire energy range of interest. A calibration method proposed for silicon-based detectors is to use Compton steps, which provide two energy signatures at ∼0.1 keV and ∼2 keV, crucial for sub-GeV DM searches. Compton step calibration has been explored at test facilities running SuperCDMS silicon high-voltage, eV-scale (HVeV) prototype detectors. In this talk, we will present the status of the Compton step calibration and discuss the numerous challenges associated with this technique.
The Large Hadron Collider at CERN has provided us with an unprecedented number of proton collisions at unprecedented energies. One of the main motivations of the LHC was the discovery of the Higgs boson, which was achieved in 2012. Now, over a decade later, this raises the question: what have we been doing at the LHC since the discovery of the Higgs boson? Together, we will explore the richness of the Standard Model of particle physics, and some of its peculiarities that the LHC is letting us investigate for the first time.
The ATLAS detector at the Large Hadron Collider (LHC) at CERN has collected $140~\mathrm{fb}^{-1}$ of proton-proton collisions over the course of the 2015-2018 data-taking period, allowing for some of the most stringent tests of the Standard Model of particle physics and new measurements of its parameters with unprecedented accuracy. This presentation details the first observation of the production of a $W$ boson in association with two photons in proton-proton collisions. The production cross section of this process, mediated by triple and quartic gauge couplings between the $W$ boson and the photon, is predicted and entirely determined by the Standard Model. It thus represents an important test of its validity and a unique opportunity to set limits on physics beyond the Standard Model. In addition, this process is a background to several important processes which will become accessible in the coming years as new data is collected from the LHC.
This measurement relies on the development of data-driven techniques to estimate several backgrounds to the $W\gamma\gamma$ process from misidentified particles in the ATLAS detector. In addition, a control region is used to constrain the background from top-quark decays in a simultaneous fit with the signal region. A significance of the $W\gamma\gamma$ process over background of 5.6 (5.6) standard deviations is observed (expected). The process is unfolded to particle level to obtain a fiducial cross section measurement of $\sigma_{\mathrm{fid}} = 13.8 \pm 1.1 (\mathrm{stat}) \substack{+2.1 \\ -2.0} (\mathrm{syst}) \pm 0.1 (\mathrm{lumi})~$fb, in agreement with the latest Sherpa and MadGraph Next-to-Leading-Order Monte Carlo simulation samples.
Many extensions to the Standard Model (SM) predict the existence of strongly interacting dark sectors, which behave similarly to Quantum Chromodynamics (QCD) but interact weakly with SM particles. Depending on the parameters of the dark sector, one potential long-lived particle (LLP) signature is an "emerging jet": a spray of particle tracks after a proton-proton collision dominantly composed of displaced tracks and containing many displaced vertices within the jet cone. Graph Neural Networks (GNNs) have shown great promise in capturing complex dependencies and patterns in graph-structured data, making them well suited for analyzing the intricate topology of emerging jets. A GNN-based architecture is used to tag emerging jets with high accuracy while significantly suppressing the QCD background. The architecture of the GNN also enables the classification of displaced tracks as well as the identification of displaced vertices within the jet cone. I will present a comprehensive evaluation of the GNN's performance in these jet-, track-, and vertex-level classification tasks, providing valuable insight into long-lived particle decay signatures.
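A minimal sketch of such a jet-level classifier, assuming PyTorch Geometric (the architecture, features, and graph construction below are illustrative, not the actual analysis model):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class JetGNN(torch.nn.Module):
    """Graph classifier: nodes are tracks, edges connect tracks within the jet cone."""
    def __init__(self, n_feat=4, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(n_feat, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)  # emerging jet vs. QCD jet

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))   # message passing over the track graph
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)          # aggregate track embeddings into one jet vector
        return self.head(x)

# One hypothetical jet: 10 tracks with features (pT, eta, phi, transverse impact parameter)
x = torch.randn(10, 4)
edge_index = torch.combinations(torch.arange(10)).t()  # fully connected track graph
batch = torch.zeros(10, dtype=torch.long)              # all 10 tracks belong to jet 0

logits = JetGNN()(x, edge_index, batch)
print(logits.softmax(dim=-1))  # class probabilities for this jet (untrained weights)
```

The same per-node embeddings can feed track-level heads, which is what enables the displaced-track and displaced-vertex classification mentioned above.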
The Large Hadron Collider (LHC) at the Conseil Européen pour la Recherche Nucléaire (CERN) features a 27 km ring of superconducting magnets that accelerates protons into extremely high-energy collisions to probe for new physics. Along the ring sits ATLAS (A Toroidal LHC ApparatuS), a cylindrical multi-purpose detector with various subsystems. At the heart of the detector, the electromagnetic (EM) calorimeter is instrumental in measuring the energy of charged particles and photons generated during high-energy collisions, using alternating liquid argon (LAr) and lead layers.
After the High-Luminosity (HL-LHC) upgrade, the LHC will achieve an instantaneous luminosity nearly 10 times greater than that of the original LHC. To withstand the higher radiation and data rates, the ATLAS experiment will be upgraded to maintain or improve detector performance in this new challenging environment. The planned upgrades to the LAr Calorimeter system include new front-end and readout electronics. Notably, the requirements include radiation tolerance of on-detector electronics, trigger and DAQ upgrades, revision of electronics architecture where all data is sent off-detector, and novel readout electronics. For the latter, a novel approach incorporating cutting-edge technology and a more flexible electronics architecture will be implemented. With the new system, all raw analog data will be digitized on-detector and then quickly sent off-detector via optical fibres to be processed using novel digital signal processing algorithms. A description of the new readout electronics system as well as the current development status will be presented.
A search for a novel experimental collider signature called 'Emerging Jets' is presented using the ATLAS experiment at the Large Hadron Collider. This signature is based on a model of 'Dark-QCD' wherein dark quarks will shower and hadronize analogous to quantum chromodynamics (QCD) in the Standard Model (SM). If produced in particle collisions, these dark particles would form dark-jets: collimated sprays of dark hadrons travelling away from the collision point. These dark hadrons would eventually decay into SM particles, causing displaced tracks and decay vertices which do not originate at the collision points but are seen to emerge into the detectors. This work shows a complete analysis overview, including the use of a Boosted Decision Tree (BDT) to improve the separating power between this unique signal and SM backgrounds, a description of the data-driven background estimation method and initial sensitivity predictions.
The ‘Data-Directed Paradigm’ (DDP) is a search methodology that efficiently explores possibilities for new physics within a vast number of spectra featuring smooth-falling Standard Model backgrounds. The DDP deviates from the traditional analytical approach by sidestepping the requirement for a simulated or functional-form-based background estimate. Instead, it uses a neural network trained to predict the log-likelihood-based significance so that, when applied to data, it can identify histograms that present an anomalous bump. The adoption of the DDP approach results in a significant reduction in time requirements and has the potential to enhance the ATLAS data discovery reach by enabling the prompt investigation of numerous unexplored regions.
Currently, the DDP is trained on samples whose backgrounds are analytical functions with injected artificial Gaussian signals. These background functions are obtained either from smoothly falling shapes or from fits to data-like histograms, such as the Dark Machines samples. The training of the model is currently being optimized in a series of tests, with the aim of eventually applying it to ATLAS data in the context of anomaly detection.
At the Francium Trapping Facility located at TRIUMF, our group uses laser cooling and atom trapping techniques to confine Fr and Rb atoms in a magneto-optical trap, allowing us to investigate highly forbidden optical transitions in these atoms with precision laser spectroscopy. For Fr, these investigations are important precursors to future atomic parity-violation (APV) experiments. Atomic parity violation arises from the parity-violating weak neutral interaction between an atom's electrons and nucleons. APV effects scale with nuclear charge roughly like Z$^3$, making Fr, the heaviest alkali, an ideal candidate for these types of experiments. The highly forbidden parity-violating E1$_{pv}$ transition between the 7s and 8s states in Fr is a signature of APV but is too weak to observe directly. We rely on interference of the E1$_{pv}$ amplitude with a parity-conserving Stark-induced E1$_{stark}$ amplitude in the presence of an external electric field - the so-called Stark interference technique - to enhance the APV signal. We will measure the asymmetry of this signal under parity reversals. In this talk I will present theoretical aspects and experimental requirements for realizing an asymmetry measurement in the E1$_{stark}$-M1 interference signal on the same 7s$\rightarrow$8s transition in Fr. Measurement of this signal will lend critical experience with interference-type measurements in preparation for future APV experiments.
This work is supported by NSERC and TRIUMF via NRC, and the Universities of Manitoba and Maryland.
The TRIUMF Ultra Cold Advanced Neutron (TUCAN) Collaboration is developing an apparatus to measure the neutron electric dipole moment (nEDM) to a sensitivity of $10^{-27}\,e\cdot$cm, an order of magnitude improvement over the current best limit. The measurement will be conducted over around $10^{5}$ experiment cycles using spin-polarized ultracold neutrons produced by the high-intensity TUCAN source in a well-characterized, magnetically quiet environment. Characterizing remnant magnetic-field-related systematic effects is vital to improving the nEDM limit.
An nEDM measurement relies upon precise magnetic field data collected during an experiment cycle. The mechanisms that produce bias in an nEDM measurement can be simulated computationally to relate cycle-by-cycle magnetic field measurements with an overall systematic nEDM uncertainty. This presentation will focus on the details of how a simulation is constructed to model an nEDM measurement and how the simulation can be used to understand the propagation of magnetic field measurement uncertainties to systematic uncertainty in the TUCAN EDM measurement.
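As a schematic of this kind of uncertainty propagation (a toy model with hypothetical numbers, not the TUCAN simulation itself): resample the per-cycle field measurements within their uncertainties, push each sample through a model of the false-EDM mechanism, and quote the spread as the systematic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-cycle magnetometer readings of a vertical field gradient (pT/cm),
# each carrying the same measurement uncertainty.
grad_meas = np.array([1.2, 0.8, -0.5, 1.0, 0.3])  # measured gradients, one per cycle
grad_err = 0.4                                    # magnetometer uncertainty (pT/cm)
k_false = 2.0e-28                                 # toy coupling: false EDM per unit gradient

def false_edm(gradients):
    """Toy systematic model: false EDM proportional to the cycle-averaged gradient."""
    return k_false * gradients.mean()

# Monte Carlo propagation: resample the readings within their uncertainties and
# collect the resulting spread of the false-EDM estimate.
samples = np.array([
    false_edm(grad_meas + grad_err * rng.standard_normal(grad_meas.size))
    for _ in range(100_000)
])
print(f"false EDM = ({samples.mean():.2e} +/- {samples.std():.2e}) e*cm")
```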
Antimatter remains one of the most vital yet mysterious areas of particle physics. The deficiency of antimatter in nature, despite its theoretically predicted abundance, leaves open questions regarding our understanding of fundamental symmetries. Antihydrogen, the antimatter counterpart of the hydrogen atom, provides a simple antimatter system to test these symmetries, such as the combined charge, parity, and time (CPT) symmetry, and to uncover potential deviations from the Standard Model.
The ALPHA (Antihydrogen Laser PHysics Apparatus) Collaboration, based at CERN, is at the forefront of exploring fundamental properties of antihydrogen with its unique antihydrogen trapping apparatus and novel experimental techniques. The aim of my research is to measure the ground-state hyperfine splitting of antihydrogen, a characteristic that arises from the interaction of the antiproton and the orbiting positron and a physical property that is sensitive to QED effects. In order to precisely measure these features, the experimental procedure, magnetic field, and systematics of the experiment must be carefully considered. Presently, we look to improve upon our previous measurement's precision of 4 parts in $10^{4}$, and to compare the resulting value with that of the well-known hydrogen atom.
In this presentation, I will report on the advancements and recent results achieved by the ALPHA Collaboration towards high-precision measurements of the hyperfine structure of antihydrogen, and prospects for further improvements.
One possible extension to the Standard Model of Particle Physics (SM) provides a mechanism, active in the early universe, that led to the matter-antimatter asymmetry we observe today. One of the most prominent proposed solutions relates this imbalance of matter and antimatter to new sources of CP violation, which may manifest, for instance, in permanent electric dipole moments (EDMs). Searches for EDMs and their associated CP-violating effects, including those originating inside the atomic nucleus, have been performed for decades. One novel concept in this search is the combination of precision atomic, molecular, and optical (AMO) techniques with rare isotopes that can be synthesized at accelerator facilities such as TRIUMF. Certain radionuclides, such as the well-studied $^{225}$Ra, exhibit an octupole deformation that can enhance sensitivity to nuclear CP-violating effects by up to three orders of magnitude. Recent calculations have shown that $^{229}$Pa may increase sensitivity to nuclear CP-violating effects by an additional factor of 40 compared to $^{225}$Ra. Parallel advancements in experimental techniques have led to the use of molecular systems to enhance precision by three to four orders of magnitude via the increase in the effective electric fields experienced by valence electrons in the presence of the nucleus. A further advantage is the ability to laser-cool the molecular system to improve sensitivity even more. In this case, it is necessary to create a system that is isoelectronic to neutral RaF, which has shown evidence of a suitable laser-cooling scheme. For these reasons, the ambition of the newly formed RadMol collaboration is to study $^{229}$PaF$^{3+}$. This contribution will present opportunities and challenges for studies of $^{229}$PaF$^{3+}$ as well as progress towards forming highly charged radioactive molecules in general.
The detection of dark matter is currently one of the leading challenges in particle physics. The DEAP-3600 experiment, the largest argon dark matter detector in the world, uses 3.3 tonnes of liquid argon and 255 photomultiplier tubes to detect scintillation light produced by a dark matter particle scattering on an argon nucleus. Since the interaction of dark matter scattering on argon is such a rare process, having a well understood background model plays a critical role in rejecting events that may mimic a dark matter signal.
Alpha particles produce a reduced scintillation signal compared to electrons of the same energy, an effect known as “quenching”, which is in general energy dependent. While typical decays producing alpha radiation are much too high in energy to mimic a dark matter signal, alpha particles traversing through detector materials will have their energy degraded, which may cause them to reconstruct in the dark matter search region. As such, understanding alpha particle quenching in liquid argon, particularly at low energy, is important in generating a well understood background model. In this talk, we discuss progress made in measuring alpha scintillation quenching in liquid argon using Argon-1, a modular detector instrumented with two silicon photomultipliers located at Carleton University.
Silicon photomultipliers (SiPMs) are the photo-detection technology of choice for future noble-liquid scintillator rare-event search experiments, both in neutrinoless double beta decay and dark matter. The Light only Liquid Xenon (LoLX) experiment is a small-scale R&D liquid xenon (LXe) detector located at McGill University. LoLX aims to perform detailed characterization of SiPM performance in LXe and to characterize the light emission and transport from LXe to inform future LXe detectors. To perform these characterizations, we require a flexible and accurate optical simulation framework that can be easily adapted to various detector configurations. Chroma is a fast, modern, GPU-accelerated photon-transport simulation framework, with simulation speeds up to 200 times faster than traditional CPU simulations such as GEANT4. The next-generation neutrinoless double beta decay experiment nEXO and the LoLX collaboration have worked to create a robust simulation toolkit using the Chroma framework to enable accurate optical simulations for detector sensitivity characterizations and rapid R&D development. The Chroma-based simulation toolkit has undergone extensive recent upgrades, including extensions that allow coupling to stand-alone GEANT4 simulations, scintillation light generators based on the NEST model, and Cherenkov light generators based on the G4Cerenkov model. To produce accurate simulations, we also require a detailed response model of SiPM light detection and correlated noise effects. The SiPM response model covers effects such as after-pulsing, optical crosstalk, external crosstalk, and dark noise. We will present an overview of the upgraded simulation toolkit and response model, and its impact on future detector characterization and noble-liquid detector R&D for nuclear and particle physics research.
Lunch in restaurant or lunch box - depending on your selection during registration
The baryon asymmetry of our universe is one of the most important phenomena driving new physics models. I will summarize the landscape of BSM physics required to explain this asymmetry and discuss the wide variety of experimental observations required to test these models.
Dark photons can oscillate into Standard Model (SM) photons via kinetic mixing. The conversion probability depends sensitively on properties of the ambient background, such as the density and electromagnetic field strength, which cause the SM photon to acquire an in-medium effective mass. Resonances can enhance the conversion probability when there is a level-crossing between the dark photon and background-dependent SM photon states. In this work, we show that the widely used Landau-Zener (LZ) approximation breaks down when there are multiple level-crossings due to a non-monotonic SM photon potential. Phase interference effects, especially when the dark photon mass is close to an extremum of the SM photon effective mass, can cause deviations from the LZ approximation at the level of a few orders of magnitude in the conversion probability. We present an analytic approximation that is valid in this regime and that can accurately predict the conversion probabilities in a wide range of astrophysical environments.
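For reference, the two-level LZ result against which these deviations are quantified takes the standard form (natural units; schematic notation):
$$P_\mathrm{LZ} = 1 - e^{-2\pi\gamma}, \qquad \gamma = \frac{|H_{12}|^2}{\left|\partial_t(E_1 - E_2)\right|},$$
where $H_{12}$ is the off-diagonal mixing term and $E_{1,2}$ are the diagonal energies swept through the crossing; the breakdown discussed here occurs when several such crossings are close enough that their phases interfere coherently.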
Experiments investigating the existence of magnetic monopoles and other exotic phenomena at the LHC, such as MoEDAL, are currently underway. This has included the search for velocity-dependent visible monopoles. The upcoming MAPP experiment will pursue the possible detection of milli-magnetically charged dark monopoles. Several novel extended features of our model are proposed, including a dark magnetic moment and beta-dependent milli-charged dark magnetic couplings. This theory has led to the development of a MadGraph model whose numerical results agree strongly with analytic expectations for both the Drell-Yan and photon-fusion production mechanisms. We conclude with a brief discussion of our upcoming detector simulation.
Studying atomic parity violation in weak interactions using atomic-spectroscopy-based techniques plays a vital role in testing the Standard Model at a different momentum scale. At the Francium Trapping Facility at TRIUMF, we trap and cool francium atoms in a magneto-optical trap and then drive the highly forbidden 7s-8s transition in this trapped radioactive atomic sample. Recently, we achieved a milestone by measuring a very faint magnetic dipole transition, 13 orders of magnitude weaker than an allowed transition, with better than 10% accuracy. In this talk, I will highlight two crucial contributions: 1) a power build-up cavity, which increased the light intensity ~4000-fold in the interaction region of our apparatus where the atomic sample resides; and 2) optical amplification of the detection signal by several folds through repeated absorption and emission of photons by the atoms, carrying the signature of the 7s-8s excitation, on a cycling transition, eventually leading to a detectable burst of photons. Our detection technique is now capable of measuring the transition rates needed for the final atomic parity violation experiment. I will discuss our measurement method for the M1 transition, the results, and how this measurement is a significant step towards measuring atomic parity violation in francium.
This work was made possible with funding contributions from NSERC, University of Manitoba, University of Maryland, NRC and TRIUMF.
Pions are fantastic tools to look for new physics! The PIENU experiment at TRIUMF has provided, to date, the most precise experimental determination of $R^{\pi}_{e/\mu}$, the ratio of pions decaying to positrons relative to muons. While more than an order of magnitude less precise than the Standard Model (SM) calculation, the PIENU result is a precise test of the universality of charged-lepton interactions, a key principle of the SM; it constrains a large range of new-physics scenarios and allows dedicated searches for exotics such as sterile neutrinos.
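For context, the sharpness of this test stems from helicity suppression; at tree level in the SM (a textbook result, before radiative corrections),
$$R^{\pi}_{e/\mu} = \frac{\Gamma(\pi^+ \to e^+\nu)}{\Gamma(\pi^+ \to \mu^+\nu)} = \frac{m_e^2}{m_\mu^2}\left(\frac{m_\pi^2 - m_e^2}{m_\pi^2 - m_\mu^2}\right)^2 \approx 1.28 \times 10^{-4},$$
so any new pseudoscalar interaction that evades the $m_e^2/m_\mu^2$ suppression is strongly amplified relative to the SM amplitude.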
I will give a short overview of $R^{\pi}_{e/\mu}$ measurements and their entangled history with the development of the SM, and introduce the next-generation precision pion decay experiment in the making: PIONEER! This newly proposed experiment aims to push the boundaries of precision on $R^{\pi}_{e/\mu}$ and to expand the physics reach by improving the measurement of the very rare pion beta decay $\pi^+\rightarrow \pi^0 e^+\nu$. This will provide a new and competitive input to the determination of $|V_{ud}|$, an element of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix.
In the pursuit of unraveling one of the fundamental questions in physics—why the Universe is matter-dominated—one possible approach is to scrutinize the nature of neutrinos through the investigation of neutrinoless double beta ($0\nu\beta\beta$) decay. The detection of $0\nu\beta\beta$ decay would signify lepton number violation in weak processes and confirm the Majorana nature of neutrinos. This signature would demonstrate physics beyond the Standard Model and potentially shed light on the observed matter-antimatter asymmetry in the universe.
The nEXO experiment is a proposed tonne-scale experiment searching for $0\nu\beta\beta$ decay in the isotope $^{136}$Xe. nEXO's projected half-life sensitivity exceeds $10^{28}$ years at the 90\% confidence level. The cryostat and Xe-filled time projection chamber (TPC) of the experiment are housed in a water tank. With a diameter of 12.3 m and a height of 12.8 m, the tank contains 1.5 kilotonnes of ultra-pure deionized water, providing shielding for the cryostat and the Xe-filled TPC against external radiation. In addition, the water tank will be instrumented with 125 8" photomultiplier tubes (PMTs) to detect Cherenkov radiation of passing muons, enabling vetoing of muon induced backgrounds in the TPC.
In this talk, I describe a calibration system under development for nEXO's outer detector (OD), the instrumented water tank, to calibrate the timing properties of the PMT readout system and to monitor the optical properties of the water. The presentation introduces the OD calibration design, leveraging results from GPU-accelerated ray-tracing software (Chroma), and outlines the hardware requirements.
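As a minimal sketch of what such a timing calibration typically computes (this is illustrative, not the nEXO calibration code; the pulsed-source scheme, array shapes, and all names are assumptions):

```python
import numpy as np

C_WATER = 0.225  # m/ns, approximate speed of light in water (n ~ 1.33)

def timing_offsets(hit_times, pmt_positions, src_position, pulse_times):
    """Estimate per-PMT timing offsets from a pulsed light source.

    hit_times:     (n_pulses, n_pmts) measured hit times [ns]
    pmt_positions: (n_pmts, 3) PMT coordinates [m]
    src_position:  (3,) source coordinates [m]
    pulse_times:   (n_pulses,) pulse emission times [ns]
    """
    # Expected time of flight from the source to each PMT through water
    tof = np.linalg.norm(pmt_positions - src_position, axis=1) / C_WATER
    # Residual = measured - emission - time of flight; the median over
    # pulses is robust against scattered and reflected late light
    residuals = hit_times - pulse_times[:, None] - tof[None, :]
    return np.median(residuals, axis=0)
```

In practice the expected photon paths and water optical properties would come from the Chroma ray tracing mentioned above rather than from this straight-line approximation.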
The nEXO experiment is being designed to search for neutrinoless double beta decay ($0\nu\beta\beta$) in a 5000 kg liquid xenon time projection chamber (TPC) enriched in the isotope xenon-136. nEXO requires unprecedentedly low radioactive background levels to achieve its desired sensitivity exceeding $10^{28}$ years. External backgrounds are shielded in part by immersing the TPC and its cryostat in a concentric cylindrical water tank. This water shield will be instrumented with PMTs to detect the Cherenkov light of passing cosmic muons and permit potential muon-induced background events to be vetoed in the analysis.
Extensive simulations of muon-induced cosmogenic backgrounds were performed in FLUKA and GEANT4 and compared with each other. In particular, neutron activation of Xe-136 to Xe-137, as well as activation of detector components, was studied in detail for the anticipated location of nEXO at SNOLAB. This study and the comparison between FLUKA and GEANT4 results will be presented.
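Schematically, the quantity being compared is a standard activation rate (a generic formula, not the specific scoring of either code):
$$R_{^{137}\mathrm{Xe}} = N_{^{136}\mathrm{Xe}} \int \phi_n(E)\,\sigma_{(n,\gamma)}(E)\,\mathrm{d}E,$$
where $\phi_n(E)$ is the simulated muon-induced neutron flux and $\sigma_{(n,\gamma)}$ the neutron capture cross section; FLUKA and GEANT4 can differ both in the predicted neutron flux and in the capture physics, which motivates the direct comparison.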
Barium-tagging is a technique being developed as a potential upgrade for an experiment called nEXO. The technique aims to identify single ions of the isotope Ba-136 in order to confirm the decay of its parent nucleus, Xe-136. Thanks to the event-localisation capability of the nEXO detector, Ba-tagging allows the detector volume around a candidate event to be probed for Ba-136. Identification of Ba-136 would serve as unambiguous proof that the observed event is related to the decay of Xe-136 and not to other backgrounds.
The main interest in this decay stems from the search for a lepton number violating process called neutrinoless double-beta decay ($0\nu\beta\beta$). If observed, it would be the first experimental evidence of the Majorana nature of neutrinos (i.e. that neutrinos are their own antiparticles), which would have profound implications for fundamental physics. Xe-136 is one of the isotopes studied since it decays via the Standard-Model-allowed two-neutrino double beta decay and could therefore also exhibit the neutrinoless channel. Thus, the nEXO experiment will search for $0\nu\beta\beta$ using large quantities (5 tonnes) of liquid xenon (LXe) that is 90% enriched in Xe-136.
A successful Ba-tagging technique would greatly boost the discovery potential of nEXO. Several approaches are being actively investigated to achieve the feat of extracting and detecting a single ion from tonnes of LXe. The scheme presented here uses a capillary to extract a small amount of LXe from a potential $0\nu\beta\beta$ event location in the nEXO time projection chamber. After this LXe transitions to the gas phase, ion-manipulation components isolate and identify the Ba ion. The talk will discuss the development of a specific section of the Ba-tagging setup, the linear Paul trap (LPT), which is designed to trap and bunch ions extracted from gaseous xenon for identification. Results obtained from studies performed with the LPT using a cesium ion source will be presented, along with plans for future studies using a multi-element ion source.
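A hedged note on the operating principle (standard Paul-trap theory, not the parameters of this particular LPT): for an RF quadrupole with field radius $r_0$ driven at amplitude $V_{\mathrm{RF}}$ and angular frequency $\Omega$, the radial motion of an ion of mass $m$ and charge $e$ is governed, in one common convention, by the Mathieu parameter
$$q = \frac{4 e V_{\mathrm{RF}}}{m r_0^2 \Omega^2},$$
with stable confinement (for zero static quadrupole field) requiring $q \lesssim 0.908$. Because the mass difference between Cs ($A = 133$) and Ba ($A = 136$) is small, a cesium ion source is a convenient offline stand-in for barium.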
Low-background noble-liquid-based experiments are among the leading world efforts in neutrino science and the search for dark matter. They rely on the development of large-scale photon-counting technologies to detect noble-liquid scintillation in the VUV range (argon at 128 nm or xenon at 175 nm) as a means to quantify, localize, and discriminate meaningful events.
The Université de Sherbrooke is leading the development of suitable detectors for these applications based on photon-to-digital converters (PDCs), a digital version of silicon photomultipliers (SiPMs). The very low sensitivity of silicon photodetectors at wavelengths below 250 nm (the VUV range), caused by the short penetration depth of UV photons in silicon and by the trapping of photocarriers by surface potentials, represents one of the main design challenges.
For example, commercial SiPMs claim 20-24% efficiency at 175 nm but less than 2% at 128 nm, far below the sensitivity demonstrated for silicon detectors (>50%). Indeed, high detection efficiency was demonstrated on CCD cameras in the 1990s by using delta-doping near the silicon surface. Other methods, such as PureB, “black silicon”, or passivation layers, are existing techniques that have yet to see large-scale implementation in SiPM technologies.
In collaboration with the Lawrence Berkeley National Laboratory (LBNL), we follow this path by developing a delta-doped surface layer as a post-fabrication process on front-side-illuminated single-photon avalanche detectors (SPADs). Our goal is to surpass the current state-of-the-art SiPM sensitivity below 350 nm. The presence of metals in the device limits the attainable processing temperature to below 450°C. This low thermal budget greatly complicates the surface preparation and growth process.
This talk will review our latest work on the delta-doping of front-side-illuminated SPADs.
The Light-Only Liquid Xenon (LoLX) experiment is designed to study scintillation light and characterize the performance of silicon photomultipliers (SiPMs) in a liquid xenon (LXe) medium. To ensure the SiPMs are functioning properly, it is necessary to verify that all of them are submerged. LoLX therefore requires a LXe level sensor. Outgassing requirements, cryogenic temperature, space constraints and the low relative permittivity of LXe ($\epsilon_{LXe} = 1.85$) prohibit the use of standard liquid level sensors.
We developed a highly sensitive capacitive liquid level sensor which consists of sinusoidal electrodes inside a flexible Kapton printed circuit board. The liquid level sensor is 30 cm long with a capacitance of $35.4 \pm 0.1$ pF. The capacitance is read out by an FDC1004 evaluation board and an Arduino Uno, and then sent to the LoLX slow control system to be converted to a liquid level. From measurements made in sunflower oil ($\epsilon_{oil} = 3.29$), we calculated that the sensitivity of the sensor in LXe will be $0.1889 \pm 0.0001$ pF/cm, orders of magnitude higher than that of liquid level sensors used by other groups. We plan to cross-calibrate the sensor in LXe with temperature-sensitive resistors as fixed-point sensors once it has been installed. Commissioning in LXe is planned for the next run of LoLX in March 2024.
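A minimal sketch of the implied calibration transfer and readout conversion (the permittivities and the LXe sensitivity are from the abstract; the oil sensitivity is back-computed and the dry-sensor baseline is an assumption, taking the wetted-length capacitance to scale linearly with $\epsilon_r - 1$):

```python
# Illustrative oil-to-LXe sensitivity transfer for the capacitive sensor.
# EPS_* values are from the abstract; S_OIL is back-computed from the
# quoted LXe sensitivity, and the dry baseline used below is hypothetical.
EPS_LXE = 1.85   # relative permittivity of liquid xenon
EPS_OIL = 3.29   # relative permittivity of sunflower oil
S_OIL = 0.509    # pF/cm, implied sensitivity measured in oil

# If the wetted-length capacitance adds (eps_r - 1) times a geometric
# factor, the oil calibration transfers to LXe as:
S_LXE = S_OIL * (EPS_LXE - 1.0) / (EPS_OIL - 1.0)  # ~0.189 pF/cm

def level_cm(c_pf: float, c_empty_pf: float = 35.4) -> float:
    """Convert a capacitance reading (pF) to a liquid level (cm)."""
    return (c_pf - c_empty_pf) / S_LXE

print(f"LXe sensitivity: {S_LXE:.4f} pF/cm")      # ~0.1889
print(f"level at 36.9 pF: {level_cm(36.9):.1f} cm")
```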
A neutron star is one of the densest objects in the universe. Studying and understanding binary neutron star mergers can therefore help us understand the behaviour of ultra-dense matter, i.e. nuclear matter under extreme conditions. During these stellar events, most of the gravitational potential energy is transferred to neutrinos that escape the neutron-rich ejecta, and some of these particles may be detected on Earth. They carry information about the equation of state of ultra-dense matter, and their detection in neutrino observatories can be compared with theoretical predictions from hydrodynamics simulations. More specifically, each equation of state used in a simulation predicts a neutrino spectrum specific to that equation of state. However, binary neutron star mergers are rare. Instead of trying to detect neutrinos from a single event, we propose to study the contribution of binary neutron star mergers to the diffuse neutrino background. Comparing our predictions of these relic neutrinos with event rates registered at neutrino observatories may allow us to constrain the equation of state of hot ultra-dense matter.
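Schematically, the diffuse flux follows the same line-of-sight integral used for the diffuse supernova neutrino background (written here under the assumption that it carries over to mergers):
$$\frac{\mathrm{d}\Phi}{\mathrm{d}E} = c \int_0^{z_{\max}} R_{\mathrm{BNS}}(z)\, \left.\frac{\mathrm{d}N}{\mathrm{d}E'}\right|_{E' = (1+z)E} \frac{\mathrm{d}z}{H(z)},$$
where $R_{\mathrm{BNS}}(z)$ is the comoving merger rate, $H(z)$ the Hubble rate, and $\mathrm{d}N/\mathrm{d}E'$ the per-merger neutrino spectrum; the equation-of-state sensitivity enters through the latter.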
In many environments, such as the early universe or stars, finite-temperature effects modify the properties of particles, including their dispersion relations. In particular, the Standard Model photon acquires an effective mass. Furthermore, collective plasma excitations give rise to a longitudinal mode which is absent in vacuum. The key object for computing the effective in-medium mass of both the transverse and longitudinal photons is the photon self-energy. Elegant analytic expressions have previously been derived for on-shell photons through a series of approximations. However, these expressions are not valid for off-shell photons, which can be important, for example, for dark matter phenomenology in astrophysical and cosmological environments. In this work, I will present new analytic expressions for the photon self-energy, valid for off-shell photons, relate them to the on-shell self-energy, and discuss applications in dark matter phenomenology.
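Schematically (standard thermal-field-theory relations, not the new results of this talk): the transverse modes appear as poles of the resummed propagator,
$$\omega^2 = k^2 + \pi_T(\omega, k),$$
while the longitudinal mode, absent in vacuum, approaches the plasma frequency at long wavelength; in a classical nonrelativistic plasma
$$\omega_p^2 = \frac{4\pi\alpha\, n_e}{m_e},$$
which also sets the transverse effective mass in that limit. Evaluating $\pi_{T,L}$ away from these dispersion relations, i.e. off shell, is what the new expressions provide.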
Lunch or lunch box depending on your selection during registration