Welcome to the Q Lab!

The Quantum Information and Optics Lab, affectionately known as the Q Lab, is a part of Thomas Jefferson High School for Science and Technology in Northern Virginia. Each year, the lab welcomes a handful of seniors conducting their capstone research projects. Equipped with state-of-the-art microscopes, optical equipment, and sensors, the Q Lab enables these young physicists to conduct research in a college-like environment.




Recently Updated Projects

Hydrodynamic Quantum Field Theory Analogs

Vishal Nandakumar, Pranav Panicker, Kai Wang

We explore hydrodynamic quantum field theory, a model of quantum dynamics inspired by Louis de Broglie's double-solution pilot-wave theory and informed by the hydrodynamic pilot-wave system discovered by Couder and Fort in 2005. De Broglie originally proposed that every quantum particle contains an internal oscillation at the Compton frequency, exchanging its rest-mass energy with the energy of its pilot-wave field. He postulated that this pilot wave would satisfy the Klein-Gordon equation; Dagan and Bush have since extended this theory by modeling the particle's oscillations as localized disturbances to the scalar pilot-wave field. We start by physically modeling superwalking droplets and the pilot-wave system in a silicone oil bath. We then extend the two-dimensional form of the hydrodynamic pilot wave to three dimensions by exploring the free particle, the harmonic oscillator, and other quantum analogs. We also explore a possible link between the non-Markovian dynamics of the physical pilot-wave system and nonlocality in quantum systems.
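For reference, the two standard equations this abstract invokes, in their textbook forms (a sketch of de Broglie's proposal, not the authors' full model, which adds a localized particle forcing to the field equation):

```latex
% Compton frequency of the particle's internal oscillation
\omega_c = \frac{m c^2}{\hbar}

% Klein-Gordon equation satisfied by the scalar pilot-wave field \phi
\frac{1}{c^2}\,\frac{\partial^2 \phi}{\partial t^2} \;-\; \nabla^2 \phi \;+\; \frac{m^2 c^2}{\hbar^2}\,\phi \;=\; 0
```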


Exploring Geometrical Properties of Chaotic Systems Through an Analysis of the Rulkov Neuron Maps

Nivika Gandhi, Brandon Le

Dynamical systems theory is a branch of mathematical physics with applications across numerous fields. Some dynamical systems exhibit chaotic behavior, characterized by a sensitive dependence on initial conditions commonly known as the "butterfly effect." While extensive research has been conducted on chaos emerging from a dynamical system's temporal dynamics, our research examines extreme sensitivity to initial conditions in discrete-time dynamical systems from a geometrical perspective. Specifically, we develop methods of detecting, classifying, and quantifying geometric structures that lead to chaotic behavior in maps, including certain bifurcations, fractal geometry, strange attractors, multistability, fractal basin boundaries, and Wada basins of attraction. We also develop slow-fast dynamical systems theory for discrete-time systems, with a specific application to modeling the spiking and bursting behavior emerging from the electrophysiology of biological neurons. Our research mainly focuses on two simple low-dimensional slow-fast Rulkov maps, which model both non-chaotic and chaotic spiking-bursting neuronal behavior. We begin by exploring the maps' individual dynamics and parameter spaces, performing bifurcation analyses, describing and quantifying their chaotic dynamics, and modeling an injection of current into them. Then, by putting these neurons into different physical arrangements and coupling them with a flow of current, we find that complex dynamics and geometries emerge from the existence of multistability and sensitivity to initial conditions in higher-dimensional state space. We then analyze the complexity and fractalization of these coupled neuron systems' attractors and basin boundaries using our mathematical and computational methods. This paper begins with a conversational introduction to the geometry of chaos, then integrates mathematics, physics, neurobiology, computational modeling, and electrochemistry to present original research that provides a novel perspective on how types of geometrical sensitivity to initial conditions appear in discrete-time neuron systems.
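As a concrete illustration of the central object here, the following is a minimal Python sketch (not the authors' code) of the chaotic slow-fast Rulkov map; the parameter values are common illustrative choices for producing spiking-bursting, not the specific regimes studied in the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

def rulkov(n_steps, alpha=4.5, sigma=-0.5, mu=0.001, x0=-1.0, y0=-3.0):
    """Iterate the chaotic slow-fast Rulkov neuron map.

    The fast variable x plays the role of membrane voltage; the slow
    variable y drifts on a timescale set by mu << 1, so the orbit
    alternates between quiescence and bursts of spikes.
    """
    x, y = np.empty(n_steps), np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]       # fast subsystem
        y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma  # slow subsystem
    return x, y

x, _ = rulkov(20_000)
plt.plot(x, lw=0.5)
plt.xlabel("iteration n")
plt.ylabel("x (fast variable)")
plt.title("Rulkov map: spiking-bursting dynamics")
plt.show()
```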


Chaos Modeling: A Comparison of Classical and Quantum Reservoir Computer Capabilities

Arjun Bhat, Kanjonavo Sabud

Dynamical systems are an integral part of our world, and computational models are being explored for their potential to forecast them, from storm development and wildfire behavior to natural language understanding and stock market prediction. Reservoir computing (RC), a subfield of recurrent neural networks (RNNs), has received wide attention and has proven well suited to dynamical system forecasting. Quantum machine learning has likewise been shown to expand model capabilities. In this project, we combined the two by implementing a quantum reservoir computer (QRC) to model a benchmark dynamical system, the Lorenz 1963 system, using a standard artificial neural network (ANN) as a control model. Our results showed that the classical RC, with forecast horizons averaging 90% when trained on 9,000 timesteps, greatly outperformed the ANN model, which averaged 30%. However, our QRC model failed entirely to model the Lorenz 63 system, achieving a 0% forecast horizon. As research on QRC and its dynamical systems modeling capabilities is still in its infancy, it is likely that our implementation of QRC was not yet equipped to model a system as complex as Lorenz 63. Future work therefore includes improving our implementation of QRC. It would also be interesting to explore the intersection of graph theory and networks within the reservoir of a reservoir computer.
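To make the classical baseline concrete, here is a minimal echo state network, one common realization of a classical reservoir computer, forecasting the Lorenz 63 system in Python. All hyperparameters (reservoir size, spectral radius, ridge penalty) are illustrative assumptions rather than the project's actual settings:

```python
import numpy as np

# Lorenz '63 benchmark system, integrated with fixed-step RK4
def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(n_steps, dt=0.01, s0=(1.0, 1.0, 1.0)):
    s, traj = np.array(s0, dtype=float), np.empty((n_steps, 3))
    for i in range(n_steps):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj

rng = np.random.default_rng(0)
N = 500                                          # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, (N, 3))            # input weights
W = rng.uniform(-0.5, 0.5, (N, N))               # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

data = integrate(12_000)[2000:]                  # discard initial transient
train = data[:9000]                              # 9,000 training timesteps

r, states = np.zeros(N), np.empty((len(train), N))
for t, u in enumerate(train):
    r = np.tanh(W @ r + W_in @ u)                # drive reservoir with data
    states[t] = r

# Ridge-regression readout: map reservoir state -> next system state
X, Y, lam = states[:-1], train[1:], 1e-6
W_out = Y.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(N))

# Autonomous forecast: feed predictions back in as input
u, preds = train[-1].copy(), []
for _ in range(500):
    r = np.tanh(W @ r + W_in @ u)
    u = W_out @ r
    preds.append(u)
preds = np.array(preds)                          # compare against data[9000:]
```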


Development of a Scalable Silicon Photonic On-Chip Memory Architecture

Sathvik Redrouthu, Pranav Vadde, Pranav Velleleth

Over the past decade, there has been a dramatic increase in the parameter count of neural networks, driven by advances in machine learning algorithms, hardware, and data availability. This increase has enabled significant improvements in performance on a wide range of tasks, from image classification to natural language processing. In 2012, the state-of-the-art image classification model, AlexNet, had only 61 million parameters. By 2015, the VGG-19 model had 143 million parameters, and by 2016, the ResNet-152 model had 60 million parameters. In 2018, the DenseNet-264 model had 36.4 million parameters, while the EfficientNet-B7 model, released in 2019, had 66 million parameters. The parameter counts of natural language models have grown even faster. In 2015, the state-of-the-art language model had only 5 million parameters. By 2019, the OpenAI GPT-2 model had 1.5 billion parameters, and by 2020, the GPT-3 model had 175 billion parameters. This growth has raised concerns about the computational cost and environmental impact of training and running these models; training GPT-3, for example, is estimated to consume on the order of 1.2 GWh of electricity. To address these concerns, in addition to exploring techniques for reducing neural network parameter counts while maintaining performance, we have developed a set of silicon photonic accelerators with significantly higher speed and energy ratings for inference. Despite these advantages, such accelerators cannot efficiently execute larger models, because data must be converted to electrical signals for traditional intermediate memory storage. We therefore explore nonvolatile optical memory, with the goal of removing these intermediate conversions and improving overall performance. We begin by evaluating the effectiveness of on-chip silicon photonic memory architectures, notably those using phase change materials. We then revisit the original free-space experiments in nonvolatile optical memory and design an experiment translating them to the silicon photonic domain. We address the issues this introduces, such as on-chip crosstalk, with innovations that make optical memory cells scalable in a way that was previously not possible. We finish by evaluating our on-chip memory system against conventional systems and other silicon photonic architectures in the literature on storage time, scalability, storage capacity, and read/write time.

Simulating Self-Gravitating Dark Matter with Variational Quantum Computing

Abhinav Angirekula, Dhruv Anurag, Rohan Kompella

Galaxies contain far too little observable matter to be gravitationally bound. Thus, there must be an unobserved form of matter that makes up the missing mass: dark matter. Dark matter does not interact with light, and we can observe its effects only through gravity, such as gravitational lensing. The current leading candidates for dark matter are weakly interacting massive particles (WIMPs), primordial black holes, and axions. Accurate dark matter simulations are vital for researching its true nature, as scientists can compare simulations to their observations; a discrepancy between the two may point to an undiscovered property or interaction of dark matter. While dark matter simulation appears incredibly complicated, it boils down to time-evolving a system of differential equations. In our case, the Schrödinger-Poisson equations are a system of nonlinear differential equations that govern the evolution of several dark matter models, from fuzzy dark matter to standard cold dark matter. Mocz and Szasz solved the Schrödinger-Poisson equations with a classical spectral method and, using a variational quantum algorithm outlined by Lubasch et al., successfully modeled dark matter. However, instead of running their quantum algorithm on a quantum computer, they ran it on a simulation of a quantum computer on a classical machine. For our project, we aim to replicate the results of Mocz and Szasz by implementing both their classical and quantum algorithms. Because the number of qubits the quantum algorithm requires is within the limits of hardware available to us, we can run it on an actual quantum computer (e.g., IBM's) and compare the efficiency of the two approaches.
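For context, the governing equations in their standard fuzzy-dark-matter form (a textbook sketch; the mean-density subtraction follows the usual cosmological convention, and the exact formulation in Mocz and Szasz may differ in details):

```latex
% Schrodinger-Poisson system: \psi is the dark-matter wavefunction,
% V the self-gravitational potential, \bar{\rho} the mean density
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + m V \psi,
\qquad
\nabla^{2} V = 4\pi G \left( |\psi|^{2} - \bar{\rho} \right)
```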


Exploring Quantum Finance

Thomas Winston

Quantum computing is unquestionably the future. There are two general architectures behind any quantum computer: quantum annealing and the gate-based approach. Quantum annealing can solve a remarkable spectrum of optimization problems; notably, D-Wave, a quantum annealing company, is one of the few companies actually generating revenue from quantum computing today. My project focuses on using quantum annealing to tackle the large-scale optimization problem that is the stock market, through a mathematical framework called Modern Portfolio Theory, which reduces portfolio optimization to a basic trade-off of risk versus reward.
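For readers unfamiliar with the framework, here is the standard mean-variance objective and one common way to cast it as a QUBO for an annealer; this is a textbook sketch, not necessarily the project's exact formulation (the risk aversion q, penalty weight \lambda, and asset budget k are tunable parameters):

```latex
% Markowitz mean-variance optimization over portfolio weights w,
% with expected returns \mu and covariance matrix \Sigma:
\min_{w}\; q\, w^{\mathsf T} \Sigma\, w - \mu^{\mathsf T} w
\quad \text{s.t.} \quad \sum_i w_i = 1

% Binary (QUBO) form for a quantum annealer: x_i \in \{0,1\} selects
% asset i, and \lambda penalizes deviation from a budget of k assets:
\min_{x \in \{0,1\}^{n}}\; q\, x^{\mathsf T} \Sigma\, x - \mu^{\mathsf T} x
  + \lambda \left( \sum_i x_i - k \right)^{2}
```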

Tracking Microplastics with Quantum Machine Learning and Dynamics

Anirudh Mantha

Plastic pollution in the ocean has become a major concern in recent years. Approximately 400 million tons of plastic waste are generated annually, and when this plastic enters the ocean, it poses a threat to marine life. Microplastics, small plastic particles that break off from larger plastic clumps, are particularly hazardous. They are also difficult to track: current methods rely on detecting surfactants, chemicals that reduce the surface tension between two liquids. Because studies have shown that surfactants are often associated with microplastics, much research tracks microplastics by measuring the water's surface tension to infer surfactant concentration, assuming that where there are surfactants, there are microplastics. This assumption, however, is not an accurate way to locate microplastics. A more effective tracking method is therefore needed. One potential solution is to combine remote sensing with mathematical analysis of ocean current models: machine learning could locate the initial plastics, while a mathematical model could predict where particles travel after they break off. Many variables factor into modeling ocean currents, such as wind, water density, gravity, storms, and biomes. By successfully integrating these two approaches, we may be able to accurately predict the locations of microplastics and aid in their removal.
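As a sketch of the second half of that pipeline, the following Python example advects tracer particles (stand-ins for detected microplastic clumps) through a time-dependent velocity field with an RK4 integrator. The double-gyre field used here is a standard analytic test flow, an assumption in place of real ocean-current data, which would replace the velocity function:

```python
import numpy as np

def velocity(p, t, A=0.1, eps=0.25, omega=0.62832):
    """Time-dependent double-gyre velocity field (analytic test flow).

    A stand-in for measured ocean currents; real current data would
    replace this function in an operational tracker.
    """
    x, y = p[..., 0], p[..., 1]
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * a
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return np.stack([u, v], axis=-1)

def advect(p0, t0=0.0, dt=0.05, n_steps=2000):
    """RK4 advection of a set of tracer particles through the flow."""
    p, t = np.array(p0, dtype=float), t0
    path = [p.copy()]
    for _ in range(n_steps):
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = velocity(p + dt * k3, t + dt)
        p = p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
        path.append(p.copy())
    return np.array(path)

# Seed a small cluster of particles where an ML detector flags plastic
seeds = np.array([[0.9, 0.5]]) + 0.02 * np.random.default_rng(1).normal(size=(20, 2))
trajectories = advect(seeds)   # shape: (n_steps + 1, n_particles, 2)
```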