Research at NC State
Two projects, one degree, one quantum circuit, and one very useful tree.
During my master's I worked on two research projects: an interactive probabilistic risk assessment tool in D3.js that made dense risk data roughly 15x faster for stakeholders to read, and an optimized parallel quantum circuit in Python for representing complex polynomials efficiently.
Graduate Student Developer — Probabilistic Risk Assessment
Built an interactive radial tree visualization in D3.js and TypeScript for probabilistic risk assessments. The tool lets users add, remove, and rearrange nodes intuitively — turning dense tabular risk data into something navigable. The result was a 15x improvement in how quickly stakeholders could read and interpret the data.
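The core of a radial tree layout like this can be sketched in a few lines. This is a minimal, illustrative version in Python (the project itself was D3.js/TypeScript, and the tree structure and node names here are made up): leaves get equal angular slots, internal nodes sit at the mean angle of their children, and radius grows with depth.

```python
import math

def radial_layout(tree, ring=80):
    """Lay out a tree radially, like d3.tree with a radial transform.
    tree: nested dict {name: children-dict (empty dict for leaves)}.
    Assumes leaf names are unique. Returns {name: (x, y)}."""
    leaves = []

    def collect(node, children):
        if not children:
            leaves.append(node)
        for child, sub in children.items():
            collect(child, sub)

    (root, root_children), = tree.items()
    collect(root, root_children)
    slot = 2 * math.pi / max(len(leaves), 1)  # equal angular slice per leaf

    positions = {}

    def place(node, children, depth):
        if not children:
            angle = leaves.index(node) * slot
        else:
            # Internal node: mean angle of its children.
            angle = sum(place(c, s, depth + 1) for c, s in children.items()) / len(children)
        r = depth * ring  # radius encodes depth
        positions[node] = (r * math.cos(angle), r * math.sin(angle))
        return angle

    place(root, root_children, 0)
    return positions

# Hypothetical fault tree, just to exercise the layout.
fault_tree = {"top event": {"pump fails": {}, "valve stuck": {},
                            "power loss": {"grid": {}, "backup": {}}}}
pos = radial_layout(fault_tree)  # root at the origin, leaves on outer rings
```

The appeal of the radial form for risk trees is that sibling events share a ring, so depth in the fault hierarchy is immediately visible as distance from the center.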
Visualization work is interesting because the hard problem isn't the code — it's understanding how people read information. A technically correct visualization that nobody can parse is useless. The question is always: what does the person looking at this actually need to understand?
Research Assistant — Quantum Circuits
Developed an optimized parallel quantum circuit in Python to represent complex polynomials, achieving a balanced trade-off between time and memory efficiency.
Quantum circuit design sits at the intersection of linear algebra, physics, and computer science. The constraints are real and strange: quantum operations are reversible, qubits decohere, and the "memory" of a quantum computation is fundamentally different from classical memory. Finding representations that are efficient under these constraints requires thinking differently about what "efficient" means.
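The reversibility constraint is concrete: every gate is a unitary matrix, so it preserves the norm of the state and can be undone exactly. The sketch below (illustrative only, not the project's circuit; the example polynomial is invented) shows the related idea of amplitude encoding, where a polynomial's coefficients become the normalized amplitudes of a quantum state.

```python
import numpy as np

# p(x) = (3-i) + 2i*x^2 + x^3, coefficients amplitude-encoded into 2 qubits
coeffs = np.array([3 - 1j, 0, 2j, 1], dtype=complex)
state = coeffs / np.linalg.norm(coeffs)      # 2 qubits hold 4 amplitudes

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate (unitary)
U = np.kron(H, np.eye(2))                     # H on qubit 0, identity on qubit 1

out = U @ state          # unitaries preserve the norm: ||out|| == 1
back = U.conj().T @ out  # and are reversible: U-dagger undoes U exactly
```

This is exactly where the "strange constraints" bite: you cannot overwrite an amplitude the way you overwrite a classical variable, so efficient representations have to be built from invertible steps.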
The IEEE paper
I also co-authored an IEEE paper — "Explainable Approach for Species Identification using LIME" — published at the 2022 IEEE Bombay Section Signature Conference. The paper used LIME (Local Interpretable Model-agnostic Explanations) to examine what features a species classifier actually learned, and whether those features corresponded to real biological characteristics.
In several cases they didn't. The model had learned artifacts of how the images were captured. This is why interpretability matters: a model that scores well on a test set is not the same as a model that learned the right thing.
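The mechanism behind that finding is simple to demonstrate. Below is a toy version of the LIME idea (not the paper's code; the "model" and features are invented): perturb an input around a base point, query the black box, and fit a local linear surrogate. The surrogate's weights expose which features the model actually uses, even when that feature is a capture artifact rather than real biology.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Pretend classifier score: secretly depends only on feature 0,
    # standing in for an artifact like image brightness.
    return 1 / (1 + np.exp(-3 * X[:, 0]))

base = np.zeros(4)                        # the instance to explain (4 features)
X = base + rng.normal(0, 0.5, (500, 4))   # local perturbations around it
y = black_box(X)                          # query the black-box model

# Fit a local linear surrogate by least squares (intercept column appended).
A = np.hstack([X, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

importance = np.abs(w[:4])  # feature 0 dominates: the model's real dependence
```

A real LIME run adds proximity weighting and interpretable feature binarization, but the core move is the same: the explanation comes from the surrogate's coefficients, not from the black box itself.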