CS Honors Capstone Projects / 2022-23


You are here because you are interested in signing up for the capstone honors program courses (CS 495 in Fall + CS 496 in Spring).
Here are the course details.

There are two paths to register for the honors capstone program course. One path is to pick a project offered by our faculty mentors - see the projects list below. The other is to propose your own project to a faculty member whom you would like to have as your mentor. Once you have identified a prospective faculty mentor, you will need to contact them directly to get their approval to register.

Note: If you are an Honors College student, you will need to submit your capstone proposal by June 1, 2022:
https://honorscollege.rutgers.edu/academics/curriculum/capstone-requirement


"Gamification" of scientific experiments: what is the role of incentives and motivation in complex human memory tasks?

Mentor:  Prof. Qiong Zhang

Psychologists have studied human memory for decades under well-controlled experiments conducted in the laboratory (or online using Amazon Mechanical Turk). Participants complete many repetitions of the same task in exchange for monetary compensation. It is not clear, however, whether human memory behavior would change under different incentives and motivation, e.g., when one is engaged and motivated in playing a memory game without monetary compensation. Computational simulation work in our lab has made predictions about some of these effects. The goal of this project is to convert an established laboratory memory task into a corresponding gamified version and examine whether the observed human memory behavior aligns with predictions from computational models of human memory.

Prof. Zhang is particularly interested in students with experience in game development.

Prof. Zhang is not accepting any more students.

 

Extremal Combinatorial Objects in Derandomization

Mentor: Prof. Karthik C. S.

Over the past few decades, randomization has become one of the most pervasive paradigms, with applications to algorithm design, cryptography, and combinatorial constructions. But can we reduce or even eliminate the use of randomness in these settings? In many scenarios, we can answer this question in the affirmative by using highly non-trivial extremal combinatorial objects such as expanders, error-correcting codes, small-biased sets, extractors, dispersers, almost-independent sets, and so on. In this project, the student will first survey the known constructions of many of these objects, which in many cases use simple yet rich ideas from algebra, geometry, number theory, and combinatorics. Next, we will explore whether we can propose constructions that improve on the state-of-the-art parameters for one or more of these objects.

 

Understanding the capabilities and limitations of beyond-binary states in quantum computing

Mentor: Prof. Yipeng Huang

Project Description

 

Specialized accelerator computer hardware for scientific simulations

Mentor: Prof. Yipeng Huang

Project Description

 

Correct and Efficient Math Libraries

Mentor: Prof. Santosh Nagarakatte

Project Description

 

Algorithmic Discrepancy Theory and Applications

Mentor: Prof. Peng Zhang

Discrepancy theory is a subfield of combinatorics. It asks the following question: given a family of vectors, how do we partition these vectors into two groups such that the two groups are as balanced as possible? One straightforward approach is to put each vector into one of the two groups at random with equal probability, independently of all the other vectors. The goal of discrepancy theory is to improve upon this random partition. Recently, algorithmic discrepancy theory has found rich applications in randomized experimental design, kernel density estimation, quantizing neural networks, etc. However, several challenges arise when applying discrepancy algorithms to real applications: e.g., we may have to deal with additional constraints, want smaller constants, or need even faster algorithms. In this project, the student will learn state-of-the-art algorithmic discrepancy theory, implement the algorithms, and compare them empirically; next, we will try to make improvements for one or two applications.
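To make the random-partition baseline concrete, here is a minimal sketch (the function name and the NumPy setup are illustrative, not part of any specific algorithm from the literature): each vector gets an independent random +/-1 sign, and the imbalance of the resulting two groups is the largest coordinate of the absolute signed sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_partition_discrepancy(vectors, trials=100):
    """Baseline from the description above: assign each vector
    independently to one of two groups with equal probability
    (a random +/-1 signing) and measure imbalance as the largest
    coordinate of |sum of signed vectors|. Returns the best
    (smallest) discrepancy seen over several random trials."""
    n, d = vectors.shape
    best = np.inf
    for _ in range(trials):
        signs = rng.choice([-1, 1], size=n)      # random group assignment
        disc = np.max(np.abs(signs @ vectors))   # imbalance of this split
        best = min(best, disc)
    return best

# Example: 200 random +/-1 vectors in 50 dimensions.
X = rng.choice([-1.0, 1.0], size=(200, 50))
print(random_partition_discrepancy(X))
```

Improving on this baseline - deterministically, with better bounds, or under extra constraints - is exactly where the algorithmic discrepancy literature begins.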

 

Controlling a robotic hand – with your brain

Mentor: Prof. Konstantinos Michmizos

Imagine using your brain to control a robotic hand. In our lab, we a) wonder how our brain moves our limbs, and b) study how our knowledge about the brain can inspire algorithms that move a robotic hand. There are five areas of knowledge required to succeed in this project - computer science, neuroscience, robotics, mathematics, and machine learning. When you come (or when you leave), you are expected to have some knowledge in at least three of these areas. Specifically, you are expected to become familiar with some of the hardware that we currently use in the lab, which includes a 128-channel EEG system, the Wonik Allegro robotic hand, our in-house robotic head, a robotic hexapod, and the Bionik Arm rehabilitation robot. You will also read – and digest – the relevant papers on EEG-driven robotic systems, including our own (e.g., Kumar & Michmizos, Nature Scientific Reports, 2022). And you will hopefully help us extend our current methods and applications. If successful, your Capstone project will develop novel brain-inspired methods and algorithms that can reach clinical applications, such as rehabilitation robotics and neuro-prosthetics.

 

Sound analysis of safety-critical programs

Mentors:
Prof. Srinivas Narayana
Prof. Santosh Nagarakatte

It is important to be able to extend the functionality of software without affecting its correctness. Recently, the Linux operating system introduced an extension mechanism, termed eBPF, that allows ordinary programmers to write low-level software that runs within the operating system kernel. Key to this extensibility is the kernel's ability to formally verify the safety of user-provided extension software, ensuring that this software cannot crash the operating system or leak privileged information. Unfortunately, the kernel's verifier has had numerous bugs in the past, making it possible for a malicious user to exploit the operating system. In this project, as part of a larger effort, you will explore formalisms, designs, and software to ensure the soundness of this safety verifier.
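The flavor of reasoning such a verifier performs can be sketched with a tiny interval abstract domain (this is an illustrative toy, not the actual Linux eBPF verifier, whose analysis is far richer): the verifier tracks a sound range for each value and rejects any program whose memory accesses cannot be proved in bounds, without ever running the program.

```python
# Toy interval analysis in the spirit of a static safety verifier.
# The classes and the buffer scenario are hypothetical illustrations.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """An over-approximation of a runtime value: every concrete value
    the program could produce lies in [lo, hi]."""
    lo: int
    hi: int

    def add(self, other: "Interval") -> "Interval":
        # Sound abstract addition: the result interval contains every
        # possible concrete sum of values drawn from the two inputs.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def within(self, lo: int, hi: int) -> bool:
        """Can we *prove* every concrete value lies in [lo, hi]?"""
        return lo <= self.lo and self.hi <= hi

# An untrusted offset known only to lie in [0, 12], added to a base index.
offset = Interval(0, 12)
base = Interval(0, 3)
index = base.add(offset)

# The "verifier" accepts only if every reachable index provably fits
# inside a 16-byte buffer; otherwise the program is rejected.
assert index.within(0, 15), "potential out-of-bounds access: reject"
print("verified safe:", index)
```

A soundness bug in this style of analysis - an interval that fails to contain some reachable value - is precisely the kind of flaw that has let malicious eBPF programs slip past the real verifier, which motivates the formal approach in this project.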

 

Understanding the zero-shot learning capabilities of pretrained language models

Mentor: Prof. Karl Stratos

Recently, the field of natural language processing (NLP) has been galvanized by the seemingly astounding capabilities of large-scale pretrained language models (PLMs) to solve unknown tasks (i.e., zero-shot learning) as long as they can be framed in natural language (e.g., provide the input "Translate the following sentence to German: [sentence]" to a PLM and expect a German translation of [sentence] as output). This approach, also known as "prompting", has generated a number of papers trying to use and better understand it, but most are superficial applications and analyses that do not offer a satisfying explanation for why and how prompting works. In this project, we will thoroughly chart the current landscape of prompting prominent PLMs (e.g., GPT-3, T0) through systematic experiments on standard zero-shot performance benchmarks (e.g., the T0 datasets), and aim to develop a new understanding of prompting as well as new prompting methods.