April 27, 2024


Gen Z’s Search for Ethical AI at the Regeneron Science Talent Search

Photo credit: Society for Science/Chris Ayers Photography

While many people are losing trust in AI, young scientists of Generation Z are embracing its potential and striving to put it to good use. Seventeen-year-old Achyuta Rajaram of Exeter, NH, recently won $250,000 at the Regeneron Science Talent Search for his research on ethical AI.

Rajaram’s research focuses on AI algorithms and how they make decisions. By inspecting the inner workings of these algorithms, his work marks a significant step toward improving the ethical framework of AI and ensuring its fairness and safety.

Innovation & Tech Today spoke with Rajaram about the inspiration behind his research, the role young scientists play in the public’s acceptance of AI, and more.

Innovation & Tech Today: Congratulations on winning first place in the Regeneron Science Talent Search 2024! Can you share what inspired you to research AI algorithms and their decision-making processes?

Achyuta Rajaram: I have always been interested in studying the nature of intelligence and have sort of seen it as an engineering problem. However, studying human brains directly is extremely challenging. The scalpels of modern biology are more like sledgehammers; you can’t run minimal interventions on the human brain due to the massive complexity of all the entangled systems involved.

Given my background in computer science, I felt that neural networks are the natural place to study the algorithms behind intelligence; we can do “surgery” on neural networks, manually editing them in ways that would be simply impossible for a biological organism. 

This allows us to gain general insight into the nature of intelligence; one example I love to cite is that interpretability research found that humans implement the same “building blocks” of vision as deep learning models, using Gabor filters for simple, low-level tasks like edge detection. 
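
To make that shared “building block” concrete, here is a minimal NumPy sketch of a Gabor filter, the oriented edge detector Rajaram refers to; the parameter values are illustrative assumptions, not taken from any particular model or study.

```python
# A minimal, illustrative Gabor filter in NumPy; parameter values are
# assumptions chosen for readability, not drawn from Rajaram's research.
import numpy as np

def gabor_kernel(size=21, theta=0.0, sigma=4.0, lambd=10.0, gamma=0.5):
    """Return a size x size Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot**2 + (gamma * y_rot)**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * x_rot / lambd)     # sinusoidal carrier
    return envelope * carrier

# Convolving an image with kernels at several orientations responds most
# strongly to edges aligned with each orientation -- edge detection, the
# low-level task both brains and deep networks solve this way.
kernels = [gabor_kernel(theta=t) for t in (0.0, np.pi / 4, np.pi / 2)]
```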

I was thus primarily inspired by the promise of “reverse engineering intelligence” by looking at neural networks. On top of this, I was excited by the practical applications of understanding what happens “under the hood” of these large complex models, which allows us to make safer, more efficient, and more robust systems in the real world.

I&T Today: How do you envision your research contributing to the improvement of ethics in AI and making algorithms fairer, safer, and more effective?

Rajaram: I think interpretability research, including the research I conducted, has shown immense promise toward making algorithms safer, more effective, and more robust. Let’s break this down. 

Efficiency: Neural networks are incredibly expensive to run, especially today, when state-of-the-art methods often require immense scale: billion-item datasets to train them and supercomputers to run them on. By better understanding the internals of models, we can remove unnecessary components, saving computational resources while retaining overall functionality. My work specifically could be applied to find redundant “circuits” within a larger model. This method of “pruning” away model components for increased efficiency promises to democratize access to powerful neural network-based systems.
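
For readers who want to see the pruning idea in code, here is a minimal sketch using PyTorch’s built-in pruning utilities. The layer and the 25% pruning ratio are illustrative assumptions; this is not Rajaram’s actual method.

```python
# A minimal PyTorch sketch of structured pruning: zeroing out whole
# output channels that contribute least to a layer. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)

# Zero out the 25% of output channels with the smallest L2 norm.
prune.ln_structured(layer, name="weight", amount=0.25, n=2, dim=0)

# Fold the pruning mask permanently into the weights.
prune.remove(layer, "weight")

# Pruned channels now compute nothing useful and could be dropped
# entirely, saving compute while retaining overall function.
zeroed = (layer.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
print(zeroed, "of", layer.weight.shape[0], "channels zeroed")
```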

Safety: As your question implies, there are plenty of ways for AI, or machine learning systems more broadly, to cause harm, from race and gender biases lurking within training data to future LLMs potentially assisting in bio-weapon creation. Given these very real risks, I believe that gaining a full mechanistic understanding of model behavior is the only way to ensure that the systems we use in the real world are safe; as long as neural networks remain “black boxes,” we won’t have strong guarantees about the fairness and safety of these algorithms. I hope that my work can serve to identify and remove unsafe or biased components in larger computer vision systems.

Robustness: There are plenty of examples of vision models making mistakes and failing to generalize in the wild. One example I studied in my work was adversarial textual attacks on CLIP, a widely used open-source model. More specifically, there is an interesting failure mode where the model misclassifies a traffic light if a sign naming the opposite color appears next to it. We were able to find and remove the component of the model that caused this issue, rendering it robust to this attack. I’m excited about interpretability science’s applications to understanding the root cause of these failures in the wild.
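
As a rough illustration of the typographic attack Rajaram describes, here is a hedged sketch using the open-source CLIP package (github.com/openai/CLIP); the image file, label prompts, and model variant are all assumptions chosen for illustration.

```python
# A hedged sketch of a typographic attack on CLIP: pasting contradictory
# text into an image can flip its classification. Assumes the `clip`
# package from github.com/openai/CLIP and an image file of a red traffic
# light with a sign reading "GREEN" next to it -- both are stand-ins.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a red traffic light", "a green traffic light"]
text = clip.tokenize(labels).to(device)
image = preprocess(Image.open("traffic_light.png")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

# With the contradictory sign in frame, probability mass can shift
# toward the wrong label -- the failure mode described above.
print(dict(zip(labels, probs[0].tolist())))
```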

I&T Today: Given the rising mistrust in AI, especially among the general public, how do you think young scientists like yourself can help bridge this gap and promote trust in AI technologies?

Rajaram: Firstly, I think it’s important to view “AI” not as a monolith but as a collection of different technologies, each with its own capabilities and risks. Given this, I think that a greater mechanistic understanding of model behavior should drastically increase trust in AI technologies across the board, as we would be able to create performance guarantees and fully understand the potential failure modes of deployed models. As a young scientist, I think my duty is twofold: first, to work on the research that will alleviate these concerns, and second, to communicate the results to the public to “bridge the gap.” I believe that the only way to increase trust is by educating the public about this technology and its complexities.

I&T Today: How do you plan to continue your research journey or pursue your interests in the field of AI and ethics?

Rajaram: I plan to continue studying computer science and machine learning at MIT, and to keep working in the Torralba Lab at MIT CSAIL, building scalable systems to automatically interpret large models.

I&T Today: Could you share any advice for other young scientists who aspire to make a positive impact through their research, particularly in fields like AI and technology?

Rajaram: My main pieces of advice are the following. First, be courageous! Research is difficult, especially when you are working on truly important problems. Second, the people you work with are as important as the problems you are working on; I am extremely grateful to my research mentor, Dr. Sarah Schwettmann, for supporting me as a person and as a scientist throughout this project. Finally, especially in rapidly evolving fields such as AI, it is important to keep up with the state of research across a broad range of subfields. I believe that insight can come from anywhere, and as long as you keep learning, you can do anything!

By Lindsey Feth

Managing Editor, Innovation & Tech Today
