Bio
I’m currently a TechCongress fellow serving in the office of Senator Ron Wyden. I’ve gotten to write legislative text and letters on privacy (consumer & financial), federal IT, security, surveillance, and responsible AI.
Before that, I was a software engineer on the Privacy and Civil Liberties team at Palantir for a year and a half. I spent most of my time building products for data governance and privacy. Along the way I touched most parts of the stack, designing and building everything from APIs to data stores to front-end features to performance metrics and monitoring. Towards the end, I spent more time working with customers – breaking down their governance and security needs, helping them set up governance tools and access controls, and integrating their feedback and feature requests back into the product roadmap. I also co-ran a program called the PCL Vanguard, in which we coordinated a course on privacy and civil liberties for cohorts of employees from across the company and mentored them through a project to advance PCL in their core work area.
Before that, I earned my bachelor’s degree in Computer Science at Princeton, with certificates in Cognitive Science and Technology and Society. I worked as a mentor in the AI4All program, a research assistant in the Eviction Lab, and a grader for a machine learning course. I ran the Princeton Tiger, a humor magazine of ill repute, and co-ran INTERFACE, a discussion group on issues at the intersection of computing tech and society.
Goals
I want to work on governance for data and computing tech: basically, ways to make sure that new computing technologies and applications have the most positive possible societal impacts.
Approaches I’m exploring, or am interested in exploring:
- building tech that advances pro-social outcomes for computing (e.g. privacy-enhancing tech, AI measurement and monitoring);
- figuring out pro-social product policies and designs for new, high-consequence computing applications; and
- creating policy to require companies and governments to implement pro-social policies, designs, and technologies in their use of computing tech (e.g. privacy legislation, responsible AI requirements) and otherwise prevent bad attractor states.
So far, it seems like these approaches have good synergies. Building tech is a great way to understand what’s possible, what’s impossible, and what’s doable but way way harder than you think. That’s the foundation on which great tech policy is built. In turn, policy-level considerations – especially unsolved questions and hard tradeoffs – can set the agenda for building tech.
I’m always trying to learn more about:
- making AI and autonomous systems safer, more safely used, and better understood;
- privacy-enhancing technology and clever models for data access;
- tradeoffs in online speech and expression, and the hard problems of content moderation at scale;
- how surveillance shifts balances of power between people and institutions; and
- human cognition and decision-making, and the nature of intelligence.