Do not call up that which you cannot put down. --H.P. Lovecraft
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
--Eugene Gendlin
Limited in his nature,
infinite in his desires,
man is a fallen god
who remembers the heavens.
--Alphonse de Lamartine
I am a mathematician and a computer scientist. Just as electrical engineering applies the understanding of electricity and magnetism to build physical artifacts, a computer scientist applies the abstract formal reasoning of mathematics to build software objects. In this sense computer science is mathematical engineering. In other words, my art is the application of abstractions to understand and manipulate reality.
As of Summer 2022 I am a graduate student pursuing an M.S. in mathematics at the University of Minnesota, Twin Cities.
After graduation I intend to pursue a PhD in computer science.
My interests include theoretical computer science, particularly algorithmic information theory, machine learning, and artificial intelligence.
My view of the relationship between these things is not mainstream;
in particular, I will be briefly disappointed, while my body is being deconstructed for raw materials, if modern deep learning techniques turn out to be sufficient for A.G.I.
This situation is known as prosaic A.G.I.;
prosaic here means that it would be a letdown if so much power could be gained with so little understanding.
I am increasingly convinced (due to the success of e.g. ChatGPT) that this is unfortunately what is going to happen,
and for that reason I support a pause on large training runs.
Most A.I. research involves poorly justified engineering modifications that marginally improve performance on meaningless benchmarks.
Slightly better is theoretical work that seeks to understand the generalization properties of ANNs.
However what I would really like to see is a more general frame that explains how one might have arrived at ANNs in some fashion other than trial and error (or copying evolution's homework).
My long-term goal is to work on A.I. alignment. I will first study A.I. capabilities, since it is unwise to try to control something you don't understand. Indeed, it would be easiest to solve alignment after writing down correct pseudo-code for an A.G.I., or failing that, a complete specification of its behavior. The closest we have to the latter is probably AIXI, an (uncomputable) optimal RL agent discovered by Marcus Hutter. Unfortunately it is likely preferable to solve alignment FIRST.
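For the curious, a sketch of what such a specification looks like: following Hutter's standard formulation, the AIXI agent chooses each action by maximizing expected future reward under a Solomonoff-style mixture over all programs consistent with its history (symbols as in Hutter's presentation: actions $a$, observations $o$, rewards $r$, a universal monotone Turing machine $U$, program length $\ell(q)$, and horizon $m$):

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\big[ r_t + \cdots + r_m \big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum over programs $q$ is what makes AIXI uncomputable: it weights every environment consistent with the agent's history by the algorithmic-probability prior $2^{-\ell(q)}$.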
In the past I've worked in robotics. In particular, I was a machine learning intern at Dexai Robotics and my work has been published at ICRA. My team also won the 2020 ION Autonomous Snowplow Competition.
See my CV
My hobbies include bouldering, reading, and, in the past, mixed martial arts.