hyper-exponential.com
How do humans learn about values?
TLDR: Tan Zhi Xuan builds cooperative, theory-of-mind agents for safer AI, favoring human-like value learning and interpretable, model-based systems…
Jun 19
•
Mykhaylo Filipenko
Thoughts on ASI - part 2: Giving up Control to AGI
TLDR: I believe that we have given up a lot of control to technology already and that this resulted in a net benefit for humanity. Giving up more…
Jun 2
•
Mykhaylo Filipenko
May 2025
Interview with Agustin Covarrubias
TLDR: Agus runs Kairos. Kairos helps university AI safety groups run efficiently. Kairos also runs SPAR - one of the best-known AI safety…
May 9
•
Mykhaylo Filipenko
April 2025
Thoughts on ASI - part 1: A spiritualist's view
TLDR: It's close to impossible to predict what awaits us "post singularity". Nevertheless, I try to lay out arguments that can give us hope to be…
Apr 24
•
Mykhaylo Filipenko
Why are LLMs so Good at Generating Code?
An Interview with Georg Zoller
Apr 16
•
Mykhaylo Filipenko
There is hope for humanity ..
.. in one screenshot
Apr 4
•
Mykhaylo Filipenko
Some thoughts on alignment ..
.. actually human alignment
Apr 2
•
Mykhaylo Filipenko
March 2025
Safe AI with Singular Learning Theory ..
.. an interview with Jesse Hoogland from Timaeus
Mar 20
•
Mykhaylo Filipenko
February 2025
Can we have safer AI through certification?
An Interview with Jan Zawadzski from CertifAI
Feb 27
•
Mykhaylo Filipenko
The right people at the right place ..
.. make the biggest difference
Feb 4
•
Mykhaylo Filipenko
January 2025
About bets and choices ..
.. and how they affect our lives
Jan 28
•
Mykhaylo Filipenko
Dr. Jobst Heitzig: AGI with non-optimizers, and how to start an AI safety lab in Germany
Jan 8
•
Mykhaylo Filipenko