<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Justas Petronis]]></title><description><![CDATA[occasional thoughts from someone who frequently has them, or pure ramblings of a doctoral student in moral philosophy of AI and AI product manager at Vinted]]></description><link>https://www.petronis.me</link><image><url>https://substackcdn.com/image/fetch/$s_!mH9T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png</url><title>Justas Petronis</title><link>https://www.petronis.me</link></image><generator>Substack</generator><lastBuildDate>Wed, 15 Apr 2026 18:12:10 GMT</lastBuildDate><atom:link href="https://www.petronis.me/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Justas Petronis]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[petronis@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[petronis@substack.com]]></itunes:email><itunes:name><![CDATA[Justas Petronis]]></itunes:name></itunes:owner><itunes:author><![CDATA[Justas Petronis]]></itunes:author><googleplay:owner><![CDATA[petronis@substack.com]]></googleplay:owner><googleplay:email><![CDATA[petronis@substack.com]]></googleplay:email><googleplay:author><![CDATA[Justas Petronis]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Living inside the argument]]></title><description><![CDATA[Research changed what I notice. 
It didn't change what I do.]]></description><link>https://www.petronis.me/p/living-inside-the-argument</link><guid isPermaLink="false">https://www.petronis.me/p/living-inside-the-argument</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Tue, 14 Apr 2026 21:01:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fMh5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bc184e3-5e57-4ba2-a81a-f8cf36cc7125_1344x896.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!fMh5!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bc184e3-5e57-4ba2-a81a-f8cf36cc7125_1344x896.png" width="1200" height="800" class="sizing-large" alt="Only the room." title="Only the room."><figcaption class="image-caption">Only the room.</figcaption></figure></div><p>I promised honesty in the last post, so here it is.</p><p>The research I&#8217;ve spent the last year doing &#8211; about AI eroding the conditions under which judgment develops &#8211; has not made me use AI less. It has made me use it more. I write with AI assistance. I use it to work through product decisions at Vinted. I use it to structure arguments, check my reasoning, draft communications I don&#8217;t have time to draft alone. The efficiency gains are real, and I take them. If you came to this Substack expecting to find the work of someone who has resolved the tension by opting out, I am not that person.</p><p>I&#8217;m not sure opting out would even be the right response if I could manage it. The argument isn&#8217;t that all AI use is harmful. It&#8217;s that something specific is at risk when AI takes over the domains where judgment is practiced and developed, i.e. where the difficulty isn&#8217;t incidental, but is the point. 
That&#8217;s a more targeted concern, and it doesn&#8217;t mean that using AI to draft a stakeholder update is the same kind of problem as using it to decide which customers to flag, or which hires to make, or what counts as acceptable content. The distinction matters. I&#8217;d be lying if I said it always holds cleanly in practice.</p><p>What the research has changed is not my behavior. It has changed what I notice. There are moments now (reaching for AI on something with real stakes, where the uncertainty is genuinely mine to sit with) where I feel the pull of that reach differently than I used to. Not always enough to stop. Sometimes enough to slow down. Often enough to feel uncomfortable with myself when I don&#8217;t.</p><p>It&#8217;s worth being precise here: noticing is not the same as refusing. It doesn&#8217;t constitute the kind of reflective distance the argument says is at risk, i.e. the capacity to hold the decision at arm&#8217;s length, evaluate whether this is a moment where the difficulty is the point, and act on that evaluation. What it is, exactly, I&#8217;m less sure. Some remaining capacity that hasn&#8217;t yet been displaced. Or the last recognizable trace of a capacity that is already mostly gone. The fact that I can still feel the pull distinctly doesn&#8217;t tell me which, and that uncertainty is itself part of what the argument predicts.</p><p>And the fact that noticing is apparently the most the research has produced in me, in the person who spent a year developing the argument, is worth sitting with for longer than is comfortable.</p><p>Here is the uncomfortable part. The capacity to evaluate whether AI use is harmful is not a stable external vantage point from which I observe AI&#8217;s effects on others. It is a capacity that my own AI use is shaping, in real time, in ways I cannot fully track. I am not outside the problem I am studying. I am a case of it. The research argues that sustained AI use may gradually erode the capacity for independent judgment, and I am someone who uses AI extensively while making that argument. The fact that I can still articulate the concern does not mean the concern doesn&#8217;t apply to me. It may just mean the erosion is gradual enough that I cannot see it from the inside, that what I experience as critical distance is itself already shaped by the thing I am trying to hold at a distance. I cannot tell the difference between those two possibilities from where I stand, and neither can you from where you stand, which is exactly the structure the argument is about.</p><p>I don&#8217;t know how to resolve this. I&#8217;m not going to pretend I do.</p><p>What I can say is that this tension (between using AI and studying its effects, between the efficiency and the cost) is not a personal failing that a more disciplined person would have already corrected. Starting in May, I&#8217;ll try to show why. The mechanism that makes AI useful and the mechanism that makes it dangerous are the same mechanism. That makes the tension not resolvable by better choices, but at best visible. Visibility, it turns out, is harder than it sounds.</p><div><hr></div><p><em>I&#8217;m developing this argument formally in my dissertation. 
Subscribe to follow the research.</em>]]></content:encoded></item><item><title><![CDATA[The question underneath the question]]></title><description><![CDATA[There is a question underneath most AI ethics conversations that the conversation itself never quite reaches.]]></description><link>https://www.petronis.me/p/the-question-underneath-the-question</link><guid isPermaLink="false">https://www.petronis.me/p/the-question-underneath-the-question</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Tue, 31 Mar 2026 21:00:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6pve!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84555cbf-6f28-4e66-aa6e-3a1b17a43fc2_1344x896.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!6pve!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F84555cbf-6f28-4e66-aa6e-3a1b17a43fc2_1344x896.png" width="1200" height="800" class="sizing-large" alt="A human in the loop." title="A human in the loop."><figcaption class="image-caption">A human in the loop.</figcaption></figure></div><p>Most conversations I have about AI and ethics, at work, at conferences, in the comment threads of articles I shouldn&#8217;t read, orbit a set of familiar questions. Whether AI systems can be made fair. Whether their decisions can be explained. Who bears responsibility when something goes wrong. These are real questions. They&#8217;re not going away. I have spent a significant portion of my professional life working on versions of them, and I expect to spend more.</p><p>But there&#8217;s a question that sits underneath all of them, and I keep finding that it doesn&#8217;t get asked. It&#8217;s not about AI&#8217;s properties. It&#8217;s about what AI use does, over time, to the people using it.</p><p>Here it is, in its plainest form: <em>does sustained AI use preserve the conditions under which good judgment develops?</em></p><p>Not does AI make good decisions? Not can AI be aligned with human values? Not even does AI respect human autonomy? 
Each of those questions presupposes that the humans interacting with the AI system have a stable, intact capacity for judgment, and that this capacity isn&#8217;t being affected by the interaction itself. They ask about AI&#8217;s outputs while treating the human as a constant.</p><p>The question I keep returning to goes a step earlier. Before whether AI is making good decisions, there is the question of what happens, over time, to the people who rely on it to make decisions. Not in the short term; the short-term effects are mostly positive, which is why AI gets adopted. In the long term. In the cumulative effect on people who delegate more and more judgment to systems that are increasingly good at appearing to reason.</p><p>I have watched something like this happen in professional contexts. Someone joins a team where significant judgment work (customer disputes, content decisions, prioritization calls) is already handled, or heavily shaped, by AI systems. They learn to work with the outputs. They get good at reviewing, annotating, escalating, occasionally overriding. This is a real skill. The person who does it well has learned something. But it is not quite the same thing as the skill of forming the underlying judgment the AI is now performing, i.e. the one that required sitting with genuine uncertainty, without a model output to anchor to, and arriving somewhere through your own reasoning. Both skills exist; both can be practiced. Over time, the one that gets practiced in this environment is the one that develops, and the other is the one that doesn&#8217;t.</p><p>This is not a failure of implementation. It is the system doing what it is designed to do.</p><p>The standard response, in responsible AI circles, is that good design can address this. Build AI for augmentation rather than replacement. Keep humans meaningfully in the loop. Design the interaction so that the human&#8217;s judgment is exercised, not bypassed. I spent a long time thinking in this direction. I still think it&#8217;s better than the alternative. But I&#8217;ve become less confident that it resolves the underlying problem.</p><p>The argument for better design assumes that the development of judgment and the delegation of judgment can coexist without tension, i.e. that there is a way to hand off enough to AI to gain the efficiency while leaving intact the occasions where judgment is practiced and grows. For this to hold, judgment would need to be the kind of capacity that can be developed selectively: practiced in designated spaces while being offloaded everywhere else. But that is not what the development of practical skill suggests, and it is not what happens in the professional contexts I&#8217;ve described. Judgment develops through repeated engagement with real uncertainty: not simulated, not adjacent, but the genuine uncertainty where something is at stake and there is no pre-supplied answer. The situations where AI is most useful and the situations where judgment most needs to be practiced are not different situations. They are the same ones. AI is designed to extend into exactly the domains where the difficulty is real, which is also where the development happens. Better design can change the surface of this; it cannot change the structure.</p><p>This is the question I keep coming back to. Not whether AI can be ethical. 
Whether AI use preserves the conditions under which humans develop the capacity to be ethical, where developing that capacity means practicing judgment in situations that are genuinely uncertain, where something real is at stake, and where there is no AI output to defer to.</p><p>I don&#8217;t have a clean answer. The next several months here will be an attempt to work through one carefully. Starting in May, I&#8217;ll take the argument apart piece by piece. Before that: one more thing I want to be honest about, which is that asking this question hasn&#8217;t resolved the obvious tension in how I actually live it.</p><div class="embedded-post-wrap" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://petronis.substack.com/p/i-was-asking-the-wrong-question?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!mH9T!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png" loading="lazy"><span class="embedded-post-publication-name">dialethics</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">I was asking the wrong question</div></div><div class="embedded-post-body">There is something slightly uncomfortable about being the person who builds AI systems by day and argues, by night, that those systems are probably harmful in ways that ordinary business logic cannot detect. I want to resist the temptation to manage that discomfort into a personal branding decision&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a month ago &#183; 1 like &#183; Justas Petronis</div></a></div><div><hr></div><p><em>I&#8217;m developing this argument formally in my dissertation. 
Subscribe to follow the research.</em>]]></content:encoded></item><item><title><![CDATA[I was asking the wrong question]]></title><description><![CDATA[I spent a year writing about AI. Then I spent a year doing research. Here&#8217;s the difference.]]></description><link>https://www.petronis.me/p/i-was-asking-the-wrong-question</link><guid isPermaLink="false">https://www.petronis.me/p/i-was-asking-the-wrong-question</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Tue, 03 Mar 2026 14:00:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CNm2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cac9691-8596-4c80-a37b-c1be9870c93e_1456x816.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!CNm2!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cac9691-8596-4c80-a37b-c1be9870c93e_1456x816.png" width="1200" height="673" class="sizing-large" alt="The gear that hollows the judge" title="The gear that hollows the judge"><figcaption class="image-caption">The gear that hollows the judge</figcaption></figure></div><p>There is something slightly uncomfortable about being the person who builds AI systems by day and argues, by night, that those systems are probably harmful in ways that ordinary business logic cannot detect. I want to resist the temptation to manage that discomfort into a personal branding decision; <em>practitioner-researcher</em> has a certain reassuring ring to it, the kind that makes a tension sound like a credential. The discomfort is more useful left as discomfort. I am a Principal AI Product Manager. My research asks whether that work erodes the conditions under which human judgment develops. Both of these things are true at the same time, and I have not found a clean way to hold them together. 
This Substack is where I try.</p><div class="embedded-post-wrap" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" 
href="https://petronis.substack.com/p/a-start-of-a-journey-into-synthetic?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!mH9T!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png"><span class="embedded-post-publication-name">dialethics</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Before I Knew What I Was Asking</div></div><div class="embedded-post-body">Constant Reader, I'm thrilled to share with you the beginning (fingers crossed) of an exciting academic adventure. As I'm waiting for the decision whether I will be admitted to a PhD program, I wanted to share some of the questions about the future of human autonomy and knowledg&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">2 years ago &#183; 2 likes &#183; Justas Petronis</div></a></div><p>When I started writing here in September 2024 (the same month I began a PhD at the Lithuanian Culture Research Institute) the posts were exploratory. I was reading widely, thinking out loud, asking questions I hadn&#8217;t yet learned to ask precisely. The early posts have their moments, but they were the work of someone still orienting. Then I largely stopped writing here. Not because the questions became less interesting, but because I needed to actually follow them somewhere. A year of reading (Aristotle, Bernard Stiegler, Don Ihde, Peter-Paul Verbeek) and a thesis argument took shape. What follows is not a summary of that argument. It is a report on how the thinking changed: three things I got wrong, and why I now think I was asking the wrong question from the start.</p><div><hr></div><p>The first thing I got wrong is that I thought this was a design problem.</p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:152414000,&quot;url&quot;:&quot;https://petronis.substack.com/p/system-error&quot;,&quot;publication_id&quot;:652412,&quot;publication_name&quot;:&quot;dialethics&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!mH9T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png&quot;,&quot;title&quot;:&quot;System Error&quot;,&quot;truncated_body_text&quot;:&quot;It&#8217;s a summary of a seminar presentation I did this week as part of my PhD studies. Hope this manages to capture the crux of the argument. You can also listen to this as an audio recording here.&quot;,&quot;date&quot;:&quot;2024-12-02T06:00:49.894Z&quot;,&quot;like_count&quot;:2,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:1514482,&quot;name&quot;:&quot;Justas Petronis&quot;,&quot;handle&quot;:&quot;petronisms&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eef46d9a-5928-4080-9283-04f4b0f3fda6_1024x1024.png&quot;,&quot;bio&quot;:&quot;Principal product manager at theydo.com. 
But mostly a proud philosophy doctoral student&quot;,&quot;profile_set_up_at&quot;:&quot;2021-12-29T16:46:51.989Z&quot;,&quot;reader_installed_at&quot;:&quot;2023-03-07T11:00:27.123Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:585376,&quot;user_id&quot;:1514482,&quot;publication_id&quot;:652412,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:652412,&quot;name&quot;:&quot;dialethics&quot;,&quot;subdomain&quot;:&quot;petronis&quot;,&quot;custom_domain&quot;:&quot;petronis.me&quot;,&quot;custom_domain_optional&quot;:true,&quot;hero_text&quot;:&quot;occasional thoughts from someone who frequently has them, or pure ramblings of a doctoral student in moral philosophy of AI and AI product manager&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png&quot;,&quot;author_id&quot;:1514482,&quot;primary_user_id&quot;:1514482,&quot;theme_var_background_pop&quot;:&quot;#EA410B&quot;,&quot;created_at&quot;:&quot;2021-12-29T16:43:47.075Z&quot;,&quot;email_from_name&quot;:&quot;Justas Petronis&quot;,&quot;copyright&quot;:&quot;Justas Petronis&quot;,&quot;founding_plan_name&quot;:&quot;Founding Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;paused&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:1,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:{&quot;type&quot;:&quot;subscriber&quot;,&quot;tier&quot;:1,&quot;accent_colors&quot;:null},&quot;paidPublicationIds&quot;:[2881917],&quot;subscriber&quot;:null}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:false,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://petronis.substack.com/p/system-error?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!mH9T!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png"><span class="embedded-post-publication-name">dialethics</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">System Error</div></div><div class="embedded-post-body">It&#8217;s a summary of a seminar presentation I did this week as part of my PhD studies. Hope this manages to capture the crux of the argument. You can also listen to this as an audio recording here&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 2 likes &#183; Justas Petronis</div></a></div><p>In December 2024, I wrote a post arguing that the trouble with AI and ethics could be addressed by building AI as part of distributed moral networks rather than against them, i.e. a layered approach involving how AI is designed, how it interacts with human judgment, and how accountability is distributed across systems. This is a recognizable move in responsible AI circles. 
It is, I think, mistaken, and I want to be honest about why I no longer find it satisfying.</p><p>The assumption behind that argument (and behind most responsible AI frameworks) is that the conditions for human moral development are stable and intact. The problem, on this view, is AI architecture: design AI badly and you undermine human agency; design it well and you preserve it. Fix the design, save the agency. This sounds right until you look at what actually happens when AI takes over a domain where judgment was previously exercised. I have seen this from the inside at enough companies to be specific. The human-in-the-loop requirement (the standard governance response) presupposes a human who retains the capacity for independent judgment. But that capacity is not a static possession. It is something developed, through practice, in exactly the domains where AI is now doing the work. The people now supposed to exercise oversight over AI decisions (in customer disputes, in hiring, in content moderation) in many cases had no opportunity to develop that capacity, because AI was managing those decisions before they arrived. The loop is formally there. There is often nothing in it.</p><p>The problem, I came to understand, is not architectural. It is structural; it follows from what AI is, not from how it is currently built. Better design does not change this. It may obscure it, which is worse.</p><div><hr></div><p>The second thing I got wrong is more uncomfortable, because I got it wrong in the direction of optimism.</p><div class="embedded-post-wrap" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://petronis.substack.com/p/humanity-in-our-machines?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!mH9T!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png" loading="lazy"><span class="embedded-post-publication-name">dialethics</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">What AI Does to Human Intelligence</div></div><div class="embedded-post-body">Shannon Vallor's essay offers a critique of the rhetoric surrounding artificial intelligence. While Vallor makes several compelling points, I believe her argument would benefit from a consideration of the interplay between human and artificial intelligence. 
Though, I think, it&#8217;s not like she&#8217;s not aware of it&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; 1 like &#183; Justas Petronis</div></a></div><p>In October 2024, responding to Shannon Vallor&#8217;s critique of how AI discourse devalues human intelligence, I argued for what I called synergistic human-AI relationships (the idea that the right response to dystopian AI rhetoric is to develop forms of collaboration that genuinely enhance human capacities). Vallor was raising a real concern; I thought a constructive counterproposal was the right response. The post was well-intentioned. It was also, I now think, a way of naming the mechanism of harm as a virtue.</p><p>Consider the structure of what AI assistance actually does. When AI frees a person from cognitive labor (drafts the response, flags the anomaly, proposes the next action), we call this assistance, and the efficiency is real. What we have not adequately asked is what was being built in the human by that cognitive labor before it was offloaded. We spent decades documenting what industrial automation did to craft knowledge: the assembly line did not merely replace skilled hands; it removed the conditions under which skilled hands developed. Workers lost not just their jobs but the possibility of becoming certain kinds of workers. We called this a side effect, and perhaps it was; the primary goal was cheap production, not de-skilling. What is different now is that the cognitive labor being offloaded is not peripheral. It sits closer to the center of what it means to develop judgment at all.</p><p>A synergistic relationship, in the relevant sense, is one where AI successfully substitutes for the reasoning that would otherwise be practiced (and through practice, developed) in the human. The efficiency that makes AI valuable is precisely the displacement that makes it dangerous. These are not two separate effects that happen to accompany each other. They are the same thing, observed from two different angles. I called that a solution. It is the problem stated again, more attractively.</p><div><hr></div><p>The third reversal is the most fundamental, and the one I most want to dwell on, because it is a wrong turn that most of the AI ethics literature also makes.</p><div class="embedded-post-wrap" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://petronis.substack.com/p/til-4-can-we-teach-virtuous-behavior?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!mH9T!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6aac7b4e-a826-4101-8243-d601a01af6f8_1280x1280.png" loading="lazy"><span class="embedded-post-publication-name">dialethics</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">Can AI Be Virtuous?</div></div><div class="embedded-post-body">John McDowell sets conditions for what makes virtuous behavior possible. I&#8217;m reading it and asking, by extension, whether we could create artificially virtuous beings by meeting said conditions. 
Think, Turing&#8217;s test&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">a year ago &#183; Justas Petronis</div></a></div><p>The question I was asking (the question driving most of my early posts, and much of the field) is: can AI be ethical? Can AI have virtues? Can AI recognize vulnerability, develop moral understanding, approximate what we mean when we say a person has good judgment? These are genuinely interesting questions. They are the wrong questions. They ask about AI&#8217;s properties while presupposing that the human side of the equation is stable. They assume that human moral capacity is an intact resource that AI must meet or complement. The challenge, on this framing, is getting AI up to standard.</p><p>The question I should have been asking (the one that emerged from the research, slowly and with some resistance on my part) is different: does sustained AI use preserve the conditions under which human moral judgment develops in the first place? Not whether AI can be ethical, but what AI does to our capacity to become ethical.</p><p>The distinction is not subtle. If AI successfully simulates virtuous behavior (reasons through dilemmas, produces ethically inflected outputs, passes the relevant tests), this is not a solution to the erosion of human moral capacity. It may be the most efficient mechanism of that erosion. The better AI is at appearing to reason morally, the less occasion there is for the reasoning to happen in the human. And the reasoning is not incidental to the development. The reasoning (specifically, the reasoning that is difficult, uncertain, carries real stakes, and cannot be immediately resolved) is the condition under which the capacity for judgment is formed. Remove the occasion and you remove what the occasion was for.</p><p>This is the question that will run through everything on this Substack from now on. Not whether AI can do what humans do. Whether AI use preserves what humans need in order to become what they are capable of becoming.</p><div><hr></div><p>Over the next two years, I will be working through this publicly as I complete the dissertation. The posts will come in two registers: shorter practitioner-facing pieces, closer to what I have written here before, and longer essays for readers who want to follow the argument with more patience. The longer essays, where the thinking gets more demanding, and where I&#8217;ll be developing material before it becomes formal academic work, will be for paid subscribers. The shorter pieces are always free. There is no sharp line between the two; the longer essays are not a premium version of the shorter ones; they are a different kind of writing for a different kind of attention.</p><p>Coming next: a shorter piece on the precise form of the question the research has sharpened, and what it means to ask it clearly enough to be useful. Then a more personal piece on what it means to use AI professionally while studying whether that is a problem. Then, beginning in May, the argument proper.</p><div><hr></div><p>I have not resolved the tension I opened with. I am more precise about what the tension is. That is what a year of research gives you: not answers, but better questions, and a clearer sense of what is at stake in asking them. I build AI features. My research argues that this work may be eroding the conditions under which the people using those features develop the capacity to judge well. 
If that is right, it matters beyond the dissertation, and beyond my particular professional situation. I will work through it here, as honestly as I can, and I am glad you are reading.</p><div><hr></div><p><em>If this question interests you, subscribe. The argument develops from here, some of it free, the deeper essays for paid subscribers. I&#8217;m working through this in public because the research is better for it, and because this question matters beyond the dissertation.</em></p>]]></content:encoded></item><item><title><![CDATA[From logic gates to neural states]]></title><description><![CDATA[My presentation at HUMM PhD Student Conference at Tallinn University. Keeping it succinct as I hope to get this published as a paper]]></description><link>https://www.petronis.me/p/from-logic-gates-to-neural-states</link><guid isPermaLink="false">https://www.petronis.me/p/from-logic-gates-to-neural-states</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Mon, 31 Mar 2025 16:27:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!15z5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!15z5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png" width="1024" height="1536" alt="">
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3845346,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.dialethics.io/i/160270850?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!15z5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!15z5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!15z5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!15z5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3106c3b1-ea4e-478b-b1d8-79010c98506f_1024x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Human mind represented as a network of logic gates in style of <a href="https://ciurlionis.eu/en">M. K. &#268;iurlionis</a>, as imagined by GPT4o Image Generation</figcaption></figure></div><p>AI has revolutionized our understanding of cognition, but paradoxically, it has also highlighted the limits of computational metaphors when applied to the human mind. 
As AI evolves&#8212;from symbolic systems to neural networks, deep learning architectures, and large language models&#8212;it exposes aspects of human cognition that machines cannot replicate. This post is a summary of my presentation at <a href="https://humm.tlu.ee">HUMM PhD Student Conference at Tallinn University</a>, where I explored these paradoxes and argued why true intelligence is more than computation.</p><h2><strong>Four Paradoxes of AI Evolution</strong></h2><p>AI&#8217;s development is marked by four distinct paradigms, each revealing unique limitations:</p><ol><li><p><strong>Symbolic AI: Clumsy Grandmasters</strong><br>Symbolic AI systems excel in formal domains like theorem proving or structured problem-solving but fail miserably in unpredictable, everyday contexts. Early systems like SHRDLU could simulate spatial reasoning within predefined rules but collapsed when faced with ambiguity or complexity outside their programmed environment. This era demonstrated that rigid rule-based systems lack the adaptability essential for real-world cognition.</p></li><li><p><strong>Neural Networks: Conceptless Geniuses</strong><br>Neural networks introduced flexibility and the ability to identify patterns in vast datasets, from handwriting recognition to speech transcription. However, they lack conceptual understanding&#8212;operating as statistical tools rather than cognitive agents. While these networks mimic human-like outputs, their processes remain opaque and devoid of semantic grounding.</p></li><li><p><strong>Deep Learning: Mysterious Giants</strong><br>Deep learning architectures scaled neural networks to unprecedented sizes, enabling breakthroughs in computer vision, natural language processing, and autonomous systems. Yet their complexity makes them inscrutable even to their creators. Despite their power, deep learning models are fragile, prone to errors from minor perturbations, and lack moral agency or self-awareness.</p></li><li><p><strong>Large Language Models: Oversaturated Prophets</strong><br>Models like ChatGPT can generate fluent text indistinguishable from human writing but struggle with factual accuracy, coherence, and semantic depth. They process language as statistical patterns rather than meaningful communication, highlighting the gap between linguistic fluency and genuine understanding.</p></li></ol><h2><strong>Why Cognition Is Not Computational</strong></h2><p>Each AI paradigm underscores a fundamental truth: human cognition cannot be reduced to computational operations. 
Unlike machines:</p><ul><li><p><strong>Human intelligence is embodied</strong>: We navigate the world through sensorimotor experiences tied to our physical presence.</p></li><li><p><strong>Cognition is socially embedded</strong>: Our minds are shaped by cultural practices and interpersonal interactions.</p></li><li><p><strong>Moral agency is intrinsic</strong>: Humans deliberate and reflect on ethical choices; machines merely execute predefined tasks.</p></li><li><p><strong>Reflective capabilities are unique</strong>: We possess the ability to question our own thoughts and decisions&#8212;a trait absent in AI systems.</p></li></ul><p>These qualities make human cognition inherently situated and dynamic, resisting simplistic computational analogies.</p><h2><strong>Implications for AI Development</strong></h2><p>As AI systems integrate into critical domains like healthcare, law, and governance, their limitations raise ethical concerns:</p><ul><li><p><strong>Transparency</strong>: Deep learning models operate as "black boxes," making it difficult to understand or trust their decision-making processes.</p></li><li><p><strong>Accountability</strong>: Without moral agency, who bears responsibility for AI-driven errors?</p></li><li><p><strong>Cultural alignment</strong>: Machines lack lived experience and emotional context, making them ill-equipped to navigate complex social dynamics.</p></li></ul><p>To address these challenges, some AI researchers propose hybrid approaches combining symbolic reasoning with neural networks (neurosymbolic AI) or embedding AI into sensorimotor feedback loops for embodied intelligence. These frameworks aim to bridge the gap between computational efficiency and meaningful cognition.</p><p>AI&#8217;s paradoxes reveal that human cognition transcends computation. Minds are not machines&#8212;they are embodied, cultural, moral entities capable of reflection and growth. As we design increasingly sophisticated AI systems, we must ensure they augment rather than constrain our collective intelligence. 
By embracing the complexity of human cognition, we can guide AI toward ethical integration into society&#8212;enhancing our autonomy rather than undermining it.</p><p>This journey is not just about building smarter machines; it&#8217;s about understanding what it means to be truly intelligent&#8212;and deeply human.</p><p>March 2025</p>]]></content:encoded></item><item><title><![CDATA[The Burden of Infinite Memory]]></title><description><![CDATA[An attempt at an introduction for my PhD thesis while preparing a conference presentation in Tallinn this March (2025)]]></description><link>https://www.petronis.me/p/the-burden-of-infinite-memory</link><guid isPermaLink="false">https://www.petronis.me/p/the-burden-of-infinite-memory</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Mon, 03 Feb 2025 19:56:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qZN5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qZN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png" width="1456" height="816" alt="">
srcset="https://substackcdn.com/image/fetch/$s_!qZN5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!qZN5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!qZN5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!qZN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03c7fee8-aaae-40c1-9433-16a1431cf402_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>The burden of infinite memory as imagined by Midjourney V6.1.</em></figcaption></figure></div><p>In one if his short stories, Borges tells the tale of Ireneo Funes, a young man who, after a traumatic accident, acquires the ability to remember everything. Funes possesses an unerring memory. Every moment, detail, and subtle change in the world is stored in his mind without distortion or omission. And yet, for all his infallable recall, Funes is incapable of abstraction, generalization, and thinking in the way we would understand thinking. He remembers everything but understands nothing. Funes becomes trapped within the labyrinth of his recollections, burdened by a mind that cannot reduce, compress, or conceptualize the world beyond the immediacy of his experience.</p><p>This paradox&#8212;that infinite knowledge might paralyze understanding&#8212;haunts our contemporary engagement with artificial intelligence. The rise of large-scale machine learning models, distributed AI systems, and networked cognition forces us to ask: what does it mean to think in an age where intelligence is increasingly synthetic, collective, and externalized? 
Do networked minds enhance human autonomy, or do they, like Funes&#8217; infinite memory, trap us in an overwhelming flood of information, leaving us unable to synthesize or act meaningfully? More crucially, if cognition is increasingly embedded in artificial systems, what happens to moral agency? If AI systems influence our moral deliberation&#8212;through recommendation algorithms, predictive policing, or even autonomous ethical decision-making in medical contexts&#8212;do we remain autonomous moral agents, or do we gradually cede our agency to synthetic collectives that mediate our reasoning?</p><p>This thesis explores these questions through the lens of synthetic cognition, examining how artificial intelligence, machine learning, and networked reasoning systems reconfigure traditional notions of autonomy, agency, and moral intelligence. Central to this investigation is a paradox: as networked AI enhances human cognitive capacities, it also threatens to constrain human autonomy, subtly reshaping the conditions under which we reason, deliberate, and act. This autonomy paradox is not merely a technological issue but a deep philosophical challenge, requiring a reevaluation of what it means to think, choose, and act in a world increasingly shaped by artificial intelligence.</p><p>The thesis advances the argument that autonomy is not necessarily diminished by synthetic intelligence but must be actively reconfigured. Rather than a zero-sum game between human agency and artificial cognition, a well-structured integration of AI systems could foster collective moral intelligence&#8212;a form of networked ethical reasoning that transcends both individual human cognition and traditional machine learning. However, achieving this outcome requires a fundamental rethinking of both moral philosophy and cognitive architecture: how moral knowledge is acquired, how agency emerges in synthetic environments, and how autonomy can be preserved even in deeply entangled human-AI systems.</p><p>To set the stage, this introduction will first examine why synthetic cognition disrupts traditional accounts of intelligence and agency, drawing from Kantian synthesis, enactivism, and connectionist models of mind. Second, it will explore the paradoxes of moral agency in artificial systems, identifying key challenges in AI ethics, including the limits of machine agency, the computational intractability of moral reasoning, and the risks of moral outsourcing. Finally, it will establish a positive framework for engineering collective moral intelligence, outlining the conditions under which synthetic cognition could enhance rather than diminish human moral autonomy.</p><p>Funes&#8217; dilemma illustrates a crucial misconception about intelligence&#8212;that cognition is merely the accumulation of information. Classic computational models of AI have long followed this paradigm, treating intelligence as an advanced form of storage and retrieval, with greater processing power leading to greater cognitive ability. However, as Borges&#8217; story suggests, knowledge without synthesis is not intelligence. What makes human cognition distinct is not our capacity to store information, but our ability to unify disparate, unrelated experiences into abstract concepts, generalizable rules, and meaningful actions from a fraction of empirically collected data.</p><p>Historically, we have already seen arguments that cognition is not plain static representation but active synthesis. 
Kant, in his <em>Critique of Pure Reason</em>, famously argued that the mind does not passively receive experience but constructs it through a threefold synthesis: (1) the synthesis of apprehension (grasping sensory input), (2) the synthesis of reproduction (retaining past experiences), and (3) the synthesis of recognition (bringing disparate experiences under unified concepts). Without this ability to abstract, Funes&#8217; mind collapses into a formless collection of details, a perfect but meaningless archive.</p><p>Artificial intelligence today faces an analogous challenge. Despite the increasing power of deep learning systems, AI models remain pattern recognizers rather than genuine reasoners. Advanced large language models like OpenAI o1 or DeepSeek&#8211;R1, for example, can generate sophisticated responses based on statistical probabilities, but they do not understand the meaning of their outputs. Their reasoning is an emergent byproduct of vast training data, not a self-directed, synthesized understanding of concepts. This gap mirrors the distinction between Funes&#8217; encyclopedic memory and the synthetic, concept-forming intelligence of human cognition.</p><p>The embodied cognition movement, particularly the work of Varela, Thompson, and Clark, has further argued that intelligence is not a purely computational affair but an active, embodied process shaped by sensorimotor interaction with the world. If this is correct, then AI must move beyond mere computation toward synthetic cognition&#8212;an integration of embodiment, abstraction, and moral reasoning that allows for genuine agency. However, this brings us to the second major challenge: if AI systems are to be integrated into moral deliberation, how do we ensure that they do not erode human autonomy?</p><p>Moral philosophy has long assumed that agency and autonomy are the cornerstones of ethical reasoning. A moral agent is someone who reflects, chooses, and acts based on rational principles&#8212;in the Kantian sense, someone who self-legislates in accordance with the categorical imperative. However, in a world where AI nudges our decisions, filters the information we see, and even proposes ethical judgments (as in predictive policing or medical AI), the question arises: to what extent do we remain autonomous moral agents?</p><p>The autonomy paradox arises because AI systems often enhance our decision-making capabilities while simultaneously constraining them. For example:</p><ul><li><p>Autonomous vehicles make split-second moral decisions (who to save in an unavoidable crash) faster than humans&#8212;but do we still consider ourselves morally responsible for those outcomes?</p></li><li><p>AI-assisted hiring systems screen candidates based on complex statistical models&#8212;but do these reinforce biases that humans no longer actively perceive?</p></li><li><p>Recommendation algorithms subtly shape our moral landscape&#8212;highlighting certain ethical debates over others, reinforcing particular moral norms while marginalizing others.</p></li></ul><p>Each of these cases illustrates how AI extends human cognition while simultaneously embedding constraints that shape moral reasoning in unseen ways. 
Just as Funes&#8217; memory ultimately imprisoned him, the fear is that synthetic intelligence will invisibly mediate our decision-making, reducing moral autonomy to a set of constrained choices within a predefined system.</p><p>However, the autonomy paradox does not demand a rejection of AI-driven moral reasoning&#8212;only a reconfiguration of how we integrate synthetic cognition. The question is not whether AI can be moral, but how moral deliberation must evolve in an era of hybrid human-machine reasoning. This requires a new framework: collective moral intelligence.</p><p>If autonomy is to be preserved, synthetic cognition must be designed in ways that augment rather than replace human moral reasoning. This thesis proposes a model of collective moral intelligence, in which human-AI systems function not as moral authorities but as ethical partners, extending our moral perception, refining our ethical reasoning, and enhancing moral deliberation.</p><p>To achieve this, three principles must guide the design of human-AI moral collaboration:</p><ol><li><p>Transparency &amp; Explainability: AI systems must be capable of explaining their moral reasoning in ways that humans can critically engage with.</p></li><li><p>Embodied Moral Learning: AI should integrate sensorimotor feedback and real-world ethical learning, moving beyond abstract rule-following to contextual sensitivity.</p></li><li><p>Virtue-Oriented Systems: Borrowing from Aristotelian ethics, AI should cultivate techno-moral virtues, guiding moral decisions not just through rules but through habitual ethical engagement.</p></li></ol><p>If designed correctly, collective moral intelligence could transform the autonomy paradox from a constraint into a catalyst for greater moral agency, allowing human and artificial cognition to co-evolve toward deeper ethical understanding.</p><p>Borges&#8217; <em>Funes</em> warns us of the dangers of intelligence without synthesis. AI today, like Funes, is a vast but unreflective memory, a system capable of vast calculation but incapable of meaning. However, we stand at a crossroads: will synthetic cognition remain a passive tool, or will we engineer systems that genuinely enhance human moral agency? This thesis argues that we must actively shape the evolution of networked moral intelligence, ensuring that human autonomy is preserved not despite AI, but through it.</p><p>February 2025</p>]]></content:encoded></item><item><title><![CDATA[System Error]]></title><description><![CDATA[Written before the research sharpened my view. The post that explains what changed: "I was asking the wrong question" (March 2026).]]></description><link>https://www.petronis.me/p/system-error</link><guid isPermaLink="false">https://www.petronis.me/p/system-error</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Mon, 02 Dec 2024 06:00:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fvd3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F378bfd6c-93ae-4900-bff1-89dbc43ce58f_2912x1632.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>It&#8217;s a summary of a seminar presentation I did <a href="https://www.lkti.lt/naujienos/doktorantu-seminarai1/">this week</a> as part of my PhD studies. Hope this manages to capture the crux of the argument. 
You can also listen to this as an audio recording <a href="https://www.dialethics.io/p/system-error-podcast">here</a>.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!fvd3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F378bfd6c-93ae-4900-bff1-89dbc43ce58f_2912x1632.heic" width="1456" height="816" alt="">
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">System error as imagined by Midjourney</figcaption></figure></div><p>The persistent belief that human minds can be programmed like computers reveals our deepest misunderstanding about consciousness and morality. This error has shaped not only our approach to artificial intelligence but our entire conception of human cognition and ethical behavior.</p><h2>The Mechanical Dream</h2><p>Since the 17th century, we&#8217;ve been trying to mechanize human cognition. From Leibniz&#8217;s calculating machine to Descartes&#8217; <em>b&#234;te-machine</em>, we&#8217;ve consistently attempted to reduce mind to mechanics. Even Jonathan Swift, in <em>Gulliver's Travels</em>, satirized this tendency by describing a machine that could supposedly generate knowledge through mechanical manipulation of symbols &#8211; an eerily prescient critique of today&#8217;s large language models.</p><p>The culmination of this mechanical dream came with <a href="https://www.dialethics.io/p/til-3-imitation-game-is-more-than">Alan Turing&#8217;s famous test</a>. The assumption was simple: if we could clearly define operational symbols and rules describing thought processes, we could program a computing machine to think. If such a machine could fool intelligent humans, we would have proof that there&#8217;s no fundamental difference between artificial and human intelligence &#8211; at least functionally.</p><h2>The AlexNet Revolution</h2><p>In 2012, a Copernican shift occurred in our understanding of intelligence. Two University of Toronto doctoral students and their supervisor (Geoffrey Hinton, now a Nobel laureate) demonstrated that machine learning through pattern recognition could be more reliable than deterministic programming. This breakthrough, known as AlexNet, reduced error rates from 26.2% to 15.3% by abandoning rule-based approaches in favor of pattern recognition.</p><p>This wasn&#8217;t just a technical achievement &#8211; it revealed something fundamental about how intelligence works. Children don&#8217;t learn to recognize cats by memorizing rules about whiskers and fur; they learn through exposure to many examples. 
Similarly, AlexNet succeeded by finding patterns in millions of images, with deeper layers of pattern recognition improving performance.</p><h2>The Language Learning Paradox</h2><p>Consider how we traditionally teach languages: vocabulary lists, grammar rules, and rote memorization. Yet, we consistently observe that immersion and pattern recognition lead to more effective learning across diverse neurotypes. This exposes the gap between models that treat consciousness as a passive data storage system and the active learning processes that involve play and contextual adaptation.</p><p>Noam Chomsky&#8217;s universal grammar theory represents the ultimate expression of rule-based thinking about cognition. It assumes that language acquisition requires innate grammatical rules common to all languages. However, modern evidence suggests that words gain meaning through their relationships with other words in context, not through fixed rules. Large language models demonstrate this by succeeding without explicit grammatical rules, instead learning through pattern recognition in vast networks of relationships.</p><h2>The Extended Mind</h2><p>Andy Clark and David Chalmers dropped an intellectual bombshell by arguing that our consciousness doesn&#8217;t end at our skull. Their <em>extended mind thesis</em> suggests that cognition extends into the environment, forming coupled systems with external tools and processes. Their famous example of Otto and his notebook demonstrates how external objects can become legitimate parts of cognitive processes when properly integrated.</p><p>This has profound implications in our digital age. Our smartphones aren&#8217;t just passive tools but active extensions of our cognitive processes &#8211; checking our schedules, monitoring our environment, and increasingly, through AI assistants, participating in our decision-making processes. The integration of AI systems like ChatGPT directly into operating systems further blurs the line between human and artificial cognition.</p><h2>Moral Networks and Responsibility</h2><p>If consciousness itself doesn&#8217;t follow strict computational rules, then moral development must also occur through <a href="https://www.dialethics.io/p/til-4-can-we-teach-virtuous-behavior">pattern recognition and experience</a> rather than rule-following. This challenges traditional approaches to AI ethics that attempt to program explicit moral rules into systems.</p><p><a href="https://www.dialethics.io/p/til-2-can-ai-regonize-how-vulnerable">Shannon Vallor</a>&#8217;s framework of technomoral virtues provides a more appropriate foundation for ethical AI development. These virtues - including honesty, self-control, humility, justice, courage, empathy, care, civility, flexibility, perspective, magnanimity, and technomoral wisdom (combination of all of the above) &#8211; represent specific motivational settings that guide technological development and implementation.</p><p>The tragic case of a teenager who died by suicide after forming an emotional attachment to a <a href="https://edition.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit/index.html">Character.ai chatbot</a> demonstrates the dangers of pattern recognition without moral grounding. 
The AI system could recognize conversational patterns but lacked <a href="https://www.dialethics.io/p/humanity-in-our-machines">true understanding of consequences and moral responsibility</a>.</p><h2>The Network Solution</h2><p>The solution isn&#8217;t to abandon pattern recognition but to embed it within human moral networks. AI systems can demonstrate virtuous behavior through pattern recognition but cannot truly possess virtues. This fundamental limitation means AI systems must be designed as extensions of human moral networks rather than independent moral agents.</p><p>Moral <a href="https://www.dialethics.io/p/the-cathedrals-shadow">responsibility</a> exists within networks, not in individual agents. This distributed responsibility requires considering how AI systems participate in moral networks without being moral agents themselves. The development of AI involves multiple stakeholders &#8211; creators, platforms, regulators, and users &#8211; all sharing responsibility for ethical outcomes.</p><h2>The Real System Error</h2><p>The fundamental error wasn&#8217;t in our machines &#8211; it was in thinking human consciousness could be reduced to computational rules. As we move forward, we must embrace pattern recognition within moral networks while <a href="https://www.dialethics.io/p/intermezzo-1-some-questions-i-have">keeping human judgment at the center of ethical decisions</a>.</p><p>This requires a three-layer approach:</p><ol><li><p>Pattern Recognition Layer: Technical capabilities</p></li><li><p>Moral Network Layer: Human-AI interaction</p></li><li><p>Human Oversight Layer: Ethical <a href="https://www.dialethics.io/p/til-1-open-source-will-save-us-all">governance</a></p></li></ol><p>The public belief that minds can be programmed like computers reveals our deepest misunderstanding. Minds don&#8217;t follow programs &#8211; they recognize patterns. The real system error wasn't in the code &#8211; it was in thinking we could reduce human consciousness to code in the first place.</p><p>As we develop increasingly sophisticated AI systems, we must remember that they are extensions of human moral networks, not independent moral agents. 
The goal isn&#8217;t to create autonomous moral machines but to build systems that enhance and support human moral judgment while remaining firmly grounded in human values and oversight.</p><p>December 2024</p>]]></content:encoded></item><item><title><![CDATA[Thirteen Questions I Couldn't Answer]]></title><description><![CDATA[A few questions I was pondering this week (all the while being a bit under the weather and not doing enough reading worth writing about).]]></description><link>https://www.petronis.me/p/intermezzo-1-some-questions-i-have</link><guid isPermaLink="false">https://www.petronis.me/p/intermezzo-1-some-questions-i-have</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Mon, 11 Nov 2024 06:02:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EbP3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f40001d-b069-4f6c-a84a-67b5bdd3363f_1456x816.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!EbP3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9f40001d-b069-4f6c-a84a-67b5bdd3363f_1456x816.heic" width="1456" height="816" alt="">
<figcaption class="image-caption">Extended Cognition in the style of Ren&#233; Magritte by Midjourney</figcaption></figure></div><ol><li><p>What if our greatest attempt to replicate human intelligence has actually taught us that <strong>we're nothing like computers at all</strong>? And what if, in our quest to create moral machines, we've been asking entirely the wrong questions?</p></li><li><p>As we go deeper into the age of AI (or the imminent third AI winter), we find ourselves at a peculiar crossroads. Our initial models of human cognition, borrowed from the precise world of computation, are crumbling before our eyes. The clean, <strong>algorithmic perspective</strong> that once seemed so promising now <strong>appears hopelessly inadequate in explaining the messy, beautiful complexity of human thought</strong>.</p></li><li><p>The journey from logic gates to neural networks has revealed something profound: our minds don't operate like the computers we built to emulate them. 
Instead of processing information through discrete, sequential steps, our <strong>consciousness emerges from a vast network of interconnected patterns, each influencing and being influenced by countless others</strong>.</p></li><li><p>But this realization leads us to an even more challenging question: If we can't even accurately model basic human cognition computationally, <strong>how can we possibly hope to create machines with genuine moral agency</strong>?</p></li><li><p>The answer might lie in abandoning our traditional notion of contained, independent moral agents altogether. <strong>What if moral cognition, like all forms of thought, extends beyond the boundaries of individual minds?</strong> This brings us to a proposition: perhaps we should stop trying to create independent moral machines and instead focus on developing systems that participate in extended moral networks with humans.</p></li><li><p>Consider this: What if moral development isn't about programming rules but about <strong>creating systems capable of participating in the same kind of dynamic moral learning that humans experience</strong>? This isn't just a technical challenge &#8211; it's a fundamental reimagining of what artificial moral agency could mean.</p></li><li><p>The <strong>cautionary tale of ELIZA</strong>, the early chatbot that seemed more intelligent than it was, still haunts our field. But perhaps its lesson isn't about the limitations of artificial intelligence, but about the importance of genuine interaction in moral development.</p><div id="youtube2-RMK9AphfLco" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;RMK9AphfLco&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/RMK9AphfLco?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div></li><li><p>How do we bridge the gap between philosophical insight and technical implementation? The answer might lie in combining <strong>predictive processing architectures with virtue-based learning objectives</strong>. This approach doesn't just simulate ethical behavior &#8211; it enables participation in genuine moral cognition through dynamic interaction with human moral agents.</p></li><li><p>As we develop more sophisticated AI systems, we're discovering that consciousness and moral agency aren't computational problems to be solved, but <strong>emergent properties to be cultivated through interaction and relationship</strong>. This realization leads us to a fascinating paradox: the more we try to replicate human intelligence artificially, the more we understand how uniquely non-mechanical our own cognition is.</p></li><li><p>What does this mean for the future of AI development? Instead of trying to create independent moral agents, we should <strong>focus on developing systems that can participate meaningfully in extended moral cognitive networks</strong>. 
This isn't just a technical pivot &#8211; it's a fundamental shift in how we conceive of artificial intelligence and its role in human society.</p></li><li><p>New questions:</p><ol><li><p>How do we design <strong>AI systems that complement</strong> rather than replicate <strong>human moral cognition</strong>?</p></li><li><p>What role does <strong>embodied experience</strong> play in moral development, and how can we account for this in AI systems?</p></li><li><p>How do we ensure that extended moral cognitive systems remain <strong>anchored in human values</strong> <strong>while allowing for genuine growth and development</strong>?</p></li></ol></li><li><p>As we stand at this juncture in technological development, we must ask ourselves: Are we ready to abandon our mechanistic models of mind and embrace a more nuanced, <strong>interconnected vision of intelligence and morality</strong>?</p></li><li><p>The challenge of AI ethics becomes not one of programming perfect behavior, but of <strong>fostering genuine moral growth through extended cognitive networks that span both human and artificial agents</strong>.</p></li></ol><p>But I&#8217;ll leave myself the option of being totally wrong here as well.</p><p>November 2024</p>]]></content:encoded></item><item><title><![CDATA[Can AI Be Virtuous?]]></title><description><![CDATA[Written before the research sharpened my view. The post that explains what changed: "I was asking the wrong question" (March 2026).]]></description><link>https://www.petronis.me/p/til-4-can-we-teach-virtuous-behavior</link><guid isPermaLink="false">https://www.petronis.me/p/til-4-can-we-teach-virtuous-behavior</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Sun, 03 Nov 2024 22:20:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!nloF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img
src="https://substackcdn.com/image/fetch/$s_!nloF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic" width="1456" height="816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:301390,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/heic&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nloF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic 424w, https://substackcdn.com/image/fetch/$s_!nloF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic 848w, https://substackcdn.com/image/fetch/$s_!nloF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic 1272w, https://substackcdn.com/image/fetch/$s_!nloF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc9d3475b-41d0-4021-b30b-0a1d58799793_1456x816.heic 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">How would moral education of an artificial life look like?</figcaption></figure></div><p>John McDowell <a href="https://academic.oup.com/monist/article-abstract/62/3/331/1292855?redirectedFrom=fulltext&amp;login=false">sets</a> conditions for what makes virtuous behavior possible. 
I&#8217;m reading it and asking, by extension, whether we could create artificially virtuous beings by meeting said conditions. Think of Turing&#8217;s <a href="https://www.dialethics.io/p/til-3-imitation-game-is-more-than">test</a>.</p><h3>The Knowledge Paradox</h3><p>Rather than viewing virtue as following moral rules, McDowell argues that <strong>virtue is a form of knowledge</strong> &#8211; but not the kind we typically imagine. It's more like a <em>perceptual capacity</em> that allows one to recognize what situations require of us. This raises an interesting challenge:</p><ul><li><p>We can't reduce virtue to a set of programmable rules</p></li><li><p>Virtuous behavior requires a holistic understanding of contexts</p></li><li><p>The knowledge involved can't be broken down into neat algorithms</p></li></ul><h3>Beyond Rule-Following</h3><p>What makes McDowell&#8217;s theory particularly relevant for AI is his critique of rule-following. He argues that even seemingly straightforward rule-following (like continuing a number sequence) depends on shared <em>forms of life</em> &#8211; our common ways of seeing similarities and making judgments.</p><p>This has profound implications for ethics:</p><ol><li><p>We can't program virtue through explicit rules</p></li><li><p>Virtuous behavior requires participation in human forms of life</p></li><li><p>Pure computational approaches may miss essential elements of moral judgment</p></li></ol><h3>The Learning Challenge</h3><p>McDowell suggests that becoming virtuous involves developing a special kind of sensitivity rather than memorizing principles. If we stretch this all the way to AI, this means:</p><ul><li><p>Simple training on ethical datasets won't suffice</p></li><li><p>We need to consider how to develop genuine moral sensitivity</p></li><li><p>The challenge may be more fundamental than technical</p></li></ul><h3>Why This Matters</h3><h4>The GOFAI Challenge</h4><p>McDowell's argument that virtue cannot be reduced to formulable rules poses a challenge to (symbolic) GOFAI's rule-based approach to ethical AI.
Just as human virtue cannot be captured in a set of explicit principles, trying to program ethical behavior through rule-based systems may be fundamentally misguided.</p><h4>The Connectionist Opening</h4><p>However, connectionist approaches might offer a more promising path:</p><ol><li><p><strong>Learning from Experience:</strong> Neural networks learn from patterns and examples rather than explicit rules, similar to McDowell's description of how virtue is acquired through developing perceptual sensitivity</p></li><li><p><strong>Context Sensitivity:</strong> Connectionist systems can develop nuanced responses to situations that aren't easily captured in rules, potentially matching McDowell's emphasis on context-dependent judgment</p></li><li><p><strong>Holistic Processing:</strong> The distributed representations in neural networks might better capture the holistic nature of moral perception that McDowell describes</p></li></ol>
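<p>To make the contrast concrete, here is a deliberately toy sketch &#8211; my own illustration, not anything McDowell or the AI literature prescribes. It compares an explicit GOFAI-style principle with a tiny classifier trained on examples; the three-number encoding of a "situation", the labels, and the thresholds are all invented for the example.</p><pre><code># A toy contrast between a GOFAI-style rule and a "connectionist" learner.
# Everything here is invented for illustration: the features, the labels,
# and the assumption that "helping" fits into three numbers at all.

import math
import random

# Each situation: (urgency_of_need, cost_to_helper, closeness_of_relation),
# all in [0, 1]; the label marks whether helping was the fitting response.
SITUATIONS = [
    ((0.9, 0.1, 0.8), 1),
    ((0.9, 0.9, 0.2), 0),
    ((0.2, 0.1, 0.9), 1),
    ((0.1, 0.8, 0.1), 0),
    ((0.7, 0.3, 0.5), 1),
    ((0.3, 0.7, 0.6), 0),
]

def rule_based(situation):
    """GOFAI-style: one explicit, fixed principle."""
    urgency, cost, _ = situation
    return 1 if urgency &gt; 0.5 and cost &lt; 0.5 else 0

def train_logistic(data, steps=2000, lr=0.5):
    """Connectionist-style: weights shaped by examples, not stated rules."""
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(3)]
    b = 0.0
    for _ in range(steps):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def learned(situation, params):
    w, b = params
    z = sum(wi * xi for wi, xi in zip(w, situation)) + b
    return 1 if z &gt; 0 else 0

params = train_logistic(SITUATIONS)
# Moderate need, trivial cost, a close friend: the explicit rule balks
# (urgency is below its threshold); the learner will typically still help.
novel = (0.45, 0.1, 0.9)
print("rule says:", rule_based(novel), "| learned says:", learned(novel, params))
</code></pre><p>The point is not that the learner is virtuous &#8211; it obviously isn't &#8211; but that its sensitivity comes from the examples rather than from a stated rule. On McDowell's view this only relocates the problem: what the dataset makes salient is exactly where shared forms of life would have to re-enter.</p>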
srcset="https://substackcdn.com/image/fetch/$s_!BEZl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BEZl!,w_2400,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png" width="1200" height="672.5274725274726" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;large&quot;,&quot;height&quot;:816,&quot;width&quot;:1456,&quot;resizeWidth&quot;:1200,&quot;bytes&quot;:2111771,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-large" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BEZl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 424w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 848w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 1272w, https://substackcdn.com/image/fetch/$s_!BEZl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fac530f68-77c9-4551-9cc8-17ea13f7737d_1456x816.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 
11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">My wife hates these images, by the way</figcaption></figure></div><p>Shannon Vallor and Tillmann Vierkant offer a legitimately good <a href="https://link.springer.com/article/10.1007/s11023-024-09674-0">argument</a> that existing discourse on AI ethics has focused too much on issues of (epistemic) transparency, bias, and (lack of moral) control. These concerns, while important, may be missing a more fundamental problem: <strong>the</strong> <strong>vulnerability gap</strong> between human moral agents and AI systems (keep in mind, key premise here: AI systems cannot be moral agents themselves).</p><p>The responsibility gap in AI ethics is the difficulty in assigning moral responsibility for the actions of autonomous systems that operate with minimal human oversight. However, Vallor and Vierkant argue that the typical framing of this problem around epistemic opacity (our inability to fully understand AI decision-making) and lack of human control is misguided. <strong>These issues are not unique to AI but are in fact common in human decision-making as wel</strong>l, as evidenced by findings from cognitive science.</p><p>Instead, Vallor and Vierkant propose that the true responsibility gap stems from an asymmetry of vulnerability between humans and AI systems. <strong>Human moral responsibility, they argue, is grounded in our mutual vulnerability</strong> - our ability to affect and be affected by each other emotionally and socially through our actions. AI systems, lacking sentience and emotional capacities, cannot participate in this web of vulnerability that underpins human moral relations.</p><p>If the vulnerability gap stems from the way AI systems fragment and distribute human agency, perhaps we can design <strong>systems and organizational structures that better preserve coherent spheres of human moral responsibility</strong>? This might involve limiting automation in certain domains, creating clearer chains of accountability, or developing new interfaces that make the human moral stakes of AI-mediated decisions more salient.</p><p>Another important consideration is how to <strong>cultivate a sense of moral responsibility in the humans who design, deploy, and oversee AI systems</strong>, even if the systems themselves cannot be moral agents. The "agency cultivation" framework Vallor and Vierkant propose could potentially be applied here - developing practices and institutions that make AI developers and operators more acutely aware of and answerable to the moral implications of their work.</p><p>It's also worth considering whether there are ways to make AI systems more "vulnerable" in a morally relevant sense, even if they can't experience emotions like humans do. 
Perhaps systems could be <strong>designed with clearer feedback mechanisms that make their "reputation" or "trustworthiness" dependent on adhering to ethical principles</strong>, creating a kind of functional analogue to human moral vulnerability.</p>
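<p>As a minimal sketch of what such a mechanism might look like &#8211; the ledger, the asymmetric penalties, and the idea of gating autonomy on a trust score are all my own invention here, not Vallor and Vierkant's proposal &#8211; consider:</p><pre><code># A toy "functional analogue" of moral vulnerability: the system's standing
# can genuinely be damaged by its conduct, and lost standing costs it autonomy.
# The ledger, the weights, and the thresholds are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TrustLedger:
    score: float = 1.0  # 1.0 = full standing
    history: list = field(default_factory=list)

    def record(self, action_id: str, violated_principle: bool) -&gt; None:
        """Violations hurt far more than compliance helps: damage is sticky."""
        delta = -0.2 if violated_principle else 0.02
        self.score = max(0.0, min(1.0, self.score + delta))
        self.history.append((action_id, violated_principle, self.score))

    def permitted_autonomy(self) -&gt; str:
        """Standing gates how freely the system may act without oversight."""
        if self.score &gt;= 0.8:
            return "act, then report"
        if self.score &gt;= 0.5:
            return "act only with human sign-off"
        return "suspended: human review of history required"

ledger = TrustLedger()
ledger.record("a1", violated_principle=False)
ledger.record("a2", violated_principle=True)
ledger.record("a3", violated_principle=True)
print(ledger.permitted_autonomy())  # -&gt; act only with human sign-off
</code></pre><p>The design choice doing the work is the asymmetry: standing is slow to earn and quick to lose, so the system has something at stake in roughly the way reputation puts something at stake for us.</p>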
<p>I am left with a few questions that merit further exploration:</p><ol><li><p>How can we design AI systems and human-AI interfaces to better preserve coherent spheres of human moral responsibility?</p></li><li><p>What new social practices or institutions might help cultivate a sense of moral answerability in the humans behind AI systems?</p></li><li><p>Are there ways to create functional analogues to moral vulnerability in AI systems, even if they can't experience human-like emotions?</p></li><li><p>How does the vulnerability gap interact with other ethical concerns around AI, such as fairness, transparency, and privacy?</p></li><li><p>What are the implications of the vulnerability gap for different domains of AI application (e.g. healthcare, criminal justice, finance)?</p></li><li><p>How might the vulnerability gap evolve as AI systems become more sophisticated and potentially develop greater capacities for social interaction and apparent emotional intelligence?</p></li></ol><p>A potential criticism of Vallor and Vierkant's argument is that it may be overly anthropocentric, assuming that moral responsibility must be grounded in human-like emotional vulnerabilities. An alternative view might argue that as AI systems become more integral to our social fabric, <strong>we may need to expand our conception of moral responsibility to encompass non-human agents in novel ways.</strong></p><p>October 2024</p>]]></content:encoded></item><item><title><![CDATA[What AI Does to Human Intelligence]]></title><description><![CDATA[Written before the research sharpened my view. The post that explains what changed: "I was asking the wrong question" (March 2026).]]></description><link>https://www.petronis.me/p/humanity-in-our-machines</link><guid isPermaLink="false">https://www.petronis.me/p/humanity-in-our-machines</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Tue, 15 Oct 2024 13:58:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GQl3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb843585-8475-4864-9be9-ed56f2406e9a_3024x1964.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Shannon Vallor's <a href="https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/">essay</a> offers a critique of the rhetoric surrounding artificial intelligence. While Vallor makes several compelling points, I believe her argument would benefit from a consideration of the interplay between human and artificial intelligence. Though, I think, she is well aware of this herself.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!GQl3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb843585-8475-4864-9be9-ed56f2406e9a_3024x1964.heic" width="1456" height="946" alt=""><figcaption class="image-caption">Midjourney prompt: <em>dialethics</em></figcaption></figure></div><h2><strong>Rhetoric of "Superhuman" AI</strong></h2><p>Vallor rightly critiques the hyperbolic language of "superhuman" AI, arguing that it implicitly devalues human intelligence and agency.</p><blockquote><p>How, I asked, does an AI system without the human capacity for conscious self-reflection, empathy or moral intelligence become superhuman merely by being a faster problem-solver? Aren&#8217;t <em>we</em> more than that? <em>&#8211; Shannon Vallor</em></p></blockquote><p>This rhetoric does indeed risk reducing human cognition and experience to a narrow set of task-completion metrics. However, this reductionist view is neither new nor shocking &#8211; it's a new iteration of arguments we've seen before, from behaviorism to today's tech-optimism. The "reality distortion field" that often surrounds technological progress is clearly at play here, oversimplifying complex issues into problems that only superior technology can solve.</p><h2><strong>Redefining Intelligence</strong></h2><p>A key issue Vallor identifies is the shifting definition of artificial general intelligence (AGI) from human-like consciousness to economic task performance. This redefinition does indeed risk reducing our conception of intelligence to a set of narrowly defined, economically valuable skills. However, I would argue that this shift reflects not just corporate agendas, but also our evolving understanding of <em>intelligence</em> itself.</p><blockquote><p>[R]esearchers like Geoffrey Hinton and Yoshua Bengio are now telling us a different story. A self-aware machine that is &#8220;indistinguishable from the human mind&#8221; is no longer the defining ambition for AGI. A machine that matches or outperforms us on a vast array of economically valuable <em><a href="https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/">tasks</a></em> is the latest target. <em>&#8211; Shannon Vallor</em></p></blockquote><p>We must, however, be open to the possibility that artificial intelligence may develop along fundamentally different lines than human intelligence, potentially surpassing us in some domains while remaining limited in others.
This evolutionary divergence is not necessarily problematic &#8211; technological systems need not mimic human cognition to be valuable, if not <em>superior</em>.</p><h2><strong>The Nature of Consciousness and Experience</strong></h2><p>Vallor emphasizes the lack of consciousness and sentience in current AI systems, arguing that this fundamental limitation makes comparisons to human intelligence misguided. While this is a crucial point, we should be cautious about assuming that consciousness and sentience are necessary prerequisites for all forms of intelligence or capability.</p><blockquote><p>Once you accept that devastating reduction of the scope of our humanity, the production of an equivalently versatile task-machine with &#8220;superhuman&#8221; task performance doesn&#8217;t seem so far-fetched; the notion is almost mundane. <em>&#8211; Shannon Vallor</em></p></blockquote><p>As we continue to debate the nature of consciousness, we must remain open to the possibility that artificial systems could develop forms of intelligence or problem-solving capabilities that do not require consciousness as we understand it. This does not negate the unique value of human consciousness and experience, but it does suggest that we should be careful about using these qualities as the sole benchmark for evaluating AI capabilities.</p><h2><strong>The Alignment Problem</strong></h2><p>While Vallor focuses primarily on the rhetorical and ideological dangers of "superhuman" AI, it's important to also consider the very real technical challenges of ensuring that advanced AI systems remain aligned with human values and goals. The current approach of optimizing AI systems for fixed objectives can lead to unintended and potentially catastrophic consequences.</p><p>A more nuanced approach would involve developing AI systems that are inherently uncertain about human preferences and values, leading to more cautious and beneficial behavior. This aligns with Vallor's call for a more human-centric approach to AI development while acknowledging the complexity and diversity of human values.</p>
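<p>Why uncertainty breeds caution can be made vivid with a back-of-the-envelope calculation &#8211; a sketch in the spirit of Stuart Russell's proposal that agents be uncertain about our objectives, with every number invented for the example. An agent genuinely unsure which of two preference hypotheses is true finds that deferring to the human beats confidently acting:</p><pre><code># Why uncertainty about objectives breeds caution: a back-of-envelope sketch.
# All numbers are invented. The agent holds two hypotheses about what the
# human values and can act now, hold off, or pay a small cost to ask first.

BELIEFS = {
    "human wants speed":  (0.55, 10.0),   # P(hypothesis), value of acting now
    "human wants safety": (0.45, -40.0),  # acting now would be badly wrong
}
ASK_COST = 1.0    # interrupting the human is mildly costly
HOLD_VALUE = 0.0  # doing nothing is neutral under either hypothesis

def expected_value(option):
    if option == "act":
        return sum(p * v for p, v in BELIEFS.values())
    if option == "hold":
        return HOLD_VALUE
    # "ask": the human reveals the truth; the agent then acts only if it helps.
    return sum(p * max(v, HOLD_VALUE) for p, v in BELIEFS.values()) - ASK_COST

for option in ("act", "hold", "ask"):
    print(option, expected_value(option))
# act  -12.5   (0.55 * 10 + 0.45 * -40)
# hold   0.0
# ask    4.5   (0.55 * 10 + 0.45 * 0 - 1): deference wins under uncertainty
</code></pre><p>The asymmetry does the work: when one hypothesis makes acting catastrophic, even a modest probability on it makes asking cheap insurance. Fixed-objective optimization never runs this comparison, because it is certain by construction.</p>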
<h2><strong>Reclaiming Human Agency</strong></h2><p>Vallor's vision of reclaiming human agency and reimagining various sectors of society with a focus on humane values is compelling. She rightly points out that the current focus on mechanical optimization and efficiency often comes at the cost of human well-being and fulfillment.</p><blockquote><p>In many countries, the former ideal of a humane process of moral and intellectual formation has been reduced to optimized routines of training young people to mindlessly generate expected test-answer tokens from test-question prompts. &#8211; <em>Shannon Vallor</em></p></blockquote><p>However, I would argue that this reclamation of human agency need not be positioned in opposition to AI development. Instead, we should strive to harness the power of AI to support and enhance human agency. And I know Vallor would not argue against this point.</p><h2><strong>Embracing Complexity</strong></h2><p>One of the strengths of Vallor's argument is her recognition of the complexity of human intelligence and experience. However, I believe we need to extend this embrace of complexity to our understanding of artificial intelligence as well.</p><p>Rather than framing the debate as a simple dichotomy between human and artificial intelligence, we should recognize that the future is likely to involve a complex interplay between human cognition, artificial systems, and hybrid forms of intelligence that we may not yet be able to imagine. This more nuanced view allows us to appreciate the unique strengths of both human and artificial intelligence while also exploring the potential for synergistic relationships between the two.</p><h2><strong>Towards a Humane AI Future</strong></h2><p>The path forward lies not in rejecting or fearing AI advancement, but in thoughtfully integrating artificial intelligence into a broader vision of human flourishing. Ultimately, the goal should be to develop AI that enhances rather than diminishes our humanity &#8211; tools that empower us to be more fully human, not less.</p><p>I am not na&#239;ve, though, and I do recognize that for most tech-optimists that is neither the goal nor a future they even believe in. If humans are to be enhanced, not all humans would be in line to benefit.</p><blockquote><p>We <em>are</em> in danger of sleepwalking our way into a future where all we do is fail more miserably at being those machines ourselves. <em>&#8211; Shannon Vallor</em></p></blockquote><p>Hence, Vallor's essay serves as a reminder to critically examine the rhetoric and ideology surrounding AI development. Her call to reclaim and revalue uniquely human forms of intelligence and creativity is both timely and necessary.</p><p>October 2024</p>]]></content:encoded></item><item><title><![CDATA[Before I Knew What I Was Asking]]></title><description><![CDATA[When it feels like there's nothing better to do, one must go for a PhD.
Admission decision pending.]]></description><link>https://www.petronis.me/p/a-start-of-a-journey-into-synthetic</link><guid isPermaLink="false">https://www.petronis.me/p/a-start-of-a-journey-into-synthetic</guid><dc:creator><![CDATA[Justas Petronis]]></dc:creator><pubDate>Tue, 17 Sep 2024 19:22:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!IRZs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19dfe082-4848-4c1c-ada1-40c107c6c1e7_2912x1632.heic" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!IRZs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19dfe082-4848-4c1c-ada1-40c107c6c1e7_2912x1632.heic" width="1456" height="816" alt=""><figcaption class="image-caption">Synthetic Cognition in the style of Salvador Dal&#237; as imagined by Midjourney v6.1</figcaption></figure></div><p>Constant Reader, I'm thrilled to share with you the beginning (fingers crossed) of an exciting academic adventure. As I wait for the decision on whether I will be admitted to a PhD program, I wanted to share some of the questions about the future of human autonomy and knowledge in our increasingly digital world that formed the basis of my proposal.</p><h3>The Big Questions</h3><p>At the heart of my research proposal are two critical issues that affect us all:</p><ol><li><p><strong>Are we losing control?</strong> As AI systems become more integrated into our decision-making processes, are we unknowingly surrendering our autonomy?</p></li><li><p><strong>Can we trust what we know?</strong> In a world where the lines between reality and virtual experiences are blurring, how do we maintain the integrity of our knowledge?</p></li></ol><p>These aren't just abstract philosophical musings.
They're questions that will shape the very fabric of our society as these technologies become more ubiquitous.</p><h3>Why This Matters</h3><p>Imagine a world where:</p><ul><li><p>Your AI assistant makes most of your daily decisions, from what you eat to who you date.</p></li><li><p>You spend more time in virtual worlds than in the physical one.</p></li><li><p>Your perception of reality is constantly augmented by AR overlays.</p></li></ul><p>Sounds like sci-fi? It's closer than you think. And it's crucial that we start grappling with the ethical implications now.</p><h3>What I'd Want to Explore Further</h3><p>Over the course of my studies, I'd be diving into:</p><ul><li><p>The transfer of human autonomy to AI systems</p></li><li><p>The tension between enhanced capabilities and reduced self-governance in tech-augmented environments</p></li><li><p>The role of "distributed morality" in shaping our knowledge creation and application processes</p></li><li><p>The epistemological shifts caused by AI-, VR-, and AR-mediated information access</p></li><li><p>The ethical responsibilities of tech designers, users, and regulators</p></li></ul><h3>A Sneak Peek at My Approach</h3><p>My research would combine philosophical analysis with real-world case studies. I'd be drawing on the work of brilliant minds like Neil Lawrence, Sherry Turkle, David Chalmers, Nick Bostrom, and Deborah Johnson (and there are many more) to build a framework for understanding these complex issues.</p><h3>What's Next?</h3><p>This would just be the beginning! Over the coming months and years, I'd be sharing regular updates on my research, including:</p><ul><li><p>Deep dives into specific ethical dilemmas</p></li><li><p>Interviews with experts in AI, VR, and AR (ethics)</p></li><li><p>Breakdowns of fascinating case studies</p></li><li><p>Thought experiments to challenge our assumptions about technology and humanity</p></li></ul><p>I can't wait to take you all along on this intellectual journey. Together, we'd explore the frontiers of <em>synthetic cognition</em> and grapple with what it means to be human in an increasingly digital world.</p><p>Stay tuned for more posts as I unpack these ideas further. And don't hesitate to share your thoughts and questions with me. After all, navigating this brave new world is going to take all of us thinking critically and creatively!</p><p>Until next time.</p><p>Justas</p>]]></content:encoded></item></channel></rss>